OpenAI Assistants API for chatbot

Has anyone built a chatbot with Appfarm? Is it possible to implement the OpenAI Assistants API?

It would be awesome to see an example of this!

Hello, and welcome to our community!
Although we don’t have an example of this in our Showroom, we have successfully created a chatbot in Appfarm using the OpenAI Assistants API. You can follow the instructions in the OpenAI docs, and below I have added a few comments explaining how this can be done in Appfarm Create:

In our case, we created four data sources for the app: Question, Answer, Thread (conversation) and Run (execution).

  1. Create an Assistant, defining its custom instructions and picking a model. We did this in the OpenAI admin page, but it can be done via the API as well.
  2. Create a Thread when a user starts a conversation. Link this thread to the question in context.
  3. Add Messages to the Thread as the user asks questions, sending each question to https://api.openai.com/v1/threads/${threadId}/messages
  4. Run the Assistant on the Thread to trigger a response (POST https://api.openai.com/v1/threads/${threadId}/runs).
    Keep polling the run (https://api.openai.com/v1/threads/${threadId}/runs/${runId}) until its status is ‘completed’ - this can be done with a while-loop.
  5. When the run’s status is ‘completed’, you can get the answer: fetch the thread’s messages (https://api.openai.com/v1/threads/${threadId}/messages) and display the latest one to the user.

It can be refined and made more advanced, but this is a good starting point for exploring the API further - I hope this is helpful. :slightly_smiling_face:


Thank you so much for this! It took a bit of time, but we got it working and have indeed made it more advanced: with some major prompt engineering, it now runs queries for us and behaves like our own assistant. Though we are at a point where we think the assistant needs a few more upgrades, since we can’t get consistent responses - it mostly works, but every now and then it doesn’t. We will run this project at 25% speed and try again later, once a few more updates have arrived.


Hello again!
As we need to figure out whether this can work or not, we’ve upped the speed on this project again - there might be alternative solutions to the problem we are facing. So:
Sending questions and having a conversation with the bot is all well and good, but we want to use our own application data in the conversation so the user can ask questions about it - like a customer service bot.
But we have no reliable solution for this right now. Our current approach is to have the assistant parse the user’s question into a query, run that query against our own data, and then send the result back to the bot as a second input so it can phrase the answer in human-like words. And well, it works - sometimes.
Would you have any examples or possible solutions to make this more fluid, without errors occurring every now and then? I know that prompt engineering is a big part of it as well, but running the thread twice to get one answer isn’t great, and we also feel that could possibly compromise some of the data.
Then there is the security question: how safe are Appfarm, OpenAI and the connection between them? I might also add that we are all juniors and interns working on this :slight_smile:

My hope for an answer to this is: “We are actually working on a component that will arrive next week; it will solve all your problems” :wink:
How would you solve it?
There are a lot of sites with service bots today, so we assume we aren’t too far off and that Appfarm also has a way to solve this.

Hi!

There are many companies exploring AI the way you are, and all or most of these applications must tackle instability and errors. What you are asking for might be a single component for your use case and technology, but there are many different use cases, API types and vendors in this space - and their offerings are changing rapidly. Appfarm is exploring AI as part of the platform (for boosting development speed), and we are working on reusable integrations. The latter might be relevant for your use case, but it will not be ready on this side of the summer.

Your use case might be solved with the OpenAI Assistants API, as you are currently doing. This API lets you combine “ChatGPT” with your company data without any explicit data engineering or data science knowledge. But yes, it is a bit slow and unstable. Microsoft Azure has a similar offering that may be more stable, since you can run it in your own instance, but I believe it is more expensive.

Security: the connection between Appfarm and OpenAI is server-to-server over HTTPS, so it is as secure as you get over the internet. OpenAI also states that API data is not used for training. The same applies to Microsoft and Google (those two may appeal more to enterprises with regard to security and privacy).

To summarize the current state (from my personal point of view): using the out-of-the-box APIs is fast to set up, but currently a bit unstable. It might be improved with prompt optimization, and maybe also by moving up a tier or two in the OpenAI subscription. If you really want to operationalize things, you might consider investing some time in setting up a RAG (Retrieval-Augmented Generation) system in Microsoft (Azure OpenAI) or Google (Vertex AI). RAG means putting your company docs or resources in a database (a vector DB) and using it to search for and retrieve information to feed into the LLM (such as GPT 4.5). RAG is actually what the OpenAI Assistants API does under the hood; they have just simplified the setup (upload documents to the assistant). RAG, however, is a domain outside the Appfarm platform.
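The retrieval half of RAG boils down to: embed the question, find the stored document chunks nearest to it, and prepend those to the prompt. A toy, dependency-free sketch - the 2-d “embeddings” here are made-up stand-ins for a real embedding model, and a production setup would use a proper vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs, k=2):
    """docs: list of (text, embedding). Return the k chunks closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_vec, docs):
    # The retrieved chunks are injected as context for the LLM call.
    context = "\n".join(retrieve(query_vec, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {question}"

# Toy corpus with hand-made 2-d "embeddings".
docs = [("Opening hours: 9-17 weekdays.", [1.0, 0.1]),
        ("Returns accepted within 30 days.", [0.1, 1.0]),
        ("We ship worldwide.", [0.7, 0.7])]

print(build_prompt("When are you open?", [1.0, 0.0], docs))
```

The point is that the LLM only ever sees the retrieved snippets, not your whole dataset - which also narrows the data-exposure surface compared with sending raw query results back and forth.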
