Hi, we are now starting to explore AI. Has anyone been able to send all of the app’s data to OpenAI and then ask OpenAI to research the data? In other words, we plan to send a lot of data to OpenAI and ask it to analyse it… Since a live integration to the data seems to be impossible, would it be possible to download all data as a static file?
Hi,
The answer to this depends on the use case, and most of it is not directly related to Appfarm. There are many possibilities here, depending on the amount of data, how often new data needs to be fed in, how often you prompt, and so on, and also on how much effort you want to put in.
In general, with regards to “sending all data to OpenAI”: OpenAI accepts either text, images, or audio, as well as files (which are typically transformed to text by OpenAI, so file upload is just a convenient tool they provide to users).
If the data you are talking about is stored in Appfarm, for example a table of Products (with names, categories and prices) and a table of Order lines (with product references, amounts etc.), and you want to automate this export, you may use the Export Data action node in Appfarm to export these as CSV files, which may then be sent to the OpenAI Chat Completions endpoint together with the prompt.
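As an illustration (not Appfarm-specific), here is a minimal sketch of what such a request could look like as plain JavaScript; the API key variable, model name and CSV content are placeholders:

// Minimal sketch: send exported CSV content as text in the prompt.
// OPENAI_API_KEY, the model name and csvText are placeholders.
const csvText = 'name,category,price\nWidget,Tools,9.90' // content of the exported CSV

const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${OPENAI_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You analyze product and order data provided as CSV.' },
      { role: 'user', content: 'Which categories sell best?\n\n' + csvText }
    ]
  })
})
const result = await response.json()
console.log(result.choices[0].message.content)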
For larger sets of data, I would consider an approach using file search / vector stores. This is built into OpenAI (the Responses API and the File Search APIs). In this case, however, you must make sure to keep the file store up to date, so this approach works best for data that does not change very often.
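For reference, a rough sketch of that flow using the official openai Node SDK. The file name and store name are placeholders; note that file search may not accept .csv, so exporting as .txt or .json could be necessary, and older SDK versions expose vector stores under openai.beta:

import fs from 'fs'
import OpenAI from 'openai'
const openai = new OpenAI()

// 1. Upload the exported data file (file name/format is a placeholder)
const file = await openai.files.create({
  file: fs.createReadStream('orders-export.txt'),
  purpose: 'assistants'
})

// 2. Create a vector store and attach the uploaded file to it
const store = await openai.vectorStores.create({ name: 'appfarm-export' })
await openai.vectorStores.files.create(store.id, { file_id: file.id })

// 3. Ask questions against the stored data via the Responses API + file search
const answer = await openai.responses.create({
  model: 'gpt-4o',
  input: 'Which product category generated the most revenue?',
  tools: [{ type: 'file_search', vector_store_ids: [store.id] }]
})
console.log(answer.output_text)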
Another approach (not tested yet) for large and dynamic data sets may be to enable GraphQL endpoints on the relevant object classes in Appfarm, and create a system instruction for OpenAI to generate GraphQL queries based on the table and property names, responding to the user’s need. The query may be fed into a web request such as described here. The result (the relevant records) may then be fed to the Chat Completions API for the AI to analyze according to the need/use case.
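A rough, untested sketch of that idea. The GraphQL URL, auth header, schema description and model name are all placeholders:

import OpenAI from 'openai'
const openai = new OpenAI()

// Hypothetical description of the exposed object classes
const schemaHint =
  'GraphQL types: product { name, category, price }, orderLine { product, amount }.'
const userQuestion = 'What are the five most ordered products?'

// 1. Let the model write a GraphQL query for the question
const queryCompletion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'Return a single GraphQL query (no prose) for this schema: ' + schemaHint },
    { role: 'user', content: userQuestion }
  ]
})
const graphqlQuery = queryCompletion.choices[0].message.content

// 2. Run the query against the Appfarm GraphQL endpoint (URL and auth header are placeholders)
const dataResponse = await fetch('https://YOUR-SOLUTION.appfarm.app/api/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'x-api-key': APPFARM_API_KEY },
  body: JSON.stringify({ query: graphqlQuery })
})
const records = await dataResponse.json()

// 3. Feed the returned records back to Chat Completions for analysis
const analysis = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'Analyze the provided data and answer the question.' },
    { role: 'user', content: userQuestion + '\n\nData:\n' + JSON.stringify(records) }
  ]
})
console.log(analysis.choices[0].message.content)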
Thank you! As a starting point, we will export all data tables as CSV files and prompt OpenAI. Later we can try more advanced solutions.
Hi, I have created a File object class (openAIFileObjectClass) containing all the CSV data, but it seems like it is not transferred to ChatGPT. The Web Request Body Content contains this:
UPDATE: it seems like this is the trick:
“file”: JSON.stringify(${openAIFileObjectClass})
This was a quick test. Anyway, I now get improvement suggestions from OpenAI based on the uploaded data, but the responses are not accurate. So the conclusion is that OpenAI is not able to fully understand the data model in the CSV file. I tried to help it understand by describing the model in words, and I think that helps a little. So for now this is a problem.
Update: When switching from “gpt-3.5-turbo-1106” to “gpt-4-turbo” I get: “Please upload a valid CSV file to proceed with the analysis and recommendations.”
“The files are incorrectly referenced as JSON of ‘[object Object]’, which indicates a serialization error.”
Hi!
What you are trying to achieve is not supported by the API. You cannot send CSV files directly to the Chat Completions API. It is documented here.
I also see that you may have copied some slightly outdated info from the guide in our documentation. gpt-3.5-turbo is an old model; see the model guide here.
However, we have now updated our guide and included some examples of how to proceed with some use cases regarding “sending data to OpenAI”. Please read the following section:
PS: Regarding the conversion of CSV to a string, that currently needs to be done using the function editor in Appfarm Create, but we have registered an internal challenge / feature request to make this more explicit.
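For anyone looking for a starting point, here is a minimal sketch of such a coded expression, assuming an array of plain objects (here called records) has been mapped in as a function parameter:

// Minimal sketch of a function editor expression: turn mapped-in records
// (an array of plain objects) into a CSV string. "records" is a placeholder
// for whatever parameter name you map your data source to.
const rows = records || []
const headers = Object.keys(rows[0] || {})
const escapeCell = value => '"' + String(value == null ? '' : value).replace(/"/g, '""') + '"'
const lines = [headers.join(',')]
rows.forEach(row => lines.push(headers.map(h => escapeCell(row[h])).join(',')))
return lines.join('\n')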
Good luck!
Thank you! Great advice and service.
To the forum: can anyone see what I am doing wrong here:
OpenAI tells me: “The provided dataset does not contain readable Goal, Challenge, or Task information. Please provide actual data for review.” It also tells me: “Ensure all imported data objects are legible and human-readable, not placeholders like [object Object].” “Add clear audit trails and visibility of imported data to enable traceability and accountability in the model.”
Change
“text”: CSVtoText(${…})
To
“text”: ${CSVtoText(…)}
You are not executing that function the way you have it in the screenshot. It should be like this:
${code returning some text here}
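Extending the corrected line above into a fuller (hypothetical) body, with the model name and the surrounding fields only as an example, and the ellipsis standing for whatever you pass to CSVtoText:

{
  "model": "gpt-4-turbo",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": ${CSVtoText(…)} }
      ]
    }
  ]
}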
PS: just use ChatGPT or Gemini for such purposes - paste the code and ask the same question, and it will locate the error immediately!
Wow, a few hours of work and assistance from Appfarm, and now I get advice from my new OKR and Risk Manager at OpenAI. This guy looks promising; I think I’ll hire him:
Clarify Key Result: Increase Customer Retention Rate
Goal: Improve Customer Satisfaction
Redefine KR to specify the retention rate target, timeframe, and baseline for clearer measurement.
Add Key Risk: High Churn Due to Product Downtime
Goal: Improve Customer Satisfaction
Introduce a Key Risk to monitor and mitigate churn from system outages, with assigned likelihood and impact.
Add Task: Quarterly NPS Survey Implementation
Key Result: Increase NPS Score
Assign task to design, distribute, and analyze a quarterly Net Promoter Score survey for actionable insights.
Refine Key Risk: Data Breach Impact
Goal: Strengthen Data Security
Update Key Risk to specify affected data types, potential regulatory fines, and customer trust impact.
Add Key Result: Achieve ISO 27001 Certification
Goal: Strengthen Data Security
Add a measurable Key Result to obtain ISO 27001 by Q4, ensuring improved security and compliance posture.
Add Task: Weekly Progress Check-ins
Key Result: Complete New Feature Development
Assign weekly check-ins to track progress, identify blockers early, and adjust plans for timely delivery.