Hello,
I have a performance-related question regarding services/flows. We have several heavy services that retrieve metadata for a large number of documents, checklists, certifications, etc. from external sites (usually 30,000–70,000 objects).
Based on previous discussions, documentation, ADFs, and conversations with AI-Ask, I understand that best practice is to actively use runtime data sources and batch operations with data connectors, avoid for-each loops where possible, read a limited number of objects at a time, persist data once, run asynchronously, and otherwise apply relevant optimizations.
However, in practice this often performs poorly, tending to result in server timeouts or overload. When I’ve used Ask or other AI tools, I often get suggestions that either don’t fit the no-code architecture or involve mapping via functions. The latter, for example, caused such high server load for us in mid-March that you (Appfarm) had to tell us to calm down a bit.
A concrete example: I want a service or combination of services that:
- retrieves document metadata from an API (a Unique Document ID, plus strings that can be used to download or navigate to the document)
- updates existing documents in our data source if any changes have occurred since the last run
- deletes documents from the data source that are no longer included in the API response
Currently, I have split the API calls into a separate action to reduce complexity. The action below is meant to create, update, and delete, while a previous action has already retrieved all data from the API and stored it in the Dokumenter i Landax (fra API) data source. The API data is read into a runtime data source (from API, temp), after which new documents are created directly in a data connector (alternatively, this could go via temp → persist).
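To make the intent concrete, here is a minimal sketch of the create/update/delete diff I am trying to express with no-code actions. This is plain TypeScript for illustration only, not Appfarm code; the field names (uniqueDocumentId, downloadUrl, modifiedAt) are assumptions based on the metadata described above.

```typescript
// Illustrative sketch only; field names are assumed, not actual API fields.
interface DocMeta {
  uniqueDocumentId: string; // key used to match API objects to stored objects
  downloadUrl: string;      // string used to download/navigate to the document
  modifiedAt: string;       // assumed change marker from the API
}

// Partition the API result against the stored documents by key, so that
// create/update/delete never depends on object order or position.
function diffByKey(apiDocs: DocMeta[], storedDocs: DocMeta[]) {
  const apiById = new Map(apiDocs.map(d => [d.uniqueDocumentId, d] as const));
  const storedById = new Map(storedDocs.map(d => [d.uniqueDocumentId, d] as const));

  const toCreate = apiDocs.filter(d => !storedById.has(d.uniqueDocumentId));
  const toUpdate = apiDocs.filter(d => {
    const stored = storedById.get(d.uniqueDocumentId);
    return stored !== undefined && stored.modifiedAt !== d.modifiedAt;
  });
  const toDelete = storedDocs.filter(d => !apiById.has(d.uniqueDocumentId));

  return { toCreate, toUpdate, toDelete };
}
```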
The problem occurs when I try to delete or update the document objects in the data connector. At this point the service consistently crashes with the error “server did not get a response in time,” and it also impacts the entire environment’s API.
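What I would like to emulate is something like the chunked persist below, so that no single request has to update or delete tens of thousands of objects at once. persistBatch is a hypothetical callback standing in for whatever the data connector actually does per batch.

```typescript
// Illustrative sketch: apply changes in fixed-size chunks instead of one
// huge operation. persistBatch is a hypothetical stand-in for the actual
// create/update/delete step against the data connector.
async function persistInChunks<T>(
  items: T[],
  chunkSize: number,
  persistBatch: (chunk: T[]) => Promise<void>,
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    await persistBatch(items.slice(i, i + chunkSize)); // e.g. 1,000 objects at a time
  }
}
```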
Previously, I have tried reading much smaller amounts of data into the runtime data sources and using pagination in the API call to improve performance. This has usually caused uniqueness-constraint issues, because the objects in a given API page did not necessarily line up with the corresponding objects in the data source, so Object State = New was applied to objects that already existed.
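If it helps clarify what goes wrong, this is roughly the behaviour I would want instead: each page is matched against the full set of known IDs, so whether an object counts as new never depends on which page it arrives on. fetchPage and the field name uniqueDocumentId are assumptions for the sketch.

```typescript
// Illustrative sketch: decide create vs. update by key lookup per page,
// never by page position. fetchPage is a hypothetical API wrapper.
async function syncPaginated(
  fetchPage: (page: number) => Promise<{ uniqueDocumentId: string }[]>,
  existingIds: Set<string>, // IDs already present in the data source
): Promise<void> {
  for (let page = 1; ; page++) {
    const docs = await fetchPage(page);
    if (docs.length === 0) break; // no more pages
    for (const doc of docs) {
      if (existingIds.has(doc.uniqueDocumentId)) {
        // treat as existing: update in place, never Object State = New
      } else {
        // treat as genuinely new: create it and remember the ID
        existingIds.add(doc.uniqueDocumentId);
      }
    }
  }
}
```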
I am also aware that the easiest way to solve this is simply not to retrieve so many damn objects, but so far the upside of having all of them readily available to filter and sort in our apps has outweighed that.
Any recommendations for how I should approach this?

