The Fabasoft Integration for OData processes data in two steps:
- During the first step (“Cube creation”), all objects as defined in the data source are queried and loaded into the OData service. The scope of objects considered is defined by the context of the data source (e.g. Teamroom). If the objects of the query result contain references to instances of classes contained in the data source, these instances are added to the object set. No queries are executed for the object classes “Object”, “User” and “Group”. Instead, the references to these instances are collected in the corresponding tables.
- During the second step (“Table creation”), the values of each table are loaded and mapped to the structure required by the OData query processor. This data is created per table on demand. However, all rows of a table are created regardless of how many rows the query actually requests.
Keep in mind that any caching within the OData services is performed per user. Thus, if multiple users access an OData service, each of them works with a dedicated OData cache.
Performance of the Initial Call to an OData Service
In order to reduce the processing overhead of the initial call to an OData service, you can
- reduce the number of entities or tables, and/or
- reduce the number of properties inside entities or tables with a high row count.
Try removing unused properties to reduce the overall processing time for each entity. This is most likely the fastest way to resolve such performance issues.
Performance of Subsequent Calls to an OData Service
If subsequent calls to the OData service are slow, you may increase the value in the Cache Duration field. The data is then kept in the cache for a longer period, so subsequent calls can reuse the already existing data table.
Performance Optimization “Getting Only X Rows”
In the OData request syntax there is a query option that limits the number of rows fetched. It is the “top” command and can be used as follows:
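For example, a request that fetches only the first 100 rows could look like the following sketch. The service root URL and the entity set name (“Documents”) are placeholders; substitute the values of your actual OData service.

```python
# Build an OData request URL that fetches only the first 100 rows
# of an entity set by appending the "$top" query option.
# Both the service root and the entity set name are hypothetical.
service_root = "https://example.com/folio/odata/service"
entity_set = "Documents"
top = 100

url = f"{service_root}/{entity_set}?$top={top}"
print(url)  # → https://example.com/folio/odata/service/Documents?$top=100
```

The resulting URL can then be issued with any HTTP client or entered directly as the feed URL in a business intelligence tool.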
This top command is also used by some business intelligence tools, such as Microsoft Power BI, when showing a preview of the data.
Because it is commonly used, this command has been optimized: internally, only the requested number of entities is fetched and processed. For the user this is transparent – the query simply behaves as expected, just a little faster than other commands.