2023 March Release

Performance

The Fabasoft Integration for OData processes data in two steps:

  • During the first step (“Cube creation”), all objects as defined in the data source are queried and loaded into the OData service. The scope of objects considered is defined by the context of the data source (e.g. Teamroom). If the objects of the query result contain references to instances of classes contained in the data source, these instances are added to the object set. No queries are executed for the object classes “Object”, “User” and “Group”. Instead, the references to these instances are collected in the corresponding tables.
  • During the second step (“Table creation”), the values per table are loaded and mapped to the structure required by the OData query processor. This data is created per table on demand. However, all table rows are created regardless of the number of rows queried.

Keep in mind that any caching within the OData services is performed per user. Thus, if multiple users access an OData service, each of them works with their own dedicated OData cache.
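As a client-side illustration, here is a minimal sketch, assuming a hypothetical service URL and credentials (the Scrum_Story entity set is taken from the examples later in this document): the first request pays for cube and table creation, while a repeated request within the cache duration can be served from the per-user cache.

import time

import requests

# Hypothetical service URL and credentials; replace them with your own.
SERVICE = "https://example.fabasoft.cloud/folio/odata/COO.1.506.4.2141"
AUTH = ("user@example.com", "access-key")

for label in ("initial call", "subsequent call"):
    start = time.monotonic()
    response = requests.get(f"{SERVICE}/Scrum_Story", auth=AUTH)
    response.raise_for_status()
    rows = response.json().get("value", [])
    print(f"{label}: {time.monotonic() - start:.1f} s, {len(rows)} rows")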

Performance of the Initial Call to an OData Service

To reduce the processing overhead of the initial call to an OData service, you can

  • reduce the number of entities or tables, and/or
  • reduce the number of properties inside entities or tables with high row count.
    Try removing unused properties to reduce the overall processing time for each entity. This is most likely the fastest way to resolve such performance issues.

Performance of Subsequent Calls to an OData Service

If subsequent calls to the OData service are too slow, you can increase the value in the Cache Duration field. The data is then kept in the cache for a longer period, so subsequent calls can reuse the already existing data tables.

Performance Optimization “Getting Only X Rows”

In the OData request syntax there is a query option that limits the number of rows fetched: the “top” command. It can be used as follows:

odata/COO.1.506.4.2141/Scrum_Story?$top=500

The top command is also used by some business intelligence tools, such as Microsoft Power BI, when showing a preview of the data.

Because it is so commonly used, this command has been optimized: internally, only the requested number of entities is fetched and processed. For the user this is transparent; the query does exactly what it should, just a little faster.
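If you query the service from a script instead of a business intelligence tool, the top command can be passed as a regular query parameter. A brief sketch in Python, again assuming a hypothetical service URL and credentials:

import requests

# Hypothetical service URL and credentials; replace them with your own.
SERVICE = "https://example.fabasoft.cloud/folio/odata/COO.1.506.4.2141"
AUTH = ("user@example.com", "access-key")

# Internally, only the first 500 entities are fetched and processed.
response = requests.get(f"{SERVICE}/Scrum_Story", params={"$top": 500}, auth=AUTH)
response.raise_for_status()
print(len(response.json()["value"]))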

Performance Optimization “Updated Data”

As many customers need to fetch data periodically, there is a performance optimization for the case that a filter checks whether the objchangedat timestamp is greater than a given date.

This can be used, for example, to fetch the changed objects from the Fabasoft Cloud once a day. Internally, this case is handled deep in the system, avoiding much of the usual overhead.

How It Works

First, you have to add the objchangedat property (System Change Timestamp in English) to your entity.

Then you can use the filter like this:

odata/COO.1.506.4.2141/Scrum_Story?$filter=System_Änderungszeitpunkt gt 2023-01-10T09:20:00+01:00

It is important to set the date in the correct format, otherwise you will get a message like this:

"message": "The query specified in the URI is not valid. The DateTimeOffset text '2023-01-10T09:20' should be in format 'yyyy-mm-ddThh:mm:ss('.'s+)?(zzzzzz)?' and each field value is within valid range."

Limitations

There are some limitations when using this feature:

  • Only the system change timestamp can be used efficiently. All other filters on fields are processed normally (see the example after this list).
  • Only “greater than <date>” comparisons can be used, as this is the optimized use case: loading only data changed since a given timestamp.
  • As the optimization is done deep in the system, the OData cache cannot be used. This means that even if you have chosen to cache the results for one hour, the results are regenerated on every request.
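For illustration, a combined filter such as the following is possible, but only the System_Änderungszeitpunkt comparison benefits from the optimization; the comparison on the hypothetical Status field is processed normally:

odata/COO.1.506.4.2141/Scrum_Story?$filter=System_Änderungszeitpunkt gt 2023-01-10T09:20:00+01:00 and Status eq 'Open'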