The following requirements apply to operating the Fabasphere in the “Private Cloud” operating model.
The following infrastructure is required:

- Kubernetes cluster
- Data storage via an NFS file share
- Database
- Container registry
The following requirements apply to operation.
Required Services
Note: The required services are not part of the Fabasphere deployment.
Optional Services
Note: The optional services are not part of the Fabasphere deployment.
Configuration Management/Deployment
Note: The required tools are not part of the Fabasphere deployment.
External Cluster Access (TCP)
IP addresses must be provided for services with the service type “LoadBalancer” (e.g., via MetalLB).
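As an illustration, a MetalLB setup for this purpose might look like the following sketch. The pool name, namespace, IP range, and service are placeholders, not Fabasphere defaults; the `IPAddressPool` and `L2Advertisement` resources shown are the CRD-based configuration of MetalLB 0.13 and later.

```yaml
# Hypothetical MetalLB address pool (names and IP range are placeholders).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: fabasphere-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.10-192.0.2.20
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: fabasphere-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - fabasphere-pool
---
# Example Service of type "LoadBalancer" that would receive an address
# from the pool above (name, selector, and ports are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web-service
  ports:
    - port: 443
      targetPort: 8443
```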
For the operation of the Fabasphere services in the “Private Cloud” operating model, at least the following resources per service are required (based on 1000 registered users and 10 TB of data).
| Service | CPU (Requested) | RAM | Persistent Storage | Remarks |
|---|---|---|---|---|
| COO Service | 4 | 16 GB | 128 MB | DTM logs need persistent storage. |
| Storage Service | 4 | 8 GB | 3 x 10 TB | 3 x NFS shares for redundant storage (no replicas). |
| Web Service | 2 | 16 GB | - | As the number of objects increases, an increase in RAM is recommended. As user request load increases, an increase in CPU capacity is required. |
| AT Service | 2 | 16 GB | - | |
| IdP | 2 | 6 GB | - | |
| EventQ | 2 | 2 GB | 2 GB | |
| DTS | 16 | 64 GB | 8 GB | |
| MIS | 4 | 12 GB | 50 GB | |
| EXTCACHE | 2 | 8 GB | - | |
| OData | 2 | 8 GB | - | Optional |
| OpenAPI | 2 | 8 GB | - | Optional |
| State Server | 2 | 4 GB | - | |
| COODOTNET | 4 | 16 GB | - | Optional |
For a typical standard operation with 1000 registered users and 10 TB of data, at least the following number of replicas per service is required:
| Service | Replicas | Remarks on Scaling |
|---|---|---|
| COO Service | 3 | A dedicated database is required for each COO Service instance. The minimum required resources of the individual COO Service instances may vary depending on the configured object placement and may require different configurations. |
| Storage Service | 3 | Two replicas must be active for handling requests; the third replica writes an online backup of the data and serves as a fallback. |
| Web Service | 12 | Each Web Service provides a maximum of 64 threads. As the number of requests increases, the number of instances must be increased accordingly. |
| AT Service | 2 | Depending on the number of automated tasks to be processed, it may be necessary to increase the number of AT Services. |
| IdP | 2 | The IdP is operated redundantly with two replicas. Higher scaling is not required. |
| EventQ | 3 | Exactly three replicas are required for operating the EventQ. This number must neither be increased nor decreased. |
| DTS | 1 | DTS consists of individual microservices for the respective conversion tools. Scaling is only possible at the tool level. Automatic scaling mechanisms are provided to enable load-dependent scaling. |
| MIS | 1 | MIS consists of individual microservices. Higher scaling is not required. |
| EXTCACHE | 3 | One replica per worker node of the orchestration platform is recommended. |
| OData | 2 | With increased request load, it may be necessary to increase the number of replicas. If predominantly larger data queries are performed, an increase in RAM is required. |
| OpenAPI | 2 | With increased request load, it may be necessary to increase the number of replicas. |
| State Server | 2 | With increased request load, it may be necessary to increase the number of replicas. |
| COODOTNET | 2 | With increased request load from OData and OpenAPI, it may be necessary to increase the number of replicas. |
The specifications regarding resources and scaling are based on independent empirical values and represent the minimum requirements. Depending on the hardware used, the actual request load, and user behavior, an increase in resources or instances may be necessary.
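For capacity planning, the per-instance requests and the replica counts from the two tables can be combined into a total. The following sketch does this in Python; the numbers are copied from the tables above, and the grouping into “optional” services (OData, OpenAPI, COODOTNET) follows the remarks in the resource table.

```python
# Minimal sketch: estimate the total cluster capacity implied by the two
# tables above, by multiplying per-instance requests by the replica counts.
# name: (cpu_per_replica, ram_gb_per_replica, replicas, optional)
SERVICES = {
    "COO Service":     (4, 16, 3, False),
    "Storage Service": (4, 8, 3, False),
    "Web Service":     (2, 16, 12, False),
    "AT Service":      (2, 16, 2, False),
    "IdP":             (2, 6, 2, False),
    "EventQ":          (2, 2, 3, False),
    "DTS":             (16, 64, 1, False),
    "MIS":             (4, 12, 1, False),
    "EXTCACHE":        (2, 8, 3, False),
    "OData":           (2, 8, 2, True),
    "OpenAPI":         (2, 8, 2, True),
    "State Server":    (2, 4, 2, False),
    "COODOTNET":       (4, 16, 2, True),
}

def totals(include_optional: bool = True) -> tuple[int, int]:
    """Return (total vCPU, total RAM in GB) summed across all replicas."""
    cpu = ram = 0
    for cpu_req, ram_gb, replicas, optional in SERVICES.values():
        if optional and not include_optional:
            continue
        cpu += cpu_req * replicas
        ram += ram_gb * replicas
    return cpu, ram

print(totals())                        # (108, 486) including optional services
print(totals(include_optional=False))  # (92, 422) without OData/OpenAPI/COODOTNET
```

These totals are minimums for the stated reference load (1000 registered users, 10 TB of data) and exclude the operating-system and Kubernetes overhead of the nodes themselves.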
Mindbreeze AI is operated on the same Kubernetes cluster. The required language model must be obtained separately, for example from Hugging Face. Mindbreeze AI requires a PersistentVolumeClaim to store the data needed for the AI use cases.
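Such a PersistentVolumeClaim might look like the following sketch; the name, storage class, and size are placeholders, not Mindbreeze defaults, and must be chosen to fit the language model and data volume in use.

```yaml
# Hypothetical PersistentVolumeClaim for Mindbreeze AI data
# (name, storageClassName, and size are placeholders).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mindbreeze-ai-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 100Gi
```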
Recommendations:
Supported GPUs:
Mindbreeze AI supports CUDA GPUs with the following compute capability versions: 6.0, 6.1, 7.0, 7.5, 8.0, and 9.0.
A list of devices is maintained at the following external link: CUDA GPU Compute Capability.
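A simple way to validate a GPU against this list is to compare its reported compute capability string with the supported versions above. The following sketch shows such a check; how the capability is obtained in the first place (e.g., via `nvidia-smi` or the CUDA runtime) is environment-specific and not shown here.

```python
# Check a reported CUDA compute capability against the versions listed above
# as supported by Mindbreeze AI.
SUPPORTED_COMPUTE_CAPABILITIES = {"6.0", "6.1", "7.0", "7.5", "8.0", "9.0"}

def is_supported(compute_cap: str) -> bool:
    """Return True if the given compute capability (e.g., "8.0") is supported."""
    return compute_cap.strip() in SUPPORTED_COMPUTE_CAPABILITIES

print(is_supported("8.0"))  # True
print(is_supported("8.6"))  # False: 8.6 is not in the supported list
```

Note that the check is an exact match against the listed versions: for example, compute capability 8.6 is not covered even though 8.0 is.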