Configuration
Core service configuration
| Environment variable | Description | Default value |
|---|---|---|
LOGGING_LEVEL_APP | Application logging level | INFO |
Service communication
| Environment variable | Description | Default value |
|---|---|---|
FLOWX_CMS_BASEURL | Base URL for the CMS service | http://cms-core:80 |
FLOWX_NOSQLDBRUNNER_BASEURL | Base URL for the NoSQL DB Runner service | http://nosql-db-runner:80 |
FLOWX_DOCUMENTPLUGIN_BASEURL | Base URL for the Document Plugin service | http://document-plugin:80 |
FLOWX_AISERVICE_BASEURL | Base URL for the AI Platform service | http://ai-platform-connected-graph:9100 |
FLOWX_WEBHOOKGATEWAY_BASEURL | Base URL for the Webhook Gateway service | http://webhook-gateway:80 |
FLOWX_WEBCRAWLER_BASEURL | Base URL for the web-crawler service (Web Page Extractor node) | http://web-crawler:80 |
FLOWX_WEBHOOKGATEWAY_BASEURL and FLOWX_WEBCRAWLER_BASEURL are available starting with FlowX.AI 5.6.0.
Storage configuration
Use the following variables to configure S3-compatible storage for files in Integration Designer.
| Variable | Default Value | Description |
|---|---|---|
APPLICATION_FILESTORAGE_TYPE | s3 | 🆕 Storage type |
APPLICATION_FILESTORAGE_PARTITIONSTRATEGY | NONE | 🆕 Partition strategy |
APPLICATION_FILESTORAGE_DELETIONSTRATEGY | delete | 🆕 Deletion strategy |
APPLICATION_FILESTORAGE_S3_ENABLED | true | 🆕 S3 enabled |
APPLICATION_FILESTORAGE_S3_SERVERURL | http://minio:9000 | S3 server URL |
APPLICATION_FILESTORAGE_S3_ENCRYPTIONENABLED | false | S3 encryption enabled |
APPLICATION_FILESTORAGE_S3_ACCESSKEY | - | S3 access key |
APPLICATION_FILESTORAGE_S3_SECRETKEY | - | S3 secret key |
APPLICATION_FILESTORAGE_S3_BUCKETPREFIX | workflows-bucket | 🆕 S3 bucket prefix |
Redis
Integration Designer uses Redis for caching workflow execution data. Cache-specific configuration:
| Variable | Description | Default Value |
|---|---|---|
SPRING_CACHE_TYPE | Cache type | redis |
SPRING_CACHE_REDIS_KEYPREFIX | Cache key prefix | flowx:core:cache:integration-designer: |
SPRING_CACHE_REDIS_TIMETOLIVE | Cache time to live | 5000000 |
| Environment Variable | Description | Example Value | Status |
|---|---|---|---|
SPRING_DATA_REDIS_HOST | Redis server hostname | localhost | Recommended |
SPRING_DATA_REDIS_PORT | Redis server port | 6379 | Recommended |
SPRING_DATA_REDIS_PASSWORD | Redis authentication password | - | Recommended |
REDIS_TTL | Cache TTL in milliseconds | 5000000 | Optional |
Both SPRING_DATA_REDIS_* and SPRING_REDIS_* variable prefixes are supported. The SPRING_DATA_REDIS_* prefix is the modern Spring Boot standard and is recommended for new deployments.
For advanced Redis deployment modes (Sentinel, Cluster) and SSL/TLS setup, see the Redis Configuration guide. Note that Sentinel and Cluster modes are only supported by the Events Gateway service.
WebClient configuration
Integration Designer interacts with various APIs, some of which return large responses. To handle these efficiently, configure the FlowX WebClient buffer size to accommodate larger payloads, especially when working with legacy APIs that do not support pagination.
| Environment variable | Description | Default value |
|---|---|---|
FLOWX_WEBCLIENT_BUFFERSIZE | Buffer size (in bytes) for FlowX WebClient | 10485760 (10MB) |
FLOWX_WEBCLIENT_BUFFERSIZEFORDOCUMENTPLUGIN | Buffer size for Document Plugin calls | 15728640 (15MB) |
FLOWX_WEBCLIENT_COMPRESSIONENABLED | Enable response compression | true |
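As a quick sanity check, a response can only be decoded if it fits within the configured buffer. The sketch below illustrates that limit in Python; the function name is ours, not part of the service:

```python
import os

# Default from the table above (10485760 = 10 * 1024 * 1024 bytes);
# override by setting FLOWX_WEBCLIENT_BUFFERSIZE in the environment.
buffer_size = int(os.environ.get("FLOWX_WEBCLIENT_BUFFERSIZE", 10 * 1024 * 1024))

def fits_in_buffer(payload: bytes, limit: int = buffer_size) -> bool:
    """Return True when the response payload fits the WebClient buffer."""
    return len(payload) <= limit

print(fits_in_buffer(b"x" * 1024))                 # small payload -> True
print(fits_in_buffer(b"x" * (11 * 1024 * 1024)))   # 11 MB payload -> False
```

If workflows routinely pull payloads near the limit, raise the buffer size rather than relying on compression alone.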
Workflow execution configuration
| Environment variable | Description | Default value |
|---|---|---|
FLOWX_WORKFLOW_MAXEXECUTEDNODES | Maximum number of nodes a workflow can execute | 1000 |
FLOWX_WORKFLOW_MAXSUBWORKFLOWDEPTH | Maximum nesting depth for subworkflows | 100 |
FLOWX_SCRIPTENGINE_ALLOWUNSAFEEVAL | Allow unsafe eval in workflow scripts | false |
Retry configuration
| Environment variable | Description | Default value |
|---|---|---|
FLOWX_RETRY_THREADS | Worker threads for retry processing | 20 |
FLOWX_RETRY_PICKINGPAUSEMILLIS | Pause between retry picking batches (ms) | 500 |
FLOWX_RETRY_COOLDOWNAFTERSECONDS | Cooldown period after retry processing (sec) | 120 |
FLOWX_RETRY_SCHEDULER_UNBLOCKRETRIES_CRONEXPRESSION | Cron expression for unblocking retries | "*/2 * * * * ?" |
Database configuration
Advancing database
The Advancing Controller is a support service that optimizes advancing operations by ensuring efficient, balanced workload distribution, especially during scale-up and scale-down events. It can connect to the same database as the FlowX.AI Engine. To enable Integration Designer to interact with the Advancing database, configure the following environment variables:
| Environment variable | Description | Default value |
|---|---|---|
ADVANCING_DATASOURCE_URL | Database JDBC URL | jdbc:postgresql://postgresql:5432/advancing |
ADVANCING_DATASOURCE_USERNAME | Database username | flowx |
ADVANCING_DATASOURCE_PASSWORD | Database password | - |
ADVANCING_DATASOURCE_DRIVERCLASSNAME | JDBC driver class | org.postgresql.Driver |
Configuring the Advancing controller
| Environment variable | Description | Default value |
|---|---|---|
ADVANCING_THREADS | Number of worker threads for advancing operations | 20 |
ADVANCING_PICKINGBATCHSIZE | Number of events picked per batch | 30 |
ADVANCING_PICKINGPAUSEMILLIS | Pause duration between picking batches (ms) | 500 |
ADVANCING_COOLDOWNAFTERSECONDS | Cooldown period after processing (seconds) | 120 |
ADVANCING_SCHEDULER_HEARTBEAT_CRONEXPRESSION | Cron expression for the heartbeat | "*/2 * * * * ?" |
Available starting with FlowX.AI 5.5.0
The advancing controller model was redesigned in 5.5.0 with separate picking and processing thread pools, replacing ADVANCING_THREADS and ADVANCING_PICKINGBATCHSIZE.
| Environment variable | Description | Default value |
|---|---|---|
ADVANCING_PICKINGTHREADS | Number of worker threads for reading from database (picking operations) | 1 |
ADVANCING_PROCESSINGTHREADS | Number of threads for parallel processing of advancing events | 20 |
ADVANCING_PROCESSINGBUFFERSIZE | Maximum buffer size for processing queue. Controls how many events can be queued | 20 |
ADVANCING_BLOCKPICKINGIFNOWORKERAVAILABLE | Block picking operations when no worker threads are available | true |
ADVANCING_PICKINGPAUSEMILLIS | Pause duration between picking batches (ms) | 50 |
ADVANCING_COOLDOWNAFTERSECONDS | Cooldown period after processing a batch (seconds) | 120 |
ADVANCING_SCHEDULER_HEARTBEAT_CRONEXPRESSION | Cron expression for the heartbeat | "*/2 * * * * ?" |
How the new advancing controller works:
- Picking threads (ADVANCING_PICKINGTHREADS): Controls how many worker threads read events from the database. This handles only the picking/reading operations.
- Processing buffer (ADVANCING_PROCESSINGBUFFERSIZE): Acts as a queue between picking and processing. When the buffer is full, no new events are read. When space becomes available (even a single position), that many events are read.
- Processing threads (ADVANCING_PROCESSINGTHREADS): Controls how many threads process advancing events in parallel. Events are processed immediately if processing threads are available. If all processing threads are busy, events accumulate in the buffer until it reaches capacity.
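The picking/buffer/processing hand-off described above can be modeled as a bounded queue. This is a toy sketch of the backpressure behavior, not the service's actual implementation; the constant mirrors the ADVANCING_PROCESSINGBUFFERSIZE default:

```python
import queue

BUFFER_SIZE = 20                          # ADVANCING_PROCESSINGBUFFERSIZE default
buffer = queue.Queue(maxsize=BUFFER_SIZE)

picked = 0
for event in range(30):                   # a picking batch larger than the buffer
    try:
        buffer.put_nowait(event)          # picking: read one event into the buffer
        picked += 1
    except queue.Full:
        break                             # buffer full -> picking stops until space frees

print(picked)                             # only 20 events were read

buffer.get_nowait()                       # a processing thread drains one event...
buffer.put_nowait(picked)                 # ...freeing one slot, so one more event is picked
```

This is why ADVANCING_BLOCKPICKINGIFNOWORKERAVAILABLE defaults to true: picking pauses rather than piling up work that no processing thread can accept.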
The Advancing Controller supports multiple database systems including PostgreSQL and Oracle. Ensure you configure the appropriate JDBC URL and driver class for your chosen database system.
Workflow partitioning configuration
Available starting with FlowX.AI 5.5.0
| Environment variable | Description | Default value | Options |
|---|---|---|---|
FLOWX_DATA_PARTITIONING_ENABLED | Turn on or off workflow instance partitioning | false | true, false |
FLOWX_DATA_PARTITIONING_INTERVAL | Time interval for creating partitions | MONTH | DAY, WEEK, MONTH |
FLOWX_DATA_PARTITIONING_ARCHIVING_ENABLED | Turn on or off automatic archiving of old partitions | false | true, false |
FLOWX_DATA_PARTITIONING_ARCHIVING_RETENTIONINTERVALS | Number of intervals to retain before archiving | 3 | Any positive integer |
FLOWX_DATA_PARTITIONING_ARCHIVING_CRONEXPRESSION | cron expression for archiving schedule | 0 0 1 * * ? | Any valid cron expression |
How workflow partitioning works:
- Partitioning: When turned on, workflow instances are stored in time-based partitions according to the specified interval.
- Partition interval: Determines how frequently new partitions are created (DAY, WEEK, or MONTH).
- Archiving: When turned on, automatically archives partitions older than the specified retention intervals. For example, with a MONTH interval and 3 retention intervals, partitions older than 3 months are archived.
- Archiving schedule: The cron expression controls when the archiving process runs. The default 0 0 1 * * ? runs daily at 1:00 AM.
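The retention rule reduces to simple date arithmetic. The sketch below illustrates the MONTH-interval case with the default of 3 retention intervals; it is an illustration of the rule, not the engine's actual code:

```python
from datetime import date

def months_old(partition_month: date, today: date) -> int:
    """Age of a monthly partition, in whole calendar months."""
    return (today.year - partition_month.year) * 12 + (today.month - partition_month.month)

def should_archive(partition_month: date, today: date, retention_intervals: int = 3) -> bool:
    """Archive partitions strictly older than the retention window."""
    return months_old(partition_month, today) > retention_intervals

today = date(2025, 6, 15)
print(should_archive(date(2025, 4, 1), today))   # 2 months old -> False (retained)
print(should_archive(date(2025, 1, 1), today))   # 5 months old -> True (archived)
```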
MongoDB
Integration Designer requires two MongoDB databases for managing integration-specific data and runtime data:
- Integration Designer database (integration-designer): Stores data specific to Integration Designer, such as integration configurations, metadata, and other operational data.
- Shared runtime database (app-runtime): Shared across multiple services, this database manages runtime data essential for integration and data flow execution.
| Environment variable | Description | Default value |
|---|---|---|
SPRING_DATA_MONGODB_URI | Integration Designer MongoDB URI | mongodb://mongodb-0.mongodb-headless:27017/integration-designer |
SPRING_DATA_MONGODB_USERNAME | MongoDB username | integration-designer |
SPRING_DATA_MONGODB_PASSWORD | MongoDB password | |
SPRING_DATA_MONGODB_STORAGE | Storage type (Azure environments only) | mongodb (or cosmosdb) |
SPRING_DATA_MONGODB_RUNTIME_URI | Runtime MongoDB URI | mongodb://mongodb-0.mongodb-headless:27017/app-runtime |
SPRING_DATA_MONGODB_RUNTIME_USERNAME | Runtime MongoDB username | app-runtime |
SPRING_DATA_MONGODB_RUNTIME_PASSWORD | Runtime MongoDB password |
Integration Designer requires a runtime connection to function correctly. Starting the service without a configured and active runtime MongoDB connection is not supported.
Configuration parameters
There are two types of Config Params that can be read from the environment: variables and secrets. There is one provider for variables and secrets extracted from the environment variables, and two providers for the ones extracted from Kubernetes. By default, the variables and secrets are extracted from environment variables (env provider).
Configuration parameters from environment variables (default)
The env provider used for variables and secrets extracts them from environment variables. For security reasons, the env provider uses an allow-list regex, which defaults to FLOWX_CONFIGPARAM_.*. This means only environment variables matching this naming pattern can be read at runtime into configuration params (either as variables or secrets). Edit the regex to match the environment variables used in your deployment.
| Environment variable | Description | Default value |
|---|---|---|
FLOWX_CONFIGPARAMS_VARS_PROVIDER | Provider type for variables | env |
FLOWX_CONFIGPARAMS_VARS_ALLOWLISTREGEX | Regular expression to match allowed env variables for variables | FLOWX_CONFIGPARAM_.* |
FLOWX_CONFIGPARAMS_SECRETS_PROVIDER | Provider type for secrets | env |
FLOWX_CONFIGPARAMS_SECRETS_ALLOWLISTREGEX | Regular expression to match allowed env variables for secrets | FLOWX_CONFIGPARAM_.* |
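The allow-list behaves like a regular-expression filter over the process environment. A minimal sketch of that filtering, with illustrative variable names:

```python
import re

# Default of FLOWX_CONFIGPARAMS_VARS_ALLOWLISTREGEX / _SECRETS_ALLOWLISTREGEX
ALLOWLIST = re.compile(r"FLOWX_CONFIGPARAM_.*")

env = {
    "FLOWX_CONFIGPARAM_API_KEY": "abc",        # matches the allow-list
    "PATH": "/usr/bin",                        # filtered out
    "FLOWX_CMS_BASEURL": "http://cms-core:80", # filtered out (wrong prefix)
}

# Only names fully matching the regex become readable config params.
readable = {k: v for k, v in env.items() if ALLOWLIST.fullmatch(k)}
print(sorted(readable))   # ['FLOWX_CONFIGPARAM_API_KEY']
```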
Configuration parameters from Kubernetes Secrets and ConfigMaps
Use the following configuration to read Config Params from Kubernetes Secrets and ConfigMaps:
| Environment variable | Description | Values |
|---|---|---|
FLOWX_CONFIGPARAMS_VARS_PROVIDER | Provider type for variables | k8s-configmaps |
FLOWX_CONFIGPARAMS_SECRETS_PROVIDER | Provider type for secrets | k8s-secrets |
| Environment variable | Description | Values |
|---|---|---|
FLOWX_CONFIGPARAMS_PROVIDERS_K8SCONFIGMAPS_CONFIGMAPSLIST_0 | Name of the ConfigMap to use for variables | flowx-configparams |
FLOWX_CONFIGPARAMS_PROVIDERS_K8SSECRETS_SECRETSLIST_0 | Name of the Secret to use for secrets | flowx-configparams |
You can configure multiple Secrets and ConfigMaps by incrementing the index number (e.g., FLOWX_CONFIGPARAMS_PROVIDERS_K8SSECRETS_SECRETSLIST_1, FLOWX_CONFIGPARAMS_PROVIDERS_K8SCONFIGMAPS_CONFIGMAPSLIST_1). In dev environments, the typical ConfigMap/Secret name is flowx-rt. Values are overridden based on the order in which the maps are defined.
The default provider is env, and it ships with a built-in allow-list regex of FLOWX_CONFIGPARAM_.*. Only configuration parameters matching this naming pattern can be read at runtime, whether they come from environment variables or secret variables.
Kafka configuration
Kafka connection and security variables
| Environment variable | Description | Default value |
|---|---|---|
KAFKA_BOOTSTRAP_SERVERS | Kafka broker addresses (fallback: SPRING_KAFKA_BOOTSTRAP_SERVERS) | localhost:9092 |
KAFKA_SECURITY_PROTOCOL | Security protocol (fallback: SPRING_KAFKA_SECURITY_PROTOCOL) | PLAINTEXT or SASL_PLAINTEXT |
FLOWX_WORKFLOW_CREATETOPICS | Auto-create topics | false |
Message size configuration
| Environment variable | Description | Default value |
|---|---|---|
KAFKA_MESSAGE_MAX_BYTES | Maximum message size | 52428800 (50MB) |
This value is applied to:
- Producer message max bytes
- Producer max request size
Consumer configuration
| Environment variable | Description | Default value |
|---|---|---|
KAFKA_CONSUMER_GROUPID_STARTWORKFLOWS | Start workflows consumer group | start-workflows-group |
KAFKA_CONSUMER_GROUPID_RESELEMUSAGEVALIDATION | Resource usage validation consumer group | integration-designer-res-elem-usage-validation-group |
KAFKA_CONSUMER_GROUPID_SYSTEMSYNC | System sync consumer group (5.5+) | system-sync-group |
KAFKA_CONSUMER_GROUPID_WORKFLOWSYNC | Workflow sync consumer group (5.5+) | workflow-sync-group |
KAFKA_CONSUMER_GROUPID_CORRECTIONAFTERAPPOPERATION | Correction after app operation consumer group (5.5+) | correction-after-app-operation-group |
KAFKA_CONSUMER_THREADS_STARTWORKFLOWS | Start workflows consumer threads | 3 |
KAFKA_CONSUMER_THREADS_RESELEMUSAGEVALIDATION | Resource usage validation consumer threads | 3 |
KAFKA_CONSUMER_THREADS_SYSTEMSYNC | System sync consumer threads (5.5+) | 3 |
KAFKA_CONSUMER_THREADS_WORKFLOWSYNC | Workflow sync consumer threads (5.5+) | 3 |
KAFKA_CONSUMER_THREADS_CORRECTIONAFTERAPPOPERATION | Correction after app operation consumer threads (5.5+) | 3 |
KAFKA_AUTHEXCEPTIONRETRYINTERVAL | Retry interval after authorization errors | 10 (seconds) |
Topic naming convention and pattern creation
The Integration Designer uses a structured topic naming convention that follows a standardized pattern, ensuring consistency across environments and making topics easily identifiable.
Topic naming components
| Environment variable | Description | Default value |
|---|---|---|
KAFKA_TOPIC_NAMING_PACKAGE | Base package for topics | ai.flowx. |
KAFKA_TOPIC_NAMING_ENVIRONMENT | Environment identifier | |
KAFKA_TOPIC_NAMING_VERSION | Topic version | .v1 |
KAFKA_TOPIC_NAMING_SEPARATOR | Topic name separator | . |
KAFKA_TOPIC_NAMING_SEPARATOR2 | Alternative separator | - |
KAFKA_TOPIC_NAMING_ENGINERECEIVEPATTERN | Engine receive pattern | engine.receive. |
KAFKA_TOPIC_NAMING_INTEGRATIONRECEIVEPATTERN | Integration receive pattern | integration.receive. |
For example, the topic ai.flowx.eventsgateway.receive.workflowinstances.v1 breaks down as:
- ai.flowx. is the prefix (package + environment)
- eventsgateway is the service
- receive is the action
- workflowinstances is the detail
- .v1 is the suffix (version)
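Assuming the defaults above, the naming components combine roughly as shown below. This is a hypothetical reconstruction for illustration; the service may assemble names differently:

```python
package = "ai.flowx."   # KAFKA_TOPIC_NAMING_PACKAGE
environment = ""        # KAFKA_TOPIC_NAMING_ENVIRONMENT (empty by default)
version = ".v1"         # KAFKA_TOPIC_NAMING_VERSION
separator = "."         # KAFKA_TOPIC_NAMING_SEPARATOR

def topic_name(service: str, action: str, detail: str) -> str:
    """Assemble prefix + service + action + detail + version into a topic name."""
    prefix = package + (environment + separator if environment else "")
    return f"{prefix}{service}{separator}{action}{separator}{detail}{version}"

print(topic_name("eventsgateway", "receive", "workflowinstances"))
# ai.flowx.eventsgateway.receive.workflowinstances.v1
```

Setting KAFKA_TOPIC_NAMING_ENVIRONMENT (e.g., to an environment identifier like dev) would insert that segment after the package prefix.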
Kafka topic configuration
Core topics
| Environment variable | Description | Default Pattern |
|---|---|---|
KAFKA_TOPIC_AUDIT_OUT | Topic for sending audit logs | ai.flowx.core.trigger.save.audit.v1 |
Events gateway topics
| Environment variable | Description | Default Pattern |
|---|---|---|
KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE | Topic for workflow instances communication | ai.flowx.eventsgateway.receive.workflowinstances.v1 |
UI flow session variable updates
Available starting with FlowX.AI 5.6.0
| Environment variable | Description | Default Pattern |
|---|---|---|
KAFKA_TOPIC_UIFLOW_UPDATE_OUT | Topic for sending workflow results to process-engine for UI flow session variable updates | ai.flowx.core.trigger.ui-flow.update.v1 |
Engine and Integration communication topics
| Environment variable | Description | Default Pattern |
|---|---|---|
KAFKA_TOPIC_ENGINEPATTERN | Pattern for Engine communication | ai.flowx.engine.receive. |
KAFKA_TOPIC_INTEGRATIONPATTERN | Pattern for Integration communication | ai.flowx.integration.receive.* |
Application topics
Outbound topics:
| Environment variable | Description | Default Pattern |
|---|---|---|
KAFKA_TOPIC_APPLICATION_OUT_SYNCRESPONSE | Sync response messages (5.5+) | ai.flowx.application-version.sync.out.v1 |
KAFKA_TOPIC_APPLICATION_OUT_CORRECTIONAFTERAPPOPERATIONRESPONSE | Correction after app operation response (5.5+) | ai.flowx.application-version.correction-after-app-operation.response.v1 |
KAFKA_TOPIC_BUILD_RUNTIMEDATA | Build runtime data (5.5+) | ai.flowx.build.runtime-data.v1 |
KAFKA_TOPIC_LICENSE_OUT_USAGE | License usage tracking (5.5+) | ai.flowx.license.usage.v1 |
The sync.out.v1 and correction-after-app-operation.response.v1 topics exist since 5.1.x (produced by admin). Starting with 5.5.0, integration-designer also produces to these shared response topics for system and workflow sync/correction operations.
Inbound topics:
| Environment variable | Description | Default Pattern |
|---|---|---|
KAFKA_TOPIC_APPLICATION_IN_RESELEMUSAGEVALIDATION | Resource usage validation requests | ai.flowx.application-version.resources-usages.sub-res-validation.request-integration.v1 |
KAFKA_TOPIC_APPLICATION_IN_SYSTEMSYNCREQUEST | System sync requests (5.5+) | ai.flowx.application-version.sync.system.in.v1 |
KAFKA_TOPIC_APPLICATION_IN_WORKFLOWSYNCREQUEST | Workflow sync requests (5.5+) | ai.flowx.application-version.sync.workflow.in.v1 |
KAFKA_TOPIC_APPLICATION_IN_CORRECTIONAFTERAPPOPERATION_SYSTEM | System correction after app operation (5.5+) | ai.flowx.application-version.correction-after-app-operation.system.request.v1 |
KAFKA_TOPIC_APPLICATION_IN_CORRECTIONAFTERAPPOPERATION_WORKFLOW | Workflow correction after app operation (5.5+) | ai.flowx.application-version.correction-after-app-operation.workflow.request.v1 |
KAFKA_TOPIC_RESOURCESUSAGES_REFRESH | Resource usage refresh commands | ai.flowx.application-version.resources-usages.refresh.v1 |
OAuth authentication variables (when using SASL_PLAINTEXT)
| Environment Variable | Description | Default Value |
|---|---|---|
KAFKA_OAUTH_CLIENT_ID | OAuth client ID | kafka |
KAFKA_OAUTH_CLIENT_SECRET | OAuth client secret | kafka-secret |
KAFKA_OAUTH_TOKEN_ENDPOINT_URI | OAuth token endpoint | kafka.auth.localhost |
When using the kafka-auth profile, the security protocol is automatically set to SASL_PLAINTEXT and the SASL mechanism is set to OAUTHBEARER.
Inter-service topic coordination
When configuring Kafka topics in the FlowX ecosystem, ensure proper coordination between services:
- Topic name matching: Output topics from one service must match the expected input topics of another service.
- Pattern consistency: The pattern values must be consistent across services:
  - Process Engine listens to topics matching: ai.flowx.engine.receive.*
  - Integration Designer listens to topics matching: ai.flowx.integration.receive.*
- Communication flow:
  - Other services write to topics matching the Engine's pattern → Process Engine listens
  - Process Engine writes to topics matching the Integration Designer's pattern → Integration Designer listens
The exact pattern value isn’t critical, but it must be identical across all connected services. Some deployments require manually creating Kafka topics in advance rather than dynamically. In these cases, all topic names must be explicitly defined and coordinated.
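A small deployment check along these lines can catch pattern mismatches early. This sketch uses glob-style matching and hypothetical topic names:

```python
from fnmatch import fnmatch

# Patterns that must be identical across all connected services.
ENGINE_PATTERN = "ai.flowx.engine.receive.*"
INTEGRATION_PATTERN = "ai.flowx.integration.receive.*"

def routed_to_engine(topic: str) -> bool:
    """True when the Process Engine would pick up this topic."""
    return fnmatch(topic, ENGINE_PATTERN)

def routed_to_integration(topic: str) -> bool:
    """True when Integration Designer would pick up this topic."""
    return fnmatch(topic, INTEGRATION_PATTERN)

print(routed_to_engine("ai.flowx.engine.receive.integration-designer.v1"))  # True
print(routed_to_engine("ai.flowx.integration.receive.start.v1"))            # False
```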
Kafka topics best practices
Large message handling for workflow instances topic
The workflow instances topic requires special configuration to handle large messages. By default, Kafka has message size limitations that may prevent Integration Designer from processing large workflow payloads.
Recommended max.message.bytes value: 10485760 (10 MB)
Method: update using AKHQ (recommended)
1. Access AKHQ: Open the AKHQ web interface and log in if authentication is required.
2. Navigate to the topic: Go to the "Topics" section and find the topic ai.flowx.eventsgateway.receive.workflowinstances.v1.
3. Edit the configuration: Click on the topic name, go to the "Configuration" tab, locate or add max.message.bytes, set the value to 10485760, and save the changes.
CAS lib configuration
CAS lib is used to communicate with the authorization service through SpiceDB.
| Environment variable | Description | Default value |
|---|---|---|
FLOWX_SPICEDB_HOST | SpiceDB host | spicedb |
FLOWX_SPICEDB_PORT | SpiceDB port | 50051 |
FLOWX_SPICEDB_TOKEN | SpiceDB token | REPLACEME |
Configuring authentication and access roles
Integration Designer uses OAuth2 for secure access control. Set up OAuth2 configurations with these environment variables:
| Environment variable | Description | Default value |
|---|---|---|
SECURITY_TYPE | Security type | oauth2 |
SECURITY_OAUTH2_BASESERVERURL | Base URL for OAuth2 authorization server | |
SECURITY_OAUTH2_REALM | OAuth2 realm name | |
SECURITY_OAUTH2_CLIENT_CLIENTID | Client ID for token introspection | |
SECURITY_OAUTH2_CLIENT_CLIENTSECRET | Client secret for token introspection | |
SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTID | Service account client ID | flowx-integration-designer-sa |
SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTSECRET | Service account client secret | |
SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_MAINAUTHPROVIDER_TOKENURI | Provider token URI | ${SECURITY_OAUTH2_BASESERVERURL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token |
Access Management
Integration Designer service account
Configuring logging
To control the log levels for Integration Designer, set the following environment variables:
| Environment variable | Description | Default value |
|---|---|---|
LOGGING_LEVEL_ROOT | Root Spring Boot logs level | INFO |
LOGGING_LEVEL_APP | Application-level logs level | INFO |
Configuring admin ingress
The Integration Designer service uses the standard FlowX.AI ingress pattern. For complete setup instructions, including the full ingress template, CORS configuration, and troubleshooting, see the Ingress Configuration Guide. Service-specific values for Integration Designer:
- Ingress name: integration-designer-admin
- Service path: /integration(/|$)(.*)
- Service name: integration-designer
- Rewrite target: /$2
- Fx-Workspace-Id: Required
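The service path and rewrite target work together to strip the /integration prefix before requests reach the service. Sketched here with Python's re module; nginx's rewrite engine differs in details, but the capture-group behavior is the same idea:

```python
import re

# Ingress path regex: group 1 is the "/" (or end of path), group 2 is the rest.
SERVICE_PATH = r"/integration(/|$)(.*)"

def rewrite(path: str) -> str:
    """Apply the rewrite-target /$2: drop the /integration prefix."""
    return re.sub(SERVICE_PATH, r"/\2", path)

print(rewrite("/integration/api/workflows"))  # /api/workflows
print(rewrite("/integration"))                # /
```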
Complete Ingress Configuration
View the centralized ingress guide for the complete configuration template, annotations reference, and best practices.
Monitoring and maintenance
To monitor the performance and health of the Integration Designer, use tools like Prometheus or Grafana. Configure Prometheus metrics with:
| Environment variable | Description | Default value |
|---|---|---|
MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED | Enable Prometheus metrics export | false |
RBAC configuration
Integration Designer requires specific RBAC (Role-Based Access Control) permissions to access the Kubernetes ConfigMaps and Secrets that store necessary configurations and credentials. Set up these permissions by enabling RBAC and defining the required rules. This grants read access (get, list, watch) to ConfigMaps, Secrets, and Pods, which is essential for retrieving the application settings and credentials required by Integration Designer.
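The rule set described above amounts to a single read-only Role. Sketched below as a Python dict mirroring the manifest shape; the metadata name is illustrative, and your Helm chart's actual output may differ:

```python
# Illustrative shape of the RBAC Role granting read-only access to
# ConfigMaps, Secrets, and Pods in the service's namespace.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "integration-designer"},  # hypothetical name
    "rules": [
        {
            "apiGroups": [""],  # core API group
            "resources": ["configmaps", "secrets", "pods"],
            "verbs": ["get", "list", "watch"],  # read-only access
        }
    ],
}
print(role["rules"][0]["verbs"])  # ['get', 'list', 'watch']
```

Bind the Role to the service's ServiceAccount with a matching RoleBinding.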
Troubleshooting
Common issues
Integration Designer fails to start
Symptoms: Service crashes on startup or fails health checks.
Solutions:
- Verify MongoDB connection URIs for both the integration-designer and app-runtime databases
- Check that Kafka broker addresses are reachable and the security protocol is correct
- Ensure the Keycloak service account is properly configured and the client secret is valid
- Review logs at LOGGING_LEVEL_APP=DEBUG for detailed startup error messages
REST connector calls failing
Symptoms: Workflow executions fail at REST connector nodes with timeout or connection errors.
Solutions:
- Verify network connectivity between the Integration Designer pod and the target system
- Check SSL/TLS certificates if the target system uses HTTPS
- Review timeout settings and increase FLOWX_WEBCLIENT_BUFFERSIZE if responses are large
- Ensure firewall rules and network policies allow outbound traffic to the target host and port
Workflow execution errors
Symptoms: Workflows start but fail during execution with Kafka or data mapping errors.
Solutions:
- Verify that all required Kafka topics exist and are correctly named across services
- Check that input/output parameter mappings match the expected data model
- Ensure the KAFKA_TOPIC_ENGINEPATTERN and KAFKA_TOPIC_INTEGRATIONPATTERN values are consistent with the Process Engine configuration
- Review the events gateway topic (KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE) for message delivery issues
Data sources not connecting
Symptoms: Data sources configured in Integration Designer show connection errors or timeouts.
Solutions:
- Verify that the credentials for the target data source are correct and not expired
- Check network access between the Integration Designer pod and the data source endpoint
- Ensure firewall rules allow traffic on the required ports
- For S3-compatible storage issues, verify APPLICATION_FILESTORAGE_S3_SERVERURL and the access key configuration
Related resources
Integration Designer
Learn about the Integration Designer and how to build integration workflows
Building a Connector
Step-by-step guide for creating connectors in Integration Designer
Redis Configuration
Complete Redis setup including Sentinel and Cluster modes
Kafka Authentication
Configure Kafka security and authentication
IAM Configuration
Identity and access management setup
Events Gateway Setup
Configure the Events Gateway for inter-service communication

