This guide provides a comprehensive reference for configuring the FlowX Admin microservice using environment variables and configuration files.
Infrastructure Prerequisites
Before setting up the Admin microservice, ensure the following components are available and properly configured:
Database Instance: The Admin microservice connects to the same database as the FlowX.AI Engine.
MongoDB: For additional data management.
Redis: For caching and transient data storage.
Kafka: For audit logs, events, and messaging (if using FlowX.AI Audit functionality).
Core configuration
Server configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| SERVER_PORT | Port on which the Admin service will run | 8080 |
| SPRING_APPLICATION_NAME | Name of the application used for service discovery | admin |
| SPRING_JACKSON_SERIALIZATION_INDENTOUTPUT | Enable indented JSON output | true |
Database configuration
The Admin microservice connects to the same PostgreSQL or Oracle database as the FlowX.AI Engine for storing process definitions.
| Environment Variable | Description | Default Value |
|---|---|---|
| SPRING_DATASOURCE_URL | JDBC URL for database connection | jdbc:postgresql://localhost:5432/flowx |
| SPRING_DATASOURCE_USERNAME | Database username | postgres |
| SPRING_DATASOURCE_PASSWORD | Database password | [your-secure-password] |
Make sure the username, password, connection URL, and database name are configured correctly; otherwise, the service will fail with errors at startup.
The database schema is managed by a Liquibase script provided with the Engine.
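A minimal sketch of the datasource settings for a local PostgreSQL instance, based on the defaults in the table above (the password is a placeholder, not a real credential):

```bash
# Hypothetical local setup; adjust host, database name, and credentials to your environment.
export SPRING_DATASOURCE_URL="jdbc:postgresql://localhost:5432/flowx"
export SPRING_DATASOURCE_USERNAME="postgres"
export SPRING_DATASOURCE_PASSWORD="change-me"   # placeholder; use a secrets manager in production
```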
MongoDB configuration
The Admin microservice also connects to a MongoDB database instance for additional data management.
| Environment Variable | Description | Default Value |
|---|---|---|
| DB_USERNAME | MongoDB username | data-model |
| DB_PASSWORD | MongoDB password | [your-secure-password] |
| DB_NAME | MongoDB database name | data-model |
| SPRING_DATA_MONGODB_URI | MongoDB connection URI | mongodb://${DB_USERNAME}:${DB_PASSWORD}@localhost:27017/${DB_NAME}?retryWrites=true |
| SPRING_DATA_MONGODB_UUIDREPRESENTATION | UUID representation format | standard |
| SPRING_DATA_MONGODB_STORAGE | Storage type (Azure environments) | mongodb or cosmosdb |
| MONGOCK_CHANGELOGSSCANPACKAGE_0_ | Mongock changelog scan package | ai.flowx.admin.data.model.config.mongock |
| MONGOCK_TRANSACTIONENABLED | Enable transactions for Mongock operations | false |
Ensure that the MongoDB configuration is compatible with the same database requirements as the FlowX.AI Engine, especially if sharing database instances.
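A minimal sketch, assuming a standalone MongoDB instance reachable at localhost (the credentials are placeholders; the shell expands the variables inline here, while the platform configuration uses the same placeholder pattern):

```bash
# Placeholder credentials; the URI mirrors the default pattern from the table above.
export DB_USERNAME="data-model"
export DB_PASSWORD="change-me"
export DB_NAME="data-model"
export SPRING_DATA_MONGODB_URI="mongodb://${DB_USERNAME}:${DB_PASSWORD}@localhost:27017/${DB_NAME}?retryWrites=true"
export SPRING_DATA_MONGODB_UUIDREPRESENTATION="standard"
```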
Redis and caching configuration
The Admin service uses Redis for caching and transient data storage. Configure the Redis connection using the standard Redis environment variables.
Quick reference:
| Environment Variable | Description | Example Value | Status |
|---|---|---|---|
| SPRING_DATA_REDIS_HOST | Redis server hostname | localhost | Recommended |
| SPRING_DATA_REDIS_PORT | Redis server port | 6379 | Recommended |
| SPRING_DATA_REDIS_PASSWORD | Redis authentication password | yourpassword | Recommended |
| REDIS_TTL | Cache TTL in milliseconds | 5000000 | Optional |
The older SPRING_REDIS_* variables (e.g., SPRING_REDIS_HOST, SPRING_REDIS_PORT, SPRING_REDIS_PASSWORD) are deprecated and will be removed in a future FlowX version. Please use the corresponding SPRING_DATA_REDIS_* variables instead.
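For example, a deployment still using the deprecated variables could be migrated like this (a sketch; the password is a placeholder):

```bash
# Before (deprecated): SPRING_REDIS_HOST, SPRING_REDIS_PORT, SPRING_REDIS_PASSWORD
# After (current naming):
export SPRING_DATA_REDIS_HOST="localhost"
export SPRING_DATA_REDIS_PORT="6379"
export SPRING_DATA_REDIS_PASSWORD="change-me"   # placeholder
export REDIS_TTL="5000000"                      # optional cache TTL in milliseconds
```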
For complete Redis configuration including Sentinel mode, Cluster mode, and SSL/TLS setup, see the Redis Configuration guide.
Kafka configuration
The Admin microservice uses Kafka for sending audit logs, managing scheduled timer events, caching platform component versions, and publishing start timer event updates.
General Kafka settings
| Environment Variable | Description | Default Value |
|---|---|---|
| SPRING_KAFKA_BOOTSTRAPSERVERS | Kafka broker addresses | localhost:9092 |
| SPRING_KAFKA_SECURITY_PROTOCOL | Security protocol | PLAINTEXT |
| KAFKA_MESSAGE_MAX_BYTES | Maximum message size in bytes | 52428800 (50MB) |
Kafka producer configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| SPRING_KAFKA_PRODUCER_KEYSERIALIZER | Key serializer class | org.apache.kafka.common.serialization.StringSerializer |
| SPRING_KAFKA_PRODUCER_VALUESERIALIZER | Value serializer class | org.springframework.kafka.support.serializer.JsonSerializer |
| SPRING_KAFKA_PRODUCER_MAXREQUESTSIZE | Maximum request size | 52428800 (50MB) |
Kafka consumer configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_CONSUMER_GROUPID_GENERICPROCESSING | Generic processing consumer group | generic-processing-group |
| KAFKA_CONSUMER_THREADS_GENERICPROCESSING | Generic processing threads | 6 |
| KAFKA_CONSUMER_GROUPID_PROCESSSYNC | Process sync consumer group | process-sync-group |
| KAFKA_CONSUMER_THREADS_PROCESSSYNC | Process sync consumer threads | 6 |
| KAFKA_CONSUMER_GROUPID_BUSINESSRULESYNC | Business rule sync consumer group | business-rule-sync-group |
| KAFKA_CONSUMER_THREADS_BUSINESSRULESYNC | Business rule sync consumer threads | 6 |
| KAFKA_CONSUMER_GROUPID_REUSABLETEMPLATESYNC | Reusable template sync consumer group | reusable-template-sync-group |
| KAFKA_CONSUMER_THREADS_REUSABLETEMPLATESYNC | Reusable template sync consumer threads | 6 |
| KAFKA_CONSUMER_GROUPID_PROCESSCORRECTIONAFTERAPPOPERATION | Process correction after app operation consumer group | process-correction-after-app-operation-group |
| KAFKA_CONSUMER_THREADS_PROCESSCORRECTIONAFTERAPPOPERATION | Process correction after app operation consumer threads | 6 |
| KAFKA_AUTHEXCEPTIONRETRYINTERVAL | Auth exception retry interval (seconds) | 10 |
Topic naming configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| DOT | Reference to the primary separator | ${kafka.topic.naming.separator} |
| DASH | Reference to the secondary separator | ${kafka.topic.naming.separator2} |
| KAFKA_TOPIC_NAMING_PACKAGE | Base package name | ai${dot}flowx${dot} |
| KAFKA_TOPIC_NAMING_ENVIRONMENT | Environment name | |
| KAFKA_TOPIC_NAMING_VERSION | Topic version | ${dot}v1 |
| KAFKA_TOPIC_NAMING_SEPARATOR | Primary separator | . |
| KAFKA_TOPIC_NAMING_SEPARATOR2 | Secondary separator | - |
| KAFKA_TOPIC_NAMING_PREFIX | Combined prefix | ${kafka.topic.naming.package}${kafka.topic.naming.environment} |
| KAFKA_TOPIC_NAMING_SUFFIX | Combined suffix | ${kafka.topic.naming.version} |
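To see how these properties compose, the audit topic name resolves as follows with the defaults above. The environment override at the end is purely illustrative (qa. is not a shipped default, and it assumes the environment value carries its own trailing separator):

```bash
# Default composition:
#   prefix = ai${dot}flowx${dot} + ""          -> "ai.flowx."
#   suffix = ${dot}v1                          -> ".v1"
#   ${prefix}core.trigger.save.audit${suffix}  -> "ai.flowx.core.trigger.save.audit.v1"
#
# Hypothetical environment-scoped naming (illustrative only):
export KAFKA_TOPIC_NAMING_ENVIRONMENT="qa."
#   -> "ai.flowx.qa.core.trigger.save.audit.v1"
```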
Kafka topics configuration
Application topics
| Environment Variable | Description | Pattern | Default Value |
|---|---|---|---|
| KAFKA_TOPIC_APPLICATION_SYNCRESPONSE | Sync response topic | ${kafka.topic.naming.prefix}application-version.sync.out${kafka.topic.naming.suffix} | ai.flowx.application-version.sync.out.v1 |
| KAFKA_TOPIC_APPLICATION_PROCESSSYNC | Process sync topic | ${kafka.topic.naming.prefix}application-version.sync.process.in${kafka.topic.naming.suffix} | ai.flowx.application-version.sync.process.in.v1 |
| KAFKA_TOPIC_APPLICATION_BUSINESSRULESYNC | Business rule sync topic | ${kafka.topic.naming.prefix}application-version.sync.business-rule.in${kafka.topic.naming.suffix} | ai.flowx.application-version.sync.business-rule.in.v1 |
| KAFKA_TOPIC_APPLICATION_REUSABLETEMPLATESYNC | Reusable template sync topic | ${kafka.topic.naming.prefix}application-version.sync.reusable-template.in${kafka.topic.naming.suffix} | ai.flowx.application-version.sync.reusable-template.in.v1 |
| KAFKA_TOPIC_APPLICATION_RESOURCEUPDATEPROPAGATION | Resource update propagation topic | ${kafka.topic.naming.prefix}application-version.resource.update.propagation${kafka.topic.naming.suffix} | ai.flowx.application-version.resource.update.propagation.v1 |
| KAFKA_TOPIC_APPLICATION_CORRECTIONAFTERAPPOPERATION_IN_PROCESS | Correction after app operation → process request | ${kafka.topic.naming.prefix}application-version.correction-after-app-operation.process.request${kafka.topic.naming.suffix} | ai.flowx.application-version.correction-after-app-operation.process.request.v1 |
| KAFKA_TOPIC_APPLICATION_CORRECTIONAFTERAPPOPERATION_IN_BUSINESSRULE | Correction after app operation → business rule | ${kafka.topic.naming.prefix}application-version.correction-after-app-operation.business-rule.request${kafka.topic.naming.suffix} | ai.flowx.application-version.correction-after-app-operation.business-rule.request.v1 |
| KAFKA_TOPIC_APPLICATION_CORRECTIONAFTERAPPOPERATION_IN_REUSABLETEMPLATE | Correction after app operation → reusable template | ${kafka.topic.naming.prefix}application-version.correction-after-app-operation.reusable-template.request${kafka.topic.naming.suffix} | ai.flowx.application-version.correction-after-app-operation.reusable-template.request.v1 |
| KAFKA_TOPIC_APPLICATION_CORRECTIONAFTERAPPOPERATION_OUT | Correction after app operation response topic | ${kafka.topic.naming.prefix}application-version.correction-after-app-operation.response${kafka.topic.naming.suffix} | ai.flowx.application-version.correction-after-app-operation.response.v1 |
Audit topics
| Environment Variable | Description | Pattern | Default Value |
|---|---|---|---|
| KAFKA_TOPIC_AUDIT_OUT | Audit output topic | ${kafka.topic.naming.prefix}core${dot}trigger${dot}save${dot}audit${kafka.topic.naming.suffix} | ai.flowx.core.trigger.save.audit.v1 |
Platform topics
| Environment Variable | Description | Pattern | Default Value |
|---|---|---|---|
| KAFKA_TOPIC_PLATFORM_COMPONENTSVERSIONS_IN | Components versions caching topic | ${kafka.topic.naming.prefix}core${dot}trigger${dot}platform${dot}versions${dot}caching${kafka.topic.naming.suffix} | ai.flowx.core.trigger.platform.versions.caching.v1 |
Events gateway topics
| Environment Variable | Description | Pattern | Default Value |
|---|---|---|---|
| KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE | Commands message output topic | ${kafka.topic.naming.prefix}eventsgateway${dot}process${dot}commands${dot}message${kafka.topic.naming.suffix} | ai.flowx.eventsgateway.process.commands.message.v1 |
Build topics
| Environment Variable | Description | Pattern | Default Value |
|---|---|---|---|
| KAFKA_TOPIC_BUILD_RUNTIMEDATA | Build runtime data topic | ${kafka.topic.naming.prefix}build${dot}runtime-data${kafka.topic.naming.suffix} | ai.flowx.build.runtime-data.v1 |
| KAFKA_TOPIC_BUILD_STARTTIMEREVENTS_OUT_UPDATES | Start timer events updates topic | ${kafka.topic.naming.prefix}build${dot}start${dash}timer${dash}events${dot}updates${dot}in${kafka.topic.naming.suffix} | ai.flowx.build.start-timer-events.updates.in.v1 |
Resource topics
| Environment Variable | Description | Pattern | Default Value |
|---|---|---|---|
| KAFKA_TOPIC_RESOURCESUSAGES_REFRESH | Resources usages refresh topic | ${kafka.topic.naming.prefix}application${dash}version${dot}resources${dash}usages${dot}refresh${kafka.topic.naming.suffix} | ai.flowx.application-version.resources-usages.refresh.v1 |
OAuth authentication for Kafka
When using the kafka-auth profile, the following variables configure OAuth for Kafka:
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_OAUTH_CLIENTID | OAuth client ID | kafka |
| KAFKA_OAUTH_CLIENTSECRET | OAuth client secret | kafka-secret |
| KAFKA_OAUTH_TOKEN_ENDPOINT_URI | OAuth token endpoint URI | kafka.auth.localhost |
When using the kafka-auth profile, the security protocol will automatically be set to SASL_PLAINTEXT and the SASL mechanism will be set to OAUTHBEARER.
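A sketch of enabling the profile and supplying the OAuth settings (this assumes the profile is activated through the standard SPRING_PROFILES_ACTIVE mechanism; the secret and token endpoint are placeholders):

```bash
export SPRING_PROFILES_ACTIVE="kafka-auth"     # assumption: profile activated via standard Spring Boot mechanism
export KAFKA_OAUTH_CLIENTID="kafka"
export KAFKA_OAUTH_CLIENTSECRET="change-me"    # placeholder secret
export KAFKA_OAUTH_TOKEN_ENDPOINT_URI="https://idp.example.com/realms/kafka-authz/protocol/openid-connect/token"   # hypothetical endpoint
# With this profile, the security protocol (SASL_PLAINTEXT) and SASL mechanism (OAUTHBEARER) are set automatically.
```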
CAS lib configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| FLOWX_SPICEDB_HOST | SpiceDB server hostname | spicedb |
| FLOWX_SPICEDB_PORT | SpiceDB server port | 50051 |
| FLOWX_SPICEDB_TOKEN | SpiceDB authentication token | spicedb-token |
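A minimal sketch for pointing the service at a SpiceDB instance (the token is a placeholder; use a securely generated pre-shared key in real deployments):

```bash
export FLOWX_SPICEDB_HOST="spicedb"
export FLOWX_SPICEDB_PORT="50051"
export FLOWX_SPICEDB_TOKEN="change-me"   # placeholder token
```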
Logging configuration
The FlowX Admin microservice provides granular control over logging levels for different components:
| Environment Variable | Description | Default Value |
|---|---|---|
| LOGGING_LEVEL_ROOT | Log level for root Spring Boot microservice | INFO |
| LOGGING_LEVEL_APP | Log level for application-specific code | DEBUG |
Changing log levels at runtime
You can adjust log levels dynamically without restarting the service using Spring Boot Actuator endpoints. This is particularly useful for troubleshooting and debugging in production environments.
Example: Change log level for a specific package
```bash
curl 'http://localhost:8081/actuator/loggers/ai.flowx.admin' \
  -i -X POST \
  -H 'Content-Type: application/json' \
  -d '{"configuredLevel":"DEBUG"}'
```
Common logger packages:
ai.flowx.admin - Application-specific logs
org.springframework - Spring Framework logs
org.mongodb.driver - MongoDB driver logs
org.apache.kafka - Kafka client logs
Available log levels:
TRACE, DEBUG, INFO, WARN, ERROR, OFF
Using DEBUG or TRACE log levels in production may impact performance and generate large log volumes. Revert to INFO or WARN after troubleshooting is complete.
Localization settings
| Environment Variable | Description | Default Value |
|---|---|---|
| APPLICATION_DEFAULTLOCALE | Default locale for the application | en |
| APPLICATION_SUPPORTEDLOCALES | List of supported locales | en, ro |
Health monitoring
| Environment Variable | Description | Default Value |
|---|---|---|
| MANAGEMENT_HEALTH_DB_ENABLED | Enable database health checks | true |
| MANAGEMENT_HEALTH_KAFKA_ENABLED | Enable Kafka health checks | true |
| MANAGEMENT_SERVER_ADDRESS | Management server bind address | 0.0.0.0 |
| MANAGEMENT_SERVER_PORT | Management server port | 8081 |
| MANAGEMENT_SERVER_BASEPATH | Base path for management endpoints | /manage |
| MANAGEMENT_SECURITY_ENABLED | Enable security for management endpoints | false |
| MANAGEMENT_ENDPOINTS_WEB_BASEPATH | Base path for actuator endpoints | /actuator |
| MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE | Endpoints to expose | health,info,metrics,metric,prometheus |
| MANAGEMENT_ENDPOINT_HEALTH_PROBES_ENABLED | Enable Kubernetes probes | true |
| MANAGEMENT_ENDPOINT_HEALTH_SHOWDETAILS | Show health check details | always |
| MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED | Enable Prometheus metrics export | false |
| Environment Variable | Description | Default Value |
|---|---|---|
| FLOWX_PLATFORMHEALTH_NAMESPACE | Kubernetes namespace for health checks | flowx |
| FLOWX_PLATFORMHEALTH_MANAGEMENTBASEPATH | Base path for management endpoints | ${management.server.base-path} |
| FLOWX_PLATFORMHEALTH_ACTUATORBASEPATH | Base path for actuator endpoints | ${management.endpoints.web.base-path} |
| FLOWX_PLATFORMHEALTH_ANNOTATIONNAME | Kubernetes annotation name for health checks | flowx.ai/health |
| FLOWX_PLATFORMHEALTH_ANNOTATIONVALUE | Kubernetes annotation value for health checks | true |
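For example, to export Prometheus metrics from the management port, a deployment might set the following (a sketch based on the defaults above):

```bash
export MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED="true"
export MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE="health,info,metrics,prometheus"
# Prometheus then scrapes the management port (8081 by default); the exact path
# depends on the base-path settings in the tables above.
```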
Multi-edit and undo/redo configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| FLOWX_MULTIEDIT_TTL | Time-to-live for multi-edit sessions in seconds | 45 |
| FLOWX_UNDOREDO_TTL | Time-to-live for undo/redo actions in seconds | 86400 |
| FLOWX_UNDOREDO_CLEANUP_CRONEXPRESSION | Cron expression for undo/redo cleanup | 0 0 2 * * ? |
| FLOWX_UNDOREDO_CLEANUP_DAYS | Days to keep deleted undo/redo items | 2 |
Resources usage configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| FLOWX_LIB_RESOURCESUSAGES_ENABLED | Enable resources usage tracking | true |
| FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_ENABLED | Enable listener for resource usage refreshes | true |
| FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_COLLECTOR_THREADCOUNT | Thread count for resource usage collector | 5 |
| FLOWX_LIB_RESOURCESUSAGES_REFRESHLISTENER_COLLECTOR_MAXBATCHSIZE | Maximum batch size for resource usage collection | 1000 |
| FLOWX_LIB_RESOURCESUSAGES_KAFKA_CONSUMER_GROUPID_RESOURCESUSAGESREFRESH | Consumer group ID for resource usage refresh | adminResourcesUsagesRefreshGroup |
| FLOWX_LIB_RESOURCESUSAGES_KAFKA_CONSUMER_THREADS_RESOURCESUSAGESREFRESH | Number of consumer threads for resource usage refresh | 3 |
| FLOWX_LIB_RESOURCESUSAGES_KAFKA_TOPIC_RESOURCE_USAGES_REFRESH | Kafka topic for resource usage refresh | ${kafka.topic.resources-usages.refresh} |
| FLOWX_LIB_RESOURCESUSAGES_KAFKA_AUTHEXCEPTIONRETRYINTERVAL | Retry interval in seconds after auth exceptions | 3 |
Authentication and Authorization Configuration
The FlowX Admin microservice supports authentication and authorization through OpenID Connect (with Keycloak as the default provider) and allows detailed role-based access control.
OpenID Connect Configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| SECURITY_TYPE | Security type | oauth2 |
| SECURITY_OAUTH2CLIENT | Enable OAuth2 client | enabled |
| SECURITY_OAUTH2_BASESERVERURL | Base URL of the OAuth2 server | |
| SECURITY_OAUTH2_REALM | OAuth2 realm name | |
| SECURITY_OAUTH2_CLIENT_CLIENTID | OAuth2 client ID | |
| SECURITY_OAUTH2_CLIENT_CLIENTSECRET | OAuth2 client secret | |
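A sketch of the core OAuth2 settings for a Keycloak-style provider (the server URL, realm, client ID, and secret below are placeholders):

```bash
export SECURITY_TYPE="oauth2"
export SECURITY_OAUTH2CLIENT="enabled"
export SECURITY_OAUTH2_BASESERVERURL="https://idp.example.com/auth"   # hypothetical identity provider URL
export SECURITY_OAUTH2_REALM="flowx"                                  # hypothetical realm name
export SECURITY_OAUTH2_CLIENT_CLIENTID="flowx-platform"               # hypothetical client ID
export SECURITY_OAUTH2_CLIENT_CLIENTSECRET="change-me"                # placeholder secret
```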
Service Account Configuration
The following service account configuration is deprecated but still supported for backward compatibility.
| Environment Variable | Description | Default Value |
|---|---|---|
| SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTID | Service account client ID | flowx-${SPRING_APPLICATION_NAME}-sa |
| SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTSECRET | Service account client secret | client-secret |
Spring Security OAuth2 Client Configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| SPRING_SECURITY_OAUTH2_RESOURCESERVER_OPAQUETOKEN_INTROSPECTIONURI | Token introspection URI | ${SECURITY_OAUTH2_BASESERVERURL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token/introspect |
| SPRING_SECURITY_OAUTH2_RESOURCESERVER_OPAQUETOKEN_CLIENTID | Resource server client ID | ${SECURITY_OAUTH2_CLIENT_CLIENTID} |
| SPRING_SECURITY_OAUTH2_RESOURCESERVER_OPAQUETOKEN_CLIENTSECRET | Resource server client secret | ${SECURITY_OAUTH2_CLIENT_CLIENTSECRET} |
| SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_PROVIDER | Identity provider name | mainAuthProvider |
| SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTNAME | Client name | mainIdentity |
| SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTID | Client ID | ${SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTID} |
| SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET | Client secret | ${SECURITY_OAUTH2_SERVICEACCOUNT_ADMIN_CLIENTSECRET} |
| SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_AUTHORIZATIONGRANTTYPE | Authorization grant type | client_credentials |
| SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_AUTHENTICATION_METHOD | Client authentication method | client_secret_post |
| SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_MAINAUTHPROVIDER_TOKENURI | Provider token URI | ${SECURITY_OAUTH2_BASESERVERURL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token |
Identity Provider Configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| OPENID_PROVIDER | OpenID provider type | keycloak (possible values: keycloak, entra) |
| FLOWX_AUTHENTICATE_CLIENTID | Client ID for authentication service | flowx-platform-authenticate |
| FLOWX_PROCESS_DEFAULTROLES | Default roles for processes | FLOWX_ROLE |
Keycloak Configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| OPENID_KEYCLOAK_BASE_SERVER_URL | Keycloak server URL | ${SECURITY_OAUTH2_BASESERVERURL} |
| OPENID_KEYCLOAK_REALM | Keycloak realm | ${SECURITY_OAUTH2_REALM} |
| OPENID_KEYCLOAK_CLIENT_CLIENT_ID | Keycloak client ID | ${SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTID} |
| OPENID_KEYCLOAK_CLIENT_CLIENT_SECRET | Keycloak client secret | ${SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENTSECRET} |
Microsoft Entra ID configuration
| Environment Variable | Description | Default Value |
|---|---|---|
| OPENID_ENTRA_GRAPH_SCOPE | Microsoft Graph API scope | https://graph.microsoft.com/.default |
| OPENID_ENTRA_TENANT_ID | Microsoft Entra tenant ID | |
| OPENID_ENTRA_CLIENT_ID | Microsoft Entra client ID | ${SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTID} |
| OPENID_ENTRA_CLIENT_SECRET | Microsoft Entra client secret | ${SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET} |
| OPENID_ENTRA_PRINCIPAL_ID | Microsoft Entra principal ID | |
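Switching the provider to Microsoft Entra ID might look like the following sketch (the tenant, client, principal IDs, and secret are placeholders):

```bash
export OPENID_PROVIDER="entra"
export OPENID_ENTRA_TENANT_ID="00000000-0000-0000-0000-000000000000"      # placeholder tenant ID
export OPENID_ENTRA_CLIENT_ID="11111111-1111-1111-1111-111111111111"      # placeholder client ID
export OPENID_ENTRA_CLIENT_SECRET="change-me"                             # placeholder secret
export OPENID_ENTRA_PRINCIPAL_ID="22222222-2222-2222-2222-222222222222"   # placeholder principal ID
# OPENID_ENTRA_GRAPH_SCOPE keeps its default of https://graph.microsoft.com/.default
```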
Role-based access control is configured in the application YAML and grants specific permissions for platform management, user management, process management, integrations management, and configuration management.
Ingress configuration
The Admin service uses the standard FlowX.AI ingress pattern. For complete setup instructions, including the full ingress template, CORS configuration, and troubleshooting, see the Ingress Configuration Guide.
Service-specific values for Admin:
Ingress name: admin-admin
Service path: /((.*)) or /admin(/|$)(.*)
Service name: admin
Rewrite target: /$2
Fx-Workspace-Id: Required
Complete ingress configuration: view the centralized ingress guide for the complete configuration template, annotations reference, and best practices.
In production environments, never use the default service account credentials. Always configure secure, environment-specific credentials for authentication.
Sensitive information such as passwords and client secrets should be managed securely using environment variables or a secrets management solution in production environments.