
Infrastructure prerequisites

  • MongoDB
  • Kafka
  • OpenID Connect Settings

Dependencies

  • MongoDB database
  • Ability to connect to a Kafka instance used by the FlowX Engine
  • Scheduler service account - required for using the Start Timer event node - see here for more details.
The service ships with most of the required configuration properties pre-filled; however, certain custom environment variables still need to be set.
To work correctly, this service must connect to a MongoDB database that runs as a replica set.

Scheduler configuration

Scheduler

scheduler:
  thread-count: 30  # Configure the number of threads to be used for sending expired messages.
  callbacks-thread-count: 60 # Configure the number of threads for handling Kafka responses, whether the message was successfully sent or not
  cronExpression: "*/10 * * * * *" #every 10 seconds
  retry: # new retry mechanism
    max-attempts: 3
    seconds: 1
    thread-count: 3
    cronExpression: "*/10 * * * * *" #every 10 seconds
  cleanup:
    cronExpression: "*/25 * * * * *" #every 25 seconds
  • SCHEDULER_THREAD_COUNT: Used to configure the number of threads used for sending expired messages.
  • SCHEDULER_CALLBACKS_THREAD_COUNT: Used to configure the number of threads for handling Kafka responses, whether the message was successfully sent or not.
The scheduler.cleanup.cronExpression setting applies to both the scheduler and the timer event scheduler.
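
These cron expressions use Spring's six-field format, where the first field is seconds: second, minute, hour, day of month, month, day of week. For illustration (the cleanup value below is an example, not the default):

```yaml
# field order: second minute hour day-of-month month day-of-week
scheduler:
  cronExpression: "*/10 * * * * *"   # fire every 10 seconds
  cleanup:
    cronExpression: "0 */5 * * * *"  # illustrative: run cleanup every 5 minutes
```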

Retry mechanism

  • SCHEDULER_RETRY_THREAD_COUNT: Specify the number of threads to use for resending messages that need to be retried.
  • SCHEDULER_RETRY_MAX_ATTEMPTS: Sets the maximum number of retry attempts. For instance, a value of 3 means the system makes at most three attempts to resend a message.
  • SCHEDULER_RETRY_SECONDS: Defines the delay, in seconds, between retry attempts. For example, a value of 1 means the system retries the operation after a one-second delay.
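
Assuming a standard container deployment, the retry settings could be overridden with environment variables like the following (the values shown are examples only, not recommended defaults):

```shell
# Illustrative overrides for the retry mechanism; values are examples only
export SCHEDULER_RETRY_MAX_ATTEMPTS=5   # give up after five resend attempts
export SCHEDULER_RETRY_SECONDS=2        # wait two seconds between attempts
export SCHEDULER_RETRY_THREAD_COUNT=4   # four threads dedicated to resending
```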

Cleanup

  • SCHEDULER_CLEANUP_CRONEXPRESSION: Specifies how often, in seconds, events that have already been processed are cleaned up from the database.

Recovery mechanism

flowx:
  timer-calculator:
    delay-max-repetitions: 1000000
Suppose a "next execution" is set for 10:25 and the cycle step is 10 minutes. If the instance goes down for 2 hours, the next execution time should be 12:25, not 10:35. To calculate this, 10 minutes is added repeatedly to 10:25 until the current time is reached: 10:25 + 10 min + 10 min + ... up to 12:25. This ensures the next execution time is realigned correctly after the downtime.
  • FLOWX_TIMER_CALCULATOR_DELAY_MAX_REPETITIONS: Caps the number of cycle-step additions performed when calculating the next schedule. For example, if the cycle step is one second, the system is down for two weeks (1,209,600 seconds), and max repetitions is set to 1,000,000, the calculation hits the cap before reaching the current time. An exception is then thrown, the next schedule cannot be determined, and the entry remains locked until it is rescheduled. This represents the critical case of extended downtime combined with a very short cycle step.
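
The realignment above can be sketched in code. This is an illustrative model of the documented behaviour; the function name and signature are hypothetical, not the service's actual implementation:

```python
from datetime import datetime, timedelta

def next_execution(last_scheduled: datetime, cycle_step: timedelta,
                   now: datetime, max_repetitions: int = 1_000_000) -> datetime:
    """Advance last_scheduled by cycle_step until it passes `now`.

    Models the documented recovery behaviour: after downtime, the next
    execution is realigned to the original cycle rather than shifted.
    If more than max_repetitions additions are needed, an exception is
    raised and the entry would remain locked until rescheduled.
    """
    candidate = last_scheduled
    repetitions = 0
    while candidate <= now:
        candidate += cycle_step
        repetitions += 1
        if repetitions > max_repetitions:
            raise RuntimeError(
                "max repetitions exceeded; next schedule cannot be calculated")
    return candidate

# Down from 10:25 for roughly two hours with a 10-minute cycle step:
print(next_execution(datetime(2024, 1, 1, 10, 25),
                     timedelta(minutes=10),
                     datetime(2024, 1, 1, 12, 24)))  # 2024-01-01 12:25:00
```

With a very short cycle step and a long outage, the loop hits the cap instead of returning, mirroring the exception described for FLOWX_TIMER_CALCULATOR_DELAY_MAX_REPETITIONS above.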

Timer event scheduler

Configuration for the Timer Event scheduler, which manages timer events. Its structure mirrors the scheduler configuration above.
timer-event-scheduler:
  thread-count: 30
  callbacks-thread-count: 60
  cronExpression: "*/1 * * * * *" #every 1 second
  retry:
    max-attempts: 3
    seconds: 1
    thread-count: 3
    cronExpression: "*/5 * * * * *" #every 5 seconds

OpenID connect settings

| Environment variable | Description | Default value |
| --- | --- | --- |
| SECURITY_TYPE | Security type | oauth2 |
| SECURITY_OAUTH2_BASE_SERVER_URL | Base URL of the OpenID server | |
| SECURITY_OAUTH2_REALM | OAuth2 realm name | |
| SECURITY_PATHAUTHORIZATIONS_0_PATH | Security path pattern | /api/** |
| SECURITY_PATHAUTHORIZATIONS_0_ROLESALLOWED | Roles allowed for path access | ANY_AUTHENTICATED_USER |
| SECURITY_OAUTH2_CLIENT_CLIENT_ID | Client ID for token introspection | |
| SECURITY_OAUTH2_CLIENT_CLIENT_SECRET | Client secret for token introspection | |
| SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID | Service account client ID | flowx-scheduler-core-sa |
| SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET | Service account client secret | |
| SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_MAINAUTHPROVIDER_TOKEN_URI | Provider token URI | ${SECURITY_OAUTH2_BASE_SERVER_URL}/realms/${SECURITY_OAUTH2_REALM}/protocol/openid-connect/token |
The service account is essential for enabling the Start Timer event node. For more details about the necessary service account, see Scheduler service account.

Configuring datasource (MongoDB)

The MongoDB database is used to persist scheduled messages until they are sent back. The following configurations need to be set using environment variables:
  • SPRING_DATA_MONGODB_URI: The URI for the MongoDB database.
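
Because the service requires a replica set, the connection URI typically lists the member hosts and names the replica set. The value below is purely illustrative; host names, credentials, database name, and replica set name are placeholders:

```shell
# Illustrative value only; hosts, credentials, and replicaSet name are placeholders
export SPRING_DATA_MONGODB_URI="mongodb://user:password@mongo-0:27017,mongo-1:27017,mongo-2:27017/scheduler?replicaSet=rs0"
```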

Configuring Kafka

Core Kafka settings

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| KAFKA_BOOTSTRAP_SERVERS | Kafka broker addresses (fallback: SPRING_KAFKA_BOOTSTRAP_SERVERS) | localhost:9092 |
| KAFKA_SECURITY_PROTOCOL | Security protocol for Kafka connections (fallback: SPRING_KAFKA_SECURITY_PROTOCOL) | PLAINTEXT |
| SPRING_KAFKA_CONSUMER_GROUPID | Consumer group identifier | scheduler-consumer |
| KAFKA_MESSAGE_MAX_BYTES | Maximum message size (bytes) | 52428800 (50 MB) |
| KAFKA_AUTHEXCEPTIONRETRYINTERVAL | Retry interval after authorization exceptions (seconds) | 10 |

Consumer configuration

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| KAFKA_CONSUMER_THREADS | Number of Kafka consumer threads | 1 |
| KAFKA_CONSUMER_SCHEDULEDTIMEREVENTS_THREADS | Number of threads for starting Timer Events | 1 |
| KAFKA_CONSUMER_SCHEDULEDTIMEREVENTS_GROUPID | Consumer group for starting timer events | scheduled-timer-events |
| KAFKA_CONSUMER_STOPSCHEDULEDTIMEREVENTS_THREADS | Number of threads for stopping Timer Events | 1 |
| KAFKA_CONSUMER_STOPSCHEDULEDTIMEREVENTS_GROUPID | Consumer group for stopping timer events | stop-scheduled-timer-events |

OAuth authentication (when using SASL_PLAINTEXT)

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| KAFKA_OAUTH_CLIENT_ID | OAuth client ID | kafka |
| KAFKA_OAUTH_CLIENT_SECRET | OAuth client secret | kafka-secret |
| KAFKA_OAUTH_TOKEN_ENDPOINT_URI | OAuth token endpoint | kafka.auth.localhost |
When using the kafka-auth profile, the security protocol will automatically be set to SASL_PLAINTEXT and the SASL mechanism will be set to OAUTHBEARER.

Topic naming configuration

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| KAFKA_TOPIC_NAMING_PACKAGE | Package prefix for topic names | ai.flowx. |
| KAFKA_TOPIC_NAMING_ENVIRONMENT | Environment segment for topic names | |
| KAFKA_TOPIC_NAMING_VERSION | Version suffix for topic names | .v1 |
| KAFKA_TOPIC_NAMING_SEPARATOR | Primary separator for topic names | . |
| KAFKA_TOPIC_NAMING_SEPARATOR2 | Secondary separator for topic names | - |
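
As an illustration of how these pieces combine, the sketch below composes a topic name from the package prefix, environment segment, base name, and version suffix. The helper and its composition rule are assumptions for illustration, not the service's actual code:

```python
def compose_topic(base: str,
                  package: str = "ai.flowx.",
                  environment: str = "",
                  version: str = ".v1",
                  separator: str = ".") -> str:
    """Hypothetical illustration of how the KAFKA_TOPIC_NAMING_* pieces
    could combine into a full topic name; not the service's actual code."""
    env_segment = f"{environment}{separator}" if environment else ""
    return f"{package}{env_segment}{base}{version}"

# With the defaults and an environment segment of "core", the schedule-set
# base yields the documented default topic name:
print(compose_topic("trigger.set.schedule", environment="core"))
# ai.flowx.core.trigger.set.schedule.v1
```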

Kafka topics

Schedule topics

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| KAFKA_TOPIC_SCHEDULE_IN_SET | Receives scheduled message setting requests | ai.flowx.core.trigger.set.schedule.v1 |
| KAFKA_TOPIC_SCHEDULE_IN_STOP | Handles requests to terminate scheduled messages | ai.flowx.core.trigger.stop.schedule.v1 |

Timer events topics

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| KAFKA_TOPIC_SCHEDULEDTIMEREVENTS_IN_SET | Topic for setting timer events | ai.flowx.core.trigger.set.timer-event-schedule.v1 |
| KAFKA_TOPIC_SCHEDULEDTIMEREVENTS_IN_STOP | Topic for stopping timer events | ai.flowx.core.trigger.stop.timer-event-schedule.v1 |
Make sure the topics configured for this service don’t follow the engine pattern.

Configuring logging

The following environment variables can be set to control log levels:
  • LOGGING_LEVEL_ROOT: Root Spring Boot microservice logs (Default: INFO)
  • LOGGING_LEVEL_APP: App level logs (Default: INFO)

Troubleshooting

Common issues

Symptoms: Scheduled processes do not start at the expected times.
Solutions:
  1. Verify the cron expressions in SCHEDULER_CLEANUP_CRONEXPRESSION and the scheduler cronExpression settings are correct
  2. Check that Kafka topics (KAFKA_TOPIC_SCHEDULE_IN_SET, KAFKA_TOPIC_SCHEDULE_IN_STOP) are correctly configured and accessible
  3. Confirm the scheduler service is healthy by checking /actuator/health
  4. Review scheduler logs at DEBUG level for missed or skipped executions
Symptoms: Scheduled messages are sent multiple times for the same event.
Solutions:
  1. Check the replica count — running multiple replicas without proper leader election can cause duplicates
  2. Verify that the MongoDB replica set is healthy, as the scheduler relies on it for distributed locking
  3. Review SCHEDULER_THREAD_COUNT and SCHEDULER_CALLBACKS_THREAD_COUNT to ensure they are not excessively high for your workload
Symptoms: The scheduler service crashes or fails during startup.
Solutions:
  1. Verify MongoDB connectivity and ensure the database has replicas enabled (required for the scheduler to work correctly)
  2. Check that Kafka bootstrap servers are reachable (KAFKA_BOOTSTRAP_SERVERS)
  3. Confirm the service account credentials are valid — verify SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTID and SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET
  4. Review startup logs for specific connection error messages
Symptoms: Timer event nodes in processes do not trigger at the configured times.
Solutions:
  1. Verify the timer event configuration in the process definition (Start Timer, Intermediate Timer)
  2. Check that the timer event Kafka topics are correctly set (KAFKA_TOPIC_SCHEDULEDTIMEREVENTS_IN_SET, KAFKA_TOPIC_SCHEDULEDTIMEREVENTS_IN_STOP)
  3. Ensure the scheduler service account has the necessary permissions — see the Scheduler service account configuration
  4. Review the FLOWX_TIMER_CALCULATOR_DELAY_MAX_REPETITIONS setting if the system experienced recent downtime

Scheduled Processes

Configure and manage scheduled process executions

Timer Events

Timer event node types and configuration options

Redis Configuration

Complete Redis setup including Sentinel and Cluster modes
Last modified on March 25, 2026