Infrastructure prerequisites
The FlowX Data Search service requires the following infrastructure components:

| Component | Purpose |
|---|---|
| Redis | Caching search results and configurations |
| Kafka | Message-based communication with the engine |
| Elasticsearch | Indexing and searching data |
Configuration
Kafka configuration
Configure Kafka communication using these environment variables and properties:

Basic Kafka settings
| Variable | Description | Default Value |
|---|---|---|
| KAFKA_BOOTSTRAP_SERVERS | Kafka broker addresses (fallback: SPRING_KAFKA_BOOTSTRAP_SERVERS) | localhost:9092 |
| KAFKA_SECURITY_PROTOCOL | Security protocol for Kafka (fallback: SPRING_KAFKA_SECURITY_PROTOCOL) | PLAINTEXT |
| KAFKA_CONSUMER_THREADS | Number of Kafka consumer threads | 1 |
| KAFKA_MESSAGE_MAX_BYTES | Maximum message size | 52428800 (50 MB) |
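The fallback behavior noted in the table above can be sketched as follows; `kafka_setting` is a hypothetical helper for illustration, not part of the service:

```python
import os

# Illustrative sketch of the documented lookup order: the KAFKA_* variable
# wins, then the SPRING_KAFKA_* fallback, then the built-in default.
def kafka_setting(primary: str, fallback: str, default: str) -> str:
    return os.environ.get(primary) or os.environ.get(fallback) or default

os.environ.pop("KAFKA_BOOTSTRAP_SERVERS", None)
os.environ["SPRING_KAFKA_BOOTSTRAP_SERVERS"] = "broker-1:9092,broker-2:9092"
print(kafka_setting("KAFKA_BOOTSTRAP_SERVERS",
                    "SPRING_KAFKA_BOOTSTRAP_SERVERS",
                    "localhost:9092"))
# broker-1:9092,broker-2:9092
```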
OAuth authentication (when using SASL_PLAINTEXT)
| Environment Variable | Description | Default Value |
|---|---|---|
| KAFKA_OAUTH_CLIENT_ID | OAuth client ID | kafka |
| KAFKA_OAUTH_CLIENT_SECRET | OAuth client secret | kafka-secret |
| KAFKA_OAUTH_TOKEN_ENDPOINT_URI | OAuth token endpoint | kafka.auth.localhost |
When using the kafka-auth profile, the security protocol is automatically set to SASL_PLAINTEXT and the SASL mechanism to OAUTHBEARER.

Topic naming configuration
The Data Search service uses a structured topic naming convention. For example, the default incoming topic is:

ai.flowx.core.trigger.search.data.v1
| Variable | Description | Default Value |
|---|---|---|
| KAFKA_TOPIC_NAMING_PACKAGE | Package prefix for topic names | ai.flowx. |
| KAFKA_TOPIC_NAMING_ENVIRONMENT | Environment segment for topic names | |
| KAFKA_TOPIC_NAMING_VERSION | Version suffix for topic names | .v1 |
| KAFKA_TOPIC_NAMING_SEPARATOR | Primary separator for topic naming | . |
| KAFKA_TOPIC_NAMING_SEPARATOR2 | Secondary separator | - |
| KAFKA_TOPIC_NAMING_ENGINERECEIVEPATTERN | Engine receive pattern | engine.receive. |
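How these settings combine into a full topic name can be sketched as follows — an illustrative reconstruction from the defaults above, not the service's actual code:

```python
def topic_name(base: str,
               package: str = "ai.flowx.",
               environment: str = "",
               version: str = ".v1") -> str:
    # The package prefix and the (optionally empty) environment segment come
    # first, the version suffix last; separators are embedded in the defaults.
    return f"{package}{environment}{base}{version}"

print(topic_name("core.trigger.search.data"))
# ai.flowx.core.trigger.search.data.v1
```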
Kafka topics
The service uses these specific topics:

| Topic | Default Value | Purpose |
|---|---|---|
| KAFKA_TOPIC_DATA_SEARCH_IN | ai.flowx.core.trigger.search.data.v1 | Incoming search requests |
| KAFKA_TOPIC_DATA_SEARCH_OUT | ai.flowx.engine.receive.core.search.data.results.v1 | Outgoing search results |
Elasticsearch configuration
Configure the Elasticsearch connection using the following environment variables:

| Variable | Description | Default Value | Example |
|---|---|---|---|
| SPRING_ELASTICSEARCH_REST_URIS | URL(s) of Elasticsearch nodes (no protocol) | - | elasticsearch:9200 |
| SPRING_ELASTICSEARCH_REST_PROTOCOL | Connection protocol | https | https or http |
| SPRING_ELASTICSEARCH_REST_DISABLESSL | Disable SSL verification | false | false |
| SPRING_ELASTICSEARCH_REST_USERNAME | Authentication username | - | elastic |
| SPRING_ELASTICSEARCH_REST_PASSWORD | Authentication password | - | your-password |
| SPRING_ELASTICSEARCH_INDEXSETTINGS_NAME | Index name for search data | process_instance | process_instance |
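Because `SPRING_ELASTICSEARCH_REST_URIS` carries host:port values without a protocol, the effective node URLs are formed by prepending `SPRING_ELASTICSEARCH_REST_PROTOCOL`. A minimal sketch (the helper name is hypothetical):

```python
def elasticsearch_urls(uris: str, protocol: str = "https") -> list:
    # URIS may hold a comma-separated list of host:port pairs with no scheme;
    # the configured protocol supplies the scheme for each node.
    return [f"{protocol}://{u.strip()}" for u in uris.split(",")]

print(elasticsearch_urls("elasticsearch:9200", "https"))
# ['https://elasticsearch:9200']
```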
Security configuration
Configure authentication and authorization with these variables:

| Variable | Description | Default Value |
|---|---|---|
| SECURITY_TYPE | Security type | oauth2 |
| SECURITY_OAUTH2_BASESERVERURL | Base URL for OAuth2 server | |
| SECURITY_OAUTH2_REALM | OAuth2 realm name | |
| SECURITY_OAUTH2_CLIENT_CLIENT_ID | Client ID for token introspection | |
| SECURITY_OAUTH2_CLIENT_CLIENT_SECRET | Client secret for token introspection | |
Logging configuration
Control the verbosity of logs with these variables:

| Variable | Description | Default Value |
|---|---|---|
| LOGGING_LEVEL_ROOT | Root Spring Boot log level | INFO |
| LOGGING_LEVEL_APP | Application-specific log level | INFO |
Elasticsearch index configuration
The Data Search service creates and manages Elasticsearch indices based on the configured index pattern. The default index name is process_instance.
Index pattern
The service derives the index pattern from the spring.elasticsearch.index-settings.name property and queries across all indices that match it.
Sample search query
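A hedged sketch of the kind of query the service generates (the process_instance index name comes from the defaults above; the field names and values here are hypothetical, not the service's real mapping):

```python
import json

# Hypothetical key/value search rendered as an Elasticsearch bool query,
# targeting all indices that match the configured index pattern.
query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"key": "customerId"}},
                {"match": {"value": "12345"}},
            ]
        }
    }
}

print(json.dumps(query, indent=2))
```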
The service builds each Elasticsearch query from the incoming search request and runs it against the indices matching the configured pattern.

Integration with Kibana
Kibana provides a powerful interface for visualizing and exploring data indexed by the Data Search service.

Using Kibana with FlowX Data Search
- Connect Kibana to the same Elasticsearch instance
- Create an index pattern matching your configured index name
- Use the Discover tab to explore indexed data
- Create visualizations and dashboards based on your data
Kibana is an open-source data visualization and exploration tool designed primarily for Elasticsearch. It serves as the visualization layer for the Elastic Stack, allowing users to interact with their data stored in Elasticsearch to perform various activities such as querying, analyzing, and visualizing data. For more information, visit the Kibana official documentation.
Best practices
- Security:
  - Store sensitive credentials in Kubernetes Secrets
  - Use TLS for Elasticsearch and Kafka communication
  - Implement network policies to restrict access
- Performance:
  - Scale the number of replicas based on query load
  - Adjust Kafka consumer threads based on message volume
  - Configure appropriate resource limits and requests
- Monitoring:
  - Set up monitoring for Elasticsearch, Kafka, and Redis
  - Create alerts for service availability and performance
  - Monitor disk space for Elasticsearch data nodes
Troubleshooting
Elasticsearch connection failures
Symptoms: Service fails to start or search requests return errors.

Solutions:
- Verify Elasticsearch is running and accessible at the configured URL
- Check that credentials in SPRING_ELASTICSEARCH_REST_USERNAME and SPRING_ELASTICSEARCH_REST_PASSWORD are correct
- Ensure SSL settings match your environment — set SPRING_ELASTICSEARCH_REST_DISABLESSL to true if not using TLS
- Confirm the protocol in SPRING_ELASTICSEARCH_REST_PROTOCOL matches your Elasticsearch setup (https or http)
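The reachability checks above can be scripted; here is a minimal Python probe (host and port are assumptions for your environment):

```python
import socket

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# True when Elasticsearch is reachable from this pod's network.
print(reachable("elasticsearch", 9200))
```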
Database and Redis issues
Symptoms: Cache misses, stale search results, or Redis connection errors.

Solutions:
- Verify Redis is running and accessible
- Check Redis authentication credentials
- Ensure network policies allow traffic between the Data Search pod and Redis
- Monitor Redis memory usage — eviction policies may cause cache misses under high load
Kafka sync failures
Symptoms: Search requests are not received or results are not delivered back to the engine.

Solutions:
- Verify Kafka topics exist — check that KAFKA_TOPIC_DATA_SEARCH_IN and KAFKA_TOPIC_DATA_SEARCH_OUT topics are created
- Check Kafka permissions for the consumer group
- Ensure bootstrap servers in KAFKA_BOOTSTRAP_SERVERS are correctly specified
- If using OAuth, verify the token endpoint is accessible and credentials are valid
Indexing performance
Symptoms: Slow search responses, high latency, or timeouts.

Solutions:
- Increase KAFKA_CONSUMER_THREADS to process more messages in parallel
- Verify the Elasticsearch cluster health — yellow or red status impacts performance
- Check index size and shard distribution in Elasticsearch
- Monitor Elasticsearch JVM heap usage and adjust resource limits if needed
- Review the KAFKA_MESSAGE_MAX_BYTES setting if large payloads are being processed
Related resources
Elasticsearch Indexing
Configure Elasticsearch indexing for process data
Redis Configuration
Complete Redis setup including Sentinel and Cluster modes
Kafka Authentication
Configure Kafka security and authentication
IAM Configuration
Identity and access management setup

