## When to use
Use this pattern when you need to:

- Validate AI-extracted document data against a trusted source (TMS, ERP, core banking system, database)
- Produce auditable, structured comparison results with per-field match status
- Route exceptions based on severity and overall match rate
- Replace manual spot-checking of document extraction results
## Architecture
The workflow accepts two data sets as input, runs them through a TEXT_UNDERSTANDING comparison agent, and routes the result based on the exception report.

| Node | Type | Purpose |
|---|---|---|
| Compare | TEXT_UNDERSTANDING | Field-by-field comparison of extracted vs. expected values |
| Route | Condition | Branch based on overall match rate and exception severity |
| Auto-approve | End / next step | High-match documents proceed without intervention |
| Human review | End / next step | Medium-match documents queued for manual review |
| Reject | End / next step | Low-match documents flagged for rejection |
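The workflow shape, schematically:

```text
 [extracted data] + [expected data]
              |
              v
    Compare (TEXT_UNDERSTANDING)
              |
              v
       Route (Condition)
       |-- matchRate >= 95, no CRITICAL --> Auto-approve
       |-- 70 <= matchRate < 95, or any WARNING --> Human review
       '-- matchRate < 70, or any CRITICAL --> Reject
```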
## Implementation

### Input structure
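For an invoice, the two inputs might look like this (field names are illustrative, not a required schema):

```json
{
  "extracted": {
    "invoiceNumber": "INV-20431",
    "totalAmount": 1250.00,
    "currency": "EUR",
    "supplierName": "Acme GmbH"
  },
  "expected": {
    "invoiceNumber": "INV-20431",
    "totalAmount": 1250.00,
    "currency": "EUR",
    "supplierName": "ACME GmbH"
  }
}
```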
The comparison node receives two objects: the AI-extracted values and the system-of-record values.

### Comparison prompt
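A prompt along these lines works as a starting point (the wording is a sketch; adapt the tolerance rules and severity guidance to your domain):

```text
Compare the extracted values against the expected values field by field.
For each field, output a status of MATCH, MISMATCH, or MISSING.
For every MISMATCH or MISSING field, assign a severity: CRITICAL, WARNING, or INFO.
Treat numeric values that differ by more than 10% as CRITICAL.
Compute an overall matchRate as the percentage of fields with status MATCH.
Return only JSON conforming to the configured output schema. Do not add commentary.
```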
Configure the TEXT_UNDERSTANDING node with a prompt that instructs the LLM to perform a structured, field-by-field comparison.

### Output schema
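A JSON Schema along these lines enforces the report structure (property names such as `matchRate` and `exceptions` are illustrative; align them with whatever the downstream Condition node reads):

```json
{
  "type": "object",
  "required": ["matchRate", "fields", "exceptions"],
  "properties": {
    "matchRate": { "type": "number", "minimum": 0, "maximum": 100 },
    "fields": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["name", "status"],
        "properties": {
          "name": { "type": "string" },
          "status": { "enum": ["MATCH", "MISMATCH", "MISSING"] },
          "extracted": { "type": "string" },
          "expected": { "type": "string" }
        }
      }
    },
    "exceptions": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["field", "severity", "detail"],
        "properties": {
          "field": { "type": "string" },
          "severity": { "enum": ["CRITICAL", "WARNING", "INFO"] },
          "detail": { "type": "string" }
        }
      }
    }
  }
}
```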
Define the output schema on the TEXT_UNDERSTANDING node to enforce structured results.

### Severity routing
Configure a Condition node after the comparison to route based on the exception report.

| Route | Condition | Action |
|---|---|---|
| Auto-approve | matchRate >= 95 and no CRITICAL exceptions | Proceed to next processing step |
| Human review | matchRate >= 70 and matchRate < 95, or any WARNING exceptions | Queue for manual review |
| Reject | matchRate < 70 or any CRITICAL exception | Flag for rejection and notify |
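Expressed as code, the routing table is a priority-ordered check: Reject takes precedence over Human review, which takes precedence over Auto-approve. A Python sketch (the `matchRate` and `exceptions[].severity` field names are assumptions; align them with your output schema):

```python
def route(report: dict) -> str:
    """Map a comparison report to a route, mirroring the routing table."""
    severities = {e["severity"] for e in report.get("exceptions", [])}
    rate = report["matchRate"]

    # Any CRITICAL exception or a very low match rate is an outright reject.
    if "CRITICAL" in severities or rate < 70:
        return "Reject"
    # A high match rate with no warnings can be auto-approved.
    if rate >= 95 and "WARNING" not in severities:
        return "Auto-approve"
    # Everything in between goes to a human.
    return "Human review"
```

The precedence ordering resolves the overlap in the table: a document with `matchRate >= 95` but a WARNING exception still goes to human review.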
## Configuration reference
| Setting | Value | Description |
|---|---|---|
| Node type | TEXT_UNDERSTANDING | Comparison agent |
| Temperature | 0 | Use deterministic output for consistent comparisons |
| Output format | JSON schema | Structured exception report |
| Max tokens | 2000 | Sufficient for detailed field-by-field reports |
## Exception severity levels
| Severity | Description | Examples |
|---|---|---|
| CRITICAL | Data integrity issue that blocks processing | Missing required field, numeric value off by more than 10%, wrong document matched to shipment |
| WARNING | Discrepancy that requires human judgment | Name variation beyond fuzzy match threshold, date format ambiguity, unit conversion needed |
| INFO | Minor difference, typically acceptable | Abbreviation vs. full name, trailing whitespace, formatting differences |
## Variations

### Threshold-based routing
Instead of fixed severity categories, use configurable thresholds stored in a FlowX Database data source. This allows business users to adjust auto-approve and reject boundaries without modifying the workflow.

### Multi-document cross-reference
Extend the pattern to compare fields across multiple documents in the same shipment. For example, verify that the total weight on the bill of lading matches the sum of weights on the packing list, and that both align with the commercial invoice.

### Audit trail generation
Add a downstream node that persists the full exception report (including all MATCH results) to a database or document store. This provides a complete audit trail for compliance and quality monitoring over time.

## Related resources
- **AI patterns overview**: All available AI patterns and how to combine them
- **Fan-out extraction**: Classify and extract from multiple document types
- **Hybrid AI + business rules**: Combine AI with deterministic logic for auditable decisions
- **AI node types**: Reference for all AI node types, including TEXT_UNDERSTANDING

