Apache Kafka is a distributed event streaming platform built for high-throughput, fault-tolerant, real-time data pipelines. In an invoicing context, Kafka acts as the event bus: TallyArc publishes invoice lifecycle events (created, sent, viewed, paid, overdue, disputed) to Kafka topics, and any number of downstream systems consume and act on those events independently — without tight coupling between systems.
The event-driven invoicing architecture
A Kafka-based invoice event architecture looks like this:
- TallyArc publishes events to Kafka topics: invoices.created, invoices.paid, invoices.overdue
- Downstream consumers process these events independently:
  - ERP system updates AR balance on invoices.paid
  - Data warehouse ingests all events for analytics
  - CRM updates customer account status on invoices.overdue
  - Slack/Teams notification service alerts the finance team
  - ML pipeline updates payment risk scores
Each consumer operates independently, at its own pace, without creating dependencies between systems.
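The consumer pattern above can be sketched as a small dispatcher: each downstream system registers a handler for the event types it cares about and ignores the rest. The handler names here (e.g. update_ar_balance) are hypothetical illustrations, not part of TallyArc's API; in production the dispatch call would sit inside a normal Kafka consumer poll loop.

```python
def route_event(event: dict, handlers: dict) -> str:
    """Dispatch an invoice event to the handler registered for its type."""
    handler = handlers.get(event["event_type"])
    if handler is None:
        return "skipped"  # unknown event types are ignored, not errors
    handler(event)
    return "handled"

# Hypothetical ERP handler: adjust the AR balance when an invoice is paid.
def update_ar_balance(event: dict) -> None:
    print(f"AR -= {event['amount']} for client {event['client_id']}")

handlers = {"invoice.paid": update_ar_balance}

# In production this would run inside a consumer poll loop, e.g.:
# for msg in consumer:                       # consumer subscribed to the topic
#     route_event(json.loads(msg.value), handlers)
```

Because each system owns its own handler map and consumer group, adding a new downstream consumer never requires changes to TallyArc or to the other consumers.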
Kafka topic schema (invoice event)
{
"event_type": "invoice.paid",
"event_id": "evt_abc123",
"timestamp": "2025-01-15T14:32:00Z",
"invoice_id": "inv_xyz789",
"client_id": "cli_456",
"amount": 4750.00,
"currency": "GBP",
"payment_method": "stripe_card",
"invoice_total": 4750.00,
"days_to_pay": 12
}
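Consumers should validate incoming payloads before acting on them, since topics can carry events from multiple producer versions. A minimal validation sketch for the payload above; the required-field list is an assumption for illustration, not TallyArc's published contract:

```python
# Fields assumed to be present in every invoice event (illustrative only).
REQUIRED_FIELDS = {"event_type", "event_id", "timestamp", "invoice_id",
                   "client_id", "amount", "currency"}

def validate_event(event: dict) -> list:
    """Return a list of problems; an empty list means the event is usable."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - event.keys())]
    if "amount" in event and not isinstance(event["amount"], (int, float)):
        problems.append("amount must be numeric")
    return problems
```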
Connecting TallyArc to Kafka
- In TallyArc, go to Data → Kafka → Connect
- Enter your Kafka bootstrap servers (e.g. broker1:9092,broker2:9092)
- Configure SASL/SSL credentials if your cluster requires authentication
- Set the target topic prefix (e.g. tallyarc → events publish to tallyarc.invoices)
- Save — events begin streaming immediately
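The same settings apply on the consuming side. As a sketch, here is how they map onto a standard librdkafka-style client configuration (the key names are real librdkafka options used by confluent-kafka; the helper function and credential values are illustrative):

```python
def kafka_client_config(bootstrap_servers: str,
                        username: str = None,
                        password: str = None) -> dict:
    """Build a librdkafka-style config dict for a Kafka client."""
    config = {"bootstrap.servers": bootstrap_servers}
    if username and password:
        # SASL over TLS, the common setup for authenticated clusters
        config.update({
            "security.protocol": "SASL_SSL",
            "sasl.mechanisms": "PLAIN",
            "sasl.username": username,
            "sasl.password": password,
        })
    return config

# Unauthenticated cluster:
# kafka_client_config("broker1:9092,broker2:9092")
# Authenticated cluster:
# kafka_client_config("broker1:9092,broker2:9092", "svc-user", "secret")
```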
Confluent Cloud
For teams that don't want to manage Kafka infrastructure, Confluent Cloud is the easiest option — fully managed Kafka with a generous free tier. TallyArc connects to Confluent Cloud using the same settings; use the Confluent bootstrap server URL and API key/secret credentials.
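In Confluent Cloud the API key and secret slot into the SASL username and password fields, over SASL_SSL with the PLAIN mechanism. A config fragment as a sketch; the bootstrap URL and credentials below are placeholders for your cluster's values:

```python
# Placeholder values; substitute your Confluent Cloud cluster's settings.
confluent_config = {
    "bootstrap.servers": "<your-cluster>.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<confluent-api-key>",
    "sasl.password": "<confluent-api-secret>",
}
```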