
How Kafka Stream Connectors Power ServiceNow & Microsoft Interoperability

  • Writer: Oliver Nowak
  • Jun 18
  • 3 min read

As enterprise data becomes increasingly distributed across cloud platforms, systems of action like ServiceNow need real-time access to external data - not after it’s batched, copied, and transformed, but as it happens.


That’s where Kafka stream connectors step in.

In the visual above, I've mapped out a real-world scenario where IoT sensor data from Microsoft Fabric is streamed into ServiceNow, enriched with Workflow Data Fabric, and actioned by AI Agents - all in near real-time.


To appreciate what’s happening behind the scenes, let’s walk through the technical architecture, integration flow, and the role Kafka plays in enabling this level of interoperability.


The Role of Kafka in Workflow Data Fabric

Apache Kafka is more than just a message bus - it’s a distributed streaming platform that allows systems to publish and subscribe to streams of records in real-time.

 

In the Workflow Data Fabric model, Kafka acts as the bridge between data at rest (in Microsoft Fabric) and workflows in motion (inside ServiceNow). Here’s how:

 

Stream Connector Architecture: How does it actually work?

A Kafka Stream Connector is made up of a few key components:

1. Source Connector

Think of this as a listener. It connects to an external data system (e.g., Microsoft Fabric OneLake). It detects new data, structures it as Kafka records, and publishes those into a Kafka topic. In this case, anomaly alerts from IoT data are pushed from Microsoft Fabric to Kafka via a source connector.
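
To make this concrete, here's a minimal sketch of how such a source connector could be registered with a Kafka Connect worker over its REST API. The connector class, topic, and OneLake settings below are placeholders of my own - this illustrates the registration pattern, not an actual Microsoft Fabric connector.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative sketch: registering a source connector with the Kafka Connect
// REST API. The connector class and connection settings are hypothetical
// placeholders, not a real Microsoft Fabric / OneLake connector.
public class RegisterSourceConnector {
    public static void main(String[] args) throws Exception {
        String connectorConfig = """
            {
              "name": "fabric-anomaly-source",
              "config": {
                "connector.class": "com.example.fabric.OneLakeSourceConnector",
                "tasks.max": "1",
                "topic": "iot.anomaly.alerts",
                "onelake.endpoint": "https://onelake.dfs.fabric.microsoft.com/<workspace>/<lakehouse>",
                "poll.interval.ms": "1000"
              }
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors")) // Connect worker REST endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connectorConfig))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```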


2. Kafka Broker / Cluster

This is the streaming backbone. It receives the records from the source connector and stores them in a durable, ordered, append-only log (a topic). Kafka provides high availability, partitioned scalability, and publish-subscribe semantics, so multiple consumers can process the same data stream independently.
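
As a quick illustration of that backbone, the sketch below creates a partitioned, replicated topic with Kafka's AdminClient - the broker address, topic name, and sizing are just example values.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Minimal sketch: create a partitioned, replicated topic so the brokers can
// store alerts durably and let several consumer groups read them independently.
public class CreateAlertTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("iot.anomaly.alerts", 6, (short) 3); // 6 partitions, replication factor 3
            admin.createTopics(List.of(topic)).all().get(); // blocks until the brokers confirm
        }
    }
}
```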


3. Sink Connector

On the other end, a sink connector pulls Kafka messages and sends them to a target system. In our diagram, ServiceNow acts as the sink - consuming messages via its Hermes Kafka Cluster, which integrates with flow triggers, consumers, and import APIs.
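
Conceptually, a sink connector is little more than a well-behaved consumer that delivers each record to the target system. The sketch below shows that idea using the plain Kafka consumer API and a placeholder HTTP endpoint - it is not ServiceNow's Hermes integration, just the general pattern a sink implements.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Conceptual sketch of what a sink connector does: consume records from a topic
// and deliver them to a target system. The target URL is a hypothetical
// stand-in, not ServiceNow's actual ingestion endpoint.
public class AlertSinkForwarder {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "alert-sink");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        HttpClient http = HttpClient.newHttpClient();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("iot.anomaly.alerts"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Forward each alert to the target system's import endpoint (placeholder URL).
                    HttpRequest req = HttpRequest.newBuilder()
                            .uri(URI.create("https://target.example.com/api/import/alerts"))
                            .header("Content-Type", "application/json")
                            .POST(HttpRequest.BodyPublishers.ofString(record.value()))
                            .build();
                    http.send(req, HttpResponse.BodyHandlers.ofString());
                }
            }
        }
    }
}
```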


4. Consumers Inside ServiceNow

Once Kafka delivers the alert to the Hermes Kafka Cluster, ServiceNow uses one of several consumption patterns:

  • Script Consumer: Custom JavaScript logic to interpret and process incoming messages.

  • RTE Consumer: For real-time ingestion and event creation.

  • Transform Map Consumer: To load structured data into tables via import sets.


Each of these consumers is subscribed to a Kafka topic using the Subscriptions API, which manages topic registration and stream routing.
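
As a rough analogy (using the plain Kafka client rather than the ServiceNow Subscriptions API), a single subscription can span several topics and route each record to a different handler - the same routing idea in miniature:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Generic analogy only: one subscription covering several topics, with each
// record routed to a different handler depending on the topic it arrived on.
public class TopicRouter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "workflow-consumers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("iot.anomaly.alerts", "iot.bulk.readings"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    switch (record.topic()) {
                        case "iot.anomaly.alerts" -> handleAlert(record.value());   // e.g. event creation
                        case "iot.bulk.readings"  -> loadIntoTable(record.value()); // e.g. import-set style load
                    }
                }
            }
        }
    }

    static void handleAlert(String json)   { System.out.println("alert: " + json); }
    static void loadIntoTable(String json) { System.out.println("load: " + json); }
}
```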

 

Real-Time Data Flow: What Happens in the Use Case?

Let’s walk through the technical lifecycle in the context of our original use case:

 

Step 1: Sensor Data Ingest

IoT devices emit telemetry to Microsoft Fabric, where the data lands in OneLake. Fabric then performs anomaly detection on this stream using Synapse Real-Time Analytics.


Step 2: Event Streamed to Kafka

Once an anomaly is flagged, an event record is pushed to Kafka using a stream connector.


Kafka brokers store the alert as a message in a topic.
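
In plain Kafka terms, that push is a produce call: the alert is serialized (here as JSON) and written to a topic, keyed so that events from the same sensor stay in order. The broker address, topic name, and payload below are illustrative assumptions, not the connector's actual output.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Minimal sketch of the publish step: an anomaly event serialized as JSON and
// written to a topic. Topic, broker, and payload fields are example values.
public class PublishAnomalyAlert {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for the full replica set to persist the alert

        String alertJson = """
            {"sensorId":"pump-042","metric":"vibration","value":9.7,"severity":"high"}
            """;

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by sensor ID so all alerts for one device land on the same partition, in order.
            producer.send(new ProducerRecord<>("iot.anomaly.alerts", "pump-042", alertJson));
            producer.flush();
        }
    }
}
```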


Step 3: Message Consumed by ServiceNow

ServiceNow’s Hermes Kafka Cluster subscribes to this topic.


Using a configured RTE Consumer or Transform Map, ServiceNow:

  • Creates an OT incident.

  • Triggers automated workflows.

  • Assigns the issue to the right team.


This happens without polling, without duplication, and in near real-time.


Step 4: AI Agent Interprets the Data

An AI Agent in ServiceNow reads contextual information related to the incident.


This data is not imported - it’s accessed in-place via Workflow Data Fabric using a Zero Copy connector, allowing the agent to:

  • Retrieve live sensor metadata or history.

  • Understand what “normal” looks like.

  • Decide on next-best actions.


Step 5: Data Remains in Fabric

Thanks to Workflow Data Fabric, ServiceNow doesn’t need to replicate data into its own tables. Instead, it uses stream-aware data access to pull directly from Microsoft Fabric in real-time. This dramatically reduces data duplication, latency, and overhead.


Step 6: Maintenance Record Published Back via Kafka

Once the issue is resolved, ServiceNow pushes a maintenance record back into Microsoft Fabric using the Kafka Publisher - either via:

  • A Publisher Flow Action Step, or

  • A Scriptable Publisher.


This outbound message is sent to a Kafka topic configured to flow into Microsoft Fabric via a Kafka Sink Connector, completing the bi-directional stream.


Why Not Just Use APIs?

You might ask: “Why Kafka instead of REST APIs?”


Because REST APIs are built for request/response interactions, not for continuous, real-time data sync.


Kafka offers:

  • Persistent delivery (durable logs)

  • Replayability (read the same message multiple times)

  • Decoupling (publishers don’t need to know about consumers)

  • Scalability (horizontal partitioning and replication)


This architecture is ideal for event-driven automation, especially when dealing with high-volume IoT data, multiple consumers, or cross-platform orchestration.
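
To make the replayability point concrete, here's a small sketch that rewinds a consumer to the beginning of the alert topic and re-reads everything the durable log still holds - something a request/response API simply can't offer. Topic and broker details are example values.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Small sketch of replayability: rewind to the start of the log and re-read
// every alert that has already been delivered to other consumers.
public class ReplayAlerts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "audit-replay");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("iot.anomaly.alerts"));
            consumer.poll(Duration.ofSeconds(1));            // join the group and receive partition assignments
            consumer.seekToBeginning(consumer.assignment()); // rewind: the durable log is still there
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("offset %d: %s%n", record.offset(), record.value());
            }
        }
    }
}
```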


To close out, if you’re building connected workflows that span cloud platforms, the combination of Kafka stream connectors + Workflow Data Fabric + AI agents gives you:

  • Real-time data awareness

  • Event-driven automation

  • Smart decisions based on live data

  • A scalable, low-latency integration model


The end result? More responsive operations, fewer data silos, and AI that is able to act in real-time using the latest business context.
