
Event-Driven Integration

Version 1.0

28/06/2023

1. Introduction to Integration

In the complex world of information systems, the need for systems to communicate and collaborate with each other is fundamental. This intercommunication is no simple task, as it involves transferring data between systems that may be built on entirely different architectures, formats, or technologies. An integration team is confronted with a choice between various strategies, and choosing the optimal strategy for a particular integration challenge is key to achieving both immediate and long-term success. These strategies are referred to as integration patterns, each offering a particular solution that facilitates effective communication between different systems.

Integration patterns are a set of design practices, honed
through the collective experience of countless developers and architects, which
provide standardized, reusable solutions to recurring problems in the field of
system integration. They can be seen as the blueprints for solving common
integration challenges, thereby ensuring that developers do not have to
reinvent the wheel every time they encounter a similar problem.

A good integration pattern not only aids in the
communication between systems but also promotes the principles of loose
coupling, high cohesion, and scalability. It further ensures that the integrity
of data is maintained, system performance is optimal, and any potential risks
or failures are handled appropriately.

These patterns cover a wide variety of scenarios, from the
transformation of data formats to routing, message construction, and system
orchestration. Some well-known patterns include Message Channel, Message
Translator, Publish-Subscribe Channel, and many more, each catering to different
aspects of system integration.

In this era of digital transformation, and with the
proliferation of microservices and cloud-based solutions, the significance of
integration patterns has grown exponentially. They have become a key
consideration for organizations looking to enable seamless cooperation across an
inventory of existing and planned systems. Understanding and implementing these
patterns can significantly enhance the robustness, efficiency, and maintainability
of an organization’s IT assets over time.

Even though the landscape of technology keeps evolving,
these integration patterns, time-tested and refined, remain as relevant and
essential as ever. It’s not just about connecting systems; it’s about doing so
in a manner that is scalable, resilient, and effective. Integration patterns
deliver the very foundation upon which systems can evolve in a stable manner.

For the purpose of this discussion, we have grouped the various integration patterns into data-centric and API-centric patterns and discuss each separately. We then examine the pros and cons of a newer approach: Event-Driven Integration.

1.1 Data-Centric Integration

Data integration refers to the process of combining data
from various sources into a unified view, allowing organizations to analyze and
make informed decisions based on a comprehensive data set. There are several
approaches to data integration, each with its own advantages and
considerations. Here are some common data integration approaches:

·
ETL (Extract, Transform, Load): ETL is a
traditional approach widely used for data integration. It involves extracting
data from multiple sources, transforming it to meet the desired structure or
format, and loading it into a target system such as a data warehouse. ETL
typically requires predefined schemas and is suitable for batch processing of
large volumes of data.

·
ELT (Extract, Load, Transform): ELT
is a variant of ETL where data is first extracted and loaded into a target
system, such as a data lake, without significant transformation. The
transformation is then performed within the target system using tools like SQL
or big data processing frameworks. ELT leverages the processing power of the
target system and enables organizations to work with raw data before
transforming it;

·
Data Replication: In this approach, data is
replicated from source systems to a central repository or data warehouse in
near real-time or scheduled intervals. Replication ensures that the data is
always up to date and readily available for analysis. It can be achieved
through database replication techniques, log-based change data capture (CDC),
or messaging systems;

·
Data Mesh/Virtualization: Data virtualization
allows organizations to access and query data from multiple sources without
physically moving or integrating it. It provides a logical layer that abstracts
the complexities of underlying data sources, making it easier to access and
integrate data on the fly. Data virtualization reduces the need for data
movement and can provide real-time access to distributed data sources;

·
Data Fabric/Federations: Data federation involves
accessing and querying data resident in its original source system on-demand,
without physically consolidating or moving it. It creates a logical abstraction
layer that allows organizations to treat distributed data sources as a single
virtual database. Data federation is useful when data latency or privacy
concerns make it impractical to centralize all data.

These approaches can be combined or customized based on the
specific requirements of an organization’s data integration project. The choice
of approach depends on factors such as data volume, latency requirements,
system compatibility, data quality, and the desired level of integration
complexity.
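
To make the ETL pattern above concrete, the following is a minimal sketch in Python. The file name, column names, and target table are hypothetical, and SQLite stands in for the data warehouse; a real pipeline would add validation, incremental loads, and error handling.

    import csv
    import sqlite3

    def extract(path):
        """Extract: read raw rows from a CSV export of a source system."""
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        """Transform: normalize each row to the warehouse schema."""
        for row in rows:
            yield (row["order_id"], row["customer"].strip().upper(),
                   float(row["amount"]))

    def load(rows, conn):
        """Load: insert the transformed rows into the target table."""
        conn.execute("CREATE TABLE IF NOT EXISTS orders "
                     "(order_id TEXT PRIMARY KEY, customer TEXT, amount REAL)")
        conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
        conn.commit()

    with sqlite3.connect("warehouse.db") as conn:  # stand-in for a warehouse
        load(transform(extract("orders_export.csv")), conn)

An ELT variant would simply load the raw rows first and perform the transformation inside the target system, for example as SQL.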

Data-driven integration is well suited to use cases such as AI/ML training, where a large set of data is required to train the initial models and the data does not need to be kept up to date, or where training occurs relatively infrequently. It also suits scenarios such as analytics and reporting, where the data does not need to be current or where large amounts of historical data are analyzed and the work is not time critical. The following outlines a few sample use cases where data integration approaches may be appropriate integration patterns.

·
Healthcare: Integration of data from different
sources like electronic medical records (EMRs), lab results, and health monitoring
devices can help provide comprehensive patient care. For example, predictive
analytics can be used to anticipate health issues and suggest preventive
measures.

·
Retail: In the retail sector, integrating data from
sales, inventory, and customer behavior can help in better understanding demand
and supply. Predictive analytics can be used to forecast sales, optimize
inventory, and personalize marketing efforts to enhance customer experience.

·
Supply Chain Management: By integrating data from various sources such as production, inventory, transportation, and suppliers, companies can make data-driven decisions to improve efficiency and reduce costs.

·
Banking and Finance: Integration of data from various systems (like customer data, transaction data, and market data) enables risk assessment and personalized banking services. For example, machine learning algorithms can be used to analyze spending patterns and flag suspicious transactions.

·
Manufacturing: Integration of data from production,
sales, and supply chain can help optimize production planning, predict
maintenance of machines (Predictive maintenance), and improve product quality.

What all of these use cases have in common is the use of
large amounts of data for the purposes of reporting or building models of
behavior.

1.1.1 Disadvantages of Data-Driven Integration

Data-Driven Integration can be highly effective in certain
scenarios, but it does have potential disadvantages that need to be considered.
Here are some of them:

·
No support for real-time: Data Driven Integration
typically operates in a batch mode or on a scheduled basis, and therefore it
may not reflect the real-time status of data. This can be a problem for systems
that need up-to-the-minute or real-time data integration.

·
System Coupling: Data Driven Integration usually
requires more tightly coupled systems. If one system changes its data format or
schema, the other systems that are integrated based on this data may need to be
modified. This can lead to increased costs and effort in maintenance and
upgrades.

·
Performance Impact: Data synchronization or
transfer processes can be resource-intensive, especially for large volumes of
data. This can impact the performance of the involved systems, particularly if
data transfers are not carefully scheduled and managed.

·
Data Inconsistencies: If not properly managed, Data
Driven Integration can lead to data inconsistencies. For instance, if two
systems are updated almost simultaneously, it can be difficult to ensure that
both have the same, most up-to-date data.

·
Complexity in Error Handling: Errors during data
transfer or synchronization can be difficult to manage and rectify. They may
require complex transaction handling and rollback procedures to ensure data
integrity.

·
Security Risks: Exposing data for integration may
increase the potential risk of data breaches if proper security measures are
not in place. The more systems that have access to the data, the higher the
risk.

Overall, while Data-Driven Integration can be a valuable
approach for integrating systems, these potential disadvantages need to be considered
during the design and implementation stages of an integration project.

1.2 API-Centric Integration

Integration approaches refer to the methods and techniques
used to connect or combine different systems, applications, or components to
work together seamlessly. There are several common integration approaches, each
with its own advantages and considerations:

·
Point-to-Point Integration: Point-to-point
integration involves establishing direct connections between two systems or
applications. In this approach, each system is connected individually to every
other system it needs to communicate with. While it may be simple to set up
initially, this approach becomes complex and difficult to manage as the number
of systems increases.

·
Enterprise Service Bus (ESB): An Enterprise Service
Bus is a middleware infrastructure that provides a centralized, scalable, and
flexible platform for integrating various systems. It allows systems to
communicate through a common bus or messaging backbone. The ESB handles message
routing, transformation, and mediation between systems, enabling seamless
integration. ESBs often support multiple integration patterns, such as
publish/subscribe and request/reply.

·
Service-Oriented Architecture (SOA): Service-Oriented
Architecture is an architectural approach that focuses on designing systems as
a collection of services. Services are self-contained, modular units that
expose well-defined interfaces and can be invoked by other systems or applications.
SOA promotes loose coupling and reusability, making it easier to integrate
systems both internally within an organization and externally with partners.

·
Application Programming Interface (API) Integration:
API integration involves leveraging APIs, which are sets of rules and protocols
that allow different software applications to communicate with each other. APIs
expose specific functionalities or data of an application, enabling other
applications to access and interact with them. API integration is commonly used
in web services and cloud-based integrations.

These are some of the common integration approaches used in
modern systems and applications. Organizations choose the appropriate approach
based on their specific requirements, complexity, scalability, and the
technologies involved. It’s important to consider factors like security,
performance, maintainability, and future extensibility when deciding on an
integration approach. The following outlines some sample Application Integration
examples.

·
E-Commerce Platform and Logistics Provider: An
e-commerce platform can integrate with a logistics provider via an API,
allowing users to get updates on order status, shipping details, and tracking
information. This integration can facilitate smoother business operations and
improve customer experience.

·
Customer Relationship Management (CRM) and Email Marketing:
APIs can connect CRM systems like Salesforce with email marketing platforms
such as Mailchimp. This integration enables companies to leverage customer data
for more targeted and personalized email marketing campaigns.

·
Financial Institutions and FinTech: Banks can
integrate with FinTech solutions through APIs to offer services like up-to-date
account balance checks, fund transfers, and payment services. This provides
customers with a more comprehensive and seamless banking experience.

·
Social Media and Business Websites: Businesses can
integrate their websites with social media platforms using APIs. This allows
for features like social login, display of social feeds on the website, and
direct sharing of content to social media platforms.

·
Healthcare Systems Integration: Healthcare
providers can use APIs to integrate electronic health record (EHR) systems with
other healthcare applications. This could allow for sharing of patient data,
scheduling appointments, and e-prescription services.

·
HR Systems and Payroll: APIs can connect HR systems
to payroll and benefits providers, helping to streamline HR processes. This can
reduce the administrative burden on HR teams and increase data accuracy.

All of these are traditional API integration use cases that are often human-driven. We will look at Event-Driven use cases and Event-Driven integration patterns below.

1.3 Disadvantages of API-Driven Integration

API-driven integration involves using Application
Programming Interfaces (APIs) to enable communication and data exchange between
different software applications. While this approach has many advantages,
including flexibility, scalability, and the ability to support real-time data
exchange, it also has some potential disadvantages:

·
API Complexity: APIs can vary greatly in their
complexity. Some APIs are simple and straightforward to use, while others may
be quite complex, requiring a deep understanding of the specific API and its
associated software application.

·
System Coupling: API-driven integration requires tight coupling between systems. Changes to the API, data format, or schema require changes to the systems consuming the API.

·
API Versioning: APIs are often updated or changed
by their providers. These changes can break existing integrations if not
managed carefully. API consumers must be aware of the versioning and update
their systems accordingly to avoid disruptions.

·
Rate Limiting: Many API providers impose limits on how many API calls can be made within a certain timeframe (often to prevent abuse or to manage load on their systems). This can limit the performance and scalability of systems that rely on these APIs for integration. It also forces the consumer of the API to implement complex logic to manage the rate, or to use other technologies to manage these rate issues.

·
Dependence on External Systems: If you’re using
third-party APIs, you are dependent on those systems and their reliability. If
an API you depend on goes down, is deprecated, or changes its functionality,
your system may be adversely affected.

·
Security and Privacy Concerns: APIs can be
potential targets for hackers. Also, data shared through APIs could potentially
be exploited if not properly secured. Furthermore, if you’re dealing with
sensitive data, API integration needs to be designed with privacy regulations
and best practices in mind.

·
Integration Complexity: It can be challenging to
manage integrations with a large number of APIs, each of which may have
different conventions, data formats, error handling mechanisms, and other
characteristics.

Some of the complexities of API-driven integration can be alleviated with other technologies, such as ESBs and API gateways, but these products introduce yet more technology, management, and maintenance. So although they can at first appear to be a good solution, over time they often become a burden and need to be centrally managed.

Despite these potential disadvantages, API-driven
integration remains a powerful and often essential approach for enabling
communication between different software systems. Careful planning, robust
error handling, and thorough security practices can mitigate many of these
potential issues.

2. Event-Driven Integration

Event-driven integration is an approach to system
integration where different software components or systems/applications
communicate and interact with each other through events. An event represents a
state change within one system or application that is of interest to other systems
or applications. An event could be an IoT style event (such as a new reading
from a sensor), changes to a record in a database or application-level state
changes such as a new order. This integration style allows systems and applications to react and respond to events in a decoupled and asynchronous manner, enabling scalability, flexibility, and loose coupling. Event-Driven Integration consists of Event Producers (publishers) and Event Consumers (subscribers); there can be multiple subscribers to a given publisher.

Event-Driven Integration typically consists of the following actors and facilities:

·
Events: These are notifications that indicate a
change in state or a significant occurrence within a system. An event could be
anything from a user clicking a button, to a sensor reading exceeding a certain
threshold, to a new record being added to a database.

·
Event Producers/Publishers: These are the sources
of events. They generate or produce events but don’t dictate what should be
done in response to them.

·
Event Consumers/Subscribers: These are the components or services that listen/subscribe to and react to events. They act based on the events they receive. Typically, there can be many Consumers/Subscribers for a given Event Producer/Publisher, and the publisher of an event is generally unaware of its Consumers/Subscribers.

·
Event Channels or Brokers: In many event-driven systems,
especially those that operate at scale, there’s a need for an intermediary that
can efficiently route events from producers to consumers. These intermediaries,
often called event brokers or event channels, can provide additional
capabilities like event storage, filtering, and transformation.
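
To make these roles concrete, the following is a minimal in-process publish/subscribe sketch in Python. It is purely illustrative – the topic name and handlers are hypothetical, and a real event broker adds persistence, filtering, transformation, and network transport.

    from collections import defaultdict

    class Broker:
        """Event channel: routes events from producers to consumers."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self._subscribers[topic].append(handler)

        def publish(self, topic, event):
            # The producer does not know who consumes the event.
            for handler in self._subscribers[topic]:
                handler(event)

    broker = Broker()
    # Two independent subscribers react to the same event.
    broker.subscribe("order.created", lambda e: print("billing saw", e))
    broker.subscribe("order.created", lambda e: print("shipping saw", e))
    # The producer publishes a state change without dictating the response.
    broker.publish("order.created", {"order_id": 42, "amount": 99.5})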

Event-Driven Integration can apply to many domains and
industries. The following are some sample use cases where Event-Driven Integration patterns could be applied.

·
Ground-Based Air Defense Systems: A ground-based air defense system consists of multiple sensor platforms such as radar, drones, and satellite imagery. Events from these systems need to be filtered and aggregated to detect situations of interest, and the resulting events distributed to the relevant fire control systems in a coordinated, real-time manner. This reduces the time from detection to action and also reduces the amount of irrelevant information that an operator of a fire control system has to deal with.

·
E-commerce platforms: In an e-commerce platform,
actions like order placement, payment, and shipment are all significant events.
When a customer places an order, it can trigger an event that initiates payment
processing, inventory updates, and shipment processes. This approach allows for
a more seamless and efficient integration of different business processes.

·
Internet of Things (IoT) applications: IoT devices
generate an enormous amount of data, and event-driven integration plays a
crucial role in handling this data. For instance, a temperature sensor in an
IoT network could generate an event whenever the temperature crosses a certain
threshold. This event could trigger other actions, like adjusting a thermostat
or sending a notification to a user.

·
Healthcare systems: In healthcare, patient data is
constantly updated and used across different systems. For example, if a
patient’s lab results are updated, an event can be triggered that notifies the
patient’s healthcare provider. Event-driven integration makes it easier to keep
all involved parties updated and coordinate care.

·
Banking and finance systems: Many banking
transactions like money transfers, withdrawals, deposits, etc., can trigger
events. For instance, when a customer makes a transaction exceeding a certain
limit, an event could be triggered to inform the bank’s fraud detection system
for additional verification.

·
Supply chain and logistics: In supply chain
systems, events like a product moving from one location to another or changes
in inventory levels can trigger other processes. For example, when inventory
levels drop below a certain point, an event could be triggered to order more
products from suppliers.

·
Real-time analytics: Many businesses use real-time
analytics to gain insights and make quick decisions. Event-driven integration
is crucial in this case as it allows data from different sources to be
aggregated and analyzed as soon as it’s generated. For instance, a sudden
increase in website traffic can trigger an event to alert the marketing and IT
teams.

·
Social media platforms: On social media platforms,
user actions like liking a post, sharing content, or making a comment are
events. These events can trigger notifications, updates in feeds, and other
processes.

Event-Driven Integration patterns are perfectly suited to scenarios and use cases that exhibit the following behaviors:

·
Real-time responsiveness: the need for information to be up to date in real time, for instance fraud detection in banking or alerting systems in healthcare.

·
Loose coupling: Event-Driven Integration allows systems to be loosely coupled. Systems are not directly connected to one another, which enhances maintainability and reduces the risk associated with changes or failures in any one system. Situations where systems evolve at their own pace and multiple systems need to be integrated are therefore often a good fit for Event-Driven Integration.

2.1 Advantages of Event-Driven Integration

Event-driven integration offers several advantages in the
context of software development and system integration. Here are some key
advantages:

·
Loose coupling: Event-driven integration promotes
loose coupling between components or services within a system. Instead of
direct and tightly coupled interactions, components communicate indirectly
through events. This decoupling allows for greater flexibility, scalability,
and maintainability of the system over time.

·
Asynchronous communication: Events enable
asynchronous communication between components. When an event occurs, it can be
published and consumed by interested parties without requiring immediate or
synchronous responses. This asynchronous nature enables components to operate
independently and efficiently, leading to better performance and
responsiveness.

·
Scalability: Event-driven integration facilitates scalability
by distributing workloads across multiple components or services. As events are
published, they can be processed by multiple subscribers in parallel. This
distributed processing capability enables systems to handle high volumes of
events and efficiently scale horizontally as the demand increases.

·
Extensibility: Event-driven architecture allows for
easy extensibility. New components or services can be added to the system by
subscribing to relevant events without modifying existing components. This
modularity and extensibility make it easier to introduce new functionalities or
integrate with external systems.

·
Event-driven patterns: Event-driven integration
enables the implementation of powerful design patterns such as
publish/subscribe, event sourcing, and message queuing. These patterns provide
enhanced reliability, fault tolerance, and event-driven analytics, which can be
leveraged to build robust and resilient systems.

·
Integration of heterogeneous systems: Event-driven
integration is particularly well-suited for integrating heterogeneous systems
or services developed on different platforms or using different technologies.
By abstracting communication through events, systems can exchange data and
trigger actions regardless of their underlying implementation details.

·
Real-time processing: Event-driven architectures
are well-suited for real-time processing scenarios. Events can be processed as
they occur, allowing for immediate reactions or updates. This is especially
useful in domains such as real-time analytics, IoT applications, and
event-driven workflows.

Overall, event-driven integration provides a flexible,
scalable, and loosely coupled approach to system integration, enabling
efficient communication, extensibility, and integration of heterogeneous
systems.

2.2 Fit for Purpose!

Deciding which approach to use when building an integration strategy is very complex, and several factors need to be considered; moreover, an overall strategy may combine multiple integration approaches. Considerations include:

·
Does the data need to be up to date?: Often,
reporting systems do not require up-to-date information. The use of data for
the purposes of training AI/ML models does not need 100% current information,
for example. Therefore, in these scenarios, more traditional approaches to data
integration work well;

·
Existing synchronous API: Often, a given system
will only expose a synchronous API and, therefore, API level integration is the
only approach possible. Although Event-Driven integration may still be possible
– see below;

·
Low data change rates: When data is not changing rapidly,
or the rate of integration is low, API driven integration is a valid and
possible approach.

2.2.1 Combining Approaches

Event-Driven Integration can be combined with other
integration approaches to implement any integration strategy:

·
Data Integration and Event-Driven Integration: Data
integration patterns such as ETL can be combined with Event-Driven Integration.
For instance, an ETL approach may be used to load the initial data from one
system to another and then an Event-Driven Integration approach could be used
to keep the two sources synchronized;

·
API Integration and Event-Driven Integration: Often, applications may only provide an API for integration. Event-Driven Integration patterns can still be used to ingest the inbound events and then push those events into the system via its API. Care must be taken with this approach to handle load on the API, as APIs are often rate limited; this can usually be handled by buffering information before sending it on to the API, as in the sketch below.
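
The buffering idea can be sketched as follows in Python. The endpoint URL and rate limit are hypothetical, and the requests library stands in for any HTTP client: the subscriber enqueues inbound events, and a separate loop drains them to the API at a controlled pace.

    import time
    from queue import Queue

    import requests  # any HTTP client would do

    API_URL = "https://example.com/api/orders"  # hypothetical rate-limited API
    MAX_CALLS_PER_SECOND = 5                    # hypothetical provider limit

    buffer = Queue()

    def on_event(event):
        """Subscriber side: enqueue events instead of calling the API directly."""
        buffer.put(event)

    def drain_forever():
        """Push buffered events to the API without exceeding the rate limit."""
        interval = 1.0 / MAX_CALLS_PER_SECOND
        while True:
            event = buffer.get()  # blocks until an event is available
            requests.post(API_URL, json=event, timeout=10)
            time.sleep(interval)  # stay under the provider's limit

In a real subscriber, on_event would be wired to the event broker and drain_forever would run on its own thread.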

3. VANTIQ and Event-Driven Integration

VANTIQ is a platform designed for event-driven integration,
enabling organizations to develop real-time, event-driven applications and
systems. It provides a set of tools and capabilities that facilitate the
seamless integration of diverse data sources, services, and applications.

Here’s how VANTIQ supports event-driven integration:

·
Event Sources: VANTIQ supports a wide variety of off-the-shelf event sources and a flexible mechanism for integrating additional event sources.

·
Publisher/Subscriber Model: The VANTIQ platform and
any application built within VANTIQ makes use of an internal pub/sub mechanism.
Therefore, everything deployed within VANTIQ, whether a new use-case or an
integration pattern, is event-driven in nature.

·
Asynchronous Services: Services are a common
approach to application development. VANTIQ services support both synchronous
and asynchronous interfaces, allowing developers to build applications from a
series of asynchronous services.

·
Reliable Events: Both external event sources and
internal event processing can be reliable and use an ‘At Least Once’ delivery
model.

·
Distributed Capabilities: Publishers and Subscribers can be distributed across multiple tenants, VANTIQ instances, and geographies. Subscribers can be added dynamically and can move location without affecting the publishers or other subscribers.

·
Event Processing: Asynchronous Services process events using Visual and textual Event Handlers. Event Handlers, both Visual and textual, are reliable and support ‘At Least Once’ processing. Each separate step within a Visual Event Handler maintains a checkpoint, and if an event is redelivered, the checkpointing logic will skip steps that have already been processed.

·
Distributed Event Broker: The VANTIQ Event Broker uses a distributed/mesh-based architecture versus the classic centralized approach that most event brokers/MOMs use. This allows for greater scalability and reliability, as it does not rely on a complex centralized broker.

·
Catalog: A centralized catalog of events and services allows the developer to browse the available events and services and to publish or subscribe to the events and asynchronous services registered in the catalog. The catalog is only used to register events and to subscribe to them; it is not involved in event distribution – this is handled by the distributed event broker. The catalog also provides security mechanisms to control who can see the events and services that are registered.

·
Scalability and Deployment: VANTIQ is designed to handle high-volume event streams and scale horizontally as demand grows. It supports deployment options both on-premises and in the cloud, allowing you to choose the infrastructure that suits your needs.

·
Low Code: Integration in VANTIQ makes use of
various low code/visual modeling and development tools to greatly increase
developer productivity and scalability.

Overall, VANTIQ’s event-driven integration capabilities
empower organizations to build real-time, responsive integrations between
existing and new applications by seamlessly integrating disparate data sources
and systems. VANTIQ allows the creation of integration applications/services as well as new business applications; a new business application can integrate with existing applications in an event-driven way, and the new events it generates can in turn be integrated into new and existing applications. It enables integration architects to harness the power of event-driven architecture to drive agility, responsiveness, and innovation in their organization’s digital transformation efforts.

3.1 Events and Event Sources

Events are generally obtained as streaming data: data (aka events) that arrives continuously. There is no notion of an end to the stream; events will be delivered until the application or the event producer is terminated. Events may be delivered at
high rates, with thousands of events arriving each second, or low rates, with
the interval between events measured in minutes, hours or days. If the arrival
rate is high, it is advisable to use the most efficient integration
technologies to obtain the events. If the rates are low, less efficient
technologies are perfectly adequate.

There are a number of key integration technologies available
for obtaining events:

·
Messaging system integrations such as MQTT, AMQP, Kafka, and JMS, and hyperscaler messaging systems such as Azure Event Hubs and AWS Kinesis (see the MQTT sketch after this list)

·
IoT Platforms

·
VANTIQ REST/WS interface

·
Custom integrations of proprietary APIs

·
Vision Analytics and AI Platforms
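
As one example of obtaining events from a messaging source, the sketch below subscribes to an MQTT topic using the paho-mqtt client (1.x-style API; the broker host and topic are hypothetical):

    import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

    def on_message(client, userdata, message):
        # Each MQTT message becomes an inbound event for the integration.
        print(message.topic, message.payload.decode())

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883)  # hypothetical broker host
    client.subscribe("sensors/temperature")
    client.loop_forever()                       # deliver events as they arrive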

All of these sources are naturally event-driven and streaming, but many integrations and systems do not support events and streaming integrations. VANTIQ also provides mechanisms for consuming non-streaming data/event sources. The most efficient approach to integrating these sources is to convert them into streams of events at the periphery of the system:

·
REST/API sources

·
Databases – JDBC, Big Data Databases

·
Files – CSV files

·
Legacy Systems

These systems are not streaming sources and will not deliver events continuously; therefore, mechanisms are built into VANTIQ to automatically smooth the delivery of bulk/burst events into more manageable streams. Back pressure can also be applied to streaming sources when the event rate exceeds the system’s ability to process those events, further increasing the reliability of the system.
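
A bounded queue illustrates both ideas in miniature (a sketch only; VANTIQ provides these mechanisms internally). A burst of inbound events fills the buffer, the consumer drains it at its own pace, and once the buffer is full the producer blocks – which is the back pressure.

    import threading
    import time
    from queue import Queue

    events = Queue(maxsize=100)  # bounded buffer smooths bursts

    def producer():
        for i in range(1000):
            # put() blocks when the queue is full: back pressure on the source
            events.put({"seq": i})

    def consumer():
        while True:
            event = events.get()
            time.sleep(0.01)  # stand-in for per-event processing time
            events.task_done()

    threading.Thread(target=consumer, daemon=True).start()
    producer()
    events.join()  # wait until every buffered event has been processed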

3.2 Decentralized Event Broker and Catalog

VANTIQ’s Event Broker is designed to distribute events across
multiple systems/applications and environments.

The Event Broker uses a publish/subscribe model. Any
application, developed in any technology, may publish events to the broker. Any
application, developed in any technology, may subscribe to any event published
by the broker. In this way, business events produced by one application can be
consumed by other applications to create an event-driven system.

Events are made available by registering them in the catalog. An event consists of a topic name and a schema for the event’s payload. These events are registered in the broker’s event catalog. An application wishing to publish an event registers itself as a publisher. An application wishing to receive events published to the registered topic registers itself as a subscriber. At runtime, when an event is published, it is delivered to all subscribers.
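
As an illustration of this contract (not VANTIQ’s actual catalog API), an event can be modeled as a topic name plus a JSON Schema for its payload, validated before publishing. The sketch below uses the jsonschema library and a hypothetical topic.

    from jsonschema import validate  # pip install jsonschema

    # A registered event: a topic name plus a schema for the payload.
    TOPIC = "inventory.level.changed"  # hypothetical topic name
    SCHEMA = {
        "type": "object",
        "required": ["sku", "level"],
        "properties": {
            "sku": {"type": "string"},
            "level": {"type": "integer", "minimum": 0},
        },
    }

    def publish(broker, payload):
        """Reject malformed payloads before they reach any subscriber."""
        validate(instance=payload, schema=SCHEMA)  # raises ValidationError
        broker.publish(TOPIC, payload)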

The Broker supports a distributed pub/sub model supporting
publishing applications and subscribing applications in different geographic
regions.

VANTIQ applications can automatically publish events to the
Broker and subscribe to events from the Broker with a simple declaration, making
VANTIQ applications the easiest to combine into an event-driven system. Once an
event stream is selected for inclusion in an application, the VANTIQ runtime is
automatically provisioned to deliver PUBLISHED messages to their SUBSCRIBERS.

Since there is no centralized broker, it is much easier to
scale the system. Each publisher or subscriber can be scaled separately to
handle their expected workload rather than having to scale a single broker to
handle the combined workload.

3.3 Non-Streaming Sources

One of the challenges of Event-Driven Integration is that not all applications and systems will support a streaming/event-driven interface. Although it would be possible to update the applications to add a streaming interface, this is not a practical solution. Each Publisher or Subscriber within the VANTIQ Event Broker is a separate event-driven application. Publishers and Subscribers can act as gateways between applications and systems that do not support a streaming interface and the pub/sub model used in Event-Driven Integration. A Subscriber could receive events and send these events via an API to an application, or integrate directly with a database to insert the events. A Publisher could use polling, file uploads, etc. to receive non-streaming data from an external system and convert it into a stream of events that is published to the various Subscribers, as in the sketch below.
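
A minimal sketch of such a publisher-side gateway follows; the endpoint, poll interval, and record id field are hypothetical. It polls a REST source, detects records it has not seen before, and publishes each new record as an event.

    import time

    import requests  # any HTTP client would do

    SOURCE_URL = "https://example.com/api/orders"  # hypothetical non-streaming source

    def poll_to_stream(broker, interval_seconds=30):
        """Publisher-side gateway: turn a polled REST source into an event stream."""
        seen_ids = set()
        while True:
            for record in requests.get(SOURCE_URL, timeout=10).json():
                if record["id"] not in seen_ids:  # only new records become events
                    seen_ids.add(record["id"])
                    broker.publish("orders.new", record)
            time.sleep(interval_seconds)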

3.4 Reliability

VANTIQ supports an ‘At Least Once’ reliability model: any event marked as reliable will use a store-and-forward mechanism for event delivery. This reliability mechanism works across event sources that also support ‘At Least Once’ delivery. For event sources that do not support ‘At Least Once’ delivery, events become reliable once they reach the VANTIQ platform. Both the internal pub/sub and the Event Broker’s distributed pub/sub mechanisms support reliability. Within the Visual Event Handlers (event processors), reliability is supported through an in-memory checkpointing mechanism: when events are redelivered to a Visual Event Handler, the checkpointing logic ascertains which tasks have already been executed and starts processing from the next step. The checkpointing state is replicated across the cluster to support high availability of in-memory state.
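
The checkpointing idea can be illustrated as follows (a sketch only, not VANTIQ’s internal implementation): each step records a checkpoint keyed by event id, so a redelivered event resumes at the first unfinished step rather than repeating work.

    # Completed step count per event id (in-memory here; VANTIQ replicates
    # this state across the cluster for high availability).
    checkpoints = {}

    def handle(event_id, event, steps):
        """Run each step exactly once, even if the event is delivered twice."""
        done = checkpoints.setdefault(event_id, 0)
        for index, step in enumerate(steps):
            if index < done:
                continue  # already processed on a previous delivery
            step(event)
            checkpoints[event_id] = index + 1  # checkpoint after each step

    steps = [lambda e: print("validate", e),
             lambda e: print("enrich", e),
             lambda e: print("store", e)]
    handle("evt-1", {"value": 7}, steps)
    handle("evt-1", {"value": 7}, steps)  # redelivery: every step is skipped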

3.5 Event Processing

Integration is not just about connecting system A with system B. As events flow through the system, they need to be checked, enriched, transformed, and in some cases filtered. VANTIQ Event Handlers, and specifically the low-code Visual Event Handlers, allow developers to implement this business logic in a low-code approach. Visual Event Handlers can be used on both the publisher and subscriber side of any integration. Publishers could use a Visual Event Handler to perform checks and transformations: verifying that event data is valid, transforming events into a standard format, and performing functions such as property-level conversions. They can also enrich the event data with additional information. For instance, an AI platform that is detecting situations of interest in a camera stream does not know the location of the camera; this may be useful information for the subscribers, so this additional metadata can be added to the events before they are published. Subscribers may wish to send only certain events to some external system or database, so they may perform filtering or thresholding to reduce the load on the external systems.
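
A textual sketch of this kind of handler logic is shown below; the camera ids, locations, and threshold are hypothetical, and in VANTIQ the same steps would typically be modeled in a low-code Visual Event Handler.

    CAMERA_LOCATIONS = {"cam-17": "Gate B, Terminal 2"}  # hypothetical lookup

    def on_detection(event, publish):
        """Validate, filter, and enrich a detection event before publishing."""
        if "camera_id" not in event or "confidence" not in event:
            return  # drop malformed events
        if event["confidence"] < 0.8:  # threshold to cut noise
            return
        # Enrich: the AI platform does not know where the camera is located.
        event["location"] = CAMERA_LOCATIONS.get(event["camera_id"], "unknown")
        publish("situations.of.interest", event)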

3.5.1 Event Processing and Metadata

Certain events that are ingested may contain sensitive or incompatible data that other systems are unable to process. Frequently, data sets may be sufficiently large that moving this information around would cause a performance issue or swamp available bandwidth. One such example is satellite image data and the AI analytics performed on these images. The image itself does not need to be published immediately, only the metadata and situational awareness data associated with the image. The image location and access to the raw image can be managed by the publisher, and the publisher only ‘publishes’ the metadata about the image. Sensitive information could also be obfuscated before publishing the event data, whether for security or for privacy purposes such as GDPR.
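
A sketch of metadata-only publishing with obfuscation follows; the field names are hypothetical. The raw image stays with the publisher, and a sensitive identifier is hashed before the event leaves the security domain.

    import hashlib

    def publish_image_event(broker, image_record):
        """Publish only the metadata about an image, never the image itself."""
        event = {
            "image_uri": image_record["uri"],  # pointer managed by the publisher
            "captured_at": image_record["captured_at"],
            "detections": image_record["detections"],
            # Obfuscate a sensitive identifier (e.g. for GDPR) before publishing.
            "subject_id": hashlib.sha256(
                image_record["subject_id"].encode()).hexdigest(),
        }
        broker.publish("imagery.analyzed", event)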

3.6 Multiple Brokers/Security Domains

It is possible for a publisher or subscriber to be connected to multiple brokers. This allows events to flow between different security domains or environments. The individual publishers could perform obfuscation, etc. on the events depending on the domain to which they are published.

4. Conclusions

In conclusion, VANTIQ’s event-driven integration
capabilities provide organizations with a powerful platform for developing
real-time, event-driven applications and systems. By seamlessly integrating
diverse data sources, services, and applications, VANTIQ enables organizations
to build responsive and agile integrations that drive innovation and digital
transformation.

VANTIQ offers a range of features that support event-driven
integration. The platform supports a wide variety of event sources, both
off-the-shelf and customizable, allowing organizations to efficiently obtain
events from different systems and technologies. The publisher/subscriber model
used by VANTIQ ensures that all applications and use cases deployed within the
platform are inherently event-driven, promoting a seamless flow of events
throughout the system.

With VANTIQ, developers can leverage asynchronous services
and reliable event delivery mechanisms, such as the ‘At Least Once’ model, to
ensure the robustness and scalability of their integrations. The distributed
capabilities of the platform allow for flexible deployment across multiple
tenants, instances, and geographic regions, enabling organizations to scale
their event-driven systems as per their needs.

The VANTIQ Event Broker, built on a decentralized and
distributed mesh architecture, serves as the backbone of event distribution,
supporting a publish/subscribe model and facilitating the seamless flow of
events between applications. The centralized catalog provides a convenient way
to browse and manage available events and services, ensuring secure access and
control.

VANTIQ’s event processing capabilities, including visual and
textual event handlers, enable developers to implement business logic, data
validation, transformation, enrichment, and filtering within a low-code
environment. This empowers organizations to effectively manage and process
incoming events to meet their specific integration requirements.

Moreover, VANTIQ recognizes the challenge of integrating
non-streaming sources into an event-driven system. The platform offers
mechanisms to convert non-streaming data sources into manageable event streams,
allowing seamless integration with legacy systems, REST/API sources, databases,
and files.

Overall, VANTIQ’s event-driven integration capabilities
provide organizations with a comprehensive solution for building real-time,
responsive, and scalable integrations. By leveraging the power of event-driven
architecture, organizations can drive agility, innovation, and success in their
digital transformation endeavors.
