Event Processing Platforms vs Engines

Opher Etzion just made an interesting classification of the CEP tools market in his observations on the Bloor Research comments on CEP and Big Data, part of an increasing amount of coverage on CEP. To wit:

  • Event Processing Platform: software that enables the creation of an event processing network, handles the routing of events among agents, and takes care of management and other common infrastructure issues.
  • Event Processing Engine: software that enables the creation of the actual function – in EPN terms, the implementing agents.

In the CEP Market Analysis we don’t try to distinguish between these – probably because it would be contentious. For example, to some folks an “event processing network” is managed as a single process – possibly multi-threaded, but bounded to a single machine instance. To others (like TIBCO) the network is a message or event distribution mechanism for breaking the constraints of a single process or system (e.g. its performance, scalability, and fault tolerance constraints). Furthermore, “event processing agents” might be viewed either as “event processing operations” – like a single pattern detection query or pattern matching rule, arranged in some kind of activity or business process diagram – or as more autonomous processing agents that can each handle a number of operations and cooperate declaratively towards some solution.

If one views an Event Processing Platform as one that handles routing across multiple processes and distributed systems, then the list of potential candidates is reduced somewhat [*1]. Of course, any CEP engine can be used across multiple systems over a shared middleware infrastructure, but individually the engines are “blind” to the other agents, and the design tools do not handle the cooperative nature of the agents. One can, of course, define a message type that includes management information to allow some semblance of distributed control, but this is more likely to be a developer task than a platform capability.
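To make the developer-side workaround concrete, here is a minimal sketch of the kind of event envelope a developer might define: the business payload plus management metadata (correlation, source agent, hop count) that gives otherwise-blind agents some semblance of distributed control. All the field and class names here are hypothetical, not taken from any particular product.

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical event envelope: business payload plus management
# metadata that lets independent agents trace and correlate events
# as they are routed across processes.
@dataclass
class EventEnvelope:
    payload: dict
    event_type: str
    event_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    correlation_id: Optional[str] = None  # ties related events together
    source_agent: Optional[str] = None    # which agent last emitted this
    hop_count: int = 0                    # guards against routing loops
    timestamp: float = field(default_factory=time.time)

    def forward(self, via_agent: str) -> "EventEnvelope":
        """Re-emit the event from another agent, preserving correlation."""
        return EventEnvelope(
            payload=self.payload,
            event_type=self.event_type,
            correlation_id=self.correlation_id or self.event_id,
            source_agent=via_agent,
            hop_count=self.hop_count + 1,
        )
```

The point is simply that the correlation and hop-count bookkeeping lives in the message schema and in developer code, not in the platform – which is exactly the distinction being drawn above.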

Looking at something like TIBCO BusinessEvents, we can see this satisfies the requirements of a (physically distributed) Event Processing Platform:

  1. Enables a (computer) network of event processing agents – typically, as a minimum, rule agents and cache/datagrid agents, in pretty much any configuration.
  2. Enables a (single process) network of event processing operations – typically the network is implemented as declarative rules, but can be visualised as a network in a report.
  3. Enables different types of Event Processing Engines – apart from the rule agents, you can also have (continuous) query agents. Rule agents can also be customised as “decision agents” (executing decision rules or decision tables), “analytics agents” (executing predictive analytics models in Spotfire S+ or R), or “optimization agents” (executing NuOpt optimization routines in Spotfire Statistical Services) [*2].
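The agent-network idea in the list above can be sketched generically: a router (playing the platform role) dispatches every event to heterogeneous agents, here a single-event rule agent and a windowed continuous-query agent. This is a simplified illustration of the pattern, not TIBCO BusinessEvents’ actual API; all class names are invented.

```python
from collections import deque

class RuleAgent:
    """Fires an action when a single event matches a predicate."""
    def __init__(self, predicate, action):
        self.predicate, self.action = predicate, action
    def on_event(self, event):
        if self.predicate(event):
            self.action(event)

class QueryAgent:
    """Continuous query over a sliding window of the last N events."""
    def __init__(self, window_size, query, action):
        self.window = deque(maxlen=window_size)
        self.query, self.action = query, action
    def on_event(self, event):
        self.window.append(event)
        result = self.query(list(self.window))
        if result is not None:
            self.action(result)

class Router:
    """The 'platform' role: routes every event to all registered agents."""
    def __init__(self):
        self.agents = []
    def register(self, agent):
        self.agents.append(agent)
    def publish(self, event):
        for agent in self.agents:
            agent.on_event(event)

# Usage: flag any large trade, and report the average once the window fills.
alerts = []
router = Router()
router.register(RuleAgent(lambda e: e["amount"] > 1000,
                          lambda e: alerts.append(("big", e["amount"]))))
router.register(QueryAgent(3,
                           lambda w: (sum(e["amount"] for e in w) / len(w)
                                      if len(w) == 3 else None),
                           lambda avg: alerts.append(("avg", avg))))
for amount in (500, 1500, 700):
    router.publish({"amount": amount})
# alerts now holds ("big", 1500) and ("avg", 900.0)
```

The routing layer knows nothing about what each agent does internally – which is why adding a new engine type (a “decision agent”, say) only means registering another object with an `on_event` method.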

Notes:

[*1] Other candidates for an Event Processing Platform across distributed systems include IBM Infosphere Streams (although IBM is very quiet these days about that) and EventZero. If there are any others, please mention them in the comments, and if there are enough we’ll update the Market Analysis with this classification…

[*2] Note that invoking Spotfire services involves invoking the Spotfire platform under the control of a rule agent; from an architecture point of view these are just SOA services, like calling BusinessWorks services during event processing.