The ESB is Dead – Testament of a System Integrator

By Ivan Krizsan | April 5, 2017

In this article I will explain why I believe that the enterprise service bus is unsuitable when you have, or expect, more than a handful of integration scenarios, and share my thoughts on where to look instead. I will also add some thoughts and experiences from related areas. I will not present a tried-and-tested replacement for the ESB – some parts are more or less unknown to me, while I have extensive experience with other parts.
I will try to consider all the challenges that I have encountered, but that does not mean that all the suggestions have to be implemented at once. As usual, a gradual change will probably be better, adapting suggestions and ideas to the particular case at hand.

It’s already all over the news – the ESB is dead.

If you are looking for production code, this article is not for you. Contrary to my habits, this will be an article almost bereft of code – mostly text and figures, with only a few illustrative sketches. As usual, I first and foremost write for myself, and so also in this case: I write to substantiate some ideas that have arisen lately. My sudden demise is not close at hand, I hope, so the testament part of the title is more in the vein of passing something on.

Background

In this article I will look back on the last few years, during which I have worked in what I believe to be one of the largest integration competency centers in Sweden. The department has grown significantly during these years, and there is a need to work more efficiently and to automate as much as possible, since there simply isn’t the required manpower available to meet the increasing demands. I have worked mainly with the Mule ESB and I have a background as a server-side Java developer.

Enterprise Service Bus

Enterprise service bus with service-modules provided by the ESB (green) and integrations deployed in the ESB (orange).

An enterprise service bus is, to me, an application server that specializes in integrations. It provides services related to, for instance, communication (examples: HTTP, JMS, TCP), transformation (example: changing the format of a message from XML to CSV) and routing (example: a message is routed differently depending on its contents).
In addition, and as visualized in the figure above, the ESBs I have worked with provide an environment in which to deploy integrations.

Merits of the ESB

So what are the merits of the ESB?

  • The ESB provides a relevant, well-rounded set of features that are easy to use and well-documented.
    Hopefully these features are well-tested and free from bugs.
  • An ESB provides a standardized environment for your integrations.
  • It (sometimes) simplifies development.
    The ESB that I have been working with for some time provides its own flavour of Spring XML configuration to write your integrations in. This makes it easier to develop simple integrations. Other products in this area let you choose between XML configuration and domain-specific languages. There are also graphical tools to visualize your integration flows but, in my opinion, they are of limited value as the complexity rises beyond the trivial.
  • Support.
    Bosses and managers love support. With an ESB, there is usually commercial support available.

Drawbacks of the ESB

Some drawbacks of the ESB that I have found are:

  • It is a deployment container.
    Integrations running in an ESB run within the same process and, in my world, within the same Java virtual machine. If one integration consumes large amounts of resources, the others will be affected. The ESB may enter a state where it has to be restarted, which means that all the integrations in the ESB will suffer downtime. Monitoring and controlling resources like CPU and memory usage for one single integration is not possible. Robust lifecycle management of integrations is, in my experience, impossible – all too often management tools have failed me and forced me to restart the entire ESB.
  • It comes with a set of third-party libraries and dependencies.
    When you develop an integration to run in an ESB, you have to be careful to use the exact same versions of the libraries that the ESB comes with, or else you may face unpredictable results. In our case, this has led to a corporate Maven pom-file in which a substantial part consists of version numbers and dependency management declarations for the libraries in the ESB.
    On occasion I have encountered bugs in a third-party dependency of the ESB and have been unable to choose a version of the dependency in which the bug has been fixed, for fear of introducing problems.
  • Forces you to upgrade all integrations running in the ESB when you update the ESB to a new version.
    The bare minimum would be to update dependencies and make sure that the integrations run as expected in the new version of the ESB. More likely, you will have to remove use of deprecated features, fix parts that no longer work as expected in the new version and so on. In our case we have 100+ integrations/components, so even the best-case scenario means spending quite some time on something that does not add any value for the customers.
  • Requires, to varying degrees, developers to be familiar with the product.
    This depends a lot on how the ESB is implemented and whether it adheres to common concepts and patterns. This is not unique to ESBs but applies to all types of products; however, people with experience of a particular ESB are scarcer than people with experience of, for example, the Spring framework.
  • May result in a potentially expensive vendor lock-in.
    In the case of the Mule ESB, there is a community edition and an enterprise edition. The company behind it is naturally not interested in people using the free community edition and uses various measures to steer both users and partners towards the enterprise edition. We have seen this trend become increasingly clear over the last year or two. In other cases there isn’t even a free alternative available.
  • No automatic management of individual integrations running in the ESB.
    I have never seen an ESB that offers automatic management of individual integrations. By automatic management of an integration I mean automatic restart of the integration if it is deemed to be in an error state. The ESB I have mainly been working with does restart the entire ESB instance if it has become unresponsive, using the Tanuki service wrapper.

Alternative to the ESB

So what to do instead?

Well, I haven’t invented anything new so I’ll just provide some ideas that I may some day in the future expand on in greater detail.

I also need to mention that one of my basic requirements is that the software I use in my integration platform should be free and, preferably, open source. If there is commercial support available, then even better, but I should be able to choose whether to spend money or not.

Containerization

Containerization is central to my proposed approach. The main reason is that I want to be able to handle all standalone units of software – that is, integrations, applications, services and so on – in the same manner.

Example of different types of standalone software units that may be found in my integration landscape. Each software unit is running in its own container.

I like lists, so here is a list of the benefits that containerization is expected to give me in my integration landscape:

  • Each unit of software is managed in the same way – as a container.
    Significantly simplifies operations.
  • Automatic orchestration of containers.
    Provided that container management software with this feature is used.
  • All units of software can be monitored in the same way.
  • Controlled environment for each unit of software.
    Each unit of software executes in a carefully specified container. Containers may be discarded at any time, which discourages manual tinkering.
  • Individual resource control for each unit of software.
    Examples of resources are CPU, RAM and I/O access speed.
  • Test and production environments are identical.
    Same containers, only deployed on a different set of servers.
  • Easier and faster to set up an environment.
    A test environment can be discarded and a new one set up with minimal effort. This has the positive side-effect of a better, more predictable testing environment. In addition, it becomes easier to maintain multiple environments if so desired.
  • Enables uniform integration testing of a unit of software.
    I imagine that this will give developers a standardized way to write integration tests; see the sketch after this list. When running the tests, the software to be tested also executes in one or more containers, allowing several different tests to run at the same time, for instance on a build server, without affecting each other.
  • Enables automated testing where a larger number of software units are involved.
    For example, regression testing.
  • Makes virtualization software unnecessary.
    Not only will it be possible to save the money spent on virtualization software, but running containers instead of virtual machines is also expected to let you use a larger percentage of the server’s capacity for your applications etc.
  • Being ready to run units of software off-premise, in the notorious cloud, if desired.
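
To make the integration-testing point above a little more concrete, here is a minimal sketch assuming JUnit 4 and the Testcontainers library: a disposable message broker is started in a container just for the duration of the test. The broker image, port and test contents are assumptions for the sake of the example.

```java
import org.junit.Rule;
import org.junit.Test;
import org.testcontainers.containers.GenericContainer;

public class SoftwareUnitIntegrationTest {

    // Start a throwaway ActiveMQ broker container for this test.
    // The image name is an assumption; use whatever broker image fits.
    @Rule
    public GenericContainer broker =
            new GenericContainer("rmohr/activemq:5.14.5")
                    .withExposedPorts(61616);

    @Test
    public void softwareUnitCanProcessMessages() {
        // The container is mapped to a random host port, so several
        // tests can run concurrently on one build server without clashes.
        String brokerUrl = "tcp://" + broker.getContainerIpAddress()
                + ":" + broker.getMappedPort(61616);
        // ... start the unit of software against brokerUrl and assert
        // on its behaviour ...
    }
}
```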

Container Orchestration

I have hopes for reduced human intervention when it comes to operating my integration platform. I wish for an orchestration tool that can monitor the state of the instances of my units of software and determine whether they operate as expected or not. If an instance does not operate as expected, it is to be stopped and a replacement instance started. I also wish for automatic scaling of my integration platform: if the instances of one and the same type of software unit are under heavy load, then an additional instance is to be started. When the load is low, instances can be stopped.
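
For an orchestration tool to determine whether an instance operates as expected, each unit of software has to expose its state in a way the tool can probe. A minimal sketch, assuming Spring Boot Actuator is used: a custom health indicator, exposed over HTTP, that the orchestration tool can poll to decide whether an instance should be replaced. The broker check is a hypothetical example.

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Contributes to the health endpoint that Spring Boot Actuator exposes,
// which an orchestration tool can probe over HTTP.
@Component
public class BrokerConnectionHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        if (brokerConnectionIsAlive()) {
            return Health.up().build();
        }
        // A DOWN status signals that this instance should be replaced.
        return Health.down().withDetail("broker", "connection lost").build();
    }

    private boolean brokerConnectionIsAlive() {
        // ... check this unit's connection to the message broker ...
        return true;
    }
}
```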

For container orchestration I have put together the following list of tools that I would want to have a closer look at:

  • Kubernetes
    Kubernetes from Google must be the great-grandfather of container orchestration tools and seems to be very complete as far as features are concerned.
  • Apache Mesos
    I am not entirely sure that Mesos is what I am looking for but definitely something that I want to take a closer look at.
  • Nomad by HashiCorp
    I hadn’t heard about Nomad until I started researching this article, so I really cannot say much about it, but I include it on my list of container orchestration tools to check out.

Units of Software

In the concept units of software I include integrations, services and applications that are developed in-house. I also include third party standalone software, such as message brokers, that one may need in an integration landscape. This section will focus on the former type of software units, namely the ones developed in-house.

Some requirements for in-house developed units of software are:

  1. Standalone units.
    No application server, ESB or similar, should be required.
  2. Scalable.
    Multiple instances of a unit of software should be able to run at the same time.
  3. Location insensitive.
    Should be able to run anywhere in a group of servers.
  4. Disposable.
    If one unit malfunctions, I should be able to shut it down and replace it with a new instance.
  5. Centralized configuration.
    There should be a centralized, highly available service for configuration information that allows for convenient configuration management, such as maintaining a hierarchy of configurations in which individual properties or groups of properties can be overridden. A sketch of what this can look like in a unit of software follows after this list.
  6. Loosely coupled.
    A unit of software should not depend on other units of software in such a way that making internal modifications to one unit forces me to update another.
  7. Should be monitored in the same way.
    This includes not only exposing monitoring information over, for instance, JMX, but also producing output to log files that can be conveniently gathered and interpreted by a tool like the ELK stack or similar.
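
As a sketch of requirement 5, and assuming that Spring Cloud Config – one candidate among several – is used as the centralized configuration service, a unit of software could consume centrally managed, refreshable properties like this. The property name is a hypothetical example:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

// A bean whose properties are served by a central configuration service.
// With @RefreshScope the values can be updated without redeploying the
// unit of software, and environment-specific overrides are handled
// centrally instead of in each unit.
@RefreshScope
@Component
public class BrokerSettings {

    // 'integration.broker.url' is a hypothetical property name, resolved
    // from the central configuration service at startup and on refresh.
    @Value("${integration.broker.url}")
    private String brokerUrl;

    public String getBrokerUrl() {
        return brokerUrl;
    }
}
```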

If you have been following the trends in software development, it should be apparent that the above requirements are heavily inspired by microservices architecture. There is one significant difference: the units of software I envision are not necessarily part of an application – one unit of software may not collaborate with other units of software to form a greater whole.

Technology Choices

Here I have to say a few words on choice of technology for in-house development.

This is not only a technology question, but also a question of the competence available in your area and the competence your organization is capable of attracting. In my case I would choose mainstream technology, so that there is a chance that people who apply for jobs in the organization have some previous experience of it.

In addition, you will want to choose technology which has an active user community on the internet, so that you can learn from others’ experience.

Finally, I would insist that the technology chosen be open source. When the shit hits the fan, the search engines come up with no useful help and there seems to be a section missing in the documentation right where the solution to your problem should have been, you will want to be able to dig into the source code of the frameworks to see how things really work. With access to the source code you will also be able to create temporary bug-fixes in the frameworks you use, until such fixes are incorporated in the framework itself.

Standalone Units

So, how would I want to develop and maintain these in-house developed units of software?

I like the Spring Boot concept and so I would use Spring Boot. This would enable me to develop standalone units of software regardless of whether I want to run my unit of software in a web container such as Tomcat or as a plain Java application. In the former case, Spring Boot will package my web application and an embedded instance of Tomcat in one and the same JAR file.
In addition, the complete, standalone JAR file that Spring Boot produces by default can be conveniently packaged into a Docker container, as I have shown in an earlier article.
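
The skeleton of such a standalone unit is small. A minimal sketch – the class name is just an example:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// A standalone unit of software: everything needed to run it, including
// an embedded Tomcat if it is a web application, is packaged into one
// executable JAR file. No application server or ESB is required.
@SpringBootApplication
public class IntegrationUnitApplication {

    public static void main(String[] args) {
        SpringApplication.run(IntegrationUnitApplication.class, args);
    }
}
```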

Regardless of the type of development, I think that having some kind of limit on the size of the standalone software units would be beneficial. One of my main motivations for developing software in components is to reduce maintenance by, among other things, reducing the complexity of each component.

Embeddable Integration Framework

Since I am considering an integration platform, I would evaluate an embeddable integration framework such as Spring Integration or Apache Camel and use it in the units of software that need it; a sketch follows below.
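
As an illustration, the sketch below embeds Apache Camel in a plain Java unit of software. The broker URL and queue names are assumptions for the sake of the example:

```java
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRouteRunner {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Register a JMS component backed by an ActiveMQ connection
        // factory; the broker URL is an example value.
        ConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        context.addComponent("jms",
                JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Route messages from an inbound to an outbound queue,
                // logging each message on the way.
                from("jms:queue:ordersIn")
                        .to("log:orders")
                        .to("jms:queue:ordersOut");
            }
        });

        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the standalone unit alive
    }
}
```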

In-House Libraries

Developing in-house libraries is acceptable as long as they are actively maintained and there is a clear gain to be made from each addition to the library. The features delivered by the in-house libraries should be features found nowhere else – I would personally rather use an open-source library, even if it does not completely satisfy my requirements, than develop something of my own. If you choose to develop in-house libraries, then I would also introduce a corporate Maven pom-file (assuming that you use Maven) to lift the burden of having to determine the proper versions of any additional third-party libraries off the developers.

One of the main things to remember about developing your own libraries is that you don’t just throw some code together in a project and get a library. Libraries need planning, maintenance and code quality standards that are above the requirements you should place on your regular production code. This is, of course, true regardless of the type of software developed.

Project Templates

I recommend creating Maven archetypes, or some other type of project template, for the different types of standalone software units you expect to develop. Such project templates should standardize the project structure and introduce the basic dependencies of the technology stack of choice. Maybe even let the project template create some basic version of the component, with an outline of how to implement the component and an outline of how to test it.

Automatic Release and Version Management for Libraries and Project Templates

Automating release and version management of the libraries and project templates developed in-house is a must, in order to avoid a lot of very tedious work. One suggestion is to maintain these projects as modules in a Maven project and to have one and the same version number for all the different artifacts, in order to reduce confusion and the amount of manual labour that goes into releasing several projects that sometimes have dependencies on each other.

Example of Maven project hierarchy in which a corporate Maven pom, additional Maven pom-files, project templates and libraries can be arranged to facilitate easier version management and releases.

Scalability, Reliability and Internal Communication

When using asynchronous communication with a message broker – like a JMS broker in the Java world – to pass messages between software units, it is very simple to achieve scalability and reliability: just deploy multiple instances of a software unit.

Multiple units of software listening on one and the same queue consuming only messages aimed at the type and version of the units.

In the figure above, each software unit will only receive messages that have the name and version of the software unit’s type set as metadata on the messages. When there are multiple instances of a type of software unit, as in the case with the green SoftwareUnitA v1.00, they will share the load of processing the messages aimed at that type of software unit – in this case the green messages.

An instance that consumes a message from a queue but fails to process the message can roll the message back onto the queue.
Thus, with asynchronous communication like this, we get automatic scaling and failover without introducing any additional software or hardware except for a message broker. A sketch of what such a consumer can look like follows below.
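
A minimal sketch of such a consumer, assuming Spring JMS is used: a JMS message selector restricts the listener to messages carrying the unit’s name and version as metadata, and a transacted listener container (configuration not shown) rolls failed messages back onto the queue. The header names and queue name are hypothetical:

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class SoftwareUnitAMessageConsumer {

    // Only consume messages aimed at this type and version of software
    // unit. Multiple instances with the same selector share the load.
    @JmsListener(
            destination = "integrationQueue",
            selector = "unitName = 'SoftwareUnitA' AND unitVersion = '1.00'")
    public void onMessage(final String payload) {
        // If processing throws an exception, the transacted session rolls
        // the message back onto the queue so another instance can retry it.
        process(payload);
    }

    private void process(final String payload) {
        // ... the actual message processing ...
    }
}
```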

With synchronous communication, like HTTP, it is not quite as easy to obtain scalability and reliability without introducing additional software. I will not go into details but just say that to solve this requirement for synchronous communication, I would want to take a close look at Spring Cloud and the different parts that it consists of. I see similarities between the needs that arise in a micro-service based application and the needs related to synchronous communication in my imaginary integration platform, for instance regarding routing, service discovery, load balancing etc.

With an integration platform, HTTP may not be enough as far as synchronous communication is concerned. The HL7 standard, for example, may use the MLLP transport protocol. A feasible strategy can be to have an MLLP receiver component at the edge of the integration platform, behind a load balancer, which transforms the HL7 messages to HL7-over-HTTP. Regular micro-service technology for HTTP can then be used to route messages within the integration platform until a message leaves the platform, at which point it is sent by an MLLP sender component.

Example of how synchronous communication can be used with HTTP
and an additional protocol in my integration platform.

An idea to explore in greater detail is to allow for synchronous communication up to the border of the integration platform and then use asynchronous message-based communication for communication inside the integration platform. This would have to be tested carefully to determine the magnitude of the overhead added and how the solution behaves under load.

To me, the default choice for communication inside my integration platform would be asynchronous message-based communication. There would have to be a very good motivation for me to consider anything else, due to the additional complexity introduced by synchronous communication.
With this said, I would be careful when selecting the message broker to be used in my integration platform, since experience tells me it is an important, if not the most important, component. Not only does it need to be reliable and scalable, but it should also be as fast as possible.

Location Insensitivity

As with scalability and reliability above, location insensitivity depends closely on the mode of communication employed by the software units.

If the software unit uses asynchronous message-based communication, then I do not see any need for a service registry or similar. Regardless of where it is deployed, the software unit connects with a message broker and pulls the messages it is interested in and places any messages it produces onto some queue. A software unit does not need to know where other software units it communicates with are located.

Message-based interaction between multiple software units and an external message source.

In the figure above we can see how multiple software units communicate through one queue in a message broker. It is assumed that both message consumers, Software Units 2 and 4, are allowed to receive all the messages on the queue Q1. Thus, if Software Unit 4 becomes unavailable, Software Unit 2 will automatically take over and process all the messages.
Notice the absence of load balancers, edge services and the like.

With synchronous communication, a software unit must know the addresses of the other software units it interacts with. To accomplish this, software units register with a service registry when they are ready to receive messages. Other software units then ask the service registry for the addresses of the software units they wish to interact with, as sketched below.
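
A minimal sketch, assuming Spring Cloud with a service registry such as Eureka is used; the logical service name is a hypothetical example. The unit registers itself at startup, and other units look it up by name instead of by network address:

```java
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Registers this unit of software with the service registry at startup,
// given that registry coordinates are supplied in the configuration.
@EnableDiscoveryClient
@SpringBootApplication
public class SynchronousUnitApplication {

    // Used to look up the current addresses of other software units.
    @Autowired
    private DiscoveryClient discoveryClient;

    public static void main(String[] args) {
        SpringApplication.run(SynchronousUnitApplication.class, args);
    }

    public String findSoftwareUnitBUrl() {
        // "software-unit-b" is the logical name another unit registered
        // itself under; the registry returns its current location(s).
        List<ServiceInstance> instances =
                discoveryClient.getInstances("software-unit-b");
        return instances.isEmpty()
                ? null : instances.get(0).getUri().toString();
    }
}
```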

Scheduled Jobs

Scheduled activities are not uncommon, at least not in the integration platform I have worked with. Originally we chose to implement scheduling in each component that was to execute a scheduled job. Later, a centralized scheduler was implemented that sent trigger messages to queues to trigger the execution of scheduled jobs in components.
Depending on which container orchestration solution is used, there may be a scheduler available. Kubernetes, for instance, supports scheduling of jobs using, among other things, cron expressions. This makes it possible to avoid implementing job scheduling yourself and instead rely on the container orchestration tool.
If a scheduled job is implemented in a standalone unit of software, then this unit of software can be developed so as to immediately perform its work when launched and then quit. The unit of software is later executed in a container launched by the orchestration tool, as sketched below.
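
A minimal sketch of such a run-once unit, assuming Spring Boot; the class name and the work performed are hypothetical examples:

```java
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// A scheduled-job unit of software: it performs its work immediately
// when launched and then terminates. When it runs is entirely up to the
// scheduler of the container orchestration tool.
@SpringBootApplication
public class NightlyExportJob implements CommandLineRunner {

    public static void main(String[] args) {
        // Exit when the work in run() has completed, so that the
        // container terminates and can be scheduled anew.
        System.exit(SpringApplication.exit(
                SpringApplication.run(NightlyExportJob.class, args)));
    }

    @Override
    public void run(final String... args) {
        // ... perform the job's work here ...
    }
}
```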

Scheduling of a job using the scheduling of the container orchestration tool. The scheduled job runs at the time it is started by the orchestration tool and then terminates.

If a scheduled job involves work that is performed by multiple units of software working together, then you may want to implement a very small, generic unit of software that just sends a trigger message, received as a parameter, to a queue whose name is also received as a parameter, and then quits; see the sketch below. This unit of software can then be packaged and scheduled in the same manner as in the case described above.
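
A sketch of such a generic job-trigger unit, using plain JMS with ActiveMQ as an assumed broker; the broker URL and the argument handling are simplified for the example:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

// Generic job trigger: sends one trigger message to one queue, both
// received as parameters, and then quits.
public class JobTrigger {

    public static void main(String[] args) throws Exception {
        String queueName = args[0];
        String triggerMessage = args[1];

        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker:61616");
        Connection connection = factory.createConnection();
        try {
            Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue(queueName));
            producer.send(session.createTextMessage(triggerMessage));
        } finally {
            connection.close();
        }
    }
}
```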

Scheduling of a job involving multiple units of software using the scheduling of the container orchestration tool. The job trigger unit sends a trigger message and then terminates. The software units performing the job are long-running.

Monitoring and Logging

I will not go into depth about monitoring – you need good monitoring regardless of whether you use an ESB or not. I like the ELK stack and have written about it earlier here and here. You may want to use Elastalert or a commercial alternative for alerting.

I do want to say a few words on logging, which really apply to any non-trivial system landscape. In such a system landscape you will want to capture all the logs into something like Elasticsearch, in order to make them easily searchable and to be able to query for, for instance, statistics. This includes not only the logs from the in-house developed units of software but also the logs produced by the third-party libraries used by these units of software.
The logs written by the software you develop yourself are easy to handle – create a log library that produces logs in JSON format. Such logs do not need to be parsed but can be fed directly into Elasticsearch. A tiny sketch of the idea follows below.
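
An illustrative sketch only – a real log library would use something like logstash-logback-encoder and handle log levels, escaping and contextual data properly – but it shows the idea of one JSON object per log line:

```java
import java.time.Instant;

// Each log event becomes one JSON object on one line, which can be
// shipped to Elasticsearch without any parsing instructions.
public final class JsonLog {

    public static void info(final String logger, final String message) {
        System.out.println(String.format(
                "{\"@timestamp\":\"%s\",\"level\":\"INFO\","
                        + "\"logger\":\"%s\",\"message\":\"%s\"}",
                Instant.now(),
                logger,
                message.replace("\"", "\\\"")));
    }
}

// Example usage: JsonLog.info("OrderRoute", "Processed order 42");
```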

Log output from third-party libraries can be handled in different ways, depending on how much effort you are willing to invest. The simplest way, which also requires the least amount of effort, is to feed entire lines of log output into Elasticsearch as single units. While this is the lazy way, it is still much better than not having that log output in Elasticsearch at all.
An approach that requires more effort is to create parsing instructions, for example for Filebeat.

Final Words

As I had expected, this article did help me to put some of my thoughts into writing and fill in some gaps here and there. It has also made me increasingly motivated to continue my learning and exploration. In some areas I wish I had more information to provide, but I hope I will be able to get back on the subject some time in the future.

Happy coding!

3 thoughts on “The ESB is Dead – Testament of a System Integrator”

  1. Chris

    Great article… did you actually implement something along these thoughts?

    1. Ivan Krizsan (post author)

      Thanks!
      I currently work in an organization that moves slowly and where politics is a major factor, but let’s say that I still haven’t given up all hope.
      To be honest, there is no choice if we want to be able to handle future (not that far away really) demands.

  2. Charles Harvey

    Ivan, thank you so much for the informative article! Also for the mule docker files which you have published. I am also in a large organization where we use mule community and have written our own management console which is a bit more robust than mule’s commercial ARM. We have used Mule as a lightweight container and do most of our work in Java which does allow us portability without too much heavy lifting. We are currently trying to solve a lot of the problems that you have mentioned here using Docker and Kubernetes alongside app design. Do you have any information regarding whether mule community 4 will ever be supported in anypoint studio? It seems suspiciously hard to find information regarding that fact.

    Best,
    Charles Harvey
    University of Pennsylvania

