In this article I will look at the new non-blocking HTTP communication available in Mule 3.6 CE. The old HTTP connectors and proxy pattern will be compared with the new with a focus on reliability under load and ease of configuration.
I will also examine the best pre-3.6 alternative, as far as HTTP communication is concerned, for those that are not in a position to migrate immediately.
A practical approach will be used and I will show how to develop three different HTTP proxies, a HTTP stub service to be proxied and how to load-test the different proxies using Gatling.
As part of the examples, we will see how to set up a Gatling load-test project that can conveniently be run from within your IDE.
Scenario
The scenario, with the three different types of proxies, the stub service and Gatling, is shown in the following figure:
Logos:
Tomcat logo by The Apache Tomcat Project Team – licensed under Apache License 2.0 via Wikimedia Commons.
Gatling logo copyright Gatling Corp, used with permission.
MuleSoft logo copyright by MuleSoft, used with permission.
Prerequisites
First and foremost, choose a recent 64-bit Java runtime environment. I have run all the tests in this article using JDK 1.7u72.
Second, make sure that the operating system you are using allows for many sockets to be opened. Since I am using Mac OS X, I issued the following commands in a terminal window:
sudo sysctl -w kern.maxfilesperproc=300000
sudo sysctl -w kern.maxfiles=300000
sudo sysctl -w net.inet.ip.portrange.first=1024
The above was found in the Gatling documentation, where information on how to raise the maximum number of open ports/files for Linux is also given.
HTTP Service Stub Servlet
The HTTP service stub is a simple servlet which I run in a standalone Tomcat 8. Using Eclipse with the JavaEE extensions or Spring Tool Suite, it is implemented in the following steps:
- Download Tomcat and add a new server in the IDE if required.
- Create a new Dynamic Web Project with the context root “ProxiedServiceStub”.
I also named my project “ProxiedServiceStub”.
- Implement the servlet as shown in this listing:
package se.ivankrizsan.proxiedservicestub.servlet;

import java.io.BufferedReader;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * This servlet acts as a proxied service when load-testing the different types
 * of proxy constructs implemented using Mule ESB 3.6 Community Edition.
 * Access this servlet using the URL:
 * http://localhost:8080/ProxiedServiceStub/ServiceStubServlet
 *
 * @author Ivan Krizsan
 */
@WebServlet(urlPatterns = { "/ServiceStubServlet" })
public class ServiceStubServlet extends HttpServlet {
    /* Constant(s): */
    private static final long serialVersionUID = 4036391400918846212L;
    private static final int READ_BUFFER_SIZE = 100;
    private static final String APPEND_STRING = "-appended by servlet ";
    private static final String DATETIME_FORMAT = "yyyy-MM-dd - HH:mm:ss";
    private static final long REPLY_DELAY_MILLISEC = 500;

    @Override
    protected void service(final HttpServletRequest inHttpServletRequest,
        final HttpServletResponse inHttpServletResponse)
        throws ServletException, IOException {
        final StringBuffer theResponseBodyBuffer = new StringBuffer();

        final String theRequestBody = obtainPayloadFromRequest(inHttpServletRequest);
        theResponseBodyBuffer.append(theRequestBody);
        theResponseBodyBuffer.append(APPEND_STRING);

        final SimpleDateFormat theDateFormatter = new SimpleDateFormat(DATETIME_FORMAT);
        final String theDateTimeString = theDateFormatter.format(new Date());
        theResponseBodyBuffer.append(theDateTimeString);

        try {
            Thread.sleep(REPLY_DELAY_MILLISEC);
        } catch (final InterruptedException theException) {
            /* Ignore exceptions. */
        }

        inHttpServletResponse.getWriter().write(theResponseBodyBuffer.toString());
        inHttpServletResponse.flushBuffer();
    }

    /**
     * Retrieves the request body string representation from the supplied HTTP servlet request.
     *
     * @param inHttpServletRequest Request from which to retrieve request body.
     * @return Request body string representation, or empty string if no request body present.
     * @throws IOException If error occurs reading data from request.
     */
    private String obtainPayloadFromRequest(final HttpServletRequest inHttpServletRequest)
        throws IOException {
        final BufferedReader theRequestBodyReader = inHttpServletRequest.getReader();
        final char[] theReadBuffer = new char[READ_BUFFER_SIZE];
        final StringBuffer theRequestBodyStringBuffer = new StringBuffer();
        int theReadCount = 0;

        while (theReadCount != -1) {
            theReadCount = theRequestBodyReader.read(theReadBuffer);
            if (theReadCount > 0) {
                theRequestBodyStringBuffer.append(theReadBuffer, 0, theReadCount);
            }
        }

        return theRequestBodyStringBuffer.toString();
    }
}
The above servlet will retrieve any contents in the body of the request and append a string as well as the current date and time. If the body contained “bodycontents”, then the response will be something similar to “bodycontents-appended by servlet 2015-03-01 - 14:22:50”.
In addition, there will be a delay of 500 milliseconds before a response is given for a request.
Note that I have overridden the service method in the servlet superclass; this is in order to handle the requests for all the different HTTP methods in one and the same place.
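As a quick sanity check of the response format, the transformation the stub applies to a request body can be sketched as a standalone class. The constants are copied from the servlet above; the class name and method are mine, for illustration only:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Standalone sketch of the response body the stub servlet produces.
public class StubResponseSketch {
    // Same constants as in the servlet listing above.
    static final String APPEND_STRING = "-appended by servlet ";
    static final String DATETIME_FORMAT = "yyyy-MM-dd - HH:mm:ss";

    // Response body = request body + append string + formatted current date and time.
    static String buildResponseBody(final String inRequestBody) {
        return inRequestBody + APPEND_STRING
            + new SimpleDateFormat(DATETIME_FORMAT).format(new Date());
    }

    public static void main(final String[] args) {
        // Prints something similar to:
        // bodycontents-appended by servlet 2015-03-01 - 14:22:50
        System.out.println(buildResponseBody("bodycontents"));
    }
}
```

The Gatling simulations later in this article rely on exactly this format, since they check the response body for the string "-appended by servlet".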
Mule ESB HTTP Proxies
The Mule ESB proxies are also developed in Eclipse or Spring Tool Suite using the Anypoint Studio Eclipse plug-in. If you haven’t already, install this plug-in using this update site: http://studio.mulesoft.org/r4/updates
Also install the Mule 3.6.1 Community Edition runtime using this update site: http://studio.mulesoft.org/r4/studio-runtimes
Update: Try finding valid update sites here: https://docs.mulesoft.com/studio/latest/studio-update-sites
- In your IDE, create a new Mule Project.
I call my project “CompareHTTPProxy”.
- In src/main/resources, create a file named “log4j2.xml” with the following contents.
The purpose of this Log4J2 configuration file is to turn off the excessive logging that would otherwise appear in the console during the tests.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration shutdownHook="disable">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%-5p %d [%t] %c: %m%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <AsyncRoot level="WARN">
            <AppenderRef ref="Console"/>
        </AsyncRoot>
    </Loggers>
</Configuration>
- Open the XML file in src/main/app that contains the Mule configuration created along with the project.
In my case the name of this file is “comparehttpproxy.xml”.
- Replace the contents of the Mule configuration file from the previous step with:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:pattern="http://www.mulesoft.org/schema/mule/pattern"
    xmlns:http="http://www.mulesoft.org/schema/mule/http"
    xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
    xmlns:spring="http://www.springframework.org/schema/beans"
    xmlns="http://www.mulesoft.org/schema/mule/core"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jetty="http://www.mulesoft.org/schema/mule/jetty"
    xmlns:test="http://www.mulesoft.org/schema/mule/test"
    xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/pattern http://www.mulesoft.org/schema/mule/pattern/current/mule-pattern.xsd
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/jetty http://www.mulesoft.org/schema/mule/jetty/current/mule-jetty.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
        http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
        http://www.mulesoft.org/schema/mule/test http://www.mulesoft.org/schema/mule/test/current/mule-test.xsd"
    version="CE-3.6.0">

    <!-- The Mule pattern HTTP proxy. Available in earlier versions of the
        Mule ESB, deprecated in Mule ESB 3.6. -->
    <pattern:http-proxy name="mulePatternHttpProxy">
        <http:inbound-endpoint address="http://0.0.0.0:9101"/>
        <http:outbound-endpoint
            address="http://0.0.0.0:8080/ProxiedServiceStub/ServiceStubServlet"
            responseTimeout="10000"/>
    </pattern:http-proxy>

    <!-- The HTTP proxy which uses a Jetty inbound endpoint with non-blocking IO
        and a regular outbound HTTP endpoint.
        The regular HTTP outbound endpoint has been deprecated in Mule ESB 3.6,
        but nowhere in the Mule 3.6 documentation are there any indications of
        the Jetty transport having been deprecated. -->
    <jetty:connector name="jettyNonblockingConnector" useContinuations="true"/>
    <flow name="httpProxyWithJettyNonblockingInbound">
        <jetty:inbound-endpoint connector-ref="jettyNonblockingConnector"
            address="http://0.0.0.0:9201" exchange-pattern="request-response"/>
        <!-- Propagate at least the HTTP Content-Type in this simplistic proxy. -->
        <copy-properties propertyName="Content-Type"/>
        <http:outbound-endpoint
            address="http://0.0.0.0:8080/ProxiedServiceStub/ServiceStubServlet"
            responseTimeout="10000"/>
    </flow>

    <!-- The HTTP proxy using the new non-blocking HTTP listener and requester. -->
    <http:listener-config name="nonblockingHttpListenerConfig" host="0.0.0.0"
        port="9301" usePersistentConnections="false"/>
    <http:request-config name="nonblockingHttpRequestConfig" host="0.0.0.0"
        basePath="" usePersistentConnections="false"/>
    <flow name="muleNonblockingHttp">
        <http:listener config-ref="nonblockingHttpListenerConfig" path="/"/>
        <http:request config-ref="nonblockingHttpRequestConfig"
            path="ProxiedServiceStub/ServiceStubServlet" port="8080"
            method="POST" responseTimeout="10000"/>
    </flow>
</mule>
Note that:
- The first HTTP proxy in the file is a HTTP pattern proxy.
It listens on port 9101 and forwards any requests to the stub servlet, with a 10-second timeout waiting for the response.
All the patterns, including this HTTP proxy pattern, are deprecated in Mule 3.6 and set to be removed in Mule 4.0.
- The second HTTP proxy uses a Jetty inbound endpoint listening on port 9201 and a regular HTTP outbound endpoint.
The Jetty inbound endpoint uses a Jetty connector which is configured to use non-blocking IO.
The regular HTTP inbound and outbound endpoints are deprecated in Mule 3.6 and, like the patterns, set to be removed in Mule 4.0.
In order for the Content-Type HTTP header to be included in the request sent to the stub servlet, a copy-properties transformer must be used.
- The third HTTP proxy uses the new non-blocking HTTP listener and requester found in Mule 3.6.
Note the separate listener and request configuration blocks before the flow, which are referred to by the listener and request elements inside the flow.
For details on the new HTTP connector, please refer to the Mule documentation.
- In both the <listener-config> and the <request-config>, the usePersistentConnections attribute is set to false.
This was my initial setting and, as we will see, this will affect the performance of the proxy.
Develop Gatling Load Tests
The next step in this example is to develop the Gatling load tests. For the sake of convenience and in order to be able to share the load tests as any other Maven project, I will develop the load tests in a Maven project in Eclipse.
Before we do that, the Scala IDE Eclipse plug-in needs to be installed. At the time of writing, using Gatling 2.1.2, version 4.0.0 or newer of the Scala IDE plug-in is required.
The webpage listing the update sites is located here.
A word of warning for users of Spring Tool Suite: if you want to install the Scala IDE plug-in from the Spring Dashboard within Spring Tool Suite, first make sure that it is the correct version. I recommend installing the Scala IDE plug-in using one of the update sites on the scala-ide.org webpage.
Having installed the Scala IDE plug-in, the load test project is created as follows:
- Select File → New → Maven Project.
We are going to create a project using a Maven archetype.
- Select the Maven archetype with the group id “io.gatling.highcharts” and the artifact id “gatling-highcharts-maven-archetype”.
At the time of writing, the latest version of this archetype is version 2.1.2.
- Specify the group id, artifact id, version and package of the load test project.
The values I have used are shown in the figure below.
- Click the Finish button.
- In the default package, next to the “Engine.scala” file, create a new file named “HttpProxySimulation.scala” with the contents below.
Gatling load-tests are regular Scala classes. This class is an abstract class that specifies properties common to the Gatling load-tests we will develop in this example, so that all the load-tests have the same number of simulated users, the same duration and so on.
import io.gatling.core.scenario.Simulation
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

/**
 * This class defines the basic properties of a Gatling simulation used to load-test a HTTP proxy.
 *
 * @author Ivan Krizsan
 */
abstract class HttpProxySimulation extends Simulation {
  /* Simulation timing and load parameters. */
  val rampUpTimeSecs = 20
  val testTimeSecs = 60
  val noOfUsers = 200
  val noOfRequestPerSeconds = 600
  val minWaitMs = 20 milliseconds
  val maxWaitMs = 100 milliseconds
  /* Request HTTP body contents. */
  val requestBody = "test1"
  /* Expected string in response HTTP body. */
  val expectedResponseBodyPart = requestBody + "-appended by servlet"
  /* Expected response HTTP status. */
  val expectedHttpStatus = 200
}
- In the same package, create a new file named “OneMuleJettyProxySimulation.scala” with the following contents:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

/**
 * This simulation load-tests a HTTP proxy implemented in Mule using a
 * non-blocking Jetty inbound endpoint and a regular HTTP outbound endpoint.
 *
 * @author Ivan Krizsan
 */
class OneMuleJettyProxySimulation extends HttpProxySimulation {
  val baseURL = "http://localhost:9201"
  val baseName = "OneMuleJettyHttpProxy"
  val requestName = baseName + "-request"
  val scenarioName = baseName + "-scenario"

  val httpProtocol = http
    .baseURL(baseURL)
    .acceptHeader("text/plain")
    .userAgentHeader("Gatling")

  val testScenario = scenario(scenarioName)
    .during(testTimeSecs) {
      exec(http(requestName)
        .post("")
        .body(StringBody(requestBody))
        .header("Content-Type", "text/plain")
        .check(status.is(expectedHttpStatus))
        .check(regex(expectedResponseBodyPart).exists))
        .pause(minWaitMs, maxWaitMs)
    }

  setUp(
    testScenario
      .inject(atOnceUsers(noOfUsers)))
    .throttle(reachRps(noOfRequestPerSeconds) in (rampUpTimeSecs seconds),
      holdFor(testTimeSecs seconds))
    .protocols(httpProtocol)
}
- At the same location, create a new file named “OneMuleNonblockingHttpProxySimulation.scala”:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

/**
 * This simulation load-tests a Mule HTTP proxy implemented using the new non-blocking Mule
 * HTTP listener and requester.
 *
 * @author Ivan Krizsan
 */
class OneMuleNonblockingHttpProxySimulation extends HttpProxySimulation {
  val baseURL = "http://localhost:9301"
  val baseName = "OneMuleNonblockingHttp"
  val requestName = baseName + "-request"
  val scenarioName = baseName + "-scenario"

  val httpProtocol = http
    .baseURL(baseURL)
    .acceptHeader("text/plain")
    .userAgentHeader("Gatling")

  val testScenario = scenario(scenarioName)
    .during(testTimeSecs) {
      exec(http(requestName)
        .post("")
        .body(StringBody(requestBody))
        .header("Content-Type", "text/plain")
        .check(status.is(expectedHttpStatus))
        .check(regex(expectedResponseBodyPart).exists))
        .pause(minWaitMs, maxWaitMs)
    }

  setUp(
    testScenario
      .inject(atOnceUsers(noOfUsers)))
    .throttle(reachRps(noOfRequestPerSeconds) in (rampUpTimeSecs seconds),
      holdFor(testTimeSecs seconds))
    .protocols(httpProtocol)
}
- Create another file named “OneMulePatternHttpProxySimulation.scala” at the same location:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

/**
 * This simulation load-tests a Mule HTTP proxy implemented using the HTTP proxy pattern.
 *
 * @author Ivan Krizsan
 */
class OneMulePatternHttpProxySimulation extends HttpProxySimulation {
  val baseURL = "http://localhost:9101"
  val baseName = "OneMuleHttpProxyPattern"
  val requestName = baseName + "-request"
  val scenarioName = baseName + "-scenario"

  val httpProtocol = http
    .baseURL(baseURL)
    .acceptHeader("text/plain")
    .userAgentHeader("Gatling")

  val testScenario = scenario(scenarioName)
    .during(testTimeSecs) {
      exec(http(requestName)
        .post("")
        .body(StringBody(requestBody))
        .header("Content-Type", "text/plain")
        .check(status.is(expectedHttpStatus))
        .check(regex(expectedResponseBodyPart).exists))
        .pause(minWaitMs, maxWaitMs)
    }

  setUp(
    testScenario
      .inject(atOnceUsers(noOfUsers)))
    .throttle(reachRps(noOfRequestPerSeconds) in (rampUpTimeSecs seconds),
      holdFor(testTimeSecs seconds))
    .protocols(httpProtocol)
}
- Finally, create the last Gatling simulation class in a file named “ServletStubDirectSimulation.scala”:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

/**
 * This simulation performs requests directly against the servlet stub.
 *
 * @author Ivan Krizsan
 */
class ServletStubDirectSimulation extends HttpProxySimulation {
  val baseURL = "http://localhost:8080/ProxiedServiceStub/ServiceStubServlet"
  val baseName = "ServletStubDirect"
  val requestName = baseName + "-request"
  val scenarioName = baseName + "-scenario"

  val httpProtocol = http
    .baseURL(baseURL)
    .acceptHeader("text/plain")
    .userAgentHeader("Gatling")

  val testScenario = scenario(scenarioName)
    .during(testTimeSecs) {
      exec(http(requestName)
        .post("")
        .body(StringBody(requestBody))
        .header("Content-Type", "text/plain")
        .check(status.is(expectedHttpStatus))
        .check(regex(expectedResponseBodyPart).exists))
        .pause(minWaitMs, maxWaitMs)
    }

  setUp(
    testScenario
      .inject(atOnceUsers(noOfUsers)))
    .throttle(reachRps(noOfRequestPerSeconds) in (rampUpTimeSecs seconds),
      holdFor(testTimeSecs seconds))
    .protocols(httpProtocol)
}
- Perform a Maven clean and a Maven build on the load-test project.
All the above concrete Gatling load-tests are similar in that they consist of the following parts:
- Create a HTTP protocol.
The HTTP protocol specifies which (base) URL should be used by the test scenario(s) using the protocol, which HTTP headers should be set on the requests, and so on.
- Create a test scenario.
A test scenario specifies what actions should be taken as far as one type of user or client is concerned:
– The duration and timing of the test.
– What request is to be sent (a HTTP POST request) and what the request should contain in terms of body contents and additional HTTP headers.
– How the expected outcome should be verified.
- A simulation.
This is the block that starts with setUp.
The simulation defines what should happen during the load-test, possibly orchestrating multiple scenarios each representing a different type of user or client.
Our simulation has but one single client.
The Gatling load-test project should now be ready for use.
Load-Test the Servlet Stub
The first exercise will be to load-test the servlet stub, without going through any proxy. We do this in order to obtain a baseline for the subsequent load-tests and to ensure that the stub itself is able to cope with the load.
- Start the web application containing the servlet stub we developed earlier.
- Verify that the service stub is up and running by issuing a request in a browser to http://localhost:8080/ProxiedServiceStub/ServiceStubServlet
- In the “httpproxyloadtest” project we created earlier, locate the “Engine.scala” file in the default package in src/test/scala, right-click on this class and select Run As Scala Application.
This starts Gatling, which discovers the simulations in the default package, and asks you to select the simulation which you want to run.
You should see this in the console view in your Eclipse IDE:
Choose a simulation number:
     [0] OneMuleJettyProxySimulation
     [1] OneMuleNonblockingHttpProxySimulation
     [2] OneMulePatternHttpProxySimulation
     [3] ServletStubDirectSimulation
- Select the ServletStubDirectSimulation by, in my case, entering 3 and pressing return.
- When asked for a simulation id, press return to use the default value.
If you run a simulation several times, you can enter a custom name for the simulation run here. Gatling appends a string of digits to each simulation run name, so old simulation runs will not be deleted even if you choose to always use the default value.
- When asked for a run description, press return to use the default value.
This is an optional run description that will be visible in the report generated after the simulation has finished running.
- Wait for about 60 seconds, which is the duration of our load-tests, and observe the console output.
- When Gatling has finished executing the test, refresh the project in Eclipse, expand the target directory and the results directory inside it and, finally, expand the only child directory of the results directory.
The view in the package explorer should look something like this:
- Right-click the index.html file and select Open With → Web Browser.
You should see the graphical Gatling report for the simulation run in the Eclipse IDE.
- Examine the part of the report containing statistics of the simulation run.
My version looks like this:
Here we can see that out of 20928 requests, all succeeded and none failed, and that there were approximately 345 requests per second.
We can also see the response time distribution – there is also a dedicated bar chart for this a bit further down in the report:
- Scroll down a bit and locate the Response Time Percentiles over Time (OK) graph.
This graph shows how the response time varied over time.
Note the small handles in the overview graph below the main graph. These allow you to zoom in on a smaller section of the graph to examine details.

Gatling simulation run report response time percentiles over time section for the servlet stub load-test.
- Below that graph there is a graph showing the Number of Requests Per Second.
My version of the graph looks like this:
- Finally there is a Number of Responses Per Second graph.
For the test we just ran, this graph looks more or less identical to the Number of Requests Per Second graph.
From the above, we can conclude that the stub does cope with the load and, in most cases, adds only a few milliseconds of processing time on top of the 500 millisecond delay we implemented in the stub servlet.
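The observed throughput also agrees with a back-of-the-envelope ceiling: with 200 users, each iteration spends 500 milliseconds in the stub plus a 20 to 100 millisecond pause, so Little's law caps the request rate well below the 600 requests per second throttle. A sketch, where the 60 millisecond average pause is my assumption (the midpoint of the configured pause interval):

```java
// Back-of-the-envelope throughput ceiling for a closed workload (Little's law).
public class ThroughputEstimate {
    // Maximum requests per second: concurrency divided by time per iteration.
    static double maxRps(final int users, final double iterationTimeSecs) {
        return users / iterationTimeSecs;
    }

    public static void main(final String[] args) {
        final int noOfUsers = 200;          // atOnceUsers in the simulations
        final double serviceTimeSecs = 0.5; // delay in the servlet stub
        final double avgPauseSecs = 0.06;   // assumed average of the 20-100 ms pause
        System.out.println(Math.round(maxRps(noOfUsers, serviceTimeSecs + avgPauseSecs))); // 357
    }
}
```

The measured ~345 requests per second is close to this ~357 requests per second ceiling, which suggests the stub itself adds very little overhead.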
Load-Test the Mule Pattern HTTP Proxy
We are now going to run the load-test on the HTTP proxy implemented using the Mule HTTP proxy pattern. The section of the Mule configuration file that implements this proxy looks like this:
<!-- The Mule pattern HTTP proxy. Available in earlier versions of the
    Mule ESB, deprecated in Mule ESB 3.6. -->
<pattern:http-proxy name="mulePatternHttpProxy">
    <http:inbound-endpoint address="http://0.0.0.0:9101"/>
    <http:outbound-endpoint
        address="http://0.0.0.0:8080/ProxiedServiceStub/ServiceStubServlet"
        responseTimeout="10000"/>
</pattern:http-proxy>
- In the “httpproxyloadtest” project we created earlier, locate the “Engine.scala” file in the default package in src/test/scala, right-click on this class and select Run As Scala Application.
- Choose the OneMulePatternHttpProxySimulation simulation, which should be simulation number 2.
- Use the default simulation id and run description by pressing the enter key twice.
- Wait until the simulation has finished.
- Refresh the load-test project in Eclipse.
- Navigate to the simulation report directory.
In my case it is target/results/onemulepatternhttpproxysimulation-1425713094827.
- Open the index.html file in the report directory using the IDE web browser.
- Examine the different parts of the report as we did when we load-tested the servlet stub.
In the statistics section we can note the following:
- There was a total of 2075 requests, of which 1933 succeeded and 142 failed.
- Approximately 26 requests were processed per second.
- The shortest response time was 2 milliseconds.
Since there is a delay of 500 milliseconds in the servlet stub, this is obviously the response time of a request that failed in the proxy before reaching its final destination.
- The mean response time was 5933 milliseconds.
Below the statistics section there is a section we did not see in the previous report; the error section:
Here we can see the different types of errors that occurred during the simulation and the number of errors of each type. Errors such as “Connection reset by peer” and “Remotely closed” indicate that the server, i.e. our Mule proxy, has terminated the connections, while “connection timed out” tells us that the client, which is Gatling, has timed out because the server (proxy) did not respond in time.
Looking at the response time distribution section in the report, we can see failures at many different response times:

Gatling simulation run report response time distribution section for the Mule pattern HTTP proxy load-test.
Conclusions:
- The Mule HTTP pattern proxy is not capable of coping with even 10% of the number of requests that the servlet stub was.
- There are a (small) number of requests that complete under 500 milliseconds.
Since there is a delay of 500 milliseconds in the servlet stub, such response times should not be possible; they indicate some kind of malfunction in the proxy.
- The proxy fails when put under load.
As we saw in the error report, approximately 65% of the failures were caused by failure in the proxy.
Load-Test the Mule Jetty HTTP Proxy
The time has now come to test the home-grown HTTP proxy that has a Jetty inbound endpoint. The advantage of the Jetty transport in Mule is that it, even prior to Mule 3.6, had support for non-blocking I/O. Unfortunately there is only an inbound endpoint and no outbound endpoint available in the Jetty transport. For simplicity’s sake I will refer to the proxy that has the Jetty inbound endpoint as the “Jetty proxy”.
The Jetty connector has a few configuration options, so we will run the load-test a couple of times tweaking the settings between each time.
The following procedure will be used each time we are to run a load-test of the Jetty proxy:
- In the “httpproxyloadtest” project we created earlier, locate the “Engine.scala” file in the default package in src/test/scala, right-click on this class and select Run As Scala Application.
- Choose the OneMuleJettyProxySimulation simulation, which should be simulation number 0.
- Enter a simulation id and press the enter key.
The simulation id will be used in the name of the directory containing the report from the load-test.
- When asked for a run description, press the enter key.
- Wait until the simulation has finished.
- Refresh the load-test project in Eclipse.
- Navigate to the simulation report directory.
Jetty Proxy With Continuations
The first load-test run will be on the Jetty proxy that is using continuations, which is the Jetty name for non-blocking I/O. Its Mule configuration looks like this:
<!-- The HTTP proxy which uses a Jetty inbound endpoint with non-blocking IO
    and a regular outbound HTTP endpoint.
    The regular HTTP outbound endpoint has been deprecated in Mule ESB 3.6,
    but nowhere in the Mule 3.6 documentation are there any indications of
    the Jetty transport having been deprecated. -->
<jetty:connector name="jettyNonblockingConnector" useContinuations="true"/>
<flow name="httpProxyWithJettyNonblockingInbound">
    <jetty:inbound-endpoint connector-ref="jettyNonblockingConnector"
        address="http://0.0.0.0:9201" exchange-pattern="request-response"/>
    <!-- Propagate at least the HTTP Content-Type in this simplistic proxy. -->
    <copy-properties propertyName="Content-Type"/>
    <http:outbound-endpoint
        address="http://0.0.0.0:8080/ProxiedServiceStub/ServiceStubServlet"
        responseTimeout="10000"/>
</flow>
Having run the load-test we examine the report:
In the statistics section we can see the following:
- There was a total of 1869 requests, of which 1685 succeeded and 184 failed.
- Approximately 29 requests were processed per second.
- The shortest response time was 503 milliseconds.
- The mean response time was 6410 milliseconds.
In the error section of the report we can see that there was only one type of error:
The type of error seen above tells us that increasing the client timeout may reduce the number of failures, since the server has not terminated any connections during the load-test.
Looking at the response time distribution section of the report, we can see that either the requests succeeded within a reasonable time or they failed:

Gatling simulation run report response time distribution section for the first Mule Jetty HTTP proxy load-test.
From the diagram we see that for the requests that succeeded, an overhead of approximately 300 milliseconds was added.
Further experiments, which I will not show here, hint that increasing the number of receiver threads reduces the number of errors, even down to zero, but also increases the latency. The number of receiver threads is configured on the Jetty connector as in this example:
<jetty:connector name="jettyNonblockingConnector" useContinuations="true">
    <receiver-threading-profile maxThreadsActive="100" maxThreadsIdle="100"/>
</jetty:connector>
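A rough way to see why a thread pool of this size helps: with a blocking outbound endpoint, each in-flight request holds a receiver thread for at least the 500 millisecond stub delay, so by Little's law the pool must be about the sustained request rate times the hold time. A sketch, where the 200 requests-per-second target rate is my assumption for illustration, not a measured value:

```java
// Estimate of receiver threads needed when request handling blocks.
public class ReceiverThreadEstimate {
    // Little's law: threads in use ≈ arrival rate × time each request holds a thread.
    static double threadsNeeded(final double requestsPerSecond, final double holdTimeSecs) {
        return requestsPerSecond * holdTimeSecs;
    }

    public static void main(final String[] args) {
        final double targetRps = 200;    // assumed sustained request rate
        final double holdTimeSecs = 0.5; // stub delay; time a request occupies a thread
        System.out.println((int) Math.ceil(threadsNeeded(targetRps, holdTimeSecs))); // 100
    }
}
```

Under these assumptions, sustaining 200 requests per second requires about 100 threads, which matches the maxThreadsActive value in the connector example above.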
Jetty Proxy Without Continuations
The second load-test run will be on the Jetty proxy with continuations disabled. The flow itself is identical to before, the difference lies in the Jetty connector which is configured like this:
<jetty:connector name="jettyNonblockingConnector" useContinuations="false"> </jetty:connector>
Selected parts of the load-test report look like this:
In the statistics section we can see the following:
- There was a total of 2157 requests, all of which succeeded.
- Approximately 33 requests were processed per second.
- The shortest response time was 509 milliseconds.
- The mean response time was 5481 milliseconds.
This was quite a surprise – not only did the proxy outperform the version with continuations, but the success rate was 100%.
The response time distribution diagram shows the drawback of not using continuations:

Gatling simulation run report response time distribution section for the second Mule Jetty HTTP proxy load-test.
The majority of the requests suffer a more than ten-fold overhead.
This can also be seen in the statistics report section; the 50th percentile of the response time is 5951 milliseconds when not using continuations, but only 508 milliseconds when using continuations.
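The ten-fold claim can be checked directly from the two 50th-percentile values in the reports:

```java
// Overhead ratio between the two Jetty configurations, computed from the
// 50th-percentile response times quoted above.
public class OverheadRatio {
    static double ratio(final double withoutContinuationsMs, final double withContinuationsMs) {
        return withoutContinuationsMs / withContinuationsMs;
    }

    public static void main(final String[] args) {
        // 5951 ms without continuations vs 508 ms with continuations.
        System.out.printf("%.1f%n", ratio(5951, 508)); // prints 11.7
    }
}
```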
If we click the (only) request group OneMuleJettyHttpProxy-requests in the statistics section, as marked in the figure below, a new report section named Latency Time Percentiles Over Time (OK) will be shown.
The Latency Time Percentiles Over Time (OK) report section looks like this for the Jetty proxy without continuations load-test:

Gatling simulation run report latency percentiles over time section for the second Mule Jetty HTTP proxy load-test.
This confirms the high latency of requests through a proxy that does not use continuations, and also shows how the latency varies over time.
Conclusions:
- Regardless of whether using continuations or not, the HTTP proxy with an inbound Jetty endpoint was not able to process more than about 2000 requests during the load-test.
- The Jetty proxy with continuations can give us a low latency, if we can accept failures in about 10% of the requests.
- A Jetty proxy that does not use continuations does not display any errors but adds a considerable overhead for each request.
- The Jetty proxy that uses continuations can be configured to give us zero failures by allocating a larger number of threads, but its characteristics then become very similar to those of a HTTP proxy not using continuations, with high latency.
- The outbound endpoint, which is still a HTTP endpoint with blocking I/O, play a significant role affecting the performance of the proxy.
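As a sketch of the thread-allocation point above: in the Mule 3 Jetty transport, the receiver thread pool can be sized on the connector. The connector name and thread count below are illustrative assumptions, not the configuration used in these load-tests:

```xml
<!-- Illustrative sketch only: a Jetty connector with continuations enabled
     and a larger receiver thread pool. The values are assumptions, not the
     configuration used in the load-tests. -->
<jetty:connector name="jettyContinuationsConnector" useContinuations="true">
    <receiver-threading-profile maxThreadsActive="500"/>
</jetty:connector>
```

Raising the pool trades latency for reliability, as noted in the conclusions above.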
Load-Test the Mule Non-Blocking HTTP Transport
We have finally arrived at what is, at least as far as I am concerned, the much anticipated non-blocking HTTP Mule transport and its load-tests. I use the plural form here since we are going to run two load-tests: I did not fully understand the implications of the configuration I first used.
By now you should have quite some experience of running Gatling load-tests in Eclipse. We’ll use the same procedure as before, except that we’ll choose simulation 1, OneMuleNonblockingHttpProxySimulation, when Gatling asks us which load-test to run.
Mule Non-Blocking HTTP Without Persistent Connections
The original settings for the proxy using the new non-blocking HTTP transport used non-persistent connections:
<!-- The HTTP proxy using the new non-blocking HTTP listener and requester. -->
<http:listener-config name="nonblockingHttpListenerConfig"
    host="0.0.0.0" port="9301" usePersistentConnections="false"/>
<http:request-config name="nonblockingHttpRequestConfig"
    host="0.0.0.0" responseTimeout="10000" usePersistentConnections="false"/>
<flow name="muleNonblockingHttp">
    <http:listener config-ref="nonblockingHttpListenerConfig" path="/"/>
    <http:request config-ref="nonblockingHttpRequestConfig"
        path="ProxiedServiceStub/ServiceStubServlet" port="8080" method="POST"/>
</flow>
Running the load-test with the above proxy implementation, we can see that the performance is significantly better than with the other proxies but that there still are a few errors:

Gatling simulation run report statistics section for the first Mule non-blocking HTTP proxy load-test.
We can also see that the response times are lower, with the 50th percentile adding only 137 milliseconds to the 500 millisecond delay in the servlet stub.
The number of requests per second is up at 193, which is around five times the result we have seen with the best of the other HTTP proxies.
Looking at the errors, we see that they are all some kind of failure in the proxy:
Note that the proxy did reply even to these requests, returning an HTTP status code of 500 to the client.
Mule Non-Blocking HTTP With Persistent Connections
Having realized my mistake, I changed the configurations of both the listener and requester to use persistent connections. The configuration now looks like this:
<!-- The HTTP proxy using the new non-blocking HTTP listener and requester. -->
<http:listener-config name="nonblockingHttpListenerConfig"
    host="0.0.0.0" port="9301" usePersistentConnections="true"/>
<http:request-config name="nonblockingHttpRequestConfig"
    host="0.0.0.0" responseTimeout="10000" usePersistentConnections="true"/>
<flow name="muleNonblockingHttp">
    <http:listener config-ref="nonblockingHttpListenerConfig" path="/"/>
    <http:request config-ref="nonblockingHttpRequestConfig"
        path="ProxiedServiceStub/ServiceStubServlet" port="8080" method="POST"/>
</flow>
Running the load-test again, we can see that the performance of the proxy increased a little and that it performed without any errors:

Gatling simulation run report statistics section for the second Mule non-blocking HTTP proxy load-test.
Not only are there no errors, but the performance is even better:
- The total number of requests processed is 14361.
- The number of requests per second is 236, up from the 193 we saw with non-persistent connections.
- Response time 50th percentile is at 506 milliseconds.
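If further tuning were needed, the requester's connection pool could also be sized explicitly. The attribute values below are a hypothetical sketch of my own, not part of the tested configuration; consult the Mule 3.6 HTTP connector documentation before relying on them:

```xml
<!-- Hypothetical sketch: explicitly sizing the requester connection pool.
     maxConnections and connectionIdleTimeout values are illustrative. -->
<http:request-config name="nonblockingHttpRequestConfig"
    host="0.0.0.0" responseTimeout="10000"
    usePersistentConnections="true"
    maxConnections="200" connectionIdleTimeout="30000"/>
```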
Final Conclusion
The new non-blocking HTTP transport in Mule 3.6 is the way to go if you are doing HTTP communication with Mule: it offers excellent performance and is able to cope with quite a load.
None of the HTTP transports in earlier versions of Mule are really performant. I get the feeling that the best you can aspire to is to reduce the number of errors under heavy load, in which case you should look at the Jetty HTTP transport for receiving requests.
A word of advice: do not let an excellent tool like Gatling rest unused. Several times during the writing of this article I had preconceptions about what I believed was the obvious outcome of a load-test, only to be surprised by the reports from Gatling.
Load-testing involves tweaking configurations, testing and testing again. We have seen that developing load-tests with Gatling and running them is very convenient from within a common IDE like Eclipse. Given that the load-tests are developed in a Maven project, I assume that automating them using, for instance, Jenkins will be simple.
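As a sketch of such automation (the plugin version and simulation class name are assumptions on my part), the Gatling Maven plugin could be added to the load-test project's pom.xml and invoked from a Jenkins job with `mvn gatling:execute`:

```xml
<!-- Hypothetical sketch: running the Gatling simulations from Maven so a CI
     server such as Jenkins can execute them. Version and simulation class
     name are illustrative assumptions. -->
<plugin>
    <groupId>io.gatling</groupId>
    <artifactId>gatling-maven-plugin</artifactId>
    <version>2.1.7</version>
    <configuration>
        <simulationClass>OneMuleNonblockingHttpProxySimulation</simulationClass>
    </configuration>
</plugin>
```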