This article will show how to monitor a Java Virtual Machine (JVM) running in a Docker container using JMX and the ELK stack, consisting of Elasticsearch, Logstash and Kibana, running in another Docker container. In addition, there will be a simple example of how to use Docker Compose.
In the example I will run an instance of the Mule ESB Community Edition in the JVM, since I am planning to write more on the subject, but the monitoring techniques shown here are applicable to anything running in a JVM that exposes a set of JMX managed beans.
Prerequisites
I will use Docker Machine and Docker Compose, both being part of the Docker Toolbox.
Depending on your operating system, these are the different setup paths:
- Ubuntu.
Install Docker on Ubuntu.
Install Docker Compose.
When I tried to install Docker Compose on Ubuntu, I resorted to manually downloading it, copying the binary to /usr/local/bin and making it executable (a sketch of this is shown after the list). In addition, I have to use sudo every time I run Docker Compose under Ubuntu.
- Windows.
Install Docker Toolbox. Docker Compose is not included in the Windows version as of the writing of this article.
- Mac OS X.
Install Docker Toolbox.
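If you end up installing Docker Compose manually on Ubuntu, the following sketch shows the gist of it. Note that the version number in the URL is an assumption; check the Docker Compose releases page for the current one:
# Download the Docker Compose binary to /usr/local/bin (replace 1.4.2 with the current release).
sudo curl -L https://github.com/docker/compose/releases/download/1.4.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# Make the binary executable.
sudo chmod +x /usr/local/bin/docker-compose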
In addition I will use the Docker image containing the ELK stack created in an earlier article. I recommend building this image before attempting the example in this article.
Finally, you will need to download the Mule CE runtime, from which a few configuration files will be copied. I have used version 3.7.0, but the important thing is that the version matches the version of Mule in the Docker image, so that you get the correct configuration files.
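Downloading and unpacking the runtime may, for instance, look like this. This is a sketch; the archive name assumes the 3.7.0 standalone distribution:
# Unpack the Mule CE standalone archive in the current directory.
tar -xzf mule-standalone-3.7.0.tar.gz
This produces a directory named mule-standalone-3.7.0 containing, among other things, the apps and conf directories referred to below.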
Starting Docker
Again, depending on your operating system, there are different ways to get Docker up and running.
- Ubuntu.
No further preparations are needed. Just run Docker with “sudo docker” and Docker Compose with “sudo docker-compose”.
- Windows.
Keep on reading the instructions on how to install Docker Toolbox for Windows if you did not finish them earlier.
To start Docker at a later occasion, use the Docker Quickstart Terminal.
- Mac OS X.
Keep on reading the instructions on how to install Docker Toolbox for Mac OS X if you did not finish them earlier.
To start Docker at a later occasion, use the Docker Quickstart Terminal.
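Regardless of operating system, you can verify that the installation works before proceeding. A quick sanity check; Ubuntu users prefix the commands with sudo, OS X and Windows users run them in the Docker Quickstart Terminal:
# Show the Docker client and server versions.
docker version
# OS X and Windows only: list the Docker machines and their state.
docker-machine ls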
Running the Mule CE Docker Container
In the example I will be running an instance of the Mule ESB in a Docker container with one Mule application that generates events that we can later observe. Information about the Docker image that will be used can be found here. Three mount points in the Docker image will be used:
- /opt/mule/apps
Contains the default Mule application as well as the small Mule event-generating application.
- /opt/mule/conf
Contains Mule configuration, Mule application configuration and the configuration files of the Java Service Wrapper from Tanuki Software. The latter will be used to configure Mule to expose JMX managed beans.
For additional details on JMX monitoring of a Mule ESB instance, please refer to my earlier article here.
- /opt/mule/logs
Logs from the Mule ESB and the Mule applications running in it will be written to this directory.
For each of the mount points listed I will create a directory on the host. Note that the usual Docker limitations apply concerning the locations of these directories if you are running Docker under OS X or Windows, as described here.
- Download and unpack the Mule CE runtime.
- Create a directory that is to contain the directories for the three mount points.
I call this directory “MuleShared”. (A consolidated shell sketch of the directory setup in the steps below is shown after this list.)
- In the MuleShared directory, create three directories named “apps”, “conf” and “logs”.
- Copy the contents of the apps directory from the Mule CE runtime to the MuleShared/apps directory just created.
In my case there is just one single directory “default” containing the default Mule application.
- Copy the contents of the conf directory from the Mule CE runtime to the MuleShared/conf directory just created.
In Mule 3.7.0 CE there are five files.
- In the MuleShared/apps directory, create a directory named “mule-perpetuum-mobile”.
This is the directory that contains the event-generating example Mule application.
- In the MuleShared/apps/mule-perpetuum-mobile directory, create a file named “mule-config.xml” with the following contents:
<?xml version="1.0" encoding="UTF-8"?> <mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:quartz="http://www.mulesoft.org/schema/mule/quartz" xmlns:vm="http://www.mulesoft.org/schema/mule/vm" xmlns:client="http://www.mulesoft.org/schema/mule/client" xmlns:management="http://www.mulesoft.org/schema/mule/management" xsi:schemaLocation=" http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd http://www.mulesoft.org/schema/mule/quartz http://www.mulesoft.org/schema/mule/quartz/current/mule-quartz.xsd http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd http://www.mulesoft.org/schema/mule/client http://www.mulesoft.org/schema/mule/client/current/mule-client.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd http://www.mulesoft.org/schema/mule/management http://www.mulesoft.org/schema/mule/management/current/mule-management.xsd"> <description> An integration perpetuum mobile</description> <!-- Quartz connector with one single thread to prevent parallel execution of jobs. --> <quartz:connector name="oneThreadQuartzConnector"> <quartz:factory-property key="org.quartz.threadPool.threadCount" value="1"/> </quartz:connector> <!-- Flow that periodically generates events that are sent to another flow. --> <flow name="eventGeneratingFlow"> <quartz:inbound-endpoint jobName="eventGeneratingJob" repeatInterval="500" repeatCount="-1" connector-ref="oneThreadQuartzConnector"> <quartz:event-generator-job> <quartz:payload>go</quartz:payload> </quartz:event-generator-job> </quartz:inbound-endpoint> <logger level="ERROR" message="Generated an event!"/> <vm:outbound-endpoint path="eventReceiverEndpoint" exchange-pattern="one-way"/> </flow> <!-- Flow that receives events. --> <flow name="eventReceivingFlow"> <vm:inbound-endpoint path="eventReceiverEndpoint" exchange-pattern="one-way"/> <logger level="ERROR" message="Received an event!"/> </flow> </mule>
- Set the permissions of the MuleShared directory and all underlying files and directories so as to allow anyone to read and write to it.
The following terminal command can be used:
chmod -R +rwx ./MuleShared
- If you are on OS X or Windows, launch your Docker machine.
docker-machine start default
This assumes that the name of your Docker machine is “default”.
- OS X and Windows users also need to find the IP address of their Docker machine:
docker-machine ip default
This again assumes that the name of your Docker machine is “default”.
If you are running Docker under Ubuntu you need to start a Docker container and then issue the following command to learn the IP address of the container:
sudo docker inspect [container id or name here] | grep IPAddress
- Open the file MuleShared/conf/wrapper.conf and add the following wrapper parameters:
Note! Replace the IP address 192.168.99.100 with the IP address from the previous step.
Also note that the numbers at the end of the property names may need to be adjusted so that the properties in the wrapper.conf file are numbered consecutively.
In my case the last property in my wrapper.conf file was named “wrapper.java.additional.14” so my property-numbering starts at 15.
# Enables remote JMX management without authentication or SSL over port 1096.
wrapper.java.additional.15=-Dcom.sun.management.jmxremote
wrapper.java.additional.16=-Dcom.sun.management.jmxremote.port=1096
wrapper.java.additional.17=-Dcom.sun.management.jmxremote.rmi.port=1096
wrapper.java.additional.18=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.19=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.20=-Djava.rmi.server.hostname=192.168.99.100
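For reference, the directory setup in the steps above can also be scripted. A minimal sketch, assuming that the unpacked Mule CE runtime is located in a directory named mule-standalone-3.7.0 next to the MuleShared directory:
# Create the three mount-point directories.
mkdir -p MuleShared/apps MuleShared/conf MuleShared/logs
# Copy the default application and the configuration files from the Mule CE runtime.
cp -R mule-standalone-3.7.0/apps/* MuleShared/apps/
cp -R mule-standalone-3.7.0/conf/* MuleShared/conf/
# Create the directory for the event-generating example application.
mkdir MuleShared/apps/mule-perpetuum-mobile
# Allow anyone to read and write the shared directories.
chmod -R +rwx ./MuleShared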
The Mule Docker container can now be started using the following command:
docker run -d --name mule_server -p 1096:1096 -v [insert your absolute path here]/MuleShared/apps:/opt/mule/apps -v [insert your absolute path here]/MuleShared/conf:/opt/mule/conf -v [insert your absolute path here]/MuleShared/logs:/opt/mule/logs vromero/mule:3.7.0
The above command starts the Mule Docker container in the background (the -d option), gives it the name “mule_server” (the --name option), publishes port 1096 (the -p option) and mounts the three directories we created earlier at the corresponding paths in the Docker container (the -v options).
With the Mule Docker container running, several log files should appear in the MuleShared/logs directory and a file named “mule-perpetuum-mobile-anchor.txt” should appear in the MuleShared/apps directory. Stop the Mule Docker container when you are finished observing it using the command “docker stop [container id]”. You may also want to remove the container using “docker rm [container id]”.
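While observing the running container, you can also verify that the JMX endpoint is reachable from outside it. One quick way, assuming a JDK is installed on the host, is to point jconsole at the JMX service URL; replace the IP address below with that of your Docker machine or container:
jconsole service:jmx:rmi:///jndi/rmi://192.168.99.100:1096/jmxrmi
If the connection succeeds, the java.lang:type=OperatingSystem managed bean, including the SystemCpuLoad attribute that will be polled later in this article, can be inspected in the MBeans tab.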
Running the ELK Docker Container
As before, I will use my own ELK Docker image that I created in an earlier article. This Docker image is not available at Docker Hub so you have to build it yourself if you want to try the example.
As with the Mule Docker image, certain preparations need to be made before starting an ELK Docker container. Two mount points will be used with the ELK stack: one that contains configuration and another that will contain logs.
- Create a directory that is to contain the directories for the two mount points.
I call this directory “ELKShared”.
- Set the permissions of the ELKShared directory so as to allow anyone to read and write to it.
The following terminal command can be used:
chmod +rwx ./ELKShared
- In the ELKShared directory, create two directories named “config” and “logs”.
- In the ELKShared/logs directory create three directories named “elasticsearch”, “kibana” and “logstash”.
- In the ELKShared/config directory create three directories named “elasticsearch”, “kibana” and “logstash”.
- In the ELKShared/config/elasticsearch directory create a file named “elasticsearch.yml” with the following contents:
This changes the location of the Elasticsearch log files to a directory which will be mounted in the Docker host.
# Commented-out examples have been removed.
# Path to log files:
path.logs: /opt/logs/elasticsearch
- Download the logging.yml Elasticsearch configuration file and save it in the ELKShared/config/elasticsearch directory.
- In the ELKShared/config/kibana directory create a file named “kibana.yml” with the following contents:
The only modification is the configuration of a path to the file that Kibana is to write its log to.
# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
log_file: /opt/logs/kibana/kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index
- In the ELKShared/config/logstash directory, create a file named “filter-config.xml” with the contents below.
This is a Logstash filter configuration that, if the type of a received event is “jmx” and its metric_path contains either “ProcessCpuLoad” or “SystemCpuLoad”, adds a field “cpuLoad” whose value is that of “metric_value_number” multiplied by 100. The event processing is implemented using the Ruby filter plug-in.
filter {
    if [type] == "jmx" {
        if ("ProcessCpuLoad" in [metric_path] or "SystemCpuLoad" in [metric_path]) {
            ruby {
                code => "event['cpuLoad'] = event['metric_value_number'] * 100"
            }
        }
    }
}
- In the ELKShared/config/logstash directory, create a file named “input-config.xml” with the contents below.
The only source of events in this Logstash input configuration is the JMX plug-in, whose configuration is located in the /opt/config/logstash/jmx directory. Polling for JMX data will occur once every 15 seconds. In addition, all events generated by the JMX plug-in will have a field “type” with the value “jmx”.
input {
    jmx {
        path => "/opt/config/logstash/jmx"
        polling_frequency => 15
        type => "jmx"
    }
}
- Next to the two other Logstash configuration files, create a file named “output-config.xml” with the following contents:
Logstash will send events to the Elasticsearch instance at “localhost” using the elasticsearch output plug-in and also print the events to the console using the stdout output plug-in. Printing the events to the console is only to allow us to see what happens in the example – this kind of configuration is not suitable in a production environment.
output {
    elasticsearch {
        host => localhost
    }
    stdout {
        codec => rubydebug
    }
}
- In the ELKShared/config/logstash directory, create a directory named “jmx”.
- In the ELKShared/config/logstash/jmx directory, create a file named “logstash-jmx.conf” with the contents below.
This configuration file tells the Logstash JMX plug-in which RMI server to connect to and which JMX attributes to poll for data.
For additional information about the Logstash JMX plug-in, see this web page.
Note that the host is specified in two places and that the value is “muleserver”. The value “muleserver” is the alias that we will later define when we start the Mule Docker container. (A sketch of an extended query that also polls the process CPU load is shown after this list.)
{ "host" : "muleserver", "port" : 1096, "url" : "service:jmx:rmi:///jndi/rmi://muleserver:1096/jmxrmi", "alias" : "mule_jvm", "queries" : [ { "object_name" : "java.lang:type=OperatingSystem", "object_alias" : "OperatingSystem", "attributes" : ["SystemCpuLoad"] } ] }
- Set the permissions of the ELKShared directory and all underlying files and directories so as to allow anyone to read and write to it.
The following terminal command can be used:
chmod -R +rwx ./ELKShared
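Since the Logstash filter created earlier also matches the attribute name “ProcessCpuLoad”, the query in the logstash-jmx.conf file can be extended to poll that attribute as well. A sketch of such an extended queries section; ProcessCpuLoad is, like SystemCpuLoad, exposed by the OperatingSystem managed bean of HotSpot-based JVMs:
"queries" : [
    {
        "object_name" : "java.lang:type=OperatingSystem",
        "object_alias" : "OperatingSystem",
        "attributes" : ["SystemCpuLoad", "ProcessCpuLoad"]
    }
]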
The ELK stack has some problems regarding file permissions of log files, which I have tried to make the developers aware of, but for now we have to work around the problem:
- Start an ELK stack Docker container:
docker run --rm -p 5601:5601 -p 9200:9200 -p 5000:5000 -v [insert your absolute path here]/ELKShared/config:/opt/config -v [insert your absolute path here]/ELKShared/logs:/opt/logs krizsan/elk:v1
- Observe the terminal output.
After a while there should be an error similar to this:
Error: Failed to open /opt/logs/logstash/logstash.log for writing: Permission denied - /opt/logs/logstash/logstash.log
This is often a permissions issue, or the wrong path was specified?
You may be interested in the '--configtest' flag which you can use to validate logstash's configuration before you choose to restart a running system.
- Press ctrl-c to terminate the Docker container.
- Set the permissions of the ELKShared directory and all underlying files and directories so as to allow anyone to read and write to it:
chmod -R +rwx ./ELKShared
If you are using Mac OS X, chmod will not suffice: you have to use the Get Info window of the ELKShared directory and select Apply To Enclosed Items… in the cogwheel menu at the bottom to set the appropriate permissions.
- Try starting an ELK stack Docker container again:
docker run --rm -p 5601:5601 -p 9200:9200 -p 5000:5000 -v [insert your absolute path here]/ELKShared/config:/opt/config -v [insert your absolute path here]/ELKShared/logs:/opt/logs krizsan/elk:v1
- Observe the terminal.
After a while there should be a line saying (the number may differ):
INFO: [logstash-7a0e09ae7380-1442407170-11698] started
In the log directories there should now be log output written to the “elasticsearch.log”, “kibana.log” and “logstash.log” files. Do not delete these files! (A quick way of verifying that Elasticsearch is reachable is shown after this list.)
- Press ctrl-c to terminate the Docker container.
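While the ELK Docker container is still running, that is before the final step above, you can also verify that Elasticsearch is reachable from the host. A minimal check; OS X and Windows users use the IP address of their Docker machine, Ubuntu users the IP address of the container:
curl http://[your Docker machine IP]:9200/
Elasticsearch should answer with a small JSON document that contains, among other things, its version number.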
We have now finished the preparations for the two different Docker containers we will launch as part of the example.
Composing the Two Docker Containers
If you are on Windows, Docker Compose is not available to you at the time of writing, so you may want to skip to the end of this section.
Docker Compose allows us, among other things, to start and stop a number of services executed in Docker containers using one single command.
The Docker Compose configuration file, named “docker-compose.yml”, for the Mule Docker container and the ELK Docker container of this article’s example looks like this:
mule_server:
  image: vromero/mule:3.7.0
  ports:
    - "1096:1096"
  volumes:
    - ./MuleShared/apps:/opt/mule/apps
    - ./MuleShared/conf:/opt/mule/conf
    - ./MuleShared/logs:/opt/mule/logs
  stdin_open: false
  tty: false

elk_stack:
  image: krizsan/elk:v1
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5000:5000"
  volumes:
    - ./ELKShared/config:/opt/config
    - ./ELKShared/logs:/opt/logs
  links:
    - mule_server:muleserver
  stdin_open: false
  tty: false
The Docker Compose configuration file reference is available here; I will only briefly go through the above file:
- The section starting with “mule_server:” determines how the Mule Docker service will be started.
Note that “mule_server” is the name of the service, not the name of the Docker container that will be started. Unless a container name is explicitly specified, which is discouraged by the Docker documentation, a unique generated name will be used for each Docker container started by Docker Compose.
- “image:” specifies the Docker image that implements the service.
With Docker Compose you can either use a ready-made Docker image that is available locally or in the Docker Hub, or you can refer to a Dockerfile using the build key. In the latter case Docker Compose will build the Docker image before starting the service.
- The “ports” key has the same function as the -p option we used when starting a Docker container.
The format of the values is [host port]:[container port].
- The “volumes” key specifies where in the host file system the Docker container mount points are to be exposed.
The format of the values is [path in host]:[path in container]. As can be seen, relative paths may be used; the location of such a path is calculated relative to the location of the Docker Compose configuration file. In this example, the MuleShared and ELKShared directories are located in the same directory as the Docker Compose configuration file.
- The “stdin_open” key is equivalent to the -i flag of the Docker run command.
It tells Docker whether to keep the standard input stream to the Docker container open.
- Similarly, the “tty” key corresponds to the -t flag of the Docker run command.
In this example, Docker is not to allocate a TTY console connected to the started Docker containers.
- Finally, the “links” key is used to create a link between the service in which it is specified and a Docker container in another service.
The format of the value is [service name]:[service alias]. Recall that we used the alias “muleserver” in the Logstash JMX plug-in configuration file when specifying the host to query for JMX data.
Having saved the above Docker Compose configuration file, the two Docker containers can now be started using a single command (Ubuntu users need to add sudo):
docker-compose up
After the two Docker containers have started up properly, there will be console output similar to this at a regular interval:
elk_stack_1 | INFO: [logstash-69a1646d969e-1442407170-11698] started
elk_stack_1 | {
elk_stack_1 |              "@version" => "1",
elk_stack_1 |            "@timestamp" => "2015-09-20T11:36:25.765Z",
elk_stack_1 |                  "host" => "muleserver",
elk_stack_1 |                  "path" => "/opt/config/logstash/jmx",
elk_stack_1 |                  "type" => "jmx",
elk_stack_1 |           "metric_path" => "mule_jvm.OperatingSystem.SystemCpuLoad",
elk_stack_1 |   "metric_value_number" => 0.002808607847464561,
elk_stack_1 |               "cpuLoad" => 0.2808607847464561
elk_stack_1 | }
The output indicates that Logstash has polled the OperatingSystem.SystemCpuLoad attribute in the JVM running Mule, the value obtained being displayed at the key “metric_value_number”.
The value at the key “cpuLoad” was added by the Logstash filter that we defined in the file “filter-config.xml”.
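If you want to convince yourself that the link between the two containers is in place, you can inspect the hosts file of the running ELK container. The container name is generated by Docker Compose, so look it up with “docker ps” first (Ubuntu users need sudo):
docker exec [ELK container name] cat /etc/hosts
There should be a line mapping the alias “muleserver” to the IP address of the Mule container.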
Congratulations, you are now monitoring the Mule ESB instance running in one Docker container using the ELK stack running in another Docker container!
To stop the two containers, press ctrl-c. If you want to remove all the Docker containers started by Docker Compose, this can conveniently be accomplished using the command “docker-compose rm”.
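Docker Compose can also run the services in the background. A brief sketch of the lifecycle commands; again, Ubuntu users need to prefix them with sudo:
docker-compose up -d
docker-compose logs
docker-compose stop
docker-compose rm
The -d option starts the services detached from the terminal, “logs” shows the aggregated console output of the services, “stop” stops the containers and “rm” removes them.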
Starting the Two Docker Containers in Windows
If you are using Windows, you can start the two Docker containers in the usual manner using the following commands:
docker run -it --name mule_server -p 1096:1096 -v [insert your absolute path here]/MuleShared/apps:/opt/mule/apps -v [insert your absolute path here]/MuleShared/conf:/opt/mule/conf -v [insert your absolute path here]/MuleShared/logs:/opt/mule/logs vromero/mule:3.7.0
docker run --rm --link mule_server:muleserver -p 5601:5601 -p 9200:9200 -p 5000:5000 -v [insert your absolute path here]/ELKShared/config:/opt/config -v [insert your absolute path here]/ELKShared/logs:/opt/logs krizsan/elk:v1
Note the --link option added to the second command: it gives the ELK container the alias “muleserver” for the Mule container, which the host name in the Logstash JMX plug-in configuration relies on.
Looking at CPU Usage in Kibana
We have the raw figures, but it is even nicer to be able to look at a graph showing, for instance, the CPU usage.
- If you are using OS X or Windows, obtain the IP address of your Docker machine using the following command:
docker-machine ip [name of Docker machine]
If you are on Ubuntu, use “localhost” for the IP.
- Open the URL http://[your Docker machine IP]:5601
The Kibana web application should appear.
- If Kibana asks you to configure an index pattern, just click the Create button.
- Click the Discover menu in the upper left corner on the Kibana webpage.
You should see a lot of events.
- Click the clock symbol in the upper right corner.
Next to my clock it says “Last 15 minutes”. This is the period of time for which data is displayed. I change this to “Last 30 minutes”.
- Click “Auto-refresh”, which appears to the left of the clock symbol you just clicked, and change its value from “off” to “10 seconds”.
This should cause the display to be updated every 10 seconds.
- Click the clock symbol until the time period and refresh interval are hidden again.
- In the text field at the top of the page enter the following query and press return:
metric_path:"mule_jvm.OperatingSystem.SystemCpuLoad" AND host:muleserver
This filters the events so that only those containing system CPU load data from the muleserver host are displayed.
- Click the little diskette icon to the right of the text field in which you entered the query and save the query using the name “MuleServer_SystemCPULoad”.
- Click the Visualize menu to the right of the Discover menu at the top of the page.
- Select the Line chart visualization as step 1 in creating a new visualization.
- Select “From a saved search” in step 2 in creating a new visualization.
Then select the MuleServer_SystemCPULoad search that you just saved.
- Configure the metrics (y-axis) and buckets (x-axis) as shown in the picture below.
- When you are done, press the green button with the triangle that points to the right.
You should now see a diagram visualizing the CPU load of the Mule server during the last 30 minutes.
We can see that CPU usage is around 1% with occasional spikes around 5%.
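If you want to double-check the numbers behind the diagram, the same events can be queried directly from Elasticsearch, bypassing Kibana. A sketch using the Elasticsearch URI search API; replace the placeholder with the IP address used earlier:
curl 'http://[your Docker machine IP]:9200/logstash-*/_search?pretty&size=3&sort=@timestamp:desc&q=metric_path:"mule_jvm.OperatingSystem.SystemCpuLoad"'
The response contains the most recent matching events, including the metric_value_number and cpuLoad fields seen in the console output earlier.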
This concludes this part of the series. In the next part I will look closer at monitoring a Mule CE instance and the applications running in it using the technique shown above.