In this article I will revisit the example from my previous article Alerting with the ELK Stack and Elastalert. The same scenario as in the earlier article will be used: a Mule ESB CE instance with a Mule application will be monitored using JMX.
The new example will use Metricbeat instead of Logstash to poll JMX data from the Mule instance, and I will give a few words of advice in connection with JMX monitoring with Metricbeat. In addition I will use the most recent versions of Elasticsearch and Kibana available, which at the time of writing are 5.4.0, and the latest version of Elastalert that is compatible with Elasticsearch 5.4.0.
The complete updated example is available in the master branch of my repository on GitHub; the previous version is still available in the same repository, in the original branch.
Updates
The list below is an attempt to list all the changes of the example compared to the earlier article. There are a number of advantages that come with the update, such as more features in the newer versions of Kibana, Elasticsearch and Elastalert. I will make no attempt to list those here, but refer the curious to the change lists of the different products.
- Updated to Mule ESB CE version 3.8.1, using my own Docker image available here.
This Docker image contains Mule ESB CE on Alpine Linux with Oracle Java 8.
In addition, the image has the Jolokia agent installed and activated in Mule ESB, which exposes JMX data over HTTP.
- Uses Metricbeat instead of Logstash to poll JMX data from the Mule ESB instance.
Beats 5.4.0 was released recently and this release includes a Jolokia module for Metricbeat, which makes it possible to poll JMX data from a Java virtual machine using the HTTP API exposed by Jolokia.
- Created a Docker image with Mule ESB CE and Metricbeat.
This Docker image is built when the example is started using Docker Compose.
While it should be possible to poll JMX data from an instance of Metricbeat running in a separate Docker container, I have chosen to run Metricbeat in the same Docker container as Mule ESB.
- Uses the official Elasticsearch and Kibana Docker images from Elastic.
These images have X-Pack installed, which enables HTTP basic authentication when connecting to Elasticsearch and requires logging in to Kibana.
Note that X-Pack will run for a limited time under a trial license, after which it will be disabled.
- Updated my Elastalert image on Docker Hub with support for HTTP basic authentication.
This is needed in order for Elastalert to be able to connect to an Elasticsearch instance with X-Pack installed and enabled.
Jolokia Agent in Mule ESB CE
While it is not the focus of this article, I will briefly mention the modifications made to my Mule ESB CE Docker image in order to enable JMX monitoring over HTTP using the Jolokia Mule agent.
- The Jolokia Mule agent JAR-file is copied to the lib/opt directory in the Mule installation.
- A Mule application named “jolokia-enabler” is deployed in the Mule instance.
This application contains one single Mule configuration file that activates and configures the Jolokia Mule agent.
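A quick way to verify that the Jolokia agent is up and running is to query it directly over HTTP. The commands below are a minimal sketch; they assume that the container's Jolokia port 8899 is published on localhost (replace localhost with the IP address of the Docker virtual machine, for example 192.168.99.100, if you are not running Docker natively on Linux).

# Read a single JMX attribute, the JVM uptime, through the Jolokia agent.
curl "http://localhost:8899/jolokia/read/java.lang:type=Runtime/Uptime"

# List all MBeans and attributes exposed by Jolokia.
curl "http://localhost:8899/jolokia/list"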
Metricbeat and JMX Monitoring
As mentioned earlier, I chose to have Metricbeat running in the same Docker container as Mule ESB CE.
Mule ESB with Metricbeat Docker File
The Docker file used to create the Docker image with Mule ESB CE and Metricbeat is listed below.
# Mule ESB CE 3.8.1 with Metricbeat.
FROM ivankrizsan/mule-docker:3.8.1
MAINTAINER Ivan Krizsan, https://github.com/krizsan

# MetricBeat version.
ENV METRICBEAT_VERSION=5.4.0 \
    # MetricBeat installation directory.
    METRICBEAT_HOME=/opt/metricbeat

# Install MetricBeat, which is going to monitor the Mule instance.
RUN cd /opt && \
    wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-${METRICBEAT_VERSION}-linux-x86_64.tar.gz && \
    tar -xvvf metricbeat-${METRICBEAT_VERSION}-linux-x86_64.tar.gz && \
    mv metricbeat-${METRICBEAT_VERSION}-linux-x86_64 metricbeat && \
    mv ${METRICBEAT_HOME}/metricbeat.yml ${METRICBEAT_HOME}/metricbeat.example.yml && \
    mv ${METRICBEAT_HOME}/metricbeat /bin/metricbeat && \
    chmod +x /bin/metricbeat && \
    mkdir -p ${METRICBEAT_HOME}/conf ${METRICBEAT_HOME}/data ${METRICBEAT_HOME}/logs && \
    rm metricbeat-${METRICBEAT_VERSION}-linux-x86_64.tar.gz

# Copy the script used to launch Mule ESB and MetricBeat when a container is started.
COPY ./start-mule.sh /opt/
# Copy configuration files to MetricBeat configuration directory.
COPY ./metricbeat-conf/*.* ${METRICBEAT_HOME}/conf/

# Make the start-script executable.
RUN chmod +x /opt/start-mule.sh && \
    # Set the owner of all Mule-related files to the user which will be used to run Mule.
    chown -R ${RUN_AS_USER}:${RUN_AS_USER} ${MULE_HOME}

WORKDIR ${MULE_HOME}

# Default when starting the container is to start Mule ESB.
CMD [ "/opt/start-mule.sh" ]

# Define mount points.
VOLUME ["${MULE_HOME}/logs", "${MULE_HOME}/conf", "${MULE_HOME}/apps", "${MULE_HOME}/domains", "${METRICBEAT_HOME}/conf", "${METRICBEAT_HOME}/data", "${METRICBEAT_HOME}/logs"]

# Default http port
EXPOSE 8081
# JMX port.
EXPOSE 1099
# Jolokia port.
EXPOSE 8899
Note that:
- Metricbeat is installed into the /opt/metricbeat directory.
- A slightly customized start-script is used to start Metricbeat and Mule.
The start-script will be examined below.
- The Metricbeat configuration is copied into the Docker image.
This configuration is specific to this example and not a general default configuration.
More information about the Metricbeat configuration follows later.
- Both Mule and Metricbeat will be run by the mule user in a Docker container created from this image.
- In addition to the Mule-related directories, the Metricbeat conf, data and logs directories are also configured as Docker data volumes.
- Three ports are exposed by the Docker image: port 8081, the default Mule HTTP port; port 1099, the traditional JMX endpoint; and port 8899, the Jolokia HTTP endpoint over which JMX data can be read and written.
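Docker Compose will build this image automatically when the example is started, but the image can also be built and run by hand for a quick smoke test. The sketch below assumes that the Dockerfile is located in the ivankrizsan-mulewithmetricbeat directory referenced by the Docker Compose file; the image tag is just an example. Note that when the container is run standalone like this, Metricbeat's Elasticsearch output will not be reachable.

# Build the Mule ESB CE with Metricbeat image.
docker build -t mule-with-metricbeat ./ivankrizsan-mulewithmetricbeat

# Run a throw-away container, publishing the HTTP, JMX and Jolokia ports.
docker run --rm -p 8081:8081 -p 1099:1099 -p 8899:8899 mule-with-metricbeat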
Mule and Metricbeat Docker Image Start-Script
The start-script is based on the start-script from my Mule ESB CE Docker image found here.
#! /bin/sh

# Set the timezone. Base image does not contain the setup-timezone script, so an alternate way is used.
if [ "$SET_CONTAINER_TIMEZONE" = "true" ]; then
    cp /usr/share/zoneinfo/${CONTAINER_TIMEZONE} /etc/localtime && \
    echo "${CONTAINER_TIMEZONE}" > /etc/timezone && \
    echo "Container timezone set to: $CONTAINER_TIMEZONE"
else
    echo "Container timezone not modified"
fi

# Force immediate synchronisation of the time and start the time-synchronization service.
# In order to be able to use ntpd in the container, it must be run with the SYS_TIME capability.
# In addition you may want to add the SYS_NICE capability, in order for ntpd to be able to modify its priority.
ntpd -s

# Set RMI server IP address in the Mule ESB wrapper configuration as to make JMX reachable from outside the container.
if [ -z "$MULE_EXTERNAL_IP" ]
then
    echo "No external Mule ESB IP address set, using 192.168.99.100."
    MULE_EXTERNAL_IP="192.168.99.100"
else
    echo "Mule ESB external IP address set to $MULE_EXTERNAL_IP"
fi
sed -i -e"s|Djava.rmi.server.hostname=.*|Djava.rmi.server.hostname=${MULE_EXTERNAL_IP}|g" ${MULE_HOME}/conf/wrapper.conf

# Start MetricBeat in the background.
(cd ${METRICBEAT_HOME} && exec metricbeat -v -c ${METRICBEAT_HOME}/conf/metricbeat.yml) &

# Start Mule ESB.
# The Mule startup script will take care of launching Mule using the appropriate user.
# Mule is launched in the foreground and will thus be the main process of the container.
${MULE_HOME}/bin/mule console
Note that:
- The first part, in which the timezone is set, the time is synchronized and the external address of the Mule container is set, is identical to the original Mule start-script.
Please refer to my earlier article on time in Docker containers for more information.
- Metricbeat is started in the background.
This is the only addition made to the original Mule start-script.
The current directory is set to the Metricbeat home directory prior to starting Metricbeat, in order for Metricbeat to be able to locate its files and directories.
The Metricbeat configuration file metricbeat.yml in the conf directory is used to launch Metricbeat.
The -v option configures Metricbeat to log verbosely.
- Mule ESB is started as the main process of the Docker container.
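To check that Metricbeat was actually started alongside Mule, one can look for the process inside the container and follow the Metricbeat log, which the Docker Compose file described later maps to a host directory. The container name below is an assumption; use docker ps to find the name that Docker Compose actually generated.

# Find the names of the running containers.
docker ps --format '{{.Names}}'

# Verify that the metricbeat process is running inside the Mule container.
docker exec mule_ce_esb_1 ps | grep metricbeat

# Follow the Metricbeat log file on the host.
tail -f MuleShared/metricbeat-logs/mybeat.log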
Metricbeat Configuration
The Metricbeat configuration is entirely new in this version of the example and is located in the file metricbeat.yml. The Metricbeat configuration reference documentation is available here.
# Configuration for Metricbeat monitoring the Mule ESB instance.
metricbeat.modules:
  # The system module measures CPU and memory statistics of the Docker container
  # in which Mule ESB is running.
  - module: system
    metricsets:
      - cpu
      - memory
    enabled: true
    period: 10s
  # The Jolokia module fetches data from JMX MBeans exposed by the Mule ESB instance.
  - module: jolokia
    metricsets:
      - jmx
    enabled: true
    period: 1s
    # URL at which Jolokia exposes JMX metrics.
    hosts: ["localhost:8899"]
    namespace: "jmx-metrics"
    jmx.mappings:
      # Retrieve Mule ESB JVM uptime.
      - mbean: 'java.lang:type=Runtime'
        attributes:
          - attr: Uptime
            field: jvm.uptime
      # Retrieve Mule ESB JVM CPU load.
      - mbean: 'java.lang:type=OperatingSystem'
        attributes:
          - attr: ProcessCpuLoad
            field: jvm.process_cpu_load
      # Retrieve heap and non-heap memory usage in Mule ESB JVM.
      - mbean: 'java.lang:type=Memory'
        attributes:
          - attr: HeapMemoryUsage
            field: jvm.memory.heap_memory_usage
          - attr: NonHeapMemoryUsage
            field: jvm.memory.non_heap_memory_usage
      # Retrieve number of events received by a flow in the example Mule ESB application.
      # Important note!
      # The ordering of the Flow and type is significant for MetricBeats and should
      # be as presented by Jolokia performing a direct GET request using a browser.
      # In addition, as can be seen below, quotes in MBean names do not need to be escaped.
      - mbean: 'Mule.mule-perpetuum-mobile:Flow="eventReceivingFlow",type=org.mule.Statistics'
        attributes:
          - attr: TotalEventsReceived
            field: mule.perpetuum-mobile.eventReceivingFlow.TotalEventsReceived

# Metricbeat sends its output to Elasticsearch.
output.elasticsearch:
  hosts: ['elasticsearch:9200']
  # Remove/change username and password depending on Elasticsearch security configuration.
  # This username and password match those used by the Elastic Docker images.
  username: elastic
  password: changeme

# Dynamic reloading of Metricbeat's configuration.
metricbeat.config.modules:
  path: /opt/metricbeat/conf/*.yml
  reload.enabled: true
  reload.period: 10s

# Metricbeat logging configuration.
logging:
  # Logging level: debug, info, warning, error or critical.
  level: info
  to_files: true
  to_syslog: false
  files:
    # Directory in which to write log files.
    path: /opt/metricbeat/logs
    # Name of log file.
    name: mybeat.log
    # Keep this number of log files when rotating.
    keepfiles: 10
Note that:
- The Metricbeat configuration consists of a number of modules.
A Metricbeat module gathers metrics from one specific type of source, for example PostgreSQL database servers, Apache HTTPD servers or the server on which Metricbeat runs.
- The first module in the Metricbeat configuration file is the system module.
- A module may have multiple sets of metrics, which are made available by listing them under the metricsets tag.
In this example I want to gather metrics from the cpu and memory metricsets.
- The enabled tag determines whether the module is enabled or not.
- The period tag sets the time between the occasions at which metrics are gathered and sent out.
For certain modules, setting this time too short will result in failures when gathering metrics; please consult the documentation of the module in question for advice.
- The next Metricbeat module is the Jolokia module.
The Jolokia module collects JMX metrics using Jolokia (JMX-over-HTTP) once per second from localhost, port 8899.
- After the general Jolokia module configuration follows a number of JMX mappings.
One JMX mapping queries one or more attributes of a single JMX MBean for metrics.
- The first JMX mapping queries the java.lang:type=Runtime MBean for the uptime of the JVM in which the Mule ESB instance is running.
- The second JMX mapping queries the java.lang:type=OperatingSystem MBean for the CPU load caused by the JVM.
- The third JMX mapping queries the java.lang:type=Memory MBean for two attributes: the heap and non-heap memory usage of the JVM.
- The final JMX mapping queries an MBean exposing statistics for the flow named ”eventReceivingFlow” in the Mule application named ”mule-perpetuum-mobile”.
The statistic queried is the total number of events received by the Mule flow in question.
Please see the section below on the ordering of the different parts of the MBean name.
- Next up is the output configuration, which tells Metricbeat to send data to Elasticsearch.
JMX data will be sent to the Elasticsearch instance at ”elasticsearch”, port 9200. If multiple addresses are configured in the value of the hosts key, data will be distributed among the supplied Elasticsearch hosts.
Since X-Pack is installed in Elasticsearch and basic authentication is enabled, a username and password are needed to connect to Elasticsearch.
- The metricbeat.config.modules section configures Metricbeat to reload its configuration if it has been modified.
The path specifies which configuration file(s) to check for modifications and the reload.period specifies how often to check them.
- Finally, the logging section configures Metricbeat's logging output.
Logs are written to the mybeat.log file in the /opt/metricbeat/logs directory. When rotating logs, the ten most recent log files are retained.
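Once the example is running, a quick way to confirm that this configuration actually ships documents is to query Elasticsearch directly. This is a minimal sketch assuming that Elasticsearch's port 9200 is published on localhost and that the default elastic/changeme credentials of the Elastic Docker images are still in use.

# List the daily indices created by Metricbeat.
curl -u elastic:changeme "http://localhost:9200/_cat/indices/metricbeat-*?v"

# Fetch the most recent Metricbeat document in order to inspect the field names.
curl -u elastic:changeme "http://localhost:9200/metricbeat-*/_search?size=1&sort=@timestamp:desc&pretty"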
Metricbeat JMX and MBean Names
When I first had the example up and running with Metricbeat, I noticed that some JMX data was missing, namely the total number of events from the eventReceivingFlow in the Mule application mule-perpetuum-mobile. I examined the Metricbeat configuration over and over and everything seemed fine. When I queried the Jolokia agent in the Mule ESB instance for all JMX data, to verify that the data indeed was exposed, I noticed that the name of the MBean in the Metricbeat configuration file did not exactly match the name reported by the Jolokia agent.
In order for the Metricbeat Jolokia module to be able to retrieve the data, the Flow part must precede the type part; that is, the proper MBean name uses the ordering as presented by Jolokia. JMX data from Jolokia can be examined in the following way:
- List all the MBeans and their attributes exposed by Jolokia by opening the following URL in a browser (if you are not running Docker on Linux, replace ”localhost” with the IP address of the virtual machine in which Docker is running):
http://localhost:8899/jolokia/list
- Locate the Mule.mule-perpetuum-mobile JSON object and the eventReceivingFlow flow within it, and note that the Flow part does precede the type part.
"Mule.mule-perpetuum-mobile": { ... }, "Flow=\"eventReceivingFlow\",type=org.mule.Statistics": { ... "attr": { ... "TotalEventsReceived": { "rw": false, "type": "long", "desc": "Attribute exposed for management" }, ...
Thus the MBean name to use should be:
Mule.mule-perpetuum-mobile:Flow="eventReceivingFlow",type=org.mule.Statistics
Note that quotes are not escaped.
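The MBean name can also be tested against Jolokia before restarting Metricbeat. Using a POST read request avoids any URL-escaping issues caused by the quotes in the name. A minimal sketch, assuming Jolokia is reachable at localhost, port 8899:

# Read the TotalEventsReceived attribute of the flow statistics MBean using a Jolokia POST read request.
curl -X POST "http://localhost:8899/jolokia/" -d '{
  "type": "read",
  "mbean": "Mule.mule-perpetuum-mobile:Flow=\"eventReceivingFlow\",type=org.mule.Statistics",
  "attribute": "TotalEventsReceived"
}'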
Elastalert Docker Image and HTTP Basic Authentication
The addition of support for HTTP basic authentication requires modifications not only to the Elastalert configuration, but also to the start-script of the Elastalert Docker image.
In addition, the rule used in the example had to be modified.
Elastalert Configuration
The Elastalert configuration file had to be modified to include the username and password used when connecting to Elasticsearch. The complete Elastalert configuration file used in the updated example looks like this:
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: /opt/rules

# How often ElastAlert will query elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 1

# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 2

# The elasticsearch hostname for metadata writeback
# Note that every rule can have it's own elasticsearch host
es_host: elasticsearchhost

# The elasticsearch port
es_port: 9200

# Username and password to log in to elasticsearch.
es_username: elastic
es_password: changeme

# Optional URL prefix for elasticsearch
#es_url_prefix: elasticsearch

# Connect with SSL to elasticsearch
#use_ssl: True

# The index on es_host which is used for metadata storage
# This can be a unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status

# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 1
The es_username and es_password properties have been added to supply credentials for HTTP basic authentication when connecting to Elasticsearch.
Elastalert Docker Image Start-Script
The complete Elastalert start-script below includes not only support for HTTP basic authentication, but also support for using HTTPS when connecting to Elasticsearch (thanks to JamesJJ for the latter addition!).
#!/bin/sh
set -e

case "${ELASTICSEARCH_TLS}:${ELASTICSEARCH_TLS_VERIFY}" in
    true:true)
        WGET_SCHEMA='https://'
        CREATE_EA_OPTIONS='--ssl --verify-certs'
        ;;
    true:false)
        WGET_SCHEMA='https://'
        CREATE_EA_OPTIONS='--ssl --no-verify-certs'
        ;;
    *)
        WGET_SCHEMA='http://'
        CREATE_EA_OPTIONS='--no-ssl'
        ;;
esac

# Set the timezone.
if [ "$SET_CONTAINER_TIMEZONE" = "true" ]; then
    setup-timezone -z ${CONTAINER_TIMEZONE} && \
    echo "Container timezone set to: $CONTAINER_TIMEZONE"
else
    echo "Container timezone not modified"
fi

# Force immediate synchronisation of the time and start the time-synchronization service.
# In order to be able to use ntpd in the container, it must be run with the SYS_TIME capability.
# In addition you may want to add the SYS_NICE capability, in order for ntpd to be able to modify its priority.
ntpd -s

# Wait until Elasticsearch is online since otherwise Elastalert will fail.
if [ -n "$ELASTICSEARCH_USER" ] && [ -n "$ELASTICSEARCH_PASSWORD" ]; then
    WGET_AUTH="$ELASTICSEARCH_USER:$ELASTICSEARCH_PASSWORD@"
else
    WGET_AUTH=""
fi
while ! wget -q -T 3 -O - "${WGET_SCHEMA}${WGET_AUTH}${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}" 2>/dev/null
do
    echo "Waiting for Elasticsearch..."
    sleep 1
done
sleep 5

# Check if the Elastalert index exists in Elasticsearch and create it if it does not.
if ! wget -q -T 3 -O - "${WGET_SCHEMA}${WGET_AUTH}${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/elastalert_status" 2>/dev/null
then
    echo "Creating Elastalert index in Elasticsearch..."
    elastalert-create-index ${CREATE_EA_OPTIONS} --host "${ELASTICSEARCH_HOST}" --port "${ELASTICSEARCH_PORT}" --config "${ELASTALERT_CONFIG}" --index elastalert_status --old-index ""
else
    echo "Elastalert index already exists in Elasticsearch."
fi

echo "Starting Elastalert..."
exec supervisord -c "${ELASTALERT_SUPERVISOR_CONF}" -n
Modifications for HTTP basic authentication have been made at the following places:
- The WGET_AUTH variable is set to the part of the Elasticsearch URL that contains the username and password.
If basic authentication is not to be used, this part of the URL is set to the empty string.
- The wget loop waits for Elasticsearch to become available by repeatedly attempting to query Elasticsearch.
- Before the Elastalert index is created, a check is made as to whether the index already exists in Elasticsearch.
If the Elastalert index is to be created, elastalert-create-index is invoked with the Elastalert configuration file, and thus with the username and password from that configuration file.
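The same kind of basic-authentication request that the start-script performs can also be issued by hand, for example to verify that the Elastalert metadata index was created. A sketch, assuming Elasticsearch is published on localhost, port 9200, with the default credentials:

# Check whether the Elastalert metadata index exists; a 404 response means it has not been created yet.
curl -u elastic:changeme "http://localhost:9200/elastalert_status?pretty"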
Elastalert Example Rule
The example rule has been modified to use the index created in Elasticsearch by Metricbeat and to query the metrics gathered by Metricbeat. The complete rule, located in the file cpu-spike.yml, now looks like this:
# Example Elastalert rule that will alert on spikes in the CPU load of
# the monitored Mule CE ESB.
name: CPU spike
type: spike
index: metricbeat-*
threshold: 1
timeframe:
  minutes: 1
spike_height: 2
spike_type: "up"
filter:
- range:
    jolokia.jmx-metrics.jvm.process_cpu_load:
      from: 0.03
      to: 1.0
alert:
- "debug"
Modifications:
- The index property now uses the index name pattern ”metricbeat-*” in Elasticsearch.
- The filter uses the new name of the JVM process CPU load field, jolokia.jmx-metrics.jvm.process_cpu_load.
- The filter range values have been changed, since Metricbeat simply passes on the values as queried from Jolokia, which are in the range 0.0 to 1.0, instead of the percentage values from 0.0 to 100.0 generated by Logstash.
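To see which documents the rule's filter would match, an equivalent range query can be run directly against Elasticsearch. The sketch below assumes the same localhost address and default credentials as earlier in this article; it uses gte/lte bounds, which correspond to the from/to bounds in the rule.

# Count the documents whose JVM process CPU load falls inside the rule's filter range.
curl -u elastic:changeme "http://localhost:9200/metricbeat-*/_count?pretty" -d '{
  "query": {
    "range": {
      "jolokia.jmx-metrics.jvm.process_cpu_load": { "gte": 0.03, "lte": 1.0 }
    }
  }
}'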
Docker Compose File
I have made some minor modifications to the Docker Compose file of the example, in order for the example to work better under various circumstances.
# Docker Compose configuration for the Alerting with ELK and Elastalert article.
# By Ivan Krizsan.
version: "3"

services:
  # Mule ESB CE instance that is being monitored in the example.
  mule_ce_esb:
    build: ivankrizsan-mulewithmetricbeat
    cap_add:
      - SYS_TIME
      - SYS_NICE
    volumes:
      - ./MuleShared/apps:/opt/mule-standalone/apps
      - ./MuleShared/conf:/opt/mule-standalone/conf
      - ./MuleShared/logs:/opt/mule-standalone/logs
      - ./MuleShared/metricbeat-conf:/opt/metricbeat/conf
      - ./MuleShared/metricbeat-logs:/opt/metricbeat/logs
    ports:
      - "8899:8899"
    links:
      - elasticsearch:elasticsearch
    environment:
      - MULE_EXTERNAL_IP=127.0.0.1
      - SET_CONTAINER_TIMEZONE=true
      - CONTAINER_TIMEZONE=Asia/Taipei

  # Elasticsearch instance.
  # Note that with the official Elasticsearch Docker image, we have to set
  # es.network.bind_host to 0.0.0.0 in order for Elasticsearch to be reachable
  # from outside of the Docker container it runs in.
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./ElasticsearchShared/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./ElasticsearchShared/logs:/opt/logs/elasticsearch
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1

  # Kibana instance.
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.0
    depends_on:
      - elasticsearch
    volumes:
      - ./KibanaShared/config:/usr/share/kibana/config/
      - ./KibanaShared/logs:/log-dir
    ports:
      - "5601:5601"
    links:
      - elasticsearch:elasticsearch

  # Elastalert instance.
  # Docker image available from Docker Hub.
  elastalert:
    image: ivankrizsan/elastalert:latest
    depends_on:
      - elasticsearch
      - kibana
    cap_add:
      - SYS_TIME
      - SYS_NICE
    volumes:
      - ./ElastalertShared/logs:/opt/logs
      - ./ElastalertShared/rules:/opt/rules
      - ./ElastalertShared/config:/opt/config
    links:
      - elasticsearch:elasticsearchhost
    environment:
      - ELASTICSEARCH_USER=elastic
      - ELASTICSEARCH_PASSWORD=changeme
      - SET_CONTAINER_TIMEZONE=true
      - CONTAINER_TIMEZONE=Asia/Taipei
Changes are:
- Build a custom Mule Docker image instead of using a ready-made image.
- Map the Metricbeat config and logs directories to host directories.
- Set environment variables for the Mule service in order to set the timezone.
The time in the Mule and Elastalert Docker containers must match in order for Elastalert to be able to query for events in the proper time-range and get accurate results.
- Use the Elasticsearch Docker image from Elastic.
- Use the Kibana Docker image from Elastic.
- Made the Elastalert Docker service depend on the Elasticsearch and Kibana Docker services in order to avoid possible failures when running on a system with limited memory and/or CPU resources.
- Set environment variables for the Elastalert service in order to set the timezone.
See the reason for setting the timezone for the Mule service above.
- Set environment variables for the Elastalert service in order for it to be able to connect to Elasticsearch.
- Removed the Logstash service.
Logstash has been replaced by Metricbeat, which runs in the same container as Mule.
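Before starting the whole stack, it can be convenient to validate the Compose file and pre-build the custom Mule image. Both commands below are standard Docker Compose commands, run from the example's root directory.

# Validate and print the resolved Docker Compose configuration.
docker-compose config

# Pre-build the custom Mule ESB CE with Metricbeat image.
docker-compose build mule_ce_esb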
Running the Example
Having examined all the changes in the example, we are now ready to run it. After having started the example, we will use Kibana to take a look at the metrics gathered and then trigger the Elastalert example rule by increasing the number of events in the Mule application.
- Set your current timezone in the docker-compose.yml file.
Modify all values of the environment variable CONTAINER_TIMEZONE.
- Start Docker if needed.
This applies to non-Linux operating systems.
- Open a terminal window.
- Go to the root directory containing the example files.
This is the directory that contains the docker-compose.yml file along with the ElastalertShared, ElasticsearchShared etc. directories.
- Start the Docker containers:
docker-compose up
- Wait until log output from Elastalert similar to the following is seen in the terminal:
elastalert_1 | 2017-06-14 04:43:51,435 DEBG 'elastalert' stderr output:
elastalert_1 | INFO:elastalert:Queried rule CPU spike from 2017-06-14 04:42 UTC to 2017-06-14 04:43 UTC: 0 / 0 hits
- Open the Kibana web application in a browser.
The URL should be http://localhost:5601, but may be http://192.168.99.100:5601 if you are using Docker Toolbox.
- Log in to Kibana using the user ”elastic” and the password ”changeme”, without quotes.
- Configure an index pattern using ”metricbeat-*”, click refresh fields next to Time-field name and then select the time-field name ”@timestamp”.
- Click the Create button in the lower left corner of the browser window.
- Click the Discover option in the upper left corner of the browser window.
A list of events gathered by Metricbeat should appear.
- Open the file mule-config.xml in MuleShared/apps/mule-perpetuum-mobile.
This is the Mule flow in that file that generates periodic events:

<!-- Flow that periodically generates events that are sent to another flow. -->
<flow name="eventGeneratingFlow">
    <quartz:inbound-endpoint jobName="eventGeneratingJob"
        repeatInterval="50"
        repeatCount="-1"
        connector-ref="oneThreadQuartzConnector">
        <quartz:event-generator-job>
            <quartz:payload>go</quartz:payload>
        </quartz:event-generator-job>
    </quartz:inbound-endpoint>
    <logger level="ERROR" message="Generated an event!"/>
    <vm:outbound-endpoint path="eventReceiverEndpoint" exchange-pattern="one-way"/>
</flow>
- Modify the value of the repeatInterval attribute in the <quartz:inbound-endpoint> element so that it reads 2 instead of 50.
- Save the file.
- In the mule.log file located in MuleShared/logs, notice how the Mule application was redeployed when we modified its configuration file:
INFO  2017-06-15 21:09:55,412 [Mule.app.deployer.monitor.1.thread.1] org.mule.module.launcher.DefaultArchiveDeployer:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Redeploying artifact 'mule-perpetuum-mobile'             +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- Wait for a couple of minutes until an alert from Elastalert appears in the log:
2017-06-15 21:06:21,519 DEBG 'elastalert' stderr output:
INFO:elastalert:CPU spike

An abnormal number (26) of events occurred around 2017-06-15 21:04 CST.
Preceding that time, there were only 13 events within 0:01:00

@timestamp: 2017-06-15T13:04:58.117Z
_id: AVyr21zmioHBWFRzBuRy
_index: metricbeat-2017.06.15
_type: metricsets
beat: {
    "hostname": "7f361b7027d5",
    "name": "7f361b7027d5",
    "version": "5.4.0"
}
jolokia: {
    "jmx-metrics": {
        "jvm": {
            "memory": {
                "heap_memory_usage": {
                    "committed": 1020067840.0,
                    "init": 1073741824.0,
                    "max": 1020067840.0,
                    "used": 153855056.0
                },
                "non_heap_memory_usage": {
                    "committed": 70082560.0,
                    "init": 2555904.0,
                    "max": -1.0,
                    "used": 68869168.0
                }
            },
            "process_cpu_load": 0.03125,
            "uptime": 630691.0
        },
        "mule": {
            "perpetuum-mobile": {
                "eventReceivingFlow": {
                    "TotalEventsReceived": 1313.0
                }
            }
        }
    }
}
metricset: {
    "host": "localhost:8899",
    "module": "jolokia",
    "name": "jmx",
    "namespace": "jmx-metrics",
    "rtt": 8485
}
num_hits: 40
num_matches: 1
reference_count: 13
spike_count: 26
type: metricsets

2017-06-15 21:06:21,589 DEBG 'elastalert' stderr output:
INFO:elastalert:Ran CPU spike from 2017-06-15 21:05 CST to 2017-06-15 21:06 CST: 40 query hits (0 already seen), 1 matches, 1 alerts sent
We have (again) successfully caused the CPU Spike rule to trigger and send an alert!
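When you are done experimenting, the example can be stopped and its containers removed from the example's root directory (press Ctrl+C first if docker-compose up is running in the foreground):

# Stop and remove the containers of the example.
docker-compose down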
Troubleshooting
In this section I will provide some solutions to problems I encountered while preparing the example.
Out of Memory Problems
If you are running the example on a non-Linux Docker host, you may need to allocate more memory to the virtual machine in which Docker runs in order to avoid out of memory conditions from the Elastalert and Mule containers.
Metricbeat Configuration File Problem
If you encounter an error from Metricbeat saying that the configuration file needs to be owned by root or the metricbeat user and you are not able to change the owner of the file, modify the start-mule.sh script for the Mule-Metricbeat Docker image and change the line that starts Metricbeat like this:
# Start MetricBeat in the background.
(cd ${METRICBEAT_HOME} && exec metricbeat -strict.perms=false -v -c ${METRICBEAT_HOME}/conf/metricbeat.yml) &
Note the addition of the option -strict.perms=false.
Final Words
This concludes the second installment of Alerting with the ELK Stack and Elastalert.
Happy coding!