Visualize Server Security on CentOS 7 with an Elastic Stack and Wazuh


What are Elasticsearch, Elastic Stack, and Wazuh?

An Elastic Stack, formerly known as an ELK Stack, is a combination of Elasticsearch, Logstash, and Kibana. In this tutorial, you will learn how to install and link together Elasticsearch, Logstash, and Kibana with Wazuh OSSEC to help monitor and visualize security threats to your machine. The resulting structure can be broken down into three core components that work with Wazuh’s endpoint security:

  • Elasticsearch

    • The heart of the Elastic Stack, Elasticsearch provides powerful search and analytical capabilities. It stores and retrieves data collected by Logstash.
  • Logstash

    • Ingests data from multiple sources and passes it along to Elasticsearch which acts as a central database.
  • Kibana

    • A self-hosted, web-based tool which provides a multitude of methods to visualize and represent data stored in Elasticsearch.
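
The flow of data between these components is what the rest of this guide sets up: Logstash writes Wazuh alerts into Elasticsearch indices, and Kibana reads them back through the same REST API. As a rough, optional illustration (not a required step), once the stack is running you could query the alert indices directly with curl; the wazuh-alerts-* pattern used here matches the default index naming of Wazuh's Logstash output:

    curl -s "http://localhost:9200/wazuh-alerts-*/_search?size=1&pretty" \
         -H 'Content-Type: application/json' \
         -d '{ "query": { "match_all": {} } }'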

What is Wazuh OSSEC?

Wazuh is an open source branch of the original OSSEC HIDS developed for integration into the Elastic Stack. Wazuh provides the OSSEC software and ruleset, as well as a RESTful API and a Kibana plugin optimized for displaying and analyzing host IDS alerts.

Before You Begin

  1. Many of the steps in this guide require root privileges. Complete the sections of our Setting Up and Securing a Compute Instance guide to create a standard user account, harden SSH access, and remove unnecessary network services. Use sudo wherever necessary.

  2. Your Linode should have at least 8GB of RAM. While an Elastic Stack will run on less RAM, the Wazuh Manager will crash if RAM is depleted at any time during use.

  3. Add a domain zone, NS record, and A/AAAA record for the domain you will use to access your Kibana installation. See the DNS Manager guide for details. If you will access your Kibana instance via your Linode’s IP address, you can skip this step.

  4. Create an SSL Certificate, if you will be using SSL encryption for your domain.

  5. Install NGINX or Apache. Visit our guides on how to install a LEMP or LAMP stack for CentOS for help.

  6. Configure your webserver for virtual domain hosting:

    NGINX

    Apache

Update System and Install Prerequisites

  1. Update system packages:

     yum update -y && yum upgrade -y
    
  2. Install Java 8 JDK:

     yum install java-1.8.0-openjdk.x86_64
    
  3. Verify the Java installation by checking the version:

    java -version
    

    Your output should be similar to:

    openjdk version "1.8.0_191"
    OpenJDK Runtime Environment (build 1.8.0_191-b12)
    OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
  4. If your Linode doesn’t have curl installed, install curl:

     yum install curl
    

Install Wazuh

  1. Create the wazuh.repo repository file and paste the text below:

    File: /etc/yum.repos.d/wazuh.repo
    [wazuh_repo]
    gpgcheck=1
    gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
    enabled=1
    name=CentOS-$releasever - Wazuh
    baseurl=https://packages.wazuh.com/3.x/yum/
    protect=1
  2. Install Wazuh Manager:

     yum install wazuh-manager
    
  3. Install Wazuh API:

    1. Install the Node.js repository:

       curl --silent --location https://rpm.nodesource.com/setup_8.x | bash -
      
    2. Install Node.js:

       yum install -y nodejs
      
    3. Install Wazuh API:

       yum install wazuh-api
      
      Note

      Python >= 2.7 is required in order to run the Wazuh API. To find out which version of Python is running on your Linode, issue the following command:

        python --version
      
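Before moving on, you can optionally confirm that the Wazuh repository is recognized and that the manager and API services are up. This is a quick sanity check using the service names installed by the packages above:

    yum repolist | grep -i wazuh
    systemctl -l status wazuh-manager
    systemctl -l status wazuh-api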

Install Elasticsearch, Logstash, and Kibana

Install the Elastic Stack via RPM files to get the latest versions of the software. This guide uses version 6.5.2; check the Elastic website for more recent releases and adjust the version numbers in the commands below to match.

Install Elasticsearch

  1. Download the Elasticsearch RPM into the /opt directory:

     cd /opt
     curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.2.rpm
    
  2. Install Elasticsearch:

    rpm -i elasticsearch-6.5.2.rpm
    
  3. Enable the Elasticsearch service to start on system boot:

     systemctl enable elasticsearch
     systemctl start elasticsearch
    
  4. Verify that Elasticsearch has installed and is listening on port 9200:

     curl "http://localhost:9200/?pretty"
    

    You should receive a similar response:

    {
      "name" : "-7B24Uk",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "UdLfdUOoRH2elGYckoiewQ",
      "version" : {
        "number" : "6.5.2",
        "build_flavor" : "default",
        "build_type" : "rpm",
        "build_hash" : "9434bed",
        "build_date" : "2018-11-29T23:58:20.891072Z",
        "build_snapshot" : false,
        "lucene_version" : "7.5.0",
        "minimum_wire_compatibility_version" : "5.6.0",
        "minimum_index_compatibility_version" : "5.0.0"
      },
      "tagline" : "You Know, for Search"
    }
  5. Load the Wazuh Elasticsearch template. Replace exampleIP with your Linode’s public IP address:

     curl https://raw.githubusercontent.com/wazuh/wazuh/3.7/extensions/elasticsearch/wazuh-elastic6-template-alerts.json | curl -X PUT "http://exampleIP:9200/_template/wazuh" -H 'Content-Type: application/json' -d @-
    
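To confirm that Elasticsearch accepted the template, you can optionally request it back from the _template endpoint:

    curl "http://localhost:9200/_template/wazuh?pretty" | head -n 20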

Install Logstash

  1. Download the Logstash RPM into the /opt directory:

     cd /opt
     curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-6.5.2.rpm
    
  2. Install Logstash:

     rpm -i logstash-6.5.2.rpm
    
  3. Enable Logstash on system boot:

     systemctl daemon-reload
     systemctl enable logstash
     systemctl start logstash
    
  4. Download the Wazuh config file for a single-host architecture for Logstash:

     curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/2.0/extensions/logstash/01-wazuh.conf
    
  5. Add the Logstash user to the ossec group to allow access to restricted files:

     usermod -aG ossec logstash
    
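On CentOS 7, you can restart Logstash now so that it loads the new 01-wazuh.conf pipeline with the updated group membership, then confirm the service stays running:

    systemctl restart logstash
    systemctl -l status logstash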

For CentOS 6 and RHEL 6 Only:

  1. Edit /etc/logstash/startup.options and change LS_GROUP=logstash to LS_GROUP=ossec:

    File: /etc/logstash/startup.options
    . . .
    # user and group id to be invoked as
    LS_USER=logstash
    LS_GROUP=ossec
    . . .
  2. Update the service with the new parameters:

     /usr/share/logstash/bin/system-install
    
  3. Restart Logstash:

     systemctl restart logstash
    

Install Kibana

  1. Download the Kibana RPM into the /opt directory:

     cd /opt
     curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-6.5.2-x86_64.rpm
    
  2. Install Kibana:

     rpm -i kibana-6.5.2-x86_64.rpm
    
  3. Enable Kibana on system boot:

     systemctl enable kibana
     systemctl start kibana
    
  4. Install the Wazuh app for Kibana:

     sudo -u kibana NODE_OPTIONS="--max-old-space-size=3072" /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.7.1_6.5.2.zip
    

    The Kibana app installation process takes several minutes to complete and it may appear as though the process has stalled.

  5. By default, Kibana only listens on the loopback interface. To configure it to listen on all interfaces, edit the /etc/kibana/kibana.yml file, uncomment the server.host line, and set its value as shown below:

    File: /etc/kibana/kibana.yml
    # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
    # The default is 'localhost', which usually means remote machines will not be able to connect.
    # To allow connections from remote users, set this parameter to a non-loopback address.
    server.host: "0.0.0.0"
        

    Reference the table below for information on other configurations available in the /etc/kibana/kibana.yml file:

    Parameter       Description
    server.port     If the default port 5601 is in use, change this value.
    server.name     This value is used for display purposes only. Set it to anything you wish, or leave it unchanged.
    logging.dest    Specify a location to log program information. /var/log/kibana.log is recommended.

    You may modify other values in this file as you see fit, but this configuration should work for most.

  6. Restart Kibana:

     systemctl restart kibana
    
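To verify that the Wazuh app was installed and that Kibana is listening, you can optionally list the installed plugins and check port 5601 (ss is provided by the iproute package on CentOS 7):

    sudo -u kibana /usr/share/kibana/bin/kibana-plugin list
    ss -lntp | grep 5601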

Configure the Elastic Stack

The Elastic Stack will require some tuning before it can be accessed via the Wazuh API.

  1. Enable memory locking in Elasticsearch to mitigate poor performance. Uncomment the bootstrap.memory_lock: true line in the /etc/elasticsearch/elasticsearch.yml file:

    File: /etc/elasticsearch/elasticsearch.yml
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    bootstrap.memory_lock: true
    #
        
  2. Edit locked memory allocation. Follow the instructions under the appropriate init system used on your Linode:

    SystemD

    Edit the systemd unit file and add the following line to the [Service] section:

    File: /etc/systemd/system/multi-user.target.wants/elasticsearch.service
    . . .
    LimitMEMLOCK=infinity
    . . .

    System V

    Edit the /etc/sysconfig/elasticsearch file. Add or change the following line:

    File: /etc/sysconfig/elasticsearch
    . . .
    MAX_LOCKED_MEMORY=unlimited
    . . .
  3. Configure the Elasticsearch heap size based on your Linode’s resources. This figure will determine how much memory Elasticsearch is allowed to consume. Keep the following rules in mind:

    • No more than 50% of available RAM
    • No more than 32GB of RAM
    • The -Xms and -Xmx values must be the same in order to avoid performance issues.

    Open the jvm.options file and navigate to the block shown here:

    File: /etc/elasticsearch/jvm.options
    . . .
    # Xms represents the initial size of total heap space
    # Xmx represents the maximum size of total heap space
    
    -Xms4g
    -Xmx4g
    . . .

    This configures Elasticsearch with 4GB of allotted RAM. You may also use the letter M to specify megabytes (for example, -Xms4096M). View your current RAM consumption with the htop command. If you do not have htop installed, install it with your distribution’s package manager. Allocate as much RAM as you can, up to 50% of the available total, while leaving enough memory for other daemon and system processes.

  4. Restart Elasticsearch for the configurations to take effect:

     systemctl daemon-reload
     systemctl restart elasticsearch
    
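To confirm that memory locking took effect, query the node settings; if mlockall reports false, re-check the LimitMEMLOCK or MAX_LOCKED_MEMORY setting from the previous steps:

    curl "http://localhost:9200/_nodes?filter_path=**.mlockall&pretty"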

Configure a Reverse Proxy

A reverse proxy server allows you to secure the Kibana web interface with SSL and limit access to authorized users. Instructions are provided for NGINX and Apache. The instructions assume your webserver is already configured to host virtual domains.

Set up a Reverse Proxy Server to Host Kibana as a Subdomain

If you have SSL encryption enabled on your domain, follow the instructions in the HTTPS section below. If not, follow the instructions in the HTTP section. You may skip this section if you prefer to access Kibana directly through its server port, but using a reverse proxy is recommended.

NGINX

  1. Navigate to your NGINX virtual host config directory. Create a new virtual host config file and name it something similar to example.com.conf, replacing example.com with your domain. Add the contents below to this file. If you do not have a domain name available, replace the server_name parameter value with your Linode’s external IP address:

    HTTP

    File: /etc/nginx/conf.d/example.com.conf
    server {
        listen 80;
        # Remove the line below if you do not have IPv6 enabled.
        listen [::]:80;
        server_name kibana.exampleIPorDomain;
    
        location / {
            proxy_pass http://exampleIPorDomain:5601;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    HTTPS

    File: /etc/nginx/conf.d/example.com.conf
    server {
      listen 80;
      # Remove the line below if you do not have IPv6 enabled.
      listen [::]:80;
      server_name kibana.exampleIPorDomain;
    
      location / {
          proxy_pass http://exampleIPorDomain:5601;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection 'upgrade';
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
      }
    }
    
    server {
      listen 443 ssl;
    
      # Remove the line below if you do not have IPv6 enabled.
      listen [::]:443 ssl;
      server_name kibana.exampleIPorDomain;
    
      location / {
          proxy_pass http://exampleIPorDomain:5601;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection 'upgrade';
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
      }
    
      ssl_certificate /path/to/ssl/certificate.crt;
      ssl_certificate_key /path/to/ssl/certificate.key;
    
      auth_basic "Restricted Access";
      auth_basic_user_file /etc/nginx/.htpasswd;
    }
  2. Install httpd-tools if it is not already installed on your Linode:

     yum install httpd-tools
    
  3. Secure your Kibana site with a login page. Create a .htpasswd file first if you do not have one:

     touch /etc/nginx/.htpasswd
     htpasswd -c /etc/nginx/.htpasswd YourNewUsername
     chmod 644 /etc/nginx/.htpasswd
    
  4. Restart the NGINX server to load the new configuration:

     systemctl restart nginx
    
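If NGINX fails to restart, test the configuration for syntax errors before trying again:

    nginx -t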

Apache

  1. In order for Apache to function as a reverse proxy, mod_proxy must be installed. Check that the following modules are enabled by running the httpd -M command:

     httpd -M
    
    • proxy_module
    • lbmethod_byrequests_module
    • proxy_balancer_module
    • proxy_http_module
  2. Enable the necessary modules in Apache. Open 00-proxy.conf and verify that the lines below are included:

    File: /etc/httpd/conf.modules.d/00-proxy.conf
    . . .
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    . . .
  3. Create a new virtual host config file for the Kibana site. Add the contents below to this file. If you do not have a domain name available, replace the ServerName parameter value with your Linode’s public IP address. Replace kibana.exampleIPorDomain and http://exampleIPorDomain with your specific values:

    HTTP

    File: /etc/httpd/sites-available/example.com.conf
    <VirtualHost *:80>
      ServerName kibana.exampleIPorDomain
      ProxyPreserveHost On
    
      ProxyPass / http://exampleIPorDomain:5601
      ProxyPassReverse / http://exampleIPorDomain:5601
    
      <Directory "/">
          AuthType Basic
          AuthName "Restricted Content"
          AuthUserFile /etc/httpd/.htpasswd
          Require valid-user
      </Directory>
    </VirtualHost>

    HTTPS

    File: /etc/httpd/sites-available/example.com.conf
    <VirtualHost *:80>
      ServerName kibana.exampleIPorDomain
      ProxyPreserveHost On
    
      ProxyPass / http://exampleIPorDomain:5601
      ProxyPassReverse / http://exampleIPorDomain:5601
    
      <Directory "/">
          AuthType Basic
          AuthName "Restricted Content"
          AuthUserFile /etc/httpd/.htpasswd
          Require valid-user
      </Directory>
    </VirtualHost>
    
    <VirtualHost *:443>
      ServerName kibana.exampleIPorDomain
      ProxyPreserveHost On
    
      ProxyPass / http://exampleIPorDomain:5601
      ProxyPassReverse / http://exampleIPorDomain:5601
    
      SSLEngine on
      SSLProtocol all -SSLv2
      SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM
    
      SSLCertificateFile /path/to/cert_file/ssl.crt
      SSLCertificateKeyFile /path/to/ssl/private.key
      SSLCertificateChainFile /path/to/ssl/server.ca.pem
    
      <Directory "/">
          AuthType Basic
          AuthName "Restricted Content"
          AuthUserFile /etc/httpd/.htpasswd
          Require valid-user
      </Directory>
    </VirtualHost>
  4. Secure your Kibana site with a login page. Create a .htpasswd file first if you do not have one:

     touch /etc/httpd/.htpasswd
     htpasswd -c /etc/httpd/.htpasswd YourNewUsername
     chmod 644 /etc/httpd/.htpasswd
    
  5. Restart Apache:

     systemctl restart httpd
    
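If Apache fails to restart, check the configuration syntax first:

    apachectl configtest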

Add the Kibana Subdomain to the DNS Manager

The new Kibana subdomain will need to be configured in the Linode DNS Manager.

  1. Log in to the Linode Manager and select Domains. Click on your domain’s corresponding ellipses and select Edit DNS Records. Add a new A/AAAA record for the subdomain. Refer to the table below for the field values.

    Field         Value
    Hostname      Enter your subdomain name here - ex. kibana
    IP Address    Set this value to your Linode’s external IP address.
    TTL           Set this to 5 minutes.
  2. Click Save Changes.
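
Once the record has propagated (this can take up to the TTL you set), you can confirm that the subdomain resolves to your Linode. Replace kibana.example.com with your own subdomain; the dig utility is provided by the bind-utils package on CentOS:

    dig +short kibana.example.com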

Open the Kibana Port

Kibana’s default access port, 5601, must be opened for TCP traffic. Instructions are presented below for FirewallD, iptables, and UFW.

FirewallD

firewall-cmd --add-port=5601/tcp --permanent
firewall-cmd --reload
  1. Set SELinux to allow HTTP connections:

     setsebool -P httpd_can_network_connect 1
    
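To confirm that port 5601 is now open, list the ports configured in FirewallD:

    firewall-cmd --list-ports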

iptables

iptables -A INPUT -p tcp --dport 5601 -m comment --comment "Kibana port" -j ACCEPT
Note
To avoid losing iptables rules after a server reboot, save your rules to a file using iptables-save.

UFW

ufw allow 5601/tcp comment "Kibana port"
Note
Linode’s free Cloud Firewall service can be used to replace or supplement internal firewall configuration. For more information on Cloud Firewalls, see our Getting Started with Cloud Firewalls guide. For help with solving general firewall issues, see the Troubleshooting Firewalls guide.

Connect the Elastic Stack with the Wazuh API

Now you are ready to access the API and begin making use of your OSSEC Elastic Stack.

  1. The Wazuh API requires users to provide credentials in order to log in. Navigate to /var/ossec/api/configuration/auth and create an API user. Replace NewUserName with whatever user name you choose, and set a password when prompted:

     node htpasswd -c user NewUserName
    
  2. Restart the Wazuh API:

     systemctl restart wazuh-api
    
  3. Check the status of all daemon components and verify that they are running:

     systemctl -l status wazuh-api
     systemctl -l status wazuh-manager
     systemctl -l status elasticsearch
     systemctl -l status logstash
     systemctl -l status kibana
     systemctl -l status nginx
    
    Note
    If the Wazuh Manager fails to start and you determine the cause to be one of the OSSEC rules or decoders, disable that specific rule or decoder for now. The rules and decoders are located in the /var/ossec/ruleset directory. To disable one, rename its file to use a different file extension.
  4. In a web browser, navigate to the Kibana homepage. If you created a subdomain for Kibana, the URL will be similar to kibana.exampleIPorDomain. You can also reach Kibana by navigating to your server’s IP address and specifying port 5601. Log in with the credentials you set up for your Kibana site.

  5. If everything is working correctly, you should land on the Discover page. Navigate to the Wazuh page using the left-hand menu. You will be presented with the API configuration page. Underneath the ADD NEW API button, enter the user credentials you created for Wazuh. For URL and Port, enter your URL or IP and 55000, then click SAVE.
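
If the Wazuh app reports that it cannot reach the API, you can test the API directly from the Linode. This is a minimal check, assuming the API is running with its default plain-HTTP configuration on port 55000; substitute the credentials you created earlier:

    curl -u NewUserName:YourPassword "http://localhost:55000/version"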

Where To Go From Here

Your OSSEC Elastic Stack setup is now complete! At this point, you will want to customize and configure your OSSEC rules to better suit the needs of your environment. The Wazuh app for Kibana contains pre-configured charts and queries, and more information on how to use them can be found in the official Wazuh documentation.
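
For example, custom rules conventionally go in the local rules file so that they survive package upgrades. The following is a minimal sketch, assuming the default Wazuh 3.x layout under /var/ossec; restart the Wazuh Manager after any rule change:

    # Review the shipped ruleset (do not edit these files directly)
    ls /var/ossec/ruleset/rules | head

    # Add or override rules in the local rules file, then restart the manager
    vi /var/ossec/etc/rules/local_rules.xml
    systemctl restart wazuh-manager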

