Lead Image © Leo Blanchette, 123RF.com

Installing and operating the Graylog SIEM solution

Log Inspector

Article from ADMIN 48/2018
Graylog security information and event management combines real-time monitoring and immediate notification of rule violations with long-term archiving for analysis and reporting.

Linux has long mastered the art of log forwarding and remote logging, which are prerequisites for external log analysis. From the beginning, security was the focus: An attacker who compromises a system will most likely also try to manipulate or delete the syslog files to cover their tracks. However, if the administrator uses a loghost, the files are less likely to fall into the attacker's hands and can still be analyzed after a break-in.
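Classic syslog forwarding is a one-liner. The following fragment is a minimal sketch of rsyslog's legacy forwarding syntax; the loghost name is a placeholder, and newer rsyslog versions also offer an equivalent action()-based syntax:

# /etc/rsyslog.d/forward.conf -- send a copy of all messages to the central loghost
# (loghost.example.com is a placeholder; @@ forwards via TCP, a single @ would use UDP)
*.*    @@loghost.example.com:514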

As the number of servers increases, so do the size of the logfiles and the risk of overlooking security-relevant entries. Security information and event management (SIEM) products usually base their costs on log volume. The Graylog [1] open source alternative discussed in this article processes many log formats; however, above a volume of 5GB per day, license fees kick in.

Why SIEM?

As soon as several servers need to be managed, generating overall statistics or detecting problems that affect multiple servers becomes more and more complex, even if all necessary information is available. Because of the sheer quantity of information from different sources, the admin has to rely on tools that allow all logs to be viewed in real time and help with the evaluation.

SIEM products and services help you detect correlations in a jumble of information by enabling:

  • Access to logfiles, even without administrator rights on the production system.
  • Accumulation of the logfiles of all computers in one place.
  • Analysis of logs with support for correlation analysis.
  • Automatic notification for rule violations.
  • Reporting on networks, operating systems, databases, and applications.
  • Monitoring of user behavior.

Installing and configuring Graylog is quite easy. The Java application uses resources sparingly and stores metadata in MongoDB and logs in an Elasticsearch cluster. Graylog consists of a server and a web interface that communicate via a REST interface (Figure 1).

Figure 1: Graylog comprises a web interface, a server, MongoDB, and Elasticsearch.
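If you want to verify that this REST interface is reachable after the setup described below, a hypothetical check with curl might look like the following; the port and path assume the default rest_listen_uri, and the /system resource should return basic node information:

curl -u admin:<password> -H 'Accept: application/json' http://127.0.0.1:9000/api/system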

Installation

Prerequisites for the installation of Graylog 2.4 – in this example, under CentOS 7 – are Java version 1.8 or higher, Elasticsearch 5.x [2], and MongoDB 3.6 [3]. If not already present, installing Java (as root or using sudo) before Elasticsearch and MongoDB is recommended:

yum install java-1.8.0-openjdk-headless.x86_64

You should remain root or use sudo for the following commands, as well. To install Elasticsearch and MongoDB, create the files elasticsearch.repo and mongodb.repo in /etc/yum.repos.d/ (Listings 1 and 2); then, install the RPM key and the packages for MongoDB, Elasticsearch, and Graylog (Listing 3) to set up the basic components.

Listing 1

elasticsearch.repo

[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Listing 2

mongodb.repo

[mongodb-org-3.6]
name=MongoDB Repository
baseurl=http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=0
enabled=1

Listing 3

Installing Components

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum install elasticsearch
yum -y install mongodb-org
rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-2.4-repository_latest.rpm
yum install graylog-server
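The repositories and packages alone do not start anything. Assuming the systemd unit names shipped with the packages above (elasticsearch, mongod, and graylog-server) and the default log location of the Graylog RPM, the services can be enabled and checked as follows:

systemctl daemon-reload
systemctl enable elasticsearch mongod graylog-server
systemctl start elasticsearch mongod
curl http://127.0.0.1:9200                      # Elasticsearch should answer with a JSON banner
systemctl start graylog-server                  # only succeeds after server.conf is configured (see below)
tail -f /var/log/graylog-server/server.log      # watch the Graylog server come up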

Configuration

The best place to start the configuration is with Elasticsearch: In its configuration file, you specifically need to set the cluster.name parameter. The only configuration file for Graylog itself is server.conf, located in the /etc/graylog/server/ directory, which uses ISO 8859-1/Latin-1 character encoding. This extensive file begins with the definition of the master instance and ends with the encrypted password of the Graylog root user. The most important parameters define email, TLS, and the root password.
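In a simple single-node setup, the Elasticsearch side can be as short as the following sketch of /etc/elasticsearch/elasticsearch.yml; the cluster name graylog is an assumption and only needs to be consistent with what you configure on the Graylog side:

# /etc/elasticsearch/elasticsearch.yml (excerpt)
cluster.name: graylog        # the name Graylog expects to find
network.host: 127.0.0.1      # keep Elasticsearch bound to localhost on a single node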

To start Graylog at all, the password_secret and root_password_sha2 parameters must be set. The password_secret parameter takes a string of at least 64 characters, which Graylog uses for salting and encrypting passwords. The pwgen command generates a suitable random string, and sha256sum hashes the chosen admin password:

pwgen -N 1 -s 100
echo -n <Password> | sha256sum

The hashed password is then assigned to the root_password_sha2 parameter. Table 1 summarizes the most important parameters.
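The corresponding lines in server.conf might then look like the following excerpt; all values are placeholders:

# /etc/graylog/server/server.conf (excerpt, placeholder values)
is_master = true
password_secret = <100-character string generated by pwgen>
root_username = admin
root_password_sha2 = <hash printed by echo -n <Password> | sha256sum>
root_timezone = Europe/Vienna
root_email = admin@example.com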

Table 1

Important Configuration Parameters

Parameter | Description | Remarks

Important Graylog Parameters

is_master | Defines the master/slave role | Must be set; otherwise, Graylog does not start
password_secret | String of at least 64 characters used for salting and encrypting passwords | Must be set; otherwise, Graylog does not start
root_username | Login name for the admin | Default is admin
root_password_sha2 | Hash of the password as output by the sha256sum command | Must be set; otherwise, Graylog does not start
root_timezone | Canonical ID for the time zone (e.g., Europe/Vienna) | Very important
root_email | Email address of root |
plugin_dir | Path to the plugin directory | Relative or absolute
rest_listen_uri | https://<myserver>:9000/api for other nodes and collectors | Important
rest_transport_uri | https://<myserver>:9000/api if Graylog is behind an HTTP proxy | Important
rest_tls_cert_file | /<path/to>/graylog.crt for encryption | Important
rest_tls_key_file | /<path/to>/graylog.key for encryption | Important
web_listen_uri | https://<myserver>:9000 | Important
web_tls_cert_file | /<path/to>/graylog-web.crt to encrypt communication | Important
web_tls_key_file | /<path/to>/graylog-web.key to encrypt communication | Important

Important Elasticsearch Parameters

rotation_strategy | count, size, or time | Default is count; used for the delete strategy of collected logs. Important
retention_strategy | delete or close | Default is delete; close leaves the Elasticsearch indexes on disk, which naturally consumes resources. Closed indexes are not searched automatically; only reopening them makes them searchable again. Important
elasticsearch_max_docs_per_index, elasticsearch_max_size_per_index, elasticsearch_max_time_per_index, elasticsearch_max_number_of_indices | Which parameter applies depends on the value of rotation_strategy | Permissible values for time are d for days, h for hours, m for minutes, and s for seconds (e.g., 3 months is 91d)

Important MongoDB Parameters

mongodb_uri | mongodb://<grayloguser>:<password>@<hostname>:27017,<hostname>:27018,<hostname>:27019/graylog | Important if replicated
mongodb_max_connections | Number of allowed connections (e.g., 100) |
mongodb_threads_allowed_to_block_multiplier | Multiplier that determines how many threads can wait for a connection | Default is 5; multiplied by mongodb_max_connections
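Put together, the network-related part of server.conf could look like the following sketch; hostname, certificate paths, and the MongoDB connection string are placeholders, and depending on the Graylog version, TLS may additionally have to be switched on with separate enable flags (see the comments in the shipped server.conf):

# /etc/graylog/server/server.conf (excerpt, placeholder values)
rest_listen_uri = https://graylog.example.com:9000/api/
rest_tls_cert_file = /etc/graylog/server/graylog.crt
rest_tls_key_file = /etc/graylog/server/graylog.key
web_listen_uri = https://graylog.example.com:9000/
web_tls_cert_file = /etc/graylog/server/graylog-web.crt
web_tls_key_file = /etc/graylog/server/graylog-web.key
root_timezone = Europe/Vienna
mongodb_uri = mongodb://localhost/graylog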
