The ELK Stack (Elasticsearch, Logstash, and Kibana) is a powerful toolset for managing, analyzing, and visualizing log data on a Linux server. Here’s a step-by-step guide to installing and configuring the ELK Stack.
Step 1: Update and Install Prerequisites #
1. Update your system packages:
sudo apt update && sudo apt upgrade -y
2. Install Java. Current Elasticsearch and Logstash packages bundle their own JDK, so this step is optional for them, but a system JDK is still useful for other tooling:
sudo apt install openjdk-11-jdk -y
3. Verify the Java installation:
java -version
Step 2: Install Elasticsearch #
1. Import the Elasticsearch PGP key (apt-key is deprecated on current Debian/Ubuntu, so store the key in a keyring file instead):
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
2. Add the Elasticsearch repository:
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
3. Install Elasticsearch:
sudo apt update
sudo apt install elasticsearch -y
4. Enable and start Elasticsearch:
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
5. Verify Elasticsearch is running. Elasticsearch 8.x enables TLS and authentication by default, so pass the elastic user’s password (printed during package installation, and resettable with sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic) and allow the self-signed certificate:
curl -k -u elastic https://localhost:9200
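If the node is healthy, the request returns a short JSON summary along these lines (the name and version shown here are illustrative placeholders and will differ on your install):

```json
{
  "name" : "my-node",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "8.13.0",
    "build_flavor" : "default"
  },
  "tagline" : "You Know, for Search"
}
```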
Step 3: Configure Elasticsearch #
1. Edit the configuration file to bind Elasticsearch to localhost (or allow remote connections by setting network.host to the server’s address):
sudo nano /etc/elasticsearch/elasticsearch.yml
Set:
network.host: localhost
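Note: network.host: localhost keeps Elasticsearch in its relaxed development mode. If you later bind to a non-loopback address, 8.x enforces production bootstrap checks, and a single-node install will also need to opt out of cluster discovery; a commonly used setting for that (adapt to your deployment) is:

```yaml
discovery.type: single-node
```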
2. Restart Elasticsearch to apply changes:
sudo systemctl restart elasticsearch
Step 4: Install Logstash #
1. Install Logstash:
sudo apt install logstash -y
2. Configure Logstash:
- Create a configuration file for Logstash, specifying the input, filter, and output sections. Here’s an example:
sudo vi /etc/logstash/conf.d/logstash.conf
Example content:
input {
  beats {
    port => 5044
  }
}

filter {
  # Optional filter configurations
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    # With Elasticsearch 8.x security enabled, also set the user,
    # password, and CA certificate options here.
  }
  stdout { codec => rubydebug }
}
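As a sketch of what the empty filter block can do, the grok filter below parses syslog-style lines into structured fields. The pattern names come from Logstash’s bundled grok patterns; the field names on the right are illustrative choices:

```conf
filter {
  grok {
    # Split "Mar  1 12:00:00 myhost sshd: message text" into fields.
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}: %{GREEDYDATA:log_message}" }
  }
}
```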
3. Enable and start Logstash:
sudo systemctl enable logstash
sudo systemctl start logstash
4. Verify Logstash is running:
sudo systemctl status logstash
Step 5: Install Kibana #
1. Install Kibana:
sudo apt install kibana -y
2. Configure Kibana:
- Open the configuration file and set the server host:
sudo vi /etc/kibana/kibana.yml
- Set the listen address. Use "localhost" for local-only access, or "0.0.0.0" so Kibana is reachable from other machines:
server.host: "localhost"
3. Enable and start Kibana:
sudo systemctl enable kibana
sudo systemctl start kibana
4. Access Kibana:
- Open your web browser and navigate to http://<your_server_ip>:5601 (this requires server.host set to a non-localhost address). You should see the Kibana dashboard. On an 8.x stack with security enabled, Kibana first asks for an enrollment token, which you can generate with:
sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
Step 6: Configure Firewall for ELK Stack #
To allow external access (if desired), configure the firewall to open the necessary ports. Avoid exposing Elasticsearch directly to the internet; restrict these rules to trusted hosts where possible.
1. Elasticsearch – typically runs on port 9200:
sudo ufw allow 9200/tcp
2. Logstash – default Beats input port 5044:
sudo ufw allow 5044/tcp
3. Kibana – default port 5601:
sudo ufw allow 5601/tcp
4. Enable the firewall if it isn’t already enabled:
sudo ufw enable
Step 7: Testing ELK Stack #
1. Testing Elasticsearch: check the node with an HTTP request (add -u elastic and -k if 8.x security is enabled):
curl -k -u elastic https://localhost:9200
2. Testing Logstash: the beats input speaks the binary Beats protocol, so piping raw text to port 5044 with nc will not be ingested as an event. Instead, run a throwaway pipeline that echoes stdin:
sudo /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
Type a line such as Hello ELK Stack and it should be printed back as a structured event.
3. Testing Kibana: access Kibana from a browser at http://<your_server_ip>:5601. You should see the Kibana dashboard.
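The reachability side of these checks can also be scripted. Below is a small Python sketch (not part of the original guide) that verifies the three service ports accept TCP connections; the localhost target and default ports are assumptions matching the setup above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Default ELK ports; change the host if checking a remote server.
    for name, port in [("Elasticsearch", 9200), ("Logstash", 5044), ("Kibana", 5601)]:
        status = "open" if port_open("localhost", port) else "closed"
        print(f"{name} (port {port}): {status}")
```

This only proves a listener is present; it does not authenticate or validate responses, so use it alongside the curl and browser checks above.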
Step 8: Set Up Filebeat to Forward Logs (Optional) #
To ship log data to the ELK stack, use Filebeat on client machines (add the same Elastic APT repository on each client first).
1. Install Filebeat:
sudo apt install filebeat -y
2. Configure Filebeat to send logs to Logstash:
sudo nano /etc/filebeat/filebeat.yml
Add or modify the output section so logs go to Logstash rather than directly to Elasticsearch (comment out the default output.elasticsearch block), replacing <logstash_server_ip> with your Logstash host:
output.logstash:
  hosts: ["<logstash_server_ip>:5044"]
3. Enable and start Filebeat:
sudo systemctl enable filebeat
sudo systemctl start filebeat
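Putting the pieces together, a minimal filebeat.yml for shipping system logs to Logstash might look like this sketch (the paths, id, and host placeholder are assumptions to adapt; the filestream input type replaces the older log type in recent Filebeat versions):

```yaml
filebeat.inputs:
  - type: filestream
    id: system-logs            # any unique id for this input
    paths:
      - /var/log/*.log

output.logstash:
  hosts: ["<logstash_server_ip>:5044"]
```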
Step 9: Visualize Data in Kibana #
1. In Kibana, go to Stack Management > Data Views (called Index Patterns in older versions) and create a data view matching the Filebeat indices (e.g. filebeat-*).
2. Explore the Discover section in Kibana to analyze and visualize the logs sent from Filebeat.
Author’s Final Word #
By following these steps, your ELK stack should be up and running on a Linux server, allowing you to collect, analyze, and visualize log data across your infrastructure.