Enable logging and monitoring so that we can monitor our deployments in Kibana.
Integrate Elastic alerts with PagerDuty.
Enabling Logging and Monitoring
In production, the best practice is to send deployment logs and metrics to a dedicated monitoring deployment. Monitoring indexes logs and metrics into Elasticsearch, and these indices consume storage, memory, and CPU cycles like any other index. A separate monitoring deployment avoids affecting other production deployments and lets us view logs and metrics even when a production deployment is unavailable. We need a minimum of three monitoring nodes to make monitoring highly available.
Steps:
The monitoring deployment must be on the same major version and in the same region as your production deployments. Once the monitoring deployment has been set up, follow the steps below to enable monitoring and logging.
1. Go to one of your deployments, then go to Logs and Metrics.
2. Choose your Monitoring deployment where you want to ship data to. Choose Logs and Metrics. Click Save.
3. Repeat these steps for all your deployments.
Viewing Cluster Listing
Let’s view our clusters.
Steps:
1. Go to your Monitoring deployment.
2. Open Kibana.
3. Go to Stack Monitoring, which is under the Management section. When you open Stack Monitoring for the first time, you will be asked to acknowledge the creation of these default rules. They are initially configured to detect and notify on various conditions across your monitored clusters. You can view notifications for: Cluster health, Resource utilization, and Errors and exceptions for Elasticsearch in real time.
4. Click on one cluster to see its overview.
Review and Modify Existing Stack Monitoring Rules
The Elastic Stack monitoring feature provides out-of-the-box Kibana alerting rules. These rules are preconfigured based on best practices recommended by Elastic, but we can modify them to meet our requirements.
Steps:
1. Go to Alerts and rules > Manage rules.
2. For each rule, you may edit the settings or retain the default values.
Setting up Alerts Using PagerDuty Connector and Action
The PagerDuty connector uses the v2 Events API to trigger, acknowledge, and resolve PagerDuty alerts.
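Under the hood, a v2 Events API call is a single JSON document POSTed to PagerDuty's enqueue endpoint. The sketch below shows the kind of event the connector sends when it triggers an alert; the routing key (your service's Integration Key) and the alert details are placeholder values.

```shell
# Build a v2 Events API "trigger" event. The routing_key is the Integration
# Key of your PagerDuty service (placeholder here); the payload fields
# describe the alert itself.
BODY='{
  "routing_key": "'"${ROUTING_KEY:-YOUR_INTEGRATION_KEY}"'",
  "event_action": "trigger",
  "payload": {
    "summary": "CPU usage above threshold on production cluster",
    "source": "es-prod-cluster",
    "severity": "critical"
  }
}'

# Only send the request when a real routing key has been supplied.
if [ -n "${ROUTING_KEY:-}" ]; then
  curl -X POST "https://events.pagerduty.com/v2/enqueue" \
    -H "Content-Type: application/json" \
    -d "$BODY"
fi
```

Sending the same document with `"event_action": "acknowledge"` or `"resolve"` (plus the `dedup_key` returned on trigger) updates the existing incident instead of opening a new one.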
Creating a PagerDuty Service and Integration
Create a service and add integrations to begin receiving incident notifications.
Steps:
1. In PagerDuty, go to Services -> Service Directory and click New Service. On the next screen, you will be guided through several steps.
2. Name: Enter a Name and Description based on the function that the service provides and click Next to continue.
3. Assign: Select Generate a new Escalation Policy or Select an existing Escalation Policy. Click Next to continue.
4. Integrations: Select the integration(s) used to send alerts to this service from the search bar, dropdown, or list of popular integrations. In this case, select Elastic Alerts.
5. Click Create Service. Take note of your Integration Key and Integration URL.
Creating a Connector
Steps:
1. Go to Stack Monitoring -> Alerts and Rules -> Manage Rules.
2. Go to Rules and Connectors -> Connectors -> Create connector.
3. Select PagerDuty connector.
4. Enter a Connector Name. Also, enter the API URL (optional) and the Integration Key.
5. Click Save.
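If you prefer automation over the UI, recent Kibana versions also expose a connector API. This is a sketch under assumptions: KIBANA_URL, the credentials, and the integration key are placeholders for your own deployment's values.

```shell
# Request body for Kibana's connector API; .pagerduty is the PagerDuty
# connector type, and routingKey holds the Integration Key (placeholder).
BODY='{
  "name": "pagerduty-alerts",
  "connector_type_id": ".pagerduty",
  "secrets": { "routingKey": "YOUR_INTEGRATION_KEY" }
}'

# Only send the request when a Kibana URL has been provided.
if [ -n "${KIBANA_URL:-}" ]; then
  curl -X POST "${KIBANA_URL}/api/actions/connector" \
    -H "kbn-xsrf: true" \
    -H "Content-Type: application/json" \
    -u "elastic:${ELASTIC_PASSWORD:-changeme}" \
    -d "$BODY"
fi
```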
Editing Rules to Monitor via PagerDuty
Edit rule and add a connector.
Steps:
1. Choose a rule for which you want to receive alerts via PagerDuty, then click Edit Rule.
2. Specify the interval in minutes at which the alert should be sent once the metric crosses the threshold.
3. Select PagerDuty as the connector type.
4. Enter a Summary, choose the severity level, and click Save.
Curator is an open-source index management tool for Elasticsearch. It allows you to create, delete, and disable indices, as well as merge index segments.
This blog post describes how to install Curator and how to delete old indices based on time.
Installing Curator
pip3 install elasticsearch-curator
Check the Curator version:
curator --version
Note: If you encounter this error while installing:
ERROR: Cannot uninstall ‘PyYAML’. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Execute the command below to fix it.
sudo -H pip3 install --ignore-installed PyYAML
Create a curator.yml file
In this file, indicate the host, port, username, and password.
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - 192.168.1.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  username: elastic
  password: Password
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']
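Before adding any delete actions, it can be worth confirming that the client settings above actually reach the cluster. curator_cli (installed alongside Curator) accepts the same configuration file; the config path here is an example.

```shell
# List the indices visible to Curator using the same curator.yml
# (the path is an example; point it at your own file).
CHECK_CMD="curator_cli --config ./curator.yml show_indices"

# Run the check only where curator_cli and the config file actually exist.
if command -v curator_cli >/dev/null 2>&1 && [ -f ./curator.yml ]; then
  $CHECK_CMD
fi
```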
The example configuration below deletes indices matching the prefix pattern basketbal-scores- (full index name format: basketbal-scores-2022.04.01) that are older than 14 days.
---
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 14 days (based on index name), for
      basketbal-scores- prefixed indices. Ignore the error if the filter
      does not result in an actionable list of indices (ignore_empty_list)
      and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: basketbal-scores-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 14
      exclude:
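Before scheduling the job, a dry run lets you verify which indices the filters would match without deleting anything. The file paths below mirror the cron entry and are assumptions about your layout.

```shell
# Preview the indices the action file would delete; with --dry-run Curator
# only logs what it would do. Paths are examples.
DRY_RUN_CMD="curator --dry-run --config /home/scripts/curator.yml /home/scripts/delete_indices_time_base.yml"

# Execute only where Curator and the config file actually exist.
if command -v curator >/dev/null 2>&1 && [ -f /home/scripts/curator.yml ]; then
  $DRY_RUN_CMD
fi
```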
Finally, schedule Curator in cron to run the purge daily:

# Housekeep indices more than 14 days old
0 0 * * * /usr/local/bin/curator /home/scripts/delete_indices_time_base.yml --config /home/scripts/curator.yml >> /home/scripts/log/curator_purging_time_base.log 2>&1