Enable logging and monitoring so that we can observe our deployments in Kibana.
Integrate Elastic alerts with PagerDuty.
Enabling Logging and Monitoring
In production, the best practice is to send deployment logs and metrics to a dedicated monitoring deployment. Monitoring indexes logs and metrics into Elasticsearch, and these indexes consume storage, memory, and CPU cycles like any other index. By using a separate monitoring deployment, we avoid affecting other production deployments and can view the logs and metrics even when a production deployment is unavailable. A minimum of three monitoring nodes is needed to make monitoring highly available.
Steps:
The Monitoring deployment must be on the same major version and in the same region as your production deployments. Once the monitoring deployment has been set up, follow the steps below to enable monitoring and logging; a quick way to verify that data is arriving is sketched after the steps.
1. Go to one of your deployments, then go to Logs and Metrics.
2. Choose the Monitoring deployment you want to ship data to. Select Logs and Metrics, then click Save.
3. Repeat these steps for all your deployments.
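If you want to confirm from outside the Cloud console that logs and metrics are actually arriving, one option is to list the monitoring and log indices on the monitoring deployment. The sketch below uses Python's requests library; the Elasticsearch URL, the credentials, and the elastic-cloud-logs-* index pattern are assumptions to adjust for your own deployments.

# Minimal sketch: confirm that logs and metrics are being shipped by listing
# the monitoring and log indices on the monitoring deployment.
# URL, credentials, and the elastic-cloud-logs-* pattern are placeholders.
import requests

MONITORING_ES_URL = "https://<monitoring-deployment>.es.io:9243"  # placeholder
AUTH = ("elastic", "<password>")                                  # placeholder

resp = requests.get(
    f"{MONITORING_ES_URL}/_cat/indices/.monitoring-*,elastic-cloud-logs-*",
    params={"v": "true", "s": "index", "expand_wildcards": "all"},
    auth=AUTH,
)
resp.raise_for_status()
print(resp.text)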
Viewing Cluster Listing
Let’s view our clusters.
Steps:
1. Go to your Monitoring deployment.
2. Open Kibana.
3. Go to Stack Monitoring, which is under the Management section. When you open Stack Monitoring for the first time, you will be asked to acknowledge the creation of a set of default alerting rules. These rules are configured to detect and notify on various conditions across your monitored clusters, including cluster health, resource utilization, and errors and exceptions for Elasticsearch, in real time.
4. Click on one cluster to see its overview. (The same cluster listing can also be pulled directly from the monitoring indices, as sketched below.)
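The cluster listing that Stack Monitoring renders is built from documents in the monitoring indices, so a rough equivalent can be queried directly from Elasticsearch. The sketch below aggregates on the cluster_uuid field, assuming the classic .monitoring-es-* index pattern; the URL and credentials are placeholders.

# Rough sketch: list monitored clusters by aggregating on cluster_uuid in the
# monitoring indices. The .monitoring-es-* pattern and field name assume the
# classic monitoring document layout; adjust for your stack version.
import requests

MONITORING_ES_URL = "https://<monitoring-deployment>.es.io:9243"  # placeholder
AUTH = ("elastic", "<password>")                                  # placeholder

query = {
    "size": 0,
    "aggs": {
        "clusters": {
            "terms": {"field": "cluster_uuid", "size": 100}
        }
    },
}

resp = requests.post(
    f"{MONITORING_ES_URL}/.monitoring-es-*/_search",
    json=query,
    auth=AUTH,
)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["clusters"]["buckets"]:
    # Each bucket key is the UUID of one monitored cluster; doc_count is the
    # number of monitoring documents collected for it.
    print(bucket["key"], bucket["doc_count"])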
Review and Modify Existing Stack Monitoring Rules
The Elastic Stack monitoring feature provides out-of-the-box Kibana alerting rules. These rules are preconfigured based on the best practices recommended by Elastic, but we can modify them to meet our requirements.
Steps:
1. Go to Alerts and rules > Manage rules.
2. Review each rule and either edit it or retain its default values. (The same rules can also be listed through the Kibana alerting API, as sketched below.)
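The sketch below lists the rules with the alerting _find endpoint and prints each rule's id, name, and enabled status; the Kibana URL and credentials are placeholders for your monitoring deployment.

# Sketch: list Kibana alerting rules via the alerting API.
# Kibana URL and credentials are placeholders.
import requests

KIBANA_URL = "https://<monitoring-deployment>.kb.io:9243"  # placeholder
AUTH = ("elastic", "<password>")                           # placeholder

resp = requests.get(
    f"{KIBANA_URL}/api/alerting/rules/_find",
    params={"per_page": 100},
    auth=AUTH,
)
resp.raise_for_status()
for rule in resp.json()["data"]:
    print(rule["id"], rule["name"], "enabled:", rule["enabled"])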
Setting up Alerts Using the PagerDuty Connector and Action
The PagerDuty connector uses the v2 Events API to trigger, acknowledge, and resolve PagerDuty alerts.
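For context, the sketch below triggers a test alert directly against the v2 Events API, which is roughly what the connector does on your behalf. The routing key is the Integration Key of the PagerDuty service created in the next section; all values shown are placeholders.

# Sketch: trigger a PagerDuty alert directly via the v2 Events API, the same
# API the Kibana PagerDuty connector calls. All values are placeholders.
import requests

event = {
    "routing_key": "<integration-key>",   # placeholder Integration Key
    "event_action": "trigger",            # "acknowledge" and "resolve" are also valid
    "dedup_key": "demo-cluster-health",   # lets later events ack/resolve this alert
    "payload": {
        "summary": "Test alert from the Elastic monitoring walkthrough",
        "source": "my-production-deployment",
        "severity": "warning",            # critical | error | warning | info
    },
}

resp = requests.post("https://events.pagerduty.com/v2/enqueue", json=event)
resp.raise_for_status()
print(resp.json())  # expect a "success" status in the response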
Creating a PagerDuty Service and Integration
Create a service and add integrations to begin receiving incident notifications.
Steps:
1. In PagerDuty, go to Services -> Service Directory and click New Service. On the next screen you will be guided through several steps.
2. Name: Enter a Name and Description based on the function that the service provides and click Next to continue.
3. Assign: Select Generate a new Escalation Policy or Select an existing Escalation Policy. Click Next to continue.
4. Integrations: Select the integration(s) you use to send alerts to this service from the search bar, dropdown, or the list of popular integrations. In this case, we will select Elastic Alerts.
5. Click Create Service. Take note of your Integration Key and Integration URL.
Creating a Connector
Steps:
1. Go to Stack Monitoring -> Alerts and Rules -> Manage Rules.
2. Go to Rules and Connectors -> Connectors -> Create connector.
3. Select PagerDuty connector.
4. Enter a Connector name, the API URL (optional), and the Integration Key.
5. Click Save. (The same connector can also be created through the Kibana connectors API, as sketched below.)
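The sketch below creates the connector programmatically. The config and secrets field names (apiUrl and routingKey) reflect recent Kibana versions but should be checked against yours; the Kibana URL, credentials, and integration key are placeholders.

# Sketch: create the PagerDuty connector via the Kibana connectors API instead
# of the UI. Field names (apiUrl, routingKey) should be verified against your
# Kibana version; URL, credentials, and the integration key are placeholders.
import requests

KIBANA_URL = "https://<monitoring-deployment>.kb.io:9243"  # placeholder
AUTH = ("elastic", "<password>")                           # placeholder

resp = requests.post(
    f"{KIBANA_URL}/api/actions/connector",
    headers={"kbn-xsrf": "true"},
    auth=AUTH,
    json={
        "name": "PagerDuty - production alerts",
        "connector_type_id": ".pagerduty",
        "config": {"apiUrl": "https://events.pagerduty.com/v2/enqueue"},  # optional
        "secrets": {"routingKey": "<integration-key>"},
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # connector id, used when attaching actions to rules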
Editing Rules to Monitor via PagerDuty
Edit rule and add a connector.
Steps:
1. Choose a rule that you want to monitor and receive alerts for via PagerDuty, then click Edit rule.
2. Specify the interval, in minutes, at which you want to receive the alert once the metric crosses the threshold.
3. Select PagerDuty as the connector type.
4. Enter a Summary, choose the severity level, and click Save. (A quick way to verify the wiring through the Kibana API is sketched below.)
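The sketch below looks up the .pagerduty connector(s) and prints the rules whose actions reference them, confirming that alerts will be routed to PagerDuty; the Kibana URL and credentials are placeholders.

# Sketch: verify the PagerDuty wiring by finding the .pagerduty connector(s)
# and listing the rules whose actions reference them. URL and credentials are
# placeholders.
import requests

KIBANA_URL = "https://<monitoring-deployment>.kb.io:9243"  # placeholder
AUTH = ("elastic", "<password>")                           # placeholder

connectors = requests.get(f"{KIBANA_URL}/api/actions/connectors", auth=AUTH)
connectors.raise_for_status()
pagerduty_ids = {
    c["id"] for c in connectors.json() if c["connector_type_id"] == ".pagerduty"
}

rules = requests.get(
    f"{KIBANA_URL}/api/alerting/rules/_find", params={"per_page": 100}, auth=AUTH
)
rules.raise_for_status()
for rule in rules.json()["data"]:
    # Print rules that have at least one action pointing at a PagerDuty connector.
    if any(action["id"] in pagerduty_ids for action in rule.get("actions", [])):
        print(rule["name"], "->", [a["id"] for a in rule["actions"]])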