
Dashboard for Nginx logs with Kibana+Elasticsearch

Not long ago I described how to configure the ELK Stack for centralized log storage. Today I want to show in detail how to build a coordinate geographic map from nginx logs and put together a dashboard around it. Such a dashboard makes it very convenient to monitor the state of a web project: investigate who is making requests and analyze errors.

Introduction

Let’s start building the dashboard with the hardest part – setting up the geo map of requests. The official site has a detailed manual on creating a geoip map. Everything there seems clear: no special settings are required, everything works out of the box. But for me it did not work, even though I did everything as described. I had to dig deeper into elasticsearch and its templates to figure out what the reason was.

The thing is that the method described in those instructions works out of the box only if you use the standard index template in the logstash-* format. Most likely you will have many different patterns and indexes once the system goes into production.

The main difficulty is that for the geoip map to work, the index template must contain a field of type geo_point. Once an index has been created, a field type can no longer be changed. Converting an ip address into coordinates is the easy part: the geoip module in logstash does that. But those coordinates arrive as plain numbers, and you cannot turn a numeric field into geo_point after the fact. The template with the right field types has to exist from the very beginning.
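To make the difference concrete, here is a simplified sketch (not an exact dump) of the two mappings. Without a prepared template, dynamic mapping infers something like this when the geoip filter writes its numbers:

"geoip": {
  "properties": {
    "location": {
      "properties": {
        "lat": { "type": "float" },
        "lon": { "type": "float" }
      }
    }
  }
}

while the Coordinate Map requires the field to be mapped like this:

"geoip": {
  "properties": {
    "location": { "type": "geo_point" }
  }
}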

I hope that explains it 🙂 If it is not clear right away, it should become clearer as the story goes on. I spent quite a while poking around and googling before I figured out this kitchen myself.

Going forward, I will assume that your elasticsearch and kibana are configured roughly as in my earlier instructions. The logstash filter responsible for processing nginx logs looks like this:

if [type] == "nginx-ext-access" {
    grok {
        match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
        overwrite => [ "message" ]
    }
    mutate {
        convert => ["response", "integer"]
        convert => ["bytes", "integer"]
        convert => ["responsetime", "float"]
    }
    geoip {
        source => "clientip"
        target => "geoip"
        add_tag => [ "nginx-geoip" ]
    }
    date {
        match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
        remove_field => [ "timestamp" ]
    }
    useragent {
        source => "agent"
    }
}
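For reference, the %{COMBINEDAPACHELOG} pattern matches nginx’s default combined log format, so a line like the following (a made-up example) is parsed into the clientip, timestamp, response, bytes, and agent fields used above:

192.168.1.10 - - [10/Feb/2019:13:55:36 +0300] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Linux x86_64)"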

And this is how the logs are shipped to elasticsearch:

if [type] == "nginx-ext-access" {
    elasticsearch {
        hosts => "localhost:9200"
        index => "nginx-ext-%{+YYYY.MM.dd}"
    }
}

Creating an index template

As I said above, for the geoip map to work you must have a field of type geo_point in the index template. If there is none, you will get an error as soon as you try to create a Coordinate Map visualization:

No Compatible Fields: The "nginx-*" index pattern does not contain any of the following field types: geo_point

I tried all sorts of things after getting this error. I checked that the geoip module was working and looked at the fields with coordinates derived from the ip address. Everything was in order and in place.

Geoip request data in the document fields

But the geoip map in Kibana still did not work. After googling the topic for a while, I slowly began to understand what the matter was.

First, let’s look at the mapping of the index with nginx logs. To do this, go to Management -> Index Management, choose the index, and open the Mapping tab. We are interested in the location field.

Mapping of the index with nginx logs

It has type float, while we need geo_point, as in the article on the official site.

Geo_point field type for coordinates
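The same check can be made from Dev Tools without the UI; the index name here is just an example, substitute one of your own:

GET /nginx-ext-2019.01.01/_mapping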

Then I started figuring out how to change the field type. It turned out this cannot be done: field types can only be set at the moment the index is created from a template. So the task is to make your own template with the right fields.

First, let’s see what templates are currently installed. To do this, go to Dev Tools and run the command:

GET /_template

View template in elasticsearch

Pay attention to the logstash template: it already has everything we need. If your indexes match the logstash-* pattern, you do not need to configure anything, it all works out of the box. We will add a new nginx* template with the field types the geoip map needs.

Run the following request to create an nginx template similar to the logstash one.

PUT _template/nginx
{
  "index_patterns": [
    "nginx*"
  ],
  "settings": {
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "message_field": {
            "path_match": "message",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false
            }
          }
        },
        {
          "string_fields": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false,
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "@version": {
          "type": "keyword"
        },
        "geoip": {
          "dynamic": true,
          "properties": {
            "ip": {
              "type": "ip"
            },
            "location": {
              "type": "geo_point"
            },
            "latitude": {
              "type": "half_float"
            },
            "longitude": {
              "type": "half_float"
            }
          }
        }
      }
    }
  },
  "aliases": {}
}

Check the list of available templates.

New nginx log template in elasticsearch
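You can also request just the new template to make sure it was saved:

GET /_template/nginx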

Everything is good. New indexes that match this template will now get the required fields. You can either delete the current indexes so they are recreated, or wait until new ones are created according to your rotation rules.
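If you choose to delete, something like this in Dev Tools removes all matching indexes (assuming they follow the nginx-ext-* naming from the output above; the data cannot be recovered):

DELETE /nginx-ext-*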

Before moving on, check that the index pattern actually has a geo_point field. Go to Management -> Index Patterns and look at the fields of the index, refreshing them first with Refresh field list.

View fields for the index nginx in kibana

If yours looks the same, you can move on.

Just in case, I will also describe the wrong path I took at first while trying to solve the template problem. I found out that logstash keeps its template in the /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch directory. I decided to change it and simply edited the elasticsearch-template-es6x.json file, modifying the index template. I restarted logstash, but nothing changed. It turns out this template is pushed to elasticsearch only on the first start and is never updated afterwards. It has to be deleted from elasticsearch for logstash to install it again with the changes. I did not do that and just uploaded a new template instead.
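For completeness: if you do prefer logstash to manage the template, its elasticsearch output has options for that. A rough sketch, where the file path and template name are my own examples:

elasticsearch {
    hosts              => "localhost:9200"
    index              => "nginx-ext-%{+YYYY.MM.dd}"
    template           => "/etc/logstash/templates/nginx.json"
    template_name      => "nginx"
    template_overwrite => true
}

With template_overwrite set to true, the template in elasticsearch is replaced on every logstash start instead of only the first one.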

Setting up a Kibana coordinate map

Now let’s create a geographic map showing the distribution of nginx requests by ip address. Go to the Visualize section and add a Coordinate Map. Select the index with nginx logs and point the map at the field with the coordinates: geoip.location.

Setting up a Coordinate Map in Kibana

Run the visualization and see the result.

Geoip map in Kibana and Elasticsearch for nginx requests

Now this map can be added to a dashboard along with the rest of the graphs. I will not describe how to add regular visualizations. Not everything there is obvious, but it is not that difficult either. It is better to experiment and draw different graphs yourself to find out which visualization suits you best. I picked mine to my own liking, redrawing and reworking them many times until I was satisfied with the result.

Configuring Dashboard for nginx

This is the dashboard I set up in Kibana for nginx logs (the picture is large; open it in a separate tab to view).

Dashboard for nginx in kibana

It shows the following information:

  1. Geoip map.
  2. Distribution of requests by country.
  3. List of the most popular urls.
  4. List of the most active IPs.
  5. Distribution of requests by response code.
  6. Traffic.
  7. The raw nginx logs themselves.

Such a dashboard makes it very convenient to investigate incidents and simply watch the statistics. For example, select an error code and view all the information about it. The ips that spam requests stand out immediately, and you can instantly see where they come from and which urls they are hitting. And so on. Very convenient overall. I cannot imagine a big web project without such a dashboard. Log analysis used to be much harder for me. How did I ever admin without such a tool 🙂 Live and learn.
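Here are a few filters of the kind I mean, typed into the dashboard’s search bar in Lucene syntax. The field names come from the grok filter above; the ip is a placeholder:

response:404
response:[500 TO 599]
clientip:203.0.113.5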

Conclusion

I thought about dashboards for nginx logs for a long time, drawing graphs and selecting data, and in the end settled on this variant. I could not come up with anything else useful to display. By the way, the geo map itself is here more for beauty; I do not really use it and see little practical value in it. If you have suggestions for other useful data to display, please share. You could of course add information on user agents, operating systems, and browsers, but it seems to me that such things are more convenient to look at in a dedicated analytics service, where the data will be more accurate.

It would also be worth adding request_time, upstream_response_time, upstream_cache_status, and so on to the nginx logs, then parsing that information and building a separate dashboard for monitoring performance and upstream responses. But that is a separate topic; here I have covered the general information for initial analysis.
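For reference, a rough sketch of what such an extended log format could look like on the nginx side. The format name timed_combined is made up, and you would also need to extend the grok pattern in logstash to parse the three extra fields:

log_format timed_combined '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent" '
                          '$request_time $upstream_response_time $upstream_cache_status';

access_log /var/log/nginx/access.log timed_combined;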
