Docker containers to build an ELK stack (Elasticsearch + Logstash + Kibana)

This time we are going to create an ELK (Elasticsearch + Logstash
+ Kibana) Docker container stack that will let us keep track of the system logs.

Our ELK stack will consist of:

Nginx. Provides the proxy in front of Kibana and the authentication layer.
Logstash. Parses the incoming logs and inserts the data into Elasticsearch.
Kibana. Provides the visualization and exploration tools.
Elasticsearch. Storage, indexing, and search solution for the whole stack.

We will need to create a series of Dockerfiles and configuration files; the structure will look like this:

├── docker-compose.yml
├── kibana
│   ├── config
│   │   └── kibana.yml
│   ├── Dockerfile
│   └── entrypoint.sh
├── kibana-nginx
│   ├── Dockerfile
│   ├── kibana.conf
│   ├── kibana.htpasswd
│   └── nginx.conf
└── logstash
    ├── config
    │   └── nginx-syslog.conf
    └── Dockerfile
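
Before filling in each file we can create this skeleton in one go; a minimal sketch:

mkdir -p kibana/config kibana-nginx logstash/config
touch docker-compose.yml kibana/Dockerfile kibana/entrypoint.sh kibana/config/kibana.yml
touch kibana-nginx/Dockerfile kibana-nginx/kibana.conf kibana-nginx/kibana.htpasswd kibana-nginx/nginx.conf
touch logstash/Dockerfile logstash/config/nginx-syslog.conf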

The first file is docker-compose.yml:

nginx:
  build: kibana-nginx
  links:
    - kibana
  ports:
    - "80:80"
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  command: "logstash -f /opt/logstash/server/etc/conf.d/"
  build: logstash
  volumes:
    - ./logstash/config:/opt/logstash/server/etc/conf.d
  ports:
    - "5000:5000"
    - "5000:5000/udp"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch

As we can see, 3 of the 4 Docker containers will be built from the definitions we place in the three folders kibana-nginx, kibana, and logstash.
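
If you prefer to build the three custom images before starting anything, docker-compose can do it in a single step:

docker-compose build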

The kibana-nginx container, which will act as the proxy in front of Kibana, is based on the official nginx image and copies in the three configuration files. Its Dockerfile:

FROM nginx
COPY kibana.htpasswd /etc/nginx/conf.d/kibana.htpasswd
COPY nginx.conf /etc/nginx/nginx.conf
COPY kibana.conf /etc/nginx/sites-enabled/kibana.conf

The file containing the password for Kibana:

kibana:$apr1$Z/5.LALa$P0hfDGzGNt8VtiumKMyo/0
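
The entry above is just an example (user kibana). You can generate your own user/password pair with the htpasswd tool from the apache2-utils package; here your-password is a placeholder to replace:

htpasswd -bc kibana.htpasswd kibana your-password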

The nginx configuration:

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    access_log    /var/log/nginx/access.log;

    include       /etc/nginx/conf.d/*.conf;
    include       /etc/nginx/sites-enabled/*;
}

The site configuration:

server {
    listen 80 default_server;
    server_name logs.dondocker.com;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
        proxy_pass http://kibana:5601;
    }
}
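
Once the stack is running we can check that the proxy really enforces authentication; a quick test, assuming kibana/your-password is the pair stored in kibana.htpasswd:

curl -I http://localhost/
curl -I -u kibana:your-password http://localhost/

The first request should come back with 401 Unauthorized; the second should reach Kibana through the proxy.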

The kibana container, which provides the visualization and exploration tools, is based on the official kibana image:

FROM kibana:latest
RUN apt-get update && apt-get install -y netcat
COPY entrypoint.sh /tmp/entrypoint.sh
RUN chmod +x /tmp/entrypoint.sh
RUN kibana plugin --install elastic/sense
CMD ["/tmp/entrypoint.sh"]

The entrypoint.sh for this image:

#!/usr/bin/env bash
# Wait for the Elasticsearch container to be ready before starting Kibana.
echo "Stalling for Elasticsearch"
while true; do
    nc -q 1 elasticsearch 9200 2>/dev/null && break
done
echo "Starting Kibana"
exec kibana
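
Note that -q is a flag of the traditional (Debian) netcat; if your base image ships a netcat without it, an equivalent sketch using a zero-I/O port check would be:

while ! nc -z elasticsearch 9200; do sleep 1; done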

The Kibana configuration file will be:

# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://elasticsearch:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem

# The default application to load.
default_app_id: "discover"

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true

# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt

# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
# log_file: ./kibana.log

# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
 - plugins/dashboard/index
 - plugins/discover/index
 - plugins/doc/index
 - plugins/kibana/index
 - plugins/markdown_vis/index
 - plugins/metric_vis/index
 - plugins/settings/index
 - plugins/table_vis/index
 - plugins/vis_types/index
 - plugins/visualize/index

The logstash container will parse the incoming logs and insert the data into Elasticsearch. This image is based on the official logstash image; its Dockerfile:

FROM logstash:latest
COPY config/nginx-syslog.conf /opt/logstash/server/etc/conf.d/nginx-syslog
EXPOSE 5000
CMD ["logstash"]

Y el fichero de configuración:

input {
  tcp {
    port => "5000"
    type => "syslog"
  }
  udp {
    port => "5000"
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
      overwrite => [ "message" ]
    }

    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }

    useragent {
      source => "agent"
    }

    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }

    geoip {
      source => "clientip"
      target => "geoip"
      add_tag => [ "nginx-geoip" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
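
Once everything is up we can test the pipeline end to end by pushing a sample combined-format access log line into the TCP input (a made-up line, just for testing):

echo '127.0.0.1 - - [22/May/2016:12:00:00 +0000] "GET / HTTP/1.1" 200 612 "http://example.com" "Mozilla/5.0"' | nc localhost 5000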

And now we bring up the Docker container stack:

docker-compose up -d
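
Compose will build the three custom images on the first run. We can check that the four containers are up, and follow their output, with:

docker-compose ps
docker-compose logs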

And that's it, we can now see Kibana in action:

http://{{your-ip}}:5601/
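
That port talks to Kibana directly; the nginx proxy we configured serves the same UI with basic auth on port 80. To confirm that Logstash is writing to Elasticsearch we can list the indices (by default Logstash creates one logstash-YYYY.MM.DD index per day):

curl http://{{your-ip}}:9200/_cat/indices?v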

And with this we have managed to run ELK with Docker containers in a simple way.

Until next time.
