Setting up a CI/CD pipeline with centralized logging

An application developed by two engineers began to grow, and more developers needed to be onboarded. What had worked for two people — manual deployment to a web server and grepping through text-file logs when problems arose — would not scale to a larger team and multiple instances of the product deployed in the cloud. As the new developer on the team, my mission was to fix that. I started by simplifying deployment: I packaged the application together with a web server into a Docker container.

Because we were already using Amazon Web Services, I could use its infrastructure to run our servers as a cluster. After writing so-called task definitions, I could tell the cluster to pull our uploaded Docker image and run it as a service, making it available on the web.

To make deploying new code easier, I used a feature of GitLab, our web-based Git repository manager. I configured a CI/CD pipeline through a script that automatically builds committed code, uploads it as tagged container images, and restarts the AWS services via the AWS command line interface (a sketch of such a pipeline follows below).

The only thing still missing was a centralized logging solution that would let us analyze logs from all of our systems. Elasticsearch proved to be the answer: after routing every container's stdout through Fluentd (a data collector and preprocessor) to the Elasticsearch engine, we could easily search and analyze our logs in a neat web interface called Kibana. Sketches of the task definition and Fluentd configuration involved are shown below as well.
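To give a flavor of what such a pipeline script can look like, here is a minimal .gitlab-ci.yml sketch. The registry path, cluster name, and service name are placeholders, registry and AWS credentials are assumed to be provided by the runner, and the deploy step assumes the task definition references the moving "latest" tag — it is an illustration, not our production configuration.

```yaml
# Minimal sketch; registry path, cluster and service names are placeholders.
stages:
  - build
  - deploy

variables:
  IMAGE: registry.example.com/my-group/my-app   # hypothetical image path

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # Build the container and tag it with the commit SHA plus a moving "latest" tag
    - docker build -t "$IMAGE:$CI_COMMIT_SHORT_SHA" -t "$IMAGE:latest" .
    - docker push "$IMAGE:$CI_COMMIT_SHORT_SHA"
    - docker push "$IMAGE:latest"

deploy:
  stage: deploy
  image: amazon/aws-cli
  script:
    # Restart the ECS service so it pulls the freshly pushed "latest" image
    - aws ecs update-service --cluster my-cluster --service my-app-service --force-new-deployment
```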
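For the log routing, one common approach — and roughly what is described above — is to point each container's log driver at a Fluentd instance directly in the ECS task definition. The excerpt below is a sketch with placeholder names and addresses, not our actual task definition (JSON does not allow comments, so the hedging lives here in the prose).

```json
{
  "family": "my-app",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "registry.example.com/my-group/my-app:latest",
      "memory": 512,
      "portMappings": [{ "containerPort": 80 }],
      "logConfiguration": {
        "logDriver": "fluentd",
        "options": {
          "fluentd-address": "fluentd.example.internal:24224",
          "tag": "my-app"
        }
      }
    }
  ]
}
```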
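On the Fluentd side, a small configuration is enough to accept the forwarded container output and push it into Elasticsearch, where Kibana can then search and visualize it. The hostname and ports below are assumptions, and the match section relies on the fluent-plugin-elasticsearch plugin being installed.

```
# fluent.conf sketch: accept logs forwarded by Docker's fluentd log driver
# and push them into Elasticsearch. Hostname and ports are placeholders;
# the match section requires the fluent-plugin-elasticsearch plugin.

# Listen for records sent by the Docker fluentd log driver
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Forward records tagged "my-app" (the tag set in the task definition) to Elasticsearch;
# logstash_format writes daily logstash-style indices that Kibana picks up
<match my-app.**>
  @type elasticsearch
  host elasticsearch.example.internal
  port 9200
  logstash_format true
</match>
```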