Docker

Installing the newest version of docker-compose should solve this issue.

The current docker-compose was most likely installed via 'apt-get install'.

If 'docker-compose -v' does not report 1.27.4 (the newest version at the time of writing), run the following:

'sudo curl -L "https://github.com/docker/compose/releases/download/VERSION/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose'

Replace VERSION with 1.27.4, the currently newest version.

Afterwards, apply executable permissions to the downloaded binary:
'sudo chmod +x /usr/local/bin/docker-compose'

Now 'docker-compose -v' should report version 1.27.4 and the docker-compose file should no longer throw an error.

docker-compose invalid type in volume

I am getting the following error when I run docker-compose up: *volumes contains an invalid type, it should be a string*

My docker-compose file:

data-emitter:
  build:
    context: DataEmitter
    dockerfile: ./Dockerfile
  volumes:
    - type: bind
      source: ./DataEmitter/data/logs.txt
      target: /data/logs.txt
      read_only: true

Since there is a type: bind, what is the problem here?
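For context, this error usually means the installed docker-compose (or the declared compose file format) is too old to understand the long volume syntax: the type/source/target form was introduced with compose file format 3.2, and older parsers only accept the short 'host:container' string form, hence "it should be a string". A minimal sketch of a compose file in which the long syntax is accepted, reusing the service and paths from the question and assuming format version 3.2 or later:

version: "3.2"
services:
  data-emitter:
    build:
      context: DataEmitter
      dockerfile: ./Dockerfile
    volumes:
      - type: bind
        source: ./DataEmitter/data/logs.txt
        target: /data/logs.txt
        read_only: true

With a recent docker-compose (see the update steps above) this file should validate.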

How to properly back up Docker volumes?

My problem was to develop a backup strategy for a running Linux server with multiple Docker containers that use Docker volumes to store data. The problem is that it is not trivial to simply tar a volume the way you would a bind mount. The recommended way to achieve this is to first launch a new container and mount the volume from the running container, then to mount a local host directory as e.g. "/backup", and finally to pass a command that tars the contents of the data volume to a backup.tar file inside that "/backup" directory.
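A minimal sketch of that approach, assuming the running container is called my_app and its data volume is mounted at /dbdata inside it (both names are placeholders):

docker run --rm --volumes-from my_app -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

This starts a throwaway ubuntu container, attaches the volumes of my_app via --volumes-from, mounts the current host directory as /backup, and writes the tarred volume contents to backup.tar there.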

Preparing the first-time use of Docker (using Windows OS)

This problem occurred when using Docker for the first time. Before a docker-compose.yml is created and the command docker-compose up can be used to actually run the containers/commands defined in the file, some configuration has to be done on Windows. In this case a Docker volume was used, so the mapping to the volume had to be added to the docker-compose.yml in the form C:\Users\Cordula\Docker\dockerVolume:/var/container_home.

Then, under Settings > Advanced, adapt the settings for CPUs, Memory, Swap and Disk Image Size, because Docker containers might otherwise be stopped automatically due to excessively high load. The resource usage of Docker containers can be checked at any time with the command docker stats. Under Settings > Shared Drives, the drive on which Docker will run should be selected. When using Docker for the first time, access problems may occur due to internet security/anti-virus programmes, so it is recommended to pause these briefly for the duration of the set-up.
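A minimal sketch of how such a volume mapping could look in the docker-compose.yml, with a placeholder service and image name:

version: "3"
services:
  app:
    image: nginx:alpine
    volumes:
      - C:\Users\Cordula\Docker\dockerVolume:/var/container_home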

For local development it is common practice to spawn test containers on a local machine and keep them running during development.
An embedded broker/server is also an option, but RabbitMQ and Postgres don't offer embedded solutions out of the box, and setting up a custom one would take too much time.
What you can easily do instead is spawn short-lived local Docker containers with these technologies.
A good and well-tested solution to this problem is the Testcontainers test dependency.
With the provided RabbitMQ and Postgres images, you simply launch the containers when the test suite starts and they are automatically destroyed on shutdown.
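A sketch of how this can look with Testcontainers for Java and JUnit 5, assuming the postgresql and rabbitmq Testcontainers modules are on the test classpath (class name and image tags below are placeholders):

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.containers.RabbitMQContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class MessagingIntegrationTest {

    // Static containers are started once per test class and destroyed automatically afterwards.
    @Container
    private static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:13-alpine");

    @Container
    private static final RabbitMQContainer rabbit =
            new RabbitMQContainer("rabbitmq:3-management");

    @Test
    void containersAreAvailable() {
        // Connection details are generated per run; hand them to the application under test.
        System.out.println(postgres.getJdbcUrl());
        System.out.println(rabbit.getAmqpUrl());
    }
}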

Setting up a CI/CD pipeline with centralized logging

An application developed by two engineers began to grow and more developers were to be onboarded. What previously worked for two people (simple manual deployment on a web server and searching through text-file logs when problems arose) wouldn't work for a bigger team and multiple instances of the product deployed in the cloud. On a mission to cope with that as a new developer on the team, I started by simplifying the deployment and packaging the application together with a web server into a Docker container. Because we were already using Amazon Web Services, I could use its infrastructure to clusterize our servers. After writing so-called TaskDefinitions, I could tell the cluster to automatically use our uploaded Docker container and deploy it as a service so that it's available on the web. To make the task of deploying new code easier, I used a feature of our web-based Git repository manager GitLab: I configured a CI/CD pipeline through a script which builds committed code automatically, uploads it as tagged containers and restarts the AWS services via the AWS command line interface. The only thing still missing was a centralized logging solution enabling us to analyze logs from all of our systems. Installing Elasticsearch proved to be the solution: after routing all the Docker containers' stdout to the Elasticsearch engine via Fluentd (a data collector and preprocessor), we could analyze and search through our logs easily with a neat web interface called Kibana.
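A hedged sketch of what such a GitLab pipeline could look like, assuming an ECS cluster named app-cluster, a service named app-service and GitLab's built-in container registry (all names, stages and variables are placeholders, not the team's actual pipeline):

stages:
  - build
  - deploy

build_image:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy_service:
  stage: deploy
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  script:
    # Force ECS to pull the freshly pushed image and restart the service.
    - aws ecs update-service --cluster app-cluster --service app-service --force-new-deployment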

Serving multiple websites on a single host with Docker goes against the principles of Docker and micro-service architecture. Sure, you can do it, but what is stopping you from separating it into multiple Docker instances? That way, if there is a problem, not all of the sites go down. By separating the hosts, you make it easier to diagnose possible problems and minimize downtime.

The problem can be solved by using an Nginx reverse proxy. Each application is then exposed through a corresponding sub-domain.

Dockerfile:

FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY proxy.conf /etc/nginx/includes/proxy.conf

proxy.conf:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
proxy_http_version 1.1;
proxy_intercept_errors off;

nginx.conf:

# site1 config.
server {
    listen 80;
    server_name site1.myproject.com;

    location / {
        include /etc/nginx/includes/proxy.conf;
        proxy_pass http://site1_webserver_1;
    }

    access_log off;
    error_log /var/log/nginx/error.log error;
}

# site2 config.
server {
    listen 80;
    server_name site2.myproject.com;

    location / {
        include /etc/nginx/includes/proxy.conf;
        proxy_pass http://site2_webserver_1;
    }

    access_log off;
    error_log /var/log/nginx/error.log error;
}

# Default
server {
    listen 80 default_server;
    server_name _;
    root /var/www/html;
    charset UTF-8;

    access_log off;
    log_not_found off;
    error_log /var/log/nginx/error.log error;
}

The proxy_pass target is the name of the corresponding application's Docker container.
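For this name resolution to work, the proxy container and the site containers have to share a Docker network; site1_webserver_1 is simply the default container name docker-compose assigns to the webserver service of a project called site1. A minimal sketch of the proxy's own docker-compose.yml, assuming the sites' containers are attached to an external network called proxy_net (all names are placeholders):

version: "3"
services:
  proxy:
    build: .
    ports:
      - "80:80"
    networks:
      - proxy_net
networks:
  proxy_net:
    external: true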

Multiple websites on a single host with Docker

Out of the box it is not possible to serve multiple web applications from a single Docker host at the same time: since all of them want to use port 80, only one Docker instance is accessible.
