Beginner’s Guide to Docker and Docker Compose (Part 2/2)
Microservice architecture is fairly common these days. In a microservice architecture, your application depends on many small services that run separately and communicate with each other to process data. For example, imagine you have a web application that has an authentication service, a payment service, a file management service, a front-end service, a database service, and so on. For each of these services (separate applications), you create a separate container. While running your web application, you will want all the required service containers up and running, and properly configured for internal communication. Docker Compose is an extremely helpful tool in this scenario.
In part 1, we discussed how to containerize a single application using Docker. Here we will see how to manage a multi-container setup with Docker Compose.
Installation
Docker Compose is installed along with Docker. We discussed how to install Docker in part 1.
Get the Example Project
Check this GitHub repo where I have the example code for both part 1 and part 2. You can find the example project for this article inside the `docker-compose` folder. Our project has a FastAPI server, a React client, and a MySQL database. We will maintain them in three separate containers. Don’t worry if you don’t know any of these; I will explain what you need to know to understand our Docker setup.
(Feel free to clone the repo and play with it.)
Project structure
We have three folders: `server`, `client`, and `database`. In the `server` folder, you will find a `requirements.txt` (the list of libraries that our server application needs), a `Dockerfile`, and some `*.py` files (`main.py` is the entry point). The `Dockerfile` is the same as the one we discussed in part 1; the only change is that this time we install all the packages from the `requirements.txt` file.
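For reference, the server’s `Dockerfile` might look roughly like the following sketch. The Python version and the uvicorn command are assumptions on my part (uvicorn is the usual way to serve a FastAPI app, and port 8000 matches the proxy setting we will see later); check the repo for the actual file.

```dockerfile
# Start from an official Python base image (the version tag is an assumption)
FROM python:3.10

# Copy the dependency list and install packages first,
# so Docker can cache this layer between builds
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the rest of the application code
COPY . .

# Serve the FastAPI app (main.py is the entry point) on port 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```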
In the `client` folder, we have our React code. Let’s look at the `Dockerfile` here. First, we use the Node version 16 image; React is a JS library and Node is a JS runtime environment, so this is the appropriate image to use here. Then we copy the `package.json` file to the `client` folder inside the container and change our working directory to that folder. The `package.json` file lists all the JS packages that our client application needs. We install these dependencies by running `npm install` in the folder where we copied our `package.json` file. Then we copy all of our application code into the `client` folder in our container. Finally, we set the command that will start our application, which is `npm start`. (If you want to play with the application code, look at the `app.js` file.)
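The steps described above translate into a `Dockerfile` roughly like this sketch (the exact paths and Node tag may differ slightly from the repo’s version):

```dockerfile
# Node 16 image: React needs a JS runtime to build and serve the app
FROM node:16

# Copy package.json first and install dependencies,
# so this layer is cached when only application code changes
WORKDIR /client
COPY package.json .
RUN npm install

# Copy the rest of the application code
COPY . .

# Start the React development server (listens on port 3000)
CMD ["npm", "start"]
```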
For MySQL, we don’t need to write a `Dockerfile`. We will use an image directly from Docker Hub.
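Putting it together, the project layout looks roughly like this (file names beyond the ones mentioned above are illustrative, not taken from the repo):

```text
docker-compose/
├── docker-compose.yml
├── server/
│   ├── Dockerfile
│   ├── requirements.txt
│   └── main.py          # FastAPI entry point
├── client/
│   ├── Dockerfile
│   ├── package.json
│   └── src/             # React code, including app.js
└── database/            # data / startup scripts for MySQL
```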
Why use Docker Compose
Why do we want to use Docker Compose? As you saw in the previous post, to start one container we had to map the host machine’s port to the container’s port and set up a volume. Now we also have to ensure that, whenever we run our project, the database container, the server container, and the client container are all up. (This is a fairly simple application, but in a large application the list of services might be hard to keep track of, so you might forget to run a service and wonder why your application isn’t starting.) You also need to make sure all the containers run on the same network so that they can communicate. Managing all of this configuration every time you start working would be troublesome. Docker Compose helps us with that. With a single simple command, we can start all the containers with the proper setup. We can also view each service’s logs and status, and stop all the containers, with similarly simple commands.
Configuring Project for Docker Compose
We start by creating a `docker-compose.yml` file in the root of our project. Here we list all the services and their configurations. You can see that we already have this file in our project. Let’s go through the file and explain what each field is used for.
`version`
Different versions of the Docker Compose file format may define things differently. In the `version` field, we specify which version we are following in this compose file.

`services`
Here we define the different services of our project. For our example project, we need to run three services: the client, the server, and the database.

`build`
This field specifies the location of the `Dockerfile` for the service, so that Docker knows where to build the container from. For our project, we created a custom `Dockerfile` for both the client and the server, so for these services we specify their locations.

`image`
If we don’t need to build an image to run the container because we already have one, we can specify the image name here. This defines the image used to run the container. In our project, we didn’t need a custom `Dockerfile` for our database service, so we specify the image name from Docker Hub.

`ports`
Here we map the host machine’s ports to the container’s ports. Our client app runs on port 3000, so we map our host’s port 3000 to the container’s port 3000. (They don’t need to be the same. You could also use `5000:3000` to map the host’s port 5000 to the container’s port 3000; then, in the browser, you would go to `localhost:5000`.) The same goes for the server.

`expose`
Here we expose a container’s port on the network the container is running on. This allows other containers to communicate with this container through that port. The difference between `ports` and `expose` is that `expose` only opens the port internally on the network: the host cannot access the port, but all the containers on that network can. In our project, we don’t need to access the database from our host machine; it’s enough that the containers can communicate internally. So we expose port 3306 for the database.

`volumes`
Here we define the volumes we want to set for the service. A volume is defined as `<host-location>:<container-location>`. (In the example project, under the client service, you will see I have added a volume without a `:`; this is called an anonymous volume. If you are interested, check out this Stack Overflow answer. In the database service, the first volume persists the data. The second location can be used to run scripts when the container starts, though I haven’t used any scripts.)

`env_file`
If your application needs environment variables and you want to keep them in a `.env` file, you can specify that file here. (I have used some environment variables in the project and kept them in the `.env` file. I pushed it to GitHub so that you can run the project. Usually you don’t push the env file to GitHub, because it contains secrets such as passwords.)

`environment`
You can also set environment variables directly in the docker-compose file under this field. If you use both `env_file` and `environment`, `environment` will overwrite variables from the `env_file`.

`depends_on`
When one service depends on other services, we define that here. Docker will start those services first, and then start this one. In our project, the client depends on the server and the server depends on the database, so we specify that accordingly. When we tell Docker Compose to start the client, the start order will be: first the database, then the server, and then the client.
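To make the fields above concrete, here is a sketch of what such a `docker-compose.yml` could look like. The service names `contacts-server` and `contacts-db` come from this article; everything else (the client service name, the MySQL tag, the volume paths, the environment variable) is illustrative, so compare with the actual file in the repo.

```yaml
version: "3.8"

services:
  contacts-client:                 # client service name is an assumption
    build: ./client                # build from the client's Dockerfile
    ports:
      - "3000:3000"                # host port 3000 -> container port 3000
    volumes:
      - ./client:/client           # mount source code into the container
      - /client/node_modules       # anonymous volume (no host path)
    depends_on:
      - contacts-server

  contacts-server:
    build: ./server
    ports:
      - "8000:8000"
    env_file:
      - .env                       # variables loaded from a file
    depends_on:
      - contacts-db

  contacts-db:
    image: mysql:8.0               # image pulled directly from Docker Hub
    expose:
      - "3306"                     # reachable by other containers only
    volumes:
      - ./database/data:/var/lib/mysql   # persist the database's data
    environment:
      MYSQL_ROOT_PASSWORD: example       # illustrative; keep real secrets in .env
```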
To address one service from another service’s code, you can use the service name. For example, our client service needs to send requests to our server. In our client application’s `package.json` file, you will find this line:
"proxy": "http://contacts-server:8000/"
Similarly, our server can find the database by using `contacts-db` as the hostname, because that is the service name we used in our `docker-compose.yml` file.
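For instance, on the server side, the database connection settings would point at the `contacts-db` hostname instead of `localhost`. A minimal Python sketch of the idea follows; the variable names, defaults, and the `mysql+pymysql` driver string are my assumptions, not the repo’s actual code.

```python
import os

# Inside the Compose network, the MySQL service is reachable
# by its service name "contacts-db" on the exposed port 3306.
DB_HOST = os.getenv("DB_HOST", "contacts-db")
DB_PORT = int(os.getenv("DB_PORT", "3306"))
DB_USER = os.getenv("DB_USER", "root")
DB_NAME = os.getenv("DB_NAME", "contacts")

# A SQLAlchemy-style connection URL built from those settings
DATABASE_URL = f"mysql+pymysql://{DB_USER}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
print(DATABASE_URL)
```

Because the hostname comes from an environment variable with a sensible default, the same code works both inside Compose and (by overriding `DB_HOST`) outside it.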
Now, to start the services, open a terminal in the project’s root directory, where the `docker-compose.yml` file is, and run the following command:
docker compose up
Docker will start the containers. On the first run, it will pull and build the images it needs, so you will need to wait for that to finish. Once it completes, you can go to `localhost:3000` (or another port if you changed it) in your browser to load the front-end.
In the terminal where we ran the `up` command, you will see the logs of all the services, but they are mixed together. If you want to follow the logs of only one service, the server for example, you can run this command in another terminal:
docker compose logs -f --tail 100 <service-name>
(You can exit the log by pressing Ctrl+C.)
The `-f` flag is used to “follow” the log; if you omit it, the command will print all the logs up to that point and exit. The `--tail 100` option says, “show me the last 100 lines of the log.” You can use any number here.
If you press Ctrl+C in the terminal where you ran the `up` command, Docker will stop the containers. However, if you use the `-d` flag with the `up` command, the containers will run in detached mode. In that case, to stop the containers, you run:
docker compose stop
Next time, to start the containers again, you can use:
docker compose start
Another way to stop the containers is to run:
docker compose down
The difference between `stop` and `down` is that `stop` stops the containers but keeps them for later use, while `down` stops and removes them. After `down`, you will need to use the `up` command to create the containers again.
Bonus
If you want to get inside a container and execute commands there, you can use the `docker exec` command. For example, the following command will give you a bash terminal inside a container:
docker exec -it <container name> /bin/bash
Try to get into the database container and see if you can use the `mysql` CLI to check the database.
That’s It!
There are many other Docker commands and configuration options that I haven’t covered in this two-part series, to keep it simple. I believe these basic concepts will help you get started and become familiar with Docker. I encourage you to keep tweaking the configuration and explore other options on your own, to see how they affect the setup and get a better understanding of the power of Docker.