In this step, you will learn how to dockerize your Express application. Docker is a platform that allows you to develop, ship, and run applications in containers. Containers are lightweight, standalone, and executable packages that contain everything needed to run an application, including the code, runtime, system tools, libraries, and settings.
To dockerize your Express application, you need to create a Dockerfile in the root of your project. The Dockerfile contains instructions for building a Docker image that will run your application in a container.
# Starting with a lightweight Node.js image.
FROM node:18-bullseye-slim
# Setting the working directory inside the container. The name of the directory (`workspace`) is arbitrary.
WORKDIR /workspace
# Copying package.json and package-lock.json files to leverage Docker cache during builds.
COPY package*.json ./
# Install dependencies.
RUN npm install
# Copy the rest of the application code.
COPY . .
# Expose the port the app runs on. This serves more as documentation than an actual command.
EXPOSE 3000
# Start the application in development mode.
CMD ["npm", "run", "dev"]
Let’s break down the Dockerfile:
FROM node:18-bullseye-slim: This line specifies the base image for the Docker container. In this case, we are using a lightweight Node.js image. The 18-bullseye-slim tag refers to a specific version of the Node.js image based on Debian Bullseye (Debian 11) and optimized for minimal size and security.
WORKDIR /workspace: Sets the working directory inside the container where the application code will be copied. You can name this directory anything you like.
COPY package*.json ./: Copies the package.json and package-lock.json files to the working directory. This step is done separately from copying the rest of the application code to leverage Docker’s cache during builds (more on this later). The ./ indicates the current working directory inside the container (/workspace).
RUN npm install: Installs the application’s dependencies, as defined in the package.json file, inside the container. This is essential for setting up the application correctly within the Docker environment.
COPY . .: Copies the rest of the application code to the working directory inside the container. This includes all the source code, configuration files, and other assets needed to run the application. However, this step will not copy files listed in a .dockerignore file (if present). We will define a .dockerignore file in the next section.
EXPOSE 3000: This line indicates that the application inside the container will be listening on port 3000 (a default setting for Express servers). Note that this does not expose the port to the host machine; instead, port mapping is handled when running the container or in the docker-compose.yml file. So this line serves more as documentation than an actual command.
CMD ["npm", "run", "dev"]: This command starts the application in development mode. It runs the npm run dev script defined in the package.json file. This script starts the server using nodemon for live reloading during development. This command can be overridden when running the container or in the docker-compose.yml file.
Docker images are built in layers, each layer representing a specific set of changes to the base image. The FROM instruction in the Dockerfile specifies the base image from which the new image will be built. In this case, we are using the official Node.js image as the base image.
There are different versions and variants of the Node.js image available on Docker Hub, each optimized for different use cases. The 18-bullseye-slim tag refers to a version based on Debian Bullseye (Debian 11) and optimized for minimal size and security. Using a slim image helps reduce the size of the final Docker image, making it more efficient to build and deploy.
Another common base image is node:alpine, which is based on the Alpine Linux distribution and is even smaller than the slim variant. However, Alpine Linux uses a different package manager (apk) and has some differences in behavior compared to Debian-based images. Moreover, the Alpine distribution uses musl as the implementation of the C standard library, which can sometimes lead to compatibility issues with native Node.js modules that expect the glibc implementation. For most Node.js applications, the slim variant is a good balance between size and compatibility.
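If you do opt for the Alpine variant, keep in mind that native addons may need build tools installed via apk. The snippet below is only a sketch of that idea, not part of this tutorial’s setup; the extra packages are an assumption for the case where node-gyp has to compile native modules:
# Hypothetical Alpine-based variant (sketch only, not used in this tutorial)
FROM node:18-alpine
# Build tools for node-gyp; only needed if your dependencies include native modules (assumption)
RUN apk add --no-cache python3 make g++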
COPY package*.json ./
The COPY package*.json ./ instruction is a common pattern used in Dockerfiles to optimize the build process by leveraging Docker’s cache mechanism. When building a Docker image, each instruction in the Dockerfile creates a new layer in the image. Docker caches the layers to speed up subsequent builds by reusing previously built layers when the source code has not changed.
By copying the package.json and package-lock.json files separately from the rest of the application code, we ensure that the dependencies are installed only when these files change. If the application code changes but the dependencies remain the same, Docker will reuse the cached layer with the installed dependencies, saving time during the build process.
This pattern is particularly useful when working with Node.js applications, as the dependencies are defined in the package.json file and are less likely to change frequently compared to the application code. By copying the dependency files separately, we can take advantage of Docker’s cache and avoid reinstalling dependencies unnecessarily.
The CMD ["npm", "run", "dev"]
instruction specifies the command to run when the container starts. In this case, it runs the npm run dev
script defined in the package.json
file. The dev
script typically starts the server in development mode using tools like nodemon
for live reloading. This setup allows developers to make changes to the code and see the effects immediately without restarting the server manually.
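For reference, the scripts this tutorial relies on (dev, seed, and later start) might look roughly like the sketch below in package.json. The exact file names and flags are assumptions based on the nodemon and ts-node output shown later; your project may differ. (The comments are annotations only, since JSON itself does not allow comments.)
// Sketch of a package.json "scripts" section (hypothetical; "src/seed.ts" and "dist/server.js" are assumed names)
"scripts": {
  "dev": "nodemon --watch src --ext ts,json --exec ts-node src/server.ts",
  "seed": "ts-node src/seed.ts",
  "start": "node dist/server.js"
}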
When running the container in production, you might use a different command to start the application, such as npm start or node server.js. The CMD instruction can be overridden when running the container to specify a different command, providing flexibility for different environments and use cases.
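As an illustration of how that might look, here is a rough production-oriented variant of the Dockerfile. It is only a sketch under assumptions not made elsewhere in this tutorial: that a build script compiles the TypeScript sources into dist/ and that the compiled entry point is dist/server.js. Adjust the script names and paths to your project.
# Hypothetical production-oriented Dockerfile (sketch; script names and output paths are assumptions)
# Build stage: install all dependencies and compile the TypeScript sources
FROM node:18-bullseye-slim AS build
WORKDIR /workspace
COPY package*.json ./
RUN npm ci
COPY . .
# Assumes a "build" script that emits compiled JavaScript into dist/
RUN npm run build

# Runtime stage: only production dependencies and the compiled output
FROM node:18-bullseye-slim
WORKDIR /workspace
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /workspace/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]
A multi-stage build like this keeps dev-only tooling (TypeScript, nodemon) out of the final image, which is one common way to apply the production advice above.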
CMD vs. RUN Instructions
The CMD instruction specifies the command to run when the container starts, while the RUN instruction executes a command during the build process to set up the environment or install dependencies. The RUN instruction is used to install dependencies, set up the application, or perform other tasks that are necessary for the container to run but are not part of the runtime behavior.
In contrast, the CMD instruction defines the default command to run when the container starts. When written in shell form (a plain string), it is executed in the container’s default shell (usually /bin/sh -c); the exec form used here (a JSON array) runs the command directly. Either way, it can be overridden when running the container to provide different behavior. The CMD instruction is typically used to start the application or process that the container is designed to run.
If you don’t specify a CMD instruction in the Dockerfile, the container will start and then immediately stop because there is no process running in the foreground. If you don’t want to start the server immediately, you can use a command like CMD ["sleep", "infinity"] to keep the container running indefinitely until you manually start the server. This can be useful during development when you want to wait for other services to start or attach a debugger to the server.
Alternatively, you can use the tail -f /dev/null command to keep the container running indefinitely. This command continually follows the end of an empty file (/dev/null). It’s a common Unix pattern for a command that effectively does nothing but remains in the foreground, thus keeping the container alive, and it is widely recognized in Unix and Linux circles.
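For example, assuming the image has been built and tagged as mern-backend-app (a hypothetical tag; use whatever name you actually build with), the default CMD can be overridden at run time like this:
# Override the default CMD with a one-off command (image tag is hypothetical)
docker run --rm mern-backend-app node --version
# Keep the container alive without starting the server, then attach a shell later
docker run -d --name app-idle mern-backend-app tail -f /dev/null
docker exec -it app-idle /bin/bash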
.dockerignore File
Similar to the .gitignore file, the .dockerignore file specifies which files and directories should be excluded when building a Docker image. This helps reduce the size of the image and speeds up the build process by avoiding unnecessary files.
# Ignore environment settings
.env
# Ignore node modules
node_modules/
# Ignore logs
logs
*.log
npm-debug.log*
# Ignore version control directories
.git
.gitignore
# Ignore build output directories
/dist
/out
# Ignore temporary files
*.tmp
*.temp
# Ignore configuration files that are not needed
.dockerignore
Dockerfile
*.md
LICENSE
In the .dockerignore file, we specify the files and directories that should be ignored during the Docker build process. This includes environment settings (.env), node modules (node_modules/), logs, version control directories (.git), build output directories (dist, out), temporary files, and configuration files that are not needed in the Docker image.
docker-compose.yml File
To run the Express application in a Docker container, you can use Docker Compose to define the network and services required for the application. The docker-compose.yml file specifies the configuration for the application’s services, including the Express application, MongoDB, and Redis.
version: "3"
# Define Docker Compose network and services
networks:
backend:
driver: bridge
services:
# Express application configuration
app:
build: # Build the image from the Dockerfile
context: . # Use the current directory
ports:
- "${PORT}:${PORT}"
volumes:
- .:/workspace # Mount the current directory to /workspace in the container
networks:
- backend # Connect to the backend network
environment: # Set environment variables (.env is ignored so we set them here)
- NODE_ENV=development
- PORT=${PORT}
- MONGODB_HOST=mongodb # MongoDB service name (don't use localhost)
- MONGODB_PORT=27017 # Default MongoDB port (don't use ${MONGODB_PORT})
- MONGODB_DATABASE=${MONGODB_DATABASE}
- REDIS_HOST=redis # Redis service name (don't use localhost)
- REDIS_PORT=6379 # Default Redis port (don't use ${REDIS_PORT})
depends_on: # Ensure MongoDB and Redis services are started first (not fully reliable)
mongodb:
condition: service_healthy
redis:
condition: service_healthy
command: /bin/sh -c "npm run seed && npm run dev" # Run the seed script and then start the app
# MongoDB service configuration
mongodb:
image: mongo
ports:
- "${MONGODB_PORT}:27017"
volumes:
- mongodb_data:/data/db
networks:
- backend # Connect to the backend network
environment:
- MONGO_INITDB_DATABASE=${MONGODB_DATABASE}
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.runCommand('ping').ok", "--quiet"]
interval: 10s
timeout: 10s
retries: 5
restart: always
# Redis service configuration
redis:
image: redis
ports:
- "${REDIS_PORT}:6379"
volumes:
- redis_data:/data
networks:
- backend # Connect to the backend network
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 10s
retries: 5
restart: always
# Define volumes for data persistence
volumes:
mongodb_data:
redis_data:
We will now break down some important sections of the docker-compose.yml file.
Docker Compose sets up a default network and manages networking between services automatically. However, custom networks can be defined for better control over inter-service communication. Here, we define a custom network named backend with the bridge driver. The bridge driver forms an isolated network, facilitating container communication.
Services (app, mongodb, redis) are connected to the backend network, allowing them to communicate using their service names (mongodb, redis) instead of localhost. This is crucial for proper networking within the Docker environment.
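To make this concrete, the application’s connection code can build its URIs from the MONGODB_HOST and REDIS_HOST variables set in the compose file, so the same code works with localhost during local development and with the service names inside Docker. The sketch below assumes mongoose and the node-redis client are in use; the actual project may use different libraries or file names:
// Hypothetical connection helper (sketch; library choices, file name, and defaults are assumptions)
import mongoose from "mongoose";
import { createClient } from "redis";

const mongoHost = process.env.MONGODB_HOST ?? "localhost"; // "mongodb" inside Docker Compose
const mongoPort = process.env.MONGODB_PORT ?? "27017";
const mongoDb = process.env.MONGODB_DATABASE ?? "test";
const redisHost = process.env.REDIS_HOST ?? "localhost"; // "redis" inside Docker Compose
const redisPort = process.env.REDIS_PORT ?? "6379";

export async function connect() {
  // Connect to MongoDB using the host/port resolved from the environment
  await mongoose.connect(`mongodb://${mongoHost}:${mongoPort}/${mongoDb}`);
  // Connect to Redis the same way
  const redisClient = createClient({ url: `redis://${redisHost}:${redisPort}` });
  await redisClient.connect();
  return redisClient;
}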
Three services are defined: app, mongodb, and redis. Each service specifies its configuration, including the image to use, ports to expose, data persistence volumes, networks to connect to, environment variables, and dependencies on other services.
build vs. image: The build section states that the app service’s image should be built using the Dockerfile in the current directory (.). In contrast, services like MongoDB and Redis use pre-built images from Docker Hub (image: mongo and image: redis).
Security Note: For production environments, it’s recommended to specify non-default user roles and passwords for MongoDB and Redis to enhance security, especially if services are externally exposed.
The volumes section defines volumes for data persistence, ensuring data longevity beyond the container’s lifespan. Two volumes are defined: mongodb_data for MongoDB data and redis_data for Redis data. These volumes are mounted to their respective data directories within the containers.
For our Express application, we mount the current directory (.) to /workspace in the container. This allows us to observe code changes without rebuilding the image, aligning with the WORKDIR specified in the Dockerfile and aiding development. However, this kind of bind mount is a development convenience and is not recommended for production environments. Coupled with running the Express application in development mode (using nodemon), it lets the server restart automatically whenever the code changes.
Docker Compose can read environment variables from a .env file, provided it is in the same directory as the docker-compose.yml file. The syntax ${VAR_NAME} is used to reference environment variables defined in the .env file, e.g., ${PORT} references the PORT environment variable.
However, best practices recommend not copying the .env file into the container for security reasons. Instead, the environment variables can be defined directly in the docker-compose.yml file, for each service, under the environment section.
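As a reference, a .env file next to docker-compose.yml that satisfies the ${...} references above might look like the sketch below. The values are placeholders, not this tutorial’s actual settings:
# Hypothetical .env next to docker-compose.yml (values are placeholders)
PORT=3000
MONGODB_PORT=27017
MONGODB_DATABASE=mern_app
REDIS_PORT=6379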
Note that when accessing services from other services (like the Express application accessing MongoDB or Redis), the service names (mongodb, redis) should be used instead of localhost for proper networking within the Docker network. Likewise, the default container ports for MongoDB (27017) and Redis (6379) should be used instead of the environment variables (${MONGODB_PORT}, ${REDIS_PORT}), which only control the ports published on the host and may differ from the ports inside the network.
Health checks are defined for the MongoDB and Redis services to ensure they are healthy before the app service starts. The health checks run at regular intervals (interval), with a timeout (timeout) and a number of retries (retries) before marking the service as unhealthy. This helps ensure that the dependent services are ready before the application starts.
The depends_on section specifies that the app service depends on the mongodb and redis services. However, on its own this does not guarantee that the dependent services are fully ready before the app service starts. To address this, the condition: service_healthy option is used to wait until the dependent services pass their health checks before starting the app service.
To check the health of the MongoDB service, the mongosh command is used to evaluate the db.runCommand('ping').ok expression. For the Redis service, the redis-cli ping command is used. These health checks ensure that the services are responsive and ready to accept connections before the application starts.
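You can run the same checks by hand against the running containers to see exactly what Docker’s health check sees. The container names below match the docker-compose up output shown later in this step; they may differ if your project directory has another name:
# Manually run the MongoDB health check (container names depend on your project directory name)
docker exec mern-backend-mongodb-1 mongosh --eval "db.runCommand('ping').ok" --quiet
# Manually run the Redis health check; a healthy instance replies with PONG
docker exec mern-backend-redis-1 redis-cli ping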
The command section specifies the command to run when the app service starts. In this case, the command runs the seed script (npm run seed) to populate the database with sample data and then starts the application in development mode (npm run dev). This command can be customized based on the application’s requirements, such as running migrations, seeding data, or starting the application in different modes.
The command section overrides the default command specified in the Dockerfile (CMD ["npm", "run", "dev"]). This allows for flexibility in defining the startup behavior of the application based on the environment and dependencies. It is common to keep the default command in the Dockerfile generic and then customize it in the docker-compose.yml file based on the specific requirements of the application.
You may prefer not to run the seed script every time the container starts. In that case, you can remove the seed script from the command section and run it separately when needed. You can even use a command like sleep infinity to keep the container running without starting the application immediately.
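If you go that route, the seed script can be run on demand inside the already-running app container, for example with docker-compose exec (the service name app comes from the compose file above):
# Run the seed script once, inside the running app service
docker-compose exec app npm run seed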
To build and run the Docker container for your Express application, follow these steps:
Build the Docker Image: Run the following command in the root of your project to build the Docker image:
docker-compose build
This command reads the docker-compose.yml file, locates the Dockerfile, and builds the Docker image for the Express application.
Run the Docker Container: Start the Docker container using the following command:
docker-compose up -d
This command will run all the services in the docker-compose.yml file. The output will look something like this:
[+] Running 4/4
✔ Network mern-backend_backend Created 0.0s
✔ Container mern-backend-redis-1 Healthy 10.8s
✔ Container mern-backend-mongodb-1 Healthy 10.8s
✔ Container mern-backend-app-1 Started 10.9s
Access the Express App Container: The command section in the docker-compose.yml file starts the Express server in the app container (mern-backend-app-1). Had we set it up so it does not start the server, we would need to connect to the container and start the server manually. To do this, you can run the following command:
docker exec -it mern-backend-app-1 /bin/bash
This command connects you to the container. Let’s break down the command:
docker exec is used to run a command in a running container.
-it combines -i (keep STDIN open) and -t (allocate a pseudo-terminal), giving you an interactive session.
mern-backend-app-1 is the name of the container.
/bin/bash is the command to run in the container.
Start the Server: Once you are connected to the container, run the following command to seed the database (if you want to):
npm run seed
This command will seed the database with some sample data. The output will look something like this:
Connected to MongoDB
Database seeded!
Connection closed
Next, run the following command to start the server:
npm run dev
You should get an output like this:
[nodemon] 3.1.3
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): src/**/*.ts
[nodemon] watching extensions: ts,json
[nodemon] starting `ts-node src/server.ts`
Connected to MongoDB
Connected to Redis
Server is running on port 3000
The server is running on port 3000. You can access the server by visiting http://localhost:3000 in your browser.
Test the Application: Now that the server is running, navigate to http://localhost:3000/users in your browser or through Postman. This should display the users and cache them in the new Redis instance running as a Docker container. You can verify this by connecting to and inspecting this Redis instance via the Redis Insight app.
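If you prefer the command line to Redis Insight, you can also peek into the containerized Redis directly. The wildcard pattern below is only a rough check, since the actual key names depend on how the application stores its cache entries:
# Inspect the cached keys in the containerized Redis (key names depend on the app's caching logic)
docker exec -it mern-backend-redis-1 redis-cli KEYS '*'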
Making Changes: Since we’ve mounted this directory to the container, any changes made to the files in the current directory will be reflected within the container. Additionally, running npm run dev with nodemon enables hot reloading, allowing the server to automatically restart whenever you make changes to the server files.
To test this, open the server.ts file and modify the response message for the / route from “Hello World!” to “Hello from Docker!”. Save the file. As soon as you do, you should see the server restarting in the terminal, with output similar to:
[nodemon] restarting due to changes...
[nodemon] starting `ts-node src/server.ts`
Connected to MongoDB
Connected to Redis
Server is running on port 3000
Next, visit http://localhost:3000 in your browser. You should see the updated message displayed.
Remember not to close the terminal where your server is running. If you do, the server will stop running. When you connect to a Docker container and start a process like npm run dev, that process remains attached to your session. Closing the terminal typically sends a HUP signal to the shell in the container, which by default terminates its foreground processes. To avoid this, consider using tools like nohup or tmux to run the process in the background.
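For instance, with nohup (usually present in Debian-based images, though whether it is available in this particular container is an assumption), the dev server can be detached from your shell session like this:
# Start the dev server detached from the current shell; dev.log is a hypothetical log file name
nohup npm run dev > dev.log 2>&1 &
# Follow the logs when needed
tail -f dev.log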
When you’re finished with your application, you can stop the server and containers. To stop the server, press Ctrl + C in the terminal where it’s running. This will terminate the server. Then, type exit to exit the container and return to your host machine.
To shut down all containers, use the following command:
docker-compose down
This will stop and remove all the containers. The output will look something like this:
Stopping mern-backend-app-1 ... done
Stopping mern-backend-mongodb-1 ... done
Stopping mern-backend-redis-1 ... done
Removing mern-backend-app-1 ... done
Removing mern-backend-mongodb-1 ... done
Removing mern-backend-redis-1 ... done
Removing network mern-backend_backend
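Note that docker-compose down leaves the named volumes (mongodb_data, redis_data) in place, so your data survives the shutdown. If you also want a completely clean slate, the -v flag removes the volumes as well; be aware this deletes the seeded MongoDB data and the Redis cache:
# Stop and remove containers, the network, AND the named volumes (deletes persisted data)
docker-compose down -v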
In this step, you learned how to dockerize your Express application using Docker and Docker Compose. By creating a Dockerfile, a .dockerignore file, and updating the docker-compose.yml file, you defined the configuration for building and running the Docker container. You also learned how to build the Docker image, run the Docker container, and interact with the container to start the Express server. This setup allows you to develop and test your Express application in a containerized environment, providing consistency and portability across different development environments.