Utilize Docker with this MERN Stack Tutorial Solution

Published: 26 April 2022 - 13 min read

Fredrick Emmanuel

Are you looking for a way to make your MERN application run consistently on any computer while you develop it? You've come to the right place! Dockerizing your application solves that problem, and this MERN stack tutorial is just what you need.

In this tutorial, you’ll learn the basics of Dockerizing a MERN stack, how it works, and how to implement it in various applications.

Read on and solve compatibility problems by Dockerizing your application!

Prerequisites

This tutorial will be a hands-on demonstration. If you’d like to follow along, be sure you have:

  • An operating system supported by Docker – This tutorial uses Windows 10.
  • Docker Desktop installed and running.
  • Node.js and npm installed to initialize and run the sample application locally.

Setting up Docker

Docker is a software platform that speeds up building, testing, deploying, and managing applications. Docker uses containers to package all the dependencies and operating system configuration an application needs to run in any environment.

Before Dockerizing a MERN stack, you'll first need to create a custom node image using a Dockerfile.

1. Open the Docker application to check if Docker started successfully. Below, you can tell the Docker engine is running since the status bar (bottom-left) is green.

Verifying Docker Engine is Running

2. Next, create a project folder named MERN-Docker. This folder will hold all resources for this tutorial.

3. Create a .js file with your preferred code editor in your project folder (MERN-Docker). You can name the file as you like, but the file is named server.js in this tutorial. The server.js file will contain all the code for the node application.

4. Open your terminal and run the following npm command to initialize the application (-y accepts the default answers) and create a package.json file.

npm init -y
Initializing the MERN Application

5. Finally, run the below command to install the express dependency, allowing you to create APIs for your node application.

npm i express
Installing the express Dependency

Setting up an Express Server

After initializing the MERN application and installing the express dependency, you’ll set up a simple express server. This tutorial uses a simple express server to demonstrate how Express and Node applications can be dockerized.

1. Open the server.js file and create an express GET route to /.

The code below sends a welcome message when a GET request is sent to http://localhost:5000.

//Importing and creating an instance of express
const express = require("express");
const app = express();

//Setting PORT to 5000 if PORT is not listed in environmental variables.
const PORT = process.env.PORT || 5000;

// Creating the `GET` route
app.get("/", (req, res) => {
  res.send("<h2>Welcome Friends</h2>");
});

//Starting the express server
app.listen(PORT, () =>
  console.log(`Server running at http://localhost:${PORT}`)
);

2. Now, run the node command below to start the express application.

node server.js

If the application is running correctly, you’ll see the output below.

Running the Express Server

3. Finally, make a GET request to http://localhost:5000 using an API testing service, like Postman, to test the express route.

Making a GET Request via Postman
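If you don't have Postman installed, a plain curl request from your terminal verifies the same route; the response should be the welcome markup from server.js.

curl http://localhost:5000
# Expected response: <h2>Welcome Friends</h2>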

Creating and Building a Custom Image in this MERN Stack Tutorial

After setting up the route, the next step is to integrate the express application into a Docker container using a base image. In this tutorial, you'll use the official node base image to set up the container.

1. Create a file named Dockerfile and populate it with the following code, which creates a custom node image for your application. A custom image lets you add your own source code and configuration to the image.

# Sets the base image of the application to the node’s official image.
FROM node:17

# Sets the Working Directory as "/server"
WORKDIR /server
# Copies the package.json file into "/server" and runs npm i
COPY package.json /server
RUN npm i
# Copies the entire source code into "/server"
COPY . /server

# Specifies the port the node app will be running on
EXPOSE 5000

# Runs "node server.js" after the above step is completed
CMD ["node", "server.js"]

2. Run the docker build command below to build a custom image tagged (-t) node-image from the current working directory (.).

docker build -t node-image .

The output below shows how Docker uses the Dockerfile to build the image.

Building the Custom node-image

3. Lastly, run the docker image command below to list (ls) all built images.

docker image ls

Below, you can see all available images, including the node image you created.

Listing All Available Images

Setting up nodemon

You've successfully built a custom node image, and that's great. But during development, you'll want the application to pick up code changes automatically, and that's where nodemon comes in.

nodemon automatically restarts the application when file changes are detected. But first, you’ll have to install it.

1. Run the npm command below to install nodemon as a dev dependency (--save-dev).

npm i nodemon --save-dev
Installing nodemon

2. Next, open the package.json file in your preferred code editor to add the configuration below. The -L flag tells nodemon to use legacy watch (polling), which detects file changes more reliably inside a Docker container.

"scripts": {
    "start": "node server.js",
    "dev": "nodemon -L server.js"
},

3. Open the Dockerfile and change the CMD command to the one below.

CMD ["npm", "run", "dev"]

4. Lastly, run the below docker build command to rebuild the image (node-image).

docker build -t node-image .

You can see below that Docker loads the earlier steps from the cache and re-runs every step from the point where the package.json file changed onward. This behavior is Docker's layer caching: a change to package.json invalidates that layer and every layer after it.

Building the node-image

Speeding up Docker Creations by Excluding Files and Folders

Perhaps you want to speed up image builds and keep unnecessary or sensitive content out of the image. If so, create a .dockerignore file where you specify the files and folders that should not be copied into your Docker container.

1. Create a file named .dockerignore with your text editor and list the files, as shown below, to exclude from copying into your Docker container.

node_modules
Dockerfile
.dockerignore
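Depending on your project, you might also exclude other files the image doesn't need, such as the ones below; this is optional and not required for the tutorial to work.

.git
npm-debug.log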

2. Now, run the following docker build command to rebuild the image (node-image).

docker build -t node-image .

3. After building the image, execute the docker run command below to start a node-app container using the node-image on port 5000 (-p 5000:5000). This command also makes the container accessible through http://localhost:5000.

By default, Docker isolates containers from the outside world, so nothing can reach the application running inside the container. Publishing a port (-p <host>:<container>) is what makes the container accessible from your machine.

docker run --name node-app -d -p 5000:5000 node-image
Running the node-app

To stop and remove the container, run docker rm node-app -f.

4. Now, run the docker ps command below to view all containers (-a), including stopped ones.

 docker ps -a

Below, you can see your Docker container (node-app) is active.

All containers

5. Run the docker exec command below to start an interactive (-it) shell (bash) inside the node-app container.

docker exec -it node-app bash
Interactive shell

6. Finally, run the dir command below to check whether the files listed in the .dockerignore file were copied into the container (node-app).

dir

You can tell from the output below that the files you listed in the .dockerignore file are not included in the node-app Docker container.

Files and Folders in the Server directory

The node_modules folder in the container was generated by the npm i command from the Dockerfile.

Creating the node-image with Docker-compose

You've learned a container's basic life cycle: building, starting, and stopping a container. But can this workflow be improved? Docker-compose lets you simplify the lifecycle of not just one but many containers.

With Docker-compose, you only need to run one Docker command to start up all containers and one command to shut down all containers instead of running a series of commands. These Docker commands follow the instructions listed in the docker-compose file.
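For example, the two commands below, which you'll use throughout the rest of this tutorial, start every service defined in the docker-compose file and then tear them all down again:

# Builds (if needed) and starts all services in the background
docker-compose up -d --build
# Stops and removes all containers and networks the file created
docker-compose down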

To get started with Docker-compose, you need a YAML file that contains all services and the configurations for these services. Some of these configurations include:

  • Build Configuration: This contains the location of the Dockerfile you want to use to build the service and other build options.
  • Images: You can use images from hub.docker.com instead of building a custom image.
  • Environment variables: These pass configuration values into your application.
  • Ports: This option specifies which ports the application is reachable on.
  • Network: This option enables one container to communicate with another container.

Create a YAML file named docker-compose.yml and populate it with the configuration below, which creates a custom node image.

YAML is indentation sensitive, so ensure you use the appropriate indentation.

# Version of Docker-compose
version: '3.8'
services:
  # Service name
  node:
    # Creating a custom image
    build:
      # Location to the Dockerfile
      context: .
      # Name of the Dockerfile
      dockerfile: Dockerfile
    ports:
        # External port:Internal port
      - 5000:5000

Run the command below to build up and start the node service.

docker-compose up --build
Docker-compose build command

Configuring Volumes for Persisting Data in Docker Container

After creating the node image, you'll need to store persistent data and sync your local source code with the source code in the container. How? By configuring volumes and bind mounts for the Docker container.

In this tutorial, you'll start by configuring volumes. Volumes in Docker are directories outside the Docker container that hold the data for that container. Volumes are primarily used to store persistent data, such as source code and log files.

1. Open your docker-compose file in your code editor.

2. Add the volume configurations below under the node service of the docker-compose file.

The configuration below creates a volume named nodeVolume and mounts it at the /server directory inside the container. But feel free to change the name of the volume and the directory.

---
  node:
    ---
    # ADD THE CONFIGURATION FROM THIS POINT to create a volume named nodeVolume
    volumes:
      # Syntax <nameOfVolume>:<directoryInDocker>
      - nodeVolume:/server
# Making the node service volume accessible to other services.
volumes:
  # Declaring the node service volume.
  nodeVolume:

3. Lastly, run the command below to rebuild the image (node-image).

docker-compose up -d --build
Building the node image with Volume Configured
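If you'd like to confirm the volume exists, the commands below list all volumes and show where Docker stores the nodeVolume data. The exact volume name is an assumption based on Docker-compose's project-name prefix, so check the output of docker volume ls first.

# Lists all volumes on this machine
docker volume ls
# Shows details for the node service volume (name assumed; confirm it with the command above)
docker volume inspect mern-docker_nodeVolume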

Configuring Bind Mount to Sync Local Directory to Docker Container

A bind mount is a mechanism that syncs a folder on your local machine with a folder in the Docker container. Unlike a volume, a bind mount isn't managed by Docker: the container reads and writes the files directly on your local machine.

Bind mounts are primarily used in the development stage, where data is dynamic (information frequently changes). With a bind mount, you don't have to rebuild the image every time you change the application's source code.

To configure a bind mount:

1. Open your docker-compose file and add the code below in the node service under volumes.

The code below syncs the application's working directory with the /server directory in the container. At the same time, the anonymous volume on /server/node_modules keeps the dependencies installed in the container from being overwritten by your local folder.

---
  node:
    volumes:
      ---
      # ADD THE CONFIGURATION FROM THIS POINT to sync the working directory
      # for the application with the /server directory in the container
      - .:/server
      - /server/node_modules

2. Run the command below to rebuild the node image.

docker-compose up -d --build
Building the node-image with bind-mounts configured

Modifying files in the container's /server directory will also affect the files on your local machine since the folders are synced. To restrict Docker from making changes to your application's source code, add the read-only option (:ro) to your bind mount config, as shown below.

  node:
    volumes:
      - .:/server:ro # Adding the read-only option
      - /server/node_modules

3. Open the server.js file in your code editor, replace the h2 header with an h5, as demonstrated below, and save the changes. This change in the server.js file lets you test whether the bind mount config works.

app.get("/", (req, res) => {
  res.send("<h5>Welcome Friends</h5>");
});

4. Now, run the command below to view all running containers.

docker ps
Viewing all Running Containers

5. Now, run the docker exec command below to run an interactive shell (-it) of your running node container (mern-docker_node_1).

docker exec -it mern-docker_node_1 bash

6. Finally, run the cat command below to display the changed content inside the server.js file in your node container.

cat server.js

As you can see below, the header changed to h5.

Verifying the Change in the server.js file

Connecting MongoDB to the MERN Application

MongoDB is a NoSQL, free, open-source, cross-platform document-oriented database program. In this tutorial, you’ll set up MongoDB and see how the node service from the previous section can communicate with MongoDB.

Open the docker-compose file in your code editor, and add the configuration specified below under the node service.

This configuration uses Mongo’s official Docker image to build the MongoDB service (container).

version: '3.8'
services:
  node:
    ...
    # ADD THE CONFIGURATION FROM THIS POINT to connect to and build the MongoDB service
    environment:
      - PORT=5000
      # For security, specify a username and password as environment variables
      # Username for the mongo database
      - MONGO_INITDB_ROOT_USERNAME=mern
      # Password for the mongo database
      - MONGO_INITDB_ROOT_PASSWORD=merndocker
    # Ensures the mongo service starts before the node service
    depends_on:
      - mongo
  # Name of the mongo service
  mongo:
    # Official mongo image from hub.docker.com
    image: mongo
    environment:
      # Username for the mongo database
      - MONGO_INITDB_ROOT_USERNAME=mern
      # Password for the mongo database
      - MONGO_INITDB_ROOT_PASSWORD=merndocker
    volumes:
      # <nameOfVolume>:<directoryInDocker>
      - mongoDB:/data/db
volumes:
  # Making the volume accessible to other containers
  mongoDB:

Now, run the docker-compose command below to build (--build) and start the mongo service.

docker-compose up -d --build 

As you can see below, Docker is creating a volume for the mongo service.

Creating the Volume for the mongo Service in this Mern Stack Tutorial
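To verify the mongo service is reachable, you can open a shell session in its container with the credentials from the docker-compose file. The container name below is an assumption based on the same project-plus-service naming pattern as the node container, so confirm it with docker ps; also note that recent mongo images ship the mongosh shell, while older images use the legacy mongo shell instead.

# Opens a MongoDB shell inside the mongo container (container name assumed)
docker exec -it mern-docker_mongo_1 mongosh -u mern -p merndocker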

Linking NodeJS to MongoDB

After building the mongo service, you can now link the NodeJS service to MongoDB. Linking NodeJS to MongoDB enables you to store data in the MongoDB database.

Using a service's name is one of the most common ways for containers to communicate with each other. This tutorial uses the mongoose dependency to link the node service to MongoDB, but you'll first have to install mongoose.

1. Run the command below to install mongoose.

npm i mongoose

2. Next, open the server.js file and add the code below, which imports the mongoose dependency and uses it to link NodeJS with MongoDB.

The code below uses the username and password you stored as environment variables in the docker-compose file to connect the node service with MongoDB.

const mongoose = require('mongoose');

// Gets the Username and Password 
const MONGO_URI = `mongodb://${process.env.MONGO_INITDB_ROOT_USERNAME}:${process.env.MONGO_INITDB_ROOT_PASSWORD}@mongo:27017`;

// Creating the connect function
const connectDB = async () => {
  await mongoose
    .connect(MONGO_URI, {
      useNewUrlParser: true,
      useUnifiedTopology: true,
    })
    .then(() => console.log("Mongo connected successfully")) // Logs a success message when MongoDB connects.
    .catch((e) => {
      console.log(e.message); // Logs the error message if the connection fails.
    });
};

// Calling the Connect Function
connectDB();

...

3. Now, run the docker-compose command below to rebuild the node image.

docker-compose up -d --build
Re-building node-image

4. Finally, run the following command to open the logs for the application and check if MongoDB connected successfully.

docker-compose logs

Below, you can see MongoDB connected successfully.

Viewing Logs for the Node Service
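With the connection working, storing data only takes a model and a route. The snippet below is a minimal, illustrative sketch (the Book model and /books route are not part of this tutorial's code) that you could add to server.js above app.listen to confirm documents reach MongoDB.

// Enables JSON request bodies
app.use(express.json());

// A hypothetical model with a single field (illustrative only)
const Book = mongoose.model("Book", new mongoose.Schema({ title: String }));

// Saves the posted document and returns it
app.post("/books", async (req, res) => {
  const book = await Book.create(req.body);
  res.json(book);
});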

Adding Redis to the MERN Application

You've just added MongoDB as a second service to the MERN application, and now you'll add Redis as the third. Redis is an in-memory NoSQL database commonly used to store cached data and tokens.

Open the docker-compose file, and add the configuration shown below under services, right after the mongo service.

This configuration sets up Redis' official Docker image.

services:
  ---
  mongo:
    ---
  # ADD THE CONFIGURATION FROM THIS POINT to set up the Redis service
  redis:
    image: redis
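The configuration above only starts the Redis server; to use it from the node service, you'd also need a Redis client in server.js. The sketch below is an assumption based on the redis npm package (version 4 or later, installed with npm i redis) and is not part of the tutorial's code. Notice that the hostname is the service name redis, just as mongo was used for MongoDB.

const { createClient } = require("redis");

// "redis" resolves to the redis service defined in the docker-compose file
const redisClient = createClient({ url: "redis://redis:6379" });
redisClient.on("error", (err) => console.log(err.message));

// Connects and stores a sample value
redisClient.connect().then(async () => {
  await redisClient.set("greeting", "Welcome Friends");
  console.log("Redis connected successfully");
});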

Setting up the Client-side of the MERN application

With the server side of the MERN application in place, you'll now set up a React app as the client side. React is a JavaScript library for building user interfaces.

1. Run the command below to create a simple React application. This command automatically creates a directory named client in the root directory of the MERN application.

npx create-react-app client

The create-react-app command installs all the required dependencies, as shown in the output below.

Installing React

2. Once installation completes, open the client directory in the root directory of the MERN application, and create a Dockerfile inside it. You can name the Dockerfile differently, but the Dockerfile is named react.dockerfile in this tutorial.

3. Add the code below into the Dockerfile (react.dockerfile), which creates a custom React image.

# Official node image
FROM node:17
# Setting the working directory to "/client"
WORKDIR /client

# Copies the package.json file into "/client" and run npm i
COPY package.json /client
RUN npm install
# Copies the entire react source code into "/client"
COPY . /client

EXPOSE 3000
# Starting the react app
CMD [ "npm", "start"]

4. Now, open your docker-compose file and add a react service, as shown below.

The following code adds a react service under services with the Dockerfile's location and name.

You'll add the docker-compose configuration in portions over the next steps and see the complete file at the end.

# Version of Docker-compose
version: '3.8'
services:
  # Add the react service
  react:
    build:
      # Location of the Dockerfile
      context: ./client
      # Name of the Dockerfile
      dockerfile: react.dockerfile

5. Configure the volumes, ports, and depends_on options, as demonstrated below.

    volumes:
        # Bind-mounts configuration
      - ./client:/client
        # Ignoring any changes made in "node_modules" folder
      - /client/node_modules
    ports:
        # External port:Internal port
      - 3000:3000
    depends_on:
        # Starts up the node service before starting up the react service
      - node

6. Finally, add the environment variable below to enable hot reloading in the Docker container. Hot reloading refreshes the React page and re-renders its components when the source code changes.

    environment:
      # Enabling hot reload
      - CHOKIDAR_USEPOLLING=true

Following the steps above will bring you to the configuration below.

version: '3.8'
services:
  react:
    build:
      context: ./client
      dockerfile: react.dockerfile
    volumes:
      - ./client:/client
      - /client/node_modules
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      - node
  node:
    ---
  mongo:
    ---
  redis:
    ---
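As a final check, rebuild and start the full stack with the same command you've used throughout this tutorial. Once all four services are up, the React app should be reachable at http://localhost:3000 and the express API at http://localhost:5000.

docker-compose up -d --build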

Conclusion

This tutorial aimed to teach you how to set up your application using Docker and make it run consistently on other devices. Do you feel that's the case? At this point, you've learned the basics of boosting your MERN stack application's development.

As a next step, why not learn to use NGINX to set up a proxy for your application and deploy it to Docker?
