OK, to start from the positive side: if you just copy over some basic Dockerfile example for building a Node.js based microservice container, it would be mostly fine. If you need something small and portable, you can just use the node:lts-alpine Docker base image and go with it by building this kind of simple Dockerfile

FROM node:lts-alpine

# Copy the whole project into the image
ADD . /app

WORKDIR /app

# Install dependencies and compile TypeScript
RUN yarn && yarn build

ENV NODE_ENV=production

CMD ["node", "./dist"]

If you have a Node TypeScript project like the one I have here https://github.com/tigranbs/node-typescript-starter-kit, this will work perfectly fine, especially since alpine itself is quite a small base image. The only real storage you have to pay for is your node_modules, which of course could grow to a few GBs if you have even 20+ packages in your dependencies.
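By the way, it is easy to verify where the weight actually comes from; these two commands assume a local checkout with the dependencies already installed:

docker images node:lts-alpine   # size of the base image (if pulled locally)
du -sh node_modules             # size of your installed dependencies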

In the microservice world it is considered good practice to keep Docker images small, for faster deployments and faster startup times on the server side. This also means that if you have, for example, a staging environment or GitHub Actions running your automated tests, it will take a lot less time if you have smaller images and shorter build times. So, the idea of having an optimised Docker image comes down to the following two points (there is a quick way to check both, shown right after the list)

  • Keep the image size as small as possible
  • Keep Docker image layers stable, so a small change doesn't force rebuilding (and re-uploading) the entire image
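A quick sanity check for both points, assuming you have already built and tagged an image (the my-service tag here is just a placeholder):

docker images my-service    # overall image size
docker history my-service   # individual layers and their sizes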

For the 1st point we already have pretty much the bare minimum Docker image, which means the most we can do is exclude some unnecessary files like README.md, .git, etc. from the build, but overall it wouldn't make that much of a difference.
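The standard way to exclude those files is a .dockerignore file next to your Dockerfile; a minimal sketch (the exact entries depend on your project) could look like this:

# Keep the build context small: these never need to end up in the image
.git
node_modules
dist
README.md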

The main idea here is to keep your Docker image built from the same unique image layers, so that ONLY the changed layers get uploaded to the Registry and only the changed layers get downloaded from there. Which comes down to having something like this for our Node.js TypeScript project

FROM node:lts-alpine

# Copy only the dependency manifests first, so the layer with
# node_modules is rebuilt only when they change
ADD package.json /app/package.json
ADD yarn.lock /app/yarn.lock

WORKDIR /app

# Installing packages
RUN yarn

# Now copy the rest of the source code
ADD . /app

ENV NODE_ENV=production

# Building TypeScript files
RUN yarn build

CMD ["node", "./dist"]

The main change from the original Dockerfile is separating the yarn install and yarn build steps. This makes sure that the Docker image layer responsible for node_modules is rebuilt only when you have changed yarn.lock or package.json; otherwise it stays the same, and we don't have to upload it to or download it from the Registry if we already have it.

You'll probably agree that in something like 80% of cases during regular development you are not touching the yarn.lock or package.json files; you only do that when you need a new package or have to upgrade one to a newer version. Your node_modules layer gets rebuilt only in those cases; otherwise the Docker image layer stays the same, giving you very fast build times and less network traffic/storage to move between the Registry and your server or local environment.
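You can see that caching behaviour locally with two consecutive builds; the my-service tag here is just a placeholder:

# First build: installs packages and compiles everything
docker build -t my-service .

# ...now change a source file, but leave package.json and yarn.lock alone...

# Second build: Docker reuses the cached `RUN yarn` layer and only
# re-runs the steps from `ADD . /app` onwards
docker build -t my-service .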

This technique is regularly taken further by a thing called multistage builds, where you give a build stage a name and reference it later, building multiple images from one Dockerfile. For example, you might have to build staging, test and production Docker images with some specific code-level differences between them. This is most common for compiled languages like Go, Java or Rust, where you are probably compiling a slightly different library, or building in debug versus production mode with different optimization flags; but for a Node.js TypeScript project it is the same codebase, just with changed environment variables.

FROM node:lts-alpine AS packages

ADD package.json /app/package.json
ADD yarn.lock /app/yarn.lock

WORKDIR /app

# Installing packages
RUN yarn

# Staging image, built on top of the `packages` stage above
FROM packages

ADD . /app

ENV NODE_ENV=staging

# Building TypeScript files
RUN yarn build

CMD ["node", "./dist"]

This is mostly about having multiple image builds at the same time, and I'm not sure how useful it is for Node.js based projects. I have seen this kind of thing before for Go projects, where it makes sense to build the project with different compile flags based on the environment, BUT that is also a specific use-case. The thing to remember is that the key is to keep your unique image layers untouched when you continuously rebuild your Docker image; it will save you a lot of time later on.
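One handy thing you get for free from naming stages is Docker's --target flag, which builds only up to a given stage; a small sketch reusing the packages stage from the example above (the my-service tag is a placeholder):

# Build only the dependency stage, e.g. to warm up the node_modules layer
docker build --target packages -t my-service:packages .

# Build the whole Dockerfile, i.e. the final staging image
docker build -t my-service:staging .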

Conclusion

It is a pretty simple concept, keeping your Docker image layers unchanged whenever you build your image, but it is going to save you a lot of time and resources later on.

If this was helpful, consider subscribing to my newsletter and following!