Docker 101 for embedded systems DevOps



I started using docker back in 2016 and have been using it in the context of embedded systems DevOps ever since. This article condenses that learning so that someone starting afresh can get up to speed quickly. You can also read about bringing up a buildroot image on docker in my article Docker: Scratching an itch to build from ground up.

I’ll first introduce some of the key concepts and commands, and then we will look at some docker for embedded systems DevOps use-cases.

Docker what?

To understand what docker is, first, you need to understand the concept of containers.

At a high level, a container is pretty much like a pre-configured VM. It lets you package all your tools, configurations, and dependencies, and it provides an isolated execution environment for your code without worrying about dependency versions, configuration mismatches, etc. But, unlike a VM, a complete OS (and its baggage) is not part of the container. It's leaner and leverages the capabilities of the underlying OS. It also comes pre-configured with purpose-specific tools. (In that way, it is closer to a python virtual environment than to a VM.)

Containers use the underlying OS kernel to provide application-level virtualization. Due to this, a container image created for one OS might not work on another OS. (Docker Toolbox addresses this.)

Docker is one of the tools that let you create, configure, and use containers. Podman is another example of a container tool.

In practice, container images are stored in a container repository. DockerHub is an example of a public container repo. It contains official and unofficial images. You can use docker to fetch and start one of these images, and your entire development/deployment environment will be up and running with a single command. This is one of the primary docker use cases.

For instance, if you want to try out the latest RISC-V clang nightly build without messing up your environment, pull it from DockerHub and use it in the container with:

docker run -dit tuxmake/riscv_clang-nightly

followed by docker attach <container ID>

If you ended up starting many containers, stop them and then remove them all with docker container prune.
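The try-it-out flow above can be sketched end to end (the image name is the one from the article; any image with a shell works the same way, and the container name here is made up for illustration):

```shell
# Start the container detached (-d) with an interactive TTY allocated (-it)
docker run -dit --name clang-try tuxmake/riscv_clang-nightly

# Attach your terminal to the running container
docker attach clang-try

# Detach without stopping it: Ctrl-p then Ctrl-q; exiting the shell stops it

# Remove all stopped containers (-f skips the confirmation prompt)
docker container prune -f
```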


Docker has a client-server architecture. The client communicates with the daemon (server) using a REST API, so the daemon can be local or somewhere on the network. You will read more about the components shown in the diagram below in the following sections.



Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality. Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container. These namespaces provide a layer of isolation: each aspect of a container runs in a separate namespace, and its access is limited to that namespace. (Source: Docker overview documentation)

Docker Images

A docker container “executes” a docker image. The container includes all the essential components required to execute an image. This consists of a virtual FS, networking, etc.

A docker image is built by layering different images. The base image is typically a Linux distribution. When you do a docker pull or docker run, these layers are downloaded and stacked on top of each other to create the final container image.
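You can see these layers for any local image; for instance (assuming an ubuntu:20.04 image has already been pulled):

```shell
# Each row is one layer, along with the instruction that created it
docker history ubuntu:20.04

# The layer digests are also part of the image metadata
docker image inspect ubuntu:20.04 --format '{{json .RootFS.Layers}}'
```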

Essential Commands

pull Brings in images from a repo into your machine.
images List all the local images.
run Run an image in a new container.

-d Run in detached mode
--name Provide a container name
-e Pass environment variables into the container
--net Network to use
-v Volume mapping

ps List all running containers.

-a List stopped containers as well
stop Stop a container
start Start a stopped container
logs List the logs of a container.

exec Execute a command in a running container. Provide the container ID and the command to execute.

-it The arguments i and t are used together for an interactive terminal.
container prune Clear stopped containers.
rmi Delete an image.
tag Rename an image.
system prune Clean all unused images and containers.

Docker Networking

Like an application running in a VM, an application virtualized in a container is unaware that it is executing within one. So, applications in different container instances can open the same ports. To reach these ports from the host, we need to bind a host port to a container port.

To bind a container port to a host port, use the -p argument to docker run.

docker run -p 8000:6379 redis
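With this mapping in place, the Redis server inside the container is reachable on the host's port 8000 (a redis-cli installed on the host is assumed for the check):

```shell
# Host port 8000 forwards to the container's port 6379
docker run -d --name redis-test -p 8000:6379 redis

# From the host, talk to Redis through the mapped port
redis-cli -p 8000 ping    # should reply PONG if the server is up
```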

Docker also has internal networking that lets containers talk to each other. To list the networks that are available use:

docker network ls

To create a custom network, use:

docker network create <network name>

Once a network is created, you can pass it in the run command using the --net argument. The required ports should also be bound using -p while adding a container to a network.

Containers within the same network can talk to each other using the container name.
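Putting this together, two containers on the same user-defined network can reach each other by name (the network and container names here are made up for illustration):

```shell
# Create a user-defined bridge network
docker network create app-net

# Start a redis container on that network
docker run -d --name cache --net app-net redis

# A second container on the same network resolves "cache" by name
docker run --rm --net app-net redis redis-cli -h cache ping
# should reply PONG once the server is up
```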

Building docker images

A Dockerfile is a recipe for building docker images. Key commands that can be used in a Dockerfile are listed below.

FROM Denotes the base image
ENV Environment variables
RUN Execute a command within the container
COPY Copy something from the host into the container
CMD Default command to execute when the container starts

Once you create a Dockerfile, you build it with

docker build -t myApp:0.1 .

The image is built and stored locally. It can be listed with docker images. It can then be pushed to a registry using

docker push

When you push updated versions, only the changed layers are pushed.

For private registries, you can use Amazon ECR, DigitalOcean, etc. For a public registry, you can use DockerHub.

Persistent Volumes

The data generated and written into the virtual container filesystem is destroyed when the container is removed. We use persistent volumes to overcome this.

A volume maps (mounts) a host file system path into the container file system.

There are three types of volumes:

  1. Host Volumes

    Pass the host path and the destination path within the container using the -v argument to the run command.

    docker run -v /home/embeddedinn/data:/var/lib/mysql/data mysql
  2. Anonymous Volumes

    Only the container directory is passed to the -v argument. A backing directory is created on the host automatically. On Linux, this is created under the /var/lib/docker/volumes folder.

  3. Named Volumes

    Similar to anonymous, but a name for the volume can be passed.

    docker run -v name:/var/lib/mysql/data mysql

    This is the recommended method. For shared volumes, the same name can be used across containers.
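A named volume shared between two containers can be sketched like this (the volume name and images here are made up for illustration):

```shell
# Create the named volume (it would also be created implicitly on first use)
docker volume create shared-data

# One container writes into the volume...
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/msg'

# ...and another container sees the same file
docker run --rm -v shared-data:/data alpine cat /data/msg    # should print: hello
```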

Usecase 1: Setting up a custom RISC-V toolchain image

Let’s consider a development ecosystem to enable developers and CI to use a standard, custom toolchain version. This can be enabled using docker with the following steps.

  1. Cloning and Compiling the toolchain in a docker container

    To create the toolchain in a clean environment, let's first bring up a container with a persistent volume, then clone the git repo, install the dependencies, and compile the toolchain.

     docker run -it -v /home/embeddedinn/docker/volume/toolInstall:/opt/riscv ubuntu:20.04 /bin/bash
     apt-get update
     apt-get install autoconf automake autotools-dev curl python3 libmpc-dev libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo gperf libtool patchutils bc zlib1g-dev libexpat-dev git
     git clone https://github.com/riscv-collab/riscv-gnu-toolchain
     cd riscv-gnu-toolchain
     ./configure --prefix=/opt/riscv --with-arch=rv32gc --with-abi=ilp32d --enable-multilib
     make -j$(nproc)
     make -j$(nproc) linux

    At the end of this process, the compiled toolchain will be available in the /home/embeddedinn/docker/volume/toolInstall folder mapped as a volume to /opt/riscv within the container.

  2. Creating a docker image

    Now that we have compiled the toolchain that works on ubuntu 20.04, we can go ahead and package it into a docker image that other developers and the CI can pull.

    This is what the Dockerfile will look like:

     FROM ubuntu:20.04
     RUN mkdir /opt/riscv
     COPY toolInstall/ /opt/riscv/
     ENV PATH="/opt/riscv/bin:${PATH}"
     CMD /bin/bash

    You can build the image using

     docker build -t riscv-toolchain:0.1 .

    Once the build completes, the image will be available locally. You can use docker images to see it.

  3. Pushing the docker image to the container registry

    For others to use the image you created, you need to make it available through a container registry. Depending on the registry you are using, there will be different steps to push the image. In this case, we are using DockerHub. The image needs to be renamed (tagged) to the format appropriate to the registry.

    docker tag riscv-toolchain:0.1 vppillai/riscv-toolchain:0.1

    Then login to the registry and push the image.

    docker login
    docker push vppillai/riscv-toolchain:0.1

    Once pushed, you can see the image details from the hub interface

  4. Using the image

    Now that the image has been pushed, developers and CI can use it with

    docker run -it vppillai/riscv-toolchain:0.1
Container with the new RISC-V toolchain

If you want to compile a codebase in your local machine with this toolchain, you can mount the volume into the container while running it.
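A minimal sketch of that flow, assuming a hello.c in the current directory (the compiler name matches the Linux toolchain built above):

```shell
# Mount the current directory at /src and cross-compile inside the container
docker run --rm -it -v "$(pwd)":/src -w /src vppillai/riscv-toolchain:0.1 \
    riscv32-unknown-linux-gnu-gcc hello.c -o hello
```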

Usecase 2: Compiling MPLABX projects with a custom Docker image

Microchip MPLABX project builds can be automated in a CI/CD pipeline using a Docker image with the tools pre-configured. MPLABX 6.0 even provides a CI/CD wizard tool that generates Dockerfiles fine-tuned to your project's needs.



A sample Dockerfile generated using the wizard looks like this:

# This file was generated by the CI/CD Wizard version 1.0.391.
# See the user guide for information on how to customize and use this file.

FROM debian:buster-slim

ENV DEBIAN_FRONTEND noninteractive

USER root
RUN dpkg --add-architecture i386 \
    && apt-get update -yq \
    && apt-get install -yq --no-install-recommends \
        ca-certificates \
        curl \
        make \
        unzip \
        procps \
    && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Download and install MPLAB X IDE version 6.00

RUN curl -fSL -A "Mozilla/4.0" -o /tmp/mplabx-installer.tar \
         "${MPLABX_VERSION}-linux-installer.tar" \
 && tar xf /tmp/mplabx-installer.tar -C /tmp/ && rm /tmp/mplabx-installer.tar  \
 && USER=root ./tmp/MPLABX-v${MPLABX_VERSION} --nox11 \
    -- --unattendedmodeui none --mode unattended \
 && rm ./tmp/MPLABX-v${MPLABX_VERSION} \
 && rm -rf /opt/microchip/mplabx/v${MPLABX_VERSION}/packs/Microchip/*_DFP \
 && rm -rf /opt/microchip/mplabx/v${MPLABX_VERSION}/mplab_platform/browser-lib
ENV PATH /opt/microchip/mplabx/v${MPLABX_VERSION}/mplab_platform/bin:$PATH
ENV PATH /opt/microchip/mplabx/v${MPLABX_VERSION}/mplab_platform/mplab_ipe:$PATH
ENV XCLM_PATH /opt/microchip/mplabx/v${MPLABX_VERSION}/mplab_platform/bin/xclm


# Download and install toolchain
RUN curl -fSL -A "Mozilla/4.0" -o /tmp/${TOOLCHAIN}.run \
 && chmod a+x /tmp/${TOOLCHAIN}.run \
 && /tmp/${TOOLCHAIN}.run --mode unattended --unattendedmodeui none \
    --netservername localhost --LicenseType NetworkMode \
 && rm /tmp/${TOOLCHAIN}.run

# DFPs needed for default configuration

# Download and install Microchip.PIC32MZ-W_DFP.1.5.203
RUN curl -fSL -A "Mozilla/4.0" -o /tmp/tmp-pack.atpack \
         "" \
 && mkdir -p /opt/microchip/mplabx/v${MPLABX_VERSION}/packs/PIC32MZ-W_DFP/1.5.203 \
 && unzip -o /tmp/tmp-pack.atpack -d /opt/microchip/mplabx/v${MPLABX_VERSION}/packs/PIC32MZ-W_DFP/1.5.203 \
 && rm /tmp/tmp-pack.atpack

Once you build the docker image, you can mount volumes and compile MPLABX projects with the packaged toolchain.

Note: Before compilation, you need to regenerate the Makefiles to reflect the local paths using the prjMakefilesGenerator command, passing it the path to the project's Makefile folder. The command is already on the PATH exported in the Dockerfile.

Alternately, we can include git into the Docker image, clone the repo into the container, and compile it without mounting a volume. This might be useful in the case of some CI systems.
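A minimal sketch of that variant, using the Usecase 1 image as the base (the repository URL is a placeholder):

```dockerfile
FROM vppillai/riscv-toolchain:0.1

# git is needed to clone the sources during the image build
RUN apt-get update \
    && apt-get install -y --no-install-recommends git ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Clone the project at build time (placeholder URL)
RUN git clone https://github.com/example/project.git /src
WORKDIR /src
CMD ["make"]
```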

Usecase 3: Using the container with GitHub Actions

The GitHub Actions workflow file to use the container we created in the previous use-case will look like this.

name: containerTest
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: vppillai/riscv-toolchain:0.1
    steps:
      - name: gcc version check
        run: riscv32-unknown-linux-gnu-gcc -v

The execution result shows that the compiler is usable in the Action.

Github Actions result

Though this workflow simply prints the tool version, additional run steps can be used to clone your code, compile it, and run tests.
