
A Developer’s Guide to Containerizing Embedded Applications


Posted on Sep 30, 2025 by Andy Doan


The practice of containerization developed first in server-side and enterprise computing environments, providing a way to isolate applications in highly complex systems, and allow flexible, fast modification or updating without disrupting the entire system.

Now the concept of containerization is gaining broad support in the embedded computing community, as the growing complexity of system designs based on a Linux® operating system (OS) makes the updating of monolithic, non-containerized systems ever more unwieldy. And with the introduction of legislation such as the European Union’s Cyber Resilience Act (CRA), embedded devices can no longer avoid regular security updates to protect against the threat of cyber-attack and to respond to common vulnerabilities and exposures (CVE) notices. Containerization can make such updating far less disruptive to the device user.

What is a container?

Containerization is a method of packaging an application and all its dependencies into a single, isolated unit. This unit, the ‘container’, is a combination of application code, a minimal filesystem, runtime, system tools, and libraries, abstracted from the host environment.

Containers should be distinguished from virtual machines (VMs): the difference lies in the level of abstraction and isolation. A VM virtualizes the entire stack, including a full OS kernel, making it a heavy and resource-intensive entity.

A container, on the other hand, operates directly on the host's OS kernel. Because containers share the host's kernel rather than running their own, they are lightweight and fast, requiring far fewer resources than VMs. While the function of a VM is to emulate a separate machine, the function of a container is to run an application in an isolated, self-contained environment.

This is why containerization is more attractive for embedded systems than the use of VMs. Embedded devices are resource-constrained, and the overhead of running multiple full OSs on multiple VMs is often prohibitive. Containers offer the benefits of isolation without the performance penalty.

The benefits of containerization in embedded Linux OS-based systems

Embedded device manufacturers that take the plunge and embrace the practice of containerization can expect to reap various benefits.

1. Higher reliability and fault tolerance

In a monolithic embedded Linux architecture, a bug in one application can potentially crash the entire system, leading to a system-wide failure. With containers, each application runs in its own isolated environment. If a containerized application crashes, it does not affect the host OS or other containers. Likewise, if there is a problem or bug in a containerized application, the system can shut down only that container pending a fix, while keeping the rest of the system running.

2. Simplified software updating

Updating an application or a full root filesystem in a monolithic system can be a complex and risky process. A failed update can brick a device, requiring physical intervention to recover it.

Containerization streamlines this process. Traditionally, firmware updates are delivered and deployed as a single monolithic package encompassing the Linux OS distribution, middleware and applications. After updating the Linux distribution, the device must be rebooted.

In a containerized architecture, discrete containers can be updated without requiring changes to the kernel or to other containers. This makes it possible to implement an over-the-air firmware update without a reboot – in fact, without the user even noticing that it has happened.

3. Streamlined dependency management

Dependencies commonly bedevil the development of complex embedded systems. When different applications on the same device require different versions of the same library (for instance, a legacy application might need Python 2.7 while a new application needs Python 3.9), conflicts are inevitable.

Containers eliminate this problem entirely. Because each container bundles its own dependencies, multiple applications with conflicting requirements can run on the same device. This allows for cleaner, more predictable development and deployment.
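As a minimal sketch, the conflicting Python requirements above can coexist as two Compose services, each pulling its own base image. The service names and commands are illustrative placeholders, and the Python 2.7 image is unmaintained; it appears here only to mirror the legacy example:

```yaml
services:
  legacy-app:
    image: python:2.7-slim     # legacy interpreter, isolated in its own container
    command: python legacy.py
  new-app:
    image: python:3.9-slim     # newer interpreter, no conflict with the above
    command: python app.py
```

Each container resolves its libraries from its own filesystem, so neither installation can interfere with the other.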

4. Stronger security posture

The isolation of applications within containers creates a sandbox for each service. If one application is compromised, the attacker is contained within a single container and cannot easily move laterally to other services or the host operating system. This can help to limit the damage that can be caused by common attack vectors.

5. Better portability and collaboration

A containerized application is portable. An image built on a development machine can be run on a target device with a compatible kernel: this means that if the software works on a development machine, it will also work on the device, streamlining quality assurance and testing processes.

Containerization also facilitates collaboration: the workload can be distributed efficiently between developers, each working separately on their own containerized application or function. This is particularly helpful for development teams that have limited access to the hardware target’s development board: each developer or team can work in isolation on a different containerized application without reference to an SoC’s board support package. This helps OEMs to accelerate time-to-market while keeping development hardware costs to a minimum.

So isolating applications inside containers provides multiple features and benefits, including reliability, security and productivity. Next, let’s explore the ways in which embedded device development teams can approach the implementation of containerization.

Linux kernel provides the enabling features of containerization

The basic components of a containerized system are core features of the Linux kernel: namespaces, control groups, and union filesystems.

Linux namespaces

Linux namespaces provide the fundamental mechanism for process isolation. They enable the partitioning of kernel resources so that one set of processes can see one set of resources while another set of processes sees a different set. This gives each container its own isolated view of the system.

The key namespaces are:

  • PID (Process ID) Namespace: each container has its own private process tree. PID 1 in the container is the initial process for that container, not the init process of the host.
  • NET (Network) Namespace: each container has its own private network stack, including network interfaces, IP addresses, and routing tables.
  • UTS (UNIX Time-sharing System) Namespace: this provides a private hostname and domain name for each container.
  • MNT (Mount) Namespace: this isolates the filesystem. A container can have its own root filesystem without affecting the host's filesystem.
  • Cgroup (Control Group) Namespace: while cgroups are a separate technology, a namespace for them enables each container to get a private view of its resource limits.
  • User Namespace: this provides user isolation. It allows a user within a container to have root privileges within that container, while being an unprivileged user on the host system.
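On any Linux machine, namespace membership can be inspected directly: each process exposes its namespaces as symlinks under /proc/<pid>/ns. A minimal sketch follows; the unshare invocation shown in the comments is illustrative and needs root (or unprivileged user-namespace support):

```shell
# Each entry names a namespace type and the inode identifying the instance;
# two processes in the same namespace show identical inodes here.
readlink /proc/self/ns/pid
readlink /proc/self/ns/uts
readlink /proc/self/ns/net

# Create new UTS and PID namespaces for a child shell (unshare is part of
# util-linux; typically needs root):
#   sudo unshare --uts --pid --fork --mount-proc \
#       sh -c 'hostname sandbox; hostname; ps ax'
# Inside, the shell sees the hostname "sandbox" and its own process tree
# starting at PID 1, while the host's hostname is untouched.
```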

Cgroups (control groups)

While namespaces provide isolation, cgroups are responsible for resource management and control. They allow the host kernel to allocate and limit resources for a group of processes. For embedded systems, this is a crucial feature, as it prevents a single misbehaving application from consuming all the CPU or memory and starving other critical services. Cgroups can be used to set:

  • CPU limits: control how much CPU time a container can use.
  • Memory limits: cap the amount of RAM a container can consume.
  • I/O throttling: limit disk and network I/O to prevent a container from monopolizing resources.
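On a target device, the available controllers can be listed from the kernel, and under cgroup v2 the limits themselves are plain files. The paths below follow the cgroup v2 convention and writing to them requires root, so those commands are shown as comments:

```shell
# List the cgroup controllers this kernel provides:
cat /proc/cgroups

# With cgroup v2, each group is a directory and limits are files (root needed):
#   mkdir /sys/fs/cgroup/myapp
#   echo "50000 100000" > /sys/fs/cgroup/myapp/cpu.max     # 50% of one CPU
#   echo $((64*1024*1024)) > /sys/fs/cgroup/myapp/memory.max  # 64 MiB cap
#   echo "$APP_PID" > /sys/fs/cgroup/myapp/cgroup.procs    # move a process in
```

Container runtimes such as Docker drive this same interface for you, via flags like --cpus and --memory.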

Union filesystems (OverlayFS)

Container images are built using a layered filesystem model, implemented by union filesystem technologies such as OverlayFS. This layered approach underpins their efficiency.

An image consists of several read-only layers, each representing a single instruction in the container's build process. When a container is started, a new writable layer is added on top. This means that when a container is updated, only the new layers need to be downloaded, not the entire filesystem. This substantially reduces bandwidth and storage requirements, an important benefit for resource- and cost-constrained IoT and embedded devices.
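The overlay mechanism can be sketched with a few commands. The directory names are illustrative; the mount itself needs CAP_SYS_ADMIN, so it is shown as a comment describing what the container runtime does on your behalf:

```shell
# OverlayFS layout: read-only image layers (lowerdir), one writable layer
# (upperdir), plus a scratch directory the kernel requires (workdir).
mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
echo "from-image-layer" > /tmp/ovl/lower/config.txt

# The mount the runtime performs when a container starts (root needed):
#   mount -t overlay overlay \
#     -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
#     /tmp/ovl/merged
# Writes under /tmp/ovl/merged land in upperdir; the read-only image
# layers in lowerdir are never modified.
```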

How to build a containerized embedded system

The development of containerized embedded systems is best undertaken with the help of a dedicated container tool. By far the most widely used tool is Docker: it is integrated into the FoundriesFactory™ DevOps platform, alongside tools such as The Update Framework (TUF) for developing and delivering security-focused over-the-air (OTA) updates, and a continuous integration/continuous delivery (CI/CD) framework for developing and deploying code.

What is Docker?

Docker is an open platform for developing, shipping, and running containerized applications. Docker's tools for shipping, testing, and deploying code can greatly reduce the typical delay between writing code and running it in production.

Part of the Docker platform, Docker Compose is a tool for orchestrating multiple containers and managing their lifecycle, enabling users to:

  • Develop applications and their supporting components using containers.
  • Use the container as the unit for distributing and testing an application.
  • Deploy the application in its production environment, as a container or an orchestrated service.
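A minimal docker-compose.yml sketch shows how two services on one device are described and managed as a unit; the service names, the application image and the device path are illustrative placeholders:

```yaml
services:
  sensor-reader:
    image: my-sensor-app:1.2          # hypothetical application image
    restart: unless-stopped
    devices:
      - /dev/i2c-1:/dev/i2c-1         # pass a hardware device through
  mqtt-broker:
    image: eclipse-mosquitto:2
    restart: unless-stopped
    ports:
      - "1883:1883"
```

Running docker compose up -d starts both containers, and the restart policy brings them back after a crash or a reboot.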

Docker, and the technique of containerization, emerged from the enterprise and data center worlds. For this reason, some embedded developers fear that it will consume too much of a device's scarce processor, memory and power resources. In fact, even the full Docker Engine and command-line interface can be used on embedded platforms such as Raspberry Pi or NVIDIA Jetson boards.

Container development workflow

The typical workflow for a containerized embedded system involves creating a container image on a development machine and then deploying it to the target device.

The first step is to create the Dockerfile, a text file which contains all the instructions needed to build a container image. It acts as a blueprint, defining the base OS, dependencies, and commands to run.
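As a minimal sketch, a Dockerfile for a small Python application might look like this; the base image, file names and entry point are illustrative placeholders:

```dockerfile
# Each instruction below produces one read-only image layer.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .
CMD ["python", "app.py"]
```

Ordering matters: dependencies are installed before the application code is copied, so routine code changes invalidate only the final layers and rebuilds stay fast.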

In the second step, on a development machine, the developer uses the container runtime's build command to create the image from the Dockerfile.

The third step is to deploy and run the container. Once the image is built, the developer pushes it to a container registry, or transfers it to the device using docker save and docker load. On the target embedded device, the container runs with a single command:

docker run --name my-application --rm -it my-embedded-app-image

This command runs the container with an interactive terminal attached (-it), gives it a name (--name), and removes it automatically on exit (--rm).
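The build-and-deploy steps above can be sketched as a handful of commands. The device address and image name are placeholders, these commands assume a Docker daemon on both machines, and cross-building for an Arm target would typically add a --platform flag:

```shell
# Step 2: build the image from the Dockerfile in the current directory:
docker build -t my-embedded-app-image .

# Step 3, option A: push to a registry the device can pull from:
#   docker push registry.example.com/my-embedded-app-image

# Step 3, option B: transfer the image directly over SSH:
docker save my-embedded-app-image | gzip > app-image.tar.gz
scp app-image.tar.gz root@192.168.1.50:/tmp/
ssh root@192.168.1.50 'gunzip -c /tmp/app-image.tar.gz | docker load'
```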

Deploying containers in real-world applications: the main problems to solve

While the benefits of implementing containerization, including higher system reliability and greater flexibility in the development process, are valuable, it is important to recognize that the adoption of a container-based embedded software architecture is not without risks or costs. The three most important problems that embedded device manufacturers will have to handle are centered on resource management, updating containers, and security provisions.

How to avoid overwhelming scarce hardware resources

The use of containers requires slightly more memory, storage and processor resources than a conventional monolithic system architecture. Without careful monitoring of this resource usage, a containerized architecture risks slowing system performance.

The size of the container image itself is a primary concern: developers should minimize the container's footprint by only including essential libraries and binaries. Their efforts to limit a container’s footprint will be helped by the use of a lean base Linux image, such as Alpine Linux or a distroless image, rather than a full-featured distribution such as Ubuntu.
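One common way to shrink the footprint is a multi-stage build: compile in a full-featured image, then copy only the resulting binary into a minimal runtime image. A sketch, with the program name and paths as placeholders:

```dockerfile
# Build stage: full toolchain, discarded after the build completes.
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/sensor-daemon .

# Runtime stage: only the static binary ships in the final image.
FROM alpine:3.20
COPY --from=build /out/sensor-daemon /usr/local/bin/sensor-daemon
ENTRYPOINT ["/usr/local/bin/sensor-daemon"]
```

The final image contains the Alpine base plus one binary; the compiler, sources and build cache never reach the device.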

It is also important to monitor and limit the CPU and memory consumption of each container. The cgroups feature of the Linux kernel allows developers to set strict resource limits for containers, ensuring that a single process does not consume all available resources and crash the system. Resource control goes hand-in-hand with proper systemd integration: the creation of systemd service files enables developers to manage containers properly, ensuring that they start on boot, restart on failure, and are fully integrated into the device's service ecosystem.
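A sketch of a systemd unit tying these pieces together; the container and image names are placeholders, and the --cpus and --memory flags map onto the kernel's cgroup limits:

```ini
[Unit]
Description=My containerized application
Requires=docker.service
After=docker.service network-online.target

[Service]
Restart=on-failure
ExecStartPre=-/usr/bin/docker rm -f my-application
ExecStart=/usr/bin/docker run --name my-application \
    --cpus=0.5 --memory=64m my-embedded-app-image
ExecStop=/usr/bin/docker stop my-application

[Install]
WantedBy=multi-user.target
```

Enabled with systemctl enable, the container starts on boot; Restart=on-failure restarts it if the application crashes.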

Applying updates to maintain containers’ cybersecurity features

During the development of an embedded device, delivering and deploying a security patch to a single unit on the bench is simple. The difficulty arises when the update process must scale to a deployed fleet of thousands or millions of devices.

An advanced update strategy will include:

  • A container registry to store and manage image versions.
  • A rollback mechanism to revert to a previous, working version of a container if an update fails. This ensures that the device does not become permanently bricked due to a bad update.
  • A superior tool for delivering and deploying updates at scale. A popular and proven option is The Update Framework (TUF), which is integrated into the FoundriesFactory platform.

Maintaining system-level security features to help protect containers

While containers provide a degree of process isolation, they share the host kernel, which means a vulnerability in the kernel could affect all containers: developers must therefore ensure that the host OS is kept up-to-date with the latest security patches. Users of the FoundriesFactory software-as-a-service (SaaS) platform benefit from Foundries.io’s continual maintenance of the LmP operating system. Updates to the LmP are developed and distributed automatically by Foundries.io.

Container images should also be scanned for vulnerabilities throughout the development and deployment pipeline. Tools such as Syft - built into the FoundriesFactory platform - generate a software bill of materials (SBOM) for each image, which can then be checked automatically against known-vulnerability databases as part of the CI/CD workflow.

Additionally, applying the principle of least privilege by running containers as a non-root user and restricting their capabilities helps mitigate the risk that a compromised container could damage the entire system.
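In Docker Compose terms, least privilege can be expressed declaratively; the service name, image and capability list below are illustrative:

```yaml
services:
  sensor-reader:
    image: my-sensor-app:1.2
    user: "1000:1000"          # run as an unprivileged user, not root
    read_only: true            # root filesystem mounted read-only
    cap_drop:
      - ALL                    # drop every Linux capability...
    cap_add:
      - NET_BIND_SERVICE       # ...then add back only what is needed
    security_opt:
      - no-new-privileges:true # block privilege escalation via setuid binaries
```

With this configuration, even code execution inside the container leaves an attacker with no root identity, no writable root filesystem and almost no kernel capabilities.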

How to fit a container-based architecture smoothly into embedded development processes

The specific processes for developing, deploying and maintaining containers have been described above. But containers are not a discrete element within an embedded system - they are fully integrated alongside the Linux OS, libraries, metadata and other resources. The same good practices which help elsewhere to streamline code development and to integrate security protection features also apply to container code. These include:

  • CI/CD – in a container-based architecture, a CI/CD process automates the entire software delivery pipeline. CI/CD pipelines automatically build a new container image whenever code is updated, run a suite of automated tests, and then deploy the validated image to the target environment. This automation supports consistency and repeatability, eliminating manual errors. It also enables faster and more frequent releases of new features, security patches, and bug fixes, as the process is streamlined from code commit to deployment. This reduces the risk of deployment failures and allows for rapid feedback and iteration.
  • OTA updating – Linux OS-based embedded devices are exposed to a constantly changing combination of cyber-threats, and protection requires a readiness to rapidly develop and deploy patches to address emerging vulnerabilities. A superior system for identifying devices at risk - enabled by a software bill-of-materials (SBOM) maintained individually device-by-device - and for securely delivering appropriate OTA updates to a fleet of devices is an essential element for maintaining cybersecurity features.
  • Device configuration and fleet management – as well as security updates supplied individually to devices in the field, it is important to be able to update the configuration of devices by sending configuration files to individual devices or groups. This calls for a system to generate, store and update information about each device, including a complete update history and SBOM.

Managing a containerized embedded system: the advantages of a DevOps platform

After initial skepticism about a software architecture originally developed for enterprise systems, which have practically unlimited hardware resources available to them, the embedded developer community has more recently embraced the concept of the containerized embedded system.

Embedded system containerization helps promote system reliability by isolating the effect of bugs within individual applications, gives development teams the flexibility to work on application development in isolation from the underlying hardware, and facilitates the delivery and deployment of security updates without requiring a system reboot.

The implementation of a container-based architecture benefits from the availability of certain system capabilities, including a lean and continually updated Linux OS image, a CI/CD infrastructure, support for secure OTA updating, and device and fleet management services.

The development and provision of these services and capabilities is a huge task for any single OEM. This is why the FoundriesFactory platform is a superior option: it is a comprehensive, ready-made DevOps platform for embedded devices, and has a suite of robust security features built-in, including support for secure boot, trusted execution environments and secure key storage, an automatic SBOM generator, integrated TUF support, and a maintained Linux distribution, the Linux microPlatform (LmP). With Docker and Docker Compose integrated, the FoundriesFactory platform is made for containerized embedded systems.

Embedded Linux developers who want to pursue a container-based approach to embedded system development can request a free, no-obligation demonstration of the FoundriesFactory platform today.
