Building 32-bit Arm Containers


Posted on Feb 25, 2020 by Andy Doan


We strongly believe in the power of containers for embedded development at Foundries.io. However, in a world of buzzword bingo like Kubernetes, serverless, and Edge, people working on 32-bit Arm hardware would be right to feel neglected. This article discusses one way we are helping: building armhf containers efficiently.

A tangent on terminology:

  • linux/arm/v7 - what Docker calls a traditional 32-bit armhf target
  • linux/arm64/v8 - what Docker calls the newer 64-bit aarch64 target (both strings are used verbatim by the tooling, as shown below)
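For example, with Docker's buildx plugin these platform strings get passed straight through (the image tag here is a placeholder, not one of ours):

```
docker buildx build --platform linux/arm/v7 -t example/app:armhf .
```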

Building armhf containers was originally difficult because there was no good hardware to build with. The only practical option was to run Docker inside an armhf qemu instance on Intel hardware, and that still seems to be the prevailing approach. However, with 64-bit Arm servers now readily available, you'd think there'd be a better way. It turns out there are some obstacles.
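For reference, the emulated route usually looks something like this (a sketch of the common recipe, not our setup; the multiarch image is a widely used convention):

```
# Register QEMU user-mode emulation handlers via binfmt_misc on the
# x86 host, so the kernel can run Arm binaries through qemu
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Now an armhf image runs (slowly) under emulation
docker run --rm arm32v7/debian uname -m
```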

The first really powerful servers you could get access to were the Cavium ThunderX servers. These servers are amazing and we use them ... for aarch64. It turns out the ThunderX doesn't implement the 32-bit AArch32 execution state, which means it's not capable of executing 32-bit Arm instructions at all. That makes these servers no better than the x86 servers for this job.

A few of us had Socionext SynQuacer servers, and these turned out to be pretty decent. You can run qemu in host pass-through mode so that you pay very little virtualization overhead, but it still costs something like a 15-20% performance loss. On top of that, these are servers in our homes, not in a managed data center.
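Host pass-through here means handing the guest the host's CPU via KVM rather than emulating one. A minimal sketch, based on QEMU's documented flags rather than our exact invocation (the kernel and rootfs file names are placeholders):

```
# Run a 32-bit Arm guest under KVM on a 64-bit Arm host. "aarch64=off"
# asks KVM to start the vCPU in the AArch32 execution state - exactly
# the thing the ThunderX cannot do and the SynQuacer can.
qemu-system-aarch64 -M virt -enable-kvm \
  -cpu host,aarch64=off \
  -m 4096 -smp 4 -nographic \
  -kernel zImage \
  -append "console=ttyAMA0 root=/dev/vda rw" \
  -drive file=armhf-rootfs.img,format=raw,if=virtio
```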

The latest option is the Graviton servers in AWS. The first problem with these servers is that there is no /dev/kvm (i.e., virtualization is going to be slow); I suspect this is because Arm doesn't yet have good support for nested virtualization. However, like the SynQuacer, these servers can execute 32-bit Arm instructions. So we should be able to run Docker natively and get the fast armhf builds we need ...

The original approach was somewhat convoluted: we ran a 32-bit LXC container on the Graviton with nesting enabled, and inside that container we ran dockerd and got the builds we needed. This worked pretty well until we recently hit a bug while trying to use some new BuildKit-based features of Docker.
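With LXD the shape of that setup would be roughly the following; the image alias and container name are assumptions for illustration, not our exact config:

```
# A 32-bit armhf container with nesting enabled, so dockerd can run
# inside it (security.nesting is what allows Docker-in-LXC)
lxc launch images:debian/buster/armhf armhf-builder -c security.nesting=true
lxc exec armhf-builder -- apt-get update
lxc exec armhf-builder -- apt-get install -y docker.io
```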

While debugging Docker using gdlv (great tool, thanks so much!), we learned that Docker decides its target platform (e.g. linux/arm/v7) by looking at two things:

  • GOARCH - This is baked into the Docker binary, so a 32-bit build of Docker reports arm and a 64-bit build reports arm64.
  • The "CPU architecture" field in /proc/cpuinfo (see the snippet below).
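You can check the second input for yourself. On a 64-bit Arm kernel the field reports 8 (which Docker treats as the v8 variant) no matter what userspace you're running:

```
grep 'CPU architecture' /proc/cpuinfo
# CPU architecture: 8   (one line per core)
```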

The /proc/cpuinfo field has been the problem: it's always going to say "v8" to Docker, even if you are running 32-bit binaries. Things got desperate last week, and we came up with a way to make this work on a Graviton with a few simple steps:

  1. Grab the contents of the armhf docker:19.03.5-dind image.
  2. Extract that onto the Graviton's root file system at, say, /opt/docker.
  3. Run the dockerd-entrypoint.sh script with /opt/docker in $PATH.
  4. Create a new hacked /usr/local/bin/docker script. This script checks whether the command is a docker run; if so, it inserts -v /hijacked-armhf-cpuinfo:/proc/cpuinfo into the invocation so that tooling inside the container will think it's on armhf (a sketch of this wrapper follows the list).
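Here is a minimal sketch of what the step 4 wrapper could look like. The path of the real CLI follows the layout above but is an assumption, and the script is illustrative rather than our production copy:

```
#!/bin/sh
# /usr/local/bin/docker - forward everything to the real CLI, but for
# "docker run" bind-mount a pre-captured armhf cpuinfo over
# /proc/cpuinfo so tooling inside the container sees an armhf CPU.
REAL_DOCKER=/opt/docker/docker   # assumed location of the extracted CLI

if [ "$1" = "run" ]; then
    shift
    exec "$REAL_DOCKER" run -v /hijacked-armhf-cpuinfo:/proc/cpuinfo "$@"
fi
exec "$REAL_DOCKER" "$@"
```

The /hijacked-armhf-cpuinfo file is simply a copy of /proc/cpuinfo captured from a real armhf machine.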

For some people, step 4 might not be needed. However, our CI infrastructure runs everything inside Docker, so we wind up running a docker:dind container in order to build containers. Step 4 makes sure the docker tools we run are tricked into thinking they're on an armhf server.

That’s it. Armhf container builds can be fast. You just need to put on a cowboy hat to do it!

Epilogue

Well - there’s a sequel. It turns out we had to go back to using a 32-bit LXC host to run Docker. Why? Some containers we build use GNU autoconf, whose config.guess script calls uname -m, and the value the Graviton returns (aarch64) breaks armhf builds. The value seen inside the LXC container (armv8l) is what's needed.
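The mismatch is easy to demonstrate. The two values below are the ones mentioned above; the setarch trick is a related way to flip the kernel's personality, not what we deployed:

```
uname -m                   # on the Graviton host: aarch64
setarch linux32 uname -m   # with a 32-bit personality: armv8l
```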
