Running Rancher k3s as a Docker App

Posted on Dec 10, 2019 by Marc Streeter

3 min read

In less than a decade, containers and, subsequently, container orchestration have exploded into almost every modern development stack.

We explore just how easy it is to get Kubernetes up and running within the Linux microPlatform (LMP).

Kubernetes Small Enough for IoT

Being small isn't easy for Kubernetes, the highly popular container orchestration platform. With roots deep in Google, Kubernetes was built for memory- and compute-rich datacenter environments. Much of its hardware footprint stems from the lofty goal of being "...a platform for building platforms...".

Since Kubernetes' inclusion in the CNCF, contributions to the project and its ecosystem have exploded. External collaboration has brought Kubernetes into spaces typically dominated by AWS and pushed its availability into more hybrid environments. With the creation of k3s, embedded and IoT environments are within reach as well.

Rancher's k3s is billed as a "fully compliant Kubernetes distribution" that occupies only half the memory of a typical installation. Rancher achieves this by removing parts of Kubernetes that most installations don't need, such as in-tree cloud providers and storage plugins. In this way, k3s caters to the memory-constrained devices common in IoT.

Hardware Requirements

As k3s is meant to orchestrate containers, the hardware requirements vary with the size of your deployments. At a minimum, 512MB of RAM and one CPU core must be dedicated specifically to k3s.
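As a quick sanity check, those minimums can be verified on-device with standard Linux tools (a sketch; 524288 kB is 512MB expressed in kilobytes):

```shell
# Compare the device's resources against the k3s minimums:
# at least 512MB of RAM (524288 kB) and one CPU core.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
cpus=$(nproc)
[ "$mem_kb" -ge 524288 ] && echo "RAM: OK (${mem_kb} kB)" || echo "RAM: below 512MB"
[ "$cpus" -ge 1 ] && echo "CPU: OK (${cpus} core(s))" || echo "CPU: none detected"
```

Note that MemTotal reports slightly less than the nominal installed RAM (the kernel reserves some at boot), so a board sold as 512MB may print the "below" message.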

NOTE: In our own tests, the Raspberry Pi had performance issues even before meeting the theoretical limits of the system! Beware of resource utilization in constrained devices.

Installing k3s as a Docker App

Way back when, we blogged about Docker Apps and how to deploy them. The process for creating Docker Apps hasn't changed, and we've used it in adding the k3s Docker App to our Docker App Store.

To install k3s from Foundries' Docker App Store, follow the steps in the "How to enable a Docker App" section. Installation takes a little under a minute, depending on the internet connection available.

NOTE: This configuration is not meant for production environments!
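Once the app is enabled, a quick way to confirm the cluster came up is to list its nodes. This assumes kubectl is installed and configured to reach the device; the fallback keeps the sketch harmless on any other host:

```shell
# List the k3s node(s) and confirm they report STATUS "Ready".
kubectl get nodes -o wide 2>/dev/null \
    || echo "kubectl missing or no cluster reachable from this host"
```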

Using Kubernetes

Provided that kubectl can reach the k3s installation, all that remains is to create a Deployment. Here is an example manifest for shellhttpd:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: hub.foundries.io/lmp/shellhttpd
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 30808
  type: NodePort
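Assuming the manifest above is saved as demo.yaml (a hypothetical filename) on a host where kubectl can reach the device, applying it and watching the pod come up looks like this; the guard keeps the sketch a no-op when no cluster is reachable:

```shell
# Create the Deployment and Service, then check that the pod is Running
# and that the Service maps container port 8080 to NodePort 30808.
if kubectl cluster-info >/dev/null 2>&1; then
    kubectl apply -f demo.yaml
    kubectl get pods -l app=demo   # wait for STATUS "Running"
    kubectl get service demo       # PORT(S) column shows 8080:30808/TCP
else
    echo "kubectl missing or no cluster reachable from this host"
fi
```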

Confirm that the service is running and reachable with a call to the NodePort where k3s exposes our shellhttpd service:

$ curl '<IP_OF_LMP_DEVICE>:30808'
OK

It's aliiive! The shellhttpd container has been exposed successfully to the world.
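From here, scaling out or tearing down is a one-liner each (same assumptions as above: the manifest lives in a hypothetical demo.yaml and kubectl can reach the device):

```shell
if kubectl cluster-info >/dev/null 2>&1; then
    # Run three copies of shellhttpd behind the same NodePort...
    kubectl scale deployment demo --replicas=3
    # ...and clean everything up when finished experimenting.
    kubectl delete -f demo.yaml
else
    echo "kubectl missing or no cluster reachable from this host"
fi
```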

Closing Thoughts

Foundries' newest addition to the Docker App Store, k3s, demonstrates once again the exploratory spirit that comes with platform stability. Foundries is focused on product enablement, and the FoundriesFactory™ is key to that focus. It provides a consistent software story that is incrementally updatable, secure by design, and backed by the tooling needed to manage the variability of embedded and IoT devices deployed in the field.
