Arduino UNO Q Elf Detector Series - Part 5: Integrated Holiday Monitoring System with MQTT

Welcome back for the grand finale of our Arduino UNO Q Elf Detector Series! The Elf surveillance system from Part 4 is running, and I have successfully identified unauthorized movement near the cookie jar. But detection without consequence is just a documentary. I don't want to watch them steal my tools; I want to stop them.

Now it's time to bring everything together! In this final part, we will build a complete integrated holiday monitoring system that combines all the technologies we've learned: voice control, LED matrix displays, object detection, and MQTT messaging, all working together in perfect harmony to stop the intruders!

We'll create a smart Christmas tree system that:

  • Detects when an elf appears on camera and triggers a red alert.
  • Animates the LED matrix with synchronized patterns.
  • Responds to voice commands to change tree colors.
  • Coordinates all services in real-time using MQTT.

All processed locally on the Arduino UNO Q without cloud connectivity!

Prerequisites

Before we combine these technologies, it is critical that you have the fundamentals in place. This tutorial assumes you have completed the previous parts of the series.

To follow along, you need:

  • Hardware:
    • An Arduino UNO Q powered and connected to the internet
    • USB Camera connected to /dev/video0
    • USB Microphone connected for audio input
  • Software:
    • A FoundriesFactory account with your Factory created
    • The fioup daemon running on your device
    • Git configured with your Factory repository cloned
    • An Edge Impulse Account
    • The AI Model: You must have the model.eim file generated in Part 4
  • Previous tutorials:
    • Understanding of how we built the Voice Detection (Part 3) and Object Detection (Part 4) containers, as we will be deploying modified versions of them today

Note: If you have the arduino-elf-webui application from Part 4 still running, please disable it via the FoundriesFactory UI before proceeding to ensure a clean slate.

The Architecture: Microservices & MQTT

Previously, we worked with isolated applications. The Voice app handled audio and the UI; the Matrix app handled the LEDs and the web control.

This time, we will run multiple services communicating to create a coordinated system. We will run three distinct containerized applications and one message broker.

What is MQTT? To coordinate these services, we are introducing MQTT (Message Queuing Telemetry Transport), an event-driven communication layer.

  • The Broker (Mosquitto): Acts as the central message orchestrator. It sorts and delivers messages.
  • Publishers: Services that send messages.
  • Subscribers: Services that listen for messages.
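To make the broker's role concrete, here is a minimal in-memory sketch of the publish/subscribe pattern. This is not Mosquitto or a real MQTT client; it only illustrates the key idea that publishers and subscribers never talk to each other directly, and the class and names are purely illustrative.

```python
# Minimal in-memory sketch of the publish/subscribe pattern.
# Publishers and subscribers only know the broker and the topic name;
# neither side knows the other exists.

from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a callback to run whenever a message hits `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver `payload` to every callback subscribed to `topic`."""
        for callback in self._subscribers[topic]:
            callback(topic, payload)

broker = TinyBroker()
received = []
broker.subscribe("arduino/alert/red", lambda t, p: received.append((t, p)))
broker.publish("arduino/alert/red", "start")   # the detector side
print(received)  # → [('arduino/alert/red', 'start')]
```

In the real system, Mosquitto plays the `TinyBroker` role over TCP port 1883, and each container connects to it as a client.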

Modifying Our Applications

We aren't just running the exact same containers from our earlier demonstration. We will modify the source code of each application to include an MQTT client:

  1. arduino-elf-mqtt: now publishes alerts instead of just drawing boxes.
  2. arduino-voice-mqtt (MPU): now subscribes to alerts to update its UI, and publishes commands to reset them.
  3. arduino-led-matrix-mqtt (MCU + MPU): now subscribes to both the alert and command topics to act as the physical display driver.

The Logic: MQTT Topics

To make these services talk, we define a shared "language" using MQTT topics. Here is the flow we are building:

| Topic               | Payload      | Publisher     | Subscribers      | Effect                                  |
|---------------------|--------------|---------------|------------------|-----------------------------------------|
| arduino/alert/red   | "start"      | Elf Detector  | Voice UI, Matrix | Triggers flashing red lights & UI alert |
| arduino/alert/red   | "stop"       | Voice Service | Voice UI, Matrix | Clears the alert state                  |
| arduino/led/command | "color:blue" | Voice Service | Matrix           | Changes the tree color to Blue          |
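The routing rules in the table above can be sketched as a single pure function. This is an illustration of the logic, not the repository's code; the state keys and the rule that an active alert blocks ordinary color changes are assumptions made for the sketch.

```python
# Sketch of the topic-table routing rules as a pure function.
# The real containers implement this with an MQTT client callback;
# names and the alert-override rule here are illustrative assumptions.

def route(topic, payload, state):
    """Return a new system state dict for one incoming MQTT message."""
    state = dict(state)
    if topic == "arduino/alert/red":
        if payload == "start":
            state["alert"] = True        # flashing red on UI and matrix
        elif payload == "stop":
            state["alert"] = False       # voice service cleared the alert
    elif topic == "arduino/led/command" and payload.startswith("color:"):
        if not state.get("alert"):       # an active alert overrides colors
            state["color"] = payload.split(":", 1)[1]
    return state

state = {"alert": False, "color": "green"}
state = route("arduino/led/command", "color:blue", state)
state = route("arduino/alert/red", "start", state)
print(state)  # → {'alert': True, 'color': 'blue'}
```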

Understanding the Integrated System

The Three Services and their key responsibilities are:

  1. arduino-elf-mqtt (Elf Detection Server):
    • Runs Edge Impulse YOLOv5 and OpenCV to detect "elf" objects.
    • Publishes the arduino/alert/red topic with a "start" message when an elf is detected.
    • The alert must be cleared by a voice "select" command.
  2. arduino-voice-mqtt (Voice Control + Christmas Tree):
    • Runs Edge Impulse Audio to recognize voice commands (like select, blue, red).
    • Publishes commands to control the LEDs, such as arduino/led/command → "color:blue".
    • Subscribes to arduino/alert/red to trigger the red blinking animation on the virtual tree.
  3. arduino-led-matrix-mqtt (LED Matrix + RGB LED Controller):
    • Controls the 13x8 LED matrix display and the 6 RGB LEDs via the Arduino Bridge API.
    • Subscribes to arduino/alert/red to trigger the physical red LED blink and matrix flash.
    • Subscribes to arduino/led/command to control LED colors and blinking.

In our testing, the alert → blink latency was only about 75 ms, a near-instantaneous response.
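On the matrix side, the service has to turn a `"color:<name>"` payload into actual RGB values for the LED driver. The snippet below is a hypothetical sketch of that translation step; the color table and function name are assumptions, not code from the repository.

```python
# Hypothetical sketch of how the matrix service might translate an
# arduino/led/command payload into an RGB value for the LED driver.
# The color table and function name are illustrative assumptions.

COLORS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def parse_led_command(payload):
    """Turn 'color:<name>' into an (r, g, b) tuple, or None if unknown."""
    prefix, _, name = payload.partition(":")
    if prefix != "color":
        return None
    return COLORS.get(name.strip().lower())

print(parse_led_command("color:blue"))   # → (0, 0, 255)
print(parse_led_command("blink:fast"))   # → None
```

Returning `None` for unrecognized payloads lets the subscriber simply ignore messages it does not understand, which keeps the services loosely coupled.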

Step 1: Deploying the MQTT Broker Container

To follow along with the code logic and model configuration, you can explore the source files in our public repository now: Arduino Demo Repository

We will perform this deployment in two stages to manage the download size efficiently. First, we deploy the message broker.

1. Update Your Local Repository Navigate to your Factory containers.git folder and pull the broker code from our demo repository:

host:~$ git remote add arduino-demo https://github.com/foundriesio/containers.git
host:~$ git fetch arduino-demo arduino-demo-25Q4
host:~$ git checkout arduino-demo/arduino-demo-25Q4 -- mosquitto

2. Commit and Push

host:~$ git add mosquitto
host:~$ git commit -m "Adding Mosquitto MQTT broker container"
host:~$ git push

This process should be becoming second nature by now. FoundriesFactory will trigger a CI/CD job and build a new target.

The fioup daemon will see a new target available and download it to your device.

3. Enable the Broker

  1. Navigate to your FoundriesFactory Devices page: https://app.foundries.io/factories/<FACTORY-NAME>/devices/.
  2. Select your ARDUINO UNO Q device.
  3. Click "Update tags & apps".
  4. Move mosquitto from the left (Available) to the right (Selected) list and click "Apply".

4. Check that the broker is running on your device

device:~$ docker ps

You should see mosquitto running on port 1883.
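If you want an extra sanity check beyond `docker ps`, you can confirm the broker's TCP port is reachable. The helper below is a rough sketch assuming Mosquitto listens on the default MQTT port 1883; it only tests that the port is open and does not speak the MQTT protocol.

```python
# Quick reachability check for the broker, assuming the default MQTT
# port 1883. This only verifies the TCP port accepts connections; it
# does not perform an MQTT handshake.

import socket

def is_broker_up(host, port=1883, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # On the device itself, this should print True once mosquitto is up.
    print(is_broker_up("localhost"))
```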

Step 2: Critical Cleanup to Free Up Storage

STORAGE WARNING: If you are using the 4GB Arduino UNO Q, there is a high likelihood of the device running out of space during the next step.

The integrated application update is large. If you have old, unused containers or images taking up space, the download will fail to install. You must manually clean the device storage before proceeding.

Disable previous apps enabled in the Factory

If you have any other apps enabled on the device, you should disable them now on the FoundriesFactory web UI.

  1. Navigate to your Devices page on the FoundriesFactory dashboard: https://app.foundries.io/factories/<FACTORY-NAME>/devices/.
  2. Select your ARDUINO UNO Q device
  3. Click "Update tags & apps".
  4. Only mosquitto should remain in the right-hand (Selected) column.

Prune the Device: Check for any lingering containers on your device terminal:

device:~$ docker ps -a

You should see mosquitto running. If you see any other containers (even those with status Exited), you must remove them to free up space.

  1. Kill any running containers that are not Mosquitto:
device:~$ docker kill <CONTAINER_ID>
  2. Prune the system to remove stopped containers and unused images:
device:~$ docker system prune -a

Type y when prompted to confirm.

Your device is now lean, clean, and ready for the main application payload.

Step 3: Deploying the Integrated System

With our MQTT broker deployed and the device cleaned of old containers, we can move on to the main event.

1. Check Out the Applications. We need to grab the remaining folders from the demo repository:

host:~$ git checkout arduino-demo/arduino-demo-25Q4 -- arduino-elf-mqtt
host:~$ git checkout arduino-demo/arduino-demo-25Q4 -- arduino-voice-mqtt
host:~$ git checkout arduino-demo/arduino-demo-25Q4 -- arduino-led-matrix-mqtt

2. Add Your AI Model (Critical Step). The arduino-elf-mqtt application requires your trained YOLOv5 model to function.

CRITICAL WARNING: You must copy your model.eim file (from Part 4) into the arduino-elf-mqtt directory. If you skip this, the build will fail.

# Copy the model
host:~$ cp /path/to/your/model.eim arduino-elf-mqtt/

# Force add it to git (ignoring .gitignore rules for binaries)
host:~$ git add -f arduino-elf-mqtt/model.eim

3. Commit and Push

Now that the applications and the model are staged, send them to the Factory to trigger a build.

host:~$ git add arduino-elf-mqtt arduino-voice-mqtt arduino-led-matrix-mqtt
host:~$ git commit -m "Add integrated MQTT applications and AI model"
host:~$ git push

Enabling the Applications

Once the build finishes (FoundriesFactory triggers the CI/CD job), the fioup daemon on your device will download the new target. You must now enable the new containers.

  1. Navigate to your FoundriesFactory Devices page.
  2. Click "Update tags & apps".
  3. Move the following apps from the Available (left) list to the Selected (right) list:
    • arduino-elf-mqtt
    • arduino-voice-mqtt
    • arduino-led-matrix-mqtt
    • (Ensure mosquitto remains enabled)
  4. Click "Apply".

Verification

Wait for the update to apply, then check the status of your device. You should now see four specific containers running in harmony.

device:~$ docker ps

You should see output similar to this:

CONTAINER ID   IMAGE                                          STATUS         NAMES
a1b2c3d4e5f6   hub.foundries.io/<Factory>/arduino-elf-mqtt    Up 2 minutes   arduino-elf-mqtt-1
b2c3d4e5f6a1   hub.foundries.io/<Factory>/arduino-voice-mqtt  Up 2 minutes   arduino-voice-mqtt-1
c3d4e5f6a1b2   hub.foundries.io/<Factory>/arduino-led-matrix  Up 2 minutes   arduino-led-matrix-1
c6fad4f70091   hub.foundries.io/<Factory>/mosquitto           Up 2 hours     mosquitto-mosquitto-1

Step 4: Verifying the Experience

Give the device a few moments to initialize. You can verify that all four containers are running by checking docker ps on the device.

1. The Setup

  • Open the Vision Feed at http://<device-ip>:8001. You should see the live camera stream.
  • Open the Voice UI at http://<device-ip>:8000. You should see a festive living room scene with the prompt "Say 'Select' to start".

2. Test Voice Control (Normal Operation)

Before we trigger an alert, let's decorate our tree a bit.

  • Wake the System: Speak clearly: "Select".
  • Result: The Voice UI updates to "Select the Color". The LED matrix on your board will light up, waiting for a command.
  • Change the Mood: Say a color, like "Green" or "Blue".
  • Verification:
    • The Christmas tree on the Voice UI immediately changes to that color.
    • The Physical LED Matrix on your Arduino UNO Q syncs perfectly, displaying the same color.
    • This confirms the Voice Service is successfully publishing MQTT messages to the Matrix Service.

3. The Intruder Alert (The Override)

Now, let's test the Vision integration.

  • Action: Hold your Elf figurine (or a printed picture of one) in front of the camera while the system is in "Green" or "Blue" mode.
  • Watch the Chain Reaction:
    1. Vision: The arduino-elf-mqtt container detects the object and draws a red bounding box (e.g., "elf 73%").
    2. MQTT: It publishes the arduino/alert/red message.
    3. Reaction: Instantly—regardless of the previous color—the Voice UI tree turns red, and the Physical LED Matrix switches to a red alert state.

4. Reset the System

  • To clear the alert, simply say "Select" again. The system acknowledges the command, clears the red alert status, and waits for your next color choice.
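The whole verification flow above behaves like a small state machine. Here is an illustrative sketch of it in one process; the state and event names are assumptions for the sketch, since the real services coordinate the same transitions over MQTT.

```python
# Sketch of the alert/voice interaction as a small state machine.
# State and event names are illustrative; the real services run this
# flow across containers via MQTT rather than in a single process.

IDLE, AWAITING_COLOR, ALERT = "idle", "awaiting_color", "alert"

def next_state(state, event):
    """Advance the tree's state for one event (a voice word or detection)."""
    if event == "elf_detected":
        return ALERT                  # vision alert overrides everything
    if event == "select":
        return AWAITING_COLOR         # "select" also clears an active alert
    if state == AWAITING_COLOR and event in ("red", "green", "blue"):
        return IDLE                   # color applied, back to idle
    return state

s = IDLE
s = next_state(s, "select")        # awaiting_color
s = next_state(s, "blue")          # idle (tree is blue)
s = next_state(s, "elf_detected")  # alert (red flash)
s = next_state(s, "select")        # alert cleared
print(s)  # → awaiting_color
```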

Conclusion: Mission Accomplished

We started this journey with a simple blinking light and finished with an intelligent, elf-hunting sentinel. This project proved that the Arduino UNO Q is far more than a standard microcontroller; its hybrid architecture effortlessly bridged the gap between heavy AI workloads and precise, real-time hardware control.

Crucially, FoundriesFactory and fioup turned what could have been a deployment nightmare into a seamless workflow. We didn't battle dependency hell or manually flash boards to get here; we simply pushed code. From voice recognition to computer vision, we scaled our system's complexity without ever leaving the command line.

Our system is now live, and the elves don't stand a chance. Whether you’re guarding holiday cheer or deploying the next generation of industrial sensors, you now have the blueprint for success: Start simple, use robust tools, and scale seamlessly.

Resources

For a deeper dive, explore the repository and documentation to gain a better understanding of the code and platform:
