Testing video output from Qualcomm® RB3 Gen 2 dev kit in LAVA
LAVA’s main goal is to drive testing on the DUT (Device Under Test) itself. The alternative for testing video output would be for the DUT to both play back the video and record its own output.
This isn’t impossible, but it’s generally impractical. It’s much easier to capture the video playback from the DUT on the LAVA worker, which usually has more resources available to post-process the data and determine whether the output is correct.
In the past, running a test on both the DUT and another machine would require a multi-node job; these are notoriously hard to run and maintain. To address this issue, at least for some types of test jobs, it is now possible to run a test on the LAVA worker inside a Docker container. This feature was introduced to help run the Android CTS suite. Details are described in the LAVA docs.
There are a few prerequisites that need to be met to use host Docker testing for verifying video output.
1. Proper hardware setup
The LAVA worker needs to be able to capture the video output generated by the DUT. To accomplish this goal, we’ll use a USB HDMI capture dongle. The dongle is plugged into the worker, and the HDMI output from Qualcomm RB3 Gen 2 dev kit is connected to the dongle with a regular HDMI cable.
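Before wiring everything into LAVA, it’s worth sanity-checking the capture path on the worker itself. Here is a minimal sketch, assuming the dongle enumerates as /dev/video0 and OpenCV is installed on the worker (both are assumptions; adjust for your setup):

```python
# Minimal sanity check: grab one frame from the HDMI capture dongle.
# Assumes the dongle enumerates as /dev/video0; adjust for your setup.
import cv2

cap = cv2.VideoCapture("/dev/video0")
if not cap.isOpened():
    raise SystemExit("Could not open the capture device")

ok, frame = cap.read()
cap.release()
if not ok:
    raise SystemExit("Failed to read a frame from the dongle")

# Save the frame so the physical wiring can be verified by eye.
cv2.imwrite("captured_frame.jpg", frame)
```

If the saved frame shows the DUT’s screen contents, the hardware side is ready.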
2. LAVA device type and device dictionary
When running a test in a Docker container on the worker, the container doesn’t have access to most of the host’s resources. In particular, it doesn’t have access to the /dev/video* devices. To fix this, we need to tell the worker to launch the container with the proper devices passed through. The LAVA Docker test implementation allows for this; the device type needs to contain the following section:
```yaml
actions:
  test:
    methods:
      docker:
        options:
```
Since the device type for the Qualcomm RB3 Gen 2 dev kit doesn’t inherit from any more general type, it’s easy to add this section. For other devices it may be more difficult. This is the snippet added to the device type:
```yaml
test:
  methods:
    docker:
      options: {{ device_docker_options|default([]) }}
```
This allows us to override the Docker options for each device separately. This is done in the device dictionary:
```jinja
{% set device_docker_options = ["--device", "/dev/video0:/dev/video0", "--device", "/dev/video1:/dev/video1"] %}
```
The resulting device dictionary contains the required options:
```yaml
actions:
  …
  test:
    methods:
      docker:
        options: ['--device', '/dev/video0:/dev/video0', '--device', '/dev/video1:/dev/video1']
```
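With these options in place, the test container should be able to see the capture devices. A quick, hypothetical sanity check that could be run from inside the container (the assertion message is an assumption, not LAVA output):

```python
# Hypothetical check that the --device pass-through worked: list the
# video devices visible inside the test container.
import glob

devices = sorted(glob.glob("/dev/video*"))
print("Visible capture devices:", devices)
assert devices, "No /dev/video* devices; check device_docker_options"
```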
The last step is to prepare a test that uses these features. The test will capture a single frame from HDMI output and compare it to the reference. The reference frame represents what is supposed to be displayed on the screen. As mentioned above, the test will run in a Docker container on the host. Here is the snippet of the test job definition covering this part:
```yaml
- test:
    docker:
      image: ghcr.io/mwasilew/python-cv2-docker:main
    timeout:
      minutes: 30
    definitions:
      - from: git
        name: video_compare
        path: automated/linux/video/video.yaml
        branch: video_compare
        parameters:
          REFERENCE_IMAGE: /lava-downloads/lmp_desktop.jpg
        repository: https://github.com/mwasilew/test-definitions.git
```
The container that the test runs in includes image comparison tools based on OpenCV. The reference image was captured beforehand; it’s stored on the host server and downloaded to the worker together with the other build artifacts. All artifacts are also available in the testing container.
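The exact comparison logic lives in the video_compare definition in the test-definitions repository. As an illustration of the general approach, a histogram-based similarity check in OpenCV might look like the sketch below; the metric, threshold handling, and file names here are assumptions, and printing the raw signal stands in for the lava-test-case helper:

```python
# Illustrative OpenCV similarity check; the real logic lives in the
# video_compare test definition. Names and metric are assumptions.
import sys
import cv2

REFERENCE_IMAGE = "/lava-downloads/lmp_desktop.jpg"
CAPTURED_IMAGE = "captured_frame.jpg"
THRESHOLD = 99.0  # required similarity in percent

ref = cv2.imread(REFERENCE_IMAGE)
cap = cv2.imread(CAPTURED_IMAGE)
# Resize the captured frame so both images have the same dimensions.
cap = cv2.resize(cap, (ref.shape[1], ref.shape[0]))

# Compare per-channel colour histograms as a simple similarity metric.
score = 0.0
for channel in range(3):
    h_ref = cv2.calcHist([ref], [channel], None, [256], [0, 256])
    h_cap = cv2.calcHist([cap], [channel], None, [256], [0, 256])
    score += cv2.compareHist(h_ref, h_cap, cv2.HISTCMP_CORREL)
similarity = (score / 3.0) * 100.0

print(f"Similarity Score: {similarity:.3f}%")
print(f"Required threshold: {THRESHOLD:.3f}%")
result = "pass" if similarity >= THRESHOLD else "fail"
print(f"<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=video_output RESULT={result}>")
sys.exit(0 if result == "pass" else 1)
```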
In the end, the test results are captured in the test job log:
```
Similarity Score: 100.000%
Required threshold: 99.000%
Test pass
<LAVA_SIGNAL_TESTCASE TEST_CASE_ID=video_output RESULT=pass>
```