The November 2025 Learning Lab with Erika Heidi covers the release of Chainguard OS for the Raspberry Pi, showing how Chainguard OS has evolved to power new environments.
To get started with Chainguard OS on the Raspberry Pi, and to be able to run the demos in this presentation, you'll need a Raspberry Pi board, a microSD card (and a card reader on another computer), and peripherals for the first boot: a power source, an Ethernet cable, a micro-HDMI cable, and a keyboard.
You also need to download the Chainguard Raspberry Pi Docker image by filling in the request form.
Unpack the image contents:
gunzip rpi-generic-docker-arm64-*.raw.gz
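Since dd overwrites whatever device it's pointed at, it's worth double-checking which device name corresponds to your microSD card before writing the image. On most Linux systems you can list block devices with:
lsblk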
Create the disk. This assumes your microSD card reader is on /dev/sda:
sudo dd if=rpi-generic-docker-arm64-*.raw of=/dev/sda bs=1M
After the disk is ready, plug it into the Pi and connect the board to the power source, Ethernet cable, micro-HDMI cable, and keyboard. You can log in with user linky and password linky.
Then, you can run ip addr to find out your local network IP address and connect to the Pi via SSH from another computer.
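For example, assuming the Pi got the address 192.168.1.50 (replace it with the address reported by ip addr), you can connect with:
ssh linky@192.168.1.50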
In the first demo, Erika demonstrates how to build a minimal Minecraft Java server with Chainguard Containers, running Chainguard OS on the Raspberry Pi.
From the Raspberry Pi, clone the Guardcraft repository:
git clone https://github.com/chainguard-demo/guardcraft-server.git && cd guardcraft-server
Build the image:
docker build . -t guardcraft-server
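Optionally, you can confirm the build succeeded by listing the new image:
docker images guardcraft-server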
Then, run the Minecraft server with:
docker-compose up
This starts a Minecraft Java server using default settings configured via environment variables in the docker-compose.yaml file. You can connect from any compatible client on your local network.
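As an illustration of how that wiring works, here is a minimal sketch of what such a docker-compose.yaml can look like. The variable names and values below are hypothetical, so check the repository's actual file for the real settings; the port shown is the Minecraft Java Edition default:
services:
  guardcraft:
    image: guardcraft-server
    ports:
      - "25565:25565"  # default Minecraft Java Edition port
    environment:
      MOTD: "Guardcraft on Chainguard OS"  # hypothetical variable and value
      MAX_PLAYERS: "10"                    # hypothetical variable and value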
In the second demo, Erika shows how to build a Llama.cpp container image using Chainguard Containers, and how to run the Llama.cpp server with vision-capable LLMs for generating rich ALT image descriptions, on the Raspberry Pi with Chainguard OS.
From the Raspberry Pi, clone the Wolfi-llama repository:
git clone https://github.com/erikaheidi/wolfi-llama.git && cd wolfi-llama
Next, run the command below to build the wolfi-llama container image. This step compiles Llama.cpp from source, which may take several minutes to complete.
docker build . -t wolfi-llama
For this demo, we're using the Qwen3-VL open source LLM, since it has vision capabilities. We picked the 2B-Instruct version since it runs well on the Raspberry Pi.
Access the models directory from the repository. This is where the model files should be stored in order to be shared with the container when the server is running:
cd models/
Download the LLM model from Hugging Face:
curl -L -O https://huggingface.co/unsloth/Qwen3-VL-2B-Instruct-GGUF/resolve/main/Qwen3-VL-2B-Instruct-Q8_0.gguf
Next, download the mmproj file for that model, since it is required for advanced image features:
curl -L -O https://huggingface.co/unsloth/Qwen3-VL-2B-Instruct-GGUF/resolve/main/mmproj-F32.gguf
When the downloads are complete, you can run the server.
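Before starting the server, you can confirm both files are in place:
ls -lh Qwen3-VL-2B-Instruct-Q8_0.gguf mmproj-F32.gguf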
The docker-compose.yaml file includes a custom command directive with all the options required to run the server using the models you just downloaded. These values are hardcoded so you don't need to type a long docker run command every time you want to bring the server up. For reference, here is the command that you'll effectively be running via docker-compose:
docker run --rm --device /dev/dri/card1 --device /dev/dri/renderD128 \
-v ${PWD}/models:/models -p 8000:8000 wolfi-llama:latest --no-mmap --no-warmup \
-m /models/Qwen3-VL-2B-Instruct-Q8_0.gguf --mmproj /models/mmproj-F32.gguf \
--port 8000 --host 0.0.0.0 -n 512 \
--temp 0.7 \
--top-p 0.8 \
--top-k 20 \
--presence-penalty 1.5
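Translated into compose form, a sketch equivalent to that command could look like the following; the repository's actual docker-compose.yaml remains the source of truth:
services:
  wolfi-llama:
    image: wolfi-llama:latest
    devices:
      - /dev/dri/card1
      - /dev/dri/renderD128
    volumes:
      - ./models:/models
    ports:
      - "8000:8000"
    command: >
      --no-mmap --no-warmup
      -m /models/Qwen3-VL-2B-Instruct-Q8_0.gguf
      --mmproj /models/mmproj-F32.gguf
      --port 8000 --host 0.0.0.0 -n 512
      --temp 0.7 --top-p 0.8 --top-k 20
      --presence-penalty 1.5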
To start the server, run:
docker-compose up
When the server is up and running, you can access the chatbot interface from your browser by pointing it to the Raspberry Pi's IP address on your local network, on port 8000.
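Besides the browser interface, you can also query the server programmatically through llama.cpp's OpenAI-compatible API. The snippet below is a sketch that asks for ALT text for a local image; the Pi address 192.168.1.50 and the file photo.jpg are made up for this example:
# Base64-encode the image and send it inline as a data URI
# (-w0 disables line wrapping in GNU base64)
IMG=$(base64 -w0 photo.jpg)
curl http://192.168.1.50:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "{
    \"messages\": [{
      \"role\": \"user\",
      \"content\": [
        {\"type\": \"text\", \"text\": \"Describe this image as concise ALT text.\"},
        {\"type\": \"image_url\", \"image_url\": {\"url\": \"data:image/jpeg;base64,$IMG\"}}
      ]
    }],
    \"max_tokens\": 256
  }"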