Controlling an air-gapped robot vacuum from Home Assistant using synthesized speech
I have a Dreame L40 Ultra to vacuum and mop around my home.
The Dreame app requires the robot to have internet access to receive commands, send status information, or modify scheduling. Not being a fan of audio-video recording devices roaming freely in my home, I keep it offline. Unfortunately, this means no Home Assistant integration possibilities, and painful schedule changes (log in to firewall, temporarily allow traffic, open the app, update schedule, block again…).
However, this robot (and many others) recognizes a set of voice commands (a feature that works fully offline), so I built jacadi, a Go HTTP server that maps endpoints to audio file playback, to play these voice commands on demand.
Overview
Home Assistant performs a POST request to jacadi. jacadi plays the wake word followed by the command using aplay on the USB speaker connected to the Raspberry Pi. The Dreame robot hears the wake word and the command, and performs the action.
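The core idea can be sketched in a few lines of Go. This is only an illustration of the endpoint-to-audio-file mapping, not jacadi's actual code: the `routes/` file layout and handler shape are my assumptions, and the real server shells out to aplay where the comment indicates.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// routeToWav maps a request path like /play/dreame/clean-balcony to an
// audio file. The path pattern mirrors the rest_command URLs later in
// this post; the routes/ directory layout is a guess.
func routeToWav(path string) (string, bool) {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if len(parts) != 3 || parts[0] != "play" {
		return "", false
	}
	return fmt.Sprintf("routes/%s/%s.wav", parts[1], parts[2]), true
}

func playHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "POST only", http.StatusMethodNotAllowed)
		return
	}
	wav, ok := routeToWav(r.URL.Path)
	if !ok {
		http.NotFound(w, r)
		return
	}
	// The real server plays the file here, e.g.:
	// exec.Command("aplay", "-D", audioDev, wav).Run()
	fmt.Fprintf(w, "playing %s\n", wav)
}

func main() {
	// Demonstrate the mapping against an in-process test server.
	srv := httptest.NewServer(http.HandlerFunc(playHandler))
	defer srv.Close()
	resp, err := http.Post(srv.URL+"/play/dreame/clean-balcony", "text/plain", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // playing routes/dreame/clean-balcony.wav
}
```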
Setup
Hardware
I connected a USB speaker to a Raspberry Pi located near the robot's home base.
Software
On the Raspberry Pi, we need to install alsa-utils and add the user the container will run as to the audio group.
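When jacadi runs in Docker, the container also needs access to the host's sound devices. A hypothetical compose fragment along these lines should do it (the image tag and the exact setup are my assumptions; the repo's docker-compose.yml is authoritative):

```yaml
services:
  jacadi:
    image: jacadi:slim        # assumed tag; see the repo's compose file
    devices:
      - /dev/snd:/dev/snd     # expose the ALSA devices to the container
    group_add:
      - audio                 # grant the container user audio group access
```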
With the USB speaker plugged into the Raspberry Pi, aplay -l will help us figure out which card to use (card 1 here):

A simple test will confirm we have this right:
speaker-test -D plughw:1,0 -t wav
If the volume blasts your eardrums, adjust:
amixer -c 1 sset PCM 20%
We can then deploy the jacadi API using Ansible (or simply run the docker-compose.yml from the repo):
- hosts: raspberry-pi
  roles:
    - role: ansible-role-jacadi
      jacadi_audiodev: "plughw:1,0" # adjust to match your card
Supporting new devices
The shipped image only contains commands for the Dreame L40 Ultra, but this can easily be expanded for other devices.
Add commands to the image
Create a new set of commands in jacadi’s routes/ folder (check dreame.json
for reference), then build your custom image:
docker build --target slim --build-arg ROUTES=mydevice -t jacadi:mydevice .
The new audio files will be generated during build and baked into the image.
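Conceptually, a route file pairs an endpoint name with the sentence to synthesize. The sketch below is purely hypothetical — the real schema is whatever dreame.json in the repo defines:

```json
{
  "clean-balcony": "Clean the balcony",
  "battery-level": "Check battery level"
}
```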
Add commands through mounted volume
The docker images tagged full embed piper and can generate text-to-speech audio files at runtime or at startup. Creating and mounting an extra_routes file into your container will generate the missing audio files at startup. See jacadi's README for details.
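Assuming the compose setup from the repo, mounting such a file might look like this (the container path is my guess; the README documents the real one):

```yaml
services:
  jacadi:
    image: jacadi:full                     # the piper-enabled image
    volumes:
      - ./extra_routes:/app/extra_routes   # assumed container path
```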
Home Assistant
Generating the rest_command yaml list
With the API up and running, we can generate the corresponding Home Assistant rest_command entries using the generate-homeassistant script in the jacadi repo:
go run cmd/generate-homeassistant/main.go -base-url="http://jacadi.local:8080" -device=dreame

This will generate the ha-config/homeassistant-rest.yml file, which contains a mapping of all the routes to Home Assistant rest commands. We paste them into configuration.yaml's rest_command entry:
rest_command:
  jacadi_dreame_battery_level:
    url: http://jacadi.local:8080/play/dreame/battery-level
    method: post
  jacadi_dreame_clean_balcony:
    url: http://jacadi.local:8080/play/dreame/clean-balcony
    method: post
  [...]
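The naming convention visible above — jacadi_<device>_<route>, with dashes flattened to underscores — can be sketched in Go. This mirrors the generated output, not the generator's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// restCommandName reproduces the naming pattern seen in the generated
// YAML: jacadi_<device>_<route>, with dashes replaced by underscores.
func restCommandName(device, route string) string {
	return "jacadi_" + device + "_" + strings.ReplaceAll(route, "-", "_")
}

func main() {
	fmt.Println(restCommandName("dreame", "battery-level"))
	// battery-level -> jacadi_dreame_battery_level
}
```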
Wake word script
For the Dreame vacuum, every command must be preceded by the wake word ("Okay Dreame"), so we can add this small script to our HA config:
script:
  vacuum_command:
    alias: "Vacuum Command with Wake"
    fields:
      command:
        description: "The rest_command to execute after wake up"
        example: "jacadi_dreame_clean_balcony"
    sequence:
      - service: rest_command.jacadi_dreame_ok_dream
      - delay:
          seconds: 2
      - service: "rest_command.{{ command }}"
Dashboard
This dashboard makes all of the vacuum cleaner’s commands easy to invoke.
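A button card along these lines can invoke the wake word script for each command (card type, name, and icon here are my choices, not taken from the original dashboard):

```yaml
type: button
name: Clean balcony
icon: mdi:robot-vacuum
tap_action:
  action: call-service
  service: script.vacuum_command
  data:
    command: jacadi_dreame_clean_balcony
```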

Caveats
Audio annoyance
Audio playback on the Pi’s USB speaker from Go within Docker was frustrating. Getting the right audio encoding, bitrate, and device mapping working through Go audio libraries added complexity that wasn’t worth it for this hack. Calling aplay with pre-generated wav files was the path of least resistance.
Unilateral communication
The robot listens to us, and acts, but we don’t get any confirmation it has heard our command, or that the command was successfully performed.
Limited controls
We are limited to the set of commands Dreame has set up for voice recognition. They cannot be combined or chained. We cannot ask the robot to “Vacuum only” and “Clean the bathroom”. The “Clean the bathroom” command will clean the bathroom with whatever setting was last used through the app. We cannot ask for multiple rooms to be cleaned (and as we have no feedback when the cleaning is over, we need to add time buffers between manually chained tasks).
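Chaining rooms therefore means stacking script calls with generous delays. A hypothetical automation sketch (the route names and timings are my assumptions):

```yaml
automation:
  - alias: "Evening clean: kitchen, then hallway"
    trigger:
      - platform: time
        at: "21:00:00"
    action:
      - service: script.vacuum_command
        data:
          command: jacadi_dreame_clean_kitchen   # assumed route name
      - delay:
          minutes: 30   # buffer: we get no completion feedback
      - service: script.vacuum_command
        data:
          command: jacadi_dreame_clean_hallway   # assumed route name
```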
Inconsistent names
The room names in the app don’t always map to the voice commands. Here are a few mappings I figured out:
- Saying “clean the hallway” cleans the corridor
- Saying “clean the bedroom” cleans all bedrooms
- Saying “clean the master room” cleans the primary bedroom
- Saying “clean the guest room” cleans the second bedroom
Conclusion
Despite a few caveats, this setup has allowed me, through Home Assistant, to set up and easily update my home's cleaning schedule, fire off one-off cleaning actions whether I am home or not, and build some simple automations (like "clean around the litter box after a cat has been in there"), all while keeping the vacuum cleaner fully offline and preserving some feeling of privacy.
A somewhat obvious next step would involve adding voice recognition to jacadi, relaying the robot's (very verbose) vocal feedback to Home Assistant to get information about task success, things the robot wants me to fix or clean, etc. Although that would mean adding a new device listening in…
Although the only non-human voice activated device in my home is the Dreame, I am sure more applications can be found for this type of voice bridging with air-gapped devices.
cats home-automation TTS raspberrypi
2026-01-27 03:20 (Last updated: 2026-01-27 03:54)
Maxence Ardouin (nbr23)