Migrating My Existing Home Assistant from Bare Metal to a VM
For those who don’t know, it’s like changing the wings while flying the plane. I didn’t think twice before backing myself up with multiple GPTs to max out their free quotas and getting down to business.
TL;DR
- I used QEMU and libvirt in CLI mode.
- Use AI to get through the hurdles. A GPT with a free quota would suffice.
- Go for a bridged network to benefit from your router’s DHCP.
- Disable your firewall and enable IP forwarding.
- Pass all the necessary devices through using their IDs for consistency.
- Use the domain qemu:///system under root.
- I used virt-install and virsh to deploy and manage my VMs from the CLI. virt-manager is a great choice if you feel like a GUI.
- Don’t forget to download your Home Assistant backup before wiping your disk.
Caveats
USB devices might fail if the host reboots and the VMs are not gracefully shut down. If you want to be able to reboot the host without bothering with manual VM shutdowns, deploy a script that ensures graceful VM shutdown automatically. I solved it with a systemd service.
Hard for me to judge, but Proxmox was off the table since I use Arch as a host and Proxmox only ships as its own Debian-based distribution.
VirtualBox is good but more beginner and GUI oriented, and it gives no benefit in a CLI-driven environment, although I didn’t test it.
Arch Linux as a VM Host
I deployed Arch because it is what I use on my laptop. For virtualization I went with the KVM hypervisor backed by QEMU and libvirt as a robust combination of hardware emulator and API.
In my case dependencies were:
sudo pacman -S qemu-full virt-manager dnsmasq vde2 ebtables iptables
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
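Before creating any VMs it’s worth checking that the host is actually ready for KVM; libvirt ships a validator for this:
# Reports CPU virtualization support and other KVM/QEMU prerequisites
virt-host-validate qemu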
Network Via a Bridge
I'm not touching a firewall here. On a freshly installed machine the rules should be permissive enough. If you have a firewall, either disable it or know what you're doing.
Enable IP forwarding in the Kernel
# Enable IPv4 forwarding
sudo sysctl -w net.ipv4.ip_forward=1
# Enable IPv6 forwarding
sudo sysctl -w net.ipv6.conf.all.forwarding=1
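sysctl -w only lasts until the next reboot; to make it permanent you can drop the same keys into a sysctl config file (the file name below is my own arbitrary choice):
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ip-forward.conf
echo "net.ipv6.conf.all.forwarding = 1" | sudo tee -a /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system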
Create a Bridge Interface
A bridge is the preferred option for a production setup and is rather easy to achieve. There are many ways to create your bridge interface; I went with the NetworkManager CLI.
Update all the names according to how interfaces in your system are named.
sudo nmcli con add type bridge ifname br0
sudo nmcli con add type ethernet ifname enp2s0 master br0
sudo nmcli con mod br0 ipv4.method auto
sudo nmcli con up br0
Run ip a and check that your bridge is there.
Your IP will obviously change after the reboot because of the new MAC on the bridge interface. Mind it if you are configuring everything over SSH or assigning static leases from your router, which you will have to update.
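To sanity-check the result, verify the physical NIC is enslaved to the bridge and the bridge got an address:
# Should list enp2s0 with "master br0"
bridge link show
# Should show the DHCP lease on the bridge itself
ip addr show br0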
Configure the Virtual Machine
Get the Disk Image
You’ll need your installation medium. In my case it is a QCOW2 disk image from Home Assistant.
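A quick look at the image before importing doesn’t hurt; a sketch assuming the haos_ova-15.0.qcow2 file used below (growing the virtual disk is optional and 64G is just an example size):
# Inspect format and virtual size
qemu-img info haos_ova-15.0.qcow2
# Optionally grow the virtual disk before the first boot
qemu-img resize haos_ova-15.0.qcow2 64G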
Map Out Devices
In case you want to attach any USB devices to your VM, look them up and note the bus and device IDs.
# lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 4555:1031
Bus 001 Device 003: ID 10c4:ea60 Silicon Labs CP210x UART Bridge
Bus 001 Device 004: ID 05e3:0610 Genesys Logic, Inc. Hub
Bus 001 Device 005: ID 1915:cafe Nordic Semiconductor ASA nRF528xx OpenThread Device
Bus 001 Device 006: ID 0658:0200 Sigma Designs, Inc. Aeotec Z-Stick Gen5 (ZW090) - UZB
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Install the VM
Everything seems to be ready for the VM creation using virt-install.
virt-install \
  --name haos \
  --description "Home Assistant OS" \
  --os-variant=generic \
  --ram=1512 \
  --vcpus=2 \
  --disk haos_ova-15.0.qcow2,bus=scsi \
  --controller type=scsi,model=virtio-scsi \
  --import \
  --graphics none \
  --boot uefi \
  --network bridge=br0,model=virtio \
  --hostdev 001.003 \
  --hostdev 001.005 \
  --hostdev 001.006
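A quick sanity check I’d run right after the install, assuming the VM name haos from above:
# The VM should show up as running
sudo virsh list --all
# The USB passthrough devices should appear as <hostdev> entries in the domain XML
sudo virsh dumpxml haos | grep -A 5 "<hostdev"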
Check IP and MAC
From within VM
You should now end up inside your VM's console, where after boot you will have to press Enter to get a login prompt. Log in as root and check your VM's IP with the ip a command.
To exit the VM console use CTRL-5 (which sends the ^] escape); it might be different on a Mac though.
Find MAC from Host
Using virsh
# sudo virsh domiflist haos
Interface Type Source Model MAC
-----------------------------------------------------------
vnet0 bridge br0 virtio 52:54:00:8e:6b:b8
Using ip a
4: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:8e:6b:b8 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe8e:6bb8/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
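Note that the host-side vnet0 MAC (fe:54:00:…) mirrors the guest MAC (52:54:00:…) with a different first octet. If the VM has already exchanged traffic with the host, one hedged way to map that MAC to an IP from the host is the neighbor table:
# May be empty until the host has actually talked to the VM
ip neigh | grep -i "52:54:00:8e:6b:b8"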
Mark VM for Autostart
To resiliently start your VM after host reboots use
sudo virsh autostart <VM>
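You can confirm the flag took with dominfo:
# Should report autostart as enabled
sudo virsh dominfo haos | grep -i autostart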
Restore Home Assistant Backup
Point your browser to your VM IP on port 8123. Press the button to restore from backup and use the file you carefully saved before. Log in with your existing user after the restore is finished. In my case everything worked perfectly right after the restore finished. Kudos, Home Assistant team!
After I hard rebooted the host, my Sonoff Zigbee USB adapter together with the Z-Wave stick and Thread radio stopped working even though I could still see them as devices in my VM. The solution is to either gracefully shut down your VM before the host reboot or manually re-insert all of the USB sticks for them to work again. You can of course also fully shut down your host, which acts as a power drop for the USB sticks and makes everything work again. To avoid this manual hassle read the next chapter.
systemd for Graceful VM Shutdown on Host Reboot
Use libvirt-guests.service to manage the VMs' life cycle.
Create the libvirt-guests configuration file
sudo nano /etc/conf.d/libvirt-guests
These are decent defaults
# Set to "suspend" if you want to suspend VMs instead of shutting down
ON_SHUTDOWN=shutdown
# Maximum time to wait for a VM to shut down (in seconds)
SHUTDOWN_TIMEOUT=300
# Use parallel shutdown to speed up the process (number of VMs to shut down simultaneously)
PARALLEL_SHUTDOWN=5
# Enable this to start VMs that were running when the host was shut down
ON_BOOT=start
Enable the libvirt-guests service
sudo systemctl enable libvirt-guests.service
QEMU Guest Agent
The QEMU guest agent improves communication between the host and the VM.
# Arch-based installation
sudo pacman -S qemu-guest-agent
# Enable and start the service
sudo systemctl enable qemu-guest-agent
sudo systemctl start qemu-guest-agent
Edit your VM configuration file and add the following into the <devices> section. The file is usually located in /etc/libvirt/qemu/<machine-name>.xml, but prefer sudo virsh edit <machine-name> so libvirt picks up (and does not overwrite) your changes.
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
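Once the VM has been restarted with this channel in place, you can check from the host that the agent actually responds; a sketch assuming the haos domain name from above:
# Returns an empty JSON object if the guest agent is alive
sudo virsh qemu-agent-command haos '{"execute":"guest-ping"}'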
Custom systemd Service to Gracefully Shut Down VMs
Create a Script for Controlled VM Shutdown
Create a script file
sudo nano /usr/local/bin/vm-shutdown-script.sh
Copy the script code and save
#!/bin/bash
# Debug info
echo "Running as user: $(whoami)"
echo "Libvirt socket: $(ls -la /var/run/libvirt/libvirt-sock 2>/dev/null || echo 'Not found')"
echo "Libvirtd status: $(systemctl is-active libvirtd)"
# Wait for libvirtd to be fully available
echo "Waiting for libvirtd connection..."
timeout 30 bash -c 'until virsh list &>/dev/null; do sleep 2; echo "Retrying virsh connection..."; done'
# Get the list of running VMs over the system connection
echo "Trying to list VMs..."
export LIBVIRT_DEFAULT_URI=qemu:///system
VMS=$(virsh --connect qemu:///system list --name --state-running)
echo "Found VMs: ${VMS:-None}"
if [ -z "$VMS" ]; then
echo "No running VMs found"
exit 0
fi
for VM in $VMS; do
echo "Sending shutdown signal to $VM"
# Try guest agent first, fall back to normal shutdown
virsh shutdown --mode=agent "$VM" &>/dev/null || virsh shutdown "$VM" &>/dev/null
done
# Wait for VMs to shut down (with timeout)
TIMEOUT=300
ELAPSED=0
while [ $ELAPSED -lt $TIMEOUT ]; do
RUNNING_VMS=$(virsh list --name --state-running)
if [ -z "$RUNNING_VMS" ]; then
echo "All VMs shut down successfully"
exit 0
fi
echo "Waiting for VMs to shut down: $RUNNING_VMS"
sleep 5
ELAPSED=$((ELAPSED + 5))
done
# Force poweroff for any remaining VMs
for VM in $(virsh list --name --state-running); do
echo "Forcing poweroff for $VM after timeout"
virsh destroy "$VM"
done
Make it executable
sudo chmod +x /usr/local/bin/vm-shutdown-script.sh
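If you want, you can test the script by hand before wiring it into systemd; be aware it really does shut down any running VMs:
sudo /usr/local/bin/vm-shutdown-script.sh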
Create Service Configuration
sudo nano /etc/systemd/system/vm-shutdown.service
Copy service configuration and save
[Unit]
Description=Graceful VM shutdown
After=libvirtd.service vm-shutdown.socket
Requires=libvirtd.service
Before=shutdown.target reboot.target halt.target
[Service]
Type=oneshot
User=root
Environment=LIBVIRT_DEFAULT_URI=qemu:///system
ExecStartPre=/bin/sh -c 'sleep 5 && chmod 666 /var/run/libvirt/libvirt-sock'
ExecStart=/bin/true
ExecStop=/usr/local/bin/vm-shutdown-script.sh
TimeoutStartSec=300
TimeoutStopSec=300
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Enable the Service
sudo systemctl daemon-reload
sudo systemctl enable vm-shutdown.service
sudo systemctl start vm-shutdown.service
You should now be able to reboot your host with all the VMs gracefully shut down. Check the service logs and status if something doesn’t look right
sudo journalctl -u vm-shutdown.service
sudo systemctl status vm-shutdown.service
In my case the logs look like this
Mar 18 21:05:32 virtuoso systemd[1]: Stopping Graceful VM shutdown...
Mar 18 21:05:32 virtuoso vm-shutdown-script.sh[702]: Running as user: root
Mar 18 21:05:32 virtuoso vm-shutdown-script.sh[702]: Libvirt socket: srw-rw-rw- 1 root root 0 Mar 18 21:02 /var/run/libvirt/libvirt-sock
Mar 18 21:05:32 virtuoso vm-shutdown-script.sh[702]: Libvirtd status: active
Mar 18 21:05:32 virtuoso vm-shutdown-script.sh[702]: Waiting for libvirtd connection...
Mar 18 21:05:32 virtuoso vm-shutdown-script.sh[702]: Trying to list VMs...
Mar 18 21:05:32 virtuoso vm-shutdown-script.sh[702]: Found VMs: haos
Mar 18 21:05:32 virtuoso vm-shutdown-script.sh[702]: Sending shutdown signal to haos
Mar 18 21:05:33 virtuoso vm-shutdown-script.sh[702]: Waiting for VMs to shut down: haos
Mar 18 21:05:38 virtuoso vm-shutdown-script.sh[702]: Waiting for VMs to shut down: haos
Mar 18 21:05:43 virtuoso vm-shutdown-script.sh[702]: Waiting for VMs to shut down: haos
Mar 18 21:05:48 virtuoso vm-shutdown-script.sh[702]: Waiting for VMs to shut down: haos
Mar 18 21:05:53 virtuoso vm-shutdown-script.sh[702]: All VMs shut down successfully
Mar 18 21:05:53 virtuoso systemd[1]: vm-shutdown.service: Deactivated successfully.
Mar 18 21:05:53 virtuoso systemd[1]: Stopped Graceful VM shutdown.
Using virsh for VM Management
To manage your VMs from the CLI use virsh.
It can do a lot; check the help to see the details. I used it for debugging and experimenting all the time. Most useful for me were:
# start a VM
sudo virsh start <VM>
# shut down a VM
sudo virsh shutdown <VM>
# reboot a VM
sudo virsh reboot <VM>
# connect to a VM's console
sudo virsh console <VM> --safe
# list the VMs in any status
sudo virsh list --all
# list the networks
sudo virsh net-list
# attach a device to a running VM
sudo virsh attach-device <VM> --file /etc/libvirt/qemu/devices/xyz.xml
# remove a device from a running VM
sudo virsh detach-device <VM> --file /etc/libvirt/qemu/devices/xyz.xml
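The device XML referenced by attach-device / detach-device can be a small hostdev snippet keyed by vendor and product ID, so it survives bus/device renumbering. A sketch using the Z-Stick IDs from the lsusb output above; the directory and file name are my own choice:
sudo mkdir -p /etc/libvirt/qemu/devices
sudo tee /etc/libvirt/qemu/devices/z-stick.xml >/dev/null <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0658'/>
    <product id='0x0200'/>
  </source>
</hostdev>
EOF
sudo virsh attach-device haos --file /etc/libvirt/qemu/devices/z-stick.xml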
Interestingly enough, when you create a bridged network and don't register it with libvirt, you can still use it, but it is not shown by virsh. It will also not show whether your VM gets an IP. This caused me confusion: I thought my networking didn't work until I checked from within the VM and found that it gets an IP from the router.
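If you want virsh to be aware of the bridge, you can optionally register it as a libvirt network; a sketch assuming the br0 bridge from earlier and an arbitrary network name:
cat > host-bridge.xml <<'EOF'
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF
sudo virsh net-define host-bridge.xml
sudo virsh net-start host-bridge
sudo virsh net-autostart host-bridge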
Success
My overcomplicated home works again.
Smile. Hope nothing crashes when my kids are back home.
Next up: installing Immich and Paperless to benefit from the freed-up disk space. I can already see my German self growing on me after 8 years in Berlin🤓 🇩🇪.
How Much Did I Suffer
- 1.5 Days with interruptions for
- Food
- Unrelated AI chats
- Arbitrary questions from kids
- Doctor visit and battling influenza
- Waiting until AI replaces me so that I can go surfing.
Biggest Hurdles Were
- Confusion about whether my VM got an IP, because of poor tool integration and the difference between the post-boot greeting in the VM console vs the on-screen console.
- Figuring out I can log in as root without a password to my HA VM
- USB sticks for IoT radio interfaces not working although present as devices on both the host and the VM
- Trying to use bind and unbind to emulate USB stick re-insertion
- Resisting going full GUI mode. I wanted to save the resources of my old Celeron with its meager 4 GB of RAM.
Troubleshooting
My VM Disappeared
Make sure you use the right QEMU URI. Under root, qemu:///system is used by default; under a normal user it is usually qemu:///session. For VMs that should be able to leverage all the system capabilities, the former is preferred. However, if you forget sudo the CLI will use the wrong URI and show no VMs for that namespace.
[artem@xyz ~]$ virsh uri
qemu:///session
[root@xyz ~]# virsh uri
qemu:///system
My USB Sticks don’t Work
- Check they are bound by vendor and product ID in the VM
- Check if you can see them from within the VM with lsusb
- Re-insert them and try again; they might be in a bad state after a bad reboot, like from a power outage.
- Power off the host and see if they recover after fresh start
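A couple of host-side checks that may help narrow things down; the grep patterns are just examples for typical serial-over-USB radios:
# Confirm the sticks still enumerate on the host at all
lsusb
# Look at the kernel's view after re-plugging a stick
sudo dmesg | grep -iE "usb|cp210x|ttyUSB|ttyACM" | tail -n 20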