Blog

9p rootfs on Ubuntu Server VM Using QEMU

This post is a guide on how to install Ubuntu Server to a Plan 9 filesystem.

If you are impatient, you can skip ahead to the tutorial section below.

What is Plan 9?

Plan 9 started out as a research operating system developed at Bell Labs. It extends the “everything is a file” metaphor of UNIX even further, providing filesystem APIs for network sockets, graphics, and more. Any resource on any system, be it remote or local, can be accessed if you have a filesystem with these special files mounted!

Fig 1. Oprah Winfrey giving everyone files

To accomplish all of this resource and file sharing, the Plan 9 Filesystem Protocol (also known as 9p) was devised.

Within the last few years, a lot of people have begun to realise that 9p was really neat and started to use it in Linux. It is especially popular for sharing resources with virtual machines because of how lightweight it is as a protocol.

What happened to Plan 9?

Sadly, because of the pressures of the capitalistic hellscape we all live under, Bell Labs changed hands and some of this research got pared down. Bell Labs first passed to AT&T Technologies, which was spun off as Lucent Technologies, which later merged with the French Alcatel to form Alcatel-Lucent, which then got bought by the undead corpse of Nokia. Standard murder-execution, er, merger-acquisition stuff.

Fig 2. It’s just business, really

The researchers were scattered like ashes to the wind, many of them finding homes at places like Google et al. They took their work with them to the BSD, Linux, and Macintosh systems that they ended up working with at their new jobs. They ported a lot of the features from Plan 9 to the user spaces of other operating systems and even made kernel modules and extensions to add back some of the features they sorely missed.

(See #1 for more details on this.)

Why use a 9p rootfs?

The most common way that developers boot their VMs is using disk images. This is “fine,” but it introduces a lot of annoyances and inefficiencies.

Let’s look at the latter first. When you boot a virtual machine with a disk image, you are literally telling your virtual machine software to emulate an entire computer motherboard and hard drive. This isn’t something that can typically be accelerated with your host system’s virtualisation features, either. Your virtual machine software (like QEMU, for instance) has to do the grunt work of emulating the behavior of a motherboard’s controller chips, buses, and attached devices (like the drive serving up your disk image).

Let’s also look at why booting a VM this way might be annoying for a developer. Because all of your VM’s files are trapped in a disk image, you have to run an extra command to mount the image on some sort of loopback device. If you use a QEMU qcow2 image, you have the extra step of mapping the image to a network block device first. Some of these things may require administrative/root privileges without special configuration magic. Even worse, you often can’t directly write to a running VM’s filesystem without first turning off the virtual machine! This leads users down iSCSI, NFS, SMB, and SSH rabbit holes if they want to poke at the filesystems of running VMs. What if there was a better way?

With 9p filesystems, you can expose a directory tree directly to the guest’s kernel without the baggage of emulating block devices and running traditional filesystems designed for physical disks. You can directly modify the filesystem of a running VM in situ without any catastrophic consequences. You can even map the permissions of the shared filesystem to the user account running the VM.

Meaning your VM can see this on the mapped guest 9p filesystem:

root@ubuntu-server:~# ls -lah /
total 32K
drwxr-xr-x   1 1000 1000  266 Aug  7 05:15 .
drwxr-xr-x   1 1000 1000  266 Aug  7 05:15 ..
-rwxr-xr-x   1 1000 1000 6.9K Aug  7 05:10 arch-chroot
lrwxrwxrwx   1 root root    7 Apr 22  2024 bin -> usr/bin
drwxr-xr-x   1 root root    0 Feb 26  2024 bin.usr-is-merged
drwxr-xr-x   1 root root  306 Aug  7 17:26 boot
drwxr-xr-x  15 root root 3.7K Aug  8 17:59 dev
drwxr-xr-x   1 root root 2.8K Aug  7 18:03 etc
drwxr-xr-x   1 root root    0 Apr 22  2024 home
lrwxrwxrwx   1 root root    7 Apr 22  2024 lib -> usr/lib
drwxr-xr-x   1 root root    0 Apr  8  2024 lib.usr-is-merged
lrwxrwxrwx   1 root root    9 Apr 22  2024 lib64 -> usr/lib64
drwxr-xr-x   1 root root    0 Aug  7 05:00 media
drwxr-xr-x   1 root root    0 Aug  7 05:00 mnt
drwxr-xr-x   1 root root    0 Aug  7 05:00 opt
dr-xr-xr-x 417 root root    0 Aug  8 17:59 proc
drwx------   1 root root  142 Aug  8  2025 root
drwxr-xr-x  15 root root  500 Aug  8 17:59 run
lrwxrwxrwx   1 root root    8 Apr 22  2024 sbin -> usr/sbin
drwxr-xr-x   1 root root    0 Mar 31  2024 sbin.usr-is-merged
drwxr-xr-x   1 root root   12 Aug  7 17:44 snap
drwxr-xr-x   1 root root    0 Aug  7 05:00 srv
dr-xr-xr-x  13 root root    0 Aug  8 17:59 sys
drwxrwxrwt   1 root root  640 Aug  8 17:59 tmp
drwxr-xr-x   1 root root   94 Aug  7 05:00 usr
drwxr-xr-x   1 root root  124 Aug  7 17:44 var

…and get this on your host machine:

targetdisk@vm-host:~$ ls -lah 9p/
total 32K
drwxr-xr-x 1 targetdisk targetdisk  266 Aug  7 00:15 .
drwxr-xr-x 1 targetdisk targetdisk   76 Aug  5 14:48 ..
-rwxr-xr-x 1 targetdisk targetdisk 6.9K Aug  7 00:10 arch-chroot
-rw------- 1 targetdisk targetdisk    7 Apr 22  2024 bin
drwx------ 1 targetdisk targetdisk    0 Feb 26  2024 bin.usr-is-merged
drwx------ 1 targetdisk targetdisk  306 Aug  7 12:26 boot
drwx------ 1 targetdisk targetdisk  128 Aug  7 00:00 dev
drwx------ 1 targetdisk targetdisk 2.8K Aug  7 13:03 etc
drwx------ 1 targetdisk targetdisk    0 Apr 22  2024 home
-rw------- 1 targetdisk targetdisk    7 Apr 22  2024 lib
-rw------- 1 targetdisk targetdisk    9 Apr 22  2024 lib64
drwx------ 1 targetdisk targetdisk    0 Apr  8  2024 lib.usr-is-merged
drwx------ 1 targetdisk targetdisk    0 Aug  7 00:00 media
drwx------ 1 targetdisk targetdisk    0 Aug  7 00:00 mnt
drwx------ 1 targetdisk targetdisk    0 Aug  7 00:00 opt
drwx------ 1 targetdisk targetdisk    0 Apr 22  2024 proc
drwx------ 1 targetdisk targetdisk  142 Aug  8 12:59 root
drwx------ 1 targetdisk targetdisk   44 Aug  7 00:06 run
-rw------- 1 targetdisk targetdisk    8 Apr 22  2024 sbin
drwx------ 1 targetdisk targetdisk    0 Mar 31  2024 sbin.usr-is-merged
drwx------ 1 targetdisk targetdisk   12 Aug  7 12:44 snap
drwx------ 1 targetdisk targetdisk    0 Aug  7 00:00 srv
drwx------ 1 targetdisk targetdisk    0 Apr 22  2024 sys
drwx------ 1 targetdisk targetdisk  478 Aug  8 12:59 tmp
drwx------ 1 targetdisk targetdisk   94 Aug  7 00:00 usr
drwx------ 1 targetdisk targetdisk  124 Aug  7 12:44 var

Installing Ubuntu Server to a 9p filesystem

Get Ubuntu Server

First, download the Ubuntu Server installer ISO from Ubuntu’s website.

Start the VM

Make a directory you’d like to export as a 9p filesystem. For the purposes of this demonstration, we’ll call our directory ubuntu-9p:

mkdir ubuntu-9p

Now you can run your installer VM with your Ubuntu ISO attached and your 9p filesystem mapped and exported! You’ll need to substitute the path for UBUNTU_ISO to the path of your downloaded Ubuntu Server ISO. I threw on some extra flags to enable some of the extra sandboxing features of QEMU.

Run the following on your host:

qemu-system-$(uname -m) \
    -cpu max \
    -enable-kvm \
    -smp $(nproc) \
    -nodefaults \
    -no-user-config \
    -nographic \
    -chardev stdio,id=virtcons0 \
    -device virtio-serial-pci \
    -device virtconsole,chardev=virtcons0 \
    -m 8G \
    -net user,hostfwd=tcp::2221-:22 \
    -net nic \
    -device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=fs0 \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -drive file=UBUNTU_ISO,media=cdrom,id=virtiso1  \
    -fsdev local,security_model=mapped,id=fsdev-fs0,multidevs=remap,path=ubuntu-9p

This will start a virtual machine with the local directory ubuntu-9p mapped to a 9p filesystem on the guest named fs0.

Get to a shell

After a little wait, the VM should boot into the text-mode Ubuntu installer interface. At the time of writing, the Ubuntu installer doesn’t support installing to systems without block devices, which means we will have to install by other means from a shell on our booted install media.

Select Enter shell from the [ Help ] menu.

================================================================================
  Serial                                               ┌──────────────[ Help ]┐
=======================================================│ Help on this screen  │=
                                                       │ Keyboard shortcuts   │
  As the installer is running on a serial console, it h│ Enter shell          │
  mode, using only the ASCII character set and black an│ View error reports   │
                                                       ├──────────────────────┤
  If you are connecting from a terminal emulator such a│ About this installer │
  supports unicode and rich colours you can switch to "│ Help on SSH access   │
  unicode, colours and supports many languages.        ├──────────────────────┤
                                                       │ Toggle rich mode     │
  You can also connect to the installer over the networ└──────────────────────┘
  allow use of rich mode.







                          [ Continue in rich mode  > ]
                          [ Continue in basic mode > ]
                          [ View SSH instructions    ]

If during this install process you want a bigger terminal you can exit the shell and select Help on SSH access. You’ll get the name of the installer account and a password that was randomly-generated when you booted the Ubuntu live installer media.

┌────────────────────────── Help on SSH access ──────────────────────────┐
│                                                                        │
│  It is possible to connect to the installer over the network, which    │
│  might allow the use of a more capable terminal and can offer more     │
│  languages than can be rendered in the Linux console.                  │
│                                                                        │
│  To connect, SSH to installer@10.0.2.15.                               │
│                                                                        │
│  The password you should use is "Qhs_3:@C^#H'nw`4rd6%".                │
│                                                                        │
│                                                                        │
│                                                                        │
│                             [ Close      ]                             │
│                                                                        │
└────────────────────────────────────────────────────────────────────────┘

In another terminal on your host machine, SSH in like so:

ssh -p 2221 installer@127.0.0.1

If you run the VM multiple times, you might have to edit your ~/.ssh/known_hosts file to remove the stale host key entry for [127.0.0.1]:2221.
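Assuming OpenSSH’s ssh-keygen is available on your host, its -R flag removes all recorded keys for a host in one go:

```shell
# Remove the stale host key entry for the forwarded SSH port.
# ssh-keygen saves a backup of the old file as ~/.ssh/known_hosts.old.
ssh-keygen -R "[127.0.0.1]:2221"
```

Note the bracketed host:port form, which is how known_hosts records hosts on nonstandard ports.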

Mount your 9p filesystem

Now that you have a working shell, you’ll need to mount the 9p filesystem:

root@ubuntu-server:/# mount -t 9p -o trans=virtio fs0 /mnt

Bootstrap the base Ubuntu system

With the destination 9p filesystem mounted, it’s now time to bootstrap a base Ubuntu installation to it. The Ubuntu Server installation media at the time of writing does not include the debootstrap tool, so we’ll have to install it.

First, update the package lists:

root@ubuntu-server:/# apt update

Now, install the debootstrap tool:

root@ubuntu-server:/# apt install debootstrap

It’s time to begin bootstrapping our base system. You’ll need the short codename of the Ubuntu release you’d like to install. For instance, if you were installing Ubuntu 24.04 “Noble Numbat,” you’d use the short name noble. (See #2 for more details.)
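If you’re ever unsure which codename a running Ubuntu system uses, it is recorded in /etc/os-release; for example (a quick aside, not an install step):

```shell
# Print the running system's release codename (e.g. "noble").
. /etc/os-release
echo "$VERSION_CODENAME"
```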

Bootstrap the new system with the debootstrap tool like so:

root@ubuntu-server:/# debootstrap noble /mnt

Change root to the target system

The remaining installation steps will need to be performed from a shell on the target system we just bootstrapped. To do this, we’ll need to pass some pseudo-filesystems through from our live installation environment and execute the chroot command to “change root” into the target that lives on our 9p filesystem.

First, let’s pass over the live environment’s pseudo-filesystems as bind mounts:

root@ubuntu-server:/# mount -o bind /proc /mnt/proc
root@ubuntu-server:/# mount -o bind /dev /mnt/dev
root@ubuntu-server:/# mount -o bind /dev/pts /mnt/dev/pts
root@ubuntu-server:/# mount -o bind /sys /mnt/sys

Now change root to the 9p filesystem:

root@ubuntu-server:/# chroot /mnt /usr/bin/bash

Installing a kernel

The earlier debootstrap command line installed most of the components of a working system, but not quite all of them. The next thing you’ll need to do is install a Linux kernel. Later on you’ll be passing this kernel and its associated initial RAM disk image as command-line flags to QEMU as -kernel and -initrd, respectively.

We’re going to pick the linux-virtual package here for a smaller footprint, since it’s a trimmed-down kernel flavour meant for virtual machines:

root@ubuntu-server:/# apt install linux-virtual

Enable 9p kernel modules

To allow the installed system to boot properly, you’ll need to add 9p kernel modules to the initial RAM disk image. The following lines need to be appended to the end of /etc/initramfs-tools/modules:

9p
9pnet
9pnet_virtio

You can add them with an editor like vi or with shell I/O redirects like so:

root@ubuntu-server:/# cd /etc/initramfs-tools
root@ubuntu-server:/etc/initramfs-tools# echo 9p >> modules
root@ubuntu-server:/etc/initramfs-tools# echo 9pnet >> modules
root@ubuntu-server:/etc/initramfs-tools# echo 9pnet_virtio >> modules
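Equivalently, a single printf can append all three lines at once (same effect as the echo commands above):

```shell
# Append the 9p module names, one per line, to the modules file.
printf '%s\n' 9p 9pnet 9pnet_virtio >> /etc/initramfs-tools/modules
```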

The lines added to /etc/initramfs-tools/modules tell Ubuntu’s scripts to include the 9p, 9pnet, and 9pnet_virtio kernel modules in the initial RAM disk image. With that done, update the initial RAM filesystem:

root@ubuntu-server:/# update-initramfs -u

Enable DHCP on boot

To get internet access when your VM boots, you’ll need to configure a network interface. The VM can automatically get an IP address and a gateway with DHCP enabled on its virtual network interface (ens2 on my VM). Ubuntu has this terrible thing called netplan in its base installation that can be used to configure network interfaces. Since it’s already here, we might as well use it.

Get the name of the virtual Ethernet interface with the ip command:

root@ubuntu-server:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    altname enp0s2
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic ens2
       valid_lft 85077sec preferred_lft 85077sec
    inet6 fec0::5054:ff:fe12:3456/64 scope site dynamic mngtmpaddr noprefixroute
       valid_lft 86078sec preferred_lft 14078sec
    inet6 fe80::5054:ff:fe12:3456/64 scope link
       valid_lft forever preferred_lft forever

Your network interface should be named something like ens2. With that known, tell the netplan command to enable DHCPv4 on the interface like so:

root@ubuntu-server:/# netplan set --origin-hint ens2 ethernets.ens2.dhcp4=true

This will create a YAML file with the .yaml extension in /etc/netplan that has ens2 somewhere in the name. Hints are merely suggestions, after all.
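For reference, the generated file should contain roughly the following (a sketch; the exact filename and key ordering may differ on your system):

```yaml
# /etc/netplan/ens2.yaml (approximate name and contents)
network:
  version: 2
  ethernets:
    ens2:
      dhcp4: true
```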

Set root password

There’s one last thing you need to do before you can enjoy your newly-installed VM: set a root password! You can do it by invoking the passwd command with no arguments:

root@ubuntu-server:/# passwd
New password:
Retype new password:
passwd: password updated successfully

With the password set, you may now exit the changed-root shell and poweroff the installer VM.

Enjoying your VM

You can now boot your new VM with the following command:

qemu-system-$(uname -m) \
    -cpu max \
    -enable-kvm \
    -smp $(nproc) \
    -nodefaults \
    -no-user-config \
    -nographic \
    -chardev stdio,id=virtcons0 \
    -device virtio-serial-pci \
    -device virtconsole,chardev=virtcons0 \
    -m 8G \
    -net user,hostfwd=tcp::2222-:22 \
    -net nic \
    -device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=fs0 \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -kernel ubuntu-9p/boot/$(<ubuntu-9p/boot/vmlinuz) \
    -append "earlyprintk=ttyS0 root=fs0 rw rootfstype=9p rootflags=trans=virtio,version=9p2000.L,msize=5000000,cache=mmap,posixacl console=ttyS0" \
    -initrd ubuntu-9p/boot/$(<ubuntu-9p/boot/initrd.img) \
    -fsdev local,security_model=mapped,id=fsdev-fs0,multidevs=remap,path=ubuntu-9p
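The `$(<file)` bits may look odd. With security_model=mapped, symlinks created by the guest (like /boot/vmlinuz) are stored on the host as plain files whose contents are the link target, and `$(<file)` is bash shorthand for `$(cat file)`, so it expands to the real kernel filename. A toy illustration with a hypothetical kernel version, run anywhere on the host:

```shell
# Simulate how a mapped guest symlink appears on the host: a regular
# file whose contents name the link target.
mkdir -p demo/boot
printf '%s' 'vmlinuz-6.8.0-41-generic' > demo/boot/vmlinuz
# $(<file) expands to the file's contents, yielding the real kernel path:
echo "demo/boot/$(<demo/boot/vmlinuz)"
# prints: demo/boot/vmlinuz-6.8.0-41-generic
rm -r demo
```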

After a little wait, you should be greeted with a text-mode getty login prompt. Log into the root account with the password you set earlier.

Logging in with SSH

If you would like to log in with SSH, you’ll need to install the OpenSSH server software and configure it. Install it like so:

root@ubuntu-server:~# apt install openssh-server

The Ubuntu install scripts for the OpenSSH server package should enable the daemon automatically, but you won’t be able to get in yet: by default, sshd only permits root logins with a key, not a password. You’ll need to copy a public key over from your host so you can log in. If you don’t already have a key, you can make one with ssh-keygen.

Once you have a key that you’d like to use, make a place to put it on the guest VM:

root@ubuntu-server:~# mkdir .ssh
root@ubuntu-server:~# chmod 700 .ssh
root@ubuntu-server:~# touch .ssh/authorized_keys
root@ubuntu-server:~# chmod 600 .ssh/authorized_keys

In another terminal on your host machine, add the public key you’d like to use to the end of the authorized_keys file like so:

targetdisk@vm-host:~$ cat ~/.ssh/id_ed25519.pub >> ubuntu-9p/root/.ssh/authorized_keys

Now you can log in to the VM with the following command:

ssh -p 2222 root@127.0.0.1

Conclusion

That’s it! You now have a working minimal Ubuntu server installation that you can use and abuse from within and without, thanks to the Plan 9 filesystem! You’ll probably want to save the long QEMU command to a shell script so that you can quickly boot up your VM later.

SEE ALSO

  1. Why did Plan 9’s creators give up on Plan 9? via 9front
  2. Ubuntu Version History via Wikipedia