About

The guide on virtualizing the Raspberry Pi uses Versatile PB, which is limited in the amount of resources that can be allocated to the virtual machine. In many ways this limits the usefulness of emulating an ARM platform, since with such restrictive resources not many tasks can be carried out. For instance, compiling some packages requires amounts of RAM that exceed the Versatile PB limit of $256MiB$. Furthermore, Versatile PB only supports a single CPU, such that services designed to leverage multiprocessing will not run properly or will run very slowly.

In any case, Versatile PB is useful for low-level tasks, but for heavier loads it is much preferable to just emulate the architecture and processor without being restricted to the hardware provided by the Raspberry Pi. Debian is used as the distribution to document this scenario, but most of the steps should apply similarly to any distribution that provides packages for ARM; likewise, AARCH64 follows the same workflow, except that the architecture and CPU will differ when setting up the virtual machine.

Workflow

An overview of the tasks that must be carried out in order to set up a virtual machine emulating ARM on x86 or x86_64:

  1. setup the virtual machine with all parameters configured and all the necessary hardware attached,
  2. obtain an ARM kernel and initrd in order to boot and install the distribution,
  3. after the install is complete, extract the installed kernel and initrd as well as the kernel parameters from the device that the distribution was installed to,
  4. reconfigure the virtual machine to boot the extracted kernel and initrd and to pass the extracted kernel parameters.

Storage

For this guide, LVM is used to set up a logical volume to which the ARM distribution will be installed. The logical volume can optionally leverage the caching capabilities of LVM in order to speed up the virtual machine.

The process of creating the logical volume (as well as the physical volume, volume group, etc.) is not documented here. The guide assumes that a block storage device is available at /dev/vms/armhf.vm to which Linux will be installed.
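
For completeness, a minimal sketch of creating such a volume, assuming a spare disk at /dev/sdb (the disk and the volume size are placeholders to be adjusted to the actual environment):

pvcreate /dev/sdb
vgcreate vms /dev/sdb
lvcreate -L 20G -n armhf.vm vms

which makes the logical volume available at /dev/vms/armhf.vm.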

Obtaining a Kernel and Initrd

Debian provides several kernels for a minimal installation environment, available for download. Perhaps the easiest way to install Debian for this setup is to use the netboot images.

Simply download vmlinuz and initrd.gz from the netboot directory and place them at:

  • /var/lib/libvirt/images/armhf/installer-vmlinuz
  • /var/lib/libvirt/images/armhf/installer-initrd

such that they can be referenced from within the libvirt domain definition.
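
For example, the images can be fetched directly from a Debian mirror; the exact URL may change between releases, the path below being an assumption based on the current mirror layout:

wget -O /var/lib/libvirt/images/armhf/installer-vmlinuz http://deb.debian.org/debian/dists/stable/main/installer-armhf/current/images/netboot/vmlinuz
wget -O /var/lib/libvirt/images/armhf/installer-initrd http://deb.debian.org/debian/dists/stable/main/installer-armhf/current/images/netboot/initrd.gz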

Virtual Machine Definition

Create or import a domain definition with the following contents:

<domain type='qemu' id='67'>
  <name>armhf.vm</name>
  <uuid>50326617-be76-4d22-ad83-642f777d27b4</uuid>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='armv7l' machine='virt-2.11'>hvm</type>
    <kernel>/var/lib/libvirt/images/armhf/installer-vmlinuz</kernel>
    <initrd>/var/lib/libvirt/images/armhf/installer-initrd</initrd>
    <cmdline>console=ttyAMA0 console=ttyS0</cmdline>
    <boot dev='hd'/>
  </os>
  <features>
    <gic version='2'/>
  </features>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>cortex-a15</model>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-arm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vms/armhf.vm' index='1'/>
      <backingStore/>
      <target dev='hda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x01' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:53:71:1e'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='tcp'>
      <source mode='bind' host='0.0.0.0' service='2445' tls='no'/>
      <protocol type='telnet'/>
      <target type='system-serial' port='0'>
        <model name='pl011'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='tcp'>
      <source mode='bind' host='0.0.0.0' service='2445' tls='no'/>
      <protocol type='telnet'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <graphics type='vnc' port='6000' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='virtio' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </video>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+0</label>
    <imagelabel>+0:+0</imagelabel>
  </seclabel>
</domain>

with the following notes:

  • the domain is named armhf.vm, is defined to use $1GiB$ of RAM and $4$ vCPUs (the emulated CPU being a Cortex-A15),
  • the machine type is the standard virt machine pinned at version 2.11 (virt-2.11) - this is due to some distribution kernels not being built with LPAE, which leads to no PCI devices being discovered upon boot, hence the network card and storage device are not detected and the OS cannot be installed (see the closing notes),
  • as mentioned previously the storage device will be an LVM block device at /dev/vms/armhf.vm,
  • the virtual machine will listen on port 2445 for a telnet connection exposing the Linux console to the user. This is done by passing the kernel parameters console=ttyAMA0 console=ttyS0 and defining a serial console within the domain definition,
  • a network device is attached to br0 where br0 is a bridge for all virtual machines,
  • all devices attached use the virtio drivers meaning that the installed distribution should have support for virtio (Debian does).

With the previous setup in place, the virtual machine can be booted and the installer should start. Either connect to the VNC port or telnet to the serial console in order to proceed with the installation.
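
For example, assuming the domain definition was saved to a file named armhf.vm.xml (the file name is arbitrary):

virsh define armhf.vm.xml
virsh start armhf.vm
telnet localhost 2445

where localhost should be replaced with the address of the host when connecting remotely.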

Extracting the Installed Kernel and Initrd

With the installation complete, the installed kernel and initrd can be extracted in order to boot the virtual machine. Ensure that the virtual machine is turned off and then, using kpartx, create device mappings for each partition on the block device. Issue:

kpartx -av /dev/vms/armhf.vm

in order to map the partitions to block devices.

Then, mount the boot partition to a folder on the host in order to extract the kernel and initrd images (kpartx prints the names of the created mappings; the exact partition device name may differ):

mkdir -p /mnt/extract
mount /dev/vms/armhf.vm0p1 /mnt/extract

and copy the files to /var/lib/libvirt/images/armhf:

cp /mnt/extract/vmlinuz-5.10.0-8-armmp-lpae /var/lib/libvirt/images/armhf/vmlinuz
cp /mnt/extract/initrd.img-5.10.0-8-armmp-lpae /var/lib/libvirt/images/armhf/initrd

Note that the boot partition under most Linux distributions contains a number of symlinks pointing to the actual files, such that some care must be taken to copy the actual files and not the symlinks.
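
Once the files are copied, unmount the partition and remove the device mappings (assuming the mount point used above):

umount /mnt/extract
kpartx -dv /dev/vms/armhf.vm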

With the new kernel and initrd images extracted, modify the virtual machine domain definition in order to reference the new files. In this example, change:

    <kernel>/var/lib/libvirt/images/armhf/installer-vmlinuz</kernel>
    <initrd>/var/lib/libvirt/images/armhf/installer-initrd</initrd>

to:

    <kernel>/var/lib/libvirt/images/armhf/vmlinuz</kernel>
    <initrd>/var/lib/libvirt/images/armhf/initrd</initrd>

and save the domain definition.
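
If the domain has already been defined, the definition can be edited in place; for example:

virsh edit armhf.vm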

Additionally, the kernel parameters should be specified within the domain definition by updating the value of the cmdline tag:

    <cmdline>root=/dev/vda1 console=ttyAMA0 console=ttyS0</cmdline>

where:

  • root=/dev/vda1 should point to the partition holding the installed root filesystem (the virtio disk appears as /dev/vda within the guest; adjust to the actual partition layout),
  • console=ttyAMA0 and console=ttyS0 are now optional.
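
If unsure which parameters the installed system expects, they can be read from within the running guest before shutting it down:

cat /proc/cmdline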

Closing Notes

Even though the installation is rather straightforward, the extra difficulty is due to the machine type being defined as virt (which is the default when using virt-install) instead of virt-2.11 (at the top of the domain definition):

    <type arch='armv7l' machine='virt'>hvm</type>

which might result in the following errors when booting the kernel:

pci-host-generic 4010000000.pcie: can't claim ECAM area [mem 0x10000000-0x1fffffff]: address conflict with pcie@10000000 [mem 0x10000000-0x3efeffff]
pci-host-generic: probe of 4010000000.pcie failed with error -16

due to which no PCI devices will be detected, making an install difficult since neither a network card nor a storage device will be available to the virtual machine.

The QEMU workaround is to pass highmem=off to the virt machine; with libvirt, the virt-2.11 machine type can be used instead:

    <type arch='armv7l' machine='virt-2.11'>hvm</type>
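
For reference, a minimal sketch of the equivalent bare QEMU invocation with the highmem=off workaround applied (the paths and sizes match the domain definition above):

qemu-system-arm -M virt,highmem=off -cpu cortex-a15 -smp 4 -m 1024 \
  -kernel /var/lib/libvirt/images/armhf/vmlinuz \
  -initrd /var/lib/libvirt/images/armhf/initrd \
  -append "root=/dev/vda1 console=ttyAMA0" \
  -drive file=/dev/vms/armhf.vm,if=virtio,format=raw \
  -nographic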
