Instead of having several physical boxes to host different services, a more convenient approach is to host services within virtual machines. The best candidates are probably services with many dependencies, which would otherwise require a considerable amount of software to be installed inside a root-jail.
Some ideas for virtual boxes include:
One example is the virtual machines created for VIBE. Because the OpenSim daemon was not designed with security in mind, the only moderately safe solution we found was to protect the host system by running OpenSim in a virtual machine. This also illustrates the earlier point about complicated applications with many dependencies: OpenSim depends on mono (along with its many libraries) and on mysql.
This tutorial will help you set up your own virtual machine infrastructure. Note, however, that on modern Linux systems virsh
is a complete solution for managing virtual machines without the need to create and manage them manually as described in this tutorial.
Notable consumer-oriented virtualization software includes:
Out of all of them, qemu
seems like the best option because it can run headless (without a GUI) as a background process. It also has very few external dependencies and is available for many architectures (including some obscure ones).
The first step is to check whether the processor of the prospective host machine supports hardware virtualization. This check can be accomplished by issuing:
cat /proc/cpuinfo
and by reading the CPU flags.
Depending on the CPU vendor, the following flags have to be enabled for hardware virtualization:
Intel | AMD |
---|---|
vmx | svm |
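A quick way to perform this check is to grep the flags line directly. The command below is demonstrated against a sample flags line so the expected match is visible; on a real host, point it at /proc/cpuinfo instead of the echo:

```shell
# Look for the hardware virtualization flags (vmx for Intel, svm for AMD).
# On a real host: grep -o -E 'vmx|svm' /proc/cpuinfo | sort -u
# Shown here against a sample Intel flags line:
echo "flags : fpu vme de pse tsc msr vmx sse sse2" | grep -o -E 'vmx|svm'
```

If the command prints nothing, the CPU (or the BIOS configuration) does not expose hardware virtualization.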
If the flag matching the CPU vendor is present in the flags line of cpuinfo
then the hardware-accelerated version of qemu
can be installed with:
aptitude install qemu-kvm
otherwise, the software version, qemu
can be installed with:
aptitude install qemu
We will assume for the rest of the guide that kvm
is installed instead of qemu
. For systems where hardware virtualization is not available, kvm
can be substituted by qemu
in any of the following commands.
The following command creates a disk image, which we name nano:
kvm-img create -f qcow2 nano.img 4G
The last parameter 4G
indicates the size of the disk image (in this case, 4 gigabytes).
A good security measure is to run qemu
or kvm
under different credentials (other than root
), so we create a new user called nano
:
groupadd nano
useradd -g nano -G nano -d /srv/nano nano
Then, we change the directory to /srv/nano
and download an ISO (the Debian net-install, in this case).
Now, the virtual machine can be launched in headless mode by issuing:
kvm -hda nano.img -cdrom debian-7.1.0-amd64-netinst.iso -boot d -m 512 -cpu kvm64 -name nano -enable-kvm -runas nano -vnc :1
The machine will run in headless mode with a VNC
port open. Since display :1
was selected in the command above, kvm
will accept remote VNC
connections on port 5901
. Each display number (:1
, :2
, and so on) is added to the base port 5900
.
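The display-to-port mapping is a simple addition, which can be sanity-checked directly in the shell:

```shell
# VNC TCP port = 5900 + display number; display :1 therefore listens on 5901.
display=1
echo $((5900 + display))
```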
By connecting over VNC
, the installation can be performed as on a regular machine.
Most virtual machine guides suggest that the best networking option is a bridge that snaps the virtual machine interface together with the interface of the host machine. However, this is undesirable here, since we want to isolate the virtual machine from the host system as much as possible and eventually perform some packet mangling on the different interfaces.
Thus, the following command creates a tap
device named nan0
and binds the virtual machine to that interface. From the virtual machine's point of view, the device will appear as a standard Ethernet device:
kvm -hda nano.img -m 512 -cpu kvm64 -name nano -enable-kvm -net nic,vlan=0 -net tap,vlan=0,ifname=nan0,script=nan-up.sh -pidfile nano.pid -runas nano -daemonize -vnc :1
The command uses a script called nan-up.sh
that runs on the host machine and is responsible for automatically bringing the nan0
interface up. It contains just these lines:
#!/bin/bash
/sbin/ifconfig nan0 192.168.5.1
The next step is to configure the Ethernet device within the virtual machine itself by connecting via VNC
again. If the guest
system within the virtual machine is Debian, the /etc/network/interfaces
file should be edited to bring the interface up with a static address. Since the class C network selected for this setup is 192.168.5.0/24
, the /etc/network/interfaces
file should be configured as follows (provided that the tap
interface appears in the virtual machine as eth0
):
auto eth0
iface eth0 inet static
    address 192.168.5.2
    broadcast 192.168.5.255
    netmask 255.255.255.0
    gateway 192.168.5.1
    dns-nameservers 192.168.5.1
This sets the virtual machine to use the host system as its router. Provided everything is set up correctly, at this point ICMP
requests to and from the virtual machine should pass over the tap
device.
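For the virtual machine to reach networks beyond the host itself, the host must also forward and masquerade the guest's traffic. The lines below are a sketch of one way to do that and could be appended to nan-up.sh; the uplink interface name eth0 and the iptables masquerading approach are assumptions, not part of the original setup:

```shell
# Sketch (assumes eth0 is the host's uplink and nan0 is the tap device).
# Requires root; adjust interface names to your host.
echo 1 > /proc/sys/net/ipv4/ip_forward                  # let the kernel route packets
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE    # NAT the guest's outbound traffic
iptables -A FORWARD -i nan0 -o eth0 -j ACCEPT           # guest -> outside
iptables -A FORWARD -i eth0 -o nan0 -m state \
    --state ESTABLISHED,RELATED -j ACCEPT               # replies back to the guest
```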