Benito has been designed from the ground up to support multiple virtualization platforms. The interface for plugging a new virtualization platform into Benito is described here. It is useful to refer to BenitoPipeline, as this document will refer to various points in the pipeline where your scripts must be called.

== Networking ==

All networking internal to Benito is done using [http://vde.sourceforge.net/ VDE] (Virtual Distributed Ethernet). If your virtualization platform natively supports VDE, your life is relatively easy. If not, plugging in should still be possible.

Networking configuration can be found in the following places:

 * TopDL: IP address, netmask, MAC address, VDE switch/port
 * {{{/var/benito/config/route/$HOSTNAME}}}: routes

Each interface on the experimental network will be assigned a VDE switch and port number. If your platform supports VDE natively, then you're done!

For platforms that do not have VDE support, TAP support is sufficient. {{{vde_plug2tap}}} acts as an adapter between a VDE switch port and a TAP interface. VDE also ships with {{{vde_tunctl}}}, which can create TAP interfaces on the fly for other applications to use. The following example illustrates how one might do this with QEMU. QEMU has native VDE support, but we'll assume for a moment that we can't use it.

{{{
#!/bin/sh
# Arguments: <VDE switch socket> <switch port>
SWITCH_SOCKET=$1
SWITCH_PORT=$2

# Create a TAP interface and splice it into the assigned VDE switch port.
vde_tunctl -t tap0
vde_plug2tap -d -s $SWITCH_SOCKET -p $SWITCH_PORT tap0

# Hand the TAP interface to QEMU; script=no keeps QEMU from reconfiguring it.
qemu -net nic -net tap,ifname=tap0,script=no,downscript=no fs.img
}}}

If you don't even have TAP support, get creative. Chances are that if you can talk to a file descriptor pair, {{{vde_plug}}} will work for you.

=== Interface Attributes ===

The information you care about is carried as attributes on each interface:

 * {{{ip4_address}}}
 * {{{ip4_netmask}}}
 * {{{benito:mac_address}}}
 * {{{benito:vde_switch}}} (full path to the switch socket)
 * {{{benito:vde_port}}} (switch port)

=== Routes ===

Route information is in {{{/var/benito/config/route/$HOSTNAME}}}. Benito provides a script for automatically setting up routes:

{{{
/var/benito/launch/routes.py /var/benito/config [$HOSTNAME]
}}}

The hostname parameter is optional and will be auto-detected if it is not provided.

=== Control Net ===

FOREWORD: Control net bridging is currently a hack. It makes a lot of assumptions about the underlying platform (i.e., QEMU), and many aspects of it are baked into the code. Best of luck.

You'll probably want to bridge onto the control net if that makes sense for your platform. If you see yourself wanting to SSH to a VM running on your platform, this is for you. Due to DETER's control net separation, you must explicitly request IPs/MACs from boss. The setup script {{{setup/15_control_net.py}}} handles these requisitions. You'll need to hack this file a bit. Sorry!

On pnodes that host vnodes, the control net interface is bridged with the TAP control net interfaces of their children. The bridge is brought up by {{{launch/qemu/control_bridge.py}}}. This script is currently tailored to QEMU; to support new VM infrastructure, it will need to be hacked on.

== File Systems ==

Guest OSes expect to be able to read:

 * {{{/users}}}
 * {{{/proj}}}
 * {{{/groups}}}
 * {{{/share}}}

If your system can directly access parts of the host file system, then you're done! For an example of this, see view-os lightweight processes. If not, you should probably use the npfs implementation currently used by QEMU. The npfs server is launched from {{{launch/qemu}}} just before inner nodes are booted. This will likely need to move to a more general location.
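If you go the npfs route, guests mount these trees over 9P. Before wiring anything into fstab, it can help to verify from inside a guest that a share mounts by hand. This is a rough sketch, not part of Benito; the module names and the host address {{{192.168.1.1}}} are assumptions, and the transport and options may need adjusting for your kernel and setup:

{{{
#!/bin/sh
# Hypothetical sanity check, run inside a guest.
HOST=192.168.1.1    # address the 9P (npfs) server listens on -- an assumption

# Load the 9P client and its TCP transport (skip if built into the kernel).
modprobe 9p
modprobe 9pnet_tcp

# Try a one-off mount of /users before committing it to fstab.
mkdir -p /users
mount -t 9p -o trans=tcp,aname=/users $HOST /users
ls /users
}}}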
Linux 9P support is excellent starting with the 2.6.20 kernel series. Here's an example fstab entry, assuming the host machine's IP is {{{192.168.1.1}}}:

{{{
192.168.1.1  /users  9p  _netdev,aname=/users  0  0
}}}

For more bread crumbs, look in {{{setup/qemu/40_file_systems.py}}} and {{{setup/qemu/50_root_fs.py}}} (look for 'fstab' in the latter).
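The other exported trees presumably follow the same pattern as {{{/users}}}; a sketch of a full set of entries under that assumption (again treating {{{192.168.1.1}}} as the host's address):

{{{
192.168.1.1  /users   9p  _netdev,aname=/users   0  0
192.168.1.1  /proj    9p  _netdev,aname=/proj    0  0
192.168.1.1  /groups  9p  _netdev,aname=/groups  0  0
192.168.1.1  /share   9p  _netdev,aname=/share   0  0
}}}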