[[TOC]]

= Reference Guide =

This document describes the details of the commands and data structures that make up the containers system. The [UserGuide User Guide/Tutorial] provides useful context about the workflows and goals of the system that inform these technical details.

== containerize.py ==

The {{{containerize.py}}} command creates a DETER experiment made up of containers. The program is available from {{{/share/containers/containerize.py}}} on {{{users.isi.deterlab.net}}}. A sample invocation is:

{{{
$ /share/containers/containerize.py MyProject MyExperiment ~/mytopology.tcl
}}}

It will create a new experiment in {{{MyProject}}} called {{{MyExperiment}}} containing the experiment topology in {{{mytopology.tcl}}}. All the topology creation commands supported by DETER are supported by the containerization system, but [https://trac.deterlab.net/wiki/Tutorial/Advanced emulab/DETER program agents] are not. [https://trac.deterlab.net/wiki/Tutorial/CreatingExperiments#Startingyourapplicationautomatically Emulab/DETER start commands] '''are''' supported.

Containers will create an experiment in a group if the project parameter is of the form ''project''/''group''. For example, to start an experiment in the {{{testing}}} group of the {{{DETER}}} project, specify the first parameter as {{{DETER/testing}}}.

Either an [https://trac.deterlab.net/wiki/nscommands ns2 file] or a [http://fedd.deterlab.net/wiki/TopDl topdl] description is supported. Ns2 descriptions must end with {{{.tcl}}} or {{{.ns}}}; other files are assumed to be topdl descriptions.

By default, the {{{containerize.py}}} program will partition the topology into openvz containers, packed 10 containers per physical computer. If the topology is already partitioned - that is, at least one element has a {{{containers:partition}}} attribute - {{{containerize.py}}} will not partition it. The {{{--force-partition}}} flag causes {{{containerize.py}}} to partition the experiment regardless of the presence of {{{containers:partition}}} attributes.

If container types have been assigned to nodes using the {{{containers:node_type}}} attribute, {{{containerize.py}}} will respect them. Valid container types for the {{{containers:node_type}}} attribute or the {{{--default-container}}} parameter are:

|| __Parameter__ || __Container__ ||
|| {{{embedded_pnode}}} || Physical Node ||
|| {{{qemu}}} || Qemu VM ||
|| {{{openvz}}} || Openvz Container ||
|| {{{process}}} || ViewOS process ||
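As an illustration, the ns2 sketch below tags nodes with {{{containers:node_type}}} attributes. It is a sketch, not canonical usage: it assumes a {{{tb-add-node-attribute}}} command for attaching attributes to nodes, and the node names, link parameters, and start command are purely illustrative. Consult the [UserGuide User Guide] for the exact mechanism your topology should use.

{{{
source tb_compat.tcl
set ns [new Simulator]

# Two nodes connected by a 100Mb/10ms link.
set n0 [$ns node]
set n1 [$ns node]
set link0 [$ns duplex-link $n0 $n1 100Mb 10ms DropTail]

# Assign container types: n0 becomes a qemu VM, n1 an openvz container.
# (tb-add-node-attribute is assumed here; see the User Guide.)
tb-add-node-attribute $n0 containers:node_type qemu
tb-add-node-attribute $n1 containers:node_type openvz

# Emulab/DETER start commands are supported.
tb-set-node-startcmd $n0 "/bin/echo n0 started"

$ns rtproto Static
$ns run
}}}

Nodes left untagged fall into the container type given by {{{--default-container}}}.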
The {{{containerize.py}}} command takes several parameters that can change its behavior:

 {{{--default-container}}}=''kind'':: Containerize nodes without a container type into ''kind''. If no nodes have been assigned containers, this puts all of them into ''kind'' containers.
 {{{--force-partition}}}:: Partition the experiment whether or not it has been partitioned already.
 {{{--packing=}}}''int'':: Attempt to put ''int'' containers into each physical node. The default {{{--packing}}} is 10.
 {{{--config=}}}''filename'':: Read configuration variables from ''filename''. The configuration values are discussed [ReferenceGuide#SiteConfigurationFile below].
 {{{--pnode-types=}}}''type1[,type2...]'':: Override the site configuration and request nodes of ''type1'' (or ''type2'', etc.) as host nodes.
 {{{--end-node-shaping}}}:: Attempt to do end node traffic shaping even in containers connected by VDE switches. This works with qemu nodes, but not process nodes. Topologies that include both openvz nodes and qemu nodes that shape traffic should use this.
 {{{--vde-switch-shaping}}}:: Do traffic shaping in VDE switches. This is usually the default, but the default is controlled in [ReferenceGuide#SiteConfigurationFile the site configuration].
 {{{--openvz-diskspace}}}:: Set the default openvz disk space size. The suffixes G and M stand for gigabytes and megabytes.
 {{{--openvz-template}}}:: Set the default openvz template. Templates are described in the [UsersGuide#SettingOpenvzParameters users guide].
 {{{--image}}}:: Construct a visualization of the virtual topology and leave it in the experiment directories (the default).
 {{{--no-image}}}:: Do not construct a visualization of the virtual topology.
 {{{--debug}}}:: Print additional diagnostics and leave failed DETER experiments on the testbed.
 {{{--keep-tmp}}}:: Do not remove temporary files - for debugging only.

This invocation:

{{{
$ ./containerize.py --packing 25 --default-container=qemu --force-partition DeterTest faber-packem ~/experiment.xml
}}}

takes the topology in {{{~/experiment.xml}}} (which must be topdl), packs it into qemu containers at 25 containers per physical node, and creates an experiment called !DeterTest/faber-packem that can be swapped in. If {{{experiment.xml}}} were already partitioned, it will be re-partitioned. If some nodes in that topology were already assigned to openvz containers, those nodes will still be in openvz containers.

The result of a successful {{{containerize.py}}} run is a DETER experiment that can be swapped in. More detailed examples are available in [UsersGuide the tutorial].

== Site Configuration File ==

The site configuration file is an attribute-value pair file, parsed by a [file:///usr/local/share/doc/python2.7/library/configparser.html python ConfigParser], that sets overall container parameters. Many of these have legacy internal names. The default site configuration is in {{{/share/containers/site.conf}}} on {{{users.isi.deterlab.net}}}. Acceptable values (and their DETER defaults) are:

 backend_server:: The IRC server used as a backend coordination service for grandstand. Will be replaced by MAGI. Default: {{{boss.isi.deterlab.net:6667}}}
 grandstand_port:: Port on which third party applications can contact grandstand. Will be replaced by MAGI. Default: {{{4919}}}
 maverick_url:: Default image used by qemu containers. Default: {{{http://scratch/benito/pangolinbz.img.bz2}}}
 url_base:: Base URL of the DETER web interface on which users can see experiments. Default: {{{http://www.isi.deterlab.net/}}}
 qemu_host_hw:: Hardware used by containers. Default: {{{pc2133,bpc2133,MicroCloud}}}
 xmlrpc_server:: Host and port from which to request experiment creation. Default: {{{boss.isi.deterlab.net:3069}}}
 qemu_host_os:: OSID to request for qemu container nodes. Default: {{{Ubuntu1204-64-STD}}}
 exec_root:: Root of the directory tree holding containers software and libraries. Developers often change this. Default: {{{/share/containers}}}
 openvz_host_os:: OSID to request for openvz nodes. Default: {{{CentOS6-64-openvz}}}
 openvz_guest_url:: Location to load the openvz template from. Default: {{{%(exec_root)s/images/ubuntu-10.04-x86.tar.gz}}}
 switch_shaping:: True if switched containers (see below) should do traffic shaping in the VDE switch that connects them. Default: {{{true}}}
 switched_containers:: A list of the containers that are networked with VDE switches. Default: {{{qemu,process}}}
 openvz_template_dir:: The directory that stores openvz template files. Default: {{{%(exec_root)s/images/}}} (that is, the {{{images}}} directory under the {{{exec_root}}} defined in the site config file).
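Put together, a site configuration using these defaults might look like the excerpt below. This is a sketch: the section header {{{[config]}}} is an assumption (ConfigParser files require some section header, but the one used by {{{/share/containers/site.conf}}} may differ). The {{{%(exec_root)s}}} references are standard ConfigParser interpolation.

{{{
# Hypothetical site configuration; the [config] section name is assumed.
[config]
backend_server: boss.isi.deterlab.net:6667
grandstand_port: 4919
maverick_url: http://scratch/benito/pangolinbz.img.bz2
url_base: http://www.isi.deterlab.net/
qemu_host_hw: pc2133,bpc2133,MicroCloud
xmlrpc_server: boss.isi.deterlab.net:3069
qemu_host_os: Ubuntu1204-64-STD
exec_root: /share/containers
openvz_host_os: CentOS6-64-openvz
# %(exec_root)s interpolates to /share/containers
openvz_guest_url: %(exec_root)s/images/ubuntu-10.04-x86.tar.gz
switch_shaping: true
switched_containers: qemu,process
openvz_template_dir: %(exec_root)s/images/
}}}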
== Container Notes ==

Different container types have some quirks. This section lists the limitations of each container.

=== Openvz ===

Openvz containers use a custom OS image to support their virtualization. They cannot share physical resources with other container types: a physical node holding openvz containers holds only openvz containers. They are interconnected with one another through bridges and kernel virtual networking rather than through VDE switches (as qemu and process containers are). As a result, openvz containers provide network delays using per-container endpoint traffic shaping. This means that they cannot correctly interconnect with traffic shaped qemu nodes.

=== Interconnections: VDE switches and local networking ===

The various containers are interconnected using either local kernel virtual networking or [http://wiki.virtualsquare.org/wiki/index.php/VDE VDE switches]. Kernel networking is lower overhead because it does not require process context switching, but VDE switches are a more general solution.

Network behavior changes - loss, delay, rate limits - are introduced into a network of containers using one of two mechanisms: inserting elements into a VDE switch topology, or end node traffic shaping. Inserting an element into the VDE switch topology allows the system to modify the behavior of all packets passing through it. Generally this means all packets to or from a host, as the container system inserts these elements in the path between the node and the switch. This figure shows 3 containers sharing a virtual LAN on a VDE switch with no traffic shaping:

Openvz containers are interconnected directly through kernel networking to support their high efficiency, and because talking to any other kind of container implies leaving the physical machine. They induce network delays through [http://www.linuxfoundation.org/collaborate/workgroups/networking/netem end node traffic shaping]. Qemu nodes can support either end node traffic shaping or shaping in the VDE switches that connect them.
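For reference, end node traffic shaping of this kind is implemented with the Linux {{{netem}}} queueing discipline linked above. The sketch below only illustrates the mechanism - the containers system applies equivalent settings itself, and the interface name {{{eth0}}} and shaping parameters are illustrative.

{{{
# Add 10ms of delay and 1% loss to packets leaving eth0.
tc qdisc add dev eth0 root netem delay 10ms loss 1%

# Inspect, then remove, the shaping.
tc qdisc show dev eth0
tc qdisc del dev eth0 root
}}}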