{{{ #!html
The Neo-Containers system uses cloud-container technology to abstract and generalize container creation and initialization. At the DETER level, an experiment has a number of physical nodes ("pnodes") which serve as hosts for the virtualized containers. Outside of the experiment there are two servers which configure the containers; both run on chef.isi.deterlab.net. The Chef server serves "code as configuration" and stores static (for the most part) configuration information. The config_server is a RESTful API which loads and then serves experiment-specific configuration information. The code that the Chef server runs on the containers usually pulls the specific configuration information from the config_server.
The config_server code can be found on DETER's Github account in the config_server repository. The Chef recipes used are there as well, in the deter-chef repository. (The deter-chef repository is currently private to DETER development and ops.)
There are two approaches to configuring virtual machines in Neo-Containers: leveraging the existing DETER Containers system or not. These can be mixed and matched; both can be used to describe the final node and network configuration of the containers in the experiment. If you use the existing Containers system, please refer to the DETER Containers documentation for details. If you want to add "non-containerized" nodes to your experiment, you will have to write a simple JSON-formatted configuration file which describes the virtual machines you want to add (IP addresses, hostnames, OS, etc.).
A nodes.json file must be created that will describe the containers added to the experiment. This file is only used to define the containers for this experiment. (The file need not be named nodes.json, but that is the name that will be used in this documentation.)
Each node must have the following fields defined in the nodes.json file: host (the pnode that will host the container), name (the container's hostname), image_url (an HTTP URL for the container image), image_os, image_type, image_name, and interfaces (a list of address/MAC pairs for the container's experiment interfaces). Standard images exist for Ubuntu 14.04 (at http://scratch/containers/deter_ub1404_64_vb.box) and Windows 7 (at http://scratch/containers/deter_win7.box).
The following is an example nodes.json file that creates one Ubuntu 14.04 container and one Windows container:
[ { "host": "leda", "name": "sarah", "image_url": "http://scratch/containers/deter_ub1404_64_vb.box", "image_os": "ubuntu 14.04 64", "image_type": "vagrant", "image_name": "deter/ub14", "interfaces": [ { "address": "10.1.1.101", "mac": "de:ad:be:ef:00:be" } ] }, { "host": "swan", "name": "helena", "image_url": "http://scratch/containers/deter_win7.box" "image_os": "windows", "image_type": "vagrant", "image_name": "deter/win7", "interfaces": [ { "address": "10.1.1.201", "mac": "de:ad:be:ef:00:af" } ] } ]
The Configuration Server needs to know the information in your new nodes.json file. There is a small script for this: /share/config_server/bin/initialize_containers.py. This script feeds the information from your nodes.json file to the Configuration Server. This must be done before you swap in your experiment, for the reasons given below. (Because the script needs to run before the experiment is swapped in, it is run on the users machine.)
The script asks DETER to allocate control network addresses for your new nodes. These addresses must exist before the containers themselves do, because the containers use the control network to request configuration information from the Configuration Server. And due to the way DETER works, the addresses must be allocated before swap-in, or they will not be properly associated with the containers' hostnames. The hostnames are how the Configuration Server talks to your containers, so if the script is not run, the configuration of the new nodes cannot happen. The script can be run multiple times without ill effects: the system is smart enough to request the control network addresses from DETER only once, and if the script is re-run, the (non-control-net-IP) information simply overwrites the existing information in the configuration database.
Below is an example of how to run the script. The <expid> and <projid> fields in the example refer to the experiment ID and the project ID. The experiment ID is defined by the user, and could be something like "neocont-test" or "netstriping". The project ID is the name of the project under which the experiment is run.
$ /share/config_server/bin/initialize_containers.py -p <projid> -e <expid> -f path/to/nodes.json

When run, you'll see output in the terminal. If successful, you will not see any errors.
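For example, for an experiment named neocont-test in a hypothetical project named MyProj, with the file in your home directory, the invocation would look like:

$ /share/config_server/bin/initialize_containers.py -p MyProj -e neocont-test -f ~/nodes.json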
This method of using Neo-Containers builds on the existing Containers system; it allows the use of more complex network topologies.
Create an experiment using the existing Containers system. An NS file and the /share/containers/containerize.py script are used to create the containerized experiment.
In your NS file for each container, specify image_os, image_type, image_name, and image_url via the tb-add-node-attribute syntax. Details on each attribute are given below.
image_os names the guest operating system, image_type is currently always vagrant, image_name is the Vagrant box name for the image, and image_url must point to a Vagrant image ("box") served over HTTP. Standard images exist for Ubuntu 14.04 (at http://scratch/containers/deter_ub1404_64_vb.box) and Windows 7 (at http://scratch/containers/deter_win7.box).
The following is an example NS file that creates one Windows container and one Ubuntu 14.04 container:
set r2d2 [$ns node]
tb-add-node-attribute $r2d2 containers:image_os windows
tb-add-node-attribute $r2d2 containers:image_type vagrant
tb-add-node-attribute $r2d2 containers:image_name deter/win7
tb-add-node-attribute $r2d2 containers:image_url http://scratch/containers/deter_win7.box

set c3po [$ns node]
tb-add-node-attribute $c3po containers:image_os ubuntu
tb-add-node-attribute $c3po containers:image_type vagrant
tb-add-node-attribute $c3po containers:image_name ubuntu/trusty64
tb-add-node-attribute $c3po containers:image_url http://scratch/containers/deter_ub1404_64_vb.box
Use the NS file to create a containerized experiment using the existing Containers scripts.
$ /share/containers/containerize.py
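The exact arguments are described in the DETER Containers documentation. As a sketch, assuming the script takes the project, experiment name, and NS file as positional arguments (the experiment name and NS file name here are only illustrative), an invocation would look something like:

$ /share/containers/containerize.py Deter neocont-test ~/neocont-test.ns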
Note: The experiment must currently be created in the Deter group as that's where the custom pnode disk images are. This will change.
Modify the NS file generated by containerize.py so that the pnode machines use the Neo-Containers pnode disk image and hardware type. This is done from the experiment's page in your browser, by editing the experiment's NS file.
After making these modifications, each pnode in the NS file should have these lines:
tb-set-node-os ${pnode(0000)} PNODE-BASE
tb-set-hardware ${pnode(0000)} MicroCloud
The final NS file will look something like this:
set ns [new Simulator]
source tb_compat.tcl

tb-make-soft-vtype container0 {dl380g3 pc2133 MicroCloud}

set pnode(0000) [$ns node]
tb-set-node-os ${pnode(0000)} PNODE-BASE
tb-set-hardware ${pnode(0000)} container0
tb-set-node-failure-action ${pnode(0000)} "nonfatal"

$ns rtproto Static
$ns run
On the experiment's webpage, swap in the experiment.
Populate the configuration database that runs on chef.isi.deterlab.net by running the load_containers_db.sh and load_config_db.sh database-population scripts. These should be run on a single physical node in the experiment; pnode-0000 is used in the example below.
The <expid> and <projid> fields in the following example are referring to the experiment ID and the project ID. The experiment ID is defined by the user, and could be something like "neocont-test" or "netstriping". For now, the project ID should always be "Deter".
$ ssh pnode-0000.<expid>.<projid>
$ cd <config_server-repo>/bin
$ ./load_config_db.sh
$ ./load_containers_db.sh -p <projid> -e <expid>
This step will be automated in the future.
The Chef system is used to bootstrap and configure the nodes. All the steps for this are enclosed in the bootstrap_node.sh script.
The script needs to know the node's role in the experiment. There are currently three roles: pnode, container, and win-container.
On all the pnodes which will be running containers:
$ ssh <pnode>.<expid>.<projid>
$ cd <config_server-repo>/bin
$ ./bootstrap_node.sh -r pnode
The pnodes only have to be bootstrapped once per experiment swap-in. Once a pnode is bootstrapped into Chef, chef-client needs to be run. The pnode role will spawn the containers and configure them, so once the chef-client command is run on a pnode, all containers on that pnode will be running and configured.
$ ssh <pnode>.<expid>.<projid>
$ cd <config_server-repo>/bin
$ sudo chef-client
It is easy to fix problems if something should go wrong with bootstrapped nodes. Running "sudo chef-client" will re-configure the nodes (both pnodes and the containers).
If all the preceding steps succeeded, then your pnodes and containers are configured, booted, and ready for use.
On the experiment's webpage, swap in the experiment.
Populate the configuration database that runs on chef.isi.deterlab.net by running the load_containers_db.sh and load_config_db.sh database-population scripts. These should be run on a single physical node in the experiment; pnode-0000 is used in the example below.
The <expid> and <projid> fields in the following example are referring to the experiment ID and the project ID. The experiment ID is defined by the user, and could be something like "neocont-test" or "netstriping". For now, the project ID should always be "Deter".
$ ssh pnode-0000.<expid>.<projid>
$ cd <config_server-repo>/bin
$ ./load_config_db.sh
This step will be automated in the future.
The Chef system is used to bootstrap and configure the nodes. All the steps for this are enclosed in the bootstrap_node.sh script.
The script needs to know the node's role in the experiment. There are currently three roles: pnode, container, and win-container.
On all the pnodes which have containers running on them:
$ ssh <pnode>.<expid>.<projid>
$ cd <config_server-repo>/bin
$ ./bootstrap_node.sh -r pnode

The pnodes only have to be bootstrapped once per experiment swap-in. Once a pnode is bootstrapped into Chef, chef-client needs to be run. The pnode role will spawn the containers and configure them, so once the chef-client command is run on a pnode, all containers on that pnode will be running and configured.
$ ssh <pnode>.<expid>.<projid>
$ cd <config_server-repo>/bin
$ sudo chef-client
It is easy to fix problems if something should go wrong with bootstrapped nodes. Running "sudo chef-client" will re-configure the nodes (both pnodes and the containers).
If all the preceding steps succeeded, then your pnodes and containers are configured, booted, and ready for use.
There are a number of things that may be done after the containers are configured and booted. These include the following:
Log in to a container over ssh (first ssh to the pnode that hosts it):

$ ssh pnode
$ ssh username@containernode

Cygwin is installed on Windows nodes, so you can ssh to Windows containers as well.
The containers can also be managed with Vagrant commands:

Command | Purpose
vagrant status | confirm the containers are running
vagrant ssh containernode | log in to a container (username "vagrant", password "vagrant")
vagrant reload containernode | reboot a container
vagrant halt containernode | halt a container
vagrant up containernode | boot a container
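These vagrant commands are run on the pnode that hosts the container. Vagrant looks for a Vagrantfile in the current directory, so change to the directory set up by the pnode role for the experiment's containers before running them (the directory placeholder and the container name "helena" from the nodes.json example above are only illustrative):

$ ssh <pnode>.<expid>.<projid>
$ cd <directory containing the experiment's Vagrantfile>
$ vagrant status
$ vagrant ssh helena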
After an experiment is complete, the experiment data must be removed from the configuration database. There are two ways this may be done.
Method 1: On a host which can talk to chef.isi.deterlab.net, run these commands:
$ cd <config_server-repo>/bin
$ rm_experiment_config.sh -p <projid> -e <expid>
Method 2: The config_server may be called directly:
$ curl http://chef:5320/exp/<projid>/<expid>/delete
Since the system runs on Chef, anyone authorized to push Chef recipes to the Chef server can write custom node-configuration code.
The system uses Vagrant to spin up the containers, thus any Vagrant-supported image can run in a container. However, the image must be downloaded and served via HTTP.
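For example, a custom box can be served from any web server the pnodes can reach over HTTP. A minimal sketch, assuming a box file named my_custom.box and a host named imageserver (both names are only illustrative), uses Python's built-in web server:

$ mkdir -p ~/boxes
$ cp my_custom.box ~/boxes/
$ cd ~/boxes
$ python -m SimpleHTTPServer 8000    # Python 2; with Python 3 use: python3 -m http.server 8000

The container's image_url would then be set to http://imageserver:8000/my_custom.box, with image_type set to vagrant.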
}}}