Version 22 (modified by Geoff Lawler, 5 years ago) (diff)


The Basics

The neo-containers system uses cloud-container technology to abstract and generalize container creation and initialization. At the DETER level, an experiment has a number of physical nodes, "pnodes", which serve as hosts for the virtualized containers. Outside of the experiment there are two servers which configure the containers: the Chef server and the config_server. The Chef server serves "code as configuration" and stores (mostly static) configuration information. The config_server is a RESTful API which loads and then serves experiment-specific configuration information. The Chef code that runs on the containers usually pulls its specific configuration from the config_server.

HOWTO run neo-containers

Note that much of the detail of the system is still exposed: users must currently run a script or two by hand. These scripts (or the functionality they contain) will be moved into the system itself in the future and hidden.

  1. Check out the config_server repository. This contains the config_server code as well as the script you will use to populate the config server database.
    users: > cd src
    users: > git clone
  1. Create a containerized experiment with an NS file and the /share/containers/ script. (There is a mode of neo-containers which does *not* require running the older containerization scripts. This will be documented.)

In your NS file, for each container in the experiment, specify image_os, image_type, image_name, and image_url via the tb-add-node-attribute syntax. Details on each attribute are given below.

  • image_os - This is really just to distinguish Windows from non-Windows nodes. If the image_os starts with "windows", the image will be treated as a Windows node. Otherwise it will be assumed to be some sort of Unix-like container.
  • image_type - This setting describes the containerization tech of the node. Currently this is *always* set to "vagrant" as Vagrant is the only package used to spin up the containers.
  • image_name - The name of the image. Any containers that share a name will also share an image.
  • image_url - A URL from which the neo-containers system downloads the container image. This URL must be resolvable from the experiment nodes. The image will only be downloaded once as long as the image_name is the same for each container. Existing and supported images are Ubuntu 14.04 64-bit @ http://scratch/containers/ and Windows 7 @ http://scratch/containers/

Here is an example that creates Windows and Ubuntu 14.04 containers:

set r2d2 [$ns node]
tb-add-node-attribute $r2d2 containers:image_os windows
tb-add-node-attribute $r2d2 containers:image_type vagrant
tb-add-node-attribute $r2d2 containers:image_name deter/win7
tb-add-node-attribute $r2d2 containers:image_url http://scratch/containers/

set c3po [$ns node]
tb-add-node-attribute $c3po containers:image_os ubuntu
tb-add-node-attribute $c3po containers:image_type vagrant
tb-add-node-attribute $c3po containers:image_name ubuntu/trusty64
tb-add-node-attribute $c3po containers:image_url http://scratch/containers/

Currently Windows nodes do not get fully configured and a final script must be run by hand on the container.

  1. Use the NS file to create a containerized experiment using the existing containers scripts (on users): /share/containers/ [group] [experiment] [ns file]. Note that the experiment must currently be created in the Deter group, as that is where the custom pnode disk images are. This will change.
  1. Modify the generated NS file to use a new image for the pnode machines. Navigate to the new experiment page and click Modify Experiment. Change the OS type of the pnodes to PNODE-BASE and the hardware type to MicroCloud. I.e., for each pnode in the NS file, make the lines have the form:
        tb-set-node-os ${pnode(0000)} PNODE-BASE
        tb-set-hardware ${pnode(0000)} MicroCloud

Remove all existing tb-set-node-startcmd lines, as these start the old containers system, which is no longer used.
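Stripping those lines can be sketched with sed; the NS file name and the old start command below are hypothetical stand-ins for your own:

```shell
# Create a stand-in NS fragment (use your real generated NS file instead).
cat > myexp.ns <<'EOF'
set pnode(0000) [$ns node]
tb-set-node-startcmd ${pnode(0000)} "/old/containers/start"
tb-set-hardware ${pnode(0000)} MicroCloud
EOF

# Delete every tb-set-node-startcmd line in place, keeping a .bak backup.
sed -i.bak '/tb-set-node-startcmd/d' myexp.ns
```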

The final NS file will look something like this.

set ns [new Simulator]
source tb_compat.tcl

tb-make-soft-vtype container0 {dl380g3 pc2133 MicroCloud}
set pnode(0000) [$ns node]
tb-set-node-os ${pnode(0000)} PNODE-BASE
tb-set-hardware ${pnode(0000)} container0
tb-set-node-failure-action ${pnode(0000)} "nonfatal"

$ns rtproto Static
$ns run
  1. Swap in the experiment.
  1. Populate the configuration database by running the two database population scripts shown below. (This will be automated in the future.) These should be run from a physical node in the experiment; I use pnode-0000 in the example below.

On a single pnode:

> ssh pnode-0000.${EXPID}.${PROJID}
> cd [your config_server repository]/bin
> ./ -p ${PROJID} -e ${EXPID}
> ./

At this point, the Chef server and the configuration database know everything they need to about your experiment and the nodes within it.

  1. Let Chef configure the nodes. Bootstrap and configure the pnodes. To bootstrap and configure a node, use the bootstrap script; it needs to know which role the node plays in the experiment. There are currently three roles: pnode, container, and win-container.

On all the pnodes:

> ssh pnode-0000.${EXPID}.${PROJID}
> cd [your config_server repository]/bin
> ./ -r pnode

The pnode role will spawn the containers and configure them.

Once nodes are bootstrapped, simply running sudo chef-client will re-configure a node (pnode or container) if something should go wrong.
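A re-convergence pass over every pnode might look like the following sketch; NUM_PNODES, EXPID, and PROJID are assumptions you would set for your experiment, and the echo makes this a dry run:

```shell
# Hedged sketch: re-run chef-client on each pnode to re-apply configuration.
NUM_PNODES=2          # assumption: number of pnodes in the experiment
EXPID=myexp           # assumption: your experiment ID
PROJID=Deter          # assumption: your project/group ID
for i in $(seq -f "%04g" 0 $((NUM_PNODES - 1))); do
  # echo makes this a dry run; drop it to actually reconfigure the pnode.
  echo ssh "pnode-${i}.${EXPID}.${PROJID}" sudo chef-client
done
```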

  1. Remove experiment data from the configuration database once the experiment is complete.

On a single pnode:

> ssh pnode-0000.${EXPID}.${PROJID}
> cd [your config_server repository]/bin
> ./ -p ${PROJID} -e ${EXPID}

An alternate way to do this is to call the config_server directly:

curl http://chef:5320/exp/${PROJID}/${EXPID}/delete

Fun things to do after the containers are running.

  • Login to a node:
    • ssh pnode-0000.${EXPID}.${PROJID}, then from there ssh username@[nodename]. Cygwin is installed on the Windows nodes, so you can ssh to Windows containers as well.
  • Play around with Vagrant. To use: a) ssh to any pnode-XXXX, b) sudo su -, c) cd /space/vagrant_home
    • Confirm containers are running: vagrant status
    • ssh to a node (Windows or not): vagrant ssh [node name] (login vagrant, password vagrant)
    • reboot a container: vagrant reload [node name]. Or halt then restart a node: vagrant halt [node name], then vagrant up [node name]
  • Login to a Windows desktop:
    • Build an ssh tunnel to port 3389 on the pnode: ssh -L3389:pcXXX:3389
    • use client RDP to connect to localhost:3389. Login vagrant, password vagrant.
  • Play around with knife, the command line interface to Chef.
    • login to the config node
    • cd to /space/local/chef/chef-repo
    • Use knife.
      • knife node list
      • knife node show [node name]
      • knife --help

Chef Workstation

Since the system runs on Chef, anyone authorized to push Chef recipes to the Chef server can write custom node configuration code.


The system uses Vagrant to spin up the containers, so any Vagrant-supported image can run in a container. The image must be downloaded and served via HTTP, though.
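Any static HTTP server will do for this; a minimal sketch using Python's built-in http.server follows, where the staging directory, port, and box name are all hypothetical:

```shell
# Stage images in a directory and serve them over HTTP (port 8080 assumed).
mkdir -p /tmp/boxes
echo "placeholder" > /tmp/boxes/mybox.box   # stand-in for a real .box image
cd /tmp/boxes
python3 -m http.server 8080 &               # serve; nodes fetch via image_url
SERVER_PID=$!
```

A node's NS file would then set image_url to http://[server]:8080/mybox.box; stop the server with kill $SERVER_PID when done.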