
Much of the Neo-Containers system is still exposed and requires the user to
run a few configuration scripts. In the fullness of time, these details will
be folded into the system.

<p>

<!------------------------------------------------------------------------>

<a name="initial"></a>
<h2>2. Initial Set-up</h2>

<p>

Check out the <code>config_server</code> repository from GitHub. This
repository contains the <code>config_server</code> code, as well as several
scripts that must be run.

<p>

It is assumed this will be checked out on <code>users.isi.deterlab.net</code>.

<p>

<pre>
$ mkdir src
$ cd src
$ git clone https://github.com/deter-project/config_server.git
</pre>

<p>

<!------------------------------------------------------------------------>

<a name="existing-old-containers"></a>
<h2>3. Using Neo-Containers with the Existing Containers System</h2>

<p>

This method uses the existing Containers system and allows the use of more
complex network topologies.

<p>

<!----------------------------------------------->

<h3>Create an Experiment</h3>

<p>

Create an experiment using the existing Containers system. An NS file
and the <b>/share/containers/containerize.py</b> script are used to create
the containerized experiment.

<p>

In your NS file, specify <i>image_os</i>, <i>image_type</i>,
<i>image_name</i>, and <i>image_url</i> for each container via the
<i>tb-add-node-attribute</i> syntax. Details on each attribute are
given below.

<p>

<ul>

<li><i>image_os</i> - This is really just to distinguish Windows from
non-Windows nodes. If the <i>image_os</i> starts with "windows", the image
will be treated as a Windows node. Otherwise it'll be assumed to be some sort
of Unix-y container.

<p>

<li><i>image_type</i> - This setting describes the containerization
technology of the node. Currently this is <i>always</i> set to "vagrant", as
Vagrant is the only package used to spin up the containers.

<p>

<li><i>image_name</i> - The name of the image. Any containers that share a
name will also share an image.

<p>

<li><i>image_url</i> - A URL from which the Neo-Containers system downloads
the container image. This URL must be resolvable from the experiment nodes.
The image will only be downloaded once as long as the <i>image_name</i>s are
the same for each container. Existing and supported images are Ubuntu 14.04
64-bit (at <code>http://scratch/containers/deter_ub1404_64_vb.box</code>)
and Windows 7 (at <code>http://scratch/containers/deter_win7.box</code>).

</ul>

<p>

The following is an example NS file that creates one Windows container and
one Ubuntu 14.04 container:

<p>

<pre>
set r2d2 [$ns node]
tb-add-node-attribute $r2d2 containers:image_os windows
tb-add-node-attribute $r2d2 containers:image_type vagrant
tb-add-node-attribute $r2d2 containers:image_name deter/win7
tb-add-node-attribute $r2d2 containers:image_url \
    http://scratch/containers/deter_win7.box

set c3po [$ns node]
tb-add-node-attribute $c3po containers:image_os ubuntu
tb-add-node-attribute $c3po containers:image_type vagrant
tb-add-node-attribute $c3po containers:image_name ubuntu/trusty64
tb-add-node-attribute $c3po containers:image_url \
    http://scratch/containers/deter_ub1404_64_vb.box
</pre>
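Since the attribute lines follow a regular pattern, experiments with many containers may find it convenient to generate them. The helper below is purely illustrative (it is not part of the Containers toolchain); it simply emits lines in the format shown above, with the URL kept on one line:

```python
# Illustrative helper (NOT part of the Containers toolchain): generate the
# per-container NS attribute lines described above for one container node.

def ns_attributes(var, image_os, image_name, image_url, image_type="vagrant"):
    """Return the NS lines declaring one container node and its attributes."""
    lines = [f"set {var} [$ns node]"]
    for attr, value in [("image_os", image_os),
                        ("image_type", image_type),  # currently always "vagrant"
                        ("image_name", image_name),
                        ("image_url", image_url)]:
        lines.append(f"tb-add-node-attribute ${var} containers:{attr} {value}")
    return "\n".join(lines)

print(ns_attributes("r2d2", "windows", "deter/win7",
                    "http://scratch/containers/deter_win7.box"))
```

Calling it once per container and concatenating the results yields the body of an NS file like the example above.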

<p>

<!----------------------------------------------->

<h3>Containerize the Experiment</h3>

<p>

Use the NS file to create a containerized experiment using the existing
Containers scripts.

<p>

<pre>
$ /share/containers/containerize.py &lt;group&gt; &lt;experiment&gt; &lt;ns-file&gt;
</pre>

<p>

<b>Note:</b> The experiment must currently be created in the Deter group,
as that is where the custom <i>pnode</i> disk images are. This will change.

<p>

<!----------------------------------------------->

<h3>Finalize the NS File</h3>

<p>

Modify the NS file generated by <b>containerize.py</b> so that the
<i>pnode</i> machines use a new image.

<p>

Follow these steps in your browser:

<ol>

<li>Go to the new experiment page.

<li>Click <i>Modify Experiment</i>.

<li>Remove all existing <i>tb-set-node-startcmd</i> lines.<br> These start
the old Containers system and are no longer used.

<li>For each <i>pnode</i>, change the OS type to PNODE-BASE.

<li>For each <i>pnode</i>, change the hardware type to MicroCloud.

</ol>

<p>

After making these modifications, each pnode in the NS file should have
these lines:

<pre>
tb-set-node-os ${pnode(0000)} PNODE-BASE
tb-set-hardware ${pnode(0000)} MicroCloud
</pre>

<p>

The final NS file will look something like this:

<pre>
set ns [new Simulator]
source tb_compat.tcl

tb-make-soft-vtype container0 {dl380g3 pc2133 MicroCloud}
set pnode(0000) [$ns node]
tb-set-node-os ${pnode(0000)} PNODE-BASE
tb-set-hardware ${pnode(0000)} container0
tb-set-node-failure-action ${pnode(0000)} "nonfatal"

$ns rtproto Static
$ns run
</pre>

<p>

<!----------------------------------------------->

<h3>Swap In</h3>

<p>

On the experiment's webpage, swap in the experiment.

<p>

<!----------------------------------------------->

<h3>Populate the Configuration Database</h3>

<p>

Populate the configuration database that runs on
<code>chef.isi.deterlab.net</code> by running the <b>load_config_db.sh</b>
and <b>load_containers_db.sh</b> database-population scripts.

<p>

These scripts should be run on a single physical node in the experiment;
<code>pnode-0000</code> is used in the example below.

<p>

The <i>&lt;expid&gt;</i> and <i>&lt;projid&gt;</i> fields in the following
example refer to the experiment ID and the project ID. The experiment ID is
defined by the user and could be something like "neocont-test" or
"netstriping". For now, the project ID should always be "Deter".

<p>

<pre>
$ ssh pnode-0000.<i>&lt;expid&gt;</i>.<i>&lt;projid&gt;</i>
$ cd <i>&lt;config_server-repo&gt;</i>/bin
$ ./load_config_db.sh
$ ./load_containers_db.sh -p <i>&lt;projid&gt;</i> -e <i>&lt;expid&gt;</i>
</pre>

<p>

This step will be automated in the future.

<p>

<!----------------------------------------------->

<h3>Node Configuration by Chef</h3>

<p>

The Chef system is used to bootstrap and configure the nodes. All the
steps for this are enclosed in the <b>bootstrap_node.sh</b> script.

<p>

The script needs to know the node's role in the experiment. There
are currently three roles: <i>pnode</i>, <i>container</i>, and
<i>win-container</i>.

<p>

On all the <i>pnode</i>s which will be running containers:

<pre>
$ ssh <i>&lt;pnode&gt;</i>.<i>&lt;expid&gt;</i>.<i>&lt;projid&gt;</i>
$ cd <i>&lt;config_server-repo&gt;</i>/bin
$ ./bootstrap_node.sh -r pnode
</pre>

<p>

Each pnode only has to be bootstrapped once per experiment swap-in. Once
a pnode is bootstrapped into Chef, <i>chef-client</i> needs to be run.

The <i>pnode</i> role will spawn the containers and configure them, so
once the <i>chef-client</i> command is run on a pnode, all containers
on that pnode will be running and configured.

<pre>
$ ssh <i>&lt;pnode&gt;</i>.<i>&lt;expid&gt;</i>.<i>&lt;projid&gt;</i>
$ cd <i>&lt;config_server-repo&gt;</i>/bin
$ sudo chef-client
</pre>

<p>

It is easy to fix problems if something goes wrong with bootstrapped
nodes: running "sudo chef-client" again will re-configure the nodes (both
the <i>pnode</i>s and the containers).

<p>

<!----------------------------------------------->

<h3>Set-up Complete</h3>

<p>

If all the preceding steps succeeded, then your <i>pnode</i>s and containers
are configured, booted, and ready for use.

<p>

<!------------------------------------------------------------------------>

<hr>
<p>

<a name="absent-old-containers"></a>
<h2>4. Using Neo-Containers While Bypassing the Existing Containers System</h2>

<p>

This method of using Neo-Containers does not use the existing Containers
system. It allows containers to be associated with particular physical
nodes, but it requires the user to manually compute IP addresses for the
container nodes. Standard NS files are used for the DETER experiment in
this method.

<p>

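As one illustrative approach to that manual IP computation, Python's <code>ipaddress</code> module can hand out sequential addresses. The <code>10.1.1.0/24</code> subnet and the <code>assign_addresses</code> helper below are assumptions for illustration only, not anything mandated by DETER:

```python
# Illustrative only: hand out sequential experiment-network addresses to
# container nodes. The subnet and node names are assumptions; use whatever
# addressing your experiment actually requires.
import ipaddress

def assign_addresses(node_names, subnet="10.1.1.0/24", first_host=1):
    """Map each container node name to the next host address in the subnet."""
    hosts = list(ipaddress.ip_network(subnet).hosts())
    return {name: str(hosts[first_host - 1 + i])
            for i, name in enumerate(node_names)}

print(assign_addresses(["leda", "swan"]))
# prints {'leda': '10.1.1.1', 'swan': '10.1.1.2'}
```

However you compute them, the point is simply that the container addresses are chosen by you rather than by the Containers system.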
<!----------------------------------------------->

<h3>Create an Experiment</h3>

<p>

Create an experiment without using the existing Containers system. This
experiment requires an NS file with a fully connected network. The
PNODE-BASE image must be used for all machines which will run containers.
The NS file must be loaded into the DETER system in the usual way.

<p>

Example NS file:

<p>

<pre>
set ns [new Simulator]
source tb_compat.tcl

set nodes "leda swan"
set lanstr ""

tb-make-soft-vtype pnode_hardware {pc2133 MicroCloud}

foreach node $nodes {
    set n [$ns node]
    set $node $n
    append lanstr "$n "
    tb-set-node-os $n PNODE-BASE
    tb-set-hardware $n pnode_hardware
}

set lan0 [$ns make-lan $lanstr 100Mb 0ms]

$ns rtproto Static
$ns run
</pre>

<p>
Pass the <code>nodes.json</code> file describing your containers to the
script at <code>/share/config_server/bin/initialize_containers.py</code>.

<p>

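The exact <code>nodes.json</code> schema is defined by <code>initialize_containers.py</code> and is not reproduced here; the fragment below is only a hypothetical sketch of the kind of per-container information such a file carries (every field name in it is an assumption, not the authoritative format):

```json
[
    { "host": "leda", "name": "container0", "image": "ubuntu/trusty64" },
    { "host": "swan", "name": "container1", "image": "deter/win7" }
]
```

Consult the script itself for the fields it actually expects.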
This must be run at least once per experiment, and <b>must be done before
experiment swap-in</b>. The <code>initialize_containers.py</code> script asks
DETER to allocate control-network addresses for the containers. These
addresses must exist before the containers do, because the containers use the
control network to request configuration information from the configuration
server. Due to the way DETER works, the addresses must be allocated before
swap-in; otherwise they will not be properly associated with the container
hostnames, and the configuration server will not be able to talk to the
containers.

The script can be run multiple times without ill effects.

<p>

The <i>&lt;expid&gt;</i> and <i>&lt;projid&gt;</i> fields in the following
example refer to the experiment ID and the project ID. The experiment ID is
defined by the user and could be something like "neocont-test" or
"netstriping". The project ID is the name of the project under which the
experiment is run.

<p>

<pre>
$ /share/config_server/bin/initialize_containers.py -p <i>&lt;projid&gt;</i> -e <i>&lt;expid&gt;</i> -f <i>path/to/nodes.json</i>
</pre>

<p>

If you decide to change the nature of the containers run in an experiment,
you must <b>destroy the experiment</b> and start over. This needs to be done
so that DETER will unreserve the control-net addresses it previously
reserved for the containers.

<p>

Then load the container information into the configuration database from the
same <code>nodes.json</code> file:

<p>

<pre>
$ ./load_containers_db.sh -p <i>&lt;projid&gt;</i> -e <i>&lt;expid&gt;</i> -f <i>path/to/nodes.json</i>
</pre>

<p>

This only needs to be done once per experiment, not before each swap-in;
once is enough to reserve control-net addresses from DETER.

<p>
