[Rdo-list] Using tmux to do an OpenStack demo

Kashyap Chamarthy kchamart at redhat.com
Wed Dec 11 16:09:53 UTC 2013


Just a little while ago, I did an internal demo of a small aspect of
OpenStack over a shared terminal using 'tmux' (inspired by a colleague).
Just posting the details of how/what I did here, in case someone wants to
try something similar. It went fairly well as everything just worked :-)

Due to time limitations, we discussed three aspects (pre-requisite: an existing set-up):

  [1] Flow of a VM
  [2] Boot from Snapshot
  [3] Neutron Tenant Network Creation/Boot a guest from this new Tenant

Here are the commands (attached below for reference):


And, these were my Neutron configs on both Controller/Compute nodes:


Setup details:

It's a two node OpenStack RDO set-up configured manually on two
Fedora 20 VMs (running Nested KVM on Intel).

  - Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open
    vSwitch plugin and GRE tunneling).

  - Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

Setting up tmux for a shared read-only session [*]:

  # Create the demo user:
  $ useradd demo-ostk
  $ passwd demo-ostk

  # Install tmux, start a session on a named socket, and make the socket
  # accessible to the demo user:
  $ yum install tmux -y
  $ tmux -S /var/tmp/demo-ostk
  $ chmod 777 /var/tmp/demo-ostk

  # A wrapper script that attaches to the session read-only ('attach -r'):
  $ cat /home/demo-ostk/run-tmux
  #!/bin/sh -
  exec /usr/bin/tmux -S /var/tmp/demo-ostk attach -r

  # The demo user's login shell should be this wrapper:
  $ grep demo-ostk /etc/passwd

  $ chown root.root /home/demo-ostk/run-tmux
  $ chmod 0555 /home/demo-ostk/run-tmux
  $ chcon system_u:object_r:bin_t:s0 /home/demo-ostk/run-tmux

That's all. Ask your participants to log in as the demo user, and the
read-only session will be presented:

  $ ssh demo-ostk@IP


tmux resizes the window to the smallest client (even if you're attached
read-only). This is annoying. If a participant ends up doing it
inadvertently, you can ask them to undo it, and it'll be back to normal on
the controlling end. (This is possible by using this setting in tmux.conf
-- 'setw -g aggressive-resize on'.) Thanks to Lars Kellogg-Stedman for
this tip.
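For reference, the aggressive-resize tweak above is a single line of tmux
configuration; a minimal sketch of the presenter-side config (file location
assumed to be the default ~/.tmux.conf):

```
# ~/.tmux.conf (presenter side): size windows based on the clients actually
# viewing them, rather than the smallest attached client
setw -g aggressive-resize on
```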

 [*] https://rwmj.wordpress.com/2011/11/23/using-tmux-to-share-a-terminal/

-------------- next part --------------

0. List different Nova flavors

  $ nova flavor-list

1. Boot a guest with flavor 1 (i.e. 512 MB memory, and a small disk)

  $ GLANCE_IMG=$(glance image-list | grep "cirros\ " | awk '{print $2;}')

  $ nova boot --flavor 1 \
    --image $GLANCE_IMG cirr-guest1

2. Ensure it's active:

  $ nova list

  And also check the instance's serial console log, to see if it *really* acquired a DHCP lease:

  $ nova console-log cirr-guest1
  Starting network...
  udhcpc (v1.20.1) started
  Sending discover...
  Sending select for <IP>...
  Lease of <IP> obtained, lease time 120
  deleting routers
  route: SIOCDELRT: No such process
  adding dns
  cirros-ds 'net' up at 13.34
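Rather than eyeballing the log, the lease line can also be grepped for; a
small sketch (the log text below is made-up sample data, not real output):

```shell
# Made-up sample of console-log text; the real input would be the output
# of 'nova console-log cirr-guest1'.
log='Sending discover...
Sending select for 10.0.0.3...
Lease of 10.0.0.3 obtained, lease time 120'

# Succeeds only if a DHCP lease line is present:
echo "$log" | grep -q 'Lease of .* obtained' && echo "DHCP lease OK"
```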

3. Try SSHing to the private IP via the router namespace
  # List namespaces
  $ ip netns

  # Reach internet from the router namespace:
  $ ip netns exec qrouter-f2df2518-78cb-4ad2-917c-3c1b0e994de7 ping google.com

  # SSH into the private IP via the router namespace
  $ ip netns exec qrouter-f2df2518-78cb-4ad2-917c-3c1b0e994de7 ssh cirros@<private-IP>
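Typing the full qrouter-<UUID> name by hand is error-prone; one way is to
grep it out of 'ip netns' (the sample output below is made up for
illustration -- in the demo it comes straight from 'ip netns'):

```shell
# Made-up sample of 'ip netns' output:
sample='qdhcp-9a1b2c3d-1111-2222-3333-444455556666
qrouter-f2df2518-78cb-4ad2-917c-3c1b0e994de7'

# Pick out the router namespace name:
ROUTER_NS=$(echo "$sample" | grep '^qrouter-')
echo "$ROUTER_NS"
```

The commands above then become, e.g., 'ip netns exec $ROUTER_NS ping google.com'.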

4. Create a Floating IP on the "external" network, and list:

  $ neutron floatingip-create ext
  $ neutron floatingip-list

5. Pull the Nova guest ID, Floating IP ID, and the VM port ID into
environment variables:

  $ NOVA_GUEST_ID=$(nova list | grep cirr-guest1 | awk '{print $2;}')
  $ FLOATINGIP_ID=$(neutron floatingip-list | grep <floating-ip> | awk '{print $2}')
  $ VM_PORT_ID=$(neutron port-list --device-id $NOVA_GUEST_ID | grep ip_address | awk '{print $2;}')
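The grep/awk pipelines above all rely on the table layout of the CLI
output: awk sees the leading '|' as field 1, so the ID column is field 2.
A sketch against a made-up 'nova list' row:

```shell
# Made-up sample of a 'nova list' table row (real input comes from nova):
sample='+--------------------------------------+-------------+--------+
| 7f5c3a21-0000-4111-8222-123456789abc | cirr-guest1 | ACTIVE |
+--------------------------------------+-------------+--------+'

# awk sees '|' as field 1, so the instance ID is field 2:
NOVA_GUEST_ID=$(echo "$sample" | grep cirr-guest1 | awk '{print $2;}')
echo "$NOVA_GUEST_ID"
```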

6. Associate the Floating and Fixed IPs (this will take a little bit of time); and do a couple of 'list' operations:

  # Associate:
  $ neutron floatingip-associate $FLOATINGIP_ID $VM_PORT_ID

  # List the Floating IP addresses to see the mapping:
  $ neutron floatingip-list

  # List Nova instances:
  $ nova list

  $ ping <floating-ip>
  PING <floating-ip> (<floating-ip>) 56(84) bytes of data.
  64 bytes from <floating-ip>: icmp_seq=1 ttl=63 time=13.2 ms
  64 bytes from <floating-ip>: icmp_seq=2 ttl=63 time=1.50 ms

7. SSH into the instance via its Floating IP:

  $ ssh cirros@<floating-ip>
  $ sudo -i
  # Shows only the Fixed IP -- the Floating IP is NATed to it by the router:
  $ ifconfig -a

Some useful commands to inspect the Neutron set-up:

  $ neutron net-list
  $ neutron subnet-list
  # List namespaces (DHCP namespace, Router namespace)
  $ ip netns


Create a snapshot of a running instance:

  $ nova image-create cirr-guest2 snap1-of-cirr-guest2

List in Glance, to see if it shows up:

  $ glance image-list

Boot via this image:

  $ nova boot --flavor 1 --image 4cdc2f39-2c64-4145-8011-3c0bb58ff05f vm3
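Instead of hard-coding the snapshot's image ID as above, it can be pulled
out of 'glance image-list' the same way as the other IDs; a sketch with a
made-up table row:

```shell
# Made-up sample of a 'glance image-list' row (real input comes from glance):
sample='| 4cdc2f39-2c64-4145-8011-3c0bb58ff05f | snap1-of-cirr-guest2 |'

# The ID column is awk field 2 (field 1 is the leading '|'):
SNAP_ID=$(echo "$sample" | grep snap1-of-cirr-guest2 | awk '{print $2;}')
echo "$SNAP_ID"
```

Then: nova boot --flavor 1 --image $SNAP_ID vm3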


1. Source the admin tenant credentials:

  $ . keystonerc_admin

2. Create a tenant

  $ keystone tenant-create --name demo1
  $ keystone user-create --name tuser1 --pass fedora
  $ keystone user-role-add --user tuser1 --role user --tenant demo1

3. Create an RC file for this user and source the credentials:

  $ cat >> ~/keystonerc_tuser1 <<EOF
  export OS_USERNAME=tuser1
  export OS_TENANT_NAME=demo1
  export OS_PASSWORD=fedora
  export OS_AUTH_URL=http://localhost:5000/v2.0/
  export PS1='[\u@\h \W(keystone_tuser1)]\$ '
  EOF

  $ . ~/keystonerc_tuser1
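A quick sanity check that the credentials actually landed in the
environment; this sketch writes a throwaway copy under /tmp (an illustrative
path, chosen so as not to touch the real RC file):

```shell
# Write a throwaway copy of the RC file (path is illustrative only):
cat > /tmp/keystonerc_tuser1 <<EOF
export OS_USERNAME=tuser1
export OS_TENANT_NAME=demo1
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://localhost:5000/v2.0/
EOF

# Source it and confirm the variables are set:
. /tmp/keystonerc_tuser1
echo "$OS_USERNAME/$OS_TENANT_NAME"
```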

4. Create a private network

  $ neutron net-create priv-net1

5. Create a subnet (the CIDR of the private range, stripped in the archived
post, is a required argument):

  $ neutron subnet-create priv-net1 <CIDR> --name priv-subnet1

6. Create a router

  $ neutron router-create testrouter

7. Associate the router to the external network by setting its gateway

  $ EXT_NET=$(neutron net-list | grep ext | awk '{print $2;}')
  $ ROUTER_ID=$(neutron router-list | grep testrouter | awk '{print $2;}')
  $ neutron router-gateway-set $ROUTER_ID $EXT_NET

8. Add the private network to the router

  # Note: router-interface-add takes the *subnet* ID:
  $ PRIV_NET=$(neutron subnet-list | grep priv-subnet1 | awk '{print $2;}')
  $ neutron router-interface-add $ROUTER_ID $PRIV_NET

9. List Neutron networks

  $ neutron net-list

10. Add Neutron security group rules (the security group name, 'default', is
a positional argument; the CIDR after --remote-ip-prefix was stripped in the
archived post):

  $ neutron security-group-rule-create --protocol icmp \
    --direction ingress --remote-ip-prefix <CIDR> default

  $ neutron security-group-rule-create --protocol tcp \
    --port-range-min 22 --port-range-max 22 \
    --direction ingress --remote-ip-prefix <CIDR> default

11. Boot a guest on this newly created network (note: nova's --nic takes the
*network* ID, not the subnet ID):

  $ PRIV_NET_ID=$(neutron net-list | grep priv-net1 | awk '{print $2;}')
  $ nova boot --flavor 1 --nic net-id=$PRIV_NET_ID --image $GLANCE_IMG vm1 --security-groups default
