[Rdo-list] Foreman quickstack Neutron with VLAN
by Jonas Hagberg
Hi,
Is there any guide for configuring a Neutron network node to support OVS and
VLAN with two physical interfaces?
I tried changing some parameters in Foreman. I would also like to use the
Mellanox plugin and SR-IOV in Ethernet mode.
https://wiki.openstack.org/wiki/Mellanox-Neutron-Icehouse-Redhat#Network_...
But I cannot get things running.
In /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini I now have:
[OVS]
vxlan_udp_port=4789
network_vlan_ranges=physnet1:1000:2999
tenant_network_type=vlan
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet1:br-eth0,public:br-ex
eth0 is my internal Mellanox interface.
I have run the script
./bridge-create.sh br-eth0 eth0
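(If I understand that script right, it just does the standard Open vSwitch bridge setup, roughly:
ovs-vsctl add-br br-eth0
ovs-vsctl add-port br-eth0 eth0
and then moves the IP address from eth0 over to br-eth0, which matches the ifconfig output below where 10.10.10.101 now sits on br-eth0.)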
My ifconfig output looks like this:
ifconfig
br-eth0 Link encap:Ethernet HWaddr 24:BE:05:9A:2B:71
inet addr:10.10.10.101 Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: fe80::7431:b4ff:fe5f:4dbd/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:180 (180.0 b) TX bytes:930 (930.0 b)
br-eth0,public Link encap:Ethernet HWaddr E6:72:FA:A1:EB:45
inet6 addr: fe80::e08d:1eff:feb4:f07d/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:468 (468.0 b)
br-int Link encap:Ethernet HWaddr 32:16:54:BA:73:4C
inet6 addr: fe80::1c1b:21ff:fe0e:48f9/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:468 (468.0 b)
eth0 Link encap:Ethernet HWaddr 24:BE:05:9A:2B:71
inet6 addr: fe80::26be:5ff:fe9a:2b71/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:1731 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:110784 (108.1 KiB) TX bytes:926 (926.0 b)
eth2 Link encap:Ethernet HWaddr 9C:B6:54:08:94:FC
inet addr:172.25.8.101 Bcast:172.25.11.255 Mask:255.255.252.0
inet6 addr: fe80::9eb6:54ff:fe08:94fc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:53594 errors:0 dropped:0 overruns:0 frame:0
TX packets:21334 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:52384874 (49.9 MiB) TX bytes:8096475 (7.7 MiB)
Memory:f7d00000-f7e00000
eth3 Link encap:Ethernet HWaddr 9C:B6:54:08:94:FD
UP BROADCAST MULTICAST MTU:9000 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Memory:f7b00000-f7c00000
int-br-eth0 Link encap:Ethernet HWaddr B2:F3:16:36:98:6A
inet6 addr: fe80::b0f3:16ff:fe36:986a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:468 (468.0 b) TX bytes:468 (468.0 b)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
phy-br-eth0 Link encap:Ethernet HWaddr A6:85:EA:FF:E1:86
inet6 addr: fe80::a485:eaff:feff:e186/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:468 (468.0 b) TX bytes:468 (468.0 b)
So there are lots of strange bridges, but no br-ex.
I created it by hand, and now I can get the neutron-openvswitch-agent service to run.
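(By hand meaning roughly the usual OVS commands; which NIC should actually carry the public network I am still guessing at, so eth2 below is just my assumption:
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2
ovs-vsctl show
The last command at least confirms that br-int, br-eth0 and br-ex all exist and which ports are attached to each.)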
I am just starting to learn Neutron so I may not fully understand what I am
doing.
Some help and kind guidance would be wonderful.
cheers
--
Jonas Hagberg
BILS - Bioinformatics Infrastructure for Life Sciences - http://bils.se
e-mail: jonas.hagberg(a)bils.se, jonas.hagberg(a)scilifelab.se
phone: +46-(0)70 6683869
address: SciLifeLab, Box 1031, 171 21 Solna, Sweden
[Rdo-list] Fwd: [openstack-community] Official Paris Summit Schedule is Live
by Rich Bowen
FYI, the OpenStack Summit Schedule is now available!
-------- Original Message --------
Subject: [openstack-community] Official Paris Summit Schedule is Live
Date: Tue, 26 Aug 2014 22:25:14 +0200
From: Shari Mahrdt <shari(a)openstack.org>
To: community(a)lists.openstack.org, marketing(a)lists.openstack.org
The official OpenStack Summit Schedule is available here:
https://openstacksummitnovember2014paris.sched.org/
We received an incredible 1,100+ submissions for the Paris Summit, and
had to make some tough decisions for the schedule. The final sessions
were chosen last week and everyone who submitted a proposal was notified
on Friday - August 22, 2014. All accepted and alternate speakers
received free codes to register for the Summit. Email notifications were
sent from events(a)openstack.org or speakermanager(a)fntech.com. Please let
us know if there is anyone who submitted a session but hasn't received a
notification from one of these addresses.
There is also the opportunity to present a Tech Talk in the #vbrownbag
room. The TechTalks offer a forum for community members to give ten-minute
presentations. They have a small in-person audience and will be
video recorded and published to YouTube. To participate, just fill out
the submission form here:
http://openstack.prov12n.com/techtalks-at-openstack-summit-paris/
Please remember that the last day to purchase Summit passes at the
Early Bird rate is this Thursday - August 27, 2014:
https://openstacksummitnov2014.eventbrite.com/
We look forward to seeing you all in Paris!
Cheers,
Shari
Shari Mahrdt
OpenStack Marketing
shari(a)openstack.org
[Rdo-list] Deploying with Heat - Hangout - September 5
by Rich Bowen
Next week, Friday, September 5, 10 am Eastern US time, Lars
Kellogg-Stedman will be presenting a Google Hangout on the subject of
deploying with Heat. This will be streamed live on YouTube at
https://plus.google.com/events/c9u4sjn7ksb8jrmma7vd25aok94 and if the
time is not convenient, you will be able to watch it at that same URL
after the fact.
Come to the #rdo-hangout channel on Freenode IRC for questions and
discussion during and after the event, or come to #rdo at any time for
RDO-related discussion.
--Rich
--
Rich Bowen - rbowen(a)rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon
[Rdo-list] Icehouse : Foreman + Staypuft on Centos 6.5
by 10 minus
Hi,
Has anybody got Staypuft to work? I get a "missing base_hostgroup" error
when I click on New Deployment.
The error is the same regardless of which version I use, from
ruby193-rubygem-staypuft-0.1.2 through ruby193-rubygem-staypuft-0.1.20.
Cheers,
[Rdo-list] icehouse with ML2 : VMs not able to get DHCP on Centos 6.5
by 10 minus
Hi,
My setup:
Controller+network node -- 2 NICs (internal+vm, external)
2x compute -- 2 NICs (internal+vm, external)
I have used packstack to set the environment up.
The VMs on the compute nodes are unable to contact the controller node;
tcpdump shows that the packets never make it to the controller node.
On the compute node:
--snip--
tcpdump -i br-vm | grep -i dhcp
17:25:52.476521 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP,
Request from fa:16:3e:3e:ca:c2 (oui Unknown), length 281
17:27:52.598709 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP,
Request from fa:16:3e:3e:ca:c2 (oui Unknown), length 281
--snip--
On the controller node the above packets never arrive.
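(To check whether the DHCP requests at least leave the compute node with a VLAN tag, something like the following on the physical NIC that backs br-vm should show 802.1Q-tagged frames; ethX here is just a placeholder for whatever interface is attached to br-vm:
tcpdump -n -e -i ethX vlan
If nothing tagged shows up there, the frames are being dropped before they ever reach the wire.)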
Logs from /var/log/neutron/openvswitch-agent.log on the compute node:
--snip--
2014-08-22 17:17:38.793 29698 INFO neutron.agent.securitygroups_rpc
[req-faf30bbb-de0c-4f41-8fcb-cf9f09cfd141 None] Security group member
updated [u'292c5a84-5c31-4158-858d-8261a6ea9680']
2014-08-22 17:18:08.231 29698 WARNING neutron.agent.linux.ovs_lib [-] Found
failed openvswitch port: [u'int-br-ex', [u'map', []], -1]
2014-08-22 17:18:08.348 29698 INFO neutron.agent.securitygroups_rpc [-]
Preparing filters for devices set([u'739aff99-7472-4e5c-921b-095005830f61'])
2014-08-22 17:18:08.391 29698 INFO neutron.openstack.common.rpc.common [-]
Connected to AMQP server on 10.5.0.31:5672
2014-08-22 17:18:09.162 29698 INFO
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Port
739aff99-7472-4e5c-921b-095005830f61 updated. Details: {u'admin_state_up':
True, u'network_id': u'16e331e2-3502-4d72-8a91-8931bb90263c',
u'segmentation_id': 100, u'physical_network': u'tvlan', u'device':
u'739aff99-7472-4e5c-921b-095005830f61', u'port_id':
u'739aff99-7472-4e5c-921b-095005830f61', u'network_type': u'vlan'}
2014-08-22 17:18:09.162 29698 INFO
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Assigning 1 as
local vlan for net-id=16e331e2-3502-4d72-8a91-8931bb90263c
2014-08-22 17:18:09.639 29698 INFO
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Configuration for
device 739aff99-7472-4e5c-921b-095005830f61 completed.
--snip--
Logs from /var/log/neutron/server.log on the controller:
--snip--
2014-08-22 17:18:01.996 3131 INFO neutron.wsgi
[req-6dbeeb06-b98c-4567-b4c5-1003932ea426 None] (3131) accepted
('10.5.0.31', 58207)
2014-08-22 17:18:02.052 3131 INFO neutron.wsgi
[req-53df877f-59bd-48d9-a8c0-ec799ce86677 None] 10.5.0.31 - - [22/Aug/2014
17:18:02] "GET //v2.0/subnets.json HTTP/1.1" 200 1424 0.055183
2014-08-22 17:18:11.554 3131 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 10.5.0.31
2014-08-22 17:18:11.657 3131 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 10.5.0.31
2014-08-22 17:18:11.827 3131 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 10.5.0.31
2014-08-22 17:18:12.048 3131 INFO neutron.notifiers.nova [-] Nova event
response: {u'status': u'completed', u'tag':
u'739aff99-7472-4e5c-921b-095005830f61', u'name': u'network-vif-plugged',
u'server_uuid': u'aaf0838b-d668-457a-b564-b9aa626ea78a', u'code': 200}
2014-08-22 17:18:15.656 3131 INFO neutron.wsgi [-] (3131) accepted
('10.5.0.31', 58217)
.
.
2014-08-22 17:18:29.494 3131 INFO neutron.wsgi
[req-a8c3197a-9ac8-4721-b2c2-7e120c1b2b68 None] 10.5.0.33 - - [22/Aug/2014
17:18:29] "GET
/v2.0/ports.json?network_id=16e331e2-3502-4d72-8a91-8931bb90263c&device_owner=network%3Adhcp
HTTP/1.1" 200 941 0.020400
2014-08-22 17:19:30.697 3131 INFO neutron.wsgi [-] (3131) accepted
('10.5.0.33', 33439)
2014-08-22 17:19:30.945 3131 INFO neutron.wsgi
[req-8bc82373-aa5b-425f-b258-6a75022ece9f None] (3131) accepted
('10.5.0.33', 33442)
2014-08-22 17:19:30.963 3131 INFO neutron.wsgi
[req-fc706978-e642-4057-8fda-9ee53bfddf91 None] 10.5.0.33 - - [22/Aug/2014
17:19:30] "GET /v2.0/subnets.json?id=7667013a-af5f-4171-9797-9dd788fe8461
HTTP/1.1" 200 628 0.017350
2014-08-22 17:19:30.965 3131 INFO neutron.wsgi
[req-fc706978-e642-4057-8fda-9ee53bfddf91 None] (3131) accepted
('10.5.0.33', 33443)
2014-08-22 17:19:30.986 3131 INFO neutron.wsgi
[req-7f86ccd7-3463-4057-be48-1c4deb475238 None] 10.5.0.33 - - [22/Aug/2014
17:19:30] "GET
/v2.0/ports.json?network_id=16e331e2-3502-4d72-8a91-8931bb90263c&device_owner=network%3Adhcp
HTTP/1.1" 200 941 0.020030
2014-08-22 17:20:32.204 3131 INFO neutron.wsgi [-] (3131) accepted
('10.5.0.33', 33444)
--snip--
My plugin.ini on the compute node:
--snip--
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers =openvswitch
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = tvlan:100:110
[ml2_type_gre]
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
bridge_mappings = tvlan:br-vm
network_vlan_ranges = tvlan:100:110
tenant_network_type = vlan
enable_tunneling = False
integration_bridge = br-int
local_ip = 172.16.0.33
--snip--
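One thing I keep double-checking: with a VLAN setup like this, the physical NIC carrying the VM traffic has to be a port on br-vm on every node, compute and network alike, otherwise the tagged frames never leave the box. Something like this should list that NIC on each node (the exact interface name will of course differ per host):
ovs-vsctl list-ports br-vm
ovs-vsctl show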
If I define a fixed IP address, I'm unable to reach the router on the
controller node.
--snip--
tcpdump -i br-vm | grep 172.16.100.254
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-vm, link-type EN10MB (Ethernet), capture size 65535 bytes
18:00:27.637994 ARP, Request who-has 172.16.100.254 tell 172.16.100.5,
length 28
18:00:28.638008 ARP, Request who-has 172.16.100.254 tell 172.16.100.5,
length 28
18:00:29.640179 ARP, Request who-has 172.16.100.254 tell 172.16.100.5,
length 28
18:00:30.638030 ARP, Request who-has 172.16.100.254 tell 172.16.100.5,
length 28
18:00:31.638033 ARP, Request who-has 172.16.100.254 tell 172.16.100.5,
length 28
18:00:32.640302 ARP, Request who-has 172.16.100.254 tell 172.16.100.5,
length 28
18:00:33.638048 ARP, Request who-has 172.16.100.254 tell 172.16.100.5,
length 28
18:00:34.638055 ARP, Request who-has 172.16.100.254 tell 172.16.100.5,
length 28
--snip--
What baffles me is that I'm unable to see the VLAN info.
# My Neutron OVS agent config for the compute node
neutron agent-show 9fa4620b-27e0-4308-a4ef-0bd29bc813f4
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| agent_type | Open vSwitch agent |
| alive | True |
| binary | neutron-openvswitch-agent |
| configurations | { |
| | "tunnel_types": [], |
| | "tunneling_ip": "172.16.0.33", |
| | "bridge_mappings": { |
| | "tvlan": "br-vm" |
| | }, |
| | "l2_population": false, |
| | "devices": 1 |
| | } |
| created_at | 2014-08-21 08:59:59 |
| description | |
| heartbeat_timestamp | 2014-08-22 16:08:10 |
| host | cc03.t10.de |
| id | 9fa4620b-27e0-4308-a4ef-0bd29bc813f4 |
| started_at | 2014-08-22 15:09:40 |
| topic | N/A |
+---------------------+--------------------------------------+
# Controller config
neutron agent-show 8f947289-c8bc-40d6-8ebf-b5a29a5f83bc
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| agent_type | Open vSwitch agent |
| alive | True |
| binary | neutron-openvswitch-agent |
| configurations | { |
| | "tunnel_types": [], |
| | "tunneling_ip": "", |
| | "bridge_mappings": { |
| | "physnet1": "br-ex", |
| | "tvlan": "br-vm" |
| | }, |
| | "l2_population": false, |
| | "devices": 4 |
| | } |
| created_at | 2014-08-20 15:49:14 |
| description | |
| heartbeat_timestamp | 2014-08-22 16:21:39 |
| host | cc01.t10.de |
| id | 8f947289-c8bc-40d6-8ebf-b5a29a5f83bc |
| started_at | 2014-08-21 11:26:17 |
| topic | N/A |
+---------------------+--------------------------------------+
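From the openvswitch-agent log above, local VLAN 1 on the compute node should be getting translated to segmentation_id 100 on br-vm. If I read the agent behaviour correctly, that translation shows up as mod_vlan_vid flows, so dumping the flows on both bridges should at least make the VLAN info visible:
ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-vm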
Any pointers to fix the issue?
[Rdo-list] instance can't connect to neutron
by Zhao, Xin
Hello,
I am setting up a 3-node Icehouse testbed on RHEL 6.5 using RDO; the
testbed has one controller node, one network node and one compute node.
I use the ML2 plugin with the OVS mechanism driver and VLAN type.
When I start an instance, it fails. In the compute node's nova.log file
there are the following error messages:
2014-08-11 17:05:44.234 25860 WARNING nova.compute.manager [-]
[instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed network
setup (attempt 1 of 3)
2014-08-11 17:05:45.240 25860 WARNING nova.compute.manager [-]
[instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed network
setup (attempt 2 of 3)
2014-08-11 17:05:47.254 25860 ERROR nova.compute.manager [-] Instance
failed network setup after 3 attempt(s)
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager Traceback (most
recent call last):
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1504,
in _allocate_network_async
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager
dhcp_options=dhcp_options)
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line
259, in allocate_for_instance
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager net_ids)
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line
128, in _get_available_networks
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager nets =
neutron.list_networks(**search_opts).get('networks', [])
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line
111, in with_params
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager ret =
self.function(instance, *args, **kwargs)
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line
333, in list_networks
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager **_params)
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line
1250, in list
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager for r in
self._pagination(collection, path, **params):
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line
1263, in _pagination
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager res =
self.get(path, params=params)
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line
1236, in get
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager
headers=headers, params=params)
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager File
"/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py", line
1228, in retry_request
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager raise
exceptions.ConnectionFailed(reason=_("Maximum attempts reached"))
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager
ConnectionFailed: Connection to neutron failed: Maximum attempts reached
2014-08-11 17:05:47.254 25860 TRACE nova.compute.manager
2014-08-11 17:05:49.069 25860 WARNING nova.virt.disk.vfs.guestfs
[req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea
f6bd9769708b4fbe971a616143c6959f eea3753cf3ce471ba60c434e7382750c]
Failed to close augeas aug_close: do_aug_close: you must call 'aug-init'
first to initialize Augeas
2014-08-11 17:05:49.222 25860 ERROR nova.compute.manager
[req-a534fed9-ebea-4a61-8064-ff3d3db2e6ea
f6bd9769708b4fbe971a616143c6959f eea3753cf3ce471ba60c434e7382750c]
[instance: fdaba1ab-728b-4352-89b1-57f302496a07] Instance failed to spawn
On the controller node and network node, I don't see many errors in the
Neutron service log files. I can connect to the (standalone) DB from the
network node using the username/password from the neutron.conf file.
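As far as I understand, nova-compute talks to the Neutron API directly via the neutron_url set in nova.conf on the compute node, so a basic reachability check from the compute node would be something like this (the controller address and the default port 9696 are my assumptions):
grep neutron_url /etc/nova/nova.conf
curl -i http://<controller>:9696/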
Here are the relevant rpms on the compute node:
openstack-utils-2014.1-3.el6.noarch
openstack-neutron-openvswitch-2014.1.1-8.el6.noarch
openstack-neutron-ml2-2014.1.1-8.el6.noarch
openstack-nova-compute-2014.1.1-3.el6.noarch
openstack-neutron-2014.1.1-8.el6.noarch
openstack-nova-common-2014.1.1-3.el6.noarch
openstack-selinux-0.1.3-2.el6ost.noarch
python-neutronclient-2.3.4-1.el6.noarch
python-neutron-2014.1.1-8.el6.noarch
Any idea what went wrong?
Thanks a lot,
Xin
[Rdo-list] [QA] Tempest - is volume testing actually testing anything with block storage?
by Kaul, Yaniv
(Some of you may know me from my previous work at Red Hat - good to see some familiar faces!)
I'm working on testing a Cinder driver for OpenStack - IceHouse, Havana, and Juno (regretfully, in that order).
I've quickly found out Tempest is not really testing much for real (example: if I don't configure iSCSI, I still pass all tests but 5 of them!), so I'm doing some 'manual' tests.
I've discovered a few issues; I'm not sure where to file them upstream (Glance/Cinder/libvirt/etc.), so I'll send them over the mailing list (or upstream, but I was hoping for a low-volume mailing list).
1. Is there any test actually booting or running the VMs from the block storage? I have it configured correctly (I think), but none of the relevant tempest.api.volume* tests actually do much with it. Volumes are created, mapped, snapshotted, removed, etc., but nothing is really written to them...
2. Is there a way to test multi-backend for real? Looks like in Tempest there's a single 'storage_protocol' entry?
3. Is there a way to configure Nova and friends to use the block storage as much as possible?
- Can I somehow get rid of the 'base'? I don't need it if I can have the base as volumes on the block storage.
- I've found out that Glance does not really support Cinder (the 'raise NotImplementedError' under add() gave it away). Unless I misunderstood something, I'm not sure why it's documented everywhere.
(I've 'hacked' around it by creating /var/lib/glance/image_block and pointing Glance at it. That in turn is a mount on a multipathed LUN. It works, but Glance is regretfully copying files in 4K chunks for some reason, which is horrible performance-wise.)
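Concretely, by pointing Glance at that directory I mean something along these lines in glance-api.conf, with the path as above and nothing else tuned:
default_store = file
filesystem_store_datadir = /var/lib/glance/image_block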
- Conversion to raw uses only the first path in multipath (again, horrible performance-wise; by default Nova is regretfully configured this way too, unless use_multipath_for_image_xfer is set to True).
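For reference, if I'm reading the option docs right that flag lives in cinder.conf (globally or per backend), i.e. something like:
[DEFAULT]
use_multipath_for_image_xfer = True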
I'm using the forked Tempest from https://github.com/redhat-openstack/tempest on CentOS 6.5 (I could not install Icehouse on CentOS 7, a known issue I believe), with Icehouse (hoping to see RDO Juno packages soon!).
TIA,
Y.