[Rdo-list] Simplest Icehouse Implementation Architecture

Eric Berg eberg at rubensteintech.com
Tue Jun 3 14:55:43 UTC 2014


I have performed this installation and now have a control host and one 
compute host, but am not sure of a few things:

 1. First, I believe that I need nova-network running on each compute
    host to avoid routing all traffic through a dedicated network host,
    but I'm not sure how to check that the networking service is
    actually running on my compute host (see the first sketch below
    this list).
 2. Lars helped me set up a single-host deployment, which put my
    instances on our 192.168.0.0/16 network using an OVS bridge
    (br-ex) that owns eth0 and carries the host's IP, but I'm not sure
    how that relates to this new setup.  Should I create the same type
    of bridged connection on each compute host?  (The second sketch
    below shows how I've been inspecting that bridge.)
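
For (1), here's roughly what I was planning to run to verify the
service (assuming the standard RDO/Icehouse service name
openstack-nova-network; adjust for your init system):

    # On the compute host itself:
    sudo service openstack-nova-network status      # SysV init (EL6)
    sudo systemctl status openstack-nova-network    # systemd (EL7/Fedora)

    # Or from the controller, list all nova services and their state:
    sudo nova-manage service list
    # nova-network should show ':-)' (alive) for each compute host

Does that sound like the right check?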
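
And for (2), this is how I've been inspecting the existing br-ex
arrangement on the single-host box, in case it helps frame the
question (assuming the Open vSwitch and iproute tools are installed):

    # Show the OVS bridges and their ports; eth0 should appear
    # as a port on br-ex:
    sudo ovs-vsctl show

    # Confirm the host's IP now lives on the bridge, not on eth0:
    ip addr show br-ex
    ip addr show eth0

What I can't tell is whether each compute host needs the same
arrangement.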

I have done quite a bit of searching and reading, but have yet to find 
any documents that clearly lay out how this networking should work.  
Obviously, with OpenStack in general, there are many different 
implementations with different needs, but I feel like there's very 
little documentation to get you through network configuration beyond 
the basic installation.

If I'm missing something obvious, I'd appreciate it if anybody could 
provide pointers.  In the meantime, I'm hoping for some help.

Thanks.

Eric



On 5/30/14, 1:31 PM, Eric Berg wrote:
> Thoughts, anyone?
>
> I'm moving forward with the following:
>
> packstack --install-hosts=192.168.0.37,192.168.0.39
>
> and will add another compute host in the future.  Still thinking about 
> what the network should look like, but I'm probably overthinking it 
> for a change.
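>
> (My working assumption is that packstack treats the first IP in
> --install-hosts as the controller and the rest as compute nodes.  To
> add the extra compute host later, I believe the usual route is to edit
> the answer file that packstack generates and re-run it, roughly:
>
>     # append the new host's IP to CONFIG_COMPUTE_HOSTS and list the
>     # already-installed hosts in EXCL_SERVERS so they aren't re-provisioned
>     vi packstack-answers-<timestamp>.txt
>     packstack --answer-file=packstack-answers-<timestamp>.txt
>
> Corrections welcome if I have that wrong.)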
>
> On 5/29/14, 4:04 PM, Eric Berg wrote:
>> Thanks as always, Lars.
>>
>> By "development environment", I mean several things:
>>
>> 1) Developers work on these hosts.  We're a web shop, and one or more 
>> developers will spin up dev web servers on these hosts.
>> 2) Ideally, I'd also want to validate our production cloud 
>> environment so that when we deploy it in production, we have 
>> validated the configuration.
>>
>> For the time being, however, #2 is a nice-to-have and does not at all 
>> seem to fit in with the fairly aggressive goal of implementing a new 
>> RDO deployment in 1-3 days (way over that already as you might well 
>> imagine).
>>
>> So, basically, I want to migrate from the current set of physical 
>> hosts on which developers now work to a cloud environment which will 
>> host no more than 25 VMs.
>>
>> Since we have two fairly well-endowed hosts targeted for use as 
>> compute hosts, would it be realistic to use one as the controller, 
>> while still using it as a compute host?
>>
>> On a related note, what happens if I lose the controller box in this 
>> two-compute-hosts-one-as-controller scenario?  I believe that I'm out 
>> of business until I can remedy that.  And if I wanted to set up both 
>> hosts as compute hosts and also put some kind of HA in place, so that 
>> control could pass from one box to the other, would that be possible? 
>> Recommended?
>>
>> Must the control host be separate in order to do (live) migrations?
>>
>> Is it a requirement that the control host be separate if I want to 
>> deploy 2 compute hosts?
>>
>> And, if I choose the two-host solution, how does the network host 
>> (through which, as I understand it, all network traffic to the 
>> instances must pass) play into this?
>>
>> Eric
>>
>> On 5/29/14, 3:39 PM, Lars Kellogg-Stedman wrote:
>>> On Thu, May 29, 2014 at 03:31:09PM -0400, Eric Berg wrote:
>>>> So, is either of the following architectures sufficient for a
>>>> development environment?
>>> Depending on your definition of "development environment", a *single*
>>> host may be sufficient.  It really depends on how many instances you
>>> expect to support, of what size, and what sort of workloads you'll be
>>> hosting.
>>>
>>> Having a separate "control" node makes for nice logical separation of
>>> roles, which I find helpful in diagnosing problems.
>>>
>>> Having more than one compute node lets you experiment with things like
>>> instance migration, etc., which may be useful if you eventually plan to
>>> move to a production configuration.
>>>
>>
>

-- 
Eric Berg
Sr. Software Engineer
Rubenstein Technology Group
55 Broad Street, 14th Floor
New York, NY 10004-2501

(212) 518-6400
(212) 518-6467 fax
eberg at rubensteintech.com
www.rubensteintech.com
