hi all,
during redeployment, I have faced this issue [1]
it looks like at the compute deployment step the compute node cannot communicate with the VIM.
yet with a regular ping I can reach it.
and with a regular curl to http://IP:5000/v3 I get JSON:
[root@rem0te-compr-0 heat-admin]# podman exec -it c3515b7d46fe curl http://10.120.129.202:5000/v3
{"version": {"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "http://10.120.129.202:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}}
[root@rem0te-compr-0 heat-admin]#
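
What I still want to rule out is that nova-compute is configured with a different endpoint than the one I curl. This is roughly what I am checking (the container name and the rendered config path are just what I expect on a standard TripleO compute, they may differ):

# endpoints nova-compute is actually configured to use
podman exec nova_compute grep -E 'auth_url|www_authenticate_uri|transport_url' /etc/nova/nova.conf
# the same file as rendered on the host
grep -E 'auth_url|www_authenticate_uri|transport_url' /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf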
Also, in a tcpdump I can see that I receive a reply, and even on the compute I see the
reply coming in...
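This is roughly how I am capturing it, in case I am looking at the wrong thing (the ports are keystone and the message queue, the filter is just an example):

tcpdump -ni any 'port 5000 or port 5672'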
I am lost. Any ideas?
I am using L3 routed networks [2]
And these OSP deployment files: [3]
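Since the networks are routed, I am also double-checking which route and source address the compute picks for the internal API VIP (the address is from my setup):

ip route get 10.120.129.202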
[1] http://paste.openstack.org/show/coo2bB418Ik1uiWjEcPn/
[2] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/featu...
[3] https://github.com/qw3r3wq/homelab/tree/master/overcloud
On Wed, 24 Jun 2020 at 20:06, Ruslanas Gžibovskis <ruslanas(a)lpic.lt> wrote:
yes, will check and add it later, once back home.
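(for the logs, I guess something like this run on the compute node will be enough, the archive name is just an example:
tar czf /tmp/$(hostname -s)-var-log-containers.tar.gz /var/log/containers
)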
On Wed, 24 Jun 2020, 21:02 Arkady Shtempler, <ashtempl(a)redhat.com> wrote:
> Hi Ruslanas!
>
> Is it possible to get all logs under /var/log/containers somehow?
>
> Thanks!
>
> On Wed, Jun 24, 2020 at 2:18 AM Ruslanas Gžibovskis <ruslanas(a)lpic.lt>
> wrote:
>
>> Hi Alfredo,
>>
>>>> Compute nodes are baremetal or virtualized? I've seen similar bug
>>>> reports when using nested virtualization in other OSes.
>>>>
>>> baremetal. Dell R630, to be VERY precise.
>>
>>
>>
>>> When using podman, the recommended way to restart containers is using
>>> systemd:
>>>
>>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deplo...
>>>
>>
>> Thank you, I will try. I also modified a file, and it looked like it
>> relaunched the podman container once the config was changed. Either way, if I
>> understand Linux config files correctly, the default value for user and group
>> is root when they are commented out:
>> #user = "root"
>> #group = "root"
>>
>> also, in some logs I saw that it detected that it is not an AMD CPU :)
>> and it really is not an AMD CPU.
>>
>>
>> Just for fun (though it might be important), here is how my node info looks:
>> ComputeS01Parameters:
>>   NovaReservedHostMemory: 16384
>>   KernelArgs: "crashkernel=no rhgb"
>> ComputeS01ExtraConfig:
>>   nova::cpu_allocation_ratio: 4.0
>>   nova::compute::libvirt::rx_queue_size: 1024
>>   nova::compute::libvirt::tx_queue_size: 1024
>>   nova::compute::resume_guests_state_on_host_boot: true
>> _______________________________________________
>> users mailing list
>> users(a)lists.rdoproject.org
>> http://lists.rdoproject.org/mailman/listinfo/users
>>
>> To unsubscribe: users-unsubscribe(a)lists.rdoproject.org
>>
>
--
Ruslanas Gžibovskis
+370 6030 7030