Intro This is the first post on how to track packet flows in OpenStack Neutron. I want to focus here on packet flows to/from virtual machines in a scenario with the ML2 core plugin and the openvswitch mechanism driver. Some of the demonstrated steps and commands require access to the OpenStack API, and others require root privileges on the nodes of the OpenStack cluster. For a detailed description of how the Neutron agents work together, please refer to the upstream Neutron Documentation.
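As a minimal sketch of the kind of commands involved (the VM name "my-vm" is a placeholder, and the bridge name assumes the default openvswitch agent configuration), locating a VM's Neutron port and its tap interface on the compute node could look like this:

```
# Requires OpenStack API access: list the Neutron port(s) attached to the VM
openstack port list --server my-vm

# Requires root on the compute node: with the openvswitch mechanism driver,
# the VM's tap device (named "tap" + the first 11 characters of the port UUID)
# is plugged into the integration bridge br-int
ovs-vsctl list-ports br-int | grep ^tap

# Dump the OpenFlow rules which packets traverse on br-int
ovs-ofctl dump-flows br-int
```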
I have been working on the OpenStack networking project (Neutron) for about 6 years now. First I worked for the public cloud provider OVH, where we built OpenStack-based public cloud services; I was part of the team which built a custom, BGP-based backend for Neutron. Later I joined Red Hat's OpenStack networking team, where I still work today. In both companies I have dealt with OpenStack networking issues which are in many cases quite similar to each other.
eBPF - what is it? eBPF stands for extended Berkeley Packet Filter, and it lets you write and run eBPF programs in a virtual machine inside the Linux kernel. More about it can be found, for example, on Brendan Gregg's blog, which I used to learn the eBPF basics and which I think is a really good place to start. eBPF itself is the "engine" which runs the programs, but there are also other tools (frameworks) which make writing eBPF programs easier, as the sketch below shows.
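To give a small example of such a tool (assuming bpftrace is installed; the traced kernel function is just an arbitrary choice), a one-liner which prints every process opening a new TCP connection could look like this:

```
# Requires root: bpftrace compiles this into an eBPF program, loads it into
# the kernel, attaches it to the tcp_connect kernel function and prints the
# command name and PID of each caller
bpftrace -e 'kprobe:tcp_connect { printf("%s (pid %d) is opening a TCP connection\n", comm, pid); }'
```

This illustrates the point above: you get a working eBPF program from one line of a high-level script, without writing any kernel code yourself.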
Installation of OpenShift on an OpenStack cloud During Red Hat's last "Day of Learning", which was a great opportunity for me to spend a whole day learning something new, I chose to learn a bit about installation and management of an OpenShift cluster. This post is mostly a note to myself on what I did during that training. I was using OpenShift 4.6.1 and I installed it on an OpenStack based cloud.
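For context, the installer-driven flow boils down to roughly the two commands below (a sketch; "my-cluster" is a placeholder directory, and the interactive wizard asks for the OpenStack cloud, flavors, networks and pull secret):

```
# Generate install-config.yaml interactively; choose "openstack" as the platform
openshift-install create install-config --dir=my-cluster

# Run the actual installation based on that config
openshift-install create cluster --dir=my-cluster --log-level=info
```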
Yet another virtual PTG took place between the 26th and 30th of October. The Neutron team had sessions on each day of the PTG. Below is a summary of what we discussed and agreed on during that time. The etherpad with notes from the discussions can be found at OpenDev Etherpad. Retrospective of the Victoria cycle Among the good things in the Victoria cycle, the team pointed to: completing 8 blueprints, including Metadata over IPv6; improved feature parity in the OVN driver; and good review velocity. Among the not-so-good things we mentioned:
Retrospective Among the good things, the team mentioned that the migration of the networking-ovn driver into core Neutron went well. Our CI stability also improved in the last cycle. Another good thing was that we implemented all of the community goals required in this cycle, and we even migrated almost all jobs to the Zuul v3 syntax already. Not so good was the progress on some important blueprints, like the adoption of the new engine facade. The other thing mentioned here was the activity in the stadium projects and in neutron-lib.
Short background For the past few years I have been an OpenStack Neutron contributor, core reviewer and now even the project's PTL. One of my responsibilities in the community is taking care of Neutron's CI system. As part of this job I have to constantly check how the various CI jobs are doing and whether the reasons for their failures are related to the patch being tested or to the CI itself.
This is my summary of the OpenStack PTG which took place in Shanghai in November 2019. It is a brief summary of all the discussions we had in the Neutron room during the 3-day event. On boarding Slides from the onboarding session can be found here. In my opinion the onboarding went well; there were around 20 (or even more) people in the room during this session. Together with Miguel Lavalle we gave a talk about