Author Archives: numans

About numans

I am a Principal Software Engineer at Red Hat, Bangalore. I contribute to OVN (part of OVS) and the OpenStack Neutron ML2 driver for OVN. Before working on OVN, I contributed to OpenContrail and OpenStack Neutron. Twitter: @numansiddique

How to create an Open Virtual Network distributed gateway router

Note: This article was originally published here

In this article, I discuss external connectivity in Open Virtual Network (OVN), a subproject of Open vSwitch (OVS), using a distributed gateway router.

OVN provides external connectivity in two ways:

  • A logical router with a distributed gateway port, which is referred to as a distributed gateway router in this article
  • A logical gateway router

In this article, you will see how to create a distributed gateway router and an example of how it works.

Creating a distributed gateway router has some advantages over using a logical gateway router for the CMS (cloud management system):

  • It is easier to create a distributed gateway router because the CMS doesn’t need to create a transit logical switch, which is needed for a logical gateway router.
  • A distributed gateway router supports distributed north/south traffic, whereas the logical gateway router is centralized on a single gateway chassis.
  • A distributed gateway router supports high availability.
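Concretely, with the ovn-nbctl utility, creating a distributed gateway router boils down to binding a router port to a gateway chassis. A minimal sketch (the router, port, MAC, subnet and chassis names are all made-up examples; in recent OVN releases the lrp-set-gateway-chassis command does the binding):

```shell
# Create a logical router and a router port facing the external network.
ovn-nbctl lr-add lr0
ovn-nbctl lrp-add lr0 lr0-public 00:00:02:01:02:03 172.16.1.1/24

# Binding the port to a chassis is what turns it into a *distributed
# gateway port*; the trailing number is the HA priority.
ovn-nbctl lrp-set-gateway-chassis lr0-public chassis-1 20
```

These commands only write rows into the OVN northbound database, so they can be run from anywhere ovn-nbctl can reach the NB DB.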

Note: The CMS can be OpenStack, Red Hat OpenShift, Red Hat Virtualization, or any other system that manages a cloud.


Debugging OVN external connectivity – Part 1

In OVN deployments (with OpenStack or otherwise), I have often faced issues with external (north/south) connectivity to and from the VMs, and most of the time the cause is a misconfiguration in the OVN databases. So I thought of writing this post.

I assume that the reader is familiar with the basic OVN architecture. The end of this post has links to some tutorials and blog posts on OVN.

OVN provides external connectivity in two ways:

  • Creating a logical gateway router. I recommend reading this excellent blog to know more about it –
  • Adding a logical gateway router port to a logical router.
    • This can again be configured as HA or non-HA. If HA is enabled, the gateway router port is scheduled on multiple chassis, with one acting as master. If the master fails for some reason, another chassis takes over.

In this blog post I will concentrate on the logical gateway router port with no HA. In the next blog post, I intend to cover the HA scenario.

What does scheduling mean here? It means that the chassis selected to host the gateway router port provides the centralized external connectivity. North/south tenant traffic is first redirected to this chassis, which acts as the gateway.

I will take OpenStack as an example here. Let’s say you have a private network “private” with subnet – and a VM port is created with IP – The private network is attached to a neutron router “r1” and a gateway is added to it.

$ openstack network create private
$ openstack subnet create --network private --subnet-range private-subnet
$ openstack router create r1
$ openstack router add subnet r1 private-subnet
$ openstack network create public --external --provider-network-type vlan --provider-segment 10 --provider-physical-network datacentre
$ openstack subnet create --network public --subnet-range --allocation-pool start=,end= --no-dhcp public-subnet
$ openstack router set --external-gateway public r1
$ openstack port create --network private vm1
$ openstack floating ip create --port vm1 public

When we run “ovn-nbctl show” we see the output below. In my case, the OVN databases are running on a node with IP with port 6641 for the OVN northbound db and port 6642 for the OVN southbound db.

# ovn-nbctl --db=tcp: show
switch 9315d386-d612-49b2-8e90-0e69a29af331 (neutron-3e992fef-0d27-4a51-b85a-fa0d1aac48d4) (aka public)
 port 3e29a3ce-8113-47c4-909d-99586f510be6
 type: localport
 addresses: ["fa:16:3e:7d:1e:60"]
 port 4bbf7c7d-fdf0-444d-af04-68abf0cdb9c9
 type: router
 router-port: lrp-4bbf7c7d-fdf0-444d-af04-68abf0cdb9c9
 port provnet-3e992fef-0d27-4a51-b85a-fa0d1aac48d4
 type: localnet
 tag: 10
 addresses: ["unknown"]
switch 78cb4087-716f-4674-9279-9f5a8a4e251a (neutron-a004a804-436a-4175-b764-17c69e016247) (aka private)
 port 31b9cd7f-573b-4b4b-9d6a-8c9ebe968e80 (aka vm1)
 addresses: ["fa:16:3e:48:15:ef"]
 port 2a36a3e8-6490-479f-af38-e6b86d4800a1
 type: localport
 addresses: ["fa:16:3e:c8:06:ce"]
 port 16ec2d28-a9b1-492a-885e-ba6a18f731a0
 type: router
 router-port: lrp-16ec2d28-a9b1-492a-885e-ba6a18f731a0
router 207b380c-4f66-412b-979f-f0696ed1832b (neutron-82572bd3-f790-413b-b83b-942b2d23f9d2) (aka r1)
 port lrp-4bbf7c7d-fdf0-444d-af04-68abf0cdb9c9
 mac: "fa:16:3e:93:7f:f0"
 networks: [""]
 port lrp-16ec2d28-a9b1-492a-885e-ba6a18f731a0
 mac: "fa:16:3e:97:e5:c6"
 networks: [""]
 nat 343cfa84-9ef2-4fa6-996f-3c2eb97eaafa
 external ip: ""
 logical ip: ""
 type: "dnat_and_snat"
 nat 8e43b8a4-9a04-4fd2-adc1-57bf70dd062c
 external ip: ""
 logical ip: ""
 type: "snat"

Step 1. Get the list of chassis in your deployment

In OVN terminology, a chassis is simply a node where the ovn-controller service is running. The ovn-controller service running on each chassis connects to the southbound database, and an entry is created for each chassis in the southbound db.
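If you are unsure which chassis entry corresponds to which node, the chassis name is simply the system-id of the local Open vSwitch instance, which you can read on the node itself:

```shell
# Run on the node in question; the value printed should match one of
# the Chassis names in the "ovn-sbctl show" output.
ovs-vsctl get open . external_ids:system-id
```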

Run “ovn-sbctl show”

In my case, I get the below output

# ovn-sbctl --db=tcp: show
Chassis "771bfd23-8a81-4685-b759-bb4d7d542282"
 hostname: "overcloud-novacompute-1.novalocal"
 Encap geneve
 ip: ""
 options: {csum="true"}
Chassis "116e3e4f-3ae1-4788-a300-b902b019530b"
 hostname: "overcloud-controller-0.novalocal"
 Encap geneve
 ip: ""
 options: {csum="true"}
Chassis "58e05e13-bc58-4afc-b975-88b13c9b38cf"
 hostname: "overcloud-controller-1.novalocal"
 Encap geneve
 ip: ""
 options: {csum="true"}
 Port_Binding "cr-lrp-4bbf7c7d-fdf0-444d-af04-68abf0cdb9c9"
Chassis "3c3f0f21-8cc9-4668-8a11-c3aebe5bbda3"
 hostname: "overcloud-novacompute-2.novalocal"
 Encap geneve
 ip: ""
 options: {csum="true"}
Chassis "f7479467-cfea-49a2-a662-8c87bf69380e"
 hostname: "overcloud-controller-2.novalocal"
 Encap geneve
 ip: ""
 options: {csum="true"}
Chassis "b8d08aa0-0486-403a-851b-366e45416c51"
 hostname: "overcloud-novacompute-0.novalocal"
 Encap geneve
 ip: ""
 options: {csum="true"}

Step 2: Verify ovn-bridge-mappings on all your chassis.

Make sure that ovn-bridge-mappings are configured in your chassis.

In order for a chassis to provide external connectivity, ovn-controller expects “ovn-bridge-mappings” to be configured. You can verify ovn-bridge-mappings settings by running the below command in the chassis.

# ovs-vsctl get open . external_ids:ovn-bridge-mappings

In my case it returns “datacentre:br-ex”. Grep for ovn-bridge-mappings in the ovn-controller documentation for more information about it. In case the above command returns an error and you want that chassis to provide external connectivity, configure ovn-bridge-mappings by running

# ovs-vsctl set open . external_ids:ovn-bridge-mappings="datacentre:br-ex"

“datacentre:br-ex” is just an example. Also create the OVS bridge “br-ex” if it is not present.

Step 3: Get the scheduled chassis of the gateway router port

Next step is to figure out where the gateway router port is scheduled. The chassis on which the gateway router port is scheduled acts as the gateway for the tenant traffic.

First, get the name of the logical router gateway port by running the below command. happens to be the gateway IP attached to the router in my case. You can figure it out by running “openstack router show r1”.

[root@overcloud-controller-0 heat-admin]# ovn-nbctl --db=tcp: show | grep -B3
router 207b380c-4f66-412b-979f-f0696ed1832b (neutron-82572bd3-f790-413b-b83b-942b2d23f9d2) (aka r1)
 port lrp-4bbf7c7d-fdf0-444d-af04-68abf0cdb9c9
 mac: "fa:16:3e:93:7f:f0"
 networks: [""]

If you look into the options column, you will see that the gateway port is scheduled on the chassis “116e3e4f-3ae1-4788-a300-b902b019530b”, which is “overcloud-controller-0.novalocal” in my case. You will see another option, “gateway_chassis”. If that is set, then the gateway port is scheduled on multiple chassis with HA configured. Let’s assume the “gateway_chassis” column is empty for now. If the “options” column is empty, it means the gateway router port is not scheduled. In the case of OpenStack this should not happen; in the case of other CMSs (cloud management systems), the CMS is expected to set this column. You can also schedule it manually; see step 4.
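Rather than scanning the full “ovn-nbctl show” output, you can also read the options column of the logical router port directly (a sketch; substitute your own port name and add the same --db option as above):

```shell
# Prints something like {redirect-chassis="116e3e4f-..."} when the
# port is scheduled, and {} when it is not.
ovn-nbctl get logical_router_port lrp-4bbf7c7d-fdf0-444d-af04-68abf0cdb9c9 options
```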

Step 4: Schedule the gateway router port if required

This step will be required either if “options” column was empty in step 3 or the gateway router port was scheduled on a chassis which doesn’t provide external connectivity. So you want to reschedule it to another chassis which provides external connectivity. Select a chassis where you want to schedule. Make sure that it has ovn-bridge-mappings configured. If you are facing the external connectivity issue with your tenant traffic, then this is most likely the cause and you need to fix it here.

Let’s say you want to select the chassis 58e05e13-bc58-4afc-b975-88b13c9b38cf (overcloud-controller-1.novalocal).

[root@overcloud-controller-0 heat-admin]# ovn-nbctl --db=tcp: set logical_router_port 528f0224-c016-4560-a122-3bb12bbdef1c options:redirect-chassis=58e05e13-bc58-4afc-b975-88b13c9b38cf

Run the command in step 3 again to verify.

Following the above steps should provide external connectivity to your tenant traffic. If it still doesn’t work, it is most likely a bug in OVN. Please report it to the OVS mailing list.


In this blog post we saw how to inspect the OVN databases to figure out the issue when external connectivity is broken for your tenant traffic. In the next blog post we will see how to fix issues in the HA scenario.

Links to OVN blogs and tutorials

Native DHCP support in OVN

Recently native DHCP support has been added to OVN. In this post we will see how native DHCP is supported in OVN and how it is used by OpenStack Neutron OVN ML2 driver. The code which supports native DHCP can be found here and here.

Please see this to understand the architecture of OVN and its services. In brief, OVN has a service called ovn-northd which generates logical flows based on the OVN northbound database state. The OVN northbound database is populated by the OVN ML2 neutron driver. OVN has another service called ovn-controller which runs on each compute host. ovn-controller translates the logical flows generated by ovn-northd into OpenFlow flows and adds them to the integration bridge (br-int) managed by the local ovs-vswitchd instance.

I recommend reading this blog and this as I found them to be very useful along with the ovn-architecture man page if you are curious to know more about OVN.

OpenStack Neutron supports DHCP and provides IP addresses to the VMs using the Neutron dhcp agent. The dhcp agent can be configured to run on multiple nodes. When a VM boots up, it sends DHCP discover broadcast packets which are received by “dnsmasq” (spawned and configured by the dhcp agent), which looks up its configuration and sends a DHCP reply packet with the appropriate IP address. In the OpenStack world, IPv4 addresses are assigned when the neutron port is created, so it is easy to send the DHCP reply packet with the appropriate IPv4 address.

Until the native DHCP support was added into OVN, we were relying on the dhcp agent to support DHCP in the OpenStack environment.

Advantages of having native DHCP support are

  • We don’t need to rely on the dhcp agent, so no namespace and no ‘dnsmasq’ instance per virtual network is needed.
  • The dhcp agent approach is not completely distributed; if the dhcp agent is down, the VMs might not get DHCP replies.
  • Native DHCP is completely distributed: the ovn-controller running on each compute node handles the DHCP requests from the locally hosted VMs.

A little about Continuations feature of OVS

Native DHCP is supported using an OVS feature called “Continuations”, available starting with the OVS 2.6 release. Please see here for detailed information.

“Continuations” provides an OpenFlow message called “NXT_PACKET_IN2”, which has a flag called “pause” and a field called “userdata”. With the “pause” flag set, the controller, when it receives a packet as a packet-in, can inspect the packet, modify it if required, and send it back to the switch, which resumes the pipeline from the point where it was interrupted.

Native DHCP details

Lets see some details on how native DHCP is supported in OVN.

When ovn-controller receives a DHCP request packet, in order to send a DHCP reply it needs to know:

  • the IPv4 address to be offered
  • the DHCP options to be added in the DHCP reply packet

The OVN northbound database has a new table called “DHCP_Options” which is used to define sets of DHCP options. In the Logical_Switch_Port table, a new column called “dhcpv4_options” has been added which refers to a DHCP_Options row. In order to make use of the native DHCPv4 feature, the CMS (cloud management system) is expected to define DHCP options for each of the logical ports.
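Outside a CMS, this can be sketched directly with ovn-nbctl (the CIDR, option values and port name below are made-up examples; the generic “create” syntax is used here rather than any helper commands):

```shell
# Create a DHCP_Options row and remember its UUID.
uuid=$(ovn-nbctl create DHCP_Options cidr=10.0.0.0/24 \
  options='"server_id"="10.0.0.1" "server_mac"="fa:16:3e:96:22:da" "lease_time"="43200" "router"="10.0.0.1"')

# Point the logical switch port at that DHCP_Options row.
ovn-nbctl lsp-set-dhcpv4-options port1 $uuid
```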

ovn-northd then adds logical flows to send the DHCP replies for each logical port which has an IPv4 address and DHCP options defined. ovn-northd adds two new stages in the ingress logical pipeline – “ls_in_dhcp_options” followed by “ls_in_dhcp_response” where these logical flows are added.

Let’s say we have a logical port with name “port1” configured with IPv4 address – "" and the following DHCP options defined – lease_time="43200", mtu="1442", router="", server_id="" and server_mac="fa:16:3e:96:22:da".

ovn-northd will add the below logical flows

table=10(ls_in_dhcp_options ), priority=100  , match=(inport == "port1" && eth.src == fa:16:3e:50:47:62 && ip4.src == && ip4.dst == && udp.src == 68 && udp.dst == 67), action=(reg0[3] = put_dhcp_opts(offerip =, netmask =, router =, mtu = 1442, server_id =, lease_time = 43200); next;)

table=11(ls_in_dhcp_response), priority=100  , match=(inport == "port1" && eth.src == fa:16:3e:50:47:62 && ip4.src == && ip4.dst == && udp.src == 68 && udp.dst == 67 && reg0[3]), action=(eth.dst = eth.src; eth.src = fa:16:3e:96:22:da; ip4.dst =; ip4.src =; udp.src = 67; udp.dst = 68; outport = inport; flags.loopback = 1; output;)

The OVN action “put_dhcp_opts” transforms the DHCP request packet into a reply packet, adds the DHCP options defined and stores 1 in the ovs register reg0 bit 3. If the packet is invalid, it leaves the packet unchanged and stores 0 in the ovs register reg0 bit 3.

In order to understand how this action transforms the DHCP request packet into the reply packet, let’s see the corresponding OpenFlow flow.

table=26,priority=100,udp,reg14=0x3,metadata=0x4,dl_src=fa:16:3e:b9:ce:e0, nw_src=,nw_dst=,tp_src=68,tp_dst=67 actions=controller(userdata=,pause),resubmit(,27)

As you see above, the action “put_dhcp_opts” translates into controller action with “pause” flag set and the DHCP options stored in the “userdata” field.

When a DHCP request packet is received, ovs-vswitchd sends it to ovn-controller. ovn-controller extracts the offer IP and the DHCP options from the “userdata” field, frames a DHCP reply packet with these DHCP options, stores 1 in the appropriate ovs register bit, and sends the packet back to the switch. How would ovn-controller know which register bit to use? That is also stored in the “userdata” field.

On receiving the packet back, ovs-vswitchd resumes the pipeline and executes the next stage, “ls_in_dhcp_response”.

table=27,priority=100,udp,reg0=0x8/0x8,reg14=0x1,metadata=0x4,dl_src=fa:16:3e:50:47:62, nw_src=,nw_dst=,tp_src=68,tp_dst=67 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:fa:16:3e:96:22:da,mod_nw_dst:,mod_nw_src:,mod_tp_src:67,mod_tp_dst:68,move:NXM_NX_REG14[]->NXM_NX_REG15[],load:0x1->NXM_NX_REG10[0],resubmit(,32)

The OVN actions “outport = inport; flags.loopback = 1; output;” in the “ls_in_dhcp_response” pipeline causes the reply DHCP packet to be delivered to the VM port(which sent the DHCP request packet).

The diagram below depicts the sequence of actions when the VM sends a DHCP request packet.


OpenStack Neutron OVN ML2 driver

The neutron OVN ML2 driver makes use of this feature. In order to use native DHCP, the configuration option “ovn_native_dhcp” should be set to True in the ML2 plugin configuration file.

The OVN ML2 driver creates a DHCP_Options row for every subnet and defines the DHCP options there. All the ports of the subnet refer to this DHCP_Options row. It also supports extra DHCP options if they are defined for any port. Please see here to get an overview of how native DHCPv4 is used in neutron.

Native DHCPv6 support in OVN

Patches to support DHCPv6 have been submitted for review on the OVS dev mailing list. Once they are reviewed and accepted, OVN will have native DHCPv6 support. This feature will be really useful once OVN supports IPv6 router advertisements. OVN already supports IPv6 routing, and patches are up for review to support IPv6 RAs.

Limitations of using native DHCP support

  • OVN still doesn’t have native DNS support for internal DNS queries. If support for internal DNS is a requirement in an OpenStack deployment, then the dhcp agent needs to be used.
  • For metadata support, the dhcp agent is still required. There is a patch in networking-ovn which supports VM metadata access using native DHCP; it can be found here.

Logging configuration in OpenContrail

We know that all software components and services generate log files. These log files are vital in troubleshooting and debugging problems. If the log files are not managed properly, it can be extremely difficult to get a good look into them.

Although system administrators cannot control the generation of logs, they can achieve some level of log management by:

  • having log rotators to get rid of old log files
  • using syslog to catch alerts
  • archiving logs, etc.

OpenContrail has several components, many of which generate logs and store them in log files. OpenContrail also provides a mechanism to configure logging, so that system administrators/DevOps can define the logging parameters to suit their own requirements.

In this blog post we will see the logging support in OpenContrail components and the logging configuration mechanisms it supports.

OpenContrail uses the Sandesh protocol, which provides the mechanism to exchange messages between the various OpenContrail components. It also provides the functionality of logging those messages into log files. You can read more about Sandesh in this great article.

Logging can be configured by:

  • choosing the log file
  • selecting the log file size
  • defining custom formatters/loggers
  • using syslog etc.

OpenContrail has mainly Python components and C++ components.

Python components of OpenContrail are:

  • contrail API server
  • schema transformer
  • SVC monitor
  • discovery server
  • analytics Op server

C++ components of OpenContrail are:

  • contrail vrouter
  • contrail controller
  • query engine
  • contrail analytics server
  • contrail DNS

C++ components of OpenContrail use log4cplus for logging and python components use python logging.

OpenContrail versions

The configuration mechanisms defined in this post are supported by the master version of OpenContrail.

You need to cherry-pick the below patches if you are using the R2.2 or R2.1 version, as these patches are not yet merged.

OpenContrail R2.2

OpenContrail R2.1

Logging in OpenContrail python modules

First we will talk about logging in python components of OpenContrail. OpenContrail supports logging configuration for python components in three ways:

  1. Use the default logging provided by OpenContrail.
  2. Define your own log configuration file based on the python logging
  3. Define new logging mechanism by implementing a new logger or using other logging libraries like oslo.log

The configuration files of python components support the below logging parameters:

  • log_file
  • log_level
  • log_local
  • logging_conf
  • logger_class

In order to define a custom logging configuration, we need to use the ‘logging_conf’ and ‘logger_class’ parameters. When these two parameters are defined, the other ones are ignored.

1. Use the default logging provided by OpenContrail.

You don’t have to do anything here. If you are not particular about logging configuration, then this is good enough.

2. Define your own log configuration file based on the python logging

You can define your own log configuration file. Please refer to the logging file format for more information on how to define the log config file for python logging.

Define the ‘logger_class’ and ‘logging_conf’ configuration parameters in the OpenContrail python component configuration files.

logger_class = pysandesh.sandesh_logger.SandeshConfigLogger
logging_conf = /etc/contrail/contrail-api-logger.conf

Format of the log configuration file

As mentioned above this has all the details about defining the log configuration file.

Log configuration file should have three main sections defined – [loggers],[handlers] and [formatters].

Below is a sample log configuration file. The section and handler names here are only illustrative; what matters is the [loggers]/[handlers]/[formatters] structure around the handler and formatter definitions. This single file can be used for all the OpenContrail python components, or you can define one configuration file per module.

[loggers]
keys=root,contrail-api,contrail-svc-monitor,contrail-schema,contrail-discovery,contrail-analytics-api

[handlers]
keys=api,svcmon,schema,discovery,analytics,syslog

[formatters]
keys=default,contrail,svcmon

[logger_root]
level=NOTSET
handlers=syslog

[logger_contrail-api]
level=INFO
handlers=api
qualname=contrail-api

# similar [logger_*] sections (level, handlers, qualname) follow for
# contrail-svc-monitor, contrail-schema, contrail-discovery and
# contrail-analytics-api

[handler_api]
class=handlers.RotatingFileHandler
formatter=default
args=('/var/log/contrail/contrail-api.log', 'a', 3000000, 10)

[handler_svcmon]
class=handlers.RotatingFileHandler
formatter=svcmon
args=('/var/log/contrail/svc-monitor.log', 'a', 3000000, 8)

[handler_schema]
class=handlers.RotatingFileHandler
formatter=default
args=('/var/log/contrail/contrail-schema.log', 'a', 2000000, 7)

[handler_discovery]
class=handlers.RotatingFileHandler
formatter=default
args=('/var/log/contrail/contrail-discovery-conf.log', 'a', 3000000, 0)

[handler_analytics]
class=handlers.RotatingFileHandler
formatter=default
args=('/var/log/contrail/contrail-analytics.log', 'a', 3000000, 0)

[handler_syslog]
class=handlers.SysLogHandler
formatter=contrail
args=('/dev/log', handlers.SysLogHandler.LOG_USER)

[formatter_default]
format=%(asctime)s [%(name)s]: %(message)s
datefmt=%m/%d/%Y %I:%M:%S %p

[formatter_contrail]
format=contrail : %(asctime)s [%(name)s]: %(message)s
datefmt=%m/%d/%Y %I:%M:%S %p

[formatter_svcmon]
format=SVC MON %(asctime)s [%(name)s]: %(message)s
datefmt=%m/%d/%Y %I:%M:%S %p
As you can see above, a logger is defined for each of the OpenContrail components.


‘qualname’ should match the OpenContrail component name; otherwise the logger defined for the OpenContrail component will not take effect.

Below are the ‘qualname’ values for each of the OpenContrail components.

  • Contrail API server – contrail-api
  • SVC Monitor – contrail-svc-monitor
  • Schema Transformer – contrail-schema
  • Contrail Discovery – contrail-discovery
  • Contrail Analytics API – contrail-analytics-api

Defining your own logging configuration file gives you the flexibility to choose the logging parameters as per your requirements.

You can choose the logging handlers supported by the python logging like RotatingFileHandler, TimedRotatingFileHandler, WatchedFileHandler, MemoryHandler etc.

You can also choose a simple handler like FileHandler and use logrotate or other external log rotators to rotate the log files.
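If you go the external-rotation route, a logrotate snippet along these lines (the path and rotation policy are just examples), dropped into /etc/logrotate.d/, would do:

```
/var/log/contrail/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate is used here so the component can keep writing to the same open file descriptor while the old contents are rotated away.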

3. Define your own custom logging mechanism or use existing logging libraries.

If you’re someone who likes to define a new logging mechanism, this can also be done.

In order to do this you need to:

  • write your custom logging class
  • define the custom logging class in the ‘logger_class’ configuration parameter.

Make sure that your custom python class is loadable. Your custom python class should be derived from ‘sandesh_base_logger.SandeshBaseLogger’.

Contrail Oslo Logger

You can find one custom logger – Contrail Oslo Logger here. Contrail Oslo logger uses the oslo.log and oslo.config modules of OpenStack.

You can define the log configuration options supported by oslo.log in a configuration file and provide the name of the file in the ‘logging_conf’ configuration parameter.

You can find the logging options supported by oslo.log here and here.

If you would like to have your own logging mechanism please see the code of contrail oslo logger as reference.

Logging in OpenContrail C++ components

OpenContrail C++ components use log4cplus  for logging.

OpenContrail supports the below logging parameters in the component configuration files:

  • log_disable: Disable logging
  • log_file: Name of the log file
  • log_property_file: Path of the log property file
  • log_files_count: Maximum log file roll-over index
  • log_file_size: Maximum size of the log file
  • log_level: Severity level for local logging of sandesh messages

Similar to the python logging configuration file, you can define a log configuration file for the C++ components and give its path in the ‘log_property_file’ configuration parameter. When ‘log_property_file’ is defined, the other logging parameters are ignored by the OpenContrail C++ components. (log4cplus uses the term “property file” for the log configuration file.)

The log property file should be defined in the format described here.

You can refer to this link to understand the format of the log4cplus log property file.

Define ‘log_property_file’ in the DEFAULT section of the C++ component configuration files to use the log property file defined by you.

Eg. contrail-control.conf (the property file path below is just an example):

[DEFAULT]
log_property_file=/etc/contrail/contrail-control-log.properties

Sample log property file

log4cplus.rootLogger = DEBUG, logfile, syslog

log4cplus.appender.logfile = log4cplus::FileAppender
log4cplus.appender.logfile.File = /var/log/contrail/contrail-collector.log
log4cplus.appender.logfile.Append = true
log4cplus.appender.logfile.ImmediateFlush = true

log4cplus.appender.logfile.layout = log4cplus::PatternLayout

log4cplus.appender.logfile.layout.ConversionPattern = %D{%Y-%m-%d %a %H:%M:%S:%Q %Z} %h [Thread %t, Pid %i]: %m%n

log4cplus.appender.syslog = log4cplus::SysLogAppender
log4cplus.appender.syslog.layout = log4cplus::PatternLayout
log4cplus.appender.syslog.layout.ConversionPattern = %D{%Y-%m-%d %a %H:%M:%S:%Q %Z} %h [Thread %t, Pid %i]: %m%n

You can refer to the Appenders supported by log4cplus here.


You’ve now hopefully seen how logging is supported in OpenContrail and how you can define your own custom logging configuration files. With this knowledge, system admins/DevOps should be able to manage the log files properly and troubleshoot problems quickly and efficiently.