Modern and next-generation networks (for example, 5G) and IoT infrastructures require workloads to be deployed close to the customer network. Analysing and computing data at the edge of the network reduces the impact of the latency introduced by a centralized computation approach. Edge computing can also be used to filter, correlate, and aggregate collected data and metrics to reduce network traffic. A distributed computing environment is also more robust and resilient to connectivity issues.

Thanks to these advantages, telecommunication providers frequently deliver network functions directly at the edge of the network. Network functions are evolving from Virtual Network Functions (VNFs) to Cloud-Native Network Functions (CNFs): implementations of a network function deployed following cloud-native principles. CNFs fit edge computing use cases very well, thanks to the flexibility and scalability provided by a cloud-native architecture. Typical services delivered at the edge include:

  • Facilitate communication between legacy or IP-disconnected devices
  • Data filtering, aggregation, optimization, and pre-processing
  • Device and sensors configuration
  • Diagnostics and troubleshooting
  • Monitoring
  • And others

Network providers deliver services and functions at the edge. To increase the computational density and optimize resources, some components need to be shared between different tenants (or customers). In such cases, providing multi-tenancy and proper network isolation can be a challenge.

Network isolation

Although an in-depth analysis of network isolation solutions is beyond the scope of this article, we first quickly introduce some basic concepts about the technologies we will use: VLANs, kernel network namespaces, and VRF domains.


VLAN

Virtual Local Area Network (VLAN) is a network protocol defined by the IEEE 802.1Q standard.

VLAN is used to create different “logical” (or virtual) networks over the same medium.

Essentially, everything resides in the IEEE 802.1Q VLAN tag: a 4-byte field added to the Ethernet frame that contains the VLAN ID.

Layer 2 network equipment (network switches) forwards an Ethernet frame tagged with a particular VLAN ID only to the ports configured with a matching VLAN ID.

A VLAN trunk is a connection between two devices carrying Ethernet frames with different VLAN tags. A trunk connection permits VLANs to span multiple switches.

A couple of blog articles provide an in-depth description of VLANs and how to configure them on Red Hat Enterprise Linux.

Thanks to the VLAN tag, the IEEE 802.1Q protocol can provide network isolation features: a network switch can be configured to allow or deny communication between different ports based on the VLAN tag.
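The same tagging can be reproduced on a Linux host with a VLAN subinterface. A minimal sketch, assuming a hypothetical parent device eth0 and VLAN ID 10:

```shell
# Hypothetical example: derive a VLAN subinterface (eth0.10) from eth0.
# Frames sent through eth0.10 leave the host tagged with VLAN ID 10.
ip link add link eth0 name eth0.10 type vlan id 10
ip link set dev eth0.10 up
# Show the details of the new device, including its VLAN ID:
ip -d link show eth0.10
```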

Network namespaces

Network namespaces (netns) are a Linux kernel feature that can be used to create a sandbox where network devices, addressing, routing tables, and, in general, the whole network stack are isolated from other sandboxes and from the host itself.

Network namespaces are widely used by containers to provide network isolation to the containerized process.

Netns can be created and managed using the ip netns command.

Even if a network namespace is an isolated environment, it can be connected to other networks thanks to virtual Ethernet (veth) devices: logical devices that the Linux kernel can use to interconnect network namespaces or connect them to real devices.

Thanks to veth devices, many different use cases and topologies can be implemented. The following pictures represent just a few examples:
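One such topology can be sketched directly on the command line. This hypothetical example connects two namespaces (ns-a, ns-b) with a veth pair, using example addresses from 10.0.0.0/24:

```shell
# Hypothetical sketch: connect two network namespaces with a veth pair,
# one end in each namespace.
ip netns add ns-a
ip netns add ns-b
ip link add veth-a type veth peer name veth-b
ip link set veth-a netns ns-a
ip link set veth-b netns ns-b
ip netns exec ns-a ip addr add 10.0.0.1/24 dev veth-a
ip netns exec ns-b ip addr add 10.0.0.2/24 dev veth-b
ip netns exec ns-a ip link set veth-a up
ip netns exec ns-b ip link set veth-b up
# The two namespaces can now reach each other directly:
ip netns exec ns-a ping -c 1 10.0.0.2
```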

A small introduction to network namespaces can be found in our blog article The 7 most used Linux namespaces.

IP Virtual Routing and Forwarding (VRF)

IP VRF is a network technology supported by the Linux kernel in its VRF-lite variant.

Virtual Routing and Forwarding is a feature that provides network isolation at layer 3 only.

On a Linux box with VRF support, it is possible to create different VRF domains. Each domain can have its own routing table and default gateway, and its IP address space is independent: IP addresses can overlap between different VRF domains without collisions.

A network device can be associated with a VRF domain. A Linux process binding to this specific device will use the VRF-specific routing stack.

Compared to network namespaces, VRF does not affect network features working at L2 (ARP, LLDP, and others).
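A minimal sketch of how VRF domains can be created and used on a Linux box, assuming hypothetical devices eth1 and eth2 and routing tables 100 and 200:

```shell
# Hypothetical sketch: create two VRF domains with separate routing tables
# and enslave a different network device to each one.
ip link add vrf-a type vrf table 100
ip link add vrf-b type vrf table 200
ip link set dev vrf-a up
ip link set dev vrf-b up
ip link set dev eth1 master vrf-a
ip link set dev eth2 master vrf-b
# Each domain now resolves routes from its own table:
ip route show vrf vrf-a
# Run a command in the context of a domain (iproute2 "ip vrf exec"):
ip vrf exec vrf-a ping -c 1 192.0.2.1
```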

With the release of Red Hat OpenShift Container Platform 4.9, the Virtual Routing and Forwarding CNI network plug-in graduated to GA.

A multi-tenant edge architecture

The following picture is a high-level representation of an edge infrastructure where several Customer Premises Equipment (CPE) devices are connected to an OpenShift cluster.

In this scenario, a single-node OpenShift cluster (GA since OpenShift 4.9) is deployed at the edge of the network. OpenShift provides Cloud-Native Network Functions (CNF) to the CPE networks from a single pod "shared" between all the tenants:

One of the main requirements of such a scenario is that the infrastructure must guarantee L2 isolation between the tenants.

Each CPE network must be independent of the others. The OpenShift node must be capable of dealing with overlapping and duplicated IP addresses between the tenants.

Different approaches can be taken to fulfill the requirements. The solution described in this article leverages all the network isolation techniques mentioned before:

  • The edge gateway uses kernel network namespaces to provide a network sandbox to each tenant;
  • VLANs transport the tenant traffic up to the node;
  • Virtual Routing and Forwarding manages IP address overlaps on the remote node.

Design foundations

The presented architecture relies on the previously mentioned edge gateway: a hardware device hosting software applications that provide services to the CPE networks.

Here are the architecture foundations:

  • The edge gateway facilitates the communication with the CPEs by running one or more processes that implement the services.
  • To maintain tenant isolation, each CPE network is placed in a dedicated kernel network namespace.
  • Each network namespace has a VLAN ID associated with it. Each tenant network is bridged into a VLAN Trunk Ethernet port.
  • The VLAN Trunk port is directly connected to the OpenShift node via a dedicated network interface.
  • The OpenShift Multus CNI plug-in is configured to create an additional network instance per each VLAN.
  • Every additional network is placed in a specific VRF domain by the vrf CNI plug-in.
  • The pod implementing the CNF is connected to the additional networks; thus, the main process can bind to the specific IP-VRF interfaces.

Edge gateway

This host implements the CPE networks. The main purposes of this device are to:

  • send the data from a specific network namespace in a VLAN-tagged network;
  • execute the edge gateway applications on the dedicated network namespaces.

For the sake of simplicity, each network namespace is paired with a virtual Ethernet device. If connectivity to an external network is required, a netns can be paired with a physical network device instead.

Thanks to the VLAN-awareness feature of the Linux network bridge, it is possible to tag all the frames coming from the veth devices with the proper VLAN ID.

To summarize the configuration process, we will:

  1. Create a network namespace per each tenant or CPE network. In our example, we will use red, blue, and green as netns names.
  2. For each netns, we will create a pair of virtual Ethernet devices. One (veth0) will be assigned to the network namespace; the related peer (veth-${color}) will remain in the default namespace. This connection will make the netns reachable from the host network stack.
  3. Define a VLAN-aware bridge on the host. The physical Ethernet device and all the veth-${color} devices will be part of the bridge. Each virtual device is assigned to a specific VLAN ID and configured as untagged. The physical device eth1 is instead configured as a trunk port.

The edge gateway can be any Linux device supporting kernel network namespaces, VLAN-aware bridges, and virtual Ethernet devices.

The described setup has been tested on Red Hat Enterprise Linux 7 and 8.

Step 1: Set up the network namespaces (netns)

We create three namespaces: red, blue, and green.

[root@edge-gw ~]# ip netns add red
[root@edge-gw ~]# ip netns add blue
[root@edge-gw ~]# ip netns add green

Now we can assign the network devices to the netns: Each namespace has an assigned virtual Ethernet device (veth0) which is paired with another virtual Ethernet device out of the namespace (veth-${color}). This connection provides network connectivity to the netns. As mentioned before, the veth0 inside the netns can be paired with a real Ethernet device if needed.

To each veth0 device, we also assign an IP address. To confirm the network isolation, we will use the same IP address for all the veth0 devices.

So, let’s create the device and activate the links:

[root@edge-gw ~]# ip link add veth0 netns red type veth peer name veth-red
[root@edge-gw ~]# ip link add veth0 netns blue type veth peer name veth-blue
[root@edge-gw ~]# ip link add veth0 netns green type veth peer name veth-green
[root@edge-gw ~]# ip netns exec red ip link set dev veth0  up
[root@edge-gw ~]# ip netns exec blue ip link set dev veth0  up
[root@edge-gw ~]# ip netns exec green ip link set dev veth0  up
[root@edge-gw ~]# ip link set dev veth-red up
[root@edge-gw ~]# ip link set dev veth-blue up
[root@edge-gw ~]# ip link set dev veth-green up

Now we can assign the IP address to the veth0 devices into the netns:

[root@edge-gw ~]# ip netns exec red ip addr add dev veth0
[root@edge-gw ~]# ip netns exec blue ip addr add dev veth0
[root@edge-gw ~]# ip netns exec green ip addr add dev veth0
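The address values are not preserved in the commands above. Purely as an illustration, assuming a hypothetical shared address of 192.168.100.1/24 for every tenant, the assignment would look like this:

```shell
# Hypothetical values: every tenant deliberately gets the same address
# to demonstrate the isolation (the real addresses are not shown above).
ip netns exec red   ip addr add 192.168.100.1/24 dev veth0
ip netns exec blue  ip addr add 192.168.100.1/24 dev veth0
ip netns exec green ip addr add 192.168.100.1/24 dev veth0
# The duplicated address does not conflict: each netns has its own stack.
ip netns exec red ip -4 addr show dev veth0
```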

Step 2: Configure the edge gateway bridge

As a first step, we have to create the bridge device:

[root@edge-gw ~]# ip link add br0 type bridge
[root@edge-gw ~]# ip link set br0 up

The following command is crucial for the VLAN management. It enables the VLAN awareness on the bridge:

[root@edge-gw ~]# ip link set br0 type bridge vlan_filtering 1

Now we can add the devices to the bridge:

[root@edge-gw ~]# ip link set eth1 master br0
[root@edge-gw ~]# ip link set veth-red master br0
[root@edge-gw ~]# ip link set veth-blue master br0
[root@edge-gw ~]# ip link set veth-green master br0

The veth-${color} bridge port must be associated with the proper VLAN and configured in access-mode (untagged):

[root@edge-gw ~]# bridge vlan add dev veth-red vid 10 pvid untagged master
[root@edge-gw ~]# bridge vlan add dev veth-blue vid 20 pvid untagged master
[root@edge-gw ~]# bridge vlan add dev veth-green vid 30 pvid untagged master

As a last step, the VLANs must be added to the eth1 port of the bridge as tagged:

[root@edge-gw ~]# bridge vlan add dev eth1 vid 10 master
[root@edge-gw ~]# bridge vlan add dev eth1 vid 20 master
[root@edge-gw ~]# bridge vlan add dev eth1 vid 30 master
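To double-check the result, the VLAN-to-port mapping of the bridge can be inspected:

```shell
# Show the VLAN configuration of every bridge port. Given the commands
# above, veth-red should carry VID 10 (PVID, untagged), veth-blue VID 20,
# veth-green VID 30, and eth1 all three VLANs as tagged.
bridge vlan show
```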

The OpenShift configuration

Thanks to the Cluster Network Operator, the NMState Operator, and the VRF CNI plug-in, it is possible to connect the OpenShift cluster to the edge gateway, maintaining the network isolation.

The setup tasks are the following:

  • An NMState NodeNetworkConfigurationPolicy is deployed to derive a VLAN interface from the physical network device (eno1). Please consider that at the time of this writing (OCP 4.9), the NMState Operator is a Technology Preview feature.
  • Attach the additional networks to the CNF pod using the Cluster Network Operator.
  • The vrf CNI plug-in is used to assign a specific VRF domain to the network interface attached to the pod.

Proceeding with the configuration, as a prerequisite, we have to install the NMState operator and create the kubernetes-nmstate instance.

Once the NMState Operator is set up, we can configure the VLAN devices. The following snippet creates three VLAN devices (eno1.10, eno1.20, eno1.30) from the Ethernet device eno1 of the node:

$ MASTER_DEV=eno1; for vlan in 10 20 30; do cat <<EOF | oc apply -f -; done
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan-${vlan}
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-edge
  desiredState:
    interfaces:
      - name: ${MASTER_DEV}.${vlan}
        description: VLAN ${vlan} using ${MASTER_DEV}
        type: vlan
        state: up
        vlan:
          base-iface: ${MASTER_DEV}
          id: ${vlan}
EOF
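Assuming the policies applied cleanly, the policy status, the per-node enactments, and the resulting VLAN devices can be verified with:

```shell
# Check the NodeNetworkConfigurationPolicy objects and their enactments.
oc get nncp
oc get nnce
# On the node itself (for example via "oc debug node/worker-edge"),
# the derived VLAN device should exist:
ip -d link show eno1.10
```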

Using the Multus CNI plug-in, it is possible to create additional networks on the cluster and connect them to specific pods.

The following is an example of a raw Multus CNI configuration creating an additional network:

  • The macvlan CNI plug-in connects the network to the node interface eno1.10.
  • The vrf CNI plug-in assigns the vrf-red VRF domain to the interface.
  • The ipam CNI plug-in configures the IP address on the interface.
{
  "cniVersion": "0.3.1",
  "name": "vlan-10-vrf-red",
  "plugins": [
    {
      "type": "macvlan",
      "master": "eno1.10",
      "ipam": {
        "type": "static",
        "addresses": [
          { "address": "" }
        ]
      }
    },
    {
      "type": "vrf",
      "vrfname": "vrf-red",
      "table": 1001
    }
  ]
}

The following script can be used to easily create the additional networks for our example:

MASTER_DEV=eno1
namespace=default
ip=''      # the static addresses are not preserved in this snippet
DEBUG=''   # set to '--dry-run=client -o yaml' to preview the patches

function addAdditionalNet {
cat << EOF | oc patch ${DEBUG} -p "$(cat)" --type json networks.operator.openshift.io cluster
[{'op': 'add', 'path': '/spec/additionalNetworks', 'value': []}]
EOF
}

function addMultusVlan {
  local vlan_id=$1 vrf_name=$2 table=$3 master="${MASTER_DEV}.$1"
cat << EOF | oc patch ${DEBUG} -p "$(cat)" --type json networks.operator.openshift.io cluster
[{
  'op': 'add',
  'path': '/spec/additionalNetworks/-',
  'value': {
    'name': '${vrf_name}',
    'namespace': '$namespace',
    'type': 'Raw',
    'rawCNIConfig': '{ "cniVersion": "0.3.1", "name": "vlan-${vlan_id}-${vrf_name}", "plugins": [{ "type": "macvlan", "master": "$master", "ipam": { "type": "static", "addresses": [{"address": "${ip}"}]}}, {"type": "vrf", "vrfname": "${vrf_name}", "table": ${table}}]}'
  }
}]
EOF
}

addAdditionalNet
addMultusVlan 10 vrf-red 1001
addMultusVlan 20 vrf-blue 1002
addMultusVlan 30 vrf-green 1003

To create the networks on OpenShift, we can now run the script, which patches the Cluster Network Operator configuration:

[antani@bastion ~]$ ./ patched patched patched

The Cluster Network Operator should now have created three NetworkAttachmentDefinition objects inside the default namespace. These objects reflect the CNI configuration of each network.

$ oc get net-attach-def -n default -o wide
NAME        AGE
vrf-blue    69m
vrf-green   69m
vrf-red     69m

Testing the environment

Now everything should be set, and we can proceed to verify the connectivity between the edge gateway and the CNF pod.

To test the configuration, we will deploy a simple idling pod connected to the additional networks.

From the edge gateway, we will check the connectivity to the IP address assigned to each VRF domain.

Connecting a pod to the networks

To test the environment, we can now create a simple pod connected to the three networks. The additional network attachments must be requested with a pod annotation:

[antani@bastion ~]$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: vrf-test
  annotations:
    k8s.v1.cni.cncf.io/networks: default/vrf-red,default/vrf-blue,default/vrf-green
spec:
  nodeName: worker-edge
  containers:
  - name: main
    image: registry.access.redhat.com/ubi8/ubi   # example image
    command:
    - /bin/bash
    - -c
    - sleep infinity
EOF

During the pod creation process, the network attachment events should be visible. The device net1 is attached to the network vrf-red, net2 to vrf-blue, and net3 to vrf-green:

[antani@bastion ~]$ oc get events
4s         Normal   AddedInterface   pod/vrf-test   Add eth0 [] from openshift-sdn
4s         Normal   AddedInterface   pod/vrf-test   Add net1 [] from default/vrf-red
4s         Normal   AddedInterface   pod/vrf-test   Add net2 [] from default/vrf-blue
4s         Normal   AddedInterface   pod/vrf-test   Add net3 [] from default/vrf-green

Once the pod is running, the network attachments should also be visible in the annotations of the pod.

Inside the pod, the VRF interfaces are available:

[antani@bastion ~]$ oc exec vrf-test -- /bin/bash -c "for vrf in red blue green; do ip -4 addr show vrf vrf-\${vrf}; done"
4: net1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vrf-red state UP group default  link-netnsid 0
  inet brd scope global net1
     valid_lft forever preferred_lft forever
6: net2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vrf-blue state UP group default  link-netnsid 0
  inet brd scope global net2
     valid_lft forever preferred_lft forever
8: net3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vrf-green state UP group default  link-netnsid 0
  inet brd scope global net3
     valid_lft forever preferred_lft forever
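We can also inspect the per-domain routing tables inside the pod; tables 1001, 1002, and 1003 were assigned by the vrf CNI plug-in in our configuration:

```shell
# Show the routes resolved within a single VRF domain of the pod;
# both forms should report the same entries for vrf-red (table 1001).
oc exec vrf-test -- ip route show vrf vrf-red
oc exec vrf-test -- ip route show table 1001
```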

From the edge gateway, it is now possible to check the connectivity to the CNF pod through the different networks by executing a ping within each network namespace:

[root@edge-gw ~]# for ns in red blue green; do ip netns exec $ns ping -c 3; done

Since each netns on the edge gateway is connected to a different macvlan device on the OpenShift node, the MAC address associated with the IP address is different in each namespace:

[root@edge-gw ~]# for ns in red blue green; do echo -ne "${ns}: "; ip netns exec $ns ip neigh ; done
red: dev veth0 lladdr 7a:f1:d9:f9:98:87 STALE
blue: dev veth0 lladdr ba:14:b0:9b:0e:ee STALE
green: dev veth0 lladdr 1a:c4:2c:10:78:74 STALE

To confirm, we can see that the MAC addresses reported by the neighbor table on the edge gateway are the same as those of the additional network devices of the CNF pod:

[antani@bastion ~]$ oc rsh vrf-test

net1@if12600: 7a:f1:d9:f9:98:87
net2@if12603: ba:14:b0:9b:0e:ee
net3@if12606: 1a:c4:2c:10:78:74

To confirm the network isolation, we can check the network traffic by executing tcpdump on the OpenShift node, sniffing the data on the VLAN device:

[antani@bastion ~]$ oc debug node/worker-edge
Starting pod/worker-edge-debug …
sh-4.4# tcpdump -nn -i eno1.10 icmp

Considerations on the edge gateway

As we described before, the edge gateway should provide some services to the CPE networks and should run one or more processes within the network namespaces.

Processes can be started in a netns with the command ip netns exec <netns> <command> [options].
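For example, a hypothetical HTTP service for the red tenant could be started as follows (python3 availability on the gateway is an assumption):

```shell
# Run a simple HTTP service inside the "red" network namespace;
# the process binds only to the red tenant's network stack.
ip netns exec red python3 -m http.server 8080
```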

In this article, for brevity and clarity, we did not focus on how to make the edge gateway network configurations persistent.

Since network namespaces are the technology used by containers to provide network isolation, it is also possible to use containers to provide full isolation between the processes.

Since we wanted to focus on the network isolation, we decided to use the network namespaces directly.


Conclusion

Satisfying network isolation requirements can create very complex scenarios.

Thanks to the many options offered by the Linux kernel and the OpenShift networking architecture, it is possible to combine different network isolation techniques to fit all the requirements and implement a secure and robust environment.

OpenShift and Red Hat Enterprise Linux provide the maximum power and flexibility to implement such complex infrastructures.


