
Assign Consistent Egress IP for External Traffic

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

It may be desirable to assign a consistent IP address for traffic that leaves the cluster when configuring items such as security groups or other security controls which require an IP-based configuration. By default, the OVN-Kubernetes CNI assigns egress traffic the IP address of whichever node a pod happens to be running on, which makes IP-based security lockdowns unpredictable or unnecessarily open. This guide shows you how to configure a set of predictable IP addresses for egress cluster traffic to meet common security standards and guidance, as well as other potential use cases.

See the OpenShift documentation on this topic for more information.

Prerequisites

  • ROSA Cluster 4.14 or newer
  • openshift-cli (oc)
  • rosa-cli (rosa)
  • jq

Demo

Set Environment Variables

This sets environment variables for the demo so that you do not need to copy and paste your own into each command. Be sure to replace the values below with your desired values for this step:
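A minimal sketch of the variables this guide assumes (the variable names and example values are this guide's conventions, not fixed requirements):

```shell
# Replace these example values with your own
export ROSA_CLUSTER_NAME=my-rosa-cluster       # your ROSA cluster name
export ROSA_MACHINE_POOL_NAME=worker           # machine pool to label for egress IP assignment
```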

Ensure Capacity

For each public cloud provider, there is a limit on the number of IP addresses that may be assigned per node. This may affect the ability to assign an egress IP address. To verify sufficient capacity, you can run the following command to print the currently assigned IP addresses against the total capacity, in order to identify any nodes which may be affected:
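A sketch of such a check, reading each node's `cloud.network.openshift.io/egress-ipconfig` annotation (the annotation's `capacity` field shape may vary by cloud provider):

```shell
# List each node's InternalIP addresses alongside its IPv4 egress capacity
oc get node -o json | jq '
  [.items[] |
    {
      "name": .metadata.name,
      "ips": (.status.addresses | map(select(.type == "InternalIP") | .address)),
      "capacity": (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig"
                   | fromjson[] | .capacity.ipv4)
    }
  ]'
```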

Example Output:
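The node names, addresses, and capacity below are purely illustrative:

```
[
  {
    "name": "ip-10-10-145-88.ec2.internal",
    "ips": [
      "10.10.145.88"
    ],
    "capacity": 14
  }
]
```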

NOTE: the above example uses jq as a friendly filter. If you do not have jq installed, you can review the metadata.annotations['cloud.network.openshift.io/egress-ipconfig'] field of each node manually to verify node capacity.

Label the worker nodes


To complete the egress IP assignment, we need to assign a specific label to the nodes. The egress IP rule that you created in a previous step only applies to nodes with the k8s.ovn.org/egress-assignable label. We want to ensure that label exists on only a specific machinepool as set via an environment variable in the set environment variables step.

For ROSA clusters, you can assign labels via either of the following rosa commands:

Option 1 - Update an existing Machine Pool

WARNING: if you are reliant upon any node labels for your machinepool, this command will replace those labels. Be sure to input your desired labels into the --labels field to ensure your node labels persist.
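A sketch of the update, assuming the environment variables from the earlier step (an empty label value is sufficient, as only the key is matched):

```shell
# WARNING: --labels replaces any existing labels on the machine pool
rosa edit machinepool ${ROSA_MACHINE_POOL_NAME} \
  --cluster="${ROSA_CLUSTER_NAME}" \
  --labels "k8s.ovn.org/egress-assignable="
```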

Option 2 - Create a new Machine Pool

NOTE: set the replicas to 3 if it is a multi-AZ cluster.
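A sketch of creating a dedicated pool (the pool name `egress-ip-pool` is an example):

```shell
# Use --replicas=3 for a multi-AZ cluster
rosa create machinepool \
  --cluster="${ROSA_CLUSTER_NAME}" \
  --name=egress-ip-pool \
  --replicas=1 \
  --labels "k8s.ovn.org/egress-assignable="
```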

Wait until the Nodes have been labelled
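One way to wait is to poll for nodes carrying the label (a simple sketch; adjust the expected count for your pool size):

```shell
# Poll until at least one node carries the egress-assignable label
while [ "$(oc get nodes -l k8s.ovn.org/egress-assignable -o name | wc -l)" -lt 1 ]; do
  echo "Waiting for labelled nodes..."
  sleep 5
done
oc get nodes -l k8s.ovn.org/egress-assignable
```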

Create the Egress IP Rule(s)

Identify the Egress IPs

Before creating the rules, we should identify which egress IPs we will use. Note that the egress IPs you select must be part of the subnets into which the worker nodes are provisioned.
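You can inspect the subnet CIDRs of the labelled nodes from the same `egress-ipconfig` annotation used earlier, then pick unused addresses within those ranges (a sketch):

```shell
# Print the subnet CIDR(s) backing the egress-assignable nodes
oc get nodes -l k8s.ovn.org/egress-assignable -o json | \
  jq -r '.items[].metadata.annotations."cloud.network.openshift.io/egress-ipconfig"
         | fromjson[].ifaddr.ipv4'
```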

Reserve the Egress IPs

It is recommended, but not required, to reserve the egress IPs that you have selected to avoid conflicts with the AWS VPC DHCP service. To do so, you can request explicit IP reservations by following the AWS documentation for CIDR reservations.

Example: Assign Egress IP to a Namespace

Create a project to demonstrate assigning egress IP addresses based on a namespace selection:
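For example (the project name `demo-egress-ns` is this guide's convention):

```shell
oc new-project demo-egress-ns
```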

Create the egress rule. This rule ensures that the egress IP is applied to traffic from all pods within the namespace we just created, via the spec.namespaceSelector field:
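A sketch of the EgressIP resource (replace `10.10.100.253` with an address from your worker subnets):

```shell
cat << EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-ns
spec:
  # replace with an unused IP from your worker node subnets
  egressIPs:
    - 10.10.100.253
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-ns
EOF
```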

Example: Assign Egress IP to a Pod

Create a project to demonstrate assigning egress IP addresses based on a pod selection:
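For example (the project name `demo-egress-pod` is this guide's convention):

```shell
oc new-project demo-egress-pod
```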

Create the egress rule. This rule ensures that the egress IP is applied to traffic from the matching pods via the spec.podSelector field. Note that spec.namespaceSelector is a mandatory field:
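A sketch of the pod-scoped EgressIP resource (the IP and labels are examples; the `run: demo-egress-pod` label matches the label `oc run` applies to the test pod later in this guide):

```shell
cat << EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-pod
spec:
  # replace with an unused IP from your worker node subnets
  egressIPs:
    - 10.10.100.254
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-pod
  podSelector:
    matchLabels:
      run: demo-egress-pod
EOF
```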

Review the Egress IPs

You can review the egress IP assignments by running oc get egressips, which will produce output similar to the following:
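The names, IPs, and node below are illustrative and will reflect your own choices:

```
NAME              EGRESSIPS       ASSIGNED NODE                  ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253   ip-10-10-145-88.ec2.internal   10.10.100.253
demo-egress-pod   10.10.100.254   ip-10-10-145-88.ec2.internal   10.10.100.254
```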

Test the Egress IP Rule

Create the Demo Service

To test the rule, we will create a service which is locked down to only the egress IP addresses that we specified. This simulates an external service which expects traffic from a small subset of known IP addresses.

Run the echoserver which gives us some helpful information:
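One minimal option (the image and pod name here are an example; any HTTP echo service works):

```shell
oc -n default run demo-service \
  --image=gcr.io/google_containers/echoserver:1.4
```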

Expose the pod as a service, limiting ingress to the service (via the .spec.loadBalancerSourceRanges field) to only the egress IP addresses that we specified our pods should be using:
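A sketch of such a service on AWS (replace the source ranges with the egress IPs you chose earlier):

```shell
cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
  annotations:
    # keep the load balancer internal to the VPC
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  selector:
    run: demo-service
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
  externalTrafficPolicy: Local
  # restrict ingress to only our chosen egress IPs
  loadBalancerSourceRanges:
    - 10.10.100.253/32
    - 10.10.100.254/32
EOF
```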

Retrieve the load balancer hostname into the LOAD_BALANCER_HOSTNAME environment variable, which you can use in the following steps:
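For example (assuming the `demo-service` service from the previous step):

```shell
export LOAD_BALANCER_HOSTNAME=$(oc get svc -n default demo-service \
  -o json | jq -r '.status.loadBalancer.ingress[].hostname')
echo $LOAD_BALANCER_HOSTNAME
```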

Test Namespace Egress

Test the namespace egress rule which was created previously. The following starts an interactive shell which allows you to run curl against the demo service:
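A sketch using a UBI image (any image with curl works; every pod in demo-egress-ns matches the namespaceSelector):

```shell
# Any pod in demo-egress-ns matches the namespaceSelector,
# so its egress traffic uses the reserved egress IP
oc run demo-egress-ns -it --rm=true \
  --image=registry.access.redhat.com/ubi9/ubi \
  -n demo-egress-ns -- /bin/bash
```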

Once the pod has started, you can send a request to the load balancer, ensuring that you can successfully connect:
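For example (assuming the LOAD_BALANCER_HOSTNAME value retrieved earlier):

```shell
curl -s http://$LOAD_BALANCER_HOSTNAME
```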

You should see output similar to the following, indicating a successful connection. Note that the client_address below is the internal IP address of the load balancer rather than our egress IP. Successful connectivity (with the service limited via .spec.loadBalancerSourceRanges) is what provides a successful demonstration:

You can safely exit the pod once you are done:

Test Pod Egress

Test the pod egress rule which was created previously. The following starts an interactive shell which allows you to run curl against the demo service:
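A sketch, again using a UBI image; note that `oc run` applies the label run=demo-egress-pod, which matches the podSelector in the EgressIP rule above:

```shell
# The pod name determines the run= label, which must match the podSelector
oc run demo-egress-pod -it --rm=true \
  --image=registry.access.redhat.com/ubi9/ubi \
  -n demo-egress-pod -- /bin/bash
```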

Once inside the pod, you can send a request to the load balancer, ensuring that you can successfully connect:
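For example (assuming the LOAD_BALANCER_HOSTNAME value retrieved earlier):

```shell
curl -s http://$LOAD_BALANCER_HOSTNAME
```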

You should see output similar to the following, indicating a successful connection. Note that the client_address below is the internal IP address of the load balancer rather than our egress IP. Successful connectivity (with the service limited via .spec.loadBalancerSourceRanges) is what provides a successful demonstration:

You can safely exit the pod once you are done:

Test Blocked Egress

In contrast to the successful connections above, you can verify that traffic is blocked when the egress rules do not apply. In this scenario, unsuccessful connectivity (because the service is limited via .spec.loadBalancerSourceRanges) is what provides a successful demonstration:
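As a sketch, start a shell in a pod that matches neither egress rule (the pod name and namespace here are arbitrary examples):

```shell
# This pod matches neither the namespaceSelector nor the podSelector,
# so its egress traffic will not use the reserved egress IPs
oc run demo-egress-blocked -it --rm=true \
  --image=registry.access.redhat.com/ubi9/ubi \
  -n default -- /bin/bash
```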

Once inside the pod, you can send a request to the load balancer:
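For example (assuming the LOAD_BALANCER_HOSTNAME value retrieved earlier):

```shell
# Expect this to hang and eventually time out, since the pod's source IP
# is not in the service's loadBalancerSourceRanges
curl -s --connect-timeout 10 http://$LOAD_BALANCER_HOSTNAME
```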

The above command should hang. You can safely exit the pod once you are done:

Cleanup

You can cleanup your cluster by running the following commands:
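A sketch of the cleanup, assuming the example names used throughout this guide:

```shell
# Remove the demo egress rules, namespaces, and the echo service
oc delete egressip demo-egress-ns demo-egress-pod
oc delete project demo-egress-ns demo-egress-pod
oc -n default delete svc demo-service
oc -n default delete pod demo-service
```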

You can cleanup the assigned node labels by running the following commands:
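A sketch, assuming the environment variables from the earlier step (an empty --labels value clears all machine pool labels):

```shell
# WARNING: --labels replaces any existing labels; supply your desired
# labels here instead of an empty string if you rely on any
rosa edit machinepool ${ROSA_MACHINE_POOL_NAME} \
  --cluster="${ROSA_CLUSTER_NAME}" \
  --labels ""
```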

WARNING: if you are reliant upon any node labels for your machinepool, this command will replace those labels. Be sure to input your desired labels into the --labels field to ensure your node labels persist.
