Cloud Experts Documentation

Ingress to ROSA Virt VMs with Certificate-Based Site-to-Site (S2S) IPsec VPN and Libreswan

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

Introduction

In this guide, we build a Site-to-Site (S2S) VPN so an Amazon VPC can reach VM IPs on a ROSA OpenShift Virtualization User-Defined Network (UDN/CUDN), with no per-VM NAT or load balancers. We deploy a small CentOS VM inside the cluster running Libreswan, which establishes an IPsec/IKEv2 tunnel to an AWS Transit Gateway (TGW).

We use certificate-based authentication: the AWS Customer Gateway (CGW) references a certificate issued by ACM Private CA, and the cluster VM uses the matching device certificate. Because identities are verified by certificates—not a fixed public IP—the VM can initiate the VPN from behind NAT (worker → NAT Gateway) and still form stable tunnels.

On AWS, the TGW terminates two redundant tunnels (two “outside” IPs). We associate the VPC attachment(s) and the VPN attachment with a TGW route table and enable propagation as needed. In the VPC, route tables send traffic for the CUDN prefix (e.g., 192.168.1.0/24) to the TGW. On the cluster side, the CUDN has IPAM disabled; you can optionally add a return route on other CUDN workloads to use the IPsec VM as next hop when those workloads need to reach the VPC.

NAT specifics: when the VM egresses, it traverses the NAT Gateway. If that NAT uses multiple EIPs, AWS may select different EIPs per connection; this is fine because the VPN authenticates via certificates, not source IP.

s2svpn-v3

Why this approach

  • Direct, routable access to VMs: UDN/CUDN addresses are reachable from the VPC without per-VM LBs or port maps, so existing tools (SSH/RDP/agents) work unmodified.
  • Cert-based, NAT-friendly: The cluster peer authenticates with a device certificate, so it can sit behind NAT; no brittle dependence on a static egress IP, and no PSKs to manage.
  • AWS-native and minimally invasive: Uses TGW, CGW (certificate), and standard route tables—no changes to managed ROSA networking, and no inbound exposure (no NLB/NodePorts) because the VM initiates.
  • Scales and hardens cleanly: Add a second IPsec VM in another AZ for HA, advertise additional prefixes, or introduce dynamic routing later. As BGP-based UDN routing matures, you can evolve without re-architecting.

In short: this is a practical and maintainable way to reach ROSA-hosted VMs without PSKs, without a static public IP, and without a fleet of load balancers.

0. Prerequisites

  • A Classic or HCP ROSA cluster, v4.18 or above.
  • A bare metal machine pool (we are using m5.metal; feel free to change as needed) and the OpenShift Virtualization operator installed. You can follow Steps 2-5 from this guide to do so.
  • The oc CLI, logged in.

1. Create the project and secondary network (CUDN)

Create vpn-infra project and the ClusterUserDefinedNetwork (CUDN) object.

oc new-project vpn-infra || true
cat <<'EOF' | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata: { name: vm-network }
spec:
  namespaceSelector:
    matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values: [vpn-infra]
  network:
    layer2: { role: Secondary, ipam: { mode: Disabled } }
    topology: Layer2
EOF

Disabling IPAM also disables the network's source/destination IP enforcement. This is needed so the ipsec VM below can act as a gateway and forward traffic for other IP addresses.
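
As a quick sanity check, you can confirm the CUDN object was created and inspect its status conditions (the exact condition names can vary by OpenShift version):

oc get clusteruserdefinednetwork vm-network -o yaml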

2. Create the ipsec VM (cert-based IPsec, NAT-initiated)

Go to your cluster’s web console. On the navigation bar, select Virtualization → Catalog, and from the top, change the Project to vpn-infra. Then under Create new VirtualMachine → Instance types → Select volume to boot from, choose CentOS Stream 10 (or 9 is fine).

virt-catalog

Scroll down and name it ipsec, and select Customize VirtualMachine.

virt-catalog-1

Select Network on the navigation bar. Under Network interfaces, click Add network interface. Name it cudn. Select the vm-network you created earlier.

virt-cudn-0

Then click Save. Click Create VirtualMachine.

virt-cudn

Wait for a couple of minutes until the VM is running.

Then click the Open web console and log into the VM using the credentials on top of the page.

Alternatively, you can run this on your CLI terminal: virtctl console -n vpn-infra ipsec, and use the same credentials to log into the VM.

Then as root (run sudo -i), let’s first identify the ifname of the non-primary NIC. Depending on OS, this may either be disabled, or enabled with no IP address assigned.

nmcli -t

Run the following inside the VM to give the second NIC (cudn) an IP. Replace enp2s0 with the name of the interface from the previous command.

ip -4 a
nmcli con add type ethernet ifname enp2s0 con-name cudn \
  ipv4.addresses 192.168.1.10/24 ipv4.method manual autoconnect yes
nmcli con mod cudn 802-3-ethernet.mtu 1400
nmcli con mod cudn ipv4.routes "10.10.0.0/16 192.168.1.10"
nmcli con up cudn
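
Before moving on, it's worth verifying that the address and route took effect (replace enp2s0 with your interface name):

ip -4 addr show enp2s0       # expect 192.168.1.10/24
ip route | grep 10.10        # expect a 10.10.0.0/16 route via 192.168.1.10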

Install Libreswan and tools:

dnf -y install libreswan nss-tools NetworkManager iproute
# optional, if you’ll use the `pki` CLI:
# dnf -y install idm-pki-tools

Kernel networking (forwarding & rp_filter):

cat >/etc/sysctl.d/99-ipsec.conf <<'EOF'
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.default.accept_redirects=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
EOF
sysctl --system

Firewalld note: CentOS often ships with firewalld enabled. You don't need any inbound allow rules (the VM initiates), but outbound UDP/500 and UDP/4500 must be allowed.
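
Outbound traffic is allowed by default in firewalld zones, so this is usually just a verification step; if you run a restrictive custom zone, confirm nothing blocks the IKE/NAT-T ports:

systemctl is-active firewalld
firewall-cmd --list-all      # confirm nothing blocks outbound UDP/500 and UDP/4500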

3. Create and import Private CA (ACM PCA)

Go to AWS Console and select Certificate Manager. Then on the left navigation tab, click AWS Private CAs, and then click Create a private CA.

On the Create private certificate authority (CA) page, keep the CA type option as Root. You can leave the default options as-is for simplicity's sake; we recommend, however, giving it a name. For example, here we set the Common name (CN) to ca test v0. Acknowledge the pricing section, and click Create CA.

Then on the root CA page, open the Actions menu on the upper right side and select Install CA certificate. On the Install root CA certificate page, you can leave the default configuration as-is and click Confirm and install. The CA should now be Active.

Next, create a subordinate CA by repeating the same steps, but for the CA type option choose Subordinate, and give it a Common name (CN) such as ca sub test v0. Confirm pricing and create it.

Similarly, on the subordinate CA page, open the Actions menu on the upper right side and select Install CA certificate. On the Install subordinate CA certificate page, under Select parent CA, choose the root CA you just created as the Parent private CA. Under Specify the subordinate CA certificate parameters, pick a validity date at least 13 months from now. You can leave the rest at defaults and click Confirm and install.

Once done, you will have private CAs like the snippet below:

private-ca

Next, go to the AWS Certificate Manager (ACM) page and click the Request a certificate button. On the Certificate type page, select Request a private certificate and click Next.

Under Certificate authority details, pick the subordinate CA as the Certificate authority. Then under Domain names, pick a Fully qualified domain name (FQDN) of your choice. Note that it does not have to be resolvable; we just use it as an identity string for IKE. For example, here we use something like s2s.vpn.test.mobb.cloud. You can leave the rest at defaults, acknowledge Certificate renewal permissions, and click Request.

Wait until the status changes to Issued. Then click the Export button on the upper right side. Under Encryption details, enter a passphrase of your choice. You will be prompted for this passphrase in the next steps, so keep it handy. Acknowledge the billing and click Generate PEM encoding. On the next page, click Download all, and finally click Done.

Once downloaded, you will see three files on your local machine:

  • certificate.pem
  • certificate_chain.pem
  • private_key.pem

Note: if the downloaded files have a .txt extension, rename them to .pem (you can simply mv certificate.txt certificate.pem, and so forth for the rest of the files).

Next, create the PKCS#12 bundle for Libreswan. Feel free to change the name of the cert, but be sure you're in the same directory as the downloaded certificate files:

openssl pkcs12 -export \
  -inkey private_key.pem \
  -in certificate.pem \
  -certfile certificate_chain.pem \
  -name test-cert-cgw \
  -out left-cert.p12

This will prompt you for the passphrase you created earlier, then ask you to set an export password for the new PKCS#12 file (you'll need it again when importing on the VM).
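
Optionally, before copying the bundle to the VM, you can verify that it contains the key and the full chain (it will prompt for the export password you just set):

openssl pkcs12 -info -in left-cert.p12 -noout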

Next, we need two files on the VM:

  • left-cert.p12 — the PKCS#12 you just created (leaf + key + chain)
  • certificate_chain.pem — the full CA chain (subordinate then root)

Option A — using virtctl (easiest):

# from your local machine
virtctl scp ./left-cert.p12  vpn-infra/ipsec:/root/left-cert.p12
virtctl scp ./certificate_chain.pem vpn-infra/ipsec:/root/certificate_chain.pem

Option B — if you only have PEMs on the VM (build P12 on the VM):

# copy PEMs instead, then build the PKCS#12 on the VM
virtctl scp ./private_key.pem vpn-infra/ipsec:/root/private_key.pem
virtctl scp ./certificate.pem  vpn-infra/ipsec:/root/certificate.pem
virtctl scp ./certificate_chain.pem vpn-infra/ipsec:/root/certificate_chain.pem

# on the VM:
sudo -i
set -euxo pipefail
openssl pkcs12 -export \
  -inkey /root/private_key.pem \
  -in /root/certificate.pem \
  -certfile /root/certificate_chain.pem \
  -name test-cert-cgw \
  -out /root/left-cert.p12 \
  -passout pass:changeit

Now run the import:

sudo -i
set -euxo pipefail

LEAF_P12="/root/left-cert.p12"            # already on the VM
CHAIN="/root/certificate_chain.pem"       # already on the VM
NICK='test-cert-cgw'                      # use this in ipsec.conf: leftcert
P12PASS='changeit'                        # the PKCS#12 password you used

# initialize (idempotent)
ipsec initnss || true

# import CA chain with CA trust (split CHAIN into individual certs)
awk 'BEGIN{c=0} /BEGIN CERT/{c++} {print > ("/tmp/ca-" c ".pem")}' "$CHAIN"
for f in /tmp/ca-*.pem; do
  # derive a nickname from the cert subject (RFC2253 output is predictable across openssl versions)
  nick="$(openssl x509 -noout -subject -nameopt RFC2253 -in "$f" | sed 's/^subject=//')"
  certutil -A -d sql:/etc/ipsec.d -n "$nick" -t "C,," -a -i "$f"
done

# import device cert + key from PKCS#12 with the nickname the config expects
pk12util -i "$LEAF_P12" -d sql:/etc/ipsec.d -W "$P12PASS" -n "$NICK"

# sanity check
echo "=== NSS certificates ==="; certutil -L -d sql:/etc/ipsec.d
echo "=== NSS keys         ==="; certutil -K -d sql:/etc/ipsec.d

systemctl enable --now ipsec

Tip: ACM’s certificate_chain.pem already contains subordinate + root in that order. If yours doesn’t, cat subCA.pem rootCA.pem > certificate_chain.pem before copying.
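
One way to confirm the order and contents of the chain file is to print each certificate's subject and issuer:

openssl crl2pkcs7 -nocrl -certfile certificate_chain.pem | openssl pkcs7 -print_certs -noout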

4. Create a Customer Gateway (CGW)

Go to AWS console, find VPC. Then on the left navigation tab, find Customer gateways → Create customer gateway.

On the Certificate ARN section, choose your ACM-PCA–issued cert. You can give it a name like cgw test v0, leave the default options, and click Create customer gateway.

With certificate-auth, AWS doesn’t require a fixed public IP on the CGW; that’s why this pattern works behind NAT.
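
If you prefer scripting this step, the equivalent CLI call looks roughly like the sketch below (the certificate ARN is a placeholder for your own ACM cert; the ASN only matters if you later switch to dynamic routing):

aws ec2 create-customer-gateway \
  --type ipsec.1 \
  --bgp-asn 65000 \
  --certificate-arn arn:aws:acm:us-west-2:111122223333:certificate/EXAMPLE \
  --tag-specifications 'ResourceType=customer-gateway,Tags=[{Key=Name,Value=cgw-test-v0}]'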

5. Create (or use) a Transit Gateway (TGW)

Note that this setup also works (and has been tested) with a Virtual Private Gateway (VGW). So when to choose VGW or TGW:

  • Use VGW when you only need VPN to one VPC, don’t require IPv6 over the VPN, and want the lowest ongoing cost (no TGW hourly/attachment fees; you will just pay standard Site-to-Site VPN hourly + data transfer).

  • Use TGW when you need a hub-and-spoke to many VPCs/accounts, inter-VPC routing, or IPv6 VPN. Expect extra charges such as TGW hourly, per-attachment hourly, and per-GB data processing, on top of VPN.

Continue with this step if you choose TGW.

On the left navigation tab, find Transit Gateways → Create transit gateway. Give it a name like tgw test v0, leave the default options, and click Create transit gateway.

Next, let’s attach the VPC(s) to the TGW. On the navigation tab, find Transit Gateway attachments → Create transit gateway attachment.

Give it a name like tgw attach v0, pick the transit gateway you just created as the Transit gateway ID, and select VPC as the Attachment type. In the VPC attachment section, select your VPC ID and the private subnets you want reachable from the cluster. Once done, click Create transit gateway attachment.
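
If you're scripting instead of clicking through the console, the equivalent CLI calls are roughly as follows (all IDs are placeholders):

aws ec2 create-transit-gateway \
  --tag-specifications 'ResourceType=transit-gateway,Tags=[{Key=Name,Value=tgw-test-v0}]'
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0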

tgw-attach

6. Create the Site-to-Site VPN (Target = TGW)

Still in the VPC console, find Site-to-Site VPN connections → Create VPN connection.

Give it a name like vpn test v0. Choose Transit gateway as the Target gateway type and choose your TGW from the Transit gateway dropdown. Then choose Existing for Customer gateway, and select the certificate-based CGW from the previous step in the Customer gateway ID options.

vpn-0

Choose Static for Routing options. For Local IPv4 network CIDR, put in the CUDN CIDR, e.g. 192.168.1.0/24. And for Remote IPv4 network CIDR, put in the cluster’s VPC CIDR, e.g. 10.10.0.0/16.

vpn-1

Leave default options as-is and click Create VPN connection.

At the moment, the status of both tunnels is Down, and that is completely fine. For now, take note of the tunnels' outside IPs, as we will use them for the Libreswan config in the next step.
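
You can also pull the outside IPs from the CLI rather than the console (the connection ID is a placeholder):

aws ec2 describe-vpn-connections \
  --vpn-connection-ids vpn-0123456789abcdef0 \
  --query 'VpnConnections[0].Options.TunnelOptions[].OutsideIpAddress' \
  --output text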

tunnel-outside-ip

7. Creating Libreswan config

Let's go back to the VM now and, as root, write the config below (be sure to replace the placeholder values, e.g. the cert nickname and the tunnels' outside IPs):

sudo tee /etc/ipsec.conf >/dev/null <<'EOF'
config setup
    uniqueids=yes
    plutodebug=none
    nssdir=/etc/ipsec.d

conn %default
    keyexchange=ikev2                 # change to ikev2=insist if you're running CentOS/RHEL 9
    authby=rsasig
    fragmentation=yes
    mobike=no
    narrowing=yes

    left=%defaultroute
    leftsubnet=192.168.1.0/24
    leftcert=test-cert-cgw            # change this to your cert nickname
    leftid=%fromcert
    leftsendcert=always

    rightsubnet=10.10.0.0/16
    rightid=%fromcert
    rightca=%same

    ikelifetime=28800s
    ike=aes256-sha2_256;modp2048,aes128-sha2_256;modp2048,aes256-sha1;modp2048,aes128-sha1;modp2048

    salifetime=3600s
    esp=aes256-sha2_256;modp2048,aes128-sha2_256;modp2048,aes256-sha1;modp2048,aes128-sha1;modp2048
    pfs=yes

    dpddelay=10
    retransmit-timeout=60
    auto=add

conn aws-tun-1
    right=44.228.33.1                    # change this to your tunnel 1 outside IP
    auto=start

conn aws-tun-2
    right=50.112.212.105                 # change this to your tunnel 2 outside IP
    auto=start
EOF

sudo systemctl restart ipsec
sudo ipsec auto --delete aws-tun-1 2>/dev/null || true
sudo ipsec auto --delete aws-tun-2 2>/dev/null || true
sudo ipsec auto --add aws-tun-1
sudo ipsec auto --add aws-tun-2
sudo ipsec auto --up aws-tun-1
sudo ipsec auto --up aws-tun-2

Next, run ipsec status; you should now see something like Total IPsec connections: loaded 4, routed 1, active 1, which means that your tunnel is up.
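
A couple of other quick checks on the VM (both ship with Libreswan/iproute):

ipsec trafficstatus      # per-connection byte counters; should grow once traffic flows
ip xfrm state            # kernel SAs for the established tunnels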

And now if you go back to the VPN console, you will see that one of the tunnels is up, as follows:

vpn-up

8. Associate VPC to TGW route tables

On the VPC navigation tab, find Transit gateway route tables, go to the Propagations tab, and ensure that both the VPC and VPN attachments are Enabled.

tgw-rtb-0

Then click the Routes tab and click Create static route. For CIDR, put in the CUDN CIDR 192.168.1.0/24, under Choose attachment pick the VPN attachment, and click Create static route.

tgw-rtb-1

Wait for a minute and it should now look like this:

tgw-rtb-2

9. Modify VPC route tables

Next, we will add a route to the CUDN CIDR targeting our TGW in each VPC route table that should reach the cluster overlay. On the navigation tab, find Route tables and filter by your VPC ID.

Select one of the private subnets. Under the Routes tab, click Edit routes, then Add route. For Destination, put in the CUDN subnet (e.g., 192.168.1.0/24); as Target, pick Transit Gateway and select the TGW you created, then click Save changes.

Repeat this for the other private/public subnets that need routes to the CUDN.
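
Each console route above is equivalent to a single CLI call per route table (IDs are placeholders):

aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 192.168.1.0/24 \
  --transit-gateway-id tgw-0123456789abcdef0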

10. Security groups and NACLs

On the navigation tab, find Security groups. Filter it based on your VPC ID.

Select one of the worker nodes’ security groups. Under Inbound rules, go to Edit inbound rules. Click Add rule. For Type, pick All ICMP - IPv4, and as Source, put in the CUDN subnet (e.g., 192.168.1.0/24), and click Save rules.

Optionally, you can also add rules for TCP 22/80 from the CUDN for SSH/curl tests.

Be sure to also check NACLs on the VPC subnets to allow ICMP/TCP from 192.168.1.0/24 both ways.
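
As a CLI sketch, the ICMP rule above looks like this (the group ID is a placeholder; for the icmp protocol, --port -1 means all ICMP types and codes):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol icmp --port -1 \
  --cidr 192.168.1.0/24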

11. Configure networking on other VMs

Other OpenShift Virtualization VMs each need a bit of configuration before this networking works for them.

11.1 Add a secondary network interface for the CUDN

Like the ipsec VM, other VMs will also need a secondary network interface connected to the vm-network ClusterUserDefinedNetwork.

When creating a new VM, select Customize VirtualMachine, then select Network on navigation bar. For existing VMs, go to the VM’s Configuration tab and then select Network from the side navigation. Under Network interfaces, click Add network interface. Name it cudn. Select the vm-network you created earlier.

virt-cudn-0

Depending on the specifics of the VM, you may need to reboot the VM before it can see the new network interface, or it may be available immediately.

11.2 Set an address for the network interface

Since IPAM is turned off on the cudn, each VM has to be given an IP address manually.

Log into the VM using Open web console, virtctl console, or virtctl ssh, if configured.

Then as root (run sudo -i), let’s first identify the ifname of the non-primary NIC. Depending on OS, this may either be disabled, or enabled with no IP address assigned.

nmcli -t

Run the following inside the VM to give the second NIC (cudn) an IP. Replace enp2s0 with the name of the interface from the previous command. Replace the 192.168.1.20/24 with a unique address per VM within the CUDN CIDR (which in our examples has been 192.168.1.0/24) and ensure the number after the slash matches the subnet mask of the CIDR.

ip -4 a
nmcli con add type ethernet ifname enp2s0 con-name cudn \
  ipv4.addresses 192.168.1.20/24 ipv4.method manual autoconnect yes
nmcli con up cudn
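
A quick way to confirm layer-2 reachability on the CUDN is to ping the ipsec VM's address from the new VM:

ping -c3 192.168.1.10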

11.3 Set the ipsec VM as the next hop for VPC traffic

Each VM needs to know that it should send traffic destined for the VPC through the ipsec VM.

As root, run the following. Replace 10.10.0.0/16 with your VPC’s CIDR. Replace 192.168.1.10 with the IP address of the ipsec VM.

nmcli con mod cudn ipv4.routes "10.10.0.0/16 192.168.1.10"
nmcli con up cudn
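
You can confirm the kernel picked up the route; it should resolve any VPC address via the ipsec VM (10.10.0.10 here is just an example address inside the VPC CIDR):

ip route get 10.10.0.10      # expect: 10.10.0.10 via 192.168.1.10 dev enp2s0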

12. Ping test

Now that everything is set, let's try to ping from the VM to an EC2 instance in the VPC. Pick any EC2 instance to do so, e.g. a bastion host or a bare metal instance.

Note that if you launch a throwaway EC2 instance for testing, a private subnet is safer (no public IP). Make sure its subnet's route table has the CUDN route to the TGW, and use Session Manager instead of a keypair if you want to skip inbound SSH. The security group needs ICMP (and optionally TCP/22) from 192.168.1.0/24.

Take note of its private IPv4 address. Then on the VM console run:

ping -I <CUDN-IP> -c3 <EC2-private-IP>

And then from the EC2 instance:

ping -c3 <CUDN-IP>
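
If the pings fail, a useful first step is to watch traffic on the ipsec VM, which sits in the middle of the path; you should see ICMP echoes arriving on the CUDN interface and, when the tunnel is healthy, ESP/NAT-T packets heading toward AWS:

tcpdump -ni any 'icmp or udp port 4500'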

13. Optional: Route-based (VTI) IPsec

Why do this?

  • Scale & simplicity: with VTIs you route like normal Linux—no per-subnet policy rules. Adding more VPC CIDRs later is just adding routes.
  • Better availability: you can ECMP two tunnels (one per TGW endpoint). That gives fast failover on the tunnel path. (Note: this is not AZ-resilient if you still have only one VM.)

13.1. Replace policy-based with route-based config

sudo tee /etc/ipsec.conf >/dev/null <<'EOF'
config setup
    uniqueids=yes
    plutodebug=none
    nssdir=/etc/ipsec.d

conn %default
    keyexchange=ikev2
    authby=rsasig
    fragmentation=yes
    mobike=no
    narrowing=yes

    left=%defaultroute
    leftsendcert=always
    leftcert=test-cert-cgw
    leftid=%fromcert
    rightid=%fromcert
    rightca=%same

    # route-based: allow any, routing decides what actually traverses the tunnel
    leftsubnet=0.0.0.0/0
    rightsubnet=0.0.0.0/0

    ikelifetime=28800s
    ike=aes256-sha2_256;modp2048,aes128-sha2_256;modp2048
    salifetime=3600s
    esp=aes256-sha2_256;modp2048,aes128-sha2_256;modp2048
    pfs=yes

    dpddelay=10
    retransmit-timeout=60

conn tgw-tun-1
    also=%default
    right=44.228.33.1
    mark=0x1/0xffffffff
    reqid=1

    vti-interface=ipsec10
    vti-routing=yes
    vti-shared=no
    leftvti=169.254.218.106/30          # change this to your <CGW inside IP>/30
    rightvti=169.254.218.105/30         # change this to your <AWS inside IP>/30

    auto=start

conn tgw-tun-2
    also=%default
    right=50.112.212.105
    mark=0x2/0xffffffff
    reqid=2

    vti-interface=ipsec1
    vti-routing=yes
    vti-shared=no
    leftvti=169.254.86.186/30
    rightvti=169.254.86.185/30

    auto=start
EOF

If you haven’t already, import your certs to the VM per Step 3 above.

13.2. System tunables for route-based IPsec

Two key rules:

  1. VTIs: disable_policy=1, disable_xfrm=0 (encrypt on the VTI, but don’t do policy lookups).
  2. Physical NICs: disable_policy=1, disable_xfrm=1 (never apply XFRM on the underlay).
# apply & (re)start
systemctl restart ipsec

# VTIs: encryption happens here; no policy lookup
sysctl -w net.ipv4.conf.ipsec10.disable_policy=1
sysctl -w net.ipv4.conf.ipsec10.disable_xfrm=0
sysctl -w net.ipv4.conf.ipsec1.disable_policy=1
sysctl -w net.ipv4.conf.ipsec1.disable_xfrm=0

# underlay NICs: never transform/encap here
sysctl -w net.ipv4.conf.enp1s0.disable_policy=1
sysctl -w net.ipv4.conf.enp1s0.disable_xfrm=1
sysctl -w net.ipv4.conf.enp2s0.disable_policy=1
sysctl -w net.ipv4.conf.enp2s0.disable_xfrm=1

# forwarding + relaxed reverse-path checks (asymmetric return possible with ECMP)
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.enp1s0.rp_filter=0
sysctl -w net.ipv4.conf.enp2s0.rp_filter=0
sysctl -w net.ipv4.conf.ipsec10.rp_filter=0
sysctl -w net.ipv4.conf.ipsec1.rp_filter=0

# make sure VTIs are up (they should come up automatically, but just in case)
ip link set ipsec10 up
ip link set ipsec1 up

# (optional, NAT-T + ESP overhead can require lower MTU, hence this tweak)
ip link set ipsec10 mtu 1436 || true
ip link set ipsec1  mtu 1436 || true

Optionally, you can also persist VTI sysctls to survive reboots:

cat >/etc/sysctl.d/99-ipsec-vti.conf <<'EOF'
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.enp1s0.rp_filter=0
net.ipv4.conf.enp2s0.rp_filter=0
net.ipv4.conf.ipsec10.rp_filter=0
net.ipv4.conf.ipsec1.rp_filter=0
net.ipv4.conf.enp1s0.disable_policy=1
net.ipv4.conf.enp1s0.disable_xfrm=1
net.ipv4.conf.enp2s0.disable_policy=1
net.ipv4.conf.enp2s0.disable_xfrm=1
net.ipv4.conf.ipsec10.disable_policy=1
net.ipv4.conf.ipsec10.disable_xfrm=0
net.ipv4.conf.ipsec1.disable_policy=1
net.ipv4.conf.ipsec1.disable_xfrm=0
EOF
sysctl --system

13.3. Route VPC CIDRs over the VTIs (ECMP)

# replace 10.10.0.0/16 with your VPC CIDR(s)
nmcli con mod cudn -ipv4.routes "10.10.0.0/16 192.168.1.10"
nmcli con mod ipsec10 ipv4.routes "10.10.0.0/16" ipv4.route-metric 1
nmcli con mod ipsec1  ipv4.routes "10.10.0.0/16" ipv4.route-metric 1
nmcli con up cudn
nmcli con up ipsec10
nmcli con up ipsec1

13.4. Quick verification

# path should be via a VTI
ip route get <EC2-private-IP> from <CUDN-IP>

# ICMP test (use one of your EC2 private IPs)
ping -I <CUDN-IP> -c3 <EC2-private-IP>

# SAs should show increasing packets/bytes when you ping/ssh
ip -s xfrm state | sed -n '1,160p'

# optional: confirm tunnel IKE/Child SAs are up
ipsec status | grep -E "established|routing|vti-interface"

And if you go to VPN console:

both-tunnels-up

Availability footnote

ECMP across two tunnels on one VM protects against a single TGW endpoint flap and gives smooth failover, but it’s not AZ-resilient. For true HA, run two IPsec VMs in different AZs (each with both tunnels) behind a health-checked on-prem router, or a pair of routers that can prefer the healthy VM.

Interested in contributing to these docs?

Collaboration drives progress. Help improve our documentation The Red Hat Way.
