Goodbye 1.25 Gbps: AWS Site‑to‑Site VPN grows up to 5 Gbps per tunnel

If you’ve ever had to squeeze big‑data moves, disaster‑recovery syncs, or chatty hybrid apps through a 1.25 Gbps IPSec tunnel, you know the feeling: one eye on CloudWatch, the other on dropped packets, and a sixth sense for which flow will get hashed where.
That ceiling has finally lifted. AWS has introduced Large Bandwidth Tunnels (LBT) delivering up to 5 Gbps per tunnel, a straight 4× jump over the previous 1.25 Gbps cap.

This change matters not just for raw speed; it also simplifies hybrid connectivity. Many of us relied on ECMP across multiple standard tunnels to hit higher aggregate throughput. Now, for a big slice of use cases, you can get that capacity from a single tunnel without stitching together a bundle of paths.

Where 5 Gbps tunnels fit in your architecture

  • Transit Gateway & Cloud WAN: Large Bandwidth Tunnels are supported on TGW VPN (including private‑IP VPN) and AWS Cloud WAN VPN. If you’re still terminating on Virtual Gateway (VGW), this isn’t for you (LBT is not supported there).
  • Regions: The feature is available across AWS commercial and AWS GovCloud (US) Regions where Site‑to‑Site VPN exists, with the following exceptions at launch: Asia Pacific (Melbourne), Israel (Tel Aviv), Europe (Zurich), Canada West (Calgary), and Middle East (UAE).
  • Use cases:
    • Data center connectivity: for bandwidth‑intensive hybrids (analytics pipelines, backup/restore, media workloads).
    • Direct Connect overlay or backup: keep traffic encrypted and resilient while your DX does the heavy lifting.

How the new mode is exposed (and a subtle but important constraint)

AWS exposes an option on the VPN connection to choose Tunnel Bandwidth: Standard (up to 1.25 Gbps) or Large (up to 5 Gbps). The choice applies to both tunnels of the VPN connection (remember: each AWS VPN connection includes two tunnels for HA). You cannot mix Standard and Large within the same connection.

Personal take: I like this “connection‑level” switch. It prevents lopsided failovers and asymmetric behavior that can be deceptively hard to troubleshoot at 2 a.m.
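
To make this concrete, here’s a minimal boto3 sketch of creating a Transit Gateway VPN connection in Large mode. The resource IDs are placeholders, and the TunnelBandwidth option name and values are assumptions mirroring the console’s Standard/Large choice, so confirm the exact field against the current EC2 API reference.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a Site-to-Site VPN connection that terminates on a Transit Gateway.
# NOTE: the "TunnelBandwidth" key mirrors the console's Standard/Large
# choice; treat the exact option name and values as assumptions and
# confirm them against the current EC2 API reference.
response = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",   # placeholder ID
    TransitGatewayId="tgw-0123456789abcdef0",    # placeholder ID
    Options={
        "StaticRoutesOnly": False,   # dynamic (BGP) routing, required if you want ECMP later
        "TunnelBandwidth": "large",  # assumed value; "standard" would be the 1.25 Gbps default
    },
)

vpn_id = response["VpnConnection"]["VpnConnectionId"]
print(f"Created VPN connection {vpn_id} with large-bandwidth tunnels")
```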

ECMP is still your friend (just used more deliberately)

Even with 5 Gbps per tunnel, ECMP remains valuable for aggregate throughput:

  • Two tunnels in one LBT connection: up to 10 Gbps aggregate.
  • Two LBT connections (four tunnels): up to 20 Gbps aggregate.

Design note: TGW ECMP uses a 5‑tuple hash (protocol, src/dst IP, src/dst port). A single TCP/UDP flow maps to one tunnel, so one flow cannot exceed 5 Gbps. If your workload benefits from parallelism (multiple flows, partitioned transfers), you’ll achieve the advertised aggregates more predictably.
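
If you plan to lean on ECMP, remember it only kicks in when the Transit Gateway has VPN ECMP support enabled and the connections use dynamic (BGP) routing. A minimal boto3 sketch, assuming an existing TGW (the ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# ECMP across VPN attachments only works when the Transit Gateway has
# VPN ECMP support enabled and the VPN connections use BGP; traffic is
# then balanced per flow using the 5-tuple hash described above.
ec2.modify_transit_gateway(
    TransitGatewayId="tgw-0123456789abcdef0",   # placeholder ID
    Options={"VpnEcmpSupport": "enable"},
)
```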

Upgrade & downgrade paths without the drama

AWS outlines clean steps for moving between Standard and Large.
The gist:

  1. Create a new VPN connection in the desired mode (Large or Standard).
  2. Associate/propagate the attachment in the relevant TGW route tables.
  3. Verify end‑to‑end routing, then delete the old connection.

In practice, I schedule a brief maintenance window, pre‑stage CGW configs, and swing traffic after path validation. It keeps change risk low and rollback crisp.
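
Scripted, the swap looks roughly like the sketch below. It assumes the new connection already exists in the desired mode, uses placeholder IDs, and leaves the routing verification as a manual gate; adapt the route-table handling if your TGW relies on default association and propagation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

NEW_VPN_ID = "vpn-0aaaaaaaaaaaaaaaa"              # placeholder: new Large-mode connection
OLD_VPN_ID = "vpn-0bbbbbbbbbbbbbbbb"              # placeholder: existing Standard connection
TGW_ROUTE_TABLE_ID = "tgw-rtb-0123456789abcdef0"  # placeholder: relevant TGW route table

# 1. Find the TGW attachment that backs the new VPN connection.
attachments = ec2.describe_transit_gateway_attachments(
    Filters=[
        {"Name": "resource-type", "Values": ["vpn"]},
        {"Name": "resource-id", "Values": [NEW_VPN_ID]},
    ]
)["TransitGatewayAttachments"]
new_attachment_id = attachments[0]["TransitGatewayAttachmentId"]

# 2. Associate and propagate the attachment in the relevant route table.
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=TGW_ROUTE_TABLE_ID,
    TransitGatewayAttachmentId=new_attachment_id,
)
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=TGW_ROUTE_TABLE_ID,
    TransitGatewayAttachmentId=new_attachment_id,
)

# 3. Verify end-to-end routing (manual gate), then delete the old connection.
ec2.delete_vpn_connection(VpnConnectionId=OLD_VPN_ID)
```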

Real‑world considerations I’d bake into the design

  • Customer Gateway (CGW) capacity: 5 Gbps per tunnel means your firewall/VPN device must push line‑rate IPSec with your chosen cipher suites. Check CPU, crypto offload, and memory headroom.
  • Internet underlay: You’re only as fast as the weakest link. Ensure uplinks, peering, and QoS don’t choke the tunnel.
  • MTU/MSS tuning: With higher throughput, packetization inefficiencies become more visible. Right‑size MSS and keep fragmentation in check; a quick back‑of‑the‑envelope sketch follows this list.
  • Flow planning: If a single flow needs >5 Gbps, IPSec tunnels won’t change physics. Shard flows or parallelize transfers to ride ECMP cleanly.
  • Direct Connect interplay: As an overlay or backup, LBT gives you encrypted path diversity and consistent capacity when DX is unavailable or congested.
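
For the MSS point above, the arithmetic is easy to sanity-check. The overhead figure below is an illustrative assumption (it varies with cipher suite, NAT‑T, and IPv4 vs IPv6); the point is the method, not the exact number.

```python
# Back-of-the-envelope MSS calculation for TCP inside an IPSec tunnel.
# The overhead value is an illustrative assumption; it varies with the
# cipher suite, NAT traversal, and whether the inner traffic is IPv4 or IPv6.

PHYSICAL_MTU = 1500     # typical Ethernet underlay MTU
IPSEC_OVERHEAD = 73     # assumed ESP + tunnel-mode IP (+ NAT-T) overhead, example only
IP_HEADER = 20          # inner IPv4 header
TCP_HEADER = 20         # TCP header without options

tunnel_mtu = PHYSICAL_MTU - IPSEC_OVERHEAD
mss = tunnel_mtu - IP_HEADER - TCP_HEADER

print(f"Effective tunnel MTU: {tunnel_mtu} bytes")
print(f"Clamp TCP MSS to:     {mss} bytes to keep packets from fragmenting")
```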

A quick mental checklist before you flip the switch

  • Do my CGW(s) and underlay circuits sustain 5 Gbps+ of encrypted traffic?
  • Is my TGW using dynamic routing so ECMP is available when I need higher aggregates?
  • Have I planned parallel flows for workloads that need more than 5 Gbps end‑to‑end?
  • Am I in a supported Region?

When I’d still reach for Direct Connect first

If your profile is consistent high throughput + predictable latency + low jitter, DX remains the right foundation. LBT‑backed VPN is excellent as encrypted overlay/backup and for bursty or time‑bound transfers that don’t justify another circuit. Use the right tool for the right traffic class.

Final thoughts

From an architect’s chair, the 5 Gbps per‑tunnel capability modernizes the IPSec option on AWS and reduces operational complexity for a ton of hybrid scenarios. It doesn’t remove the need for good network hygiene or flow‑aware designs, but it does remove one of the biggest bottlenecks we’ve been living with. That’s real progress.