AWS Interconnect: managed private connectivity between AWS and other clouds
If you’ve ever stitched AWS to another cloud the old way, you’ll recognize the pain AWS Interconnect aims to eliminate.
As someone who lives in cloud networking and security every day, I’ll walk you through what Interconnect is, how it works, and where it fits, layering in the personal takeaways that matter when you’re designing real platforms.
What AWS Interconnect is
AWS Interconnect is a managed private connectivity service that lets you create high‑speed, private network links between your Amazon VPCs and VPCs on other public clouds without deploying or configuring physical/virtual routers yourself.
You pick your source AWS Region, the destination region on the other cloud provider, and the capacity you need; AWS and the partner cloud provision redundant capacity and hand you a single logical object: the interconnect.
Today, Interconnect is in Public Preview with Google Cloud, offering a free 1 Gbps preview connection per customer in supported region pairs. Preview connections will be removed at General Availability, and AWS advises not routing production traffic during the preview. Pricing will be announced before GA.
Why this matters (beyond the press release)
From a practitioner’s perspective, the magic is in the abstraction and time‑to‑value:
- No router build‑outs across clouds and no manual cross‑connect hustle; the network plumbing is handled by AWS and the peer CSP.
- Provisioning in minutes, not weeks, aligns with modern DevOps rhythms where environments are ephemeral and connectivity should be, too.
- Traffic rides the AWS global backbone until it’s handed off to the other provider, avoiding internet paths and their variability.
As an architect, this means you can design for intent (which VPCs must talk, under which guardrails) and let the service deliver the pathing and resiliency underneath.
How Interconnect works
AWS and Google Cloud pre‑provision capacity in each supported region pair across multiple network devices and at least two physical buildings with independent power and networking. All links between AWS and Google edge devices are encrypted by default with IEEE 802.1AE (MACsec), and devices only transmit customer traffic when the encryption session is active.
Interconnect multicloud architecture

Picture two separate facilities, each hosting redundant AWS routers and peer CSP routers. The logical attachment (green in the diagram) spans those facilities, riding multiple logical connections underneath. In the console, you only see the abstracted interconnect, not the device topology: resiliency is baked in, not bolted on.
Creation flow (sketched in code after the list):
- Create: Start the request in AWS and obtain the activation key.
- Accept: Use the activation key on the other cloud provider to accept the request.
- Provision: The providers complete provisioning automatically.
- Attach: The interconnect attaches to the Direct Connect gateway (DXGW) on AWS and to Cloud Router on the peer cloud provider. No additional customer steps are required once both sides are approved.
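To make the sequencing concrete, here's a toy Python sketch of that flow. Everything in it is a placeholder: the helper functions are not real SDK calls, and during the preview the create and accept steps are driven from each provider's console.

```python
# A toy orchestration sketch of the four-step flow. The helpers below are
# hypothetical placeholders for console/API actions -- no Interconnect API
# surface is assumed here.

def request_interconnect(source_region: str, destination_region: str,
                         capacity_gbps: int) -> str:
    """Step 1 (Create): start the request on AWS and return the activation key."""
    raise NotImplementedError("placeholder for the AWS console/API action")

def accept_on_google(activation_key: str, cloud_router: str) -> None:
    """Step 2 (Accept): present the activation key on the Google Cloud side."""
    raise NotImplementedError("placeholder for the Google Cloud console action")

def wait_until_available(activation_key: str) -> str:
    """Step 3 (Provision): poll until both providers report the link available."""
    raise NotImplementedError("placeholder for a status poll")

def provision_interconnect(aws_region: str, gcp_region: str,
                           capacity_gbps: int) -> str:
    key = request_interconnect(aws_region, gcp_region, capacity_gbps)
    accept_on_google(key, cloud_router="cr-multicloud")   # assumed Cloud Router name
    interconnect_id = wait_until_available(key)
    # Step 4 (Attach): happens automatically -- the interconnect ends up attached
    # to the DXGW on AWS and to Cloud Router on Google Cloud.
    return interconnect_id
```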
Key concepts you’ll actually use
- Multicloud Interconnect: The logical object that represents the provisioned capacity between AWS and another CSP, delivered in a highly available configuration.
- Attach point: Each interconnect must attach to a logical construct on each cloud. On AWS this is the Direct Connect gateway (DXGW); on Google Cloud it’s Cloud Router. You need both in place before you create an interconnect (a quick boto3 sketch for the DXGW side follows this list).
- Direct Connect gateway integration: DXGW acts as the global attachment hub for services like Virtual Private Gateway (VGW), Transit Gateway (TGW), Cloud WAN, and Interconnect.
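If you prefer to script the AWS-side prerequisite, the DXGW can be created with boto3's Direct Connect client. A minimal sketch, assuming default credentials and an example private ASN; the Cloud Router on the Google side still has to be created separately.

```python
import boto3

# Create the Direct Connect gateway (DXGW) that will serve as the AWS-side
# attach point for the interconnect. ASN 64512 is an example private ASN --
# pick one that fits your BGP design.
dx = boto3.client("directconnect", region_name="us-east-1")

response = dx.create_direct_connect_gateway(
    directConnectGatewayName="multicloud-dxgw",
    amazonSideAsn=64512,
)

dxgw_id = response["directConnectGateway"]["directConnectGatewayId"]
print(f"DXGW ready as attach point: {dxgw_id}")
```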
Deployment patterns
There are three common patterns; choose based on scope:
- Regional VPC connectivity with VGW or TGW: Use a VGW or TGW in a specific AWS Region, attached via DXGW, to reach the local interconnect for that Region’s paired Google Cloud region. TGW/VGW do not stretch to interconnects outside the paired region (see the association sketch after this list).
- Global reach with Cloud WAN: Define Core Network Edges (CNEs) across AWS Regions. Using native Direct Connect attachments, any CNE can reach any interconnect globally as long as it’s attached to the same DXGW, enabling simpler global policy and routing.
- Multi‑interconnect for HA/latency optimization: The architecture supports multiple interconnects; in practice you can anchor traffic to the nearest region pair for latency while retaining failover to a secondary pair (operational policy lives in TGW route tables or Cloud WAN segments).
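For the regional pattern, tying an existing Transit Gateway to the DXGW is a standard Direct Connect association call. A minimal sketch, assuming the TGW and DXGW already exist and using example IDs and prefixes:

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Associate a Transit Gateway with the DXGW so VPCs behind the TGW can reach
# the interconnect for this region pair. The allowed prefixes control what is
# advertised toward the other cloud -- keep them summarized.
response = dx.create_direct_connect_gateway_association(
    directConnectGatewayId="dxgw-0123456789abcdef",   # example DXGW ID
    gatewayId="tgw-0123456789abcdef0",                # example TGW ID
    addAllowedPrefixesToDirectConnectGateway=[
        {"cidr": "10.20.0.0/16"},
        {"cidr": "10.21.0.0/16"},
    ],
)

state = response["directConnectGatewayAssociation"]["associationState"]
print(f"Association state: {state}")
```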
Region pairs available in Public Preview
- US East (N. Virginia) → Google Cloud N. Virginia
- US West (N. California) → Google Cloud Los Angeles
- US West (Oregon) → Google Cloud Oregon
- Europe (London) → Google Cloud London
- Europe (Frankfurt) → Google Cloud Frankfurt
Operations: health & capacity you can trust (and measure)
- CloudWatch Network Synthetic Monitor: Each interconnect includes a single synthetic probe for round‑trip latency and packet loss at no extra cost; you can wire CloudWatch alarms to thresholds you care about (Network Health Indicator isn’t supported yet for Interconnects).
- Bandwidth utilization metric: CloudWatch exposes percentage utilization of provisioned capacity per interconnect; use it to right‑size or alert on saturation before congestion bites (an example alarm definition follows this list).
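Here's a hedged alarm sketch for the utilization metric. The put_metric_alarm call is standard CloudWatch, but the namespace, metric name, and dimension key below are assumptions; check the names your interconnect actually publishes before relying on it.

```python
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when provisioned-capacity utilization stays above 80% for 15 minutes.
# NOTE: namespace, metric name, and dimension key are placeholders -- confirm
# the real metric names for your interconnect in the CloudWatch console.
cw.put_metric_alarm(
    AlarmName="interconnect-utilization-high",
    Namespace="AWS/Interconnect",                  # placeholder namespace
    MetricName="BandwidthUtilization",             # placeholder metric name
    Dimensions=[{"Name": "InterconnectId", "Value": "ic-0123456789abcdef"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:netops-alerts"],  # example SNS topic
)
```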
Design notes that save time later
- Address families: Interconnects carry both IPv4 and IPv6.
- Prefix limits: AWS can receive up to 1000 IPv4 plus 1000 IPv6 prefixes from Google Cloud; plan summarization and segment boundaries accordingly.
- MTU: AWS sets MTU 8500 on multicloud Interconnects automatically; handy for performance, but confirm end‑to‑end MTU across paths (a quick probe sketch follows this list).
- Gateway coexistence: You can attach Interconnect to a DXGW that already has Private or Transit virtual interfaces; you can keep adding VIFs of the same type alongside Interconnect.
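To confirm end‑to‑end MTU, a don’t‑fragment ping from a test instance on one side toward a target on the other is usually enough. A minimal sketch for a Linux host; the target address is an example, and 8472 bytes is the payload size that corresponds to an 8500‑byte MTU.

```python
import subprocess

# Probe path MTU with a don't-fragment ping from a Linux instance.
# 8500-byte MTU minus 20 (IPv4) + 8 (ICMP) header bytes = 8472-byte payload.
TARGET = "10.200.0.10"   # example address of a test VM on the Google Cloud side

def path_supports_payload(target: str, payload_bytes: int) -> bool:
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "3", "-s", str(payload_bytes), target],
        capture_output=True, text=True,
    )
    return result.returncode == 0

for size in (1472, 8472):
    ok = path_supports_payload(TARGET, size)
    print(f"{size + 28}-byte packets: {'pass' if ok else 'blocked or fragmented'}")
```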
A quick, real‑world flow (what I’d do in a pilot)
- Plan your addressing to avoid overlap (a quick overlap check is sketched after this list); decide on a pattern (VGW/TGW regional or Cloud WAN global).
- Create DXGW (AWS) and Cloud Router (Google) as attach points.
- Request the interconnect in the AWS Console, choose region pair and capacity (1 Gbps in preview), note the activation key.
- Accept the request on Google Cloud using the activation key; provisioning completes automatically; verify attachment to DXGW and Cloud Router.
- Monitor CloudWatch latency/loss and utilization; set alarms; validate failover and routing intent in TGW or Cloud WAN.
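For the addressing step, the standard-library ipaddress module is enough to catch overlaps before anything is provisioned. A small sketch with example CIDRs:

```python
from ipaddress import ip_network
from itertools import combinations

# Example CIDRs -- replace with your actual AWS and Google Cloud ranges.
cidrs = {
    "aws-prod-vpc": "10.20.0.0/16",
    "aws-shared-services": "10.21.0.0/16",
    "gcp-prod-vpc": "10.30.0.0/16",
    "gcp-analytics": "10.20.128.0/17",   # deliberately overlaps aws-prod-vpc
}

overlaps = [
    (a, b)
    for (a, net_a), (b, net_b) in combinations(cidrs.items(), 2)
    if ip_network(net_a).overlaps(ip_network(net_b))
]

if overlaps:
    for a, b in overlaps:
        print(f"Overlap: {a} ({cidrs[a]}) <-> {b} ({cidrs[b]})")
else:
    print("No overlapping ranges -- safe to route across the interconnect.")
```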
Personal take: where Interconnect shines
- Architectural simplicity without sacrificing resiliency: The right kind of abstraction for multicloud underlay.
- Operational velocity: Minutes‑level provisioning is the difference between networking being a blocker vs an enabler for platform teams.
- Policy alignment: Clean handoff to TGW/Cloud WAN for segmentation, route control, and intent, so you keep the levers where they belong.
- Security posture: Default MACsec on provider‑edge links reduces the attack surface, and you don’t have to bolt it on later.
Final thoughts
AWS Interconnect brings private, resilient multicloud connectivity into the “just configure it” era. If you’re migrating data, building distributed services across providers, or simply want to cut the operational overhead of bespoke cross‑cloud networks, Interconnect is worth piloting now to shape your design patterns before GA. The key is to think in attachments and policy: pick the right attach point (DXGW), choose the regional vs global pattern (VGW/TGW or Cloud WAN), and let the managed underlay do the heavy lifting. When multicloud stops being an obstacle course and becomes a design choice, teams can focus on application intent, and that’s the real win.