ExpressRoute, VNet‑to‑VNet Connectivity, and the Simple Toggles That Keep You on the Fast Path

Blog ENG - MS Azure - Post 1 2026

There’s a moment in almost every Azure network engagement when someone asks: Why are my inter‑VNet flows taking the scenic route?
If you’ve ever watched packets hairpin out to an ExpressRoute peering location before coming back into another VNet, you know the feeling. As consultants, we’re paid to turn complexity into clarity, and in this case clarity arrives in the form of a few small, powerful checkboxes.

Today, you can actively control whether virtual networks talk to each other (or to Virtual WAN) over ExpressRoute.
The result: cleaner routing, less latency, and fewer surprises during migrations or coexistence phases.
Here’s how I explain it to customers and how I design for it in real projects.

Why VNet‑to‑VNet over ExpressRoute needs a second look

If you connect multiple VNets to the same ExpressRoute circuit, Azure will happily exchange routes via the Microsoft Enterprise Edge (MSEE) devices at the peering location. That’s fine for hybrid traffic (on‑prem to Azure), but it’s rarely optimal for Azure‑to‑Azure flows.
The peering location is not your Azure region; it’s a separate Point of Presence. Sending east‑west traffic through it can add latency and load your ExpressRoute gateways with work they shouldn’t have to do.

The two places you control the behaviour

There are two control points you’ll care about in most designs:

  1. The ExpressRoute Virtual Network Gateway (VNG) in customer‑managed hub/spoke designs.
  2. The Azure Virtual WAN hub in VWAN‑based designs.

Each offers a simple toggle to accept or block prefixes learned from ExpressRoute that originate from other Azure networks.

1) On the VNG (customer‑managed hub/spoke)

  • Allow traffic from remote virtual networks
  • Allow traffic from remote Virtual WAN networks

With these toggles, the gateway either accepts or filters the MSEE‑learned prefixes that belong to remote VNets or VWAN hubs. Filtering those prefixes prevents inter‑VNet traffic from taking the ExpressRoute path by default, which is exactly what you want if your design prefers peering or NVA‑mediated routing inside Azure.

See “Diagram 1 – ER VNG traffic toggles” for a portal‑level snapshot of where this lives:

Diagram 1 - ER VNG traffic toggles
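If you prefer to set this outside the portal, the checkboxes map to properties on the virtual network gateway resource. Here’s a minimal Azure CLI sketch; the resource names are placeholders, and I’m using the generic --set form against the ARM property names (allowRemoteVnetTraffic, allowVirtualWanTraffic) that back the portal toggles:

```shell
# Filter MSEE-learned prefixes from remote VNets and VWAN hubs on a
# customer-managed ExpressRoute gateway. Names below are examples.
az network vnet-gateway update \
  --name er-gw-hub-weu \
  --resource-group rg-network-hub \
  --set allowRemoteVnetTraffic=false \
  --set allowVirtualWanTraffic=false
```

Setting both to false keeps the gateway focused on hybrid traffic; inter‑VNet flows then have to use peering or whatever east‑west path you’ve designed.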

2) On the VWAN hub

  • Allow traffic from non‑Virtual WAN networks

This controls whether the VWAN hub imports prefixes that originated on a customer‑managed VNG over ExpressRoute. Keeping it disabled avoids accidental hairpins and keeps routing governed by hub‑native policies and preferences.

See “Diagram 2 – ER VWAN traffic toggles” for a portal‑level snapshot of where this lives:

Diagram 2 - ER VWAN traffic toggles
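The same idea applies on the VWAN side, where the checkbox lives on the hub’s ExpressRoute gateway resource. A hedged sketch, again with placeholder names and using the generic --set form against the ARM property (allowNonVirtualWanTraffic) behind the portal toggle:

```shell
# Keep the VWAN hub's ExpressRoute gateway from importing prefixes that
# originate on customer-managed VNGs. Names below are examples.
az network express-route gateway update \
  --name ergw-vhub-weu \
  --resource-group rg-vwan \
  --set allowNonVirtualWanTraffic=false
```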

Practitioner’s tip: In legacy deployments, you may find these accept/allow settings enabled because teams historically relied on ExpressRoute to stitch Azure networks together.
In modern designs, I default to blocking inter‑VNet over ExpressRoute and explicitly enable peering or hub‑to‑hub preferences for east‑west flows.

Hub‑to‑Hub behaviour in VWAN: one parent versus many

VWAN brings an extra wrinkle: hub‑to‑hub routing.

  • Same parent VWAN resource: Use Virtual Hub Routing Preference set to AS‑Path so hubs prefer their native hub‑to‑hub path over ExpressRoute. This keeps traffic on the Microsoft backbone directly between hubs instead of detouring through MSEE.
    See “Diagram 3 – VWAN Hub‑to‑Hub: Same Virtual WAN parent resource”:
Diagram 3 - VWAN Hub‑to‑Hub: Same Virtual WAN parent resource
  • Different parent VWAN resources: Hub routing preference isn’t applicable. In this rare case, directly connected VNets behind those hubs will continue to transit via ExpressRoute. It’s not ideal, but it’s predictable and you can design around it.
    See “Diagram 4 – VWAN Hub‑to‑Hub: Different Virtual WAN parent resource”:
Diagram 4 - VWAN Hub‑to‑Hub: Different Virtual WAN parent resource
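For the same‑parent case, the routing preference is a per‑hub setting. A minimal sketch with placeholder names, assuming the --hub-routing-preference parameter (values ExpressRoute, VpnGateway, ASPath) available in recent Azure CLI versions:

```shell
# Prefer AS-Path so hubs under the same Virtual WAN parent pick the native
# hub-to-hub path over ExpressRoute. Names below are examples; apply the
# same preference on each hub that participates in hub-to-hub traffic.
az network vhub update \
  --name vhub-weu \
  --resource-group rg-vwan \
  --hub-routing-preference ASPath
```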

Important transit filters you can’t (and shouldn’t) override

Even with the toggles, Azure will filter transit‑originated routes in specific cases to prevent unintended backdoors:

  • Static routes defined on VWAN VNet connections
  • Routes learned via VWAN BGP peering
  • Routes learned via Azure Route Server BGP peering
  • Routes learned from remote VWAN hubs via hub‑to‑hub

These are blocked regardless of your checkbox settings, which keeps the control plane sane and your traffic predictable.

A real‑world cleanup: UDR simplification across regions

Here’s a scenario I frequently encounter: two customer‑managed hub/spoke regions connected to the same ExpressRoute circuit for resilient on‑prem connectivity, and Global VNet Peering between the hubs for east‑west flows.
Historically, we had to stuff the AzureFirewallSubnet UDR with one route per remote spoke (think: 100+ entries) to match the specificity of the prefixes learned via ExpressRoute (longest prefix wins, so a broad UDR would lose to a more specific BGP route). That approach is painful to manage, fragile during growth, and easy to get wrong. Once you filter remote VNG prefixes over ExpressRoute, you can summarize: the same UDR becomes one aggregate (e.g., 10.100.0.0/16) that wins for all remote spokes, keeping traffic on peering and off ExpressRoute. Symmetry matters, so repeat the change in the opposite hub.

  • Before: one UDR entry per remote spoke.
    See “Diagram 5 – Inter‑region VNet‑to‑VNet UDR management before toggles”:
Diagram 5 - Inter‑region VNet‑to‑VNet UDR management before toggles
  • After: a single summary route per remote region.
    See “Diagram 6 – Inter‑region VNet‑to‑VNet UDR management after toggles”:
Diagram 6 - Inter‑region VNet‑to‑VNet UDR management after toggles
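The “after” state is a single route per remote region. A sketch of what that summary entry looks like in Azure CLI; the route table name, prefix, and firewall IP are placeholders for this example:

```shell
# One aggregate route covering all spokes in the remote region, pointed at
# the local firewall. Replaces the 100+ per-spoke entries. Example values.
az network route-table route create \
  --resource-group rg-network-hub \
  --route-table-name rt-azfw-weu \
  --name to-remote-region \
  --address-prefix 10.100.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.200.1.4
```

Remember this only works once the remote VNG prefixes are filtered from ExpressRoute; otherwise the more specific BGP routes still win over the /16.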

Design guardrails I use in the field

  • Default to blocking inter‑VNet via ExpressRoute: Treat ExpressRoute as hybrid ingress/egress; keep east‑west on peering or hub‑to‑hub.
  • Prefer backbone routes inside Azure: When VWAN hubs share a parent, set AS‑Path preference so they pick hub‑native paths.
  • Summarize aggressively in UDRs once you filter remote VNG prefixes: Less config, fewer human errors, cleaner growth.
  • Respect transit filters: If Azure blocks transitive routes, it’s protecting your design. Don’t fight it; architect around it.
  • Use toggles intentionally during migrations: Make coexistence deliberate, documented, and time‑boxed.

The consultant’s takeaway

In network strategy, small controls often unlock big outcomes.
These toggle‑based filters do exactly that. They let you pick the right data plane for the job – ExpressRoute for hybrid, backbone for Azure‑to‑Azure – and they simplify your route management in the process.
If your inter‑VNet traffic is still taking the scenic route, grab the diagrams above, open your gateways and hubs, and make the path explicit. Your packets and your change calendar will thank you.

Final Thoughts

The beauty of these controls is their simplicity: a couple of well‑placed toggles let you separate hybrid from east‑west traffic and keep Azure‑to‑Azure flows on the backbone where they belong. In practice, that means lower latency, fewer surprises, and a routing posture that scales as your estate grows, without the brittle UDR sprawl we’ve all inherited at least once.

When I’m designing or refactoring customer environments, I treat ExpressRoute as hybrid plumbing first. Then I make east‑west intent explicit: hub‑to‑hub on the backbone, spoke‑to‑spoke via peering or through the hub services you control. The toggles simply codify that intent, so your gateways and hubs don’t try to be helpful in ways that undermine performance.
If your current topology still relies on the scenic route, now’s the time to tighten it up. Review the toggles on your gateways and hubs, and make the desired path unambiguous. It’s a small change with outsized impact, exactly the kind of refinement that turns a good Azure network into a great one.