Blog ENG - MS Azure - Post 2 2026
When hub-and-spoke grows from a tidy drawing into a living platform, the forwarding layer often becomes the first pressure point: more east-west traffic, more service-to-service flows, more transit through the hub, and suddenly the components doing packet forwarding feel like the bottleneck. Azure Virtual Network routing appliances target exactly that scenario: they provide a managed, high-performance forwarding layer, running on specialized networking hardware, that delivers low latency and high throughput for routed traffic flows.
A routing appliance is an Azure-managed routing device you deploy inside your virtual network and manage as a top-level Azure resource, using familiar Azure governance and lifecycle patterns. You place it in a dedicated subnet named VirtualNetworkApplianceSubnet, and it becomes part of the data path for forwarded traffic, removing the need to build your forwarding layer on virtual machines.
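Creating the dedicated subnet is ordinary VNet administration, even though the appliance resource itself is managed separately (and, during preview, not through the CLI). A minimal sketch with Azure CLI, where the resource group, VNet name, and address range are placeholders; only the subnet name is fixed by the feature:

```shell
# Carve out the dedicated subnet that hosts the routing appliance.
# The subnet name VirtualNetworkApplianceSubnet is required; the
# resource group, VNet name, and address range are placeholders.
az network vnet subnet create \
  --resource-group rg-hub \
  --vnet-name vnet-hub \
  --name VirtualNetworkApplianceSubnet \
  --address-prefixes 10.0.2.0/24
```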
Where it fits: hub-and-spoke transit, done at scale
Most deployments place the routing appliance in the hub virtual network to provide scalable spoke-to-spoke (east-west) transit, especially when the goal is to reduce routing bottlenecks and keep private traffic moving efficiently across spokes and shared services.
Three routing patterns you can apply
The routing appliance is typically introduced through user-defined routes (UDRs) in the spokes. Three common patterns emerge, differing mainly in how broadly you steer traffic to the appliance.
Pattern 1: route Azure private address space to the appliance
In this approach, spoke subnets use UDRs to route Azure private address space (for example, RFC1918 ranges) to the routing appliance, while internet egress and on-premises prefixes continue to use other next hops according to your architecture. This is useful when you want the appliance to carry east-west traffic without becoming the default next hop for all traffic, and when you already have an established egress design you prefer not to change.
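As a sketch, Pattern 1 can be expressed as a spoke route table carrying only the RFC1918 summaries. The resource names and the appliance next-hop IP (10.0.2.4) are placeholders for illustration, and the classic VirtualAppliance next-hop type is assumed, as with an NVA:

```shell
# Route table for a spoke: steer only private (RFC1918) prefixes to the
# routing appliance; internet and on-premises traffic keeps its existing
# next hops.
az network route-table create --resource-group rg-spoke1 --name rt-spoke1

for prefix in 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16; do
  az network route-table route create \
    --resource-group rg-spoke1 \
    --route-table-name rt-spoke1 \
    --name "to-${prefix//\//-}" \
    --address-prefix "$prefix" \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.0.2.4   # placeholder appliance data-path IP
done

# Associate the table with the spoke's workload subnet.
az network vnet subnet update \
  --resource-group rg-spoke1 \
  --vnet-name vnet-spoke1 \
  --name snet-workload \
  --route-table rt-spoke1
```

Because the default route is untouched, this table can be added to an existing spoke without disturbing its egress design.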
Pattern 2: default-route spokes to the appliance
Here, spoke subnets use a 0.0.0.0/0 UDR with the routing appliance as the next hop, and the hub then routes on-premises and internet traffic according to the overall design. This pattern is attractive when you want standardized, “cookie-cutter” spoke route tables and want to avoid maintaining many per-prefix UDR entries in each spoke. One important caution: review the documented limitations carefully before using a default route to the appliance, especially for Azure Private Link and Private Endpoint traffic.
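Pattern 2 collapses the spoke's routing decision to a single entry. A sketch, again with placeholder names and an assumed appliance data-path IP:

```shell
# One default route per spoke: all traffic leaves via the routing
# appliance, which then hands off to on-premises or internet paths
# according to the hub design.
az network route-table create --resource-group rg-spoke1 --name rt-spoke1-default

az network route-table route create \
  --resource-group rg-spoke1 \
  --route-table-name rt-spoke1-default \
  --name default-to-appliance \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4   # placeholder appliance data-path IP
```

The appeal is that every spoke gets an identical table; the trade-off is that the caution above about Private Link and Private Endpoint traffic applies in full.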
Pattern 3: RFC1918 to the appliance, default route to egress
This pattern routes RFC1918 prefixes to the appliance for predictable east-west and private transit while sending 0.0.0.0/0 to your chosen egress solution. It’s useful when you want consistent east-west steering through the appliance but still want internet egress pinned to a dedicated egress component to reduce the risk of asymmetric routing through a firewall.
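Pattern 3 combines the two: private prefixes go to the appliance, while the default route points at a dedicated egress component (here an assumed firewall private IP, 10.0.1.4). All names and addresses are placeholders:

```shell
# RFC1918 to the routing appliance, 0.0.0.0/0 pinned to the egress
# component, so internet-bound traffic never transits the appliance.
az network route-table create --resource-group rg-spoke1 --name rt-spoke1-split

for prefix in 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16; do
  az network route-table route create \
    --resource-group rg-spoke1 --route-table-name rt-spoke1-split \
    --name "private-${prefix//\//-}" \
    --address-prefix "$prefix" \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.0.2.4   # placeholder appliance data-path IP
done

az network route-table route create \
  --resource-group rg-spoke1 --route-table-name rt-spoke1-split \
  --name default-to-egress \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4   # placeholder egress (firewall) IP
```

Keeping the egress next hop explicit in the spoke table is what makes the internet path symmetric through the firewall rather than the appliance.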
Why it’s interesting: throughput, flows, and operational simplicity
The routing appliance is positioned as a lightweight, high-performance forwarding layer meant to reduce the risk that the forwarding tier becomes the choke point. In preview, the documented bandwidth tiers and scaling guidance include 50 Gbps, 100 Gbps, and 200 Gbps options, with corresponding maximum connections per second and maximum concurrent flows that scale up significantly at higher tiers. More broadly, the intent is horizontal scaling and accelerated east-west flows with low latency to meet large bandwidth demands.
From an “Azure-native” standpoint, it integrates with familiar virtual network constructs and governance, and it supports common VNet features such as network security groups, admin rules, user-defined routes, and Azure NAT Gateway. For availability, it provides built-in high availability and zone resilience by default, and it doesn’t require an additional load balancer in front of it (and a load balancer placed in front won’t forward traffic to it).
Preview realities to keep in mind
Routing appliances are currently in preview and are intended for testing, evaluation, and feedback rather than production workloads. During preview there are limits: up to two routing appliances per subscription, a capped maximum bandwidth per appliance, IPv4-only support (IPv6 is out of scope), no metrics or logs, and no support for client tools such as Azure CLI, PowerShell, or Terraform. There are also limitations around global and cross-region private endpoints, and regional availability is restricted. The preview is free, with advance notice planned before billing is enabled.
Final thoughts
What I like most about this feature is the direction: it separates forwarding performance from VM operations. Instead of engineering and maintaining a forwarding fleet of NVAs, you get an Azure-managed forwarding layer designed to scale and accelerate east-west traffic in hub-and-spoke topologies. At the same time, it’s still preview, and the missing day-2 essentials (especially observability and automation tooling support) mean it’s best approached as a controlled pilot rather than a production cornerstone today.
If the feature matures with stronger operational capabilities, it has the potential to become one of those quiet building blocks that makes large Azure networks feel simpler, not because the topology is less complex, but because the forwarding layer stops being the piece you constantly fight.