
Aria Networks, a Palo Alto-based networking startup founded in January 2025, has raised $125 million in its first funding round to build what it describes as the first AI-native network designed from the ground up to maximize token efficiency.
The round was backed by Sutter Hill Ventures, Atreides Management, Valor Equity Partners, and Eclipse Ventures.
Alongside the funding, the company announced the general availability of Deep Networking, its core platform for AI infrastructure.
What Is Deep Networking?
Deep Networking is built on five pillars: AI-optimized hardware and hardened SONiC, fine-grained end-to-end telemetry, intelligent agents at every layer, intent-based configuration, and real-time adaptive performance optimization.
The telemetry component is particularly notable: Aria collects data at 100 to 10,000 times finer resolution than traditional tools, across switches, transceivers, and hosts in a single unified view. Rather than waiting for operators to flag an issue, the system continuously evaluates network state and takes action in real time to keep accelerators productive.
On the configuration side, operators can express what they need in natural language, and the platform configures the fabric accordingly. Aria also exposes an MCP server, allowing external systems such as job schedulers and LLM routers to query network state directly and integrate it into their own decision-making.
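The article doesn't document the MCP interface itself, but the integration pattern it describes, a job scheduler consuming network state before placing work, might look something like the sketch below. All names, field names, and the data shape are hypothetical illustrations, not Aria's API.

```python
# Hypothetical sketch: a scheduler consults per-rack link health
# (as an MCP-style network-state query might return it) before
# choosing where to place a job. The data shape is invented.

def pick_healthy_racks(link_health: dict[str, float],
                       needed: int,
                       min_health: float = 0.99) -> list[str]:
    """Return the `needed` healthiest racks, skipping any whose
    link-health score falls below the threshold."""
    candidates = [(score, rack) for rack, score in link_health.items()
                  if score >= min_health]
    # Stable sort by health, best first; ties keep insertion order.
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return [rack for _, rack in candidates[:needed]]

# Suppose a network-state query reported these scores:
state = {"rack-a": 1.00, "rack-b": 0.97, "rack-c": 0.995, "rack-d": 1.00}
print(pick_healthy_racks(state, needed=2))  # -> ['rack-a', 'rack-d']
```

The point of the pattern is that placement decisions can route around degraded links before a job starts, rather than after telemetry flags a failure mid-run.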
The Problem With Existing Networks
Traditional networking is typically evaluated in terms of bandwidth and latency, but Aria is centering its platform on two different metrics: Model FLOPS Utilization (MFU) and token efficiency. MFU measures how much of an accelerator's theoretical processing power is actually being used. In practice, MFU for training workloads typically runs between 33% and 45%, and inference often comes in below 30%.
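As a rough illustration of what MFU measures (the accelerator numbers below are hypothetical, not from Aria):

```python
def mfu(achieved_tflops: float, peak_tflops: float) -> float:
    """Model FLOPS Utilization: the fraction of an accelerator's
    theoretical peak compute that a workload actually sustains."""
    return achieved_tflops / peak_tflops

# Hypothetical accelerator: 1,000 TFLOPS peak, sustaining 400 TFLOPS
# during training -- an MFU of 40%, inside the 33-45% range cited above.
print(f"{mfu(400, 1000):.0%}")  # -> 40%
```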
That gap is expensive: a single bad NIC in a 10,000-XPU cluster can drop MFU by 1.7% during an all-reduce operation, and a bad transceiver can trigger persistent traffic rerouting that burns both MFU and a significant share of infrastructure spend.
From Funding to Deployment
Aria has moved from its January 2025 founding to live customer deployments in just over a year. That pace is unusual for an infrastructure company, where hardware development and enterprise sales cycles tend to move slowly. The company says it already has customer orders in place and is deploying the platform in production environments.
Part of how Aria maintains that pace is through what it calls forward deployed engineers (FDEs). These engineers are embedded with customers from deployment onward, and everything they learn in the field gets fed directly back into the product.
Rather than operating as a separate professional services team, they function as a continuous feedback loop between customers and the engineering team, with Aria targeting weekly software updates as opposed to the semi-annual or annual cycles typical of established networking vendors.
The $125 million raised will go toward expanding those deployments as enterprise demand for AI infrastructure continues to grow.
