
The internet’s missing layer and our investment in Datum
If you’ve worked at a tech company of a certain vintage, you may have interfaced with a unique and unfortunately dying character: the network engineer. They were the ones who spoke in acronyms like BGP and ASNs, who could trace a packet through a network by sheer force of will, and who were the only people allowed to touch the glowing racks in the data center. These digital wizards were the keepers of the arcane knowledge that made the internet actually work.
A decade ago, this was the norm; every software company needed to think about networking, because every software company was in the business of software delivery. To build a serious software company, you had to have a deep, foundational understanding of networking. Your software and data didn’t just magically make it to where it needed to go – you engineered it to get there.
Fast forward to today, and that world feels like ancient history. A generation of brilliant engineers has grown up in the warm, curated embrace of the cloud. They think in VPCs, serverless functions, and managed services. The messy, physical reality of routers, ports, and switches has been abstracted away — and for good reason. It allowed us to build faster and scale more easily than ever before.
But that abstraction is starting to leak. The world is no longer contained within us-east-1. The tectonic plates of technology are shifting, creating a new set of problems that the old cloud primitives were never designed to solve. Data is all over the place, and the typical company is storing and deploying across Vercel, Databricks, Cloudflare, Snowflake, Neon, AWS, and dozens of other services simultaneously. Moving data and packets around has never been more important (and difficult), yet we have never been more poorly equipped to handle it. We’ve lost a critical skill set, and the ramifications for cost, compliance, security, and even innovation are becoming impossible to ignore.
The internet needs a new generation of builders who can innovate up and down the stack, including the networking layers that sit between the clouds. In order to make that happen, they’ll need what every developer needs: access.
The networking abstraction has gone bad
For years, the networking bargain has been simple: let the cloud provider handle the network, and you focus on your application. This worked beautifully when your entire world—your app, your data, your customers—lived within a single cloud. And this was pretty much how things went pre-2020s. But over the past 5-10 years we have seen unbelievable cloud sprawl, and your data stack is now split across multiple providers, sometimes even dozens.
This sprawl and collective network amnesia are causing major problems:
1) Compliance has become a game of chance
Imagine you’re selling to a European client. They have a simple request: "Ensure my data never, ever leaves the EU."
A decade ago, a network engineer would solve this by building a virtual network that constrained traffic to European data centers and peers. Today, a typical backend engineer, working with cloud primitives, is in a bind. They can't even see their network, let alone control it. So, how do they guarantee a call from their front end in Madrid doesn't take a weird, non-compliant detour on its way to a database in Frankfurt? They can't. Instead, they cross their fingers, maybe encrypt, and hope the cloud's routing logic aligns with their legal requirements.
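To make the bind concrete, here is a minimal sketch of the check a team might want to run: given the country of each hop on a request path (in practice resolved via traceroute plus an IP-geolocation database), verify that the path never leaves the EU. The hop data below is hypothetical and the snippet is illustrative, not a real compliance tool.

```python
# Illustrative sketch: verify that every network hop on a request path stays
# inside the EU. In practice the hop countries would come from traceroute
# output plus an IP-geolocation database; here they are supplied directly
# as hypothetical data.

EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
    "SI", "ES", "SE",
}

def path_is_eu_compliant(hop_countries: list[str]) -> bool:
    """Return True only if every hop in the path is in an EU country."""
    return all(country in EU_COUNTRIES for country in hop_countries)

# A Madrid-to-Frankfurt request that stays in the EU passes...
print(path_is_eu_compliant(["ES", "FR", "DE"]))  # True
# ...but one that detours through a US peering point fails.
print(path_is_eu_compliant(["ES", "US", "DE"]))  # False
```

The hard part, of course, is that today’s cloud primitives give you neither the hop data to run this check nor any lever to change the route when it fails.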
2) The cloud bill is a black box
The cost implications are similarly problematic. There are any number of database companies whose entire model is built on replicating data everywhere for resilience and low latency. It’s a brilliant architecture… and also completely uneconomical in the public cloud because they (and we) don't have the primitives to control how traffic moves between regions and clouds. They're at the mercy of exorbitant egress fees and opaque inter-region data transfer costs. They can't manage the internet, so the internet manages their COGS.
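A quick back-of-the-envelope shows how this plays out. The sketch below assumes an illustrative egress rate of $0.09/GB; real rates vary widely by provider, region, direction, and committed volume, and that opacity is exactly the problem.

```python
# Back-of-the-envelope sketch of how multi-region replication multiplies
# egress spend. The $0.09/GB rate is illustrative only; real pricing varies
# by provider, region, direction, and committed volume.

def monthly_egress_cost(gb_written: float, replica_regions: int,
                        rate_per_gb: float = 0.09) -> float:
    """Cost of shipping each written GB to every other replica region."""
    return gb_written * (replica_regions - 1) * rate_per_gb

# Replicating 10 TB/month of writes across 5 regions:
cost = monthly_egress_cost(10_000, 5)  # 10,000 GB x 4 copies x $0.09, about $3,600
```

Add a sixth region and the bill grows by another 25% of the original, purely from data transfer, before a single extra query is served.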
3) There are all kinds of things we can’t build
The abstraction of networking is even limiting the kinds of experiences we can build. The local-first movement is trying to build a new generation of real-time, responsive applications. This requires innovating at the protocol level, out at the very edge of the network. But you can’t do that if you don’t control the network. You’re a guest in someone else’s house, and you have to follow their rules. The most foundational innovations are locked away in the hands of the hyperscalers.
Why we are finally at a breaking point
Three megatrends are colliding to make this a full-blown crisis for almost every modern application:
- Everything is everywhere: for many (most?) companies, the architecture is all over the place. The frontend is on Vercel, analytical data is in Snowflake and Databricks, the app hits OpenAI APIs, observability is in Datadog, and you’re running GPUs on CoreWeave or Modal. The idea of "a cloud" is dead, at least in the singular sense. Your application is in practice a distributed system spread across dozens of specialized providers, and the connective tissue between them is the public internet.
- Distributed by default: The architectural patterns have changed. Ten years ago, building a globally distributed system was a monumental feat. Today, it's the default. Engineers no longer think in terms of a single EC2 instance; they think in terms of global components. Data and applications are more mobile and decentralized than ever before.
- The forthcoming flood of agents: AI isn't just another workload, it’s going to require a totally different connection model. Instead of one human shipping software, you will have thousands of autonomous agents accessing countless data sources in parallel. The complexity of managing these connections is growing exponentially.
The security posture for this new world is very uncertain. How do you observe and protect the connections between your 50 counterparties? The old model of a secure perimeter in your VPC is meaningless when your critical infrastructure is spread across the globe.
Datum: A programmable network for all of us
You can’t just tell every app dev to "go learn BGP" or stand up and manage a global MPLS network. We are not going to magically train a brand-new generation of network engineers: the solution needs to be an API. The only way forward is to embed networking capabilities directly into the software itself. This is why we’re excited today to announce Datum.
Datum is building the virtual meet-me room for the internet, turning the concept of a physical place where different networks interconnect into a programmable, software-defined virtual layer for the entire cloud ecosystem.
Speaking of meet-me rooms, there is probably no better pair to build this company than Jacob and Zac Smith (yes, they’re twins). Prior to Datum, they started Packet, which helped cloud-native developers access physical, specialized infrastructure (Intel, AMD, Arm, and yes…Nvidia) at scale. In 2020, Packet was acquired by Equinix.
If you haven’t heard of Equinix – they do keep a relatively low profile – they are exactly what was described earlier: a meet-me room for the physical internet. Equinix provides neutral, third-party colocation for clouds, financials, telecoms, and anyone else running networks (or connecting those networks to enterprise customers) so that traffic can move smoothly through the world for the rest of us. Without Equinix, the internet as we know it would not exist. The idea for Datum is to do the same, but where it’s needed today: connecting clouds and applications, on top of the internet.
The vision is for Datum to be embedded in the platforms you already use. When you're using Snowflake or Vercel, Datum aims to be there, making connecting a network to a cloud service as easy as it is to link your favorite fintech app to your bank accounts. The goal is to be a neutral, open platform that allows you to connect to anyone, anywhere, using the method that makes sense for you:
- A developer-centric connection via a Tailscale node.
- A traditional enterprise connection via a cross-connect from a Bank of America router.
- A cloud-native connection via an AWS Direct Connect.
- A cross-cloud connection between your assembled best-of-breed cloud services.
The goal is to provide a single, programmatic interface to manage the complex, multi-party reality of modern infrastructure. It's about taking the superpowers that were once the exclusive domain of network engineers and a handful of hyperscalers and making them accessible to every developer.
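What might that interface feel like? Here is a purely hypothetical sketch; every name and type below is invented for illustration and reflects nothing about Datum’s actual API.

```python
# Purely hypothetical sketch of "networking as an API". All names here are
# invented for illustration and are not Datum's actual interface.
from dataclasses import dataclass

@dataclass
class Connection:
    source: str
    destination: str
    regions_allowed: frozenset[str]

class VirtualMeetMeRoom:
    """Toy model of a programmable interconnection point."""

    def __init__(self) -> None:
        self.connections: list[Connection] = []

    def connect(self, source: str, destination: str,
                regions_allowed: frozenset[str]) -> Connection:
        """Declare a connection and the regions its traffic may traverse."""
        conn = Connection(source, destination, regions_allowed)
        self.connections.append(conn)
        return conn

room = VirtualMeetMeRoom()
# Pin a frontend-to-database link to EU regions, in one declarative call:
conn = room.connect("vercel:frontend", "db:eu-frankfurt",
                    regions_allowed=frozenset({"eu-west", "eu-central"}))
```

The point is not the specific shape, but that route policy becomes a line of code in the application rather than a ticket to a network team.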
Today we’re excited to announce Datum’s $3M pre-seed round led by us at Amplify and Alex at Encoded, as well as their $10M seed round led by Reid at CRV.
We can't rewind the clock and magically create new network engineers. The only path forward is to build better tools—to create a new layer of the internet that is as easy to program as the applications that run on top of it. Datum is building that infrastructure, turning the lost art of networking into a simple, powerful API for the next generation of the internet.




