Delivering a great application user experience often starts with something developers overlook when using the cloud, because they can’t see or directly interact with it: the provider’s global network backbone, and how it enables better performance along with more (or less) included network transfer.
Guaranteeing low latency or a certain level of performance isn’t just about choosing a data center. The experience and overall performance of anything hosted in the cloud depend on how data and traffic are transmitted from one point to another, and how all those points eventually connect.
Ideally, your customers or end users don’t need to think about how they’re accessing information or a service through their phone or browser. As long as the performance matches their expectations and they’re able to complete a task, they’re thinking about the experience that’s directly in front of them – not how it’s getting to them.
That’s where we come in – and by we, I mean cloud providers. As more businesses shift to the cloud and new applications pop up to serve more needs, internet service providers are challenged with maintaining infrastructure that can handle the increased demand. Cloud service providers, meanwhile, are challenged with packaging services that tap into a competitive network while meeting developers’ needs – all while setting prices that make sense.
Why network transfer makes the cloud more expensive
It isn’t the virtual machine or database instance itself that drives up the cost of cloud computing or creates billing surprises. It’s network transfer: customers either pay extra for larger allotments or get hit with higher prices for overages.
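To make the overage effect concrete, here is a minimal sketch of how this kind of billing tends to work. The function name, rates, and allotments below are made-up assumptions for illustration, not any provider’s actual pricing.

```python
# Hypothetical illustration of how network transfer overages inflate a cloud
# bill. All numbers here are assumptions, not real pricing from any provider.

def monthly_bill(instance_cost, included_tb, used_tb, overage_per_gb):
    """Return total monthly cost: flat instance price plus transfer overage."""
    overage_tb = max(0.0, used_tb - included_tb)
    overage_cost = overage_tb * 1000 * overage_per_gb  # treating 1 TB as 1000 GB
    return instance_cost + overage_cost

# A $5 instance with 1 TB of included transfer that actually pushes 3 TB
# at a hypothetical $0.01/GB overage rate:
total = monthly_bill(instance_cost=5.00, included_tb=1, used_tb=3, overage_per_gb=0.01)
print(f"${total:.2f}")  # → $25.00 — the $20 overage dwarfs the $5 instance price
```

Staying inside the allotment keeps the bill flat; crossing it is what produces the billing surprise, since the overage charge scales with every extra gigabyte.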
Cloud service providers often run on an enterprise network model: they pay for pre-packaged networking and other services, including managed WAN and managed LAN, to make a data center available to customers. This lets cloud service providers stay hands-off with the networking components and focus on offering additional software or scaling their offerings. But relying on ISPs, and maintaining those relationships, ultimately makes cloud products less profitable.
Luckily, Linode is a little bit of both: a cloud provider and an ISP.
The Linode network model
In 2016, we decided to control our own networking future. We became our own ISP and began building out a global network. This decision had many benefits, including full control over our network destiny, strategic buying from multiple providers, and peering on internet exchanges. It also set the foundation for us to strengthen the connections between data centers, so customers can run multi-data center workloads with a more seamless experience.
As we worked on expanding our global network, three things were non-negotiable: maintaining vendor diversity, balancing flexibility and control, and incorporating Linux starting at the network level as much as possible.
(Since we are Linode, we are very good at Linux.)
The philosophy of staying away from proprietary software, and of not putting all your eggs in one vendor’s basket, is something we advise all our customers to follow. We apply it to the tools we work with, too, and intentionally create infrastructure options that give customers more flexibility. Because we build our own tooling and manage our own peering relationships, we remove the additional exchanges that would normally create an incentive to ding customers with transfer fees.
As our network expands and our networking capabilities continue to grow, generous network transfer remains part of the customer experience that keeps cloud computing affordable. Linode plans consistently include more bundled transfer than our competitors’, from a $5 1GB Shared instance to powerful Dedicated CPU and GPU plans.
Some upcoming product releases will make it even easier to monitor and reduce your network transfer costs. We’re still recruiting testers for our Cloud Firewall and VLAN betas. Sign up for the betas, or learn more about Cloud Firewall and VLAN.