Solving The Internet’s Congestion Problem
2013/03/22

BitTorrent CEO Eric Klinker discusses the impact of distributed computing on Internet sustainability. Originally published in Harvard Business Review, March 22, 2013.

An open, neutral Internet has been a force for sweeping social change: democratizing information, commerce, and access to jobs, and driving GDP growth and a rising standard of living. Our collective ability to equally access and innovate on Internet platforms, from search and social networks to content and commerce sites, is fundamental to continued growth.

It is not a given.

With Internet innovation comes congestion. The amount of content shared and accessed by the world's 2.4 billion Internet users is constantly increasing, in both volume and size, and it is outpacing the ability of the Internet, and of Internet Service Providers, to deliver it efficiently.

Internet penetration and usage continue to expand at exponential rates. But the issue is not just a growing Internet population. It's the growing fidelity demands of today's Internet population as they transition from text, images, and standard-definition media sharing to high-definition (and even 4K) file transfer. Innovation in content quality has surpassed innovation in Internet delivery. Today, billions of people are increasingly using the same pipes to deliver billions of HD media files. We're facing a congestion crisis.

Barriers to TCP Innovation

The way the Internet combats congestion is through a protocol called TCP. TCP is a regulator, designed to allocate Internet capacity evenly across all applications that use it. This "congestion control" keeps the Internet from being overwhelmed. Every single computer, smartphone, smart TV, and web server, every single Internet-connected object or device, relies on this protocol. So, in order to upgrade TCP, you would have to upgrade the entire world's Internet devices. Billions of them. Simultaneously. No practical path to TCP innovation exists.
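To make concrete what is baked into all of those devices, here is a minimal sketch, in Python, of the additive-increase/multiplicative-decrease (AIMD) rule at the heart of standard TCP congestion control. The class and constants are illustrative, not drawn from any real networking stack:

```python
# A minimal, illustrative sketch of TCP-style AIMD congestion control.
# Real stacks add slow start, fast recovery, timeouts, and much more.

class AimdSender:
    """Tracks a congestion window (cwnd), measured in packets."""

    def __init__(self) -> None:
        self.cwnd = 1.0  # start conservatively

    def on_ack(self) -> None:
        # Additive increase: grow by roughly one packet per round trip.
        self.cwnd += 1.0 / self.cwnd

    def on_loss(self) -> None:
        # Multiplicative decrease: halve the window when packet loss
        # signals congestion somewhere along the path.
        self.cwnd = max(1.0, self.cwnd / 2.0)


sender = AimdSender()
for _ in range(100):
    sender.on_ack()
print(f"cwnd after 100 ACKs: {sender.cwnd:.1f} packets")

sender.on_loss()
print(f"cwnd after one loss: {sender.cwnd:.1f} packets")
```

Because every endpoint runs some version of this loop, changing how the Internet shares capacity means changing every endpoint at once.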

The All-You-Can-Eat Internet

Today, the primary business model for the consumer Internet is all-you-can-eat usage. This has successfully driven widespread Internet adoption, and Internet Service Provider (ISP) profitability. However, the model decouples subscriber revenue from the cost of the network. The same subscriber can consume ever more bandwidth without generating any additional revenue for the network. As more powerful Internet applications emerge, congestion builds with increased usage, which threatens to erode providers' margins.

Congestion, rather than raw usage, is the key driver of this phenomenon, because an Internet Service Provider's network is largely a fixed-cost asset. Like any fixed-cost asset, such as the Interstate Highway System in the U.S., it is cheap to operate and expensive to upgrade. It is congestion, like rush hour on the roads, that drives the necessary upgrades and the cost of the network. Congestion, or the threat of it, forces more capacity (capital expense) to be added to the network, in a never-ending race to keep up with Internet growth.

The Challenge

If we want to create a sustainable future for the Internet, we need a new way of solving the congestion problem. Today, the solution is simply to add more capacity. Thus, we have seen the Internet's core evolve from the original 56Kbps links of the ARPANET backbone, to 1.5Mbps T1 lines, and on to the multi-gigabit links of today's core network. Likewise, consumer connections are increasingly capable: dialup has been replaced with DSL and cable, and emerging FTTH offerings promise upwards of 1Gbps. More Internet has been at the heart of every serious solution to the congestion problem, but the Internet keeps inventing new applications to fill this capacity, in turn increasing demand for capacity once again.

We need a better solution. Is it new pricing models for the consumer Internet that try to re-couple network cost to revenue? Do we need a new set of incentives that can help manage the Internet’s growth? Do we need new economic models where applications pay consumer networks for access to users? Do we need government regulation to ensure a level playing field on the network for applications, giving tomorrow’s innovators the same access as yesterday’s Googles, Facebooks, and Amazons? I’ve heard arguments for all of the above.

Or do we need better technologies that can be more efficient at using the Internet we’ve already got?

What the Business Community Needs to Do

ISPs are spending more and more money to provide bandwidth. These costs are passed on to the business community, as well as to individual households and mobile subscribers. We are all impacted by Internet congestion. And it will take cooperative innovation to fix this problem and restore the health of the Internet. As business leaders, we must get involved and lead this change.

The pricing models and economic systems underpinning the Internet have served us well so far, and they will not be easy to change. Regulation will inevitably carry unintended consequences. Only through technology do we have the power to solve the problems facing the Internet while preserving its ultimate value.

One of the best technologies that we can apply to the issue of Internet congestion is something called distributed computing. Full disclosure: I am the CEO of BitTorrent, Inc., a distributed computing company. Needless to say, I believe in this technology. Here’s why.

Distributed computing systems work with unprecedented efficiency. You don't need to build server farms, or new networks, to bring an application to life. Each computer acts as its own server, leveraging existing network connections distributed across the entirety of the Internet. BitTorrent is a prime example of distributed computing at work. Each month, via BitTorrent, millions of machines work together to deliver petabytes of data across the web, to millions of users, at virtually zero infrastructure cost. And BitTorrent isn't the only example of distributed technology at work today. Skype uses distributed computing to deliver calls. Spotify uses distributed computing to deliver music.
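A toy back-of-the-envelope model shows why this pooling is so efficient. The numbers below are hypothetical, chosen only to contrast a fixed central uplink with a swarm in which every peer contributes upload capacity, as BitTorrent-style systems do:

```python
# A toy model of swarm-style distribution: every peer that joins
# brings upload capacity with it, so aggregate capacity grows with
# demand instead of being fixed at a central server.
# All numbers are illustrative, not measurements.

SERVER_UPLOAD_MBPS = 1000   # one central server's uplink
PEER_UPLOAD_MBPS = 5        # a typical consumer uplink


def client_server_per_user(users: int) -> float:
    """Each user's share of a fixed central uplink."""
    return SERVER_UPLOAD_MBPS / users


def swarm_per_user(users: int) -> float:
    """The seed's uplink plus the uplink every peer contributes."""
    total = SERVER_UPLOAD_MBPS + users * PEER_UPLOAD_MBPS
    return total / users


for users in (100, 10_000, 1_000_000):
    print(f"{users:>9} users: "
          f"client-server {client_server_per_user(users):8.3f} Mbps/user, "
          f"swarm {swarm_per_user(users):6.3f} Mbps/user")
```

As the audience grows, the central server's per-user share collapses toward zero, while the swarm's per-user capacity approaches each peer's own uplink: in a distributed system, demand brings its own supply.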

The idea of distributed computing isn’t a new one. In fact, the principles of distributed computing are the core principles of the original Internet, which was designed as a distributed system of loosely coupled elements. The Internet was made to be simple at the core, and intelligent at the edge. And these core properties are what proved to be the Internet’s advantage over the centralized phone network it replaced.

As we look to solve congestion, and to secure the future of Internet innovation, we can look to the past. The principles of the original Internet, and of distributed computing, allow for:

Resilience, resource pooling, and infinite scaling

Distributed technologies follow the original design principles of the Internet, distributing data to make it more resilient. Distributed networks are people-powered and efficient, allowing users to pool resources reliably and to scale with the network itself: every new user adds capacity as well as demand. Because resources are widely distributed, traffic can flow through parts of the network that are not congested. Effectively, the entire built network is utilized, which reduces congestion pain points.

User-network prioritization

Distributed technologies put users in control, allowing people to express intent to their networks (e.g., to prioritize specific content over other content) and to have those priorities honored inside the network. This means your Skype conference call takes network precedence over the software download running in the background, and that the two activities don't have to compete for bandwidth.
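BitTorrent's own µTP transport applies this idea using delay-based congestion control in the spirit of LEDBAT: a background transfer watches queuing delay and backs off before interactive traffic suffers. The sketch below is a simplified illustration of that kind of control loop; the class and constants are hypothetical, not the µTP source:

```python
# A simplified sketch of a LEDBAT-style "scavenger" sender: it
# targets a small queuing delay and yields to foreground traffic.
# Structure and constants are illustrative only.

TARGET_DELAY_MS = 100.0   # max queuing delay we're willing to add
GAIN = 1.0                # how strongly we react to being off-target


class BackgroundSender:
    def __init__(self) -> None:
        self.cwnd = 10.0            # congestion window, in packets
        self.base_delay_ms = None   # lowest one-way delay observed

    def on_delay_sample(self, one_way_delay_ms: float) -> None:
        if self.base_delay_ms is None:
            self.base_delay_ms = one_way_delay_ms
        self.base_delay_ms = min(self.base_delay_ms, one_way_delay_ms)

        # Queuing delay: how far the path has slowed beyond its base.
        queuing = one_way_delay_ms - self.base_delay_ms

        # Grow while the queue is below target, shrink when above it;
        # a call filling the queue pushes this sender back.
        off_target = (TARGET_DELAY_MS - queuing) / TARGET_DELAY_MS
        self.cwnd = max(1.0, self.cwnd + GAIN * off_target)


sender = BackgroundSender()
for delay in (20, 20, 25, 120, 180, 180):  # a foreground flow arrives
    sender.on_delay_sample(delay)
    print(f"delay {delay:3d} ms -> cwnd {sender.cwnd:5.2f} packets")
```

When foreground traffic fills the queue, measured delay rises and the background sender voluntarily shrinks its window, which is how a bulk download can run continuously without degrading a call.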

Greater security, data control, and privacy

Today, much of our download information is stored on servers and within ISP network infrastructure. Adding more bandwidth requires adding more machines, each of which is vulnerable to theft or attack. Distributed systems decentralize information: there are no intervening servers. This gives users control of their data, and of their privacy.

Support for new and emerging applications

Distributed technologies support new and emerging applications by adding network efficiency. Skype and Spotify could not exist without distributed computing. Nor could platforms like Facebook or Twitter, which rely on distributed technologies to push system updates.

Re-imagine any application using these principles — from content delivery to social networks, storage to search — and you’ll see that distributed technologies make the Internet better.

We have inherited more than two decades of open Internet innovation, and with it, unprecedented opportunity, access, and growth. Now we have an obligation to uphold this legacy. We have the tools at hand to preserve it for generations to come. And now we need to employ them.
