Peer-to-Peer Applications

The history of peer-to-peer (P2P) networking is rooted in the early concepts of networking and the decentralized nature of the Internet. In 1969, in RFC 1[^1], Steve Crocker laid the groundwork for host-to-host communication that would eventually evolve into the distributed network that is the Internet today. Later RFCs introduced terms such as client and server, and over time several paradigms for building applications on the Internet emerged, namely client-server, P2P, and hybrid approaches.

  • Client-Server

    • A central server stores resources that clients access.
    • Bandwidth can become a bottleneck when multiple clients request large files simultaneously, leading to slow transfer rates.
  • P2P

    • Each computer (peer) acts both as a client and a server.
    • Peers share bandwidth and resources directly with one another, which can enhance download speeds for popular files.
    • The downside of P2P networks is that if a file is rare, and the peers who have the pieces of the file often leave and join the network, or leave the network entirely, the file is essentially lost.

    Generally there are two types of P2P networks, structured and unstructured.

    • Structured

      • Peers within the network are organized according to a specific structure defined by the underlying protocol. Structured P2P networks offer predictable scalability and better performance, as routing between peers within the structure is more efficient than in unstructured networks. For example, Kademlia is a DHT in which peers route queries and locate other peers based on an XOR-based topology[^2].
    • Unstructured

      • Peers in an unstructured P2P network do not follow any particular structure. This poses performance challenges: for example, querying for a resource can become quite costly, as there is no structure the query can use to determine the shortest path to the resource. Instead, peers must rely on other techniques, such as flooding the network with a request that is propagated from peer to peer until the resource is found. On the other hand, this type of network is more resilient to node failures than a structured P2P network.
  • Hybrid

    Hybrid systems combine aspects of both client-server and P2P models. For example, while the user interface may operate like a client-server model, the underlying technology could leverage P2P principles for efficiency.
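The XOR metric mentioned for Kademlia above can be sketched in a few lines. This is a simplified illustration, not a full Kademlia implementation: it uses hypothetical 8-bit node IDs (real Kademlia uses 160-bit IDs) and a flat contact list instead of routing buckets.

```python
# Kademlia's XOR metric: the "distance" between two node IDs is their
# bitwise XOR, interpreted as an unsigned integer.
def xor_distance(id_a: int, id_b: int) -> int:
    return id_a ^ id_b

# A peer forwards a query to the known contact whose ID is XOR-closest
# to the target key, which is what makes routing efficient.
def closest_node(known_ids: list[int], target: int) -> int:
    return min(known_ids, key=lambda node_id: xor_distance(node_id, target))

print(closest_node([0b0001, 0b1000, 0b1100], 0b1110))  # → 12 (0b1100)
```

Because XOR is symmetric and unidirectional, every lookup step at least halves the remaining distance to the target, which is what gives structured networks their routing efficiency.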

In the early to late 2000s, P2P networks accounted for a significant portion of Internet traffic, with studies estimating that around 60% of all traffic was generated by P2P applications[^3]. This dominance has since declined, partly due to the rise of streaming services and cloud-based solutions that utilize client-server architectures, but also because many P2P applications adapted by implementing encryption and using non-standard ports, making their traffic harder to identify and thereby avoiding intentional throttling by ISPs.

Despite this decline, the evolution of P2P networks has led to the development of various applications beyond file sharing, including decentralized finance (DeFi) and blockchain technologies.

Napster

One of the first P2P file-sharing networks, Napster primarily focused on sharing audio files like MP3s and gained immense popularity, reaching around 80 million users at its peak. Its lifespan, however, was relatively short, lasting only about two years, from 1999 to 2001. The service was ultimately shut down by court order due to the widespread sharing of copyrighted material among its users[^4].

The architecture of Napster was quite straightforward: it relied on a single central server that indexed all the content available on each participating peer. When a user wanted to acquire a file, they would contact this central server, which would respond with a list of peers sharing the requested file. This design allowed users to easily find files available within the network, as illustrated in the following figure.

*[Figure: Napster's architecture, with a central index server and peers]*
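A minimal sketch of Napster's centralized-index idea: the server only maps filenames to the peers advertising them, while the actual transfer happens directly between peers. The class and method names here are illustrative, not Napster's actual protocol.

```python
from collections import defaultdict

class IndexServer:
    """A central server that indexes which peers share which files."""

    def __init__(self):
        self._index = defaultdict(set)  # filename -> set of peer addresses

    def register(self, peer: str, files: list[str]) -> None:
        # Each peer announces the files it is willing to share.
        for filename in files:
            self._index[filename].add(peer)

    def search(self, filename: str) -> set[str]:
        # The server returns only peer addresses; the file itself is
        # downloaded directly from one of those peers.
        return self._index.get(filename, set())

server = IndexServer()
server.register("peer-a:6699", ["song.mp3"])
server.register("peer-b:6699", ["song.mp3", "other.mp3"])
print(server.search("song.mp3"))  # both peers share this file
```

The weakness is visible in the sketch itself: take the single `IndexServer` away and no peer can find anything, which is exactly how Napster was shut down.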

BitTorrent

BitTorrent is another P2P network, launched the same year that Napster was shut down. BitTorrent did not focus on music sharing but took a general approach: any file could be shared on the network, whether movies, MP3s, games, and so on. BitTorrent did not suffer the same fate as Napster; it is still in use today and has even been adopted by large corporations, specifically for sharing large, popular files. For example, AWS S3 storage implemented the BitTorrent protocol[^5], and many game companies have used it to distribute patches among clients or to move files within their internal infrastructure[^6].

The BitTorrent protocol started out as a fairly simple protocol supporting a few messages, but over the years it has seen a number of extensions called BEPs (BitTorrent Enhancement Proposals) that adapt the protocol to different scenarios, one of which was the adoption of a DHT for peer-information exchange.

BitTorrent has a similar model to Napster. The participants in the network are called peers and trackers. A tracker is an index server that holds information about specific .torrent files and about the peers (seeders) who have those files and are willing to share them with other peers (leechers) on the network. Once a leecher has fully downloaded the file, it can become a seeder and be a net positive for the P2P network. The protocol also handles scenarios where no single peer has the complete file but each peer holds some of its pieces; if the scattered pieces can be combined to reconstruct the original file, the peers will do so. Trackers are not centralized under the creator of BitTorrent; rather, any individual can host their own tracker containing information about specific .torrent files. The set of peers participating in a single .torrent file is called a swarm. The BitTorrent architecture is shown in the following figure:

*[Figure: The BitTorrent architecture, with a tracker, seeders, and leechers forming a swarm]*
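The piece-based sharing described above works because each piece can be verified independently: BitTorrent splits a file into fixed-size pieces and checks each one against a SHA-1 hash carried in the .torrent metainfo. The piece size and data below are illustrative (real torrents use piece sizes of 256 KiB and up).

```python
import hashlib

PIECE_SIZE = 4  # bytes; deliberately tiny for the demo

def split_pieces(data: bytes, size: int = PIECE_SIZE) -> list[bytes]:
    # Cut the file into fixed-size pieces (the last one may be shorter).
    return [data[i:i + size] for i in range(0, len(data), size)]

def piece_hashes(pieces: list[bytes]) -> list[bytes]:
    # The .torrent metainfo ships one SHA-1 digest per piece.
    return [hashlib.sha1(p).digest() for p in pieces]

# A leecher receiving pieces from different seeders, possibly out of
# order, can verify each piece on its own before assembling the file.
original = b"hello, bittorrent!"
expected = piece_hashes(split_pieces(original))
received = split_pieces(original)
assert all(hashlib.sha1(p).digest() == h for p, h in zip(received, expected))
print(b"".join(received) == original)  # True
```

This per-piece verification is what lets a swarm reconstruct a file no member fully has: any piece from any peer is either provably correct or discarded.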

As mentioned earlier, BitTorrent has adopted the DHT extension, which enhances the decentralization of the network by removing the need for a centralized tracker to obtain peer information. When enabled, peers themselves act as trackers, storing information about .torrent files and about the peers from which those files can be downloaded. The DHT protocol used in BitTorrent is based on Kademlia and operates over UDP[^7].

*[Figure: Peer discovery through the BitTorrent DHT]*
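One step of such a DHT lookup can be sketched as follows. A node treats the torrent's infohash as the target key and queries the k contacts whose IDs are XOR-closest to it. Node IDs are shortened to small integers for readability; the real DHT uses 160-bit IDs exchanged over UDP.

```python
# One step of a Kademlia-style lookup as used by BitTorrent's DHT:
# sort known contacts by XOR distance to the target key and take the
# k closest, which are queried next (k is illustrative here).
def k_closest(contacts: list[int], target: int, k: int = 3) -> list[int]:
    return sorted(contacts, key=lambda node_id: node_id ^ target)[:k]

contacts = [3, 7, 12, 21, 30]
print(k_closest(contacts, 20))  # → [21, 30, 7]
```

Repeating this step with the contacts returned by each queried node converges on the nodes responsible for the infohash, which hold the actual peer list for the torrent.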

Skype

This section describes the earlier, P2P version of Skype’s overlay network, as described in this paper. Initially developed as a P2P VoIP service, Skype operated on a decentralized model that allowed users to make calls and exchange messages without the need for a central server. In the early to mid 2010s, however, Microsoft gradually moved Skype to a centralized service[^8]; as a result, Skype is no longer the P2P VoIP platform it once was in the 2000s.

The Skype network is made up of three main types of nodes, each serving a distinct purpose: ordinary nodes (ON), supernodes (SN), and the login server (LS). Ordinary nodes are the Skype clients running on users’ computers, allowing them to initiate actions with other peers within the network. A challenge arises, however, when ordinary nodes sit behind a NAT (Network Address Translation) device or firewall in a private network, making it difficult for other peers to initiate connections to them.

This is where supernodes come into play. Supernodes have publicly accessible IP addresses and possess enough CPU power and network bandwidth to facilitate connections between peers that are behind NATs or firewalls. Any ordinary node with a public IP address and sufficient resources can become a supernode within the Skype network.

There are three typical scenarios that illustrate the role of supernodes in the Skype network:

  • Both peers are supernodes: In this case, both peers have publicly accessible IP addresses and can connect directly to each other to make calls and exchange messages without any intermediaries.

  • One peer is behind a NAT/firewall, and the other is a supernode: One peer is publicly reachable while the other is behind a NAT/firewall. The peer behind the NAT connects to a supernode (not the target peer the call or messages are meant for) within the Skype network, and this supernode acts as a proxy, forwarding the communication sent by the peer behind the NAT to the intended recipient.

  • Both peers are behind NATs: The peers cannot connect directly because both are behind NATs/firewalls. As in the previous case, they rely on a supernode as a proxy, exchanging messages and calls through this intermediary.
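The three scenarios above boil down to one decision: connect directly if both endpoints are publicly reachable, otherwise route through a supernode. The sketch below illustrates that decision only; Skype's actual (closed, proprietary) protocol chose and chained supernodes in more sophisticated ways.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    reachable: bool  # True if the peer has a public IP address

def connection_path(a: Peer, b: Peer, supernodes: list[Peer]) -> list[str]:
    if a.reachable and b.reachable:
        # Scenario 1: both publicly reachable -> direct connection.
        return [a.name, b.name]
    # Scenarios 2 and 3: at least one peer is behind a NAT/firewall,
    # so a supernode proxies the traffic (first one picked for the demo).
    relay = supernodes[0]
    return [a.name, relay.name, b.name]

sn = Peer("SN1", True)
alice = Peer("alice", False)  # behind a NAT
bob = Peer("bob", True)
print(connection_path(alice, bob, [sn]))  # → ['alice', 'SN1', 'bob']
```

Note that the relay carries the session even when the callee is itself publicly reachable: what matters is whether the NATed peer can be reached for the connection setup at all.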

Finally, the login server is the only centralized part of Skype’s overlay network. It manages user logins and ensures that usernames are unique across the platform, playing a crucial role in maintaining the integrity of user accounts within the Skype network.

*[Figure: The Skype overlay network, with ordinary nodes, supernodes, and the login server]*

Bitcoin

To conclude our exploration of P2P applications from the 2000s, it’s important to highlight the launch of Bitcoin, a revolutionary P2P electronic cash system, in 2008. This year was particularly notable due to the financial crisis triggered by the collapse of the housing market, which resulted in the most significant economic downturn since the Great Depression of 1929. Bitcoin emerged at a time when public trust in banks was severely shaken, leading many to seek alternatives for their financial transactions. This timing proved to be pivotal, as Bitcoin offered a decentralized solution that resonated with those looking for more control over their money.

At the time of writing, Bitcoin has reached an all-time high (ATH) of approximately $92,000 and is fluctuating around $90,000. While the future remains uncertain, it’s clear that Bitcoin has established itself as a legitimate alternative to traditional banking systems. Its rise in popularity has also inspired the development of other decentralized digital currencies like Ethereum and Cardano, which offer features that Bitcoin currently lacks. Despite the emergence of these alternatives, Bitcoin continues to be regarded as the standard in the cryptocurrency world. Its unique position and ongoing developments suggest that it will remain a key player in the financial landscape for years to come.

At its core, Bitcoin relies on a consensus mechanism known as Proof of Work (PoW). PoW addresses the challenge of achieving consensus among peers in a decentralized network, often referred to as the Byzantine Generals Problem. In this mechanism, nodes (miners) expend vast amounts of energy performing computations to find a value (a nonce) that gives a block of transactions a hash satisfying the network’s difficulty target. When such a hash is found, the miner broadcasts the block to the other nodes in the network. Each node maintains a copy of the blockchain, a distributed ledger that records all validated transactions; this ledger can be used to verify the authenticity of transactions and ensures transparency within the network. As new blocks are published, nodes follow the “longest proof-of-work chain” rule, accepting the chain with the most cumulative work performed as the valid ledger. A broadcast block cannot be changed without recomputing its hash and the hashes of every block appended after it. This makes it computationally infeasible for malicious actors to alter the transaction history, as doing so would require redoing all subsequent blocks, which demands an enormous amount of computational power. As a result, PoW not only facilitates consensus but also protects the integrity of the network.
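The mining loop described above can be sketched in a few lines. This is a toy model: the difficulty is expressed as a number of leading zero bits and the "block" is a byte string, whereas Bitcoin double-SHA-256-hashes an 80-byte block header against a compactly encoded target.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    # Search for a nonce whose appended hash falls below the target.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Verification is a single hash, however expensive mining was.
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = mine(b"block of transactions", 12)  # low difficulty for the demo
print(verify(b"block of transactions", nonce, 12))  # True
```

The asymmetry is the whole point: finding the nonce takes many hash attempts on average, but any node can verify the result with one hash, so honest nodes can cheaply check every block they receive.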

*[Figure: Chained blocks, each referencing the hash of the previous block]*

If the network consists primarily of honest nodes, they can effectively outpace any malicious minority by continuously creating new blocks and contributing them to the network at a faster rate. This leads to the concept of a 51% attack: when a single entity controls more than 50% of the network’s computational power, it can dictate the future of that network.
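The chain-selection rule that the honest majority relies on can be sketched directly: nodes adopt whichever candidate chain embodies the most cumulative work, not simply the most blocks. The per-block work values below are illustrative stand-ins for the difficulty encoded in real block headers.

```python
# "Longest proof-of-work chain" rule: compare candidate chains by the
# total work spent on them and adopt the heaviest one.
def cumulative_work(chain: list[dict]) -> int:
    return sum(block["work"] for block in chain)

def select_chain(chains: list[list[dict]]) -> list[dict]:
    return max(chains, key=cumulative_work)

honest = [{"work": 4}, {"work": 4}, {"work": 4}]  # total work: 12
attacker = [{"work": 4}, {"work": 3}]             # total work: 7
print(select_chain([honest, attacker]) is honest)  # True
```

As long as honest miners extend their chain faster than an attacker can rebuild an alternative history, the honest chain stays heaviest and every node converges on it.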

For further insights into Bitcoin’s inner workings you can refer to the original whitepaper Bitcoin: A Peer-to-Peer Electronic Cash System.

Conclusion

In the tech world, Peer-to-Peer (P2P) applications have played a big role, especially during the 2000s when many popular platforms came onto the scene. This was a time of innovation, and P2P technology changed how we share and access information. In this post, I focused on some of the most well-known P2P applications and how they are built, but there are many more out there that we didn’t cover. The 2000s were truly the heyday for P2P applications. While some have disappeared over time, others have managed to adapt and thrive, even finding a place in larger companies. One of the standout successes from this era is Bitcoin. Launched about 15 years ago, it has revolutionized how we think about money and transactions.

Looking forward, it’s exciting to think about what’s next for P2P applications. These systems continue to impact various areas, from file sharing to cryptocurrency. As technology evolves, it will be interesting to see how P2P applications grow and change to meet new challenges and opportunities.

Footnotes