Is the Web Broken?
The saying goes “If it isn’t broken, don’t fix it”, and this is exactly the mentality we have held toward the World Wide Web since its inception in 1989-1991. Since then, our world has changed dramatically, with the rise of personal electronics (and more recently smartphones) and the advent of social media. Instead of looking at a small network with a manageable number of nodes, we are now looking at a massive structure of millions if not billions of nodes and links, one that is constantly evolving and being stress-tested by millions of users daily. While the structure of the internet may have been sufficient for the 1990s, the state of today’s digital world begs the question, “Is the World Wide Web broken?”
According to Amber Case, CEO of Geoloqi and author of “Calm Technology: Designing for Billions of Devices and the Internet of Things”, the answer is “not yet”. Technically, there is nothing wrong with the internet right now. Considering that millions of users still browse the various websites of the web every day, it is clear that the structure of the internet has not caused the complete downfall of the digital world. However, the signs of stress are present, as shown in the TechCrunch article “Why the Internet Needs IPFS Before It’s Too Late”. Take, for instance, the rising costs for data providers and the growing number of DDoS (Distributed Denial of Service) attacks.

The main issue is that our Web is bounded by the fact that each piece of information appears only once in the network. As we have discussed during lecture, the internet is essentially a directed graph with edges (hypertext links) that connect pieces of information together. In order to reach a web page from your computer, you have to pass through several links before arriving at the target page. Two problems can occur in this process. The first is the cost of moving from node to node. For a data provider, which is charged for every edge it passes through, this is probably the largest issue, as the size of our network (and the lengths of paths between any two given webpages) is constantly increasing. When two pieces of information are connected by a long series of edges, the data provider will be forced either to charge the consumer extra or to reduce the speed at which the data is delivered. The second problem is that the web page you are looking for exists on only a single server. If this server were to go down, whether by accident, natural disaster, or DDoS attack, every node connected to it would be affected.
Once again, in a smaller graph, the impact of removing a single node would not be that dramatic, but in a large connected graph where nodes can be deeply embedded, the removal of even a single node could destroy a large number of paths. Hence, it is clear that the current structure of the internet is prone to failure.
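The fragility described above can be illustrated with a toy model. The sketch below (with made-up page names, purely for illustration) treats webpages as nodes in a directed graph and hyperlinks as edges, finds the shortest path between two pages with a breadth-first search, and then shows that removing a single well-connected node can make the target unreachable:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search for the shortest hop-by-hop path from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no path: the goal is unreachable

# A toy web: pages are nodes, hyperlinks are directed edges.
web = {
    "home": ["hub"],
    "hub": ["news", "blog"],
    "news": ["article"],
    "blog": ["article"],
}

print(shortest_path(web, "home", "article"))
# ['home', 'hub', 'news', 'article']

# Take down the single "hub" node (server outage or DDoS target):
downed = {k: v for k, v in web.items() if k != "hub"}
print(shortest_path(downed, "home", "article"))
# None -- every path to "article" ran through the failed node
```

Because every route from "home" to "article" passes through "hub", knocking out that one node severs all of them, which is exactly the single-point-of-failure problem the essay describes.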
In order to make the network more resilient and less prone to catastrophic failures, engineers at the startup Protocol Labs are attempting to switch from the old HTTP (Hypertext Transfer Protocol) to IPFS (InterPlanetary File System). The main difference between HTTP and IPFS is the treatment of nodes. In HTTP, each webpage (a single node) is hosted on a single server somewhere in the world, meaning that anyone who wants to access it must traverse a path from node to node until they reach the webpage or find that it does not exist. In IPFS, the information is decoupled from any one server. Instead of existing in only one place, the information is distributed across points in the network. In practical terms, this means that copies of webpages will be stored on a variety of computers connected to the network. This way, even if you attempt to access a site hosted thousands of miles away, your data provider will not have to traverse the entire path from your computer to the original server; it will simply traverse a path from your computer to the nearest node holding the same information. Because the data exists in multiple locations, DDoS attacks lose their power; there is no longer a single server to target. If one path to a website is congested with traffic, your browser can take other paths to identical copies of the site.
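The core idea that makes this replication possible is content addressing: data is identified by a hash of its bytes rather than by the server that hosts it, so any peer holding a copy can serve it. Below is a minimal Python sketch of that idea under simplified assumptions (plain SHA-256 hashes and in-memory dictionaries standing in for peers; the real IPFS protocol is considerably more involved):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Address content by a hash of its bytes, not by a server's location."""
    return hashlib.sha256(data).hexdigest()

page = b"<html>hello world</html>"
cid = content_address(page)

# Several peers each hold a copy keyed by the same content address.
peer_a = {cid: page}
peer_b = {cid: page}
peer_c = {}  # this peer does not have the page
peers = [peer_a, peer_b, peer_c]

def fetch(peers, cid):
    """Retrieve the content from the first reachable peer that has it."""
    for store in peers:
        if cid in store:
            return store[cid]
    return None

print(fetch(peers, cid) == page)
# True

# Even if peer A goes offline, the content is still retrievable elsewhere:
print(fetch([peer_b, peer_c], cid) == page)
# True
```

Because the address is derived from the content itself, a requester can also verify that whatever a peer returns hashes to the expected value, so no single server needs to be trusted or even online.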
Thus, we cannot say that our internet is broken, but clearly, better alternatives are being researched every day. While it may take time to switch over to IPFS or another distributed solution, it is clear that we cannot continue to rely on the structure we have been using. As the internet-using population grows at a rapidly increasing rate, we must adapt before the sheer flux of users and information clogs the digital network.
TechCrunch article: http://techcrunch.com/2015/10/04/why-the-internet-needs-ipfs-before-its-too-late/#.imtjwv:IMe8