The Internet is a busy place. Every second, approximately 6,000 tweets are tweeted; more than 40,000 Google queries are searched; and more than 2 million emails are sent, according to Internet Live Stats, a website of the international Real Time Statistics Project.
But these statistics only hint at the size of the Web. As of September 2014, there were 1 billion websites on the Internet, a number that fluctuates by the minute as sites go defunct and others are born. And beneath this constantly changing (but sort of quantifiable) Internet that’s familiar to most people lies the “Deep Web,” which includes things Google and other search engines don’t index.
Deep Web content can be as innocuous as the results of a search of an online database or as secretive as black-market forums accessible only to those with special Tor software. (Tor isn't only for illegal activity, though; it's used wherever people have reason to stay anonymous online.)
Combine the constant change in the “surface” Web with the unquantifiability of the Deep Web, and it’s easy to see why estimating the size of the Internet is a difficult task. However, analysts say the Web is big and getting bigger.
Those roughly 1 billion websites comprise many times more individual Web pages. One of those pages, www.worldwidewebsize.com, seeks to quantify that number using research by Internet consultant Maurice de Kunder. De Kunder and his colleagues published their methodology in February 2016 in the journal Scientometrics.
To come to an estimate, the researchers sent a batch of 50 common words to be searched by Google and Bing. (Yahoo Search and Ask.com used to be included but are not anymore, because they no longer show the total number of results.) The researchers knew how frequently these words appear in print in general, allowing them to extrapolate the total number of pages out there from how many pages contain the reference words.
Search engines overlap in the pages they index, so the method also requires estimating and subtracting the likely overlap.
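The core of that extrapolation can be sketched in a few lines. The sketch below is an illustration only, not the published method: the word list, hit counts, corpus fractions and overlap figure are all invented for the example, and the real study combines 50 words with frequencies from a large reference text collection.

```python
# Illustrative sketch of word-frequency extrapolation for index size.
# All numbers are hypothetical; the real study uses 50 words and
# corpus frequencies measured from a large reference collection.

def estimate_index_size(hit_counts, word_fractions):
    """Estimate one engine's index size in billions of pages.

    hit_counts[w]     -- pages the engine reports containing word w
    word_fractions[w] -- fraction of documents containing w in a
                         reference corpus (assumed to carry over)
    Each word yields an estimate hits / fraction; average them.
    """
    estimates = [hit_counts[w] / word_fractions[w] for w in hit_counts]
    return sum(estimates) / len(estimates)

# Hypothetical hit counts, in billions of pages, for two engines.
google_hits = {"the": 25.0, "of": 22.0, "and": 21.0}
bing_hits = {"the": 12.0, "of": 11.0, "and": 10.5}

# Hypothetical fractions of documents containing each word.
fractions = {"the": 0.60, "of": 0.55, "and": 0.50}

google_size = estimate_index_size(google_hits, fractions)
bing_size = estimate_index_size(bing_hits, fractions)

# The engines index many of the same pages, so subtract an assumed
# overlap before reporting a combined size (inclusion-exclusion).
overlap_fraction = 0.8  # assumed share of Bing's index also in Google
combined = google_size + bing_size * (1 - overlap_fraction)
print(f"combined estimate: {combined:.1f} billion pages")
```

The key (strong) assumption is that word frequencies measured in print carry over to Web pages; the averaging over many words is what keeps any one unusual word from skewing the result.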
According to these calculations, there were at least 4.66 billion Web pages online as of mid-March 2016. This calculation covers only the searchable Web, however, not the Deep Web.
So how much information does the Internet hold? There are three ways to look at that question, said Martin Hilbert, a professor of communications at the University of California, Davis.
“The Internet stores information, the Internet communicates information and the Internet computes information,” Hilbert told Live Science. The communication capacity of the Internet can be measured by how much information it can transfer, or how much information it does transfer at any given time, he said.
In 2014, researchers published a study in the journal Supercomputing Frontiers and Innovations estimating the storage capacity of the Internet at 10^24 bytes, or 1 million exabytes. A byte is a data unit comprising 8 bits, and is equal to a single character in one of the words you’re reading now. An exabyte is 1 billion billion bytes.
One way to estimate the communication capacity of the Internet is to measure the traffic moving through it. According to Cisco’s Visual Networking Index initiative, the Internet is now in the “zettabyte era.” A zettabyte equals 1 sextillion bytes, or 1,000 exabytes. By the end of 2016, global Internet traffic will reach 1.1 zettabytes per year, according to Cisco, and by 2019, global traffic is expected to hit 2 zettabytes per year.
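To put those units in perspective, the storage and traffic figures above reduce to simple decimal arithmetic. The sketch below assumes SI (decimal) units, i.e. 1 exabyte = 10^18 bytes and 1 zettabyte = 10^21 bytes, and converts Cisco's annual-traffic projection into an average rate.

```python
# Unit arithmetic for the figures above, assuming decimal SI units:
# 1 EB = 10**18 bytes, 1 ZB = 10**21 bytes.

EXABYTE = 10**18
ZETTABYTE = 10**21

storage_estimate = 10**24              # the 2014 study's storage figure
assert storage_estimate == 1_000_000 * EXABYTE   # i.e. 1 million exabytes

annual_traffic = 1.1 * ZETTABYTE       # Cisco's end-of-2016 projection
seconds_per_year = 365 * 24 * 3600
avg_rate = annual_traffic / seconds_per_year     # bytes per second
print(f"average global traffic: {avg_rate / 10**12:.1f} TB/s")
```

Spread evenly over a year, 1.1 zettabytes works out to roughly 35 terabytes crossing the Internet every second, on average.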
One zettabyte is the equivalent of 36,000 years of high-definition video, which, in turn, is the equivalent of streaming Netflix's entire catalog 3,177 times, Thomas Barnett Jr., Cisco's director of thought leadership, wrote in a 2016 blog post about the company's findings.
In 2011, Hilbert and his colleagues published a paper in the journal Science estimating the communication capacity of the Internet at 3 x 10^12 kilobits per second, a measure of bandwidth. This was based on hardware capacity, and not on how much information was actually being transferred at any moment.
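That hardware-capacity figure converts to more familiar units in a few steps. The sketch below does the conversion and, for scale, computes how long moving one zettabyte would take if the entire capacity ran flat out (a purely illustrative calculation, not one from the paper).

```python
# The 2011 Science paper puts the Internet's communication capacity at
# 3 x 10**12 kilobits per second. Convert to bytes per second, then ask
# (for scale only) how long one zettabyte would take at that full rate.

capacity_kbps = 3 * 10**12
capacity_bits = capacity_kbps * 1000   # kilobits -> bits
capacity_bytes = capacity_bits / 8     # 8 bits per byte

ZETTABYTE = 10**21
seconds = ZETTABYTE / capacity_bytes
print(f"capacity: {capacity_bytes / 10**12:.0f} TB/s")
print(f"one zettabyte at full capacity: {seconds / 86400:.1f} days")
```

At 375 terabytes per second, shifting a single zettabyte would still take about a month, which makes Cisco's "zettabyte era" label for annual traffic feel less abstract.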
In one particularly offbeat study, an anonymous hacker measured the size of the Internet by counting how many IP (Internet Protocol) addresses were in use. IP addresses are the waypoints of the Internet through which data travels, and each device online has at least one. According to the hacker's estimate, there were 1.3 billion IP addresses in use online in 2012.
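That count can be put in context against the size of the IPv4 address space, which uses 32-bit addresses. The sketch below is a back-of-the-envelope comparison, ignoring reserved and private ranges that are not assignable in practice.

```python
# Compare the hacker's 2012 census (~1.3 billion addresses in use)
# against the full 32-bit IPv4 address space. Reserved and private
# ranges are ignored, so this slightly understates the true share.

ipv4_space = 2**32                 # 4,294,967,296 possible addresses
in_use = 1_300_000_000             # the census estimate
share = in_use / ipv4_space
print(f"share of IPv4 space in use: {share:.1%}")
```

By this rough measure, close to a third of all possible IPv4 addresses were already in use in 2012, one reason the transition to the vastly larger IPv6 space was under way.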
The Internet has vastly altered the data landscape. In 2000, before Internet use became ubiquitous, telecommunications capacity was 2.2 optimally compressed exabytes, Hilbert and his colleagues found. In 2007, the figure was 65 optimally compressed exabytes. This capacity includes phone networks and voice calls as well as access to the enormous information reservoir that is the Internet. However, data traffic over mobile networks was already outpacing voice traffic in 2007, the researchers found.