Join me, if you will, in a little time travel. Let’s go back thirty years, to March of 1989. A gallon of gas cost about a dollar. The average American income was under $30k. On March 6th, the Yankees beat the Mets in their first game in four years. On the 24th, the Exxon Valdez would rupture its hull in the Prince William Sound, off the coast of Alaska, and damage almost 1,000 miles of coastline. That same night, NBC re-ran its 1960 recording of Mary Martin’s Peter Pan. And on the 26th, Boris Yeltsin was elected to the parliament of the USSR with 92% of the Moscow vote. It was an eventful month—and not terribly different from today.
For the purposes of this publication, the most eventful part of that March had to be the 14th, when Jim Stiles spent five hours watching Volume 1 of the Canyon Country Zephyr slide, copy after copy, off the printing press in Cortez, Colorado. When he arrived back in Moab, the back end of his Volvo sinking under the weight of all those papers, he found out that his friend Ed Abbey had died that morning. Within the next few days, he attended Ed’s funeral and then lost his beloved dog, Squawker, to cancer. So Jim could be forgiven for missing the other big development of March, 1989: the invention of this crazy thing called the “Web”.
On March 12th, 1989, a British scientist named Sir Tim Berners-Lee formally submitted a proposal for a system to connect all the various “internets” developed over the previous decades at institutions and universities across the world. He called his system “The World Wide Web,” and he suggested that it would rely on “hyperlinks” to connect different pages, identified by Uniform Resource Locators (URLs), to each other. Those URLs would typically carry the abbreviation “www” to identify them as belonging to the World Wide Web.
This is typically the first point of confusion when we laypeople start talking about the Internet. What does the Internet mean? The development of internets—networks of computers that can interact with each other—goes back in the United States at least as far as the mid-1960s, or even earlier if you start quibbling with definitions. The most commonly known pre-Web internet was called ARPANET, and it was developed by the Department of Defense’s Advanced Research Projects Agency (ARPA) in the mid-60s as a way to allow researchers across the country who were employed by the Agency to connect with each other and collaborate. ARPANET remained in use until 1990, when it was formally decommissioned, its role superseded by newer networks like NSFNET and the growing civilian Internet. Another pre-Web internet was called USENET. Developed in 1979 at the University of North Carolina at Chapel Hill and Duke University, USENET expanded through the ’80s to become a type of bulletin board and discussion center for universities and academics across the world. It was on USENET that Sir Tim Berners-Lee first announced his “Web.”
Other scientists proposed their own systems of connecting all these internets, but Berners-Lee’s won out. And most of us were introduced to the notion of the Internet by way of the “Web.” Meaning, to reach the Internet, we opened a browser—in the early days, probably Netscape—and we either entered a URL directly into its address bar or else we used a search engine—in the early days, AltaVista or Lycos; later, Ask Jeeves or Yahoo; now Google or, just kidding, you use Google—to find the address of a website to visit. Fundamentally, that process hasn’t changed in 30 years. The names have, but not the process. Websites still exist at their URL addresses. We still use hyperlinks (like this!) to link one page to another. And search engines—Google—still crawl all the surface traffic around the Web to serve up “the most relevant” answers to our searches. And so, knowing that, you’d be forgiven for thinking that the landscape of our digital life was pretty well set in 1989 and, with minor changes, has stayed the same in the intervening three decades.
The trouble, though, with limiting our view of the “landscape” of the internet to those fundamentals—hypertext, browsers, search engines—is that, while those tools are necessary for the architecture of our experience, they have very little to do with how we experience digital life. You don’t need any particular technological knowledge to complete the two or three tasks necessary to get online. What shapes your online experience is the world you find once you’ve signed on. The communities, the information available, the threats and the constant claims to your attention. And that world has changed drastically over time.
The early web, sometimes called Web 1.0, was a period of experimentation and rapid development. Among the first websites were recognizable faces like bloomberg.com, wired.com, and the Internet Movie Database (IMDb). But other early sites previewed the inanity and spirit of fun that would mark Web 1.0. From 1993 to 2001, early web visitors could watch a webcam documenting the ongoing status—full, partly full, nearly empty, empty, then full again—of a coffee pot at the University of Cambridge in England (see an archive of that website here). Or they could read the web’s first popular webcomic, Doctor Fun (archived here). In 1994, a small neighborhood-based website in Los Angeles went global under the name GeoCities, offering web users the chance to easily create their own webpages, on any topic they liked, for free. The idea was wildly successful. It turned out that lots of people would love to have their own personalized space on the internet—a front door to their personality, networked within a community of like-minded web friends. By 1997, GeoCities was the world’s fifth most popular website. By 1999, when the site was purchased by Yahoo! for an astronomical $3.6 billion, it was the third most popular, behind only AOL and its new owner. GeoCities websites are now remembered mostly for their exuberant ugliness, but they exemplified the overwhelming ethic of playfulness that pervaded the early web.
But the web, since its birth, has been plagued by two gnawing problems: how do we preserve the world’s largest library of combined knowledge and creativity? And how does anyone make any money?
The question of preservation always arrives late to new technologies. There’s a reason why so few films created in the 1910s and ’20s were saved. And why the early products of the printing presses in Europe were mostly destroyed. When a technology is new, we tend not to place any value on what it produces. Surely a book, printed in a matter of hours, is more trivial than a manuscript that would have been created over a period of months. And a telegraph is more superficial than a letter. So too, the websites of the early internet were considered trivial, throwaway phenomena in the grand scheme of a more serious world. After all, why save someone’s X-Files fan site from 1997, or a promotional page for a candidate in a city election in 1996? Few people would have considered the possibility that these websites might become relics of an important, revolutionary period in human history. And even fewer people realized just how rapidly those relics would destroy themselves and disappear. The average lifespan of a website in 1997? 44 days.
When a website is gone, it’s gone. A few services crawl the web to archive sites—most notably the Internet Archive’s Wayback Machine, which in 2015 stored a staggering 20-some petabytes of archived webpages. For reference, a petabyte is one million gigabytes. And that number of petabytes has almost certainly doubled again, if not tripled or quadrupled since 2015. The homepage of the Wayback Machine, where you can search for archived copies of websites by URL, offers over 345 billion web pages for perusal.
But even the directors of the Wayback Machine recognize that they’ve barely scratched the surface of what has existed on the web since its infancy. “At this point, if you mean the web when Tim Berners-Lee invented it, right now that web does not exist, not really,” said Jason Scott, an archivist and historian for the Internet Archive, quoted in a 2015 Atlantic article. “News organizations kill old articles, YouTube’s old videos go away. And while the Archive and other entities are saving—quote-unquote saving—these sites, even those will go to new URLs. They won’t be in the same place. You’ll have to search for them… There are success stories. But meanwhile, silently, thousands of useful things are disappearing. As time goes on, I have even less and less hope for how long it will last.”
And, because so much is constantly lost, the foundation of the web—hyperlinked information—is considerably less stable than it ought to be. According to that same Atlantic article on web archiving, “A 2008 analysis of links in 2,700 digital resources—the majority of which had no print counterpart—found that about 8 percent of links stopped working after one year. By 2011, when three years had passed, 30 percent of links in the collection were dead.” Which is troubling, considering the importance of establishing sources when reading information online. More frightening, though, is the death rate for links used in law reviews and court opinions. A 2014 study by Harvard Law School found that “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.”
Obviously, we need a greater push to save websites, at their original URLs where that’s possible, or else within archives like the Internet Archive. When so much of our information—news, firsthand accounts of historical events, photos and art—only exists in digital format, we leave far too much of our accounting of this period of time to the whims of fickle web hosts and servers.
But the preservation of the early web isn’t just a matter of archiving information that would otherwise be lost. It’s also a way of saving the feeling of the early web. The looseness and semi-lawlessness of spaces like GeoCities or the anonymity of AOL chat rooms. The playfulness and quirkiness of all those Adobe Flash-based websites. Without archiving, I don’t know whether younger generations will ever believe us when we tell them there was a time before they were the primary product sold by every website they visited.
Of course, online advertising arose fairly early in the development of the web. The first banner ad appeared on HotWired (the first website for Wired magazine) in October 1994. It was an ad for AT&T, and users who clicked on it were linked, not to the company’s website, but to an online tour of the world’s greatest art museums. Sounds quaint, doesn’t it? GeoCities introduced ads to its pages in 1997. And everyone remembers the horror of the cascading pop-up ads that wreaked havoc in the late ’90s and early 2000s. But, despite the intrusiveness of even those earliest ads, they were still clunky and impersonal in a way that almost feels charming now. You weren’t yet seeing an ad on the New York Times website for a brand of toothpaste you’d been looking at on the Target website. You didn’t feel constantly watched. And, in retrospect, that was a wonderful feeling.
The rise of surveillance advertising seems inevitable now. “Free and easy” was the ethos of the early web, and the “free” part made the “easy” part even easier. There weren’t any barriers to entry. Anyone could join, regardless of their wealth. But web-based companies have employees, and those employees can’t work for free. Somehow, somewhere, money had to change hands. And if it wasn’t coming from the website users, then it would have to come from advertisers.
Once advertising was accepted as the predominant way to wring money from the web, the rest was easy to predict. Online companies would seek to keep you as long as possible on their sites, to see as many ads as possible, and then use your trail of data to develop increasingly invasive ways to target you with more ads. It’s no coincidence that the companies with the greatest cache of data to surveil are the ones most successful at sucking up your time. Thus, Google, with its access to every question you want to ask about the world and yourself, is also the primary online seller of advertising. Facebook, with its access to your political and cultural opinions, your likes and dislikes, and a map of your connections to family and friends, is its primary competitor.
And both sites are endless loops back into their own servers. While, in the old days, clicking a hyperlink would take you completely off of one page and onto another, now clicking on a Google search result sends a plume of data off into the Google cloud, and every link you click from that page follows behind it. Equally, every link clicked from Facebook is monitored, stored away, and used to sell predictions to advertisers about what you’ll click on tomorrow. In the end, you can hardly remember what it used to feel like, surfing the web. You just know it was different, and so much better, than this.
The greatest argument for preservation is that, while the Internet is flourishing in 2019, the Web is dying. Again, remember that “the Web” and the Internet are two separate things. The Web is a wide open network, accessed through browsers. When you’re surfing sites through a browser like Firefox or Chrome, you’re on the Web. When you’re Skyping with your family, or watching Netflix, scrolling through your Facebook app or checking the forecast on your weather app, you’re using the Internet but not the Web. Inside an app, you’re in a closed system that belongs to a company. You’re behind a wall that shields your activity from the broader network. And, increasingly, that’s where people want to be. Apps are doing great. The so-called “Internet of Things” (all the various internet-connected vacuums and cameras and refrigerators), terrifying as it may be, is doing great.
But the web—chaotic and decentralized—is dying. Granted, it’s a slow death. We’ve been talking about it for nine years now, at least, since the advent of the iPad and the iPhone. And it’s hard to imagine Berners-Lee’s system dying completely. If nothing else, people will still want to research things, and so far I can’t imagine the whole world being content to search only pre-indexed information saved within an app.
I could be wrong, though. People seem pretty happy to restrict themselves to only the most controlled environments. As recently as 2007, 50% of traffic online was dispersed through several thousand websites. In 2009, that 50% was restricted to just 150 sites. In 2014, just 35 sites. And by 2017, 70% of all web traffic was held by just Google and Facebook, and companies they own like Youtube and Instagram. I don’t know what you call that kind of consolidation, but I sure wouldn’t call it a “web.”
I’ve read some great analogies for the modern internet while researching this topic. George Soros compared the major Internet platforms to casinos—where the house controls everything, offering you lodging and food and entertainment to keep you captive inside their walls. Nicholas Carr, writing in the LA Review of Books, suggested that the experience of the “surveillance capitalism” of Facebook and Google, to the people who prefer it, is like an all-inclusive resort where their pre-determined and insulated experience is a kind of comfort. A writer in the New York Times compared the browser-based Web to the urban inner city and the growing app-based internet to the suburbs. Everyone seems to understand that our new era has been designed to feel like a more secure, more controlled environment. And that the reason this consolidation has been so successful is because, like a vacation in a casino or at a resort, the time spent on these platforms feels safer and easier than time spent on the open web. Given the choice, we like safe. We like easy.
All of this is a horror, of course, to poor Sir Berners-Lee, who released an open letter for the web’s 29th birthday last year. He dedicated a third of his letter to the perils of consolidation, lamenting, “What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared.” In his letter, Berners-Lee maintained his optimism, challenging “us all to have greater ambitions for the web.” He suggested revisiting the reliance on advertising as the only possible business model and pushing for a new “legal or regulatory framework” to rein in the big platforms.
And he was right that this is still a young technology, historically speaking, and there’s no reason to think its course couldn’t be altered with the collaboration of the public and a general willingness to fight. I’d be more tempted by his cheerful prognostications, though, if the transformation of the internet weren’t continuing so blatantly, so steadfastly, in the absolute opposite direction.
So, at the very least, we need to talk about maintaining the existence of the web and archiving what is left. Preserving those archives and that architecture for the future. Because I don’t see time moving backwards. The last 30 years of our collective history played out, largely, across a digital screen—a vast library of the whole society’s correspondence, our news, politics, and pop culture—and, when it’s deleted, it’s deleted forever. We’ve lost so much every time we’ve under-valued our popular culture and technologies. The destruction of early television, like the lost Johnny Carson episodes, comes to mind. Who wouldn’t love to go back and preserve all those shows as they were happening? Now they’re gone. The scale of our current, ongoing loss of culture—the endless wave of abandoned web addresses, broken links, deleted videos—is so enormous, its only historical comparison might be the destruction of the great Library of Alexandria. And this will certainly be a period of history we want to be able to study and remember. The rise and the fall of the Web. A great experiment.
The Web was always a terrible place to try to make money. It was too anarchic. And these closed-wall environments are a lot more comfortable and profitable for companies who prefer a captive audience. We might hate to see the day when it’s 95% or 97% of web traffic going solely to Google or Facebook, with 99% of all internet traffic going to apps, but that day will likely come.
Still, the Web exists. Hopefully it will continue to exist, in some less-visited form, and we shouldn’t forget that it remains enormous, containing billions of sites. More websites come online every day. And a few of us will always seek those outer reaches, wanting to re-capture that early Web feeling. While the whole world might shift to smartphones and apps, there will always be a little group wanting to flee the safe, suburban “resort” experience of the Internet. There will be deviants, and fringe societies who won’t be welcomed in that brave new world. There will be people who want to play and create something nutty. People who want to dig deeper than any walled garden would allow. And the Web, hopefully, will still be there for us merry few. Maybe that isn’t a death, even if it is a degeneration, for the Web. A return to its roots. The same community of hackers and researchers, freaks and students and bored insomniacs who built the early web could still find each other in the trails of hyperlinks. We were its first governors, after all, the moneyless ragtag enthusiasts. Let the whole world have the internet, if that’s what they want. The Web was ours all along.
Tonya Stiles is Co-Publisher of the Canyon Country Zephyr.
To comment, scroll to the bottom of the page.
Don’t forget the Zephyr ads! All links are hot!
*Note: The Cartoonist screwed up. In a subconscious attempt to escape the world’s news, he changed one of our Backbone Member’s names from “Michael” to “Richard” Cohen. Sorry, Michael. We know you’re a way better guy than that infamous Michael Cohen and we beg your forgiveness.