HIST 216: History of Information
Estimated study time: 1 hr 6 min
Sources and References
These notes draw on historical scholarship, including assigned readings from Elizabeth Eisenstein (The Printing Revolution in Early Modern Europe), John Naughton (From Gutenberg to Zuckerberg), Abby Smith Rumsey (When We Are No More), Fred Turner (From Counterculture to Cyberculture), Robert MacDougall (The People’s Network), Finn Brunton (Spam), Zeynep Tufekci (Twitter and Tear Gas), Benjamin Peters (How Not to Network a Nation), Tim Berners-Lee (Weaving the Web), Belinda Barnet on hypertext history, Vannevar Bush (“As We May Think,” The Atlantic Monthly, 1945), Ted Nelson’s 1965 ACM paper, Gillies and Cailliau (How the Web Was Born), Steven Levy (Hackers), Limor Shifman on memes, Mark Nycyk on trolling, and various primary documents cited throughout. All figures referenced in the original course materials are credited to their respective photographers and institutions.
Module 1: Communication Before the Digital Age
1a. An Early History of Communication
Humans have always sought ways to communicate beyond the immediate moment. Oral traditions, still central to many cultures including First Nations within Canada, transmitted knowledge across generations long before writing existed. In Canadian legal systems today, oral history carries equal weight alongside written records when interpreting treaties – there is no inherent primacy of one over the other. Yet speech has a fundamental limitation: its range. Step out of a room and the conversation is lost; move to another building and you might as well be on another continent.
Cave paintings represent some of the earliest attempts to leave a lasting record. Found across the globe – from Argentina (dating to 13,000-9,000 BCE) to Indonesia (perhaps 39,900 years old) to the famous Lascaux caves in France – these paintings depict animals, hand imprints, and abstract forms. Whether cave painting was brought around the world by early humans migrating out of Africa or arose independently in different groups remains a matter of scholarly debate. What matters for this course is the impulse to leave something behind. Without leaving something behind, without a record, there is no history. If we speak and nobody remembers, the thought is lost forever. But if we write something down – on a cave wall, a scroll, a laptop, or in the cloud – others can read it. We can communicate outside our immediate vicinity.
1b. Early Text
Well before 3000 BCE, people clustering in densely populated urban centers in Mesopotamia began inscribing symbols on clay tablets. In the Sumerian city-states, traders recruited skilled scribes to draw symbols on clay tokens to certify trades – without a receipt, either party could claim to have been cheated. Clay was abundant in the region and reeds were available to press into it. Part of the reason we know about these practices today is that clay is remarkably durable and survives archaeological excavation.
Writing appeared independently in multiple locations: Egypt by 3000 BCE, China by 1200 BCE. The Epic of Gilgamesh (c. 2100 BCE), a cycle of epic poems from Mesopotamia, is often considered the first true work of literature. Its themes prefigure motifs found later in the Christian Bible and Greek poetry. By 1200 BCE, Egyptians were writing on papyrus, a plant-based surface that was durable but expensive. Finally, by the fifth century BCE, people began writing on parchment – prepared animal skins that were dried, scraped, and bleached. Making parchment required flaying an animal, soaking the skin, removing the hair, and stretching and smoothing it, sometimes rubbing in flour or egg whites. This laborious process meant parchment remained the expensive default writing surface in the western world until the fifteenth century.
1c. The Vulnerability of Information
All of these early media – clay, papyrus, parchment – are relatively physically durable, which is why we can study them today. But they are also scarce. Each copy requires enormous human labor. This scarcity has profound implications. Modern digital preservationists follow the principle “lots of copies keep stuff safe,” maintaining at least three backups (two on campus, one off-site, perhaps even on another continent). The converse is equally true: “few copies keep stuff vulnerable.” The Library of Alexandria, the most significant library of the ancient world, founded in the third century BCE, symbolizes this vulnerability. Its eventual decline demonstrates what happens when a society stops actively caring for its knowledge.
Communication is Laborious
By the sixth century, monastic institutions began preserving texts through the widespread copying of books in rooms called scriptoria. Monks would sit quietly and copy manuscripts – some perhaps illiterate themselves, merely replicating the shapes they saw. This system meant that producing textual knowledge was expensive, that every copy introduced errors (like a game of “telephone” played across centuries), that science was difficult because data from other books might contain transcription errors, and that images, maps, and charts could not be reliably standardized. Communication could cross time and space, but it was constrained by human muscle.
Module 2: The Printing Press
2a. Introduction: The Production of Knowledge Before the Printing Press
Elizabeth Eisenstein captured the pivotal transformation: “In the late fifteenth century, the reproduction of written materials began to move from the copyist’s desk to the printer’s workshop.” Before the press, if humans wanted to transmit knowledge, they had to reproduce it through their own physical labor. Scholars have argued that the printing press represents such a dramatic turning point that someone standing in 1990 may have more in common with someone from the 1500s than a person in the 1500s would share with someone from the 1400s.
In monasteries, monks copied manuscripts in scriptoria, sometimes working within a “putting-out” system where each monk handled a piece of a manuscript assembled later. This process had five major consequences: books easily corrupted through copying errors; books were extremely rare and expensive; every book was its own unique “edition”; tables, diagrams, and numbers could not be trusted; and scholars had to physically travel to scattered libraries to amass knowledge. The default condition of a book was to disappear.
2b. The Gutenberg Press
The Idea of Moveable Type
A printing press requires three things: moveable type, ink that sticks to the type, and a good supply of paper. Moveable type – the idea of using reusable blocks for individual characters – originated in China around 1040 CE using clay and later wooden type. However, the vast number of Chinese characters made the system less practical there, and clay and wood wear down quickly. In Korea, metal moveable type followed in the thirteenth century, but the educated class continued to prefer Chinese script. The relative simplicity of the Roman alphabet, with its small set of characters, made moveable type especially promising in Europe.
Gutenberg and His Press
Johannes Gutenberg (1398-1468), born in Mainz, Germany, trained as a goldsmith and moved to Strasbourg in the 1430s. There he combined three key innovations. First, he applied mechanical pressure to transfer ink to paper evenly, borrowing the concept from the wine press. Unlike the uneven pressure of stone rubbing, the press applied force uniformly across the entire page. Second, he developed a new oil-based ink that adhered to metal type – existing water-based inks simply ran off. Third, he benefited from the expanding availability of paper, a Chinese technology that had recently reached Europe and was far cheaper than parchment.
The resulting productivity gains were enormous. One estimate suggests that three printers working for three months could produce 300 books – what a scribe might manage in a lifetime. Gutenberg’s famous Bible appeared around 1455, and the press spread rapidly: Italy by 1465, Paris by 1470, London by 1476. By 1480, at least 87 presses operated across Europe. Within fifty years, the estimated number of books in Europe exploded from roughly 30,000 to 10-12 million.
The Spread of the Printing Press and the Impact on Society
The printing press made texts stable – a typo would appear identically in every copy – and dramatically cheaper. As supply increased, prices fell according to basic economic principles. Books moved from the exclusive domain of aristocrats, monasteries, and universities to middle-class households: rich merchants, professors, lawyers, and government officials could now own them. This expansion of access set off cascading changes in society.
2c. The Printing Press and Its Impact on Scholarship
Before the press, scholars could not rely on graphs or maps (hand-copied, they were unreliable), had no personal reference collections, and spent much of their time traveling between far-flung libraries. After the press, old texts gathered together in growing collections, enabling an era of cross-referencing as insights from different books and authors were combined. Maps, charts, and diagrams became standardized and trustworthy. Data could be built upon rather than suspected.
This arguably culminated in the Scientific Renaissance of the 1660s. Scientific journals were established, enabling the systematic sharing of research results. As Eisenstein noted, before printing, gifted astronomers spent lifetimes “making copies, recensions and epitomes of an initially faulty and increasingly corrupted twelfth-century translation from an Arabic text.” The printing press allowed scholars to be in conversation with one another – and that is what science fundamentally is.
2d. The Printing Press and the Protestant Reformation
The press also enabled the Protestant Reformation. When Martin Luther wrote his Ninety-Five Theses against the Catholic practice of indulgences in 1517, it was standard academic discourse – scholars routinely posted propositions for debate on church doors. Luther was objecting specifically to Pope Leo X’s special indulgence to fund St. Peter’s Basilica, which could cover almost any sin. The popular saying was “As soon as the coin in the coffer rings, the soul from purgatory springs.”
The posting itself was unremarkable. But the printing press transformed what would have been a local academic exchange into a continental firestorm. Within two weeks, people across Germany knew about it; within months, all of Europe. Luther himself marveled that the Theses had spread “throughout the whole of Germany in a fortnight.” Three factors made this possible. First, a new class of literate priests could now own and read bibles. Second, educated lay printers staffed workshops throughout European towns, reading and discussing what they printed. Third, these printers’ workshops became social hubs where townspeople, academics, and clergy gathered to hear the latest news.
Luther was eventually charged with heresy and excommunicated, launching the Protestant Reformation. The religious wars that followed killed an estimated 25-40% of the German population. You could argue that the Reformation truly began in 1455 with the Gutenberg Bible rather than in 1517 with the Theses – because the press created the conditions that made Luther’s ideas explosive.
2e. Conclusions
Gutenberg kick-started an expanding media ecosystem: books, flyers, leaflets, pamphlets, and eventually magazines and newspapers. One thing united these technologies with what followed (the telegraph, telephone, and early computers): they all had high costs of production. Printing presses were expensive to own and operate, just as telegraphs, telephones, and early mainframe computers would be.
The parallel with our digital age is compelling. As computer scientists Carenini, Murray, and Ng have noted, “at an accelerating pace, people are having conversations by writing in a growing number of social media, including emails, blogs, chats and texting on mobile phones.” We live in a world dominated by text. Researcher Ian Milligan has studied GeoCities.com, a service that between 1994 and 2009 allowed anyone to create their own website on any topic they chose – a favorite sports team, a family tree, or an early form of blogging. By 2009, some seven million users had created about 186 million pages of content. As web archivist Jason Scott argued, a person putting up a web page in the mid-1990s “might have a farther reach and greater potential audience than anyone in the history of their genetic line.” Was each person operating their own free printing press?
The amount of data generated today is measured in exabytes (an exabyte is 1,000 petabytes; a petabyte is 1,000 terabytes). The printing press increased the density of social interactions; this process is now accelerating exponentially as we collectively exchange billions of emails, tweets, and text messages daily. The question remains: should we consider the Internet or Web to be revolutionary in the same way that the printing press was? It may still be too early to tell.
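To make these magnitudes concrete, here is a quick back-of-the-envelope calculation in Python – a sketch for scale only, using the decimal definitions given above:

```python
# Storage units in decimal (SI) bytes, as defined above.
terabyte = 10**12             # 1 TB
petabyte = 1_000 * terabyte   # 1 PB = 1,000 TB
exabyte = 1_000 * petabyte    # 1 EB = 1,000 PB

# One exabyte expressed in terabytes: a million consumer-sized drives.
print(f"1 EB = {exabyte // terabyte:,} TB")   # 1 EB = 1,000,000 TB
```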
Module 3: Libraries and the Organization of Knowledge
3a. Introduction (The Idea of the Library)
As written records multiplied – from cuneiform tablets to scrolls to books – four recurring challenges emerged: storage (where to put objects), security (controlling access), preservation (ensuring long-term safety), and cataloguing (retrieving needed information). These challenges recur with every recording medium, from clay to silicon. Around 2400 BCE in Mesopotamia, we begin to see specifically designed buildings staffed with trained experts to manage growing collections – the earliest information infrastructure.
3b. Preserving Information in the Library of Alexandria
By the first century BCE, libraries proliferated across the ancient world, driven by two developments. First, a larger proportion of the population became literate, partly because the expansion of slavery freed time for the free population – a sobering reminder that early literacy came at the expense of human suffering. Second, lighter media like papyrus and parchment scrolls replaced heavy clay tablets. The move brought serious advantages: thin sheets of plant fiber or animal hide were flexible, strong, light, stable in dry climates, and could be rolled into scrolls rather than lugged around. Writing on them required only ink, not chiseling. The trade-off was durability: clay tablets actually grow stronger when exposed to fire through the “firing” process, while scrolls burn easily. If we see the history of communication as one of technological improvement, clay tablets offer a counter-argument: they may have hit a peak in sheer durability, and we have been going downhill ever since.
The Library of Alexandria, founded in the fourth or third century BCE on the strip of land where the Nile Delta meets the Mediterranean, grew because of its strategic geographic location. Alexandria was Egypt’s portal to the world: boats traveling down the Nile passed through it. As ships entered the port, manuscripts were confiscated, copied, and the originals often kept. Given Alexandria’s critical location in ancient trade, the library quickly assembled a very large collection. It soon became a place to support scholarship, and in order for scholars to do their work in such a large and growing collection, dedicated professionals were needed. Enter the librarian. Libraries had to organize information for retrieval and physically steward collections for security and longevity – satisfying the four challenges of storage, security, preservation, and cataloguing.
When rolled-up scrolls proved hard to stack (taking one from the bottom caused the rest to tumble), librarians attached tags to their ends for quick identification. Later, sheets were cut to uniform size and bound between covers – the codex – allowing books to stand on shelves with identification codes on their spines. Modern cataloguing systems like the Library of Congress Classification and the Dewey Decimal System continue this tradition. At the University of Waterloo’s Dana Porter Library, for example, each book carries a call number (like HM 741.R35 2012) that classifies it within a universal system used by academic libraries worldwide.
What Happened to the Library of Alexandria?
The popular narrative of Julius Caesar burning the library is an oversimplification. Caesar may have damaged part of the collection in 48 BCE, but it was likely restocked with 200,000 volumes from the Library of Pergamum. Scholars now generally agree that war did not destroy Alexandria. Rather, “what survives war can ultimately die from inattention and neglect.” As Christianity and Islam developed, pagan works of Romans and Greeks lost their perceived value. Economic pressures and plagues compounded the problem. The lesson echoes through the centuries: knowledge in any medium requires active preservation.
3c. From Scarcity to Abundance: The Library and the Printing Press
Libraries Before the Printing Press
Medieval libraries were symbols of cultural inspiration and wealth. European aristocrats collected books not only for the knowledge inside but because rare items demonstrated status. Matthias Corvinus, King of Hungary from 1458, commissioned 4,000 manuscripts and had them carted to Budapest – beginning this project just as the printing press was making such efforts obsolete.
After Gutenberg, books ceased to be objects of wonder. Ruling elites turned to other forms of conspicuous consumption: sculptures, tapestries, paintings. Renaissance libraries suffered rapid decline. Corvinus’s library was lost entirely. When Hugo Blotius was appointed librarian of the Viennese Hofbibliothek in 1575, he found 7,379 neglected volumes: “How neglected and desolate everything looked! There was mouldiness and rot everywhere, the debris of moths and bookworms.” University libraries fared no better – Oxford and Cambridge both closed their libraries, with Oxford even discarding the books and furniture.
The Revitalization of the Library
Thomas Bodley (1545-1613), a scholar turned diplomat, began rehabilitating Oxford’s abandoned library in 1598 using his personal wealth. When voluntary donations proved insufficient, he purchased books systematically from across Europe. By 1620, the Bodleian Library held 16,000 items and attracted scholars from across the continent. Bodley required visitors to take an oath promising to “use the books and other furniture in such manner that they may last as long as possible” – the precursor to modern library policies. The Bodleian’s success launched a golden age of libraries, which shifted from relying on private patronage toward sustainable subscription models.
3d. Enlightenment Principles
The Enlightenment reframed knowledge as “something to be acquired, organized, and shaped into an instrument of progress” (Rumsey). Thomas Jefferson (1743-1826), an obsessive book collector, believed that free citizens required access to knowledge. His private library of 6,487 books – the largest and most diverse collection in the western hemisphere, spanning Voltaire, Locke, Homer, Shakespeare, and beyond – became the foundation of the rebuilt Library of Congress after the British burned the original congressional library in 1814 during the War of 1812. Jefferson’s 1813 vision that ideas should “freely spread from one to another over the globe, for the moral and mutual instruction of man” has been invoked by advocates of freely shared intellectual property and an open Internet.
This rhetoric of freedom must be placed alongside its contradictions. Jefferson argued “all men were created equal” while owning enslaved people at Monticello. The Enlightenment ideal of an informed citizenry was, in practice, a very white and very elite conception of citizenship.
3e. Conclusion
The idea of the universal library persists today. In Ottawa, the Library and Archives Canada building sits alongside Parliament and the Supreme Court – it is no accident that our national library is next to the institutions of democracy. The Preservation Center in Gatineau, Quebec houses the country’s most important national treasures in vaults set to precise temperature and humidity levels, staffed by highly educated technicians working on documents, paintings, old computers, and websites. The Internet Archive, Project Gutenberg, and university libraries each approach the ideal of universal access differently, but all inherit the ambition first realized in Alexandria: knowledge requires not only collection but active, ongoing care.
Module 4: Electricity, the Telegraph, and the Telephone
4a. Shocking People for Fun and Profit
Between the domestication of the horse (some 4,000-6,000 years ago) and the eighteenth century, human communication on land was limited by the speed of a rider. Sending a message meant sending a person. Monarchs dispatched armies and waited weeks for news. Jean-Antoine Nollet’s mid-eighteenth-century experiments with electricity hinted at a faster alternative: communication by electrical current that could span miles, go around corners, and work regardless of weather or darkness. The limitation seemed fundamental, however: all you could apparently do with electricity was turn the current on and off.
4b. Early Telegraphs
Claude Chappe (1763-1805) and his brother developed the first practical telegraph by combining sight and sound. They initially set up two clocks, each with ten numbers and two hands. They synchronized them by banging a loud dish (when Claude banged, his brother knew to set his hands at the same position). They could then communicate by banging at specific moments as the clock hands passed certain numbers. This worked, but wind and noise made sound unreliable. They dropped the sound component and focused entirely on vision.
The result was a semaphore system: towers with moveable arms that could produce 98 different combinations of positions. Communication worked through a two-code system – first the page number of a codebook, then one of 92 entries on that page. Lines of towers stretched across the landscape, each a few miles apart, each staffed by an operator who would read the signal from one tower, copy it, and relay it to the next. Well-trained operators could send complex messages across countries in less than an hour. Chappe presented his design to the National Assembly of France during the French Revolution in 1793, demonstrated it successfully that year, and the Paris-Lille line opened in 1794. Connections to Strasbourg and Dunkirk followed by 1798. England adopted a slightly different system in 1795 – rotating panels rather than moveable arms – connecting the Admiralty in London to Channel ports.
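A toy model can show how the two-code system worked. The sketch below invents a miniature codebook (the real French codebooks ran to thousands of entries and were closely guarded); each word is sent as a pair of signals, a page number followed by an entry number on that page:

```python
# Hypothetical miniature codebook: (page, entry) -> word.
codebook = {
    (1, 1): "army",
    (1, 2): "advance",
    (7, 41): "Paris",
}
# Invert it for the sending tower.
encode = {word: pair for pair, word in codebook.items()}

message = ["army", "advance", "Paris"]
signals = [encode[word] for word in message]   # what the arms display, in order
print(signals)                                 # [(1, 1), (1, 2), (7, 41)]
print([codebook[s] for s in signals])          # decoded at the receiving end
```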
These optical telegraphs had significant drawbacks. They were extremely expensive, requiring large physical infrastructure and shifts of personnel. They did not work at night (experiments with lanterns failed). Fog or mist disrupted communication entirely. And they served primarily military and government networks – messages were encoded using codebooks held by armies, government leaders, and institutions. There were some civilian applications (lottery numbers were transmitted to prevent the old trick of learning winning numbers and racing to distant towns to enter), but ordinary citizens could only watch the towers signal overhead. Despite these limitations, optical telegraphs were an entrenched system, and people working on electrical alternatives seemed like cranks.
4c. Back to Electricity: The Development of the Electric Telegraph
While optical telegraphs dominated, experimenters continued working with electricity. Two problems held them back: until the 1820s, people could not reliably measure electricity (beyond zapping themselves), and after a few miles, electrical signals weakened on a line. The discovery that electricity could be measured through its magnetic fields, combined with redesigned batteries (many small batteries in series rather than one large one), opened new possibilities.
The Cooke and Wheatstone telegraph of the 1830s used multiple wires and needles to point at letters, primarily serving railroads. It was expensive and slow. Samuel Morse (1791-1872) took a different approach. After learning about electricity during an Atlantic crossing in 1832, Morse developed a simple binary code of short and long pulses – Morse code – that required only a single wire. On 24 May 1844, the first Morse telegraph line connected Washington, D.C. to Baltimore, carrying the famous first message: “What hath God wrought.” Unlike predecessors who saw telegraphy as a military novelty, Morse envisioned it as a revolutionary communication system for the whole of society.
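Morse’s scheme is simple enough to sketch in a few lines. The table below uses modern International Morse rather than Morse’s original 1844 American code, but the principle – short and long pulses on a single wire – is the same:

```python
# Enough of the (modern International) Morse alphabet to spell the 1844 message.
MORSE = {
    "A": ".-", "D": "-..", "E": ".", "G": "--.", "H": "....",
    "O": "---", "R": ".-.", "T": "-", "U": "..-", "W": ".--",
    " ": "/",   # conventional word separator
}

def to_morse(text: str) -> str:
    """Encode text as dots and dashes, one letter at a time."""
    return " ".join(MORSE[char] for char in text.upper())

print(to_morse("What hath God wrought"))
```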
Wiring the World
Getting telegraph wires under water proved exceptionally difficult. In 1850, engineers laid a cable across the English Channel between Dover and Calais; it worked for a few hours before waves buffeted the wire against rocks, wearing off the insulation. A much tougher cable in 1851 succeeded, permanently connecting England to France.
The transatlantic cable saga was one of the great dramas of the nineteenth century. In 1853, engineers decided to connect Newfoundland to New York, so ships from England could deposit messages in St. John’s for relay. It took two and a half years just to complete this leg. Then, in 1857, two ships – the USS Niagara and HMS Agamemnon – set out from opposite shores to meet in the mid-Atlantic, each carrying half a massive cable. They began laying it, and after a few days the cable snapped and fell into the sea. They sailed home, raised more money, and tried again from the middle of the ocean. The cables snapped again. And again. And again. After four failed attempts, on 5 August 1858, the two ends were finally connected.
But the cable was so unreliable that the initial welcome message took over a week to transmit, and each individual message took more than sixteen hours. On 1 September 1858 – less than a month after connection – the cable failed permanently, likely because the chief electrician had used too high a voltage in an attempt to boost the signal. Years of recrimination followed.
Finally, on 13 July 1866, a new cable with superior insulation was successfully laid from Ireland to Newfoundland. Repair procedures were developed so breaks could be fixed within weeks, using electrical pulses to locate the exact point of damage. By the 1870s, cables connected France to Newfoundland (1869), India, Hong Kong, China, and Japan (1870), Australia (1871), and South America (1874). Within thirty years of Morse’s first line, 650,000 miles of cable encircled the globe. A message to India and back that once took ten weeks now required four minutes. This was a tool of empire – but empires had the capital to build such infrastructure.
The Rise of the Telegraph
The telegraph transformed daily life in multiple ways. The famous story of Fiddler Dick illustrates the end of criminal impunity: in 1844, a pickpocket stole a woman’s belongings at Paddington Station and hopped a departing train, confident he could outrun the news. Before the telegraph, this was a profitable crime – news could not travel faster than a train. But a telegraph message reached Slough Station before Dick did, and officers met him on the platform twenty-five minutes later.
The telegraph also disrupted the news industry. Newspapers had previously competed on who could physically deliver information fastest. Now that telegraph networks spread news almost instantaneously, papers realized they did not all need correspondents at the same events. They began forming cooperative wire services like the Associated Press, treating news stories as commodities that could be sold, reprinted, and repackaged.
There were also social effects. In the United States, where roughly two men worked as telegraph operators for every woman, the profession offered significant employment equity for the era. Operators chatted across the wires to pass time, forming friendships and even romances – one 1891 story from Yuma, Arizona tells of a bored operator who befriended “Mat” through the wires, arranged a fishing trip, discovered Mat was a woman, and married her. Natural monopolies emerged in telegraphy – in Europe, governments ran the systems, while in the United States private companies dominated some 80% of traffic by the 1880s.
4d. Gabbing Around the World: The Telephone
Alexander Graham Bell (1847-1922), an elocution teacher concerned with deafness, filed his telephone patent application on 14 February 1876 – the same day Elisha Gray arrived at the patent office with a competing design – and was granted the patent on 7 March 1876. Bell’s famous first words: “Mr. Watson – come here, I want to see you.” The telephone was marketed as “a serious instrument for serious people” – for bankers, hospitals, and police, not for personal gossip.
All Those Damn Wires: The Telephone Fights
Telephone poles immediately generated conflict. In December 1880 in Quebec City, Bell Telephone placed a pole directly in front of James Carrel’s newspaper office on the narrow Rue de Buade. Carrel’s furious editorials against Bell’s “abominable aggressions” launched a legal battle that shaped Canadian telecommunications. Municipal governments and telephone companies clashed across Canada and the United States, with firefighters sometimes chopping down poles while linemen perched on top to prevent it.
Regulatory Divergence: Canada vs. the United States
The outcomes diverged dramatically. In the United States, municipalities retained control over telephone infrastructure, passing beautification laws to force wires underground and negotiating affordable rates. Towns like Muncie, Indiana simply started their own telephone companies when Bell would not serve them. In Canada, Bell petitioned the federal government to declare telephones a national undertaking, insulating the company from municipal authority. The consequences were stark: by 1905, Indiana had one telephone per 12 people; Ontario had one per 90. American flat-rate pricing encouraged casual social use – gossip, songs, ad hoc concerts. Canadian metered service meant phones were for serious business only.
The Telephone as an “Annihilator of Space”
This period of local creativity and competition would ultimately give way to consolidation. As long-distance telephony evolved, Bell marketed itself not as a connector of local places but as an “annihilator of space.” On the occasion of the first coast-to-coast telephone call in 1915, Bell in New York repeated his famous words to Watson in San Francisco: “Mr. Watson, come here, I want to see you” – nearly forty years after first saying them. The telephone companies adopted the rhetoric of the independent networks: promoting the telephone as democratic and space-annihilating.
Conclusion
The telephone eventually became a regulated monopoly. The 1910 Mann-Elkins Act in the United States designated telephone companies as common carriers under the Interstate Commerce Commission, requiring non-discriminatory service and extending free-speech principles to privately owned carriers. A common carrier must offer service on a non-discriminatory basis: CN Rail, a hotel, or Grand River Transit serve everyone regardless, though they are not responsible if a customer commits a crime using their services. AT&T’s monopoly persisted until a Department of Justice antitrust suit, filed in 1974, was settled in 1982: AT&T agreed to divest its local operations into seven regional “Baby Bells,” effective in 1984, some of which later reconsolidated into today’s AT&T and Verizon. The story demonstrates that communications technologies are profoundly shaped by political and regulatory decisions.
4e. Net Neutrality and Standards Today
History teaches that communication technologies are inherently political. How the telephone was used was shaped by court rulings, regulatory regimes, and corporate behavior. The same forces shape the Internet today. During the 2017-2018 debate over net neutrality, commentators frequently drew parallels to telephone regulation. As George Santayana wrote, “Those who cannot remember the past are condemned to repeat it” – though historians prefer to see themselves as informed commentators on the game, not prophets of the outcome.
Module 5: Hypertext and the Idea of Linked Information
5a. Introduction to Hypertext
Hypertext was formally defined in 1965 by Ted Nelson as “a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” We encounter hypertext constantly: “HTTP” in your browser stands for the Hypertext Transfer Protocol. The power of hyperlinks is easy to demonstrate – on Wikipedia, you can navigate from “University of Waterloo” to “Canada Goose” in just two clicks, a journey that would have required walking between library floors in the pre-digital age.
Yet the Web’s implementation of hypertext has specific characteristics that are choices, not inevitabilities. Web links are one-directional. The Web permits broken links (404 errors). Reading and writing content are generally separate activities. And links typically point to whole pages rather than specific sentences or words. Understanding hypertext’s history reveals alternative designs that might have been.
5b. The Memex
Introduction to the Memex
Many technologies discussed in these notes lack clear genealogies – the telephone and telegraph have scattered origin stories. Yet scholars tracing the history of hypertext consistently return to the same starting point: Vannevar Bush and the Memex, introduced in his July 1945 Atlantic Monthly article “As We May Think.”
Bush was an outsized figure in American science: he co-developed the differential analyzer at MIT between 1928 and 1931 – a general-purpose mechanical calculator that used gear ratios and spinning disks to solve differential equations. He led wartime research that contributed to the atomic bomb and radar, and helped establish the National Science Foundation. His Memex was a theoretical desk-based device using microfilm that would let a user store documents, add new material via a desk camera, and – most crucially – create associative links between documents.
The Two Factors: Differential Analyzer and Microfilm
The Memex drew on two foundations. The first was the differential analyzer, which demonstrated that machines could process information faster than humans. The second was microfilm – documents shrunk to roughly 1/25th of their original size on film strips, viewable through specialized readers. By the 1920s and 1930s, microfilm was revolutionizing information access: the Library of Congress microfilmed some three million pages from the British Library’s collections, meaning American researchers no longer needed to cross the Atlantic. Microfilm was extraordinarily information-dense – the entire run of a newspaper could fit on a few shelves. But it was frustratingly difficult to navigate: imagine fast-forwarding and rewinding through thousands of pages with no search function.
Bush’s first attempt to solve this problem was the Selector, which would map microfilm to codes so users could search for topics. But the Selector suffered from the same limitation as traditional cataloguing: it could find “cats” and “dogs” separately but not the complex interconnections between related concepts.
The Memex
Bush’s key insight was that human minds work by association, not rigid categorization. “The human mind does not work that way,” he wrote. “It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain.” The Memex would mirror this, allowing a user to tie related concepts together and create “trails” through information. Though never built, the Memex established the conceptual foundation for hypertext: the idea that connections between ideas are as important as the ideas themselves. It also contained a rudimentary machine-learning concept, where the device might learn from a user’s linking patterns.
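A few lines of Python can sketch the structure Bush described: items tied together by named associations, forming trails that can be followed in either direction. The document names below are hypothetical, loosely echoing the bow-and-arrow research example in Bush’s article:

```python
from collections import defaultdict

# document -> list of (association, linked document)
links = defaultdict(list)

def tie(a: str, b: str, association: str) -> None:
    """Create an associative link; Memex trails ran in both directions."""
    links[a].append((association, b))
    links[b].append((association, a))

tie("Turkish short bow", "English long bow", "compared with")
tie("English long bow", "elasticity of materials", "explained by")

# Follow the trail outward from one item, as a Memex user would.
for association, doc in links["English long bow"]:
    print(f"English long bow --{association}--> {doc}")
```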
5c. The oN-Line System (NLS)
Douglas Engelbart, while stationed in the Philippines during World War II, read Bush’s article in a Red Cross library. It transformed his thinking: could the same technical skills that produced weapons of destruction be turned instead toward improving the world? By 1950, Engelbart had three “intellectual flashes” about how computers could augment human intellect, envisioning himself at a CRT console working in ways that were “rapidly evolving.”
At Stanford Research Institute, Engelbart developed the oN-Line System (NLS). It featured a CRT display, QWERTY keyboard, mouse, hyperlinked documents with bidirectional links (you could follow a link back to its source), keyword searching, and even video conferencing. The “Journal” software allowed any document to cite any arbitrary passage in any other document.
Engelbart demonstrated all of this at the legendary “Mother of All Demos” at the December 1968 ACM/IEEE conference before 2,000-3,000 computing professionals. The demonstration was revolutionary but also revealed the NLS’s weaknesses: it was complicated (Engelbart made several mistakes during the demo), required an army of technicians to deploy, and did far more than any single organization needed. When government funding dried up, key engineers left for Xerox PARC, and the NLS was eventually acquired and buried within corporate office software development.
5d. Xanadu and the Web
Ted Nelson, unlike Bush and Engelbart, came from the humanities. A sociology graduate student at Harvard in 1960, Nelson was frustrated by the difficulty of organizing his notes. After taking a computer course, he began thinking about how computers could serve information handling. In his 1965 ACM paper, he coined the term “hypertext” and described his Evolutionary List File (ELF), a system for comparing and linking documents. He envisioned Xanadu: a universal filing system that would store and deliver all the world’s literature through hyperlinked text, with bidirectional links, no broken documents, acknowledgment of authorship, and a micropayment system built in.
Xanadu has been called “the longest-running vaporware project in the history of computing” (Wired, 1995), and it remains under active development today. The Web implemented hypertext very differently – with one-way links, broken links, and a separation between reading and writing – but Nelson’s vision of interconnected knowledge profoundly shaped the digital world.
Module 6: The Internet: From ARPA to the ARPANET
6a. What is the Internet?
The Internet (capital I) is the largest internet (lowercase i) in existence. An internet is a network of two or more networks – an “internetwork.” The Internet is the interconnection of millions of networks using a common set of standards for seamless data exchange. It is not synonymous with the World Wide Web; the Web is an information system that uses the Internet, just as email, FTP, and other services do. Physically, the Internet is cables, wires, and switching equipment. Senator Ted Stevens’s 2006 description of it as “a series of tubes” was widely mocked but, as an analogy, is not entirely wrong.
The common standard that makes everything work is TCP/IP (Transmission Control Protocol/Internet Protocol). Competing protocols existed until TCP/IP was adopted by the U.S. Department of Defense in the early 1980s, incorporated into the UNIX operating system in 1983, and released as public domain software in 1989.
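In practice, what TCP/IP offers applications is a reliable, ordered stream of bytes between two endpoints, whatever networks sit in between. The loopback echo below is a minimal sketch using Python’s standard socket module (the message and port choice are ours):

```python
import socket
import threading

def echo_once(server: socket.socket) -> None:
    conn, _ = server.accept()          # wait for one TCP connection
    with conn:
        conn.sendall(conn.recv(1024))  # echo the ordered byte stream back

# AF_INET = IPv4 addressing; SOCK_STREAM = TCP.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # any free port on the loopback interface
server.listen(1)
threading.Thread(target=echo_once, args=(server,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"hello, internetwork")
print(client.recv(1024).decode())      # -> hello, internetwork
client.close()
server.close()
```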
6b. The Context of the Cold War
The launch of Sputnik on 4 October 1957 terrified the United States. The subsequent explosion of the Vanguard TV-3 rocket on 6 December 1957 deepened the crisis. On 7 February 1958, the Advanced Research Projects Agency (ARPA) was formed with three defining characteristics: minimal bureaucracy, rapid response capability, and a mandate to tackle the future. ARPA’s early projects were audacious – Operation Argus launched nuclear warheads into the upper atmosphere to create radiation “force fields” against incoming missiles.
The Cuban Missile Crisis of October 1962 brought the problem of information sharing into sharp focus. Computers were used in a real-time military crisis for the first time, and commanders discovered two problems: too much information, and too much lag in sharing it. How could nuclear forces be effectively controlled without rapid information exchange?
6c. Information Processing at ARPA
J.C.R. Licklider, who took charge of ARPA’s Information Processing Techniques Office in October 1962, had articulated a vision of “man-computer symbiosis” in a 1960 paper. At ARPA, he penned memos describing a “Galactic Network” of connected computers sharing data and programs. Meanwhile, Paul Baran at the RAND Corporation simulated networks under nuclear attack and found that a distributed network with three levels of redundancy could survive. His concept required packet switching – breaking messages into small units (1024-bit “postcards”) routed independently through the network.
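Packet switching itself is easy to sketch: chop a message into fixed-size, numbered “postcards” (Baran’s 1024 bits is 128 bytes), let them travel independently and arrive in any order, and reassemble them at the destination. A minimal illustration with an invented payload:

```python
import random

PACKET_BYTES = 128  # Baran's 1024-bit message blocks

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence number, chunk) pairs."""
    return [(offset, message[offset:offset + PACKET_BYTES])
            for offset in range(0, len(message), PACKET_BYTES)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Reorder by sequence number and rejoin."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"to arrive together, the pieces need not travel together " * 10
packets = packetize(message)
random.shuffle(packets)              # simulate independent routes and delays
assert reassemble(packets) == message
```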
ARPA also inherited an unwanted bomber-defense computer (the AN/FSQ-32) from the Air Force and began developing time-sharing so multiple users could access one powerful machine simultaneously. When Robert Taylor took over as director in 1966, he had three separate terminals in his Pentagon office, each connecting to a different network. His frustration – “If you have these three terminals, there ought to be one terminal that goes anywhere you want to go” – became the driving force behind the ARPANET. The three building blocks were ready to converge: time-sharing, packet switching, and common communications protocols.
6d. Building the ARPANET
Licklider and Taylor’s 1968 paper “The Computer as a Communication Device” articulated the vision for the ARPANET: initially fourteen diverse computers sharing resources. The authors speculated that “life will be happier for the on-line individual because the people with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity” – a prediction that may remind us of Reddit or other online communities.
The contract to build the ARPANET went to Bolt, Beranek, and Newman (BBN), who designed Interface Message Processors (IMPs) – predecessors to modern routers. Crucially, IMPs handled inter-computer communication: each computer connected to an IMP, and IMPs communicated with each other. This meant computers running different operating systems did not need to communicate directly. By 1969, four nodes were online: UCLA, UCSB, SRI, and the University of Utah. The network was publicly demonstrated in 1972 and by 1973 had grown to include Harvard, MIT, Stanford, the University of Illinois, and ARPA in Washington, D.C.
Imagine a message traveling from SRI in Menlo Park, California to ARPA in Washington, D.C. It could route via Los Angeles, Utah, and Illinois, or via Cleveland and Boston. If any node went down, messages could find alternative paths. This was Baran’s distributed network vision realized – redundancy built into the system so that even a nuclear attack on one city would not sever communications.
However, the early design had each IMP verify every packet as it passed through, causing serious congestion at scale. In a real-world analogy, imagine every post office along a letter’s route carefully inspecting it before passing it on – mail would slow to a crawl. The French CYCLADES network, run by the Institut de Recherche en Informatique et en Automatique, proposed a solution: verify data only at the sender and receiver endpoints, not at every intermediate point. By adopting this approach, the modern Internet could begin to scale.
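The CYCLADES approach – verification only at the edges, later generalized as the end-to-end principle – can be sketched as a checksum that intermediate nodes never compute. This is an illustrative model, not the actual CYCLADES protocol:

```python
import zlib

def send(payload: bytes) -> tuple[bytes, int]:
    """Sender attaches a checksum to the payload."""
    return payload, zlib.crc32(payload)

def forward(frame: tuple[bytes, int]) -> tuple[bytes, int]:
    """Intermediate nodes pass frames along without inspecting them."""
    return frame

def receive(frame: tuple[bytes, int]) -> bytes:
    """Only the receiving endpoint verifies integrity."""
    payload, checksum = frame
    if zlib.crc32(payload) != checksum:
        raise IOError("corrupted in transit: request retransmission")
    return payload

frame = send(b"verify at the edges, not at every hop")
for _ in range(5):                   # five hops, none of them checking
    frame = forward(frame)
print(receive(frame).decode())
```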
6e. TCP/IP and the Birth of the Modern Internet
Robert Kahn and Vint Cerf designed the TCP/IP protocol to unify disparate networks. Their 1974 specification was incorporated into UNIX (ancestor of Linux and macOS) and adopted as the U.S. defense communications standard. When ARPANET adopted TCP/IP in January 1983, it became just another subnet of the broader Internet. Corporations dropped proprietary protocols. The Internet became an open platform. ARPANET was decommissioned on 28 February 1990. Cerf marked the occasion with a poem: “Lay down thy packet, now, O friend, and sleep.”
Module 7: Counterculture, Hackers, and the Political Shift
7a. Introduction: Is Technology a Force for Good?
In December 1964, Mario Savio of the Free Speech Movement at UC Berkeley addressed 5,000 people: “There’s a time when the operation of the machine becomes so odious – makes you so sick at heart – that you can’t take part.” As he said in 1965: “At Cal, you’re little more than an IBM Card.” Computers symbolized the dehumanization wrought by the military, corporations, and universities. Three decades later, every tech startup pitch seemed to conclude with “making the world a better place.” How did this transformation happen?
7b. Conformity in the 1950s and 1960s
Post-war affluence brought a baby boom and economic growth but also cultural anxiety: stiffening gender roles, the specter of nuclear war, and the dominance of large bureaucratic organizations. The children of this abundance challenged the status quo in two ways. The New Left – student radicals – sought political change through parties and protest. The counterculture – “hippies” – sought cultural transformation through communalism, psychedelics, and a return to the land, trying to build self-sufficient communes that imagined life before industrialization.
Stewart Brand and the Whole Earth Catalog
Stewart Brand, a Stanford graduate, was shaped by three influences: cybernetics (the interaction of human and machine), Marshall McLuhan’s The Gutenberg Galaxy (1962), which argued humanity was leaving the typographic age for an electronic one that could create a “global village,” and an idealized vision of Native American life. In 1968, Brand created the Whole Earth Catalog (WEC) – not quite a book or catalog, but a “network forum” connecting commune-dwellers, academics, technologists, artists, and the psychedelic community. By 1971, it grew to 448 pages, 1,072 items, and sold 2.5 million copies. The cover featured the “blue marble” image of Earth, an icon Brand had lobbied NASA to release.
7c. The Whole Earth… Online?
The WEC was published in Menlo Park, California – home of Stanford Research Institute, where Engelbart developed the NLS. Brand himself was the camera operator for the “Mother of All Demos.” When NLS engineers migrated to Xerox PARC, they recognized the WEC as an analog information system remarkably similar to their digital one. Brand’s 1972 Rolling Stone article “Spacewar” portrayed graduate students playing networked video games, presenting computing as creative, communal, and personal.
The 1973 OPEC Oil Crisis and ensuing recession largely ended the commune movement, but the computing industry kept growing. By the 1980s, personal computers were entering homes, and people wanted Brand to do for computers what the WEC had done for commune life.
7d. The 1980s: The Hacker Conference and the WELL
Steven Levy’s 1984 book Hackers defined a “hacker ethic” of hands-on learning: “Hackers believe that essential lessons can be learned about the systems – about the world – from taking things apart.” Brand organized the Hackers Conference of 1984, bringing together Steve Wozniak, Ted Nelson, Richard Stallman, and others to wrestle with questions of openness, code sharing, and identity.
In 1985, the WELL (Whole Earth ‘Lectronic Link) launched as a Bulletin Board System designed to recreate the WEC’s communal spirit online. Organized along “conferences” (Arts and Letters, Grateful Dead, etc.), the WELL celebrated collaborative organization. Unlike commercial services like Prodigy or CompuServe that treated users as consumers of packaged information, the WELL saw users as generators of community. Two foundational concepts emerged: the “electronic frontier” and the “virtual community.”
The WELL Ethos Goes Mainstream
Wired magazine launched in March 1993, declaring itself not about technology but about “the Digital Generation” who were “making it happen.” Many WELL members became prominent contributors. Wired blended countercultural ideals with right-wing libertarian politics, culminating in its August 1995 cover featuring Republican Newt Gingrich.
7e. The Political Shift
These currents crystallized in John Perry Barlow’s 1996 “Declaration of the Independence of Cyberspace,” written at the World Economic Forum in Davos. The former Grateful Dead lyricist opened with memorable language: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.” The document declared cyberspace naturally independent of governmental authority: “We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.” It envisioned a “civilization of the Mind” where anyone could express beliefs “without fear of being coerced into silence or conformity.”
Barlow’s declaration invoked Jefferson and the American founding fathers, argued that legal concepts of property and identity based on matter did not apply in cyberspace, and proposed that the Golden Rule would serve as the only universally recognized law. It was utopian, sweeping, and influential – though critics would note that cyberspace was not, in fact, free from the power dynamics of the physical world.
The shift was nonetheless remarkable: in thirty years, computers moved from symbols of oppression to instruments of liberation. But as Fred Turner emphasized, just as 1960s communes had exclusions and dark sides (they were predominantly middle-class and white), the digital frontier carried its own contradictions – more places to work but through precarious contracting, more choice but fewer communal ties.
Module 8: The World Wide Web
8a. Introduction to the Web
The Web’s conceptual ancestors – Bush’s Memex, Engelbart’s NLS, Nelson’s Xanadu – demonstrated hypertext’s potential but never reached a broad public. The Web changed all that.
CERN and the Context
The European Organization for Nuclear Research (CERN), founded in 1954 near Geneva, Switzerland, straddling the Swiss-French border, represented European-wide scientific cooperation. By the 1970s, CERN was dreaming of a new golden age of physics, planning to collide matter and antimatter in facilities that would eventually become the Large Hadron Collider – a 27-kilometer tunnel with gentle curves to minimize energy loss. The experiments required building precise detectors, writing computer simulations, capturing collision data, and analyzing it – all demanding large collaborations. The smallest team involved some 300 people; the largest, 700.
Researchers came from dozens of countries on rotating six-month to two-year contracts. They spoke different languages, used different computer systems, and saved data in incompatible, self-contained databases. By the late 1980s, CERN’s internal network had adopted TCP/IP, joining the global Internet. The infrastructure for connectivity was there; the organizational challenge of making information accessible and coherent across these diverse teams was not.
Tim Berners-Lee and ENQUIRE
Tim Berners-Lee (b. 1955), an Oxford-trained physicist, first worked at CERN in 1980. Confronted with “TMI” (Too Much Information), he created ENQUIRE, a program named after the Victorian how-to book Enquire Within Upon Everything. Users created nodes and linked them with typed relationships (“What is xxx part of?”, “What must I alter if I change xxx?”). ENQUIRE was cross-platform, featured bidirectional links, and integrated editing. Berners-Lee left CERN in 1980 without taking the program, and a colleague who received it lost it.
8b. “Vague, but Exciting”: The World Wide Web
Returning to CERN in 1984, Berners-Lee found things worse than ever. By 1989, personal computing was widespread and TCP/IP networking was mature. He wrote “Information Management: A Proposal” in March 1989, outlining the problems with keyword-based systems, explaining hypertext, and requesting two people for six to twelve months. His supervisor approved it with the famous marginal comment: “Vague, but exciting.”
To build the Web, Berners-Lee needed a NeXT computer – a machine from Steve Jobs’s post-Apple company that combined ease-of-use with UNIX’s development power. By Christmas 1990, he had defined the Web’s three pillars: URLs, HTTP, and HTML. He built the first web browser and the first web server. Intern Nicola Pellow developed the line-mode browser so that any computer with a text interface could access the Web.
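The three pillars still fit in a dozen lines. The sketch below hand-writes an HTTP request over a raw TCP socket to example.com (a domain reserved for demonstrations): the URL names the resource, HTTP fetches it, and the body that comes back is HTML.

```python
import socket

host, path = "example.com", "/"      # the parts of the URL http://example.com/
request = (
    f"GET {path} HTTP/1.1\r\n"       # HTTP: the transfer protocol
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode())
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Status line and headers first, then the HTML document itself.
print(response.decode(errors="replace")[:300])
```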
On 6 August 1991, the Web was announced via Usenet newsgroups. On 30 April 1993, CERN declared the technology freely available, ensuring universal adoption.
8c. From Mosaic to Internet Explorer
NCSA Mosaic: The Killer Browser
The Web remained niche until early 1993, when NCSA Mosaic arrived. Developed by Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications at the University of Illinois, Mosaic was the first fully-featured, popular graphical web browser. What made it special? It was fully tested and supported, easy to install (just download and run), and graphical – users could click on hyperlinks and see images load, experiencing the Web much as we do today. One significant omission: Mosaic let people browse but not edit content. Berners-Lee hoped editing features would eventually be added, but they largely never were.
Mosaic captured massive media attention. While people might have heard vaguely of the “information superhighway,” Mosaic seemed to be the way anyone could access it. It quickly became nearly synonymous with the Web itself, much to CERN’s frustration (since Mosaic relied on code libraries CERN had developed).
The Browser Wars
Some of the people from NCSA Mosaic were recruited to a new company: Mosaic Communications, which after a threatened lawsuit wisely renamed itself Netscape. The browser war had begun. Microsoft responded with Internet Explorer, leveraging its dominance in operating systems by bundling IE with Windows. By 2003, IE held approximately 95% of browser market share. The U.S. government alleged that Microsoft had abused its near-monopoly to run competitors out of business. After initially being ordered to break into two companies, Microsoft settled out of court with lesser penalties. Netscape was eventually bought by AOL, and members of the Netscape team founded the Mozilla Foundation, which today brings us Firefox.
The Growing Web
Canadian Internet access grew from 4% in 1995, to 25% in 1998, to 60% in 2001, to 71% in 2005, and to 88% in 2015. The Web was here to stay.
8d. Accessing the Early Web as a Historical Resource
The Internet Archive, founded in 1996, preserves over 300 billion web pages through its Wayback Machine. Researchers can visit archived versions of sites – the University of Waterloo’s 1997 homepage, Yahoo’s 1996 directory, early Amazon pages. However, the Wayback Machine searches only domain homepages and its keyword functionality is limited. Finding specific content requires deeper exploration, often through period web directories like Yahoo’s categorized listings from the 1990s.
Module 9: Web Cultures: Spam, Trolls, and Memes
9a. Web Cultures
User cultures developed in the margins of early networks. Graduate students playing on terminals at 3 a.m. in basement computer labs created new norms. Many were fans of Monty Python, and the word “spam” entered computing culture from the famous sketch set in a café whose menu offers little but Spam. If you wanted to annoy another user, you wrote a script to flood their screen with “SPAM SPAM SPAM.” From these playful origins, spam, trolling, and memes became defining features of the Web.
9b. Spam
Finn Brunton defined spam as “the manipulation of information technology infrastructure to exploit existing aggregations of human attention.” The first spam message predated the name itself. On 1 May 1978, Gary Thuerk, a salesperson for the Digital Equipment Corporation (DEC), took the ARPANET directory, found all West Coast users, and sent a blanket message advertising an open house. The Department of Defense responded that this was “a flagrant violation” of ARPANET’s official-use policy. Of course, the incident also revealed that the network was already being used for unofficial purposes – swapping science fiction stories, sharing hobbies – and nobody had cared until someone tried to sell something.
The major watershed came on 12 April 1994, when lawyers Laurence Canter and Martha Siegel posted a Green Card Lottery advertisement to 5,500 Usenet groups. Usenet, a network of threaded discussion forums dating to 1980 (which by the 1990s hosted groups like alt.fans.x-files for television fans), was the first major victim of commercial spam. The Green Card Lottery – an annual event that could change the lives of people in the developing world – was the bait. The return on investment was clear: minimal cost, maximum reach. This was spam’s fundamental formula: find a place where people are paying attention and exploit it.
Spam Helps Us Understand the Web
The history of spam is really the history of the Web’s growth. If it were just a few nerds in a basement swapping stories, nobody would bother spamming them. But once millions of people with money were online, spam became economically rational. The Web grew from one website in 1991, to 20 in 1992, to 10,000 by 1995, to millions by 1998.
Spam and Search Engines
Spam drove search engine innovation. Early search engines relied on HTML meta tags (easily manipulated by website owners) and keyword frequency (leading to practices like hiding “BMW BMW BMW BMW BMW” at the bottom of pages to game rankings). Google’s PageRank algorithm instead used hyperlinks as votes for quality: the more sites linking to you, the higher your ranking. Spammers responded with link farms – networks of websites linking to one another to inflate rankings artificially. Google countered by weighting links according to the linking site’s own authority. The arms race continued through blog comment spam, phishing emails (which began in the mid-1990s with hackers capturing AOL accounts, marking stolen accounts with a fish symbol, <><, in their lists), “like farms” on social media, and the CAPTCHAs that force us to prove our humanity daily. Fake news can be seen as spam’s continuation – tricking people, again and again, into believing something that is not true.
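To make the link-as-vote idea concrete, here is a minimal PageRank-style sketch in Python. The damping factor, iteration count, and toy graph are illustrative assumptions, not Google's actual parameters or implementation. Note how the two mutually linking "farm" pages inflate each other's scores – exactly the vulnerability that forced Google to weight links by the linker's own authority.

    # A toy PageRank: each page's score is a small base share plus
    # "votes" passed along by the pages that link to it.
    def pagerank(links, damping=0.85, iterations=50):
        pages = set(links) | {p for targets in links.values() for p in targets}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                for target in targets:
                    # a page splits its rank among everything it links to
                    new_rank[target] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    # Two independent blogs link to an honest site; two "farm" pages
    # exist only to link to each other.
    toy_web = {
        "honest-site": [],
        "blog-a": ["honest-site"],
        "blog-b": ["honest-site"],
        "farm-1": ["farm-2"],
        "farm-2": ["farm-1"],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page:12s} {score:.3f}")

Run it and the farm pages come out on top (roughly 0.20 each against about 0.08 for the honest site): raw link-counting rewards collusion, which is why weighting by the linking site's authority mattered.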
9c. Trolls
Trolling – communication that is provocative, offensive, or disruptive, often “for the lulz” – has roots in Norse literature, where trolls are “troublesome, shifting, changing and hard to pin down.” The first high-profile incident occurred in 1993 on LambdaMOO, a text-based multiplayer game, when a user ran a program that made other players’ characters appear to perform sexual acts. Written up as “A Rape in Cyberspace” in the Village Voice, the incident sparked debates about virtual assault, online governance, and the difficulty of stopping such behavior.
Early Usenet groups waged organized trolling wars: Harvard students posting in a Karl Malden fan group attacked the Beavis and Butthead group, whose members retaliated by driving the Harvard students off Usenet. The shift from Web 1.0 (information resource) to Web 2.0 (user-generated content) amplified trolling’s reach. Memorial pages on Facebook became targets. 4chan’s anonymous /b/ board enabled mass trolling campaigns. State actors like Russia weaponized trolling during the 2016 U.S. election. Scholar Andreas Birkbak has offered a partial defense, arguing that trolls force us to “slow down reasoning” and question norms – though this defense becomes much harder to sustain when trolling crosses into harassment or state manipulation.
9d. Memes
Richard Dawkins coined “meme” in 1976 to describe cultural units that replicate through imitation, analogous to genes. Internet memes circulated on Usenet by the early 1990s. Godwin’s Law (1990) – “As an online discussion grows longer, the probability of a comparison involving Nazis approaches one” – was a deliberate exercise in “memetic engineering” by lawyer Mike Godwin, frustrated by the constant Nazi invocations in online debate.
Graphical memes emerged in the mid-1990s, with the “Dancing Baby” becoming the first mainstream Internet meme. The “Bert is Evil” meme – photoshopped images placing the Sesame Street character alongside villains – illustrates how memes escape their creators’ control. In 2001, protesters in Bangladesh inadvertently used Bert-and-Bin-Laden images on protest signs, having found them through a web image search. The creator took the site down, but the meme had already spread beyond his reach. Similarly, Pepe the Frog’s creator tried and failed to reclaim the character from the alt-right, killing Pepe off in 2017. These cases raise fundamental questions about meme ownership, consent, and whether such material should be preserved or removed.
Module 10: The Web Beyond North America
10a. Introduction
The Internet’s history has been predominantly told from the perspective of the United States and Western Europe. ARPANET, TCP/IP, hypertext, and the Web all originated in North America and Western Europe, giving English a dominant position online. Most early Web content was in English, and users worldwide often needed English to participate. But networked communication developed very differently elsewhere, shaped by local politics, economics, language, and culture.
10b. Web Alternatives: The Case of the Minitel
In the history of the Internet and the Web, we often treat technologies that did not lead to the Web as dead ends. But what if the Web had not caught on and some other technology had risen? The story of Minitel forces us to reconsider.
As Julien Mailland explained: “In 1991, most Americans had not yet heard of the internet. But all of France was online, buying, selling, gaming, and chatting, thanks to a ubiquitous little box that connected to the telephone.” Minitel, part of an earlier wave of telephone-based computer networks launched in 1978 as Teletel, succeeded for three reasons. First, dedicated terminals were given to subscribers free as a loss-leader alternative to paper telephone directories – each terminal cost about 500 francs to build, but France’s telecommunications agency absorbed the cost. Second, the system was truly plug and play: unbox, plug into the phone line, and it worked immediately, unlike the complicated personal computers of the early 1980s. Third, the French agency oversaw only the network infrastructure, not content, allowing vibrant user cultures to emerge. This was an early form of net neutrality: while illegal activity was still regulated, users could do what they wanted with the network.
By 1989, five million Minitel terminals were active. Users could shop, play games, chat, and access forums. The system collected payments from users and passed them to sellers (taking a cut) – functioning much like today’s Apple App Store. One particularly popular service, 3615 SM (originally “serveur médical”), attracted a large mainstream audience partly because its initials had racier connotations in French. This contrasted sharply with Britain’s Prestel, launched the same year but expensive, censored, and requiring a specialized engineer for installation – it peaked at only 90,000 subscribers. One French researcher even argued that Prestel’s failure was due to British “prudishness.”
Minitel survived into the 2000s, trusted for credit card transactions more than the early Web. But as Gillies and Cailliau noted, “Minitel may bring France to your home, but the Web brings you the world.” Its national character proved to be its limitation. Minitel was finally shut down in 2012.
10c. The Soviet Internet: A Failed Alternative
After World War II, the Cold War divided the world between the capitalist West and the communist East. Part of this contest was over which system was superior. If the ARPANET could emerge from the American defense research establishment, surely the Soviet Union – with its power to direct economic resources centrally – could build a networked communication system of its own?
The story, told in Benjamin Peters’s How Not to Network a Nation, reveals a series of frustrated attempts. In 1959, Anatoly Kitov proposed the EASU (Economic Automatic Management System), linking factory mainframes through military networks to optimize production. The Soviet military did not take kindly to sharing their computers: Kitov was stripped of Communist Party membership and effectively lost his career. In 1962, Aleksandr Kharkevich proposed a distributed research network (ESS) conceptually similar to what Baran was developing at RAND, but the project died with Kharkevich in 1965 – paradoxically illustrating how dependent the supposedly collectivist Soviet system was on individuals.
The most ambitious attempt was Viktor Glushkov’s OGAS (National Automated System for Computation and Information Processing), conceived in 1962. The goal was to network the entire Soviet economy: a worker could input data about resources and production, a factory manager could view it, and the information could flow all the way to central planners. Unlike the distributed ARPANET, OGAS needed a central processor at the heart of the network – reflecting the Soviet vision of a nation as “a single incorporated body of workers” (Peters).
When presented to the Politburo on 1 October 1970 – hastened by the news that the ARPANET had come online in 1969 – OGAS was close to approval. But it was caught in a fight between the Ministry of Finance and the Central Statistical Administration: the Finance side successfully argued that the plan would make the Statistical Administration too powerful, and the project went nowhere after this defeat. As Peters argued, OGAS “threatened to strip their institutions of the thing that justified their existence – the need to manage the command economy in the first place.” The irony is remarkable: in the United States, government funding enabled the decentralized ARPANET; in the centralized Soviet Union, entrenched bureaucratic interests prevented networking.
10d. ASCII Imperialism and Unicode
North American dominance shaped how computers encode text at the most fundamental level. When computers communicate, they transmit binary strings of bits (0s and 1s). Each character must be “encoded” into bits, transmitted, and decoded at the other end – much like a telegraph message, but at enormous speed. The earliest dominant standard was ASCII (American Standard Code for Information Interchange, 1963), which used seven bits per character. This was great for English: you could encode “Hi Ian” perfectly. But try encoding “Niels Brügger” – ASCII had no representation for the ü. It could not handle French accents, Scandinavian characters, or any non-Latin script. For the earliest decades of the Internet, this English-only encoding was the default. Non-Latin domain names were unavailable until 2011, meaning Chinese web users had to visit “.cn” rather than a domain in their own characters.
By the 1980s, ISO 8859 added an eighth bit, allowing support for Western European languages (Part 1, 1987), then Turkish (1989), and eventually Thai (2001). But switching between language encodings within a single document required cumbersome “switching codes.” The breakthrough came with Unicode, developed in the 1990s and sometimes called the “Unicode Miracle.” Unicode provides a single standard that can represent virtually all written languages simultaneously – a nearly universal translator. Its adoption grew rapidly through the 2000s, but the legacy of English-language dominance in networked communication persists.
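The difference is easy to see in practice. A short Python demonstration (the names are just examples):

    # Plain English fits within ASCII's 128 seven-bit characters...
    print("Hi Ian".encode("ascii"))          # b'Hi Ian'

    # ...but there is no ASCII slot for the u-umlaut.
    try:
        "Niels Brügger".encode("ascii")
    except UnicodeEncodeError as err:
        print("ASCII failed:", err)

    # UTF-8, a Unicode encoding, represents it with a two-byte sequence
    # and can round-trip virtually any written language.
    encoded = "Niels Brügger".encode("utf-8")
    print(encoded)                           # the u-umlaut becomes bytes 0xC3 0xBC
    print(encoded.decode("utf-8"))           # back to the original string

What took cumbersome "switching codes" under ISO 8859 – mixing, say, French and Thai in one document – is unremarkable under Unicode: every character simply has its own code point.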
10e. Three Case Studies: Taiwan, Japan, and Korea
Three case studies from Asia illustrate how the same technologies evolved very differently depending on local context.
In Taiwan, Bulletin Board Systems persisted long after BBSes died elsewhere in the world. Thanks to government policies that provided faster connections to university students, free access via dormitories, and student-selected administrators, Taiwanese BBSes developed a distinct culture and remained relevant as platforms for student activism well into the 2000s.
In Japan, Mark McLelland has documented how language difficulties with ASCII-centric computers – which strongly favored Latin characters – meant that early computer networking happened on corporate intranets rather than the global Internet. Internet adoption lagged throughout the 1990s because the global network was dominated by English. It was not until Internet access via mobile phones arrived in the late 1990s that Japan really began connecting to the global network at scale.
In South Korea, Dongwon Jo has argued that Internet use evolved largely through H-Mail, a public email service launched in 1987. Originally designed as a broadcast service that pushed messages out to many users, the system was creatively repurposed by early Korean users into a standard email service and even an early online community. Each of these cases demonstrates that technology is not destiny – local politics, language, infrastructure, and culture shape how people actually use the tools available to them.
Module 11: Web Archives and Cultural Datasets
11a. Introduction
The Internet Archive and similar institutions address a fundamental shift in the cultural record. We have moved from source scarcity – historians wishing they had more information – to source abundance. Ordinary people now leave behind vastly more written material than celebrated authors of past centuries.
11b. A Quick Introduction to Web Archives
GeoCities.com (1994-2009) illustrates both the scale and fragility of digital culture. By 2009, seven million users had created accounts and some 186 million pages of content – websites about favorite sports teams, family trees, fan fiction, and early blogging. The Internet Archive, founded in 1996 by Brewster Kahle and Bruce Gilliat in San Francisco, preserves over 300 billion web pages. Thanks to its Wayback Machine, researchers can visit archived sites: early Amazon, university homepages from the 1990s, Tamagotchi fan pages. Web archives allow historians to access not just elite voices but the experiences of people who never before would have appeared in the historical record.
What’s the Catch?
There is too much data. Historians have traditionally worked with source scarcity; now they face source abundance. The University of Toronto’s Canadian political web archive returns over 637,000 results for “Stephen Harper.” Search engines determine what researchers see first, effectively becoming co-authors of historical scholarship.
11c. Ethics and Scale
Two major challenges confront researchers working with cultural big data.
Ethics: Just because material is publicly accessible does not make its research use unproblematic. A tweet from someone with fifteen followers carries a different expectation of privacy than a presidential statement. If the University of Waterloo quoted your tweets in a report, you might well feel your privacy had been violated. Researchers must weigh content creators’ expectations of privacy against the scale of their analysis.
Scale: Working page by page through the Wayback Machine is impractical for millions of documents. Historians increasingly train computers to read text at scale, leverage hyperlink analysis (similar to PageRank) to identify important sources, and use machine learning to find patterns. In one research example, hyperlink analysis of GeoCities’ “Enchanted Forest” neighborhood – containing hundreds of thousands of pages largely written by children – revealed that the most-linked site was an awards page, providing insight into the social dynamics of that online community.
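A minimal sketch of that kind of hyperlink analysis in Python. The directory layout and the crude regular-expression link extraction are illustrative assumptions; a real project would work from WARC crawl files with a proper HTML parser.

    from collections import Counter
    from pathlib import Path
    import re

    # Count inbound links across a hypothetical directory of saved pages.
    inbound = Counter()
    for page in Path("geocities_crawl").glob("**/*.html"):
        html = page.read_text(errors="ignore")
        # crude href extraction; real projects use a proper HTML parser
        for target in re.findall(r'href="([^"#]+)"', html):
            inbound[target] += 1

    # The most-linked pages hint at what a community valued.
    for url, count in inbound.most_common(10):
        print(f"{count:6d}  {url}")

In the Enchanted Forest study, this basic move – counting which pages attract the most inbound links – is what surfaced the awards page at the top of the list.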
11d. Creating Our Own Cultural Datasets
Tools like twarc allow researchers to collect Twitter data systematically. Archiving websites is technically challenging: technology constantly changes (infinite scroll, embedded content from multiple domains, growing file sizes), and websites are really composites assembled from many sources in the user’s browser. Preserving government websites is particularly urgent – reports that once existed in print now live only online and can be deleted during transitions of power.
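As a sketch of what collection with twarc looks like, assuming you hold API credentials for the platform – and noting that both twarc and the underlying API have changed over time, so current documentation should be checked:

    from twarc import Twarc2

    # Hypothetical credential; obtain your own bearer token from the platform.
    client = Twarc2(bearer_token="YOUR_BEARER_TOKEN")

    # search_recent pages through tweets matching a query from recent days.
    for page in client.search_recent("#WebArchiving"):
        for tweet in page.get("data", []):
            print(tweet["id"], tweet["text"][:80])

Saving the raw JSON rather than just the text is the usual practice: it preserves the metadata (timestamps, reply chains, user details) that later researchers will want.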
Individuals can also contribute to preservation. The Internet Archive’s Save Page Now service lets anyone archive a webpage, and tools like WebRecorder.io capture sites exactly as they appear on screen.
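Save Page Now can also be triggered from a script by requesting web.archive.org/save/ followed by the page’s URL. A minimal sketch follows; the header and the exact response fields are assumptions worth verifying against the Archive’s current documentation, and heavy use should go through its documented API.

    import urllib.request

    target = "https://uwaterloo.ca"   # page you want preserved
    request = urllib.request.Request(
        "https://web.archive.org/save/" + target,
        headers={"User-Agent": "hist216-example"},  # identify the client politely
    )
    with urllib.request.urlopen(request) as response:
        # the Content-Location header typically names the new snapshot
        print(response.status, response.headers.get("Content-Location"))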
11e. Conclusions
If you ever wonder about the importance of these new forms of communication, consider Zeynep Tufekci’s Twitter and Tear Gas, which documents how social media has become a major site for bringing diverse groups together for meaningful change. As Tufekci writes of the 2011 Egyptian Revolution: “Thanks to a Facebook page, perhaps for the first time in history, an internet user could click yes on an electronic invitation to a revolution. Hundreds of thousands did so.”
Our cultural record is dramatically expanding. In the era of print, relatively little information was preserved; now we preserve social media streams, websites, movies, and photographs at an unprecedented rate. This opens new ethical territory – much material is archived without creators’ knowledge or consent – and demands new technical skills of humanists working with data at scale. Through web archives and big data, with ethical caveats and technical limitations kept in mind, historians should be able to draw better insights from lives lived online and off, and bring more voices, and more diverse voices, into the historical record. The era of “dead white men” dominating the record may finally be ending.
Module 12: Conclusions
12a. Conclusion
The Information Revolution
Module 1 introduced the idea of a “disruptive information age.” Throughout these notes, we encountered similar rhetoric of revolution around earlier technologies: the printing press, the telegraph, the telephone, hypertext, and the Internet. Each was heralded as world-changing. A key question emerges: is it proper to understand what is happening as “the” information revolution, or just “an” information revolution, or not really a revolution at all? Remember: if you are calling something a revolution, you are making a historical claim.
The Value of History
Module 1 opened with a quotation from ex-Uber engineer Anthony Levandowski, who told the New Yorker: “I don’t even know why we study history. It’s entertaining, I guess – the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn’t really matter. You don’t need to know that history to build on what they made. In technology, all that matters is tomorrow.”
These notes have shown the opposite. Understanding the printing press illuminates debates about digital disruption. Understanding telephone regulation informs net neutrality policy. Understanding the Soviet Union’s failed networks reveals why the ARPANET succeeded where OGAS did not. Understanding the Whole Earth Catalog helps explain Silicon Valley culture. History does not predict the future, but it provides the informed expertise to understand the structure of the game, to know when to be surprised, and to recognize patterns that might otherwise go unnoticed.
We’ve Come a Long Way
These notes have covered considerable ground. We began with the earliest forms of recorded human communication – cave paintings in France, Argentina, and Indonesia – and moved to inscriptions on clay in Mesopotamia, texts written on parchment and vellum, and the Gutenberg press. It was with Gutenberg that we first considered whether human history could be divided into two epochs: before the printing press and after. We explored methods of transmitting information: through vision (the optical telegraph’s long lines of towers across France and England), through sound (clock bells synchronizing time), and then through electricity. The telegraph connected the world, with transatlantic and transpacific cables joining regions that had previously been weeks apart.
Then came the Internet. As information density grew dramatically, ideas of hypertext challenged the linearity of text, allowing us to think about how computers could connect information in novel ways. Much of this potential was realized through the ARPANET and ultimately through the World Wide Web, which has reshaped our world in a truly global fashion since it became publicly available in 1991. In turn, all of this is transforming how society understands itself, as we record far more historical information than ever before.
Our Lives Have Changed
Consider Edgar Allan Poe (1809-1849), one of the most celebrated and influential American writers of the nineteenth century, best known for poems like “The Raven.” If you are a literary scholar wanting to study Poe, you have 422 surviving letters written by him from which to reconstruct his life. Most people alive today have already “published” more than Poe ever did and will leave more behind. We live in a world dominated by communication – through text, images, video, and beyond. The question endures: is that a good thing?