HIST 216: History of Information
Course Author(s), University of Waterloo
Estimated reading time: 5 hr 13 min
This course traces the history of information and communication technology from the invention of the printing press to the modern Web. It explores how each new medium—from moveable type to the telegraph, the telephone, the internet, and the World Wide Web—transformed society, reshaped culture, and created new possibilities and new dangers for the organization and sharing of knowledge.
Module 1: Communication Before the Digital Age
1a. Welcome to the Course
Introduction
Review the Syllabus
Now that you have watched the introductory video, I would like you to review the syllabus in depth. It lays out the pattern we will follow throughout the course.
Once a week, I will post an announcement in LEARN that contains all of the information about the week’s readings and reminds you about the discussion activities.
There will usually be a discussion activity where you will engage with your classmates.
There will be some longer-form assignments to help you explore course themes in a more creative and expressive format.
Material in the lecture modules and readings will be assessed on a final exam.
1b. An Early History of Communication
Humans have long wanted to communicate with each other, most often through our voices. If we all ended up in the same physical space, we would probably talk to each other, tell stories, joke, laugh, and so on. Indeed, many early cultures operate in a storytelling tradition: stories are passed down from one generation to the next, perhaps changing along the way, representing a proud oral tradition that still has implications for legal systems today.
For example, many First Nations within Canada operate in an oral tradition, and today, when we weigh the oral-history interpretation of a treaty against what English negotiators wrote down at the time, we understand that neither has primacy over the other.
One limitation of oral communication is its range. Imagine we are in a lecture hall. If I raise my voice and you are sitting in the back row, you could probably hear me; if I speak at a normal conversational level, you might have trouble hearing me. Step out to the bathroom and we can’t communicate at all; go to another building and we might as well be on different continents in terms of our immediate ability to exchange information.
Figure 1: Cave painting from Argentina, 13,000 - 9,000 BCE. Buenaventuramariano/iStock/Getty Images
Figure 2: Sulawesi Cave Art with hand imprints, Indonesia. KvitaJan/iStock/Getty Images
Some three million years or so after the emergence of the human lineage, we begin to find evidence of humans communicating by using cave paintings. Our ancestors made paint from fruits, berries, animal blood, and coloured minerals, and began to paint on the walls of caves.
Here is one of the oldest cave paintings, from Indonesia, which might be around 39,900 years old. Because this was so long ago, we struggle to know where and when the idea of painting on a wall came from. Some theories see cave painting as an idea carried around the world by early humans as they migrated out of Africa; others believe that different groups of humans came to the idea spontaneously and independently.
Here are some more in France, for example: beautiful paintings of predators like lions, and of mammoths. We can see common patterns, such as the hand imprints you saw in Figure 2, as well as a lot of animals. Perhaps they were warnings about animals, or just art, or part of early religious traditions meant to increase the numbers of animals, or attempts to record visions or dreams.
To me, they are significant because they speak to our desire to leave something behind. I’m struck by this as a historian, because without leaving something behind, we don’t have a history: if we speak, and nobody else hears this or remembers this, it is lost forever.
But if we write something down, whether it is on a cave or on a scroll or on a laptop or in the cloud, others can read it. We can leave something behind. We can communicate outside of our immediate vicinity. This sentence, for example, is being written and presented to you through the University of Waterloo’s online learning system… and will be seen by hundreds of people for who knows how long.
Figure 3: Lascaux animal painting. (Lascaux Animal Painting, c. 17,000 cal BP)
Early Text
Figure 4: Approximate location of Sumer. PeterHermesFurian/iStock/Getty Images
If cave paintings were one such way to express content, humans at a much later time would begin expressing themselves in text.
Well before 3000 BCE, people were clustering in densely populated urban centres in Mesopotamia, today’s Iraq. One group, living in the region of Sumer, began to write receipts when conducting small trades. This is important: without a receipt, one of the parties could claim they had been cheated or try to renegotiate what had happened. Traders thus began to exchange these tablets to certify what had taken place. They would recruit someone skilled in the art of transcription to draw symbols on a clay token that both trading parties understood.
Figure 5: Timeline indicating the appearance of writing in the ancient world. © University of Waterloo
We have seen from cave art that humans clearly knew how to communicate ideas in pictures and other visual forms, so this is not the first time people are recording ideas. But in Mesopotamia, they have two key things: clay, and reeds to press into the clay to write. Part of the reason we know they were doing this is that clay is relatively durable, so we can still find these tablets today!
This idea is found around the world. Similar to cave paintings, writing begins to appear in different places, and there are debates about whether there was a single origin or if it was occurring independently in different places. Certainly by 3000 BCE, Egyptians are also writing, and by 1200 BCE, the Chinese.
Pictured in Figure 6 is the Epic of Gilgamesh, from around 2100 BCE, considered the first true work of literature written down by a human. Written as a series of epic poems, the Epic of Gilgamesh is used today by scholars trying to understand these societies – we can even see in it themes that would later appear in the Christian Bible and in Greek literature and poetry.
By 1200 BCE or so, in Egypt, people begin writing on new surfaces. There, for example, papyrus, made from a plant, was used as a writing surface. Papyrus is quite durable, but expensive.
Figure 6: Egyptian Papyrus showing the last judgment. Aloya3/E+/Getty Images
Parchment
Finally, by the fifth century BCE, people begin to write their texts on parchment. Parchment is made of prepared animal skins that were dried, scraped, and bleached so that people could write on them. Throughout the Western world, until the fifteenth century, this was the expensive surface that most writing was done on.
Figure 7: Qumran Caves Scrolls in Israel. alefbet/iStock/Getty Images
Imagine if every single piece of paper you wanted to use had to be prepared by flaying an animal, soaking its skin in water, using a solution to remove all the hair, hanging it up and stretching it, and perhaps rubbing flour, egg whites, or milk into the skin to make it smooth and white… you probably would not use much paper!
We will consider what a world based on clay, papyrus, or parchment is like in the next module.
1c. The Vulnerability of Information
All of what you have seen so far is relatively physically durable – a clay tablet or a parchment is actually more durable than the pages of the recent paperback novel you might have read – which is why we are talking about them today. They are also very, very valuable, because they are scarce.
There are not many clay tablets, because they take a long time to copy; and there are not many parchments, because of the process of not only making the sheet but also sitting down and physically copying every word on the page.
This has some dramatic impacts. Those of us who work in digital preservation often think of the phrase “LOTS OF COPIES KEEP STUFF SAFE”. For example, if our research group really wants to preserve data here at the University of Waterloo, we often keep a minimum of three copies: two on campus, and a third off campus, so that if some crisis happens on campus, we’ll still have a copy somewhere else. If our data were even more valuable, we might want a copy in another country, or on another continent.
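The intuition behind “lots of copies keep stuff safe” can be made concrete with a little arithmetic. Here is a minimal sketch in Python, assuming (purely for illustration) that each copy can be lost independently with some probability p:

```python
# Illustrative sketch (my numbers, not the course's): if each copy of a
# record is lost independently with probability p, the chance that ALL
# n copies are lost is p ** n.
def chance_all_lost(p: float, n: int) -> float:
    """Probability that every one of n independent copies is lost."""
    return p ** n

# Suppose each copy has a 10% chance of being destroyed over some period:
one_copy = chance_all_lost(0.10, 1)      # 10% chance the record vanishes
three_copies = chance_all_lost(0.10, 3)  # 0.1% chance: a thousand times safer
print(one_copy, three_copies)
```

With one copy at a 10% loss rate, the record survives 90% of the time; with three independent copies, the chance of losing everything drops to 0.1%. This is exactly why the off-campus (or off-continent) copy matters: it keeps the failures independent.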
Figure 1: The physical separation of data. Rainer Lesniewski/iStock/Getty Images
Figure 2: A symbol of vulnerability in the ancient world. Grzegorz Zdziarski/iStock/Getty Images
The converse, however, is “FEW COPIES KEEP STUFF VULNERABLE”. Nothing symbolizes this better to me than a place like the Library of Alexandria, which we’ll be discussing in a few weeks. The Library of Alexandria was the most significant and one of the largest libraries in the ancient world, founded at some point in the third century BCE; it collapsed (we will talk about the details of this) at some point in the first six centuries of the common era.
Accordingly, until the processes we talk about in the following modules take hold, human information is quite scarce.
Communication is Laborious
By the time we get into the sixth century, monastic institutions begin to preserve history through the widespread copying of books. This requires people sitting down, copying material, drawing illustrations (or “illuminating” a text), just to keep a slow pace of copying going. We’ll explore this more in the next module, but the summary is that:
Producing textual knowledge is expensive.
Texts become full of errors and corrupt, because every copy introduces more errors.
Because knowledge corrupts, science is very difficult: you can’t simply rely on data from another book, because it might have been transcribed in error.
Because you can’t really reproduce images, we don’t have standardized maps, star charts, or tables.
Figure 3: The corruption of knowledge, as texts are copied and re-copied. aluxum/iStock/GettyImages
In other words, we can communicate with each other over time and space, but it’s slow and limited by our muscles.
Now that you have had a chance to review the syllabus and this opening module, I would like you to participate in our opening discussion activity.
Module 1 Class Discussion Activity
The entire class will participate in the Module 1 Discussion.
Please answer the following three questions:
Who are you?
What program are you in?
Reflect on the following open-ended prompt in 100-150 words: The Internet is the most disruptive invention of the last 100 years. Yes or No? Give two or three reasons to support your argument.
Note that there is no “right” answer and the reflection question itself can be called out as problematic. But as you think on this, reflect. Were you on the fence? Was it easy to write down two or three reasons? Or hard?
Be sure to post your responses in the Introduce Yourself Discussion Topic by the date specified in the Course Schedule.
Discussion activities for Modules 2-12 will take place within your groups for the remainder of the course.
Module 2: The Printing Press
2a. Introduction: The Production of Knowledge Before the Printing Press
“In the late fifteenth century, the reproduction of written materials began to move from the copyist’s desk to the printer’s workshop.”
(Eisenstein, 1983)
When we think about the impact of the printing press, nothing underscores this to me more than this quotation above. It captures the big shift in the production of knowledge that we will be talking about in today’s module.
In simpler words, until the printing press, if humans wanted to transmit knowledge, we largely had to reproduce that knowledge ourselves, through our own physical labour. We laboured to speak, and if we wanted to transmit text, we laboured to write it down.
Indeed, you could argue something like the following:
Figure 1: Printing press pivotal in human history. Nadiinko/iStock/Getty Images
Scholars, like Elizabeth Eisenstein and others, have argued that the printing press represents a pivotal moment: that we, standing in 1990, might have had more in common with somebody from the 1500s than somebody from the 1500s would have had with somebody from the 1400s. In other words, the printing press is such a significant moment that we today live in a print world, dominated by text.
Part of the debate we face today, as we consider the impact of the Internet and the World Wide Web, is similar to this. Are we still in the era of the printing press? Or has something more revolutionary come along with these new forms of media? Keep this question in mind as you read and explore this module.
Scarcity: The Production of Knowledge Before the Printing Press
Let’s consider what producing knowledge was like by the fifth century.
Large monasteries had rooms called scriptoria where monks would copy manuscripts. These were quiet places of rote copying: a monk might see a mistake, but he would not correct it; he would just copy what he saw. Some of these monks may actually have been illiterate themselves, copying the shapes and scratches they saw before them. This was not a static system, and perhaps by the twelfth century we begin to see the rise of a “putting-out” system, where monks took their work home with them, each working on a piece of a manuscript that would later be assembled.
What does this kind of system mean? What does it mean if your society is based on people writing all of your books?
First, under this system of knowledge reproduction, knowledge easily corrupts. Every time a new edition of a text was produced, a human had to copy it, introducing errors here and there as they went.
It’s sort of like the children’s game “telephone” (where you repeat a phrase to the person next to you, and they in turn to the person next to them, and the message after a dozen retellings can sometimes be hilarious), except it’s happening with every book produced in the world. Libraries face shrinkage: if you can’t replace a book easily, you can imagine how a place could lose books more quickly than it could produce them.
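The compounding of copying errors can be sketched with a toy model (my own illustration, not from the course): if a copyist miscopies each character with probability e, and each generation copies the previous copy, the expected share of characters still correct after g generations is (1 − e) to the power g:

```python
# Toy model of scribal corruption: each generation of copying keeps a
# character correct with probability (1 - e), so after g generations the
# expected share of the text still intact is (1 - e) ** g.
def share_still_correct(e: float, g: int) -> float:
    """Expected fraction of characters uncorrupted after g copy-generations."""
    return (1 - e) ** g

# Even a modest 1% per-character error rate compounds across generations:
for g in (1, 10, 50, 100):
    print(g, round(share_still_correct(0.01, g), 3))
```

Under these illustrative numbers, after a hundred generations of copying barely a third of the text survives uncorrupted – the “telephone” effect, applied to every book in the world.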
Figure 2: The corruption of knowledge as texts are copied and re-copied. aluxum/iStock/Getty Images
Second, books were very rare. They were expensive and slow to produce, relying on a lot of human labour. This scarcity of books has five different effects.
The default condition of a book is to disappear. Most books exist once or twice, and then disappear.
There are no such things as editions (think of a microeconomics textbook in its seventeenth or eighteenth printing, for example). Instead, every book is its own edition.
You can’t really trust tables, diagrams, numbers, etc. because there is always a chance an error crept in.
If you want to amass knowledge, you have to physically travel: library A, library B, library C, spread across multiple towns, cities, or even countries.
Text is difficult to copy, but images, charts, diagrams, etc. are copied by hand as well, so there is even less standardization.
In short: books are rare, knowledge is spread out and hard to get.
2b. The Gutenberg Press
The Idea of Moveable Type
So how does this state of affairs begin to change?
A printing press requires three main things to work: moveable type, ink that will stick to said moveable type, and a good supply of paper. The first is the most important, so let’s talk about it at length.
Figure 1: Moveable type lettering. southsidecanuck/iStock/Getty Images
Moveable type is probably the most important of these three things. The idea behind movable type is that instead of writing the letter “A”, you instead use a block that has the letter A. Imagine you have five blocks: “G”, “O”, “O”, “S”, and “E”. Instead of writing out GOOSE over and over again, you could just stamp the blocks and before you know it, you have a flock of goslings.
This approach to printing pre-dates Gutenberg and the events we talk about in this module. Moveable type dates back to around 1040 CE in China, where clay blocks were used initially and woodblocks later to arrange Chinese text. The Chinese language presents some challenges for this sort of work, however, because there are so many different characters that you need many, many different blocks for a printing press to work. Clay and wood are also not ideal, as they wear down quickly: to get the full benefit of moveable type, you want the characters to last!
Shortly afterwards, in Korea, we see the development of metal moveable type. This might have been especially promising, since the Korean alphabet is much smaller than the set of Chinese characters, yet the educated class continued to prefer writing in Chinese. The relative simplicity of the Roman alphabet (used today in English and French, for example) makes moveable type especially promising in Europe.
But this idea is important. Again, the idea is that you begin to lay out these characters, arrange them into a document, and then apply them to pages.
Moveable type would be put into action in Europe by Johannes Gutenberg, who also benefits from paper and ink inventions.
Figure 2: Moveable type in action. ferrantraite/E+/Getty Images
Gutenberg and His Press
Given how important Johannes Gutenberg (1398-1468) is to both printing and the development of the modern world, it is surprising that his personal biography is a bit opaque. Born in Mainz, Germany, to a prosperous family, Gutenberg began his career as an apprentice goldsmith and ended up in Strasbourg by the 1430s. There he decided to try to make a printing press, which he would ultimately succeed in doing, back in Mainz, with his famous printing of the Gutenberg Bible around 1455.
So if moveable type is not a new invention, what makes Gutenberg’s specific press so important? Three factors, which we will explore in turn:
pressure to transfer ink,
a new kind of ink that stuck onto metal pieces, and
access to paper.
The first major advantage of Gutenberg’s press was pressure. Gutenberg took metal moveable type and came up with the idea of literally pressing the type down onto the paper.
Figure 3: Johannes Gutenberg. (Unknown, c.1468)
You may have done a stone rubbing before, such as when you lay tracing paper over a gravestone and then run a pencil sideways over it many, many times. The downside of that approach is that it’s both time consuming and uneven: parts of it will have more pressure applied, parts of it less pressure applied.
The printing press applies pressure evenly across the entire sheet. You can see the press here, similar to how a wine press works:
Figure 4: Printing press, showing printer holding the press crank. nicoolay/iStock/Getty Images
The person highlighted above would move that stick and apply pressure down on the sheet. Don’t worry if it is a bit unclear, as we will be watching a video shortly.
Secondly, Gutenberg needs a new kind of ink. Most inks at this time are water-based, which has the downside of running off the moveable type. Imagine you take your metal letter “A”, dip it in water-based ink, and when you pull it out, the ink all comes running off the metal type. Little of it remains on the letter by the time you actually apply it to the page! Gutenberg makes a new kind of oil-based ink that “sticks” to the letters.
Finally, he needs paper. Here is where luck plays a major role. Around the fifteenth century, when Gutenberg is working on his press, there is an expansion of papermaking in Europe. In the introductory module, we talked a little bit about how difficult parchment was to make – papermaking, a technology which had come to Europe from China, is a lot easier than killing animals, skinning them, tanning them, stretching them, all that fun stuff. Paper, both strong and cheap, is now available.
Let’s watch a short clip from a great documentary where Stephen Fry actually builds a modern Gutenberg Press!
(The Gutenberg Press, 2008)
Pause and Reflect: The Printing Press
Reflect on the video that you just watched. What did you find most surprising about the printing press? Do we learn anything when we actually build something rather than just read about it in a course reading?
Note: This is for individual reflection only; you are not required to submit your answer.
The Gutenberg press leads to a revolutionary shift in the speed by which we produce knowledge. One estimate is that three printers working for three months with the Gutenberg press could produce 300 books. A scribe working for a lifetime might be able to write 300 books. Imagine the productivity gains!
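We can put rough numbers on that comparison. A back-of-the-envelope sketch (the 40-year scribal career is my own assumption, added just to make the two rates comparable):

```python
# Course estimate: three printers working three months produce 300 books.
press_books_per_printer_per_year = 300 / 3 / 0.25   # = 400

# Assumption (mine): a scribe's 300 lifetime books over a 40-year career.
scribe_books_per_year = 300 / 40                    # = 7.5

speedup = press_books_per_printer_per_year / scribe_books_per_year
print(speedup)  # roughly a 53-fold gain per worker
```

Whatever the exact assumptions, the gap is on the order of fifty-fold per worker – and presses, unlike scribes, could be multiplied.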
Figure 5: Efficiencies of the printing press on book production. Nadiinko/iStock/Getty Images; PeterPal/iStock/Getty Images
The Spread of the Printing Press and the Impact on Society
As we saw in the video, the printing press was a very successful product. It quickly spread from that initial press in Mainz: by 1465 there is a press in Italy, one in Paris by 1470, London by 1476. By 1480, there are at least 87 presses spread throughout Europe. Beyond the number of presses, though, the explosion in the number of books is significant.
One estimate is that there might have been 30,000 books in all of Europe before Gutenberg’s press – fifty years later, there were 10-12 million books. In the next module, we will look a bit more at what that means for libraries and the relative value of a book. For now, let’s consider some of the impacts that suddenly being able to produce knowledge at scale has on society.
With Gutenberg, we now have the ability to mass-produce books – or, in other words, to mass-produce and distribute knowledge. This means a few things. First, texts become stable (a printer might make a typo, but it will be the same typo in every copy).
Figure 6: Book production pre- and post-1455. mayrum/iStock/Getty Images
Second, and more importantly, books become far cheaper. As the supply of books increases, their price begins to fall. If you have taken a basic economics course, you have seen a “supply-and-demand curve”, something like this:
Figure 7: Supply and Demand Curve. © University of Waterloo
When books are copied out by hand, they are fabulously expensive. Imagine how you would treat a book if you knew it had been copied out word-for-word by hand! During this time, books were just for aristocrats, monasteries, and universities. After Gutenberg, they became more affordable and plentiful. This doesn’t mean poor or working-class people were buying books, but middle-class households could now own them: rich merchants, professors, lawyers, government officials.
Figure 8: Access to books pre- and post-1455. Nadiinko/iStock/Getty Images; mayrum/iStock/Getty Images
Now this begins to have dramatic effects on society. In the next two sections, we will explore some of these implications in depth.
If you haven’t read the chapter from John Naughton, From Gutenberg to Zuckerberg yet, please do so. In this excerpt, he talks about some of the major legacies of Gutenberg’s Press.
We will return to this book in the last module, but please reflect on his argument. What does Naughton think are the major legacies at play? Do you agree?
What your prof thinks: Naughton is using historical arguments to help shed light on the contemporary “information revolution” – in this case, exploring how we can use Gutenberg as a way to process today’s events. Beginning on page six, he begins giving a series of legacies that the printing press left behind: mass production, advertising, intellectual property, accessibility, scholarship, science, and even childhood. I think Naughton’s general approach is sound when it comes to summarizing the scholarship on Gutenberg, but he’s on shakier ground when it comes to thinking about the Internet. We’ll encounter the themes of utopianism and dystopianism in a later module (Module 7 on the counterculture), but suffice it to say that nothing is ever so straightforward. Clay Shirky probably should get the last word, as he almost does in the chapter: “It is too early to tell.”
2c. The Printing Press and Its Impact on Scholarship
One of the areas on which the printing press had a dramatic effect was scholarship and research. Now that books are more accessible, they can begin to spread around.
In the last module, I alluded to the difficulties of finding books if they are so rare and only one or two copies of each text exist. Imagine what it would be like to be a historian or scholar in this kind of environment. First of all, you cannot rely on graphs or maps, because they’ve all been hand-copied so you never know if your monk had a shaky hand or demonstrated a lack of attention to detail.
Secondly, books are so expensive that you don’t have your own reference collection, and even if you worked at a university, school, or monastery, it would not have a reference collection of any size either. Instead, you would have to travel all over regions or countries to find books. Imagine if you were writing an essay and you needed to read a book that was at the University of Waterloo, then one at the University of Toronto, and then suddenly one at the Boston Public Library or even in New Orleans.
While partaking in Mardi Gras might be fun, this would get annoying and slow down scholarship: a scholar couldn’t produce much, couldn’t consult important books at their own desk, and would be on the road a lot.
Indeed, if you have visited any of your professors at the University of Waterloo and seen our offices, you can see how important books are today! Even in the age of digital sources, humanists have rows and rows of books that we consult on an hourly basis when we write.
Now that libraries and research collections begin to grow, suddenly old texts are now all together in the same room. An era of cross-referencing begins, as insights from different books and authors are combined.
Crucially, this is all standardized. As we saw above, this is a double-edged sword: one printer’s mistake can be repeated hundreds of times (if not more), but a great insight can now become trusted as it is reproduced verbatim across hundreds or thousands of copies. Maps, charts, and diagrams all become more trustworthy, and so too does data. This means that scholars can begin to rely on tables made by their predecessors, allowing people to build on trusted foundations of scientific research.
Figure 1: Professor Milligan’s office. © University of Waterloo
Indeed, this all arguably culminates in the explosion of science that is the Scientific Revolution of the 1660s. Scientific journals begin to be established, allowing the sharing of research results. In astronomy and navigation, for example, it had been hard to build on other scholarship due to the ongoing corruption of knowledge. As Eisenstein notes:
Few Western astronomers in any one generation were equipped to read the entire work or to instruct others in its use. The lifetimes of gifted astronomers were consumed … making copies, recensions and epitomes of an initially faulty and increasingly corrupted twelfth-century translation from an Arabic text.
(Eisenstein, 1979)
In short, the printing press allowed scholars to be in conversation with other scholars. And isn’t that what science really is, being in conversation with other scientists? We will return to some of these themes in the next module when we explore the rise of the library.
Yet if the printing press had a positive impact in terms of science, its record with respect to other social transformations would be a bit more mixed.
Figure 2: The reliability of information allows scholars to build on each other’s work. mayrum/iStock/Getty Images; pixomedesign/iStock/Getty Images
2d. The Printing Press and the Protestant Reformation
Second, the printing press brought dramatic changes to religion in Western Europe. Arguably, it enabled the Protestant Reformation, that massive destabilizing of Catholic power. This is in part because ideas could suddenly be transmitted without relying on priests, reaching middle-class people directly. Indeed, the Protestant Reformation is arguably the first movement to fully rely on print as a “mass medium.”
Figure 1: Julius Hübner’s painting of Luther posting his 95 Theses. (Hübner, 1878)
Please look at the painting above, Julius Hübner’s depiction of Martin Luther posting his Ninety-Five Theses on the church door at Wittenberg in 1517. Spoiler alert: this never happened.
Martin Luther, a professor at the University of Wittenberg and a preacher, did write the Ninety-Five Theses against the Catholic Church and in particular their system of indulgences.
Why didn’t Luther like indulgences? In very brief terms, Catholic theology at this time revolves around sin, confession, forgiveness, and punishment. A person would sin, they would confess, and then they would perform a work of mercy (such as a prayer like the Hail Mary). If an individual sinned but didn’t do the works of mercy, they would have to spend time in purgatory. There was, at this time, a “loophole” to get out of purgatory, however: you could lessen your time in purgatory by purchasing an indulgence! The popular saying of the day was “As soon as the coin in the coffer rings, the soul from purgatory springs.” Indulgences were becoming big business by the early sixteenth century. Indulgence letters were being mass-produced, and printers were actually being put to work to churn them out (indeed, one of the first things Gutenberg’s workshop printed was an indulgence letter).
The Catholic Church, led by Pope Leo X, is under financial pressure as they are essentially fundraising to build St. Peter’s Basilica in Vatican City.
This is a beautiful, large… and expensive church to build. To fund it, Pope Leo X grants a special indulgence which would cover almost any sin, including adultery.
Martin Luther was unhappy with this, especially the special indulgence. He was preaching about and studying this issue, trying to convince people that they should truly repent their sins rather than purchasing an indulgence. Luther thus decides to write his 95 Theses about indulgences, discussing in them issues such as guilt, sin, purgatory, false certainty (how do you know an indulgence works?), and beyond.
This was standard practice at the time: preachers and academics would carry on conversations by publicly sharing their works in places such as church doors! So the folk vision we saw in Hübner’s painting above – of Luther proudly marching up to the door and nailing his theses to it while a crowd stands agape – really does not hold up. In short: the actual posting of the Ninety-Five Theses was not a big deal, just part of everyday academic conversation at the time.
But something changed – and arguably, it wasn’t the ideas that Luther had, but the way in which they could now spread around Europe. Enter the printing press. In other words, it was not so much the ideas – such ideas had been distributed before – but the medium by which they could begin to be spread.
Figure 2: St. Peter’s Basilica. adisa/iStock/Getty Images
It is a mystery to me how my theses, more so than my other writings, indeed, those of other professors were spread to so many places. They were meant exclusively for our academic circle here.
(Luther, 1517)
Indeed, scholars had long wondered – before the relatively recent literature on the printing press – how this happened. How did this academic proposal, written in Latin and published in the small German town of Wittenberg, kindle such enthusiastic support and have such far-reaching impact? How did it spread so quickly? As we see above, even Martin Luther was curious about what had happened.
In short: the audience is now bigger than Luther could have anticipated. To Luther’s pre-printing press mind, when you write an academic argument in Latin, the only people who are going to read it and engage with it are other educated elites. But now, with the printing press, more people can engage with these ideas. Indeed, these ideas spread so quickly that probably within two weeks people all over today’s Germany knew about them, and within months people all over Europe would have heard of the ideas of this previously obscure preacher from Wittenberg.
Figure 3: Spread of academic ideas pre- and post-1455. mayrum/iStock/Getty Images
First, there is a new class of priests now. Before printing, a priest could be “ignorant and inarticulate”. But now priests have to read because they can have their own bibles. They are increasingly steeped in new learning, can read more languages, and can now engage in scholarly debates. The printing press has meant that they need new skills.
Second, there is also a new class of people in towns all over Europe who can read and engage with ideas. There are now printers’ workshops in many of these communities, and printers’ workshops are staffed by – you guessed it – printers. These individuals have knowledge of how to read and write, often in Latin, but unlike those who would have read Latin before, they are not academics or priests. They are lay people with education. So, as they print things, they read them, they talk about them. Gossip can begin to spread about the things they print.
Third, printers’ workshops are fun places. Because these are lay people who are reading and learning about what’s going on around Europe, people tend to drop by the workshops – townsfolk, academics, priests – just to hear the news. They help ideas spread from town to town.
Figure 4: Luther’s Bible. (Luther, 1534)
Finally, the printing press itself may have facilitated the mass printing of indulgences.
Luther himself later remarked that he might not have done this had he known what would happen (which reminds me of when people post provocative things on Twitter and wish they hadn’t spread!). But in any case, Luther was charged with heresy, excommunicated, and would go on to set off the Protestant Reformation. Out of this controversy over indulgences, the Reformation changed the shape of religion in Europe (and eventually much of the world). In 1534, Luther himself translated the Bible into German, putting the words of the Bible directly into people’s hands, and increasingly radical Protestants came to reject the authority of the Catholic Church.
It’s not all good by any means – somewhere between 25 and 40% of the German population will die in the religious wars that follow – but the spread of ideas has now been dramatically changed.
Given the impact and role of the printing press in this process, you could argue that the Protestant Reformation began in 1455 with the Gutenberg Bible, rather than in 1517 with the 95 Theses. You could argue that bibles were more accessible; priests had more education; indulgences were being mass produced, creating all of this pressure; and that there was now a technology that allowed ideas to spread. None of this is possible without a mass medium like the printing press. Other scholars may disagree with this interpretation, but the power of the press is undeniable.
2e. Conclusions
In some ways, Gutenberg and his press kick-started a communications revolution. These concepts are critical to the course, so we will reflect on them at greater length than in our usual conclusions.
With Gutenberg, the media ecosystem begins to expand. Before printing, if you wanted to communicate, you would mostly do so by talking – or by writing out notes in your own hand. But suddenly, there is the publishing explosion: books, flyers, leaflets, pamphlets, and ultimately, magazines and newspapers. In the next few modules we will explore some of the subsequent expansions of human reach: the telegraph and telephone in the nineteenth century, for example, or eventually the Internet. One thing that united these technologies, however, was the cost of production. Printing presses are expensive, as are telegraphs, telephones, and early mainframe computers.
What about today? Should we, as the reading suggests, consider the Internet or Web to be revolutionary in the same way that the Printing Press was? Is the Web a new medium transforming the nature of human thought?
Module 2 Group Discussion Activity
For this week’s discussion, reflect on the following questions, and post your responses in the Printing Press Discussion Topic by the date indicated in the Course Schedule.
Is it appropriate to draw parallels with the invention of the Internet?
Does history help us in this respect, or does it obscure?
It is a complicated question. Certainly, we all write even more today than ever before.
At an accelerating pace, people are having conversations by writing in a growing number of social media, including emails, blogs, chats and texting on mobile phones.
(Carenini et al., 2011)
As the computer scientists Carenini, Murray, and Ng have noted above, we live in a world dominated by “text” – from the texts you send to your friends to Tweets, Instagram captions, e-mails to your instructors, and beyond.
Indeed, some of my own research – as the professor designing this course – explores this medium shift!
As part of my research, I work on GeoCities.com, which was a service that existed between 1994 and 2009. It allowed anyone to create their own website. People could create a website on any topic they wanted to, such as their love of the Toronto Maple Leafs, or their family tree, or how much they loved Winnie the Pooh, or even early forms of blogging or explaining their life to other people online. What really makes me curious about GeoCities is the sheer scale of it, especially compared with the amount of information historians usually find in libraries and archives. For example, 10,000 users created pages by October 1995, 100,000 by August 1996, 1,000,000 by October 1997, and by 2009 some 7,000,000 users had created about 186,000,000 “pages” of content.
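The GeoCities figures above imply a striking growth curve. As a rough, back-of-the-envelope illustration (my own arithmetic, not a calculation from the course materials), we can compute the compound annual growth factor implied by going from 10,000 users in October 1995 to 1,000,000 in October 1997:

```python
# Back-of-the-envelope calculation using the user counts quoted above:
# GeoCities grew from 10,000 users (Oct 1995) to 1,000,000 users (Oct 1997).
users_1995 = 10_000
users_1997 = 1_000_000
years = 2

# Compound annual growth factor over those two years.
annual_growth = (users_1997 / users_1995) ** (1 / years)
print(f"~{annual_growth:.0f}x growth per year")  # → ~10x growth per year
```

In other words, the service was growing roughly tenfold every year during this period, a pace with no obvious parallel in the print era.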
This makes me wonder: Was it like each person operating their own free printing press? As “guerrilla” web archivist Jason Scott has argued, that’s not a bad way to think about it:
At a time when full-color printing for the average person was a dollar-per-printed-page proposition and a pager was the dominant (and expensive) way to be reached anywhere, mid-1990s web pages offered both a worldwide audience and a near-unlimited palette of possibility. It is not unreasonable to say that a person putting up a web page might have a farther reach and greater potential audience than anyone in the history of their genetic line .
(Scott, 2009)
The amount of data generated by and about everyday people is astounding. Google has servers so big we share photos of them as “server porn”, and we measure their data holdings in exabytes (an exabyte is 1,000 petabytes and a petabyte is 1,000 terabytes). It is hard to describe the sheer amount of data without resorting to scientific notation!
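To make those units concrete, here is a minimal sketch (my own helper, not part of any course software) that converts a raw byte count into the decimal units the paragraph defines, where each step is a factor of 1,000:

```python
def human_size(num_bytes):
    """Convert a byte count into decimal (SI) units, as defined above:
    1 petabyte = 1,000 terabytes, 1 exabyte = 1,000 petabytes."""
    units = ["B", "KB", "MB", "GB", "TB", "PB", "EB"]
    size = float(num_bytes)
    for unit in units:
        # Stop dividing once the value drops below 1,000,
        # or once we run out of unit names.
        if size < 1000 or unit == units[-1]:
            return f"{size:g} {unit}"
        size /= 1000

print(human_size(2 * 10**18))  # → 2 EB
print(human_size(1500))        # → 1.5 KB
```

Two exabytes, written out, is 2,000,000,000,000,000,000 bytes – which is why shorthand units (or scientific notation) become unavoidable at this scale.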
The printing press increased the density of our social interactions. Is this process changing exponentially as we collectively exchange billions of e-mails, tweets, Facebook posts, text messages, and WhatsApp messages every day?
Stay tuned as we continue to explore this medium shift, and begin to look at how infrastructure begins to rise in order to store all of this information: the library.
Figure 1: Server room. simonkr/iStock/Getty Images
Module 3: Libraries and the Organization of Knowledge
3a. Introduction (The Idea of the Library)
Remember the cuneiform tablets from the introductory module? There, we discussed that around 3000 BCE, people were beginning to cluster in Mesopotamia. Sumerians began creating cuneiform tablets to warrant transactions, and by 2100 BCE in Mesopotamia, we have the Epic of Gilgamesh . We are seeing writing being used to spread literature, certify financial transactions, and beyond.
Figure 1: Emergence of recorded human activity timeline (Mesopotamia). © University of Waterloo
Storage: Where do you put these objects?
Security: How do you ensure that those who are allowed to look at them do so, and those who are not, cannot?
Preservation: How can you make sure they are safe and in order over the long term?
Cataloguing: How can you retrieve the information that you need?
The reading for this week, an excerpt from Abby Smith Rumsey’s When We Are No More , does a wonderful job of exploring this process that begins to emerge around 2400 BCE.
Figure 2: Emergence of systematic processes to organize knowledge timeline (Mesopotamia). © University of Waterloo
Figure 3: The development of systemic processes to organize knowledge. © University of Waterloo
In short, around this period, we are seeing the evolution of systemic processes to organize knowledge. As several hundred cuneiform tablets grow into several thousand, we begin to see the rise of specifically designed buildings, staffed with specifically trained experts, and the development of a core information infrastructure within society.
Let’s pause for a second. You could argue that as you get more and more information, the need to provide storage, security, preservation, and cataloguing is something we will see again and again throughout this module. These are arguably steps in the evolution of each recording medium, from clay tablets, through scrolls and books, to the silicon chips, bits, and bytes that power our world today.
Figure 4: Storage, security, preservation, and cataloguing are issues that pertain to all recording media as they evolve over time. Nadiinko/iStock/Getty Images
3b. The Shift to Papyrus and Parchment: Preserving Information in the Library of Alexandria
By the first century BCE, we begin to see some critical transformations. There are growing collections, and importantly, more and more people who could actually read these collections. Accordingly, libraries begin to proliferate around the Ancient World. This is for two reasons. First, there are social changes that lead to a larger proportion of the population being literate. I won’t go into details in this module, but in short, part of this is because there is an explosion of slavery, which gives more free time to the free population. It is sobering that this early explosion of literacy comes at the expense of human suffering.
Figure 1: Libraries in the ancient world. Image description . (Patterson, Map of the World)
Secondly, there are new mediums to preserve information. In the introductory module we discussed the shift from tablets towards papyrus and later parchment scrolls, around 2000 BCE, and continuing forward through to the first century CE.
Figure 2: Timeline showing the evolution of writing media. © University of Waterloo
Moving from tablets to papyrus and later parchment brings some serious advantages. These thin pieces of animal hide (parchment) or plant fibres (papyrus) are flexible, strong, light, stable in dry climates, and can be rolled up into scrolls so you don’t have to lug around a giant, heavy clay tablet. Moreover, when you want to record information on parchment or papyrus, you don’t have to chisel or scratch it as you do with a clay tablet; you just get ink and write.
There is a catch, of course. What is the one thing that a tablet is good at?
Clay tablets are very good at not burning down. When you put a clay tablet in a fire, what happens? Think about how pottery is made: putting clay into a fire ironically makes it stronger through the “firing” process, making it more durable. If we see the history of communication as one of technological improvement from clay tablets towards more modern forms, we could make a counter-argument that clay tablets hit a peak in sheer durability, and we have been going downhill ever since: scrolls are easily destructible, just as books, CD-ROMs, silicon chips, etc. are today.
By the time the scroll is here, it’s here to stay.
The Library: Knowledge for Its Own Sake
We discussed Alexandria briefly in our introductory module, and we will return to it shortly. There are other great libraries that are appearing around the Ancient world by the first century CE: great libraries in Rhodes, Pergamum, Athens, Rome, and many smaller collections and private libraries elsewhere. The reading by Rumsey discusses this spread of libraries, inspired by the Greek idea of “concerted cultivation of knowledge for its own sake”. Libraries are becoming not just depots for records, but places where people who visit them can find meaning and pleasure alongside their instrumental value as information places.
It’s worth pausing for a second to reinforce why libraries matter so much. When you think of a library, you often think of a place like the Dana Porter Library here at the University of Waterloo, and you might think of (if you are on-campus) studying in the stacks. The rise of the World Wide Web has really forced libraries to explore their new place in the information ecosystem, as it has dramatically changed their role. This is because until essentially the turn of the twenty-first century, if you wanted access to almost any kind of structured information, you basically had to go where books, journals, maps, and manuscripts were held: you had to go to the library.
Let’s now turn to the Library of Alexandria. Created at some point in the fourth or third century BCE, the library was built on the strip of land where the Nile Delta meets the Mediterranean Sea. Alexandria is an important place as it was Egypt’s portal to the world: if you wanted to travel down the Nile, you would go through Alexandria. The collection largely grew because of this geographic location. As boats came into Alexandria, manuscripts would be taken from the boat and then copied. Given Alexandria’s critical location in trade in the ancient world, this meant that they quickly had a very, very large collection.
It soon became a place to support scholarship. For scholars to be able to do their work in such a large and growing library, there needed to be dedicated professionals to make all of this information useful. Enter the librarian. Librarians had to do two main things:
They had to organize the information so you could find things.
They had to physically steward the information so that the information would be secure and usable over time.
In short, they are satisfying those four issues we discussed in the introductory remarks to this module.
But all of this information leads to difficulty.
Figure 3: Ancient Library of Alexandria, Egypt. Image Description . (Encyclopædia Britannica)
Consider this image from the Library of Alexandria.
Scrolls are great. After reading them, you can roll them up to store, and then you begin to stack them. What is the problem with a stack of scrolls?
First, if you take the scroll at the bottom, the ones on top can tumble down. Secondly, when rolled up, scrolls look identical. Accordingly, librarians begin to attach tags to the end of the scrolls for quick identification. This doesn’t resolve the first problem, however, of stacking scrolls. The next breakthrough is to take the sheets, cut them into a uniform size, and bind them between covers: the codex, or what we would call a book. These are easier to search than scrolls, are more portable, and with a book spine you can now stand one on a shelf and affix an identification code or title to the side.
These methods are efficient both at keeping order as well as maximizing usable shelf space.
Thinking About Cataloguing
With these physical objects, let’s take a moment to think about how they can be managed. I often find when thinking about cataloguing information, it is easier to learn through doing than just reading about it.
When you go into a library, such as the Dana Porter, you see all sorts of numbers on the side of a book. Modern libraries have adopted a few standardized cataloguing systems to classify information – in the case of Waterloo, we use the Library of Congress System (as do most academic libraries), whereas public libraries tend to use the Dewey Decimal System.
Figure 4: Close-up of the spines of Networked and Electronic Commerce and Case Law . © Ian Milligan
Above, you can see two books: Networked, with a call number of HM 741.R35 2012, and Electronic Commerce and Case Law, with a call number of KE 452 C6 S33.
What do those numbers mean?
Cataloguing Books
In practice, what those numbers mean is that when the publisher was sending the book Networked to print, they worked with the Library of Congress to classify the book. Books will have the same call number around the world! I would like you to open up the following resource: Library of Congress Classification System
Now let’s go see how the book Electronic Commerce and Case Law got its classification. We first go to
K – Law
We see that there is:
KE – Law, Canada
And
KE1-9450 – Law of Canada, Federal Law
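We can sketch this structure in code. The following is a minimal, hypothetical illustration (not how real library systems parse call numbers, which have many more variations) showing how a Library of Congress call number breaks into its hierarchical parts:

```python
import re

# Hypothetical sketch: split an LC call number into its main components.
# The class letters (e.g. KE) name the subject area, the class number
# (e.g. 452) a topic within it, and the remainder holds cutter numbers
# identifying the specific work, plus an optional year.
CALL_NUMBER = re.compile(
    r"^(?P<cls>[A-Z]{1,3})\s*"   # class letters, e.g. "KE" = Law, Canada
    r"(?P<num>\d+)\s*"           # class number, e.g. 452
    r"(?P<rest>.*)$"             # cutter numbers, year, etc.
)

def parse_call_number(raw):
    """Split a call number like 'KE 452 C6 S33' into its components."""
    m = CALL_NUMBER.match(raw.strip())
    if not m:
        raise ValueError(f"Unrecognized call number: {raw!r}")
    return {
        "class": m.group("cls"),
        "number": float(m.group("num")),
        "rest": m.group("rest").strip(),
    }

print(parse_call_number("KE 452 C6 S33"))
# → {'class': 'KE', 'number': 452.0, 'rest': 'C6 S33'}
```

The key design idea mirrors the classification outline itself: each component narrows the subject, so books on similar topics end up shelved next to each other.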
Exercise: Cataloguing Networked
Let’s look up the other book ( Networked ) . Please use the above Library of Congress resource to find what HM 741 is. Remember to follow the links at the Library of Congress Classification Outline .
Jot down your answer here.
(Example: if KE1-9450 was “Law of Canada, Federal Law” what is HM 741?)
Module 3b Individual Activity: Cataloguing books
I would now like you to imagine how to catalogue the following book: Before the Crash.
Try to classify it using the Library of Congress Classification Guide . Assign it a fictional call number based on your best guess of how to interpret it. Answer the following questions:
What fictional LC call number would you assign this book?
What are the categories for the codes that you assigned?
Did you consider any other options? Was it difficult to arrive at this call number? Are there any unique challenges associated with trying to catalogue “video games” in the Library of Congress?
I look forward to reading your thoughts in an informal paragraph or so, and I hope that this provides insight into the power and possibilities of library cataloguing!
For instructions about how to submit this activity, refer to the Individual Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Module Conclusion: What Happened to the Library of Alexandria?
At this point, the Library of Alexandria is looking very promising: they have an impressive physical collection of books, they have developed a classification system so that people can retrieve this information, and they have a collection stewarded by a professional class of librarians.
Why can’t we today get on a flight to Alexandria and see the original Great Library? Why do we talk of the “LOSS” of the Library of Alexandria? We often think of a cataclysm like in this 1876 painting:
Figure 5: Artist’s conception of the destruction of the Library of Alexandria. Image Description . (Göll, 1876)
Like many events during this period of history, the record is not clear. One popular narrative is that Julius Caesar burned down the Great Library. Caesar may indeed have arrived in Egypt in 48 BCE and burned down some of the library, but within a few years, the library was probably restocked with over 200,000 volumes from the Great Library of Pergamum.
Indeed, scholars generally agree now that it was not war that led to the fall of Alexandria.
For what survives war can ultimately die from inattention and neglect. The Great Library of Alexandria died because people stopped valuing its contents. Why this happened is a point of debate. It may have been that as Christianity and Islam developed, they did not value the pagan works of Romans and Greeks as much as they did before. Economic pressures and raging plagues certainly provided short-term pressures. As the reading from this week notes, “The cultural amnesia induced by their complete loss was not Caesar’s doing, but the work of many generations, Christian and Muslim, who felt no responsibility to care for pagan learning”.
Figure 6: What really happened to the Library of Alexandria? Fourleafclover/iStock/Getty Images; greyj/iStock/Getty Images
When they stopped caring, the Library slowly fell apart. Knowledge, in physical form as much as in digital form, requires active preservation. Knowledge cannot just be collected; it requires care.
3c. From Scarcity to Abundance: The Library and the Printing Press
If you recall from the last module, we discussed the explosion of printing throughout Europe, which led to an important shift: there were now more books around. In other words, the written word begins to survive in greater abundance from the printing press onwards, becoming the main way that we understand and transmit knowledge.
Libraries Before the Printing Press
Now we know that the printing press did not invent the book. Medieval Europe had been full of books – all carefully crafted by hand. Some were beautiful, some were utilitarian, and right before Gutenberg and his press, wealthy individuals had begun to compile their own collections of considerable size. Some of these were scholarly collections, but most were held by European aristocrats and royalty: libraries in medieval Europe were a symbol of both cultural inspiration and wealth. A European prince, for example, might collect lots of books not just because they cared about the knowledge inside, but because books were rare, and having many rare items was a convenient way to show off your wealth and importance.
Figure 1: Matthias Corvinus. fotokon/iStock/Getty Images
Early scholars were part of this system too. Not because they were rich, but because they could give advice on which books to buy, how you could obtain a certain rare and valuable text, and how you could find and oversee scribes to do some copying yourself to create new books for collections. Some of the largest collections in Europe are from names that might be familiar to you today: the Papal Library hosted thousands of volumes, for example, and the Medicis of modern-day Italy had a large, impressive collection.
One good case study is Matthias Corvinus, the King of Hungary, who comes to power in 1458. He decides that he wants a monumental library and brings four thousand manuscripts to Budapest: they are commissioned, produced by scribes, and carted to the city.
The year is worth pausing on: 1458. Why would you start collecting scribe-written books when Gutenberg and his technique were already beginning to produce books en masse?
Figure 2: Printing press pivotal in human history. Nadiinko/iStock/Getty Images
Figure 3: Book production pre- and post-1455. Nadiinko/iStock/Getty Images
Pause and Reflect: Libraries
Let’s pause for a second – what do you think might happen with the state of books in Europe in the wake of the printing press?
Note: This is for individual reflection only; you are not required to submit your answer.
Mouldy Books: The Library After the Printing Press
Suddenly, having a book is not as rare as it once was. The book ceases to be an object of wonder and becomes an everyday aspect of life. For ruling elites, it is no longer desirable to collect a library for the sake of having a library, and they turn to other forms of conspicuous consumption: sculptures, tapestries, paintings, warships, etc.
Renaissance libraries suffer a rapid decline. Some we lose completely: Corvinus’s library, for example, is gone entirely. He had spent all of that time and money to produce 4,000 books and send them over mountains, and then people simply lost interest. Other libraries decline but linger: monarchs and rich people don’t care about them, so books are packed into crates and stored around the palace, literally getting in the way.
For example, when Hugo Blotius is appointed a century later, in 1575, as the first head librarian of the Viennese Hofbibliothek, he goes to the Holy Roman Emperor’s palace in Innsbruck and discovers that they have 7,379 volumes. Blotius described the scene:
How neglected and desolate everything looked! There was mouldiness and rot everywhere, the debris of moths and bookworms, and a thick covering of cobwebs. The windows had not been opened for months and not a ray of sunshine had penetrated through them to brighten the unfortunate books, which were slowly pining away: and when they were opened, what a cloud of noxious air steamed out.
(Blotius, 1575)
A century before, 7,379 volumes would have been amazing, an object of veneration. By 1575, the emperor literally did not know how many books he had; they were stashed away and, as the quotation above shows, housed in a state of neglect. The only silver lining was that books were now so comparatively valueless that people did not even think of stealing them!
Scholars and the Great Research Libraries
If elites no longer cared about libraries, at least there were universities to steward this knowledge, right?
University libraries also ran into travails. The great libraries of the thirteenth and fourteenth centuries, for example, would have had hundreds of books, and that would have been an impressive collection. By the sixteenth century, hundreds of manuscripts were becoming commonplace. By 1550, a junior scholar might have a hundred books in their possession, with a full professor owning several hundred more.
Personal collections turn out to have advantages over these institutional collections, which are now comparatively quite a bit smaller. A scholar’s own collection at this time might be fresher: it holds no old, dirty books irrelevant to current projects, and, more importantly, it sits on their own shelf.
The impact is dramatic. Oxford and Cambridge Universities both close their libraries, and Oxford even throws out the books and furniture from the library! This meant that libraries needed to have a new sense of scale and purpose to stay relevant.
The Revitalization of the Library
To understand the revitalization of the modern library, we can turn to the story of Thomas Bodley and the library that would eventually come to bear his name at Oxford.
Figure 4: Bodleian Library. Joanna_Harker/iStock/Getty Images
Figure 5: Thomas Bodley. (Bodley, 1894)
Oxford, as noted above, did have a library and it did have hundreds of books, but by the sixteenth century, it had been literally stripped and abandoned. Books were gone, furniture sold, and the building was closed.
Enter Bodley. Thomas Bodley, born in 1545, had had a varied career by the time he turned his attention, in 1598, to revitalizing the library. Bodley had been a scholar before turning to diplomacy, completing a European tour and learning many languages. He became a diplomat, later a politician, and carried out secret missions in France. Near the end of his political career, his ambitions to become Secretary of State were frustrated by the political wheeling and dealing of the time. With his political career finished, he decides to turn his attention to Oxford.
In 1598, Bodley begins to rehabilitate the Oxford library using the money from his previous careers. His first step was to restore the building itself, repairing the shelves and getting seats back in place, thinking that if he rebuilt the library, people would come and donate their books and things could begin to work again. When it quickly became clear that people would not simply donate their books, he began a large-scale endeavour to purchase books to help rebuild the library. The collection begins to grow, and the library is officially refounded in 1602. By 1620 (a few years after Bodley’s death in 1613), it held some 16,000 items from all over Europe.
Here we can see scale at play again. A small collection of even a few thousand books is not that valuable, but 16,000 items – many of them rare – mean that the library begins to attract visitors from all over Europe.
It also begins to lay the groundwork for the modern library. Everybody who comes to the library now needs to take an oath, reading it out loud before inscribing their name as a reader.
You promise and solemnly engage before God. . . that whenever you shall enter the public library of the University, you will frame your mind to study in modesty and silence, and will use the books and other furniture in such manner that they may last as long as possible. Also that you will neither yourself in your own person steal, change, make erasures, deform, tear, cut, write notes in, interline, wilfully spoil, obliterate, defile, or in any other way retrench, ill-use, wear away or deteriorate any book or books nor authorise any other person to do the like.
(Ward and Heywood, 1851)
Today, if you visit the Bodleian Library, you can buy a mug or t-shirt with a slightly modified version of the oath. The modern oath, still taken today, makes you promise not to “kindle a flame”. Everybody jokes that they had not thought of lighting a fire in the building until the oath made them think of it, but the core idea is there.
What the large-scale collection and this approach to the library do is arguably set the groundwork for the modern library.
Pause and Reflect
In a few words, reflect on what makes a library special: How would you define a “modern library”, such as the Dana Porter Library at the University of Waterloo?
Note: This is for individual reflection only; you are not required to submit your answer.
So how do we get from this medieval library to what you might have thought? When you think of a library, the cliché pop culture image that comes to mind is probably something like this image.
Figure 6: Perception of the modern library – a cliché. Brosa/iStock/Getty Images
Before Bodley, libraries may have held many books, but they would have been very loud: places for conversation, or for showing off your wealth in books. It is only with the library of the early 1600s, when libraries become attached to institutions like Oxford or other universities and associations, that they become places for study and contemplation. This begins a long process, continuing in some ways through to today, in which libraries grapple with the right approach to noise and activity in the library.
This kicks off a golden age. With the Bodleian Library’s refounding in 1602, many followed: city libraries throughout England, state libraries around the world, and the library of the British Museum by 1753. Importantly, these libraries discovered a sustainability model: they moved away from the vagaries of private funding towards a subscription model, where you would pay to access. For example, the Bodleian required that you buy a catalogue to use its collections. There were debates about whether the rise of the library was an unalloyed good for humanity (the French philosopher Voltaire, for example, worried that increasing access to books might put ideas in people’s heads), but increasingly people began to see this sort of knowledge as a way to liberate humanity.
3d. Enlightenment Principles
This brings us to the last part of this module: the Age of Enlightenment, and how this period influences our world in North America through principles of the library.
Some definitions can help make sense of this.
The library would be key to this. As Abby Smith Rumsey has noted, “knowledge became something to be acquired, organized, and shaped into an instrument of progress” (Smith, 2016, 64).
In North America, then, much of this would be enacted through Thomas Jefferson (1743-1826). Best known today as one of the founding fathers of the United States and principal author of the American Declaration of Independence, Jefferson was concerned in the late eighteenth century about how Americans could become citizens rather than subjects of a king. This would require free, enlightened individuals to exercise the responsibilities of citizenship. This course is not going into detail about Jefferson’s life and times, but for our purposes, let’s think of him as a “bibliomaniac”. Jefferson adored books: he was addicted to acquiring them, bought multiple copies, purchased other libraries (e.g., Ben Franklin’s library), and basically came up with the idea to “create a bank of knowledge full of the accumulated riches of human thought and memory that the young nation could draw on for generations at a time” (Smith, 2016, 64). This would be a key to citizenship.
Figure 1: Thomas Jefferson. (Peale, 1800)
At his country estate, Monticello, near today’s University of Virginia, Jefferson assembled the largest library in the United States: almost 6,500 books. Why?
Figure 2: Fun fact – If you have held an American nickel, you have seen Monticello. filo/iStock/Getty Images
The changing role of knowledge in the Enlightenment played a big role in this. If the United States was going to be an independent and free country, the argument that he and others made was about how best to equip citizens with the responsibilities of freedom. You could only have truly free citizens, the argument went, if you had an informed citizenry with knowledge of current events, agriculture, history, theory, and so on. Learning, in this view, was a moral endeavour.
Now, if we were on the set of a TV movie, this is about the moment you would hear a record scratch. This is lofty rhetoric, of course, coming from individuals who were also presiding over a widespread system of slavery. Jefferson argued that “all men were created equal” while owning slaves at Monticello. As we explore some of the loftier ideals of the American Revolution in this module, this needs to be kept in mind (and we will return to it again).
In this Enlightenment “ideal” (again, I cannot stress enough how divorced this rhetoric of freedom was from reality for the majority of Americans: enslaved people, freed Blacks, women, and the working class), the library would be supremely important. Democracy would demand equal access to knowledge, which meant that both the government and citizens had a responsibility to ensure the education of the people. Indeed, Jefferson felt that ideas needed to spread – that they could not belong to just one person – and that in this would lie their power. In an 1813 letter, Jefferson wrote of a vision:
That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, extensible over all space, without lessening their density in any point.
(Rumsey, 2016)
Figure 3: U.S. Capitol building and Library of Congress, after the British burnt it down. (Munger, 1814)
This is a vision of ideas that sees them as having the power to change the world (and, indeed, this quotation has been used by people to make arguments about the desirability of free intellectual property to even presaging the power of the Internet).
The pathway towards this, however, would languish until an event in the War of 1812 led to the revitalization of this ideal library in practice. In April 1813, the United States invaded Upper Canada (today’s Ontario) and captured the capital of the province, York (today, Toronto). These American invaders burned down the parliament building, including the imperial and parliamentary records. In a supreme tit-for-tat, the next year, British forces sailed to Washington DC, captured the city, and in August 1814 burned down the US Capitol. In the process, the British burned down the congressional library. Now, in the United States, the congressional library – or Library of Congress – is the de facto main national library of the country. They needed a new one, so they turned to…
Jefferson. The US government purchases the private library of Jefferson, who at the time had 6,487 books comprising 4,931 unique titles: books, maps, manuscripts, and more. This was, thanks to Jefferson’s Enlightenment principles and interest in classical literature, arguably the largest, most diverse, and most comprehensive collection in the western hemisphere. With this, the Library of Congress was born.
So what had they purchased? Crucially, it was not a library just for governance. Yes, there were legal and political books, but more importantly, the US government had now acquired the works of Voltaire, Locke, Hume, Homer, Virgil, Xenophon, Thucydides, Herodotus, Tacitus, Dante, Milton, Shakespeare, and more. Jefferson gestured towards this knowledge being useful to legislators, but really, the idea was that the library took what it needed to achieve the ideals described above: for Americans to educate themselves and become responsible, self-governing citizens (again, with the major proviso that this was a very white and very elite conception of citizenship). This would be the linchpin of self-rule.
This idea of the universal library is with us today, and this is why Thomas Jefferson keeps coming up again and again when we think about the Internet as a library. That we are today, perhaps, continuing down the pathway that he began.
Figure 4: The appropriately-named Jefferson Reading Room in the Library of Congress today. Dhuss/iStock/Getty Images
3e. Conclusion and Final Activity
The idea of the universal library is alive today. If you have the fortune to visit Ottawa in the near or distant future, and are walking down Wellington Street in the downtown area, you can see the important buildings of our federal government here in Canada: parliament, the Supreme Court of Canada, and then Library and Archives Canada. It is no accident that our national library is next to our parliament and court… without knowledge, how can we truly have a democracy?
Figure 1: At far right, you can see the Library and Archives Canada building. Christophe Ledent/iStock/Getty Images
Indeed, one of the most impressive buildings I have seen in Canada is the Preservation Centre. Located in the suburbs of Gatineau, Quebec, it is a long-term storage facility for our most important national treasures. The outside of the building is glass, housing a separate interior concrete shell; each of the vaults is kept at a particular temperature and humidity level. Teams of highly-educated technicians work on documents, paintings, old computers, websites, and more, with an eye to making sure that Canadians and others have access to our world today in the decades and centuries to come.
This idea of the universal library is with us today, and this is why the library is still relevant. In the coming weeks, we will return to these touchstones: the Library of Alexandria, of preserving and storing information at scale, and crucially how this can all relate to larger issues of democracy, citizenship, and accessibility.
Figure 2: Preservation Centre. On the right is the outer shell, on the left is the inner concrete shell. © Ian Milligan
Module 3 Group Discussion Activity
Take some time now to participate in the Module 3 discussion.
There are many different “kinds of libraries” today. I’d like you to reflect on four different types of library and analyze them using the questions outlined below. They are all a bit different: a traditional library like the Dana Porter Library at the University of Waterloo, the Internet Archive, Project Gutenberg, and JSTOR.
How do I participate in this activity?
- Your first step is to visit each of the following four libraries:
University of Waterloo’s Library
The Internet Archive
Project Gutenberg
JSTOR
Note about Dana Porter, the University of Waterloo’s Library: If you’ve been to Dana Porter or the DC Library in person, you can reflect on your engagement there, but if not, perhaps you can think of another university library you have visited (or, at least, read about or seen in some media).
Engage with each of these for a few minutes, until you feel that you’ve gotten a handle on them.
Next, spend a bit of time analyzing these libraries. What makes each a library? When you’re playing with them, I encourage you to try to look for things that interest you. Get a feel for the interfaces, and reflect on what you’re doing. If you’re stumped on what to search for, you could look up topics pertaining to some of the modules we have covered: Thomas Jefferson, for example, or the War of 1812 or the Library of Alexandria. What sorts of things do you find? What sorts of things do you not find that you might expect to? How do these sources dovetail with what we talked about in the course, specifically the ideal of a “universal library” of knowledge? Don’t spend more than ten minutes on each one to both analyze and write it up.
Then, in your small groups, reflect on the following questions and post your responses in the What is a Library? Discussion Topic by the deadline specified in the Course Schedule:
How do the four libraries compare to the ones we’ve studied in this module? What makes each one similar to the Library of Alexandria or Jefferson’s Universal Library?
How are the libraries different from the ones we’ve studied in this module? What makes each one different from the Library of Alexandria or Jefferson’s Universal Library?
How does this help us understand how libraries have evolved since Alexandria?
How durable are each of these libraries? Could they last?
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Module 4: Electricity, the Telegraph, and the Telephone
4a. Shocking People for Fun and Profit, or the Quest to Connect the World
Figure 1: Since the domestication of the horse, human communication has been more or less limited by the speed of said animal. uplifted/iStock/Getty Images
Between the domestication of the horse (somewhere between 4000 and 6000 years ago) and the eighteenth century, the limitation on human communication on land was more or less the speed of a rider on a horse (or, by water, the speed of a boat). If you wanted to send a message, you would get on a horse or give the message to somebody on a horse, and they would gallop off to deliver it.
This is very slow (imagine if you wanted to send a joke to your friend in Toronto and, instead of texting them, you had to walk there. You would probably send a lot fewer jokes).
Figure 2: Jean-Antoine Nollet. (Corones, c. 1760)
As a result, lots of stuff is slow: monarchs or leaders would send armies out; the armies would march over the horizon, and the leaders would have to wait for weeks or months to find out what happened. News spreads slowly. Even in the case of Martin Luther, whom we discussed in Module 2, it still took months for something to go “viral” within Germany. By the eighteenth century, we can produce information quickly – we just cannot distribute it quickly!
The reason Jean-Antoine Nollet (pictured above) was so significant was that he was trying to introduce a new way for people to communicate: by electricity. Before electricity, communication was limited to sound (e.g. a clock bell that “bongs” at noon so everybody in a town knows what time it is) or to vision (e.g. Big Ben’s clock face tells you the time from fairly far away in town; a lighthouse has a large beacon so that ships miles away can steer by its light). But sound and vision are limited. Electricity promised a new approach: Nollet was discovering that electricity might be able to span miles at a time; perhaps it could go around corners, and maybe it would not be subject to the whims of weather and darkness.
One limitation, though… all you can seem to do with electricity is turn on the current and turn it off. How could you send a message like that?
In this module, we will explore the idea of the “telegraph” and then the “telephone.” How have people found new and innovative ways to communicate over distances?
4b. Early Telegraphs
Let’s step back, now, and look at the problem that Nollet and others were trying to solve with the power of electricity: the problem of trying to communicate with other people over long distances.
The first “telegraph” comes from a scientist named Claude Chappe (1763-1805). Chappe begins experimenting with sending messages with his brother. They first decide to try a method that combines the powers of sight and sound to send complicated information over a long distance. They accordingly set up two clocks, each with 10 numbers on its face and two hands that go around the dial – one moving twice as fast as the other.
Claude and his brother each have a clock. They first synchronize the two clocks by banging a big dish (e.g. Claude bangs when his hands are at the top of the clock, so his brother knows that he should put his hands at the top of the clock too). They can then communicate by banging a dish whenever the hand goes over a number (e.g. if Claude wants to tell his brother something about the number four, he can bang his big dish, and his brother just needs to look at his own synchronized clock to receive the message). They come up with a dictionary and begin sending numbered codes. Yet they realize that this has a problem: if there is wind or loud noise, it is hard to synchronize and use sound. They drop the sound component and focus on vision.
Figure 1: Claude Chappe. (Rousseau, 1889)
Figure 2: Télégraphe Chappe. Image description . (Figuier, 1868)
Look at this closely: by moving the two smaller arms or the one big one, you can make a lot of different combinations. Indeed, Chappe creates a codebook with 92 pages of 92 combinations each. You could then communicate by sending two codes: first, you would send the page number; second, you would send one of the 92 entries on the page.
How could you send messages beyond the range of human sight? Well, you need to think big! They decide that this could become a large telegraph system by building lots of towers, each a few miles apart, each with an operator who could watch the neighbouring tower, copy its message, and then relay it to the next tower down the line. Well-trained operators could send complicated messages across countries in less than an hour.
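If you are curious how the two-signal scheme works mechanically, here is a toy sketch in Python. The tiny vocabulary is invented for illustration; only the 92-entry page size reflects the Chappe system.

```python
# Toy sketch of Chappe-style codebook signalling: each word is sent as
# two signals, a page number followed by an entry number on that page.
PAGE_SIZE = 92  # entries per page, as in the Chappe codebook

# A made-up fragment of a codebook: each word's position is its code number.
codebook = ["army", "advance", "retreat", "paris", "lille", "victory"]

def encode(word):
    """Return the (page, entry) pair of signals for a word (1-based)."""
    idx = codebook.index(word)
    return (idx // PAGE_SIZE + 1, idx % PAGE_SIZE + 1)

def decode(page, entry):
    """Recover the word from a (page, entry) pair of signals."""
    return codebook[(page - 1) * PAGE_SIZE + (entry - 1)]

signals = encode("retreat")   # -> (1, 3): page 1, entry 3
word = decode(*signals)       # -> "retreat"
```

With 92 pages of 92 entries, two signals are enough to address 92 × 92 = 8,464 words or phrases.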
It catches on quickly. In 1793, Chappe sends his design to the National Assembly of France (during the French Revolution) and shows off a successful demonstration that year; just one year later, a line connecting Paris to Lille opens. These lines spread quickly. In August 1794, the Paris-Lille line reports on military activities; by 1798, there are telegraph lines connecting Paris with Strasbourg and, via Lille, with the English Channel port of Dunkirk.
There’s a catch with this when it comes to thinking about the social impacts of these telegraphs: they are largely military and government networks! There are some exceptions – lottery numbers are soon transmitted by optical telegraph, to prevent the old trick of learning the winning numbers and racing to enter the lottery before news reached the far-flung towns. Largely, though, these are networks of encrypted codebooks, militaries, and government leaders. You can imagine being an everyday person in the countryside of France, watching these towers sending messages to each other… interesting, perhaps, but not the sort of thing you could imagine using to send a joke or message to a friend.
Figure 3: A Chappe telegraph tower. ricochet64/iStock/Getty Images
Figure 4: The British optical telegraph, 1795. Image description . (Birkbeck, 1828)
The optical telegraph idea spreads. By 1795, England begins to use optical telegraphs to connect the Admiralty in London with the ports and ships along the English Channel. Theirs is slightly different – a series of panels that can be rotated to send messages – but the idea is conceptually similar to Chappe’s.
These are really interesting, but we can imagine a few downsides to them:
Very expensive: This is a big factor. They require large physical infrastructure and shifts of people to staff them.
Do not work at night: While they try experiments with lanterns, this doesn’t really work.
Unreliable: If fog or mist suddenly comes in, your communication fails.
Despite this, they are an entrenched system. There are still people trying to transcend these problems using electricity, but they seem like cranks. The problem of long-distance communication was a solved one. Right?
4c. Back to Electricity: The Development of the Electric Telegraph
While the optical telegraphs are in operation, there are still lots of people trying to figure out electric communication. The problem is that they now seem like cranks! By the early nineteenth century, building on early work like Nollet’s, there are dozens of people running all kinds of experiments. Many of these experiments simply involve adding electricity to earlier schemes – e.g., setting up synchronized clocks, but sending an electric shock instead of banging a dish.
One reason that these people seem a bit eccentric is that there are two major problems with using electricity to communicate, as opposed to tried-and-true things like sound and vision:
The problem of measuring electricity: Until the 1820s, people did not quite know how to measure electricity… beyond zapping themselves! In 1820, this problem is solved when they discover how to measure the magnetic field that is present when current is flowing through a line.
The problem of distance: While early experiments like Nollet’s were promising, it turns out that after a few miles, an electric signal begins to weaken on a line. People are not sure they can circumvent this problem to send a charge uninterrupted for miles and miles.
For these reasons, the optical telegraph seems like the ‘serious’ way to communicate.
The First Electric Telegraphs
Luckily, somebody realizes that you can largely solve this distance problem by redesigning the battery: many small batteries connected in a row, rather than just one big battery, make the signal travel further. By the 1830s, this leads to the development of the Cooke and Wheatstone telegraph. The first version relied on five wires connecting two stations, with each wire moving a needle.
By sending the charge along various wires, needles would move and point at letters. This meant that you did not need a codebook: the message was human-readable. The system was mostly adopted, again, by institutions – railroads built telegraphs alongside their tracks to help with operations. While the design originally required five wires between each station, when a few wires broke, operators fortunately realized that just two wires and two needles provided enough fidelity to make sense of what was being sent.
This was an expensive technique, however, as it required multiple wires, and it was fairly slow because operators had to sit and watch needles point at letters one at a time.
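For the mathematically curious, the capacity of the five-needle design falls out of simple counting: a letter is indicated by deflecting two different needles, one left and one right, which gives 5 × 4 = 20 ordered pairs – hence a 20-letter alphabet (C, J, Q, U, X, and Z were dropped). A small sketch in Python; the needle-to-letter mapping here is illustrative, not the historical board layout.

```python
from itertools import permutations

# In the five-needle telegraph, a letter was shown by deflecting two
# distinct needles. Ordered pairs of 5 needles give 5 * 4 = 20 signals,
# which is why the instrument could only display 20 letters.
letters = "ABDEFGHIKLMNOPRSTVWY"  # 20 letters; C, J, Q, U, X, Z omitted
signals = list(permutations(range(5), 2))  # (left_needle, right_needle)
assert len(signals) == len(letters) == 20

# Illustrative two-way mapping between needle pairs and letters.
to_letter = dict(zip(signals, letters))
to_signal = dict(zip(letters, signals))
```

Counting signals like this also makes the economics clear: with only 20 distinct symbols and one symbol shown at a time, throughput was inherently low.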
Inventing the Language of the Telegraph
Enter Samuel Morse (1791-1872). Morse is a painter living in the United States. Unlike some European countries, the United States does not have any big optical telegraph systems, so the problem of communication is even more pressing. In 1832, Morse is travelling across the Atlantic Ocean and strikes up a conversation with a fellow passenger, who explains electricity to him and how it seems to be instantaneous. As Morse recalled in an 1837 letter:
I then remarked, this being so, if the presence of electricity can be made visible in any desired part of the circuit, I see no reason why intelligence might not be transmitted instantaneously by electricity.
(Morse, 1837)
Figure 1: Samuel Morse. Photos.com/iStock/Getty Images
Morse set to work devising a system of short and long pulses – dots and dashes – that could represent letters. This system would become known as Morse code:
Figure 2: Morse Code. Image description . Anastasia/iStock/Getty Images
Exercise: Deciphering Morse code
What does this word mean in Morse code? Try to translate it using the series of short and long pulses above: .-- .- - . .-. .-.. --- ---
If you are curious to hear this in action, the Morse Code Translator website can let you type in messages and “listen” to the Morse code in action.
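If you would rather check a transcription by machine, a short decoder is easy to write. A minimal sketch, covering letters only (no digits or punctuation):

```python
# A minimal Morse decoder, enough to check a hand-transcribed message.
# Symbols for each letter are separated by single spaces.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
    "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
    "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
    "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
    "-.--": "Y", "--..": "Z",
}

def decode(message):
    """Translate space-separated Morse symbols into letters."""
    return "".join(MORSE[symbol] for symbol in message.split())

print(decode(".... . .-.. .-.. ---"))  # -> HELLO
```

(Try it on the exercise above, but do the translation by hand first!)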
Samuel Morse is excited about this problem, and fortunately, the battery problem has largely been solved by the aforementioned redesign (using many small batteries instead of one big one). He is able to get funding, and after he rigs up a demonstration in the Capitol building, the U.S. Congress funds the development of a telegraph system based on Morse’s design. On 24 May 1844, the first telegraph line is inaugurated between the Supreme Court chamber in Washington DC and Baltimore, Maryland.
Why do we care so much about Morse? Unlike the earlier telegraph systems – from the optical telegraphs of France to the Cooke and Wheatstone system of England – Morse saw this as a revolutionary new form of communication rather than just a novelty or a military tool.
Changing the World through Telegraphy?
A few famous cases soon appear to demonstrate how the telegraph is going to begin to change things. Some of the earliest stories involve how criminals were caught.
One famous case is that of Fiddler Dick . For context, a productive crime in the pre-telegraph era was to pickpocket people at train stations. A pickpocket could wait on the train platform until right before a train was going to depart: they would steal something, hop on the train just as it left, and because news could not go faster than a train, the pickpocket could simply get off at the next station and be as free as a bird! In 1844, Fiddler Dick tried this at Paddington Station: he pickpocketed a woman, hopped on the train, but a telegraph sent news and a description to the next station. When Dick alighted at Slough Station twenty-five minutes later, officers met him on the platform.
Or consider this, from 1849 – a similar story, depicted in cartoon form, of a murderer literally entangled by the telegraph wires.
Figure 3: 1849 illustration from Punch Magazine, anthropomorphizing telegraph poles, which are chasing a murderer. Image description . (Punch’s Almanac, 1849)
The Spread of the Telegraph
This begins to excite popular consciousness about the power of the telegraph. In the United States, between 1846 and 1852, the telegraph system grows some 600 times over.
By the 1850s, telegrams are increasingly part of the world, but they are expensive and typically sent only for commerce or urgent messages. Within Europe and North America, telegraph networks begin to spread across each continent. You would go to a telegraph clerk, write out your message, and the operator would transmit it using Morse code. There were local lines between local offices (say, within a large city), with long-distance wires connecting cities and towns, and each office could only communicate with the offices it was directly connected to. These were, of course, national systems: in Europe, this became comical in some places, where a message might arrive at a station on a border, and the operator would write it down, walk to the other side of the border, and hand it to their counterpart to encode and send on.
Figure 4: British telegram form, 1891. whitemay/iStock/Getty Images
Figure 5: A nineteenth-century telegraph. nicoolay/iStock/Getty Images
Wiring the World
It takes a while before they figure out how to lay wires under water to connect places. In 1850, telegraphic engineers try to lay a wire across the English Channel. It works for a few hours before the waves buffet the wire against the rocks enough that the insulation wears off and the wire breaks. They also run into trouble with water interfering with the electrical properties of the signal. A year later, in 1851, they build a massive, heavily insulated cable and lay it down – this connects England to France.
Figure 6: Map showing the English Channel, with Dover (England) and Calais (France) labeled. (Google Imagery et al., 2019)
Connecting the world takes a series of baby steps. In 1853, they decide to try to connect Newfoundland to New York, so that a ship could leave England, sail to St. John’s, Newfoundland, drop off messages, and it would then take only a few days to get messages across the Atlantic. Perhaps presaging some of the difficulties ahead, it takes about two and a half years to connect Newfoundland to New York!
One of my favourite stories then happens in 1857. Two ships, the USS Niagara (from the U.S. Navy) and the HMS Agamemnon (from the British Navy), set out with a plan to lay a large cable connecting Europe to North America and achieve the dream of intercontinental communication! Each ship carries half of the cable. They begin laying it, and after a few days, the cable snaps and falls into the sea… and they sail home.
More money is raised, and they set out again. This time the Niagara and the Agamemnon will meet in the middle of the Atlantic, splice their cable ends together, and then sail back to their home ports. The cable snaps again. They go back to the middle, try again, and it snaps again. They go back, try once more, and it snaps yet again. Finally, after four failed attempts, on 5 August 1858, the two ends are connected.
Figure 7: USS Niagara. (Ray, c. 1855)
Connected!
Since the discovery of Columbus, nothing has been done in any degree comparable to the vast enlargement which has thus been given to the sphere of human activity.
(London Times, 1858)
Figure 8: Telegraphs as a tool of empire. Photos.com/iStock/Getty Images
There was a catch. The cable is so unreliable that it takes more than a week to send the welcome message, and that message alone takes over sixteen hours to transmit. Messages begin to pile up on either end in Britain and the United States, and on 1 September 1858 – less than a month after the connection was made – the cable fails completely and forever. There would be recriminations and finger-pointing for years after this failure, with the consensus largely being that Wildman Whitehouse (the chief electrician on the project) had used too high a voltage on the line in an attempt to boost the signal and had burned through the insulation. Fortunately, they tried again.
Eight years later, money was raised again, and on 13 July 1866 an expedition set out with a more rugged cable; it was finally laid from Ireland to Newfoundland, and the Atlantic Ocean was connected. Repair procedures were also developed so that cables could be fixed within weeks of a break, and engineers eventually developed pioneering processes of using pulses to find exactly where a cable was broken.
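The pulse-based fault-finding idea is, at heart, an echo measurement – an ancestor of modern time-domain reflectometry: send a pulse down the line, time the reflection from the break, and halve the round trip. A minimal sketch; the propagation speed here is an assumed figure for illustration, not a measured property of any historical cable.

```python
# Sketch of pulse-based fault finding: distance = speed * round_trip / 2.
# Assumed propagation speed, roughly two-thirds the speed of light in
# vacuum, chosen purely for illustration.
PROPAGATION_SPEED_KM_PER_S = 200_000

def distance_to_break(echo_seconds):
    """Distance to the fault, given the round-trip time of the echo pulse."""
    return PROPAGATION_SPEED_KM_PER_S * echo_seconds / 2

print(distance_to_break(0.01))  # -> 1000.0 (km from the shore station)
```

A repair ship could then sail to roughly that point, grapple for the cable, and splice in a new section.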
By the 1870s, the world begins to be connected: France to Newfoundland in 1869; India, Hong Kong, China, and Japan join by 1870; Australia in 1871, and South America by 1874.
Within 30 years of Samuel Morse building the line from Washington DC to Baltimore, Maryland, there are 650,000 miles of cable in the world. In 1844, sending a message from London to India and back took about ten weeks; now it can be done in a matter of four minutes. This was, to be sure, also a tool of empire: empires had the capital to build these large infrastructure projects.
Lots of Change: The Rise of the Telegraph
As the telegraph networks grow, telegraph offices spread and become ever-present fixtures. We can see a few interesting things coming out of these growing networks.
First, we can see the rise of online dating (well, not quite, but bear with me). In the United States, there are roughly two male telegraph operators for every female operator – a significant level of employment equity for the time. In 1891, there is an interesting story of a telegraph operator in Yuma, Arizona. Bored and largely by himself (he is at a railway siding with six maintenance men and nobody else for company), he begins chatting “online” with other operators to pass the time. Over time, he becomes friends with “Mat,” and they decide to go fishing. When they meet, he realizes that Mat is a woman – and they decide to get married!
Second, we can begin to see the disruption of news (a process that, of course, continues today with the Web). Newspapers used to compete on breaking news first (who could get the information from point A to point B), but now that everybody uses the telegraph, news spreads almost instantaneously. While people worry about the implications for news agencies, it turns out that since people do not have their own personal telegraph devices, newspapers are still needed to assemble the news. Newspapers realize they do not all need to send their own journalists to report on the same events, so they begin to form groups: the wire services, like the Associated Press.
Figure 9: A woman using a Morse code radio communication device. cjp/iStock/Getty Images
This sees news stories being treated like commodities, sold to various newspapers, who can then reprint, repackage, and sell them to consumers (along with their own reporting and analysis).
Third, we begin to see the rise of monopolies. With infrastructure, it often makes sense not to have more than one or two cables, and there are high barriers to entering the telegraph market. Similar to, say, Internet cable infrastructure or water provision, we see the rise of natural monopolies. In Europe, governments take on the role of telegraphy – even in England, where the railroads played such an important role early on, the telegraph networks are taken under the wing of the Post Office. In the United States, by contrast, the private sector takes a major role, handling about 80% of traffic by the 1880s.
So, as we near the end of the nineteenth century, where are we?
Conclusions
The world has largely been connected, with an expectation of quick communication. Yet telegraphs remain an expensive, monopoly-dominated product for “important” news. The technology continues to develop, first with duplexing (where one line could carry communication in both directions at the same time, like the later telephone), and later with quadruplexing (which allowed four messages – two in each direction – to be sent over the same wire at once), providing more capacity per wire and movement towards automatically transcribing telegraph content.
Read Relays, Repeaters, Duplexing and Quadruplexing if you are interested in learning more about the technical aspects of the history of telegraphs.
The next step was to see if the wires could carry more information. Maybe even voice? Could you create a “harmonic telegraph”? Some of the democratic potential of the telegraph would be realized in the telephone.
4d. Gabbing Around the World: The Telephone
While we often take the telephone for granted today – you may have one in your pocket, or sitting on your desk – it is interesting to think that there was a time when the telephone was as groundbreaking as the iPhone was ten years ago. Imagine what it might have been like to be one of the first telephone users.
CBC. (2013, April 22). 22 Minutes: Heritage Minutes: Alexander Graham Bell | CBC. [Video]. YouTube. https://www.youtube.com/watch?v=RuQzp_TI-3s
The more things change… the more they stay the same!
The transatlantic telegraph was completed in 1866. But the telephone – invented in 1876 – would grow to “augment, rival, and eclipse its older sibling,” as Rob MacDougall writes in the reading for this module. Unlike the telegraph, the telephone would end up directly connecting the home.
So where did this begin?
In some ways, people have been trying to extend their voices for a long time: think of ear trumpets (or of using an ear trumpet the other way around to project your voice a long way). Or think of a technology you might have experimented with as a child: the “lover’s telegraph” or “tin can telephone,” where you take two cups, tie a string between them, and, if it is taut enough, the vibrations can travel along it and be reconstituted, letting you speak across a fair distance.
All of the above come initially to be known as “telephones”. But by the nineteenth century, people were realizing that the future might be in finding some way to convert electrical waves into sound. There are early experiments: in 1837, electromagnets are rigged up to produce sound; in 1846, a rudimentary device is developed that can analyse sounds and move tuning forks in certain directions.
Figure 1: Louise Elisabeth de Meuron using an ear trumpet, mid-20th century. (Unknown, c. 1980)
Figure 2: Alexander Graham Bell. Photos.com/iStock/Getty Images
Enter Alexander Graham Bell (1847-1922), an elocution teacher concerned with deafness. He is working with deaf people and wondering if there is some way to render speech visible – to take sound and automatically write it down so that a deaf person could read it. In 1872, he does this weird thing where he takes a dead man’s ear, rigs it up to a metal horn, and hooks a stylus (a writing instrument like a pencil) to the ossicles (the little bones in the ear that transmit sound). Sound entering the horn moves the stylus, drawing lines on a piece of paper. It is a bit promising, but it does not lead anywhere. Still, Bell continues to work on the idea. As we noted briefly in our introductory module, he is not alone in trying to do this. Elisha Gray is also developing the telephone and by 1875 is probably a bit closer – we don’t want to get too granular here.
Through a lot of work on this problem, by 1876 Bell develops the telephone. It is not fully working, but it can transmit some sound, and he decides to file a patent to buy more time to develop it. Indeed, Gray shows up at the patent office on the very same day – 14 February 1876 – but Bell prevails, and his patent is granted on 7 March. He gets the device working shortly thereafter with the famous sentence “Mr. Watson – come here – I want to see you”.
The telephone is first marketed and explained as “a serious instrument for serious people”. Bell, seeking investors, downplays the potential for individuals to use it. Instead, the telephone is marketed as a way for bankers, merchants, fire stations, police, newspapers, hospitals, and the like to communicate with each other. It is not going to be “mom and pop”. It is going to be big business!
Even if the telephone is targeted at big business, its impact is first felt in how it literally begins to overshadow city streets.
All Those Damn Wires: The Telephone Fights
Figure 3: A telephone tower in Stockholm, Sweden, 1887-1913, with 5000 connected wires. (Tekniska museet, n.d.)
Telephones begin to spread across Canada and the United States, and the phone lines begin to physically take over our streets.
One of the most famous cases, expertly recounted in Robert MacDougall’s The People’s Network (which you read in part for this module), happened in December 1880. The newly-chartered Bell Telephone Company of Canada decided to begin connecting its lines along Rue de Buade, a short but busy street in downtown Quebec City. It is a narrow, 32-foot-wide road, and those 32 feet need to do a lot: provide for sidewalks, horse-carriage passage, and now telephone poles. Bell decides to put a telephone pole right on the sidewalk, directly in front of the door of a local newspaper, the Daily News.
Now if somebody put a telephone post in front of my door, I would be angry.
James Carrel, the publisher of the Daily News, is very angry. He begins writing in his newspaper about Bell Canada and their “abominable aggressions”, their “useless extravagance”, and the “intolerable grievance” of their “unsightly and obstructive telephone masts” (MacDougall, p. 20).
Now Carrel isn’t alone. People hate the telephone poles. And why not? They are a physical symbol of infrastructure that most people can’t use, but have to walk under, look up at, and dodge around every single day. Municipal governments around this time are frequently in conflict with telephone companies – some even send firemen to chop the poles down, and the companies respond by having linemen perch on top of the poles so the firemen can’t. In one story from Montreal in 1885, a mob literally attacks the telephone exchange because they think smallpox is being transmitted over the wires across the city (MacDougall, p. 20).
That one pole on the Rue de Buade sets off a major debate around the nature of telephone networks in Canada, and arguably – as we will be discussing – perhaps lays the groundwork for how we regulate today’s Internet. Indeed, the question of who had the right to put a telephone pole on the road led to debates like:
Figure 4: Early debates about the telephone. Image description . © University of Waterloo
Our instinct is often, today in the twenty-first century, to think “national”. But at that time, just as with many of our connections today, people mostly talked to each other locally. Your average person in Ottawa didn’t really need to contact somebody in Montreal, let alone somebody in Vancouver – their friends, family, etc. would more than likely be in the same city as they were.
Things end up being very different between the United States and Canada, thanks in part to our friend James Carrel and his newspaper pole. In the United States, municipalities play a very active role: regulating service and rates, encouraging local ownership, and sometimes providing service themselves. This seems natural to many – long-distance telephony remains an unaffordable luxury well into the twentieth century. When people live their lives locally, it makes sense that most would just want local networks, and that local governments or utilities would provide them. So small towns like Muncie, Indiana (the focus of the reading by MacDougall) might start by asking a big player like Bell for telephone service. If that didn’t pan out, they would just start their own.
How does a municipality in the United States end up taking control? By realizing that power over telephones lay in control over where the telephone poles stood. In Muncie, Indiana, for example, the city passes a civic beautification law that tries to force all of the lines underground. Telephone companies didn’t want to do that – it is a lot more expensive to bury a line than to string it between two poles – so they end up negotiating with municipalities to find rates that keep them profitable but affordable.
The same does not happen in Canada. Municipal governments lose power over telephones, thanks in no small part to the telephone pole on the Rue de Buade.
Carrel had been so angry about the telephone pole popping up in front of his newspaper office door that he decided to sue. He first went to the municipal government and had no success, as the city was not sure whether it could regulate a federally chartered corporation, as opposed to a provincial one. He then filed a private criminal suit against Bell’s manager for Quebec – who was arrested, convicted of obstructing the Queen’s Highway, and had his conviction upheld on appeal.
But despite Carrel’s celebration, the pole remained.
Bell is not happy about its employees being arrested! It petitions the federal government to amend its charter, declaring the telephone a work for the benefit of the whole country and thus insulating the company against municipal and provincial authority. It does the same in the provinces of Ontario and Manitoba. Parliament says yes, and local municipalities – unlike those in the United States – lose power over Bell Canada’s affairs.
We accordingly then see a big difference:
In the United States, telephones become understood as local enterprises under municipal control until well into the twentieth century;
In Canada, telephones become a “national undertaking” – the federal government interferes and assumes responsibility well before it was even technically possible to have widespread intercity calling.
Figure 5: Regulatory differences in the history of the telephone in Canada and the U.S. da-vooda/iStock/GettyImages; Yevhenii Dubinko/iStock/GettyImages
The Indiana Telephone War vs. the Canadian Monopoly: Changing Usage Patterns
We can see this in two case studies.
In Indiana, we have something that has come to be known as the Indiana Telephone War. In 1884, discontent with the existing Central Union telephone company leads the State of Indiana to charter new telephone companies. In 1885, Indiana imposes a $3 per month maximum rate for telephone service, a decision upheld by the state supreme court the following year. In response, Central Union begins to disconnect subscribers in Indianapolis, claiming it cannot make money at these rates. While Indiana backs down in 1889, the episode emboldens Americans to fight these monopolies and reinforces the importance of local control.
This resentment came at a bad time for Bell. Slowly, the original patents that Bell held on the telephone began to expire, and American towns began to construct their own telephone companies. We will return to what this means in a moment.
Meanwhile, in Canada, things look like they might be good for smaller telephone companies. Bell soon loses its patent in Canada. At the time, if you wanted to keep a patent in Canada you had to manufacture your products in Canada: when it surfaces that Bell had been making its telephones in Chicago, the patent is nullified in January 1885.
Bell is able to circumvent this problem, however, through some sneaky business. Bell often negotiates exclusive franchises with municipal governments by offering them a cut of profits. In other places, Bell does questionable things like setting up a dummy company (e.g. “The People’s Telephone Company”) that can undercut other competitors – and Bell itself. The dummy would force competitors out of business and then be absorbed back into the company. Most importantly, thanks to Carrel’s case, Bell never conceded that municipalities had any control over it.
Figure 6: Telephone operator using a switchboard. Creative!/iStock/GettyImages
So how do Canadians and Americans use their phones differently thanks to these two legal and social regimes?
Some statistics, drawn from MacDougall, drive home how these regulatory and government differences manifest themselves. By 1905:
In Indiana, there is one telephone for every 12 people
In Ontario, there is one telephone for every 90 people
Figure 7: Access to telephones, Indiana vs. Ontario, c. 1905. da-vooda/iStock/GettyImages
This is a shocking difference. Beyond the statistics above, we can see it manifested in how people used their telephones. In the United States, the more ubiquitous telephone and more accessible rates mean that telephones end up in exciting places like saloons, stables, barbershops, and beyond, and public telephones appear. In Ontario, by contrast, the telephone remains the purview of business offices and wealthy homes, and public telephones are rare.
We can see how different cultures emerge. In places where Bell had a monopoly, like Ontario, it would institute “measured service”: a low or no monthly rate, but five or ten cents per call. Imagine an Internet connection where you had to pay a few dollars per GB – you would probably only use the Internet when you needed to. Yet in places where there was competition or government regulation, like many locations in the United States (Indiana among them), companies offered “flat rates for unlimited local calls”. Flat rates let people be frivolous (just as you might be with unlimited Internet – watching a lot of mediocre Netflix shows just because you could): they could gossip, sing songs, give ad hoc concerts over the phone, and chat about whatever they wanted. Metered rates meant that phones were serious places.
Ultimately, this moment of creativity and fun would fall apart as long-distance telephony evolved. Bell would eventually be able to market itself not as a connector of local places, but as an “annihilator of space”. By 1915, the United States had its first coast-to-coast telephone call. Bell stayed in New York, his old assistant Watson went to San Francisco, and Bell said “Mr. Watson, come here, I want to see you” – the very same words he had said almost forty years before. Here we see the telephone companies adopting the rhetoric of the independent networks: promoting the telephone as democratic, democratizing, and a way – as I said above – to annihilate space.
Figure 8: The telephone as an “annihilator of space.” RosLilly/iStock/GettyImages
Conclusion
Fortunately, telephones became a regulated monopoly. In the United States, the federal government began to regulate interstate telephone service in 1910: the Mann-Elkins Act designated telephone companies as common carriers under the Interstate Commerce Commission. This would let governments freeze rate increases, but it also extended free speech principles to privately owned carriers. This is because a common carrier needs to offer service on a non-discriminatory basis: e.g. CN Rail, a hotel, or Grand River Transit are “common carriers” in this sense, because they have to serve you but they are not responsible if you commit a crime using their services.
The telephone monopoly would eventually fall apart. In 1974, the United States Department of Justice launched an antitrust lawsuit against AT&T (which by that point controlled long-distance telephony in the United States and made most of the phones); the case was settled in 1982, with AT&T agreeing to break itself up into seven regional “Baby Bells”. Some of these “Baby Bells” would see re-consolidation (re-joining to become AT&T again, or Verizon), as we can see in this CNN article How AT&T Got Busted Up and Pieced Back Together.
In the experience of the telephone, and how different national regulatory regimes gave rise to different usage cultures, we can see some parallels to debates over the Internet today. We return to these, and some broader points, in the conclusion.
The History of the Telephone
4e. Net Neutrality and Standards Today, or Conclusions
Figure 1: What about the parallels today? imaginima/iStock/GettyImages
What about today? In many ways, you could argue that the telecommunication industry in Canada or the United States today looks a bit like it did long ago, before the monopolies took hold. We have competition, right? Rogers vs. Telus, or Start Internet vs. TekSavvy. Maybe competition is too strong a word, as argued in this CBC article Canadians pay some of the highest wireless prices in the world – but report says they’re worth it.
We can learn a few things from history:
Communications technologies are political. How the phone was used was profoundly influenced by court rulings, regulatory regimes, and how businesses acted within them. Sometimes today, on the Internet, we can think that “politics are irrelevant”, because the Internet can transcend borders and make all that seem not to matter. Indeed, in our module on the counterculture, we will return to some of these ideas. But the history of the telephone demonstrates that communication networks are shaped by governments.
Standards are great…but can be overrated. When there were a lot of different telephone companies, many of which did not connect with each other, this ironically led to lots of excitement and varied uses of the telephone. Similarly, as How GeoCities Suburbanized the Internet argues, in the early days of the World Wide Web, when websites were garish and scattered, maybe we saw more excitement and varied uses than we do in the days of a network dominated by Facebook or Twitter.
These historical parallels can help us understand contemporary debates. During the 2017/2018 debate over “net neutrality”, many commentators harkened back to telephone regulation to make their cases – indeed, in the above two points, I used a similar rhetorical strategy. One way that newspapers, editorial writers, and magazine authors can make an argument is to appeal to history; they build on the idea that what has happened before might help us understand what is to come.
Those who cannot remember the past are condemned to repeat it.
(Santayana, 1905)
Historians have a love-hate relationship with this quote by George Santayana. On the one hand, historians like it because it speaks to the rhetorical power of history; on the other, they worry about being perceived as mystical soothsayers. Historians understand the past, which might help illustrate the present, but we of course cannot see the future. To use a Canadian example, I – and others – often think of historians as great sports commentators. If you watch enough hockey, when the Leafs play the Islanders, you will not know who is going to win. However, you will have enough informed expertise to understand the basic structure of the game, what might happen, and when to be surprised. In short, an informed opinion.
Read and Reflect: Long Before Net Neutrality Reading
So let’s look at one of the short readings that I assigned you all to read this week: Long Before Net Neutrality, Rules Levelled the Landscape for Phone Services .
Figure 2: Long Before Net Neutrality. (NPR, 2015)
Take a few minutes to read the article, and reflect on these questions:
What is this article’s argument?
How does the author use history to support that argument?
What evidence do they use?
Are you convinced?
Is this a responsible use of history?
Then move on to the Module 4 Discussion below.
Module 4 Group Discussion Activity
This week, we are going to think collectively about the power of historical arguments to inform contemporary debates. Your instructor will post one of the following four historical articles in your group’s Historical Argument Discussion Topic , by the deadline indicated in the Course Schedule:
The Debate Over Net Neutrality Has Its Roots in the Fight Over Radio Freedom
How the FCC’s Net Neutrality Plan Breaks With 50 Years of History
Ten parallels between the telegraph and the Internet in international politics
Net Neutrality: Lessons from the Past
Read the article assigned to your group, and write a short response to the following questions:
What is this article’s argument?
How does the author use history to support that argument?
What evidence do they use?
Are you convinced?
Is this a responsible use of history?
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Module 5: Hypertext and the Idea of Linked Information
5a. Introduction to Hypertext
Hypertext
As we will learn about today, the idea of hypertext was formally defined in 1965 by Ted Nelson, although the idea predated it by twenty years.
Figure 1: Conceptual predecessors of hypertext . Image description (Office for Emergency Management & Library of Congress, c. 1940-1944); (Engelbart, 2008); (Gotanero, 2013)
What is hypertext ?
A body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.
(Nelson, 1965)
We see hypertext everywhere. When you open up a Web browser, you see “HTTP”.
Figure 2 : Browser address bar. (Google Inc., 2018)
HTTP stands for the Hypertext Transfer Protocol, which is the protocol that underlies communication on the World Wide Web. In a nutshell, hypertext documents use links – or hyperlinks – to connect nodes. We will talk about the specifics of the Web at the end of this lesson.
You may also see HTTPS, which stands for Hypertext Transfer Protocol Secure. This is an extension of the original protocol that allows for better privacy and authenticity.
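At its core, the Web’s hypertext is just documents containing links. As a small illustration – a sketch using only Python’s standard library, with made-up URLs – here is how the hyperlinks in a snippet of HTML can be pulled out of the `<a href="...">` anchors that tie pages together:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag -- the hyperlinks of a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A toy hypertext document: one node pointing at two others.
page = ('<p>See <a href="https://uwaterloo.ca">Waterloo</a> '
        'and <a href="https://example.org">more</a>.</p>')
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['https://uwaterloo.ca', 'https://example.org']
```

Every Web browser does essentially this – plus rendering – each time it loads a page.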
Let’s begin by just reminding ourselves a bit about the power of links in a short exercise.
Exercise: Ten Degrees of Wikipedia
You might have heard of the popular game “ Six Degrees of Kevin Bacon ”. It’s the idea that the popular actor Kevin Bacon could be connected to almost anybody else in Hollywood in six degrees.
Figure 3: Six degrees of Kevin Bacon. (Skidmore, 2014)
Let’s play a similar sort of game to remember how hyperlinks work. Wikipedia is a great example. This large, collaboratively written and edited compendium of knowledge contains a mind-boggling array of information on topics big, small, obscure, important, and beyond. Let’s try to navigate through Wikipedia using only hyperlinks.
Please navigate to the University of Waterloo’s Wikipedia page . Those of you who are on our campus, or know a little bit about it, know that we have a fun problem with Canada Geese. Yet on the University of Waterloo’s Wikipedia page, there is (as of writing) no link or even mention of our infamous waterfowl. Using only links, I would like you to find yourselves on the Canada Goose page.
What’s the quickest route you could find?
The answer is 2 clicks: Canada → Canada Goose.
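Finding the quickest route between two pages is a shortest-path problem over the graph of hyperlinks. Here is a minimal sketch using breadth-first search over a tiny hypothetical link graph (not Wikipedia’s real link structure):

```python
from collections import deque

# Toy link graph: each page maps to the pages it links to (hypothetical data).
links = {
    "University of Waterloo": ["Canada", "Ontario"],
    "Canada": ["Canada Goose", "Ontario"],
    "Ontario": ["Canada"],
    "Canada Goose": [],
}

def clicks(start, goal):
    """Breadth-first search: the fewest clicks from one page to another."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        page, depth = queue.popleft()
        if page == goal:
            return depth
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None  # no path of links exists

print(clicks("University of Waterloo", "Canada Goose"))  # 2
```

Breadth-first search explores pages one click away, then two, and so on, which is why the first time it reaches the goal it has found the shortest route.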
Exercise: Ten Degrees from “Canada”
Now let’s see how far we can go on Wikipedia. I would like you to begin on the Canada Wikipedia page. Click one of the links on that page, and jot down the URL and the name of the page. Do this sequentially until you have visited 10 pages in total. Once you are done, you will have a sense of how far ten clicks can take you – and of the strangest place you can end up.
The above examples reflect the power of hypertext. Before hypertext, if you were reading a book about the University of Waterloo and then wanted to learn about geese, you had to put down the book, go to the library, find a book about geese, and read that. As the two are on disparate topics, the books would undoubtedly be on different floors of a large research library like Dana Porter.
In short, hypertext has revolutionized how we consume and use information.
Conclusion
Hypertext, which we will be discussing in this module, is today synonymous with the Web. If you think about hypertext, you are probably thinking about the Web; conversely, if you think about the Web, you are probably thinking about hypertext (and if that isn’t how you normally thought of the Web, after our last exercise it hopefully is).
Yet this module demonstrates that the World Wide Web is one particular implementation of hypertext as a concept. It is the most successful and distributed model of hypertext, but that does not mean that it is the “best” in any way. Indeed, history can show us how hypertext can be different!
For example, the Web’s version of hypertext has a few unique characteristics compared to what we’ll see in the examples for this lecture.
The Web permits broken links: i.e. a “404”.
Figure 4: 404 error page. (Google Inc., 2018)
Links are one-directional.
Reading content is generally distinct from writing content: on the Web, the tools for consuming content and for authoring it are largely separate.
Links generally go to whole pages: while they may point to anchors or headers within a page, they generally do not point to individual sentences or words.
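The one-directional nature of Web links is easy to see in code: a page records its outgoing links, but finding “what links here” requires inverting the entire graph, which no single page can do on its own. A small sketch with hypothetical page names:

```python
# One-directional links: page "A" knows it links to "B", but "B" has
# no record of "A". To answer "what links here?" we must invert the
# whole graph (hypothetical data).
forward = {"A": ["B", "C"], "B": ["C"], "C": []}

def backlinks(graph):
    """Invert a forward-link graph to find each page's incoming links."""
    inverted = {page: [] for page in graph}
    for src, targets in graph.items():
        for dst in targets:
            inverted[dst].append(src)
    return inverted

print(backlinks(forward)["C"])  # ['A', 'B']
```

Earlier hypertext systems, as we will see, often made links two-way by design, so the system itself could answer this question without any global computation.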
To understand hypertext, then, we need to go back to where it all begins.
5b. The Memex
Introduction to the Memex
Many of the technologies discussed in this course do not have clear genealogies. For example, in Module Three, we discussed how the telephone and telegraph have very scattered origin stories. Yet when scholars look at hypertext and begin to trace citations and ideas, they continue to come back to the same origin story: Vannevar Bush and the Memex.
The Memex was introduced by Bush in a July 1945 article in The Atlantic Monthly , entitled “As We May Think”. If you haven’t read the article, please do so now.
Module 5b Individual Activity: The Memex
I would like you to reflect on the article you have read by Bush. In one paragraph, please describe the following:
What is the Memex?
How does the Memex work?
For instructions about how to submit this activity, refer to the Individual Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Figure 1: The Memex. (Bush, 1945)
Now that you have thought about the Memex, let’s reflect on a few key characteristics of the device.
It was based on microfilm to store and record information.
You can use a desk camera to add more material to the Memex.
You can tie things together between documents: draw a link between two items (for example, a bow and an arrow), and later follow the trail you have tied between the two concepts.
We will return to the last point shortly, as it lies at the heart of the Memex’s importance for hypertext. To understand the Memex, we need to understand two constituent parts of it.
Vannevar Bush
The author of “As We May Think” was Vannevar Bush. His personal biography can help us understand the significance of this idea, as well as why it appeared in the Atlantic Monthly. Bush himself would become an outsized figure in American engineering and science, participating in and leading projects with outcomes as varied as the atomic bomb, the development of radar, the establishment of the American National Science Foundation, and even – as we discuss in this lesson – the ancestors of the World Wide Web.
Figure 2: Vannevar Bush. (Office for Emergency Management & Library of Congress, c. 1940-1944)
Figure 3: Vannevar Bush timeline. Image description . © University of Waterloo
The Memex continues to impact both the fields of information retrieval as well as new media.
The field of information retrieval has been inspired by Bush’s vision of simple, elegant information access. New media, however, has been inspired by a different aspect of Bush’s vision: the scholar creating links and pathways through this information. … this remains one of the defining dreams of new media.
(Wardrip-Fruin, 1999)
To understand the Memex, we need to understand two major factors that influenced its development.
The Two Factors: Differential Analyzer and Microfilm
The reading that you did for this week by Belinda Barnet argues that the Memex’s foundation can be found in the idea of the differential analyzer. As the reading covers much of this ground, I will not go into detail here. Basically, the differential analyzer is a giant calculator. It solves differential equations using a series of wheels and disks that generate mathematical answers: gear ratios between different gears, turned through mechanical revolutions, let you do multiplication, addition, subtraction, and more sophisticated forms of mathematical analysis. Or in other words, the differential analyzer is better at mathematics than your professor is!
Bush and another man, Harold Locke Hazen, created the first general-purpose analyzer at MIT between 1928 and 1931. This is important because the machine was not created for a specific purpose, but rather could be used for a multiplicity of functions. It allowed users to calculate very complicated things far quicker than ever before.
It is a useful connection to understand as the differential analyzer relies to a large degree on physical computing: wheels and disks spinning. The Memex operates under similar principles.
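To get a feel for what the analyzer’s wheels and disks were doing, consider a digital stand-in (a sketch only – Bush’s machine was mechanical and analog, not a program). Euler’s method solves a differential equation by accumulating many tiny increments, much as the analyzer accumulated tiny rotations:

```python
def euler(f, y0, t0, t1, steps):
    """Approximate y(t1) for dy/dt = f(t, y) with y(t0) = y0."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)  # each step is like one small turn of a wheel
        t += h
    return y

# Example: dy/dt = y with y(0) = 1 approximates e at t = 1.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100000)
print(round(approx, 3))  # 2.718
```

The analogy is loose – the analyzer integrated continuously rather than in discrete steps – but it captures the core idea of building up a solution from many small increments.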
Yet much of what goes into the Memex relies on microfilm. Microfilm dates back to the early nineteenth century, but it became commercialized and successful by the 1920s. Soon, throughout the 1920s and the 1930s, academics are increasingly interested in what microfilm can do to revolutionize studies and the spread of ideas.
What’s microfilm?
Microfilm is a set of scaled-down documents on a film strip. Documents are reduced to roughly 1/25th of their original size, and are then viewed through microfilm readers, which illuminate and magnify them. Today, in many academic libraries, microfilm is primarily used to consult newspapers, theses, and dissertations. But its uses have ranged from shrinking down soldiers’ letters home during the Second World War to storing volumes of sheet music and other forms of government records.
If you are on the University of Waterloo campus, you can visit a microfilm room in the basement of Dana Porter Library. Any university library or many large public library systems will have microfilm collections, and the rooms that contain these collections are usually some of the densest information sources! Imagine that a library wanted to keep every copy of the Toronto Star ever published: that would take up rooms if they were full size. But the whole run of a newspaper can fit on a few shelves once they have been condensed to microfilm.
Figure 4: Microfilm from the Dana Porter Library, University of Waterloo. © University of Waterloo
Some of the early potential of microfilm can be seen throughout the 1920s and 1930s. The Library of Congress in the United States, for example, microfilms some three million volumes from the British Library and brings them back to Washington. Instead of having to make the long journey across the Atlantic Ocean to London, England, a researcher could now work from the comfort of the United States. This process begins to repeat itself around the western world, leading to a massive revolution in information sharing and accessibility.
Microfilm is a beautiful analog system.
It is a moment of massive excitement: the idea that information could be nearly universally accessible.
But there’s a problem.
Microfilm is difficult to use. If you haven’t used a microfilm reel before (you are lucky), imagine a long Netflix video in which you want to find a particular scene – but you can only fast-forward and rewind. You might fast-forward too far, have to go back, then forward again, ad nauseam, to find your content. Now imagine that instead of Netflix scenes, there are thousands of individual pages, and you can begin to see how this could become a bit frustrating.
The Memex
One of Vannevar Bush’s first ideas in this area addresses the problem of finding information in microfilm. He first proposes the idea of the Selector, which would map microfilm to a series of codes. For example, a user could indicate that they wanted to access information about “cats,” and the machine would spin the microfilm until it found material relating to our furry friends.
This approach had a crucial shortcoming. You could find, say, cats here and dogs there, but related concepts would be scattered elsewhere – fur coats (sorry) here, cat houses there, domestication somewhere else, the use of pets in entertainment beyond that. You can see the problem: you might use the Selector to find cats, but then you would have to start over and find domestication separately. It’s quicker and more accessible than a stack of books, but it still suffers from the same problem of not capturing the complex interconnections within text.
Bush had a crucial point to make in response to this shortcoming.
The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain . It has other characteristics, of course; trails that are not frequently followed are prone to fade, items are not fully permanent, memory is transitory. Yet the speed of action, the intricacy of trails, the detail of mental pictures, is awe-inspiring beyond all else in nature.
(Bush, 1945)
Figure 5: Example of a semantic network in the human mind. Image description © University of Waterloo
Enter the Memex then, which you have now read about. The Memex was an analog system, based on a series of photographing, microfilming, and information retrieval concepts. Crucially, there would be a hypertext component as well.
As I noted above, the most important part of the Memex is the concept of hypertext. Bush doesn’t call it that – the term “hypertext” itself won’t be defined until 1965 – but the basic building blocks are there. A user would link concepts together, and these links were a critical part of the Memex. There is even a rudimentary machine learning idea here: a user would draw connections between people, places, and other concepts, and the Memex itself might learn how to do it as well.
While the Memex was a personal device rather than a public network, these links would form a giant record of your associations and actions.
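Bush’s “trails” of associative links can be sketched as a simple data structure. This is a hypothetical digital rendering – the Memex itself was an analog, microfilm-based design – but it shows the core idea of tying items together and then following the associations:

```python
# A sketch of a Memex "trail": named associative links between items
# (hypothetical structure, not Bush's actual mechanism).
trail = []

def tie(a, b, label):
    """Tie two items together, as Bush's user ties 'bow' to 'arrow'."""
    trail.append((a, b, label))

def associated(item):
    """Follow the trail outward from one item, by association."""
    return [(b if a == item else a, label)
            for a, b, label in trail if item in (a, b)]

tie("bow", "arrow", "archery")
tie("arrow", "feather", "fletching")
print(associated("arrow"))  # [('bow', 'archery'), ('feather', 'fletching')]
```

Notice that the links here run in both directions – from either item you can recover the association – which, as noted above, is something the Web’s one-directional links do not give you for free.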
Module 5 Group Discussion Activity
I’d like you to take some time now to discuss Bush’s “As We May Think”. Select and reflect on at least three of the following five questions, and post your responses in the As We May Think Discussion Topic.
Is “As We May Think” still relevant today?
What did Bush predict that came true?
Is the Memex comparable to systems that we have today?
How can machines help people think?
Why does the Memex matter?
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
5c. oN-Line System (NLS)
Douglas Engelbart
The Memex was conceptually and theoretically extremely important, but was not built. So what would come after the Memex to help realize the vision?
Enter Douglas Engelbart and the oN-Line System, or NLS. Engelbart’s vision was that human beings are always augmenting themselves: born with innate genetic capabilities, we then augment ourselves with tools, techniques, skills, language, and, yes, technology. This is an important starting point, as it underpinned his vision of computing. As we will see, he was profoundly influenced by a feeling that the world was getting more complex and moving faster and faster, and that accordingly humans “needed to ‘harness’ the inherited mess”.
Where did these ideas come from?
To understand Engelbart, we need to go back to the Second World War. He was stationed in the Philippines when, at an airfield, he saw a copy of the Atlantic Monthly in the Red Cross library. It caught his eye and he sat down to read it. The atomic bomb had just been dropped on Japan, and as he read “As We May Think”, Engelbart wondered whether the same set of scientific skills that had produced such monstrous weapons of destruction could instead be used to prevent destruction. He began to think about this, and it stuck with him for years to come.
Five years later, he has three “intellectual flashes” that shape his conceptual development of hypertext.
Figure 1: Douglas Engelbart. (Engelbart, 2008)
Boosting humankind’s ability to deal with complex, urgent problems would be an attractive candidate as an arena in which a young person might try to “make the most difference”.
An “aha” graphic vision surges forth of me sitting at a large CRT console, working in ways that are rapidly evolving in front of my eyes (beginning from memories of the radar screen consoles I used to service).
These visions are important. They were influenced by Engelbart’s Second World War work with radar consoles – anti-aircraft operations are often seen in early cybernetic texts as the melding of man and machine – but they also articulated a crucial new paradigm of computing.
Figure 2: Mainframe computer at the University of Waterloo, 1968. © University of Waterloo
Figure 3: Waterloo punch card, 1969. © University of Waterloo
Figure 4: Information retrieval from a mainframe computer. makyzz/iStock/GettyImages; Meilun/iStock/GettyImages; jj_voodoo/iStock/GettyImages
The Mother of All Demos
This all then comes together with the NLS. By the late 1950s, Engelbart had come to work at the Stanford Research Institute (or SRI). There he begins working on a series of projects that culminate in the oN-Line System (NLS). Development is initially slow, but the team subsequently receives funding from the Advanced Research Projects Agency, or ARPA. ARPA is trying to figure out time-sharing computing – working on ways to improve access to large machines – and begins funding SRI to work on these problems.
We will return to ARPA and these broader problems in the next module, when we talk about the ARPANET.
One of the major pieces of software that runs on the NLS is “Journal”, a hyperlinking program – a realized vision of hypertext. It supports internal document links: any document could cite any arbitrary passage in any other document. Each link was two-way, so you would also see a back link to return to the previous document. In this respect it was similar to the Memex, but it went one step further in that all of the content would be stored and made public.
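To make the two-way link idea concrete, here is a minimal sketch in Python. This is a hypothetical illustration, not actual NLS code: the class, document names, and method names are all invented for the example. The key point is that creating a link in one direction automatically records a back link in the other.

```python
# Hypothetical sketch of two-way (bidirectional) linking, in the spirit of
# Journal: creating a link also records a back link on the target document,
# so a reader can always return to the citing document.

class LinkStore:
    def __init__(self):
        self.forward = {}   # doc -> set of docs it links to
        self.backward = {}  # doc -> set of docs that link to it

    def link(self, source, target):
        self.forward.setdefault(source, set()).add(target)
        self.backward.setdefault(target, set()).add(source)

    def links_from(self, doc):
        return sorted(self.forward.get(doc, set()))

    def links_to(self, doc):
        # The back links that the Web's one-way hyperlinks famously lack.
        return sorted(self.backward.get(doc, set()))

store = LinkStore()
store.link("report-A", "memo-B")
store.link("notes-C", "memo-B")
print(store.links_to("memo-B"))  # both citing documents are recoverable
```

On today’s Web, by contrast, only the `forward` table exists: a page has no built-in record of who links to it.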
This all came to a head in the “Mother of All Demos”. Douglas Engelbart presented on NLS and Journal at the December 1968 ACM/IEEE-CS Joint Computer Conference. In front of 2,000 to 3,000 computing professionals, a nervous Engelbart demonstrated the system that they had developed at SRI.
Pause and Reflect: “Mother of All Demos” Video
Please watch a section of this video, excerpted below (~1:40-17:06). While some parts of this demonstration are not directly connected to hypertext, watching the context around the early hyperlinking is important: it shows not only how nervous Engelbart is, but also the revolutionary input methods at play.
Doug Engelbart Institute. (2017, March 12). 1968 “Mother of All Demos” with Doug Engelbart & Team (1/3). [Video]. YouTube. https://www.youtube.com/watch?v=M5PgQS3ZBWA
After such a successful demonstration, however, you may be wondering why you have never heard of the NLS – or, if you have, why you have never used it.
Note: These questions are for individual reflection only; you are not required to submit your answers.
The NLS came to an end quite quickly. Some of this is due to internal dynamics at SRI. Key members begin leaving for another organization, Xerox PARC (Palo Alto Research Center), between 1969 and 1971. We will talk a bit more about Xerox PARC later, when we discuss the Bay Area in more detail. More importantly, government funding begins to trickle off, meaning that the NLS needs to find commercial clients if the project is going to keep going.
Yet while the Mother of All Demos was impressive, it also highlights some of the shortcomings of the system: it is complicated (Engelbart makes a few mistakes throughout his demo, which underscores this complexity) and, more importantly, it arguably did too much. It needed to be maintained as a very complicated networked system, and corporations would need to hire a small army of technicians to deploy it across their organizations – even if they only wanted a few of its functions. NLS ends up being acquired by a company called Tymshare, then by McDonnell Douglas, which cares less about this innovative system than about developing enterprise office software.
This was the premature death of a technological project, but the ideas would continue….
*Douglas Engelbart's Mother of All Demos (1968)*
5d. Xanadu and the Web
Ted Nelson
The final step that we need to discuss in this lecture is the next hypertext vision: Xanadu. Xanadu presented the idea of a computer filing system that would store and deliver all of the world’s literature through hyperlinked text, while acknowledging authorship, ownership, and quotations. It would be like the World Wide Web that you understand today, but with no broken links, no lost documents, and a micropayment system built right into its core.
To understand Xanadu, we need to briefly understand its inventor: Theodor Holm Nelson, better known as Ted Nelson. Similar to Bush and Engelbart, Nelson had a theory around how human knowledge was both inherited and transmitted.
Ted Nelson, unlike Bush and Engelbart, came at this problem a bit differently. If we think of the University of Waterloo, Bush and Engelbart would be coming from the north side of campus (Engineering, Computer Science) – whereas Nelson would be coming from the Faculty of Arts on the south side! (Go Porcellino)
Nelson is a graduate student at Harvard University in 1960, studying sociology. He is having trouble keeping his notes clear and tries many methods, some of which might be familiar to you as an undergraduate student: file cards, index tabbing, edge-notched cards, making multiple copies of documents and mashing them up. You can imagine. None of these methods solved the major problem Nelson now saw himself confronted with: the need to have information in several places at one time.
Figure 1: Ted Nelson. (Gotanero, 2013)
Figure 2: Methods for organizing information. (MacKay, 2008); FlamingPumpkin/iStock/Getty Images; christopherhall/iStock/Getty Images
What to do?
Fortunately, for the development of hypertext, Nelson then takes a computer course. He then begins to think about how computers could be put into the service of information handling.
Hypertext Defined
The first major idea that gives rise to hypertext is Nelson’s “Evolutionary List File”, or ELF. In a nutshell, ELF allows a user to compare documents. Crucially, it helps him get published at the ACM 20th National Conference. This gets hypertext into print.
It is worth us looking at this paper in a bit more depth. Please read the first three paragraphs below.
THE KINDS OF FILE structures required if we are to use the computer for personal files and as an adjunct to creativity are wholly different in character from those customary in business and scientific data processing. They need to provide the capacity for intricate and idiosyncratic arrangements, total modifiability, undecided alternatives, and thorough internal documentation.
The original idea was to make a file for writers and scientists, much like the personal side of Bush’s Memex, that would do the things such people need with the richness they would want. But there are so many possible specific functions that the mind reels. These uses and considerations become so complex that the only answer is a simple and generalized building-block structure, user-oriented and wholly general-purpose.
The resulting file structure is explained and examples of its use are given. It bears generic similarities to list-processing systems but it is slower and bigger. It employs zippered lists plus certain facilities for modification and spin-off of variations. This is technically accomplished by index manipulation and text patching, but to the user it acts like a multifarious, polymorphic, many-dimensional, infinite blackboard.
The costs are now down considerably. A small computer with mass memory and video-type display now costs $37,000; amortized over time this would cost less than a secretary, and several people could use it around the clock. A larger installation servicing an editorial office or a newspaper morgue, or a dozen scientists or scholars, could cost proportionately less and give more time to each user.
Let me introduce the word “hypertext” to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper. It may contain summaries, or maps of its contents and their interrelations; it may contain annotations, additions and footnotes from scholars who have examined it. Let me suggest that such an object and system, properly designed and administered, could have great potential for education, increasing the student’s range of choices, his sense of freedom, his motivation, and his intellectual grasp. Such a system could grow indefinitely, gradually including more and more of the world’s written knowledge. However, its internal file structure would have to be built to accept growth, change and complex informational arrangements. The ELF is such a file structure.
Read the above passage in depth, as it provides the definition of hypertext that informs both this lecture and, crucially, the lectures that follow.
Crucially, we see the idea of the link here. Both Engelbart and Nelson independently invent the concept of a link. Nelson’s version, however, is a bit different: his links are bidirectional. Indeed, today Nelson will still dismiss the World Wide Web as a series of “one-way hyperlinks”. Out of this idea would come some of the kernels of today’s World Wide Web.
Xanadu
But before we get to the World Wide Web, both in passing at the end of this module and then in more depth in future modules, we should discuss Nelson’s implementation of Hypertext: Xanadu.
Pause and Reflect: Xanadu
To understand Xanadu, read The Xanadu Parallel Universe . In particular, pay attention to the following:
How does Xanadu work?
How is it similar to the World Wide Web that you use today?
How is it different from the World Wide Web that you use today?
Note: This is for individual reflection only; you are not required to submit your answers.
You may be curious, as you have never used Xanadu before: what happened to it? Why was it never realized? The actual history of Xanadu is a bit outside the scope of this class. In 1995, Wired magazine wrote a famous article, “The Curse of Xanadu”, describing it as the “longest-running vaporware project in the history of computing”.
Xanadu continues under active development today, as we can see in this demonstration from Ted Nelson himself.
TheTedNelson. (2016, August 29). New Game in Town. [Video]. YouTube. https://www.youtube.com/watch?v=72M5kcnAL-4
While Xanadu’s dream continues on, the Web will ultimately become the dominant and most widespread version of hypertext.
Conclusions and Next Steps
In this module, I have only provided a few examples of hypertext. Wikipedia maintains a good list of other hypertext implementations – if you are curious, please feel free to check it out.
The next logical step for us in this course will be the World Wide Web. In the “Birth of the Web” module, we will explore the link between these earlier hypertext systems and the Web. In a nutshell, Tim Berners-Lee would develop the World Wide Web while he was a physicist at the European Organization for Nuclear Research, or CERN, in Geneva, Switzerland. Berners-Lee will find himself in a complex, ever-changing environment of people coming and going, and in 1980 will begin developing ideas for bringing order to that chaos.
Through this module, however, hopefully you can see that the history of the Web cannot be seen as just beginning with Tim Berners-Lee and his specific implementation of hypertext in 1989. It comes from a rich heritage dating back to Vannevar Bush and his 1945 “As We May Think”; Douglas Engelbart and the oN-Line System (NLS); and Ted Nelson and Project Xanadu.
This week we have covered a lot of ground. As noted at the beginning of the module, we had the following three goals which have been achieved in three different ways:
Have a conceptual understanding of hypertext and its history: We explored this through the beginnings of hypertext with Vannevar Bush and the Memex; its implementation in the NLS under Douglas Engelbart; and the definition of the term itself and early visions of interconnected documents in Project Xanadu.
Compare and analyze historical hypertext systems to contextualize today’s World Wide Web: We were able to see the differences between the Memex, NLS, and Xanadu – both with each other and, crucially, with the Web that you all use today.
Understand three pivotal moments in the development of hypertext through close document readings: We read closely from each: Bush’s “As We May Think”; the “Mother of All Demos” and NLS; and Ted Nelson’s pivotal 1965 ACM paper.
Module 6: The Internet: From ARPA to the ARPANET
6a. What is the Internet? From a Series of Tubes to Common Standards
In June 2006, Senator Ted Stevens, a Republican from Alaska, stood in the U.S. Senate and appeared to betray a shocking ignorance of how the Internet worked:
There’s one company now you can sign up and you can get a movie delivered to your house daily by delivery service. Okay. And currently it comes to your house, it gets put in the mail box when you get home and you change your order but you pay for that, right.
But this service is now going to go through the internet and what you do is you just go to a place on the internet and you order your movie and guess what you can order ten of them delivered to you and the delivery charge is free.
Ten of them streaming across that internet and what happens to your own personal internet?
I just the other day got, an internet was sent by my staff at 10 o’clock in the morning on Friday and I just got it yesterday. Why?
Because it got tangled up with all these things going on the internet commercially.
So you want to talk about the consumer? Let’s talk about you and me. We use this internet to communicate and we aren’t using it for commercial purposes.
We aren’t earning anything by going on that internet. Now I’m not saying you have to or you want to discriminate against those people [ø]
The regulatory approach is wrong. Your approach is regulatory in the sense that it says “No one can charge anyone for massively invading this world of the internet”. No, I’m not finished. I want people to understand my position, I’m not going to take a lot of time. [ø]
They want to deliver vast amounts of information over the internet. And again, the internet is not something you just dump something on. It’s not a truck.
It’s a series of tubes.
And if you don’t understand those tubes can be filled and if they are filled, when you put your message in, it gets in line and it’s going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material.
Now we have a separate Department of Defense internet now, did you know that?
Do you know why?
Because they have to have theirs delivered immediately. They can’t afford getting delayed by other people.
[ø]
Now I think these people are arguing whether they should be able to dump all that stuff on the internet ought to consider if they should develop a system themselves.
Maybe there is a place for a commercial net but it’s not using what consumers use every day.
It’s not using the messaging service that is essential to small businesses, to our operation of families.
The whole concept is that we should not go into this until someone shows that there is something that has been done that really is a violation of net neutrality that hits you and me.
(Wired Staff Security, 2006)
As Googlers Eric Schmidt and Jared Cohen have argued, the “Internet is among the few things humans have built that they don’t truly understand” (2013, p. 3). We live today in a society dominated by the power of networked communication but generally haven’t the faintest idea of how it came to be.
Before we get started, I thought it might be good to get your reflections on two questions. We will revisit the last one later in this module, so try to keep your answer in mind as you read through things.
In two or three sentences, can you define the Internet?
Do you agree with the following statement? “The Internet Grew Out of Fears of Nuclear Armageddon.”
Note: This is for individual reflection only; you are not required to submit your answers.
Before we get started, a few quick definitions are in order.
What is the Internet?
The Internet, with a capital I, is the biggest internet (lower-case i) in existence.
An internet is a network of two or more networks, which is traditionally called “an internetwork” created by “internetworking”. You might have a network at home or work. This can take several forms: for example, an Apple TV or gaming console hooked up to a television, a printer, two computers and assorted mobile devices, all connected to a single router. I also have a network in my home. If my network connects to your network, sharing data and facilitating interconnections, we have an internet. The Internet, then, is the interconnection of millions of networks with a common set of standards to make sure all of the systems can exchange data seamlessly.
It is also important to recognize what the Internet is not – it is not a synonym of the World Wide Web. The Web is an information system that uses the Internet – just like email uses it, and the File Transfer Protocol uses it, or Usenet used it.
Figure 1: The Internet vs. the World Wide Web. Image description . rungrote/iStock/Getty Images
Is it a Series of Tubes?
It is also worth noting that the network of networks that comprises the Internet physically exists. It is not quite a series of tubes, but as an analogy, this is actually a good starting place. In an era of ever-connected cloud computing, with files saved to a Dropbox and music streaming across wireless connections, the Internet can sometimes seem to be “just out there”. Yet the Internet is a series of interconnected cables, wires, and switching boxes – often running through actual tubes to protect them from the outside world.
Therefore, the “series of tubes” analogy is not an awful one.
How does it work?
The Internet, then, might best be understood as a common protocol by which disparate computers and computer networks can communicate. The magic that underlies this network lies in the establishment of common standards that allow a Canadian-based computer manufactured by Apple in Texas to exchange information with a Korean-manufactured computer in the Republic of South Africa. That this happens seamlessly is a tremendous human achievement.
This common standard, the Transmission Control Protocol/Internet Protocol , or TCP/IP, makes the Internet possible. There used to be many other competing protocols, until TCP/IP was adopted by the American Department of Defence in the early 1980s and spread throughout the private sector, before being freely released as open source software in 1989.
The important thing is that everybody has agreed how they should talk to each other. That, more than anything else, is the magic of the Internet.
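That shared agreement is something you can see directly in code. The sketch below is an illustrative Python example using the standard `socket` library, not a description of any historical system: a tiny server and client, which could in principle be running on completely different hardware from different manufacturers, exchange bytes purely because both speak TCP/IP.

```python
# Minimal illustration of the "common standard" idea: two programs exchange
# data over TCP without either knowing anything about the other's internals.
# Here both run on the local machine, but the protocol is the same worldwide.

import socket
import threading

def run_server(server_sock):
    # Accept one connection, read a request, send back an uppercased reply.
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

# Bind the server to an ephemeral port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_server, args=(server,), daemon=True).start()

# The client needs only the address and the agreed-upon protocol.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, internet")
    reply = client.recv(1024)

print(reply.decode())  # HELLO, INTERNET
```

The uppercasing is of course arbitrary; the point is that neither side needed to know who built the other, only that both honour the same standard.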
6b. The Context of the Cold War
DARPA
You have probably heard of the Central Intelligence Agency: a feature of innumerable Hollywood blockbusters, video games, TV shows, airport novels, and beyond. Or the National Security Agency, also part of video games and increasingly part of everyday consciousness after the revelations of Edward Snowden.
You may not have heard of DARPA, the Defense Advanced Research Projects Agency. This is a shame, because this agency is arguably core to understanding how the American government has responded to many of the most important problems it has faced since the end of the Second World War: from major advances in precision targeting to modern problems of command and control, which in some ways led to the development of the Internet as we know it.
Figure 1: DARPA. Evgeny Gromov/iStock/Getty Images
So where does the story of how the American defence industry and agencies led to the Internet begin?
To try to understand this, we need to go back to 4 October 1957, and a small sphere launched by the Soviet Union.
The Panic of Sputnik
This was the launch of Sputnik, the first man-made object in space. Sputnik terrified the United States because it showed that, despite the comparative advantage the United States had over the Soviet Union in terms of wealth and power, the Americans were falling behind in the space race.
This is because while the Soviet Union had trouble with their domestic consumer goods industry – industrial quotas are bad for producing goods for people, it turns out – they had an advantage when it came to military and space research. Authoritarian states can focus their resources as they see fit.
Initially, the United States was not too worried about Sputnik. They had their own satellite almost ready to launch, Vanguard, and had actually almost launched it a month prior, in September! After some delays, hundreds of people gathered on 6 December 1957 to see the Americans join the space race with the launch of Vanguard TV-3.
As you can see in the image… it did not go well. The rocket blew up, and the satellite it was intended to propel into space fell instead to the earth. While the U.S. government was able to launch its Explorer satellite on 31 January 1958, it was worried about the race with the Soviet Union. A new agency was thus formed to move research forward: the Advanced Research Projects Agency, or ARPA.
Figure 2: Unsuccessful launch of Vanguard. (NASA, 1957)
ARPA
We must be forward-looking in our research and development to anticipate the unimagined weapons of the future.
(Eisenhower, 1958)
On 7 February 1958, ARPA is formed with a mandate to perform research and development within the Department of Defense. It quickly becomes a unique branch of the defense services, marked by three main characteristics:
Minimal bureaucracy: The military handles paperwork, contracting, and administrative overhead. This frees up ARPA researchers’ time to work on science (if you ever work in a university, you will understand how important this is!).
Rapid response: Given the minimal bureaucracy, projects can be funded almost immediately.
Vision of the future: They wanted to tackle the future and to figure out what was next. In other words, what would come after the atomic bomb?
We can see how this plays out in a few case studies. One project that I think sums up the early ARPA organization well is Operation Argus.
The first big problem they tackle is how to stop incoming nuclear missiles. ARPA comes up with a plan to launch nuclear missiles into the upper atmosphere, creating a “force field” (or radiation belt) of electrons that could stop incoming missiles and interfere with enemy radio and radar transmissions. They carry out three tests, ultimately realizing that while the belt does have some effect, it would dissipate too quickly to be useful. Still, it shows how ambitious… and audacious ARPA could be. Shooting nukes into the sky to create force fields to stop other nukes?
ARPA also plays a role in other dimensions. Before the ban on atmospheric testing of nuclear weapons, ARPA funds Project Orion, which explored how nuclear weapons could be used to propel spacecraft (drop them behind you and detonate them, and your ship will fly – as long as the blast doesn’t kill you). During the Vietnam War, ARPA essentially develops the idea of counterinsurgency warfare, creating doctrine and methods that are used in Afghanistan today.
Where does the Internet come in?
In October 1962, it became clear that the Soviet Union had placed nuclear missiles in Cuba: missiles with a range of more than a thousand miles, able to reach Washington, D.C. in about thirteen minutes. This was a big deal because it could upend the established doctrine of Mutually Assured Destruction, which governed the Cold War between the United States and the Soviet Union at the time. The idea was that the Soviets could not really attack the Americans with their nuclear missiles, because the Americans could detect the launch and retaliate in turn, meaning both sides would be wiped out. Missiles this close to the United States raised the worry that a launch could succeed before the Americans could retaliate.
The United States imposed a naval blockade of Cuba, Soviets and Americans stared each other down, and the United States military went to DEFCON 2, the second-highest alert level before nuclear war.
Figure 3: X-17 with nuclear warhead on board the USS Norton Sound. (U.S. Navy, c. 1955)
Figure 4: Surface-to-air missile activity in Cuba, 5 September 1962. Image description (Central Intelligence Agency, 1962)
This was a very complex, high-stakes operation. A lot of information was flowing around the American government and military. For the first time, computers were used in a real-time crisis: helping people decide where ships and forces should be deployed. Commanders realized that there was:
a) too much information and
b) too much of a time lag.
Military commanders wanted information, but it was hard to get information from commander A to commander B.
In other words, how can you effectively control nuclear forces if you cannot share information? How can you make very high-stakes decisions in a very limited amount of time, unless you have the full picture? Luckily, ARPA had decided by this point that this was a big problem they would study.
6c. Information Processing at ARPA
To understand ARPA’s role in the development of the Internet, we need to consider a few important people who made it possible.
The first is J.C.R. Licklider, who became head of the Information Processing Techniques Office at ARPA in October 1962 – the same month as the Cuban Missile Crisis. Even before coming to ARPA, Licklider had been interested in the interplay between computers and humans. His 1960 article “Man-Computer Symbiosis” envisioned a symbiotic relationship between computers and people: human brains and machines working together, so that one could “augment human intellect by freeing it from mundane tasks” (Foster, 2007).
Licklider, then, becomes head of the Information Processing Techniques Office. He is already a believer in computers: he truly believes that in the future people will have computers, will interact with them, and that the machines will be connected together. In August 1962, shortly before joining ARPA, he pens a series of memos discussing the idea of a “Galactic Network”, which would connect computers so that they could quickly share data and programs. At ARPA, he begins to convince others that networked communication is very important.
Figure 1: J.C.R. Licklider. (Grech, 2001)
Figure 2: Paul Baran. (Dharapak, n.d.)
The second key individual is Paul Baran. You read his “On Distributed Communications” piece for this module. Baran was an electrical engineer who worked on early computers and, starting in 1959, for the RAND Corporation. RAND, in a nutshell, is a think tank that does most of its contracting and consulting for the United States government. It is similar to ARPA in that it is flexible: it gets to explore big questions and has the freedom to answer them.
At RAND, Baran is tasked with thinking about communications during a nuclear war. He begins working on ways that a communication system could be made resilient – how it could be survivable. He and his colleagues do so by modelling a hypothetical network and running computer simulations to see what would happen in the event of a nuclear attack. He finds that with three levels of redundancy, messages could still traverse the network. This is important because if you could survive a nuclear attack, you could then launch your nuclear missiles and kill your opponents, which in turn makes the nuclear attack less likely to happen in the first place (wasn’t the Cold War fun? Sigh).
Figure 3: Centralized, Decentralized, and Distributed Networks and the Effects of a Nuclear Attack. (Baran, 1964); Soloma_Poppystyle/iStock/Getty Images
A Centralized Network: All traffic flows in and out of a central hub. If you are from Toronto, this is rather like the role of Union Station in the GO Train network: all trains go to Union Station, or at least through it. If Kitchener station closes, the network is fine, because only that one spoke is affected. If Union Station has to close, the entire network fails.
A Decentralized Network: Traffic flows through a few hubs. This is rather like United Airlines or Air Canada today. If you want to fly from, say, Nanaimo, BC to London, ON, you need to fly via the hubs: Nanaimo to Vancouver (an Air Canada hub), Vancouver to Toronto (another Air Canada hub), and then finally Toronto to London.
A Distributed Network: This was Baran’s idea – a true network. It is closer to a network of highways or arterial roads in a city: there are many alternate routes from Point A to Point B.
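Baran’s survivability argument can be sketched in a few lines of Python. The six-node topology below is invented for illustration (it is not from Baran’s paper), but it captures his point: with redundant links, destroying a node does not necessarily disconnect the network.

```python
# Sketch of Baran's survivability argument: in a distributed network with
# redundant links, traffic can route around destroyed nodes. The topology
# here is illustrative only.

from collections import deque

links = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "E", "F"},
    "E": {"C", "D", "F"},
    "F": {"D", "E"},
}

def reachable(start, goal, destroyed=frozenset()):
    """Breadth-first search from start to goal, skipping destroyed nodes."""
    if start in destroyed or goal in destroyed:
        return False
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in links[node] - destroyed - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

print(reachable("A", "F"))                        # True
print(reachable("A", "F", destroyed={"D"}))       # True: the C-E route survives
print(reachable("A", "F", destroyed={"D", "E"}))  # False: both paths are cut
```

A centralized network fails this test immediately: knock out the hub, and every node is cut off from every other.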
This starts a whole process going.
To use a distributed network, you need packets. Baran introduces these in his document as well: a big message must be broken down into smaller parts. In his scheme, each packet would be 1024 “bits” of computer information. Think of packets as postcards: the start of the message, the address, some other metadata, some text, and then the end of the message. We will talk more about packets soon, as they are conceptually key to how the Internet works today.
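The postcard idea can be sketched as code. This is a hypothetical illustration, not Baran’s actual design: the header fields (`dest`, `seq`, `total`) and function names are invented for the example, though the 1024-bit (128-byte) payload size follows the figure in the text.

```python
# Hypothetical sketch of packetization: a long message is cut into fixed-size
# pieces, each stamped with metadata (like a postcard), so the pieces can
# travel independently and be reassembled at the destination.

PAYLOAD_BYTES = 128  # 1024 bits, as in Baran's proposal, equals 128 bytes

def to_packets(message: bytes, dest: str):
    chunks = [message[i:i + PAYLOAD_BYTES]
              for i in range(0, len(message), PAYLOAD_BYTES)]
    return [{"dest": dest,          # the address on the postcard
             "seq": i,              # this packet's position in the message
             "total": len(chunks),  # how many packets the receiver expects
             "payload": chunk}
            for i, chunk in enumerate(chunks)]

def reassemble(packets):
    # Packets may arrive out of order; sequence numbers restore the order.
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

message = b"x" * 300  # a 300-byte message splits into three packets
packets = to_packets(message, dest="host-B")
print(len(packets))                              # 3
print(reassemble(reversed(packets)) == message)  # True, even out of order
```

Real Internet packets (IP datagrams) carry richer headers, but the principle is the same: each piece is independently addressed and the message is rebuilt at the far end.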
Module 6c Individual Activity: Baran reading
Let’s take a quick break. You have all read the Baran reading. Answer the following questions related to this reading:
How much of Baran’s vision resonates with you today?
What are his major arguments?
For instructions about how to submit this activity, refer to the Individual Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
The network we are building towards in this lecture, however, is called the ARPAnet and not the RANDnet. While Baran advanced an interesting idea, it was never built. The Air Force read the RAND report and was interested, but a bureaucratic reorganization meant that the agency that would build this network ended up being the Defense Communications Agency. Baran worried that a Pentagon agency like that would not let him actually build something great, so he decided to pull the plug and wait until a more capable agency came along.
Eventually, that would be ARPA, when they read his material.
The White Elephant and Time Sharing
Meanwhile, let’s go back to ARPA. ARPA, around 1964 and 1965, is facing frustrations of its own: it has received a large computer that it didn’t want.
The Air Force had ordered a massive prototype system (the AN/FSQ-32) that would run a bomber defence system, but by the early 1960s, it was clear that nuclear threats would not be coming via bombers but via intercontinental missiles instead. Their command and control computer is thus a white elephant (aptly described by Wikipedia as a “possession which its owner cannot dispose of and whose cost, particularly that of maintenance, is out of proportion to its usefulness”), and they decide to drop it off on ARPA’s doorstep. Alongside the computer, the Air Force also sends along the contractors they have hired to run it.
Uh oh.
Licklider inherits this machine and realizes that he needs to make this computer valuable. It is a big machine that is being used only for batch processing, where it handles one job at a time, delivered by a technician. Licklider realizes that this is a big, powerful machine that many people should use, and begins to fund time-sharing . Time-sharing is a computational technique that lets multiple people use the same computer at the same time, usually through different terminals connected to the one computer. Out of this, ARPA begins to move things forward so that multiple people can remotely use this one powerful computer: this leads to funding awards that support things like Douglas Engelbart’s NLS (which we saw last module), e-mail programs at MIT, and beyond.
At this point, we have time-sharing, big computers, and conceptual ideas of networks floating around.
Figure 4: An earlier version of a bomber computer system. (Hernandez, Tech Time Warp )
The Three Terminals: Bringing it All Together
Enter the final big story that sees the idea of the Internet come together. Licklider leaves ARPA and Robert Taylor takes over as director in 1965.
In this famous story, Taylor surveyed his Pentagon office in 1966. He had three different computer terminals. One was connected to a defence contractor’s network in Santa Monica, California; another to the University of California’s Berkeley campus; and the third to a network at the Massachusetts Institute of Technology near Boston. Each terminal had its own separate set of access commands and its own separate community, accessible only through physically different systems. While each was powerful in and of itself, he wondered if there was a way to bring all three computers together – to let different networks, users, and computer types interact with each other.
Figure 5: Robert Taylor. (Campbell, 2008)
Figure 6: Taylor’s three terminals, each connecting to a different network. mayrum/iStock/Getty Images
If you have these three terminals, there ought to be one terminal that goes anywhere you want to go.
(Taylor, 1968)
Taylor hoped to build a computer network to connect the ARPA-sponsored projects together, if nothing else, to let him communicate to all of them through one terminal.
Figure 7: Taylor’s vision of one terminal connecting to all three networks. mayrum/iStock/Getty Images
It would require the three building blocks that we have spoken about to come together. These are, in essence, the building blocks of the modern Internet:
Time Sharing
Packet Switching
Common Communications Protocols
The first building block is time sharing, which ARPA is working on thanks to their White Elephant of a computer. The second is packet switching, which needed to be developed so Baran’s vision of a distributed network could work. As we’ve seen, this requires that a message be broken down into parts and sent independently by routers, so that it can be reassembled at its destination. Other work is being done that makes this a possibility, most notably in 1966 by the National Physical Laboratory (NPL) in the United Kingdom. Finally, this would all only work if there was a common standard so that Taylor’s three computers could talk to each other.
Taylor wondered if these three factors could be brought together into one system. In 1968, together with J. C. R. Licklider, he wrote a paper called “The Computer as a Communication Device.” It articulated the idea of an ARPANET, which would allow networks to talk to each other without having to translate code to work on each individual computer. In this lies the core to the Internet as a standardized network of networks, which we discuss in the following section.
6d. Building the ARPANET
J.C.R. Licklider and Robert Taylor’s “Computer as a Communication Device” paper outlined the vision for an “ARPANET”, which would initially consist of fourteen diverse computers that could share each other’s resources. Crucially, it goes beyond articulating the physical hardware that would make this possible and also reflects on some of the community effects that might follow. The authors speculate that “life will be happier for the on-line individual because the people with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity”. (I’m not sure… does Reddit make us all happier?)
With this basic vision of the ARPANET coming together, we should pause and return to one of the questions that I opened this module with: What is the link between the Internet and Nuclear Armageddon? We can see that it was inspired by the context of the Cold War and nuclear warfare. It is not built as a nuclear command-and-control system, but it is inspired by ideas of this, notably thanks to Baran.
Figure 1: Excerpt from “The Computer as a Communication Device,” p. 32, illustrating the concept of nodes. Image description . (Licklider and Taylor, 1968)
First, however, ARPANET needed to be transformed from vision to reality. The contract to construct ARPANET was awarded to the research firm Bolt, Beranek, and Newman Inc., or BBN. They designed a series of Interface Message Processors , or IMPs, that would route messages to where they needed to go. The IMPs would enable time-sharing on a massive scale. Crucially, this meant that computers on the ARPANET would not send messages to each other – each would be connected to an IMP which would communicate to other IMPs, and in turn, to the computers themselves. IMPs are predecessors to today’s routers , the backbone of the modern Internet.
This figure explains how it might work:
Figure 2: IMPs send messages to each other. Image description . © University of Waterloo
Note that the IMPs talk to each other, not the computers. This is essential in making the network function, as the computers – which may be running different operating systems – do not have to worry about connecting with each other, just with their attached IMP.
Figure 3: An Interface Message Processor (IMP). (FastLizard4, 2011)
Remember our distributed network – it is the IMP that makes this possible!
By 1969, ARPANET is online – and is just four sites, or nodes. There is one at Stanford Research Institute (SRI) in California, one at University of California, Santa Barbara (UCSB), one at the University of California at Los Angeles (UCLA), and one at the University of Utah in Salt Lake City (UTAH). Here is what the network looked like:
Figure 4: Nodes in the ARPANET network, 1969. (Walker, 1978)
It continues to grow. It is publicly demonstrated in 1972, and by 1973, the network is quite a bit larger. In the following image, you can see Stanford; the University of Southern California in Los Angeles; Case Western in Cleveland; Lincoln Labs, MIT, BBN, and Harvard in the Boston area; ARPA in Washington, D.C.; and the University of Illinois at Urbana-Champaign.
Figure 5: Nodes in the ARPANET network, 1973. Image description . (Walker, 1978)
Imagine how messages could then travel. Picture a message being sent from SRI in Stanford, California (in the upper-left corner of the image) to ARPA in Washington, D.C. (at the mid-right). The message could go via Los Angeles (thanks to the USC node in the lower left); via Utah, Illinois, and MIT; via Harvard; or via Cleveland. This is the majesty of the distributed network. If nodes went down, messages could route around them. This is the Baran vision realized. If a nuclear attack, let’s say, destroyed Cleveland or Boston, messages could still find different paths. Redundancy was built into this system.
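The idea of routing around failures can be sketched in a few lines of code. The topology below is a simplified, invented network loosely echoing the map above – not the real 1973 ARPANET link list:

```python
from collections import deque

# Invented, simplified topology for illustration; not the actual ARPANET links.
links = {
    "SRI": {"UCLA", "UTAH"},
    "UCLA": {"SRI", "USC"},
    "USC": {"UCLA", "ARPA"},
    "UTAH": {"SRI", "ILLINOIS"},
    "ILLINOIS": {"UTAH", "MIT"},
    "MIT": {"ILLINOIS", "HARVARD"},
    "HARVARD": {"MIT", "ARPA"},
    "ARPA": {"USC", "HARVARD"},
}

def find_path(src, dst, down=frozenset()):
    """Breadth-first search for a route, skipping any failed ("down") nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - seen:
            if nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

print(find_path("SRI", "ARPA"))                # shortest route, via UCLA and USC
print(find_path("SRI", "ARPA", down={"USC"}))  # reroutes via Utah, Illinois, MIT, Harvard
```

Knock out one node and the message simply takes another path – exactly the redundancy Baran was after.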
The network begins to run into trouble as more and more people use it – the initial version of the ARPANET does not scale well. This is because the network itself was responsible for the integrity of the data: each IMP router had the job of receiving a packet, making sure that it had been received properly, and then passing it on to the next IMP. In other words, if a message was going from Stanford Research Institute to ARPA, it passed through at least eight routers, each of which verified the data. This gave the IMPs a lot of work, leading to network congestion as packets waited in long queues to be processed and verified at each hop.
Imagine this in a real-world analogy: a letter is being sent from Vancouver to Halifax, and at every postal facility it touches, a mail employee has to carefully inspect the letter to make sure it is in perfect shape. Sure, they would catch problems a bit earlier (when the letter arrived in Edmonton, a damaged one could be sent back and resent) – but mail would in general slow to a snail’s pace.
To solve that problem, ARPANET engineers looked across the Atlantic to a French research network run by the Institut de Recherche en Informatique et en Automatique: CYCLADES, located in Rocquencourt, France. This had a different paradigm, in which the verification of data was done only by the senders and receivers, rather than by anything in the middle. The infrastructure in the CYCLADES model did not have to do the heavy lifting of verifying data. By adopting this new approach, the modern Internet could begin to scale.
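A toy contrast between the two paradigms might look like this. The function names and the CRC32 checksum are stand-ins invented for illustration; the real IMPs and CYCLADES used their own error-detection mechanisms:

```python
import zlib

# Toy illustration: hop-by-hop checking (ARPANET-style) versus
# end-to-end checking (CYCLADES-style). CRC32 stands in for whatever
# error detection each real system actually used.

def make_packet(payload: bytes) -> dict:
    return {"payload": payload, "checksum": zlib.crc32(payload)}

def verify(packet: dict) -> bool:
    return zlib.crc32(packet["payload"]) == packet["checksum"]

def hop_by_hop(packet: dict, hops: list) -> int:
    """Every intermediate router re-verifies the packet: safe, but slow."""
    checks = 0
    for _ in hops:           # each IMP inspects the packet before forwarding
        assert verify(packet)
        checks += 1
    return checks

def end_to_end(packet: dict, hops: list) -> int:
    """Routers just forward; only the receiving host verifies."""
    for _ in hops:
        pass                 # forward blindly, no inspection
    assert verify(packet)    # one check, at the destination
    return 1

route = [f"IMP-{i}" for i in range(8)]   # the eight routers mentioned above
pkt = make_packet(b"hello from SRI to ARPA")
print(hop_by_hop(pkt, route), "checks vs", end_to_end(pkt, route))  # 8 checks vs 1
```

Moving the check to the endpoints takes the postal workers out of the inspection business, which is what lets the network scale.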
So what brings this all together into what we think of as the modern Internet? The United States had ARPANET, the French had CYCLADES, and the British – who we have seen briefly before – had their NPL network. With these networks, there now needed to be one single standard to bring them all together. How could the U.S. computer talk to the French computer? In the next section, we will explore this single standard, which, ironically, would herald the end of the ARPANET as well.
Figure 6: ARPANET in the U.S., CYCLADES in France, and NPL in the U.K., none of which speak to each other. Image description . Yevhenii Dubinko/iStock/Getty Images
6e. One Protocol to Rule Them All: TCP/IP and the Birth of the Modern Internet
Transmission Control Protocol/Internet Protocol
TCP/IP, the Transmission Control Protocol/Internet Protocol we have encountered at the beginning of this module, was that unifying standard. ARPANET engineer Robert Kahn approached Vint Cerf, then a junior assistant professor at Stanford University, and the two set to work designing the protocol that would enable network intercommunication. In 1974, the specifications of the TCP program – authored by Cerf, Yogen Dalal, and Carl Sunshine – were released to the broader community for comments and input.
The resulting document, Specification of Internet Transmission Control Program, makes for interesting reading about the birth of the modern Internet.
Processes exchange finite length LETTERS as a way of communicating; thus, letter boundaries are significant. However, the length of a letter may be such that it must be broken into FRAGMENTS before it can be transmitted to its destination. We assume that the fragments will normally be reassembled into a letter before being passed to the receiving process… We specifically assume that fragments are transmitted from Host to Host through means of a PACKET SWITCHING NETWORK.
(Cerf et al., 1974)
Figure 1: Letters broken into fragments in a network. mayrum/iStock/Getty Images
Impressed, the Defense Advanced Research Projects Agency (DARPA), ARPA’s successor, issued contracts to implement TCP/IP. While there were competing standards, TCP/IP was incorporated into the UNIX operating system, which today forms the foundation of Linux and Mac operating systems, and was also adopted as the standard for defense communications in the United States. Once TCP/IP is baked into UNIX in 1983 and declared public domain in 1989, it is increasingly present in modern computers. Indeed, the flavour of UNIX that incorporated TCP/IP is the ancestor of many operating systems today, such as Apple’s macOS and many different Linux distributions – and TCP/IP even ends up in Windows 3.1!
In January 1983, ARPANET ends up adopting TCP/IP, and while ARPANET sticks around until its full decommissioning in 1990, once it adopts TCP/IP, it becomes a subnet of the broader Internet. ARPANET is no longer special.
Soon, corporations begin to adopt TCP/IP – they drop their own proprietary network protocols – and the Internet becomes an open platform.
By 1990, the decision is made to decommission ARPANET, as it is no longer needed with the modern Internet that has grown around it. Vint Cerf, on the occasion of its shutdown on 28 February 1990, wrote the following lamentation:
Requiem of the ARPANET
… It was the first, and being first, was best,
But now we lay it down to ever rest.
Now pause with me a moment, shed some tears.
For auld lang syne, for love, for years and years
Of faithful service, duty done, I weep.
Lay down thy packet, now, O friend, and sleep.
(Cerf, 1990)
The Internet begins to grow from this point.
We have covered a lot of ground in this module, from the advent of the Cold War and the Space Race, to the White Elephant dropped on the doorstep of ARPA, and to the three terminals that Robert Taylor looked at with frustration to wonder whether there was a way to connect them all together.
At the beginning of this module, we explored the idea of the Internet being designed to resist Nuclear Armageddon. I have tried to contest that narrative a bit, but throughout my words and the readings, you can probably find a way to make a competing argument. I would like you to try to do that in the Module 6 Discussion Activity.
Module 6 Group Discussion Activity
For this discussion, I’d like you to imagine that you’re going to write an essay with two or three main arguments. What would you argue if somebody asked you to explain why the Internet was created?
Go to the Why Was the Internet Created? Discussion Topic and advance your thesis and structure. This will serve both to chat a bit about the history of the Internet, but also give us a chance to workshop our theses for our final research paper!
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Module 7: Counterculture, Hackers, and the Political Shift
7a. Introduction: Is Technology a Force For Good?
Pause and Reflect: A Force for Good?
Before we go back in time to the 1950s and 1960s, I would like to begin this module with a question. Take two minutes to reflect on and answer the following question:
“Technology is a force for good.” Do you agree with that statement? Why?
Note: This is for individual reflection only; you are not required to submit your answers.
The reason I started with this question is that the major theme we consider in this module is the transition from seeing technology as an oppressive force to seeing it as a liberatory one. This will have a profound impact on how audiences receive the Internet and the Web (and how they are understood today).
Let me then start with the two main vignettes from the reading for this module, Fred Turner’s From Counterculture to Cyberculture . To me, they really underscore the shift at play here – one of the most interesting transformations in the history of modern computing and technology.
Figure 1: Mario Savio addresses students at the University of California on December 7, 1964. (AP Photo / Robert. W. Klein, 1964)
We open in December 1964, when Mario Savio, leader of the Free Speech Movement at Berkeley, stands and speaks before 5,000 people in the main plaza at UC Berkeley. He is denouncing the world that he sees around him, in a speech that helps kick off the New Left in the United States – a movement that would spread around the world.
“There’s a time when the operation of the machine becomes so odious — makes you so sick at heart — that you can’t take part. You can’t even passively take part. And you’ve got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you’ve got to make it stop. And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all.”
(Savio, 1964)
Machines are a metaphor for evil. Students are raw material to be processed by machines.
The actual specifics of the Free Speech Movement are not terribly germane here – in a nutshell, at this time political fundraising on campus was limited to just the Democratic and Republican student clubs, and the Free Speech Movement was agitating for the right to advertise and solicit donations for the Civil Rights Movement – but the rhetoric is an important milestone. As Mario Savio noted in 1965:
“At Cal, you’re little more than an IBM Card.”
(Savio, 1965)
This was the idea that the university – just like the military, the corporate world, and other pillars of American society – was using computers to dehumanize people . That it would take your rich humanity and reduce you to the content of a punch card, a series of 1s and 0s.
Figure 2: People as punch cards. © University of Waterloo
You might feel like this a bit in your own experience at Waterloo when you sign your emails with your student numbers – that you are not student John Smith but are instead 123 456 789.
If we were at a university in 1964 or 1965, hanging out with student protestors, we would probably understand technology as a force of corporations, of the military, of the “man” who runs the university. In short, it is the opposite of liberatory.
How times have changed.
Today, if you visit Apple.com, you are likely to see something like this:
Figure 3: Apple Watch Series 4 advertisement . (Apple Inc., 2018)
Technology is now seen as liberatory. On the HBO show Silicon Valley , they lampoon that almost every tech pitch concludes with “to make the world a better place”, whether it is through compression algorithms or new routines for pushing code up to GitHub. It is a cliché.
Figure 4: Headlines announcing startups “making the world a better place”. (Choi, 2017); (52 Insights, 2017); (Biswas, 2018); (Palmer-Derrien, 2018)
Figure 5: The perception of technology shifts from dehumanizing to liberating in 30 years. (AP Photo / Robert. W. Klein, 1964) ; (European Graduate School, 2006)
*Technology as a Force for Good?*

7b. Conformity in the 1950s and 1960s
The end of the Second World War in the United States and Canada brought a period of profound affluence. Economies, especially that of the United States, begin to grow dramatically. Returning soldiers begin to have babies in this climate of abundance, and the population begins to explode as well. Things look pretty good, right?
Beneath the happy veneer of American society, however, there is a growing wave of cultural fear. Gender boundaries begin to stiffen, relegating men and women to specific gender roles. The fear of nuclear war, which we discussed in the last module, looms over everything. Mutually assured destruction may be a good way to prevent nuclear war, but it is still terrifying. More importantly, to some onlookers, the world in which they lived and worked seemed to be dominated by large, bureaucratic organizations: large corporations, the powerful American military and its accompanying defence industry, and even the large research universities. In other words, the world itself is beginning to seem like a machine.
Challenging the Status Quo
Figure 1: A suburb in the 1950s. (Holt, 2010)
Children growing up in abundance begin to challenge this status quo in two main ways. The first was the New Left , or student radicals, who were interested in politically challenging the system . While they posed a significant challenge to politics, they were also mainstream: they were interested in forming political parties, influencing the vote, and mobilizing people. A parallel today might be students in the Canadian Federation of Students or the left-wing of the New Democratic Party or Democratic Party in the United States.
Figure 2: The New Left challenges the status quo politically (left image); the counterculture challenges it culturally (right image). Image description . (Mjlovas, 2005); (Life Magazine, 1969)
The second way, however, is the counterculture – or “hippies” of popular parlance. These young people similarly see society as corrupt and falling apart, but unlike the politically-inclined students of the New Left, they were focused on culturally challenging the status quo . Think sex, drugs, and rock-and-roll – they might agree with New Leftists that society needs to change, but would laugh at those trying to earnestly change it through politics and just instead drop LSD. Note that I am summarizing an entire body of research here in two paragraphs – I wrote a book just on the New Left in Canada! – but it gets at the difference here. To change the world, the counterculture would look towards the mind.
One shape that this countercultural activism took was “ the new communalism ”. It was born of an ideal that the system was not working because of how it was organized – so what if they could do something completely different? Communalists drive out to the countryside, trying to build self-sufficient communes that can go back to what they imagined life was like before the industrial revolution. I say “imagined” because this was very much a utopian vision: that before machines and complex society, people were egalitarian and free. This is a myth, but it is what they believed.
Let’s take stock of these three main forces and what they are reacting to. If we were in the early-to-mid 1960s, we might believe that technology is not great:
It’s what the military uses to plan nuclear attacks.
It’s what your university uses to reduce you to a student number and track you.
It’s what the government uses to reduce you to a social insurance number.
And it’s what, if you work for a corporation, your managers might use to dehumanize you.
To get away from it, radicals from the counterculture are thinking – let’s go out to the countryside and farm, let’s get away from it all.
So where does the Internet come in?
Enter Stewart Brand
Strangely enough, it will be through Stewart Brand , who holds the record (as a few historians have joked with me) for having been everywhere at just the right time. His reaction to the forces I have outlined above will help people understand the Web.
Brand enters our story in 1957, when he is a freshman at Stanford University in the San Francisco Bay Area. Brand is worried about the Soviet Union invading and occupying the United States.
Figure 3: Stewart Brand. (cellanr, 2010)
He also sees some of the same problems with the world around him – with the besuited business people who march off to work every day within the capitalist world of the United States. So what to do in this kind of world?
To shape his worldview, Brand thus looks to three major influences that impact his thinking.
The first is cybernetics – the interaction of human and machine, which is discussed in this week’s readings. After Brand graduates from Stanford, he joins the U.S. Army and is posted to Fort Dix in New Jersey. A quick train ride from Manhattan, he soon begins to hang around artists and other people within the counterculture. He begins to read about cybernetics and meets people who recognize that technology might be used for the common good. We will return to this shortly.
Figure 4: Marshall McLuhan. (Library and Archives Canada, 1945)
The second influence is the ideas of Marshall McLuhan . McLuhan was an English professor at the University of Toronto – and indeed had been for almost twenty years at this point – but he had begun branching out into cybernetics. In 1962, he publishes The Gutenberg Galaxy . It argues that humanity is leaving the typographic age and entering an electronic one; that type has certain characteristics (it is sequential and segmented, and leads to bureaucratization and rationalization); and that electronic technologies might break this down and bring humans into a new age. Through technology, we could perhaps have a “global village”.
For Brand, McLuhan presents an opportunity that maybe technology could resolve the problems he feared about the modern world – of being trapped.
Finally, after Brand leaves the army, he ends up visiting Indian Reservations in the United States. He develops an idealized vision of Native Americans , seeing in them a way to “be at home again” and to become attached to the land. This vision is heavily idealized, to be honest, with few links to reality, but it is important for understanding why he becomes interested in joining the new communalists.
These three main forces – McLuhan, Native Americans, and Cybernetics – come together in the last part of this section: the Whole Earth Catalog .
The Whole Earth Catalog : A Prototype for the Web?
By March 1968, Brand realizes that many people are influenced by these three factors and are now moving out to the countryside. They are going to need things: blueprints to build their own houses, books to teach them how to survive, magazines to entertain them – all sorts of stuff to create new societies. He begins by creating a simple catalogue, and over the next four years, between 1968 and 1971, the catalogue would eventually grow to 448 pages. The cover image that he chooses for the first issue – the “blue marble” showing the “whole earth” – was an important one. Brand had been lobbying NASA to release a (rumoured) image of the earth since the mid-1960s, as he felt it would be an icon of unity. The spring 1969 Whole Earth Catalog would also feature a similar image, speaking to Brand’s continued interest in it.
Figure 5: WHOLE EARTH CATALOG 1968. Image description . (Brand, 1968)
The Whole Earth Catalog (WEC) is a complicated document to understand. It is not quite a book or a catalogue itself, but is best understood, as Turner argues, as a network forum . It is a place where readers can exchange ideas and begin to tease out new ways of working with each other – each from their own backgrounds or disciplines. Individuals from communes participate, of course, but so too do people based in universities, science and technology, the art scene, and even the psychedelic community.
I’d like you to read a short excerpt from the Whole Earth Catalog (PDF) .
Reflect on and respond to the following questions; they will help inform your reading:
What’s going on here?
What is the purpose of this catalogue?
What would it offer a reader?
How is it a catalogue? How is it different from a traditional catalogue?
For instructions about how to submit this activity, refer to the Individual Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
The Whole Earth Catalog lets readers imagine a community, as people all over the United States can chat and recommend products – in other words, they can communicate! It strikes a chord: by 1971, the final catalogue contains 1,072 items and, more importantly, two and a half million copies have been sold in total.
So what happens when the WEC goes online? Read on to find out.
7c. The Whole Earth… Online?
The Whole Earth Catalog is published in Menlo Park, California – which happens to be where Stanford Research Institute is located. You might remember Stanford Research Institute from Module 5, when we met Douglas Engelbart and his oN-Line System (NLS). What happens when you have people around Stanford Research Institute and the Whole Earth Catalog co-existing in the same city? The groups of people begin to mingle and collaborate, with Stewart Brand in the middle.
Figure 1: Brand and his Whole Earth Catalog intersect with Stanford Research Institute and Douglas Engelbart in Menlo Park, CA. RosLilly/iStock/Getty Images (map); pixomedesign/iStock/Getty Images (building); (cellanr, 2010); (Engelbart, 2008) ; Fourleafclover/iStock/Getty Images (earth)
The idea of the “ personal computer ” would be at the core of this, as it made computing look accessible. Remember your reaction to the NLS, the hypertext system that we saw as part of the “Mother of All Demos” in Module 5. It let a person sit in front of a CRT monitor with a QWERTY keyboard and a mouse, connecting text via hyperlinks, jumping from one point to another in a text, conducting video conferencing, and doing keyword searches throughout their data. One fun fact: the camera operator for the “Mother of All Demos” was actually Stewart Brand!
So there is potential here, you might think.
NLS meets the Whole Earth Catalog.
Similarly to the NLS, the WEC linked multiple geographically distributed groups and allowed them to collaborate (albeit not in real time). The co-existence of these two groups – the engineers at SRI and the folks producing and distributing the Whole Earth Catalog – plants seeds that begin to germinate. As the NLS team falls apart due to a lack of funding, many of its engineers end up at Xerox PARC. People there begin to understand the WEC as an information tool, and as an analogue to what they had been working on with the NLS: a hyperlinked information system.
This is ideologically important as well, as it lets these engineers at Xerox PARC (and elsewhere in the Bay Area) begin to imagine their research not just as something that needed to be done in research collaborations with groups like ARPA or Stanford University, but that they could be part of something different – a new, communalist project!
Think of this today. When you think of doing cutting-edge technological work, you are not thinking about going to the government for grants or putting on your button-up shirt with a tie… but perhaps of something that is more akin to this kind of communalist project.
In 1972, this sort of ethos is introduced to the broader public through an article that Brand writes for Rolling Stone , entitled “Spacewar: Fanatic Life and Symbolic Death Among the Computer Bums”. In the article, Brand watches a group of graduate students play Spacewar , a popular video game at the time, and observes how they use computer networks to play against each other. This is not the AN/FSQ-32, but an exciting computer that’s letting these students invent a new collaborative culture of play using technology.
Figure 2: Spacewar brings together user-friendliness and information sharing. (Graetz, et al., 1961); (Love, 2013)
Here we see two things coming together: the vision of user-friendly, time-sharing computing articulated in the NLS system, and the vision of the Whole Earth information community. Brand suggested that these were two sides of the same coin. Articles like “Spacewar” helped make computers seem human and “personal”, as opposed to the impersonal mainframe. They also had the side effect of making programmers look cool!
Everything looks like it is finally coming together until the economy begins to crash. Remember how I described the economic boom that started after the Second World War? By the mid-1970s, the wheels begin to come off the economy and the boom becomes bust. In 1973, the OPEC Oil Crisis sees fuel and oil prices rise dramatically (some 300% by 1974) and the North American economy enters a recession. In the United States, this is marked by an economic condition that had previously been seen as impossible: “stagflation,” where inflation and stagnant economic growth happen at the same time. The counterculture and the New Left, which had been waning by this time, largely disappear or are dramatically transformed as an effect of this economic crisis.
Figure 3: Oil prices 1970-1979. © University of Waterloo
Throughout the 1970s, despite economic travails throughout the United States, the computing industry keeps growing. The decade sees the rise of the minicomputer and then the microcomputer, and by the late 1970s to early 1980s, computers are beginning to enter affluent middle-class homes.
By the 1980s, people begin thinking that Stewart Brand might want to do for computers what he did with the Whole Earth Catalog : help people know what tools to use and how to use these new, ever-changing machines.
7d. The 1980s: The Hacker Conference and the WELL
In 1984, Steven Levy published an influential book: Hackers: Heroes of the Computer Revolution . Levy was trying to figure out what made for a “hacker ethic” and identified a series of principles. One such principle was that hackers were “hands-on”:
Hackers believe that essential lessons can be learned about the systems – about the world – from taking things apart, seeing how they work, and using this knowledge to create new and even more interesting things. (Levy, 1984)
Levy had been part of the Whole Earth Catalog community, and when he showed Brand and others in that community a copy of his book, they decided to bring together a gathering of hackers.
Figure 1: Steven Levy. (Donck, 2006)
With the Hackers Conference of 1984, which brought together figures such as Steve Wozniak, Ted Nelson, Richard Stallman, and others, Brand was back in technology. Crucially, at this conference, attendees tried to define what a “hacker” was, grappling with the dilemmas around openness and giving away code that continue to be at the heart of software development debates today, and beyond. While they did not come to a consensus at the conference, they began to form a group identity around the idea of a “hacker”.
Through Brand, and the pivotal involvement of folks who had been part of the Whole Earth Catalog community, many of these new communalist ideas would arguably live on within hacker culture!
The Whole Earth ‘Lectronic Link, or WELL
Figure 2: Well Logo. (Well.com, 2016)
Enter the WELL, or the Whole Earth ‘Lectronic Link. Started in 1985, the WELL was designed – as Fred Turner has stressed in this week’s reading – to recreate the ideas that had been at the heart of the Whole Earth Catalog in a virtual environment. In other words, to try to create community online. Many of the ideas that we take for granted around the Web and the Internet today come out of groups like the WELL, especially the early ideas of an “electronic frontier” and a “virtual community”.
The WELL began as a Bulletin Board System, or BBS. In a nutshell, BBS technology goes back to 1978. Born in a Chicago snowstorm, when computer enthusiasts who couldn’t get out of their houses to physically meet decided to exchange digital information through modems over phone lines, BBSes are, by the early 1980s, grassroots networks run by people out of their basements, garages, and living rooms. People would run their own BBSes, and other computer users would call in to play games, share stories, swap messages, and socialize from the comfort of their homes.
The WELL is in many ways similar: it is organized along the lines of “conferences” where people exchange messages. You would navigate to the “Arts and Letters” conference, for example, or one devoted to the Grateful Dead, and then post messages and chat with others about these shared interests.
What makes the WELL different? There are other bulletin boards and commercial services rising at this time. Most BBSes operate at a very local scale – i.e. a local BBS serving people in Cleveland or Mississauga. On the other end of the spectrum are big companies like Prodigy or CompuServe, which aim to be comprehensive portals for users – they want to provide news, health information, financial news, etc. They are advancing a vision of networked communication where information is a commodity to be exchanged or sold, and where users are consumers of information, not generators of it.
The WELL, on the other hand, wants to be a place where people can talk and, fittingly given its name, realize the communal vision of the Whole Earth Catalog online. It thus celebrated “cybernetic forms of collaborative organization” (Turner, p. 148) and brought together Bay Area technology types, academics, journalists, consultants, etc., all leading towards the formation of a cyberculture.
If this ethos just stayed on the WELL, that would be one thing – we probably would not be talking about it today. It begins to go mainstream thanks to one of the largest technology magazines of the 1990s.
The WELL Ethos Goes Mainstream in Wired Magazine
Figure 3: First cover of WIRED. (Condé Nast, 1993)
Wired magazine launches in March 1993. The inaugural issue argues that:
There are a lot of magazines about technology. Wired is not one of them. Wired is about the most powerful people on the planet today—the Digital Generation. These are the people who not only foresaw how the merger of computers, telecommunications, and the media is transforming life at the cusp of the new millennium, they are making it happen.
(Rossetto, 1993)
In other words, the Internet was a revolution, not just incremental change.
More importantly, for our lecture, it saw Wired and new technologies as part of a history stretching back to the 1960s. As the co-founder and president of Wired noted,
the ‘60s generation had a lot of power, but they didn’t have a lot of tools.
(Turner, 288)
The difference, to folks at Wired and elsewhere, was that the Internet might be able to provide that tool. While the WELL was a small bulletin board, its ethos arguably went mainstream with Wired magazine. The Whole Earth Review had been a networking space, as had the WELL, and many of the WELL’s most prominent members were well-positioned to begin writing articles in Wired. Indeed, the ethos is quite similar. We can still see this ethos throughout Silicon Valley politics today: libertarian politics have joined with countercultural aesthetics, coupled with the vision that technology can truly, as they say in HBO’s Silicon Valley, “make the world a better place”.
Hacking Politics with the Electronic Frontier
If you have been around Internet circles, you may have run into the Electronic Frontier Foundation – a group that defends civil liberties in cyberspace. The EFF adopted the “frontier” rhetoric that might have seemed familiar to those individuals who had decamped to the deserts some decades earlier, and was devoted to local governance and using technology to unite people. Wired described the EFF in a June 1994 article, “The Merry Pranksters Go to Washington,” as doing
“something that Mitch Kapor has wanted to do for three decades – find a way of preserving the ideology of the 1960s.”
(Quittner, 1994)
Figure 4: Electronic Frontier Foundation logo. (Electronic Frontier Foundation, n.d.)
7e. The Political Shift: Conclusions
This brings us back to the opening vignette of this module. In August 1995, Wired magazine featured a cover image of right-wing Republican politician Newt Gingrich. One of the key points of Fred Turner’s transformative book on this topic is that this represented a melding of the left-wing ideology seen around the WELL with right-wing libertarian ideologies like those of Newt Gingrich and the Republican Party of the 1990s. In Wired, contributors might argue (as George Gilder did in his “When Bandwidth is Free” article) that
“the chief effect of these [new] technologies is to put you in command again.”
(Gilder, 1993)
This dovetailed well with a right-wing vision that saw the information age as transcending the earlier welfare state.
This all crystallized in some ways with John Perry Barlow’s A Declaration of the Independence of Cyberspace. The former Grateful Dead lyricist wrote this in 1996 at the World Economic Forum in Davos, Switzerland.
Figure 1: Wired cover depicting Newt Gingrich, August 1995. (Condé Nast, 1995)
A Declaration of the Independence of Cyberspace
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.
Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.
You have not engaged in our great and gathering conversation, nor did you create the wealth of our marketplaces. You do not know our culture, our ethics, or the unwritten codes that already provide our society more order than could be obtained by any of your impositions.
You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts. Many of these problems don’t exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different.
Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.
We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.
Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge. Our identities may be distributed across many of your jurisdictions. The only law that all our constituent cultures would generally recognize is the Golden Rule. We hope we will be able to build our particular solutions on that basis. But we cannot accept the solutions you are attempting to impose.
In the United States, you have today created a law, the Telecommunications Reform Act, which repudiates your own Constitution and insults the dreams of Jefferson, Washington, Mill, Madison, DeToqueville, and Brandeis. These dreams must now be born anew in us.
You are terrified of your own children, since they are natives in a world where you will always be immigrants. Because you fear them, you entrust your bureaucracies with the parental responsibilities you are too cowardly to confront yourselves. In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits. We cannot separate the air that chokes from the air upon which wings beat.
In China, Germany, France, Russia, Singapore, Italy and the United States, you are trying to ward off the virus of liberty by erecting guard posts at the frontiers of Cyberspace. These may keep out the contagion for a small time, but they will not work in a world that will soon be blanketed in bit-bearing media.
Your increasingly obsolete information industries would perpetuate themselves by proposing laws, in America and elsewhere, that claim to own speech itself throughout the world. These laws would declare ideas to be another industrial product, no more noble than pig iron. In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish.
These increasingly hostile and colonial measures place us in the same position as those previous lovers of freedom and self-determination who had to reject the authorities of distant, uninformed powers. We must declare our virtual selves immune to your sovereignty, even as we continue to consent to your rule over our bodies. We will spread ourselves across the Planet so that no one can arrest our thoughts.
We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.
(Barlow, 1996)
Module 7 Group Discussion Activity
Take a few minutes to read John Perry Barlow’s “Declaration of the Independence of Cyberspace,” referenced above, and reflect on the following questions. Then provide your thoughts in the Independence of Cyberspace Discussion Topic , making sure to connect his rhetoric to the earlier documents you’ve read in this module, including the “Whole Earth Catalog” and the evolving rhetoric around technology since the 1960s.
What is Barlow arguing?
What evidence does he use?
Do you find it a persuasive read?
What political position does he adopt?
Connect it to one of the other documents that you have read in this module. Is it more of the same, or something different?
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
So, what does this all mean?
We have come a long way in the last half century. In the late 1950s and the 1960s, influential thought-leaders like Stewart Brand came of age fearing things as wide-ranging as the Soviet Union, the bureaucracy of the United States, and the machinery that some feared being swallowed up by. Over the following decades, they pushed back against these forces and ultimately saw the prospect for liberation in technology.
Figure 2: Shift in the perception of computers from oppressive to liberating in 30 years. mayrum/iStock/Getty Images; LongQuattro/iStock/Getty Images
When the Web began to explode in the popular consciousness and regular people began adopting networked communication, a process that we will discuss in the next module, the groundwork had been laid for how the public could imagine the online: virtual communities and electronic frontiers. But just as the communes of the 1960s had a dark side (remember, they were exclusionary and largely middle-class), so too does this new world: in some cases there were more places to work, but the work was based on contracting; there was more choice, but also fewer ties to each other.
Module 8: The World Wide Web
8a: Introduction to the Web
We have already seen some conceptual forerunners to the World Wide Web in Module 5, which looked at the idea and evolution of hypertext: the idea which came from Vannevar Bush’s Memex in his famous 1945 Atlantic article “As We May Think”; Douglas Engelbart’s “Mother of All Demos” with the NLS in 1968, and the conceptual idea of Xanadu from Ted Nelson. Yet while people had been experimenting, developing, and playing with the idea of hypertext, their ideas didn’t catch on with a broader public.
The Web would change all that.
Figure 1: Conceptual predecessors of hypertext. Image description . (Office for Emergency Management & Library of Congress, c. 1940-1944); (Engelbart, 2008); (Gotanero, 2013)
The World of CERN: Early Moves towards the Web
To understand where the Web came from, we need to go to the world of the European Organization for Nuclear Research, or the Conseil Européen pour la Recherche Nucléaire (CERN), which was established in 1952. Two years later, in 1954, a formal treaty brought CERN into legal existence. Based in Geneva, Switzerland, but straddling the Swiss-French border, CERN would represent European-wide cooperation. By 1999, it would include members from not only Europe but also Israel, Japan, Russia, Turkey, and the United States – Canadian researchers are there, but as non-members.
CERN is a place that brings people together from all over the world to engage in fundamental, curiosity-driven research. While the focus is fundamentally physics, there are many spin-off technologies: the Web, which we discuss in this module, but also computer chip manufacturing, contraband detection, and even the techniques used to paint pop cans!
To understand the Web, we need to go back to CERN in the 1970s and 1980s, to understand the context in which a system like the Web made sense.
Figure 2: Professor Milligan at CERN. © Ian Milligan
CERN in the 1970s and 1980s
The 1970s were a period of excitement for CERN – they were dreaming of a new golden age of physics. Basically, they decided to build an antimatter factory, where they could collide matter and antimatter, in the form of protons and antiprotons, head-on, creating tiny fireballs of pure energy. This work laid the foundation for what would become the Large Hadron Collider, or LHC, many, many years later.
They begin to lay the plans for large colliders, and in this case, the largest one would be a ring about 27 kilometers around. It had to be that large to keep the corners gentle, because when you send an electron or positron around a corner, it loses energy by emitting x-rays. The machine accordingly has to be very, very big so that the curves are as gentle as possible. Indeed, when completed in the 2000s, it looked a lot like this:
Figure 3: Location of the 27 km-long tunnel of the Large Hadron Collider, with inset showing an internal view of the tunnel. Naeblys/iStock/GettyImages; (CERN, n.d.)
In order to run these theoretical experiments, you thus have to:
build highly-precise detectors,
write computer simulations,
develop the ability to grab data that is generated during a collision between matter and antimatter, and
eventually begin to analyze the data that you collect.
This requires very large collaborations . There are proposals for four main experiments; the smallest team in these early days is some 300 people and the largest is 700 people. Indeed, in 2012, when the article on the Higgs boson particle appeared, it was dedicated to colleagues who had died during the project’s multi-decade run.
Technology is needed to help facilitate these extremely large scientific collaborations.
Throughout the 1970s and 1980s, while CERN never does join the ARPANET (for complicated reasons), it does get quite networked. Different labs begin to be connected and they do have a packet-switching-based network that allows different research projects to share data. By the late 1980s, it does not matter that they have not joined the ARPANET, as the CERN network switches over to TCP/IP and is now part of the global Internet.
The technology is there for collaboration, in theory.
Enter Tim Berners-Lee and ENQUIRE
Into this context comes Tim Berners-Lee (b. 1955), a physicist who was trained at Oxford but begins to work at CERN for the first time in 1980. The context in which he sees CERN can be summed up as “TMI”: Too Much Information! CERN is much like it is today – researchers coming and going from the member countries (as well as non-member countries), many of them on six-month, one-year, or two-year contracts. They speak different languages, work on many different research projects, and most importantly, for the development of the Web, are saving their data in different, separate, and self-contained databases.
Responding to this, in 1980, Berners-Lee begins working on a program called ENQUIRE .
Figure 4: Tim Berners-Lee. (Clarke, 2014)
Figure 5: Enquire book cover. (Unknown, 1923)
ENQUIRE was based on an 1856 how-to book from Victorian England entitled Enquire Within Upon Everything . It is a book that tries to tell you how to do, well, everything – etiquette, recipes, laundry instructions, basic first aid, etc. The computer program ENQUIRE is accordingly designed to document everything as well – to “ document a system ”. You would create nodes and then link them together.
A user would create different nodes of information and create links between them. These would be questions such as:
What is xxx part of?
What is xxx composed of?
What must I alter if I change xxx?
What facilities does xxx use?
Where do I find out more about xxx?
Who or what produced xxx?
Figure 6: An example of how data relates to each other in ENQUIRE. © Ian Milligan
It had a few crucial characteristics that are important to understand.
It is cross-platform , meaning that computers with different operating systems can run it. In today’s terms, that would be like a program that could run on Windows, Mac, and Linux alike.
It could store data that users provided to it.
It would let users edit that data.
And, it would let users answer the questions they posed about the system from the data within it.
Figure 7: An example from the ENQUIRE manual. Image description . © University of Waterloo
We can see a few examples of this from the ENQUIRE manual .
It is similar to the Web in how it organizes material and connects things together, with a few key differences:
hyperlinks are bidirectional (i.e. if a page is linked, you can see that link and follow it back);
everything is editable; and
the ability to write and edit content is built right into the system.
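The bidirectional-link difference is easiest to see in a toy data structure. Below is a minimal, hypothetical sketch in Python (not ENQUIRE’s actual code, which is lost): every link records both its source and its destination, so a linked node can always be followed back – unlike links on today’s Web. The node names are invented for illustration.

```python
# Hypothetical sketch of ENQUIRE-style bidirectional links.
class Node:
    def __init__(self, name, description=""):
        self.name = name
        self.description = description
        self.links_out = []   # nodes this node points to
        self.links_in = []    # nodes that point to this node

def link(src, dst):
    """Create a bidirectional link: dst always 'knows' what points at it."""
    src.links_out.append(dst)
    dst.links_in.append(src)

# A question like "What is xxx part of?" becomes a lookup on incoming links.
module = Node("RF power supply", "Supplies power to the accelerator")
system = Node("Accelerator control system")
link(system, module)

print([n.name for n in module.links_in])  # → ['Accelerator control system']
```

On the Web, by contrast, only `links_out` exists: a page has no built-in way of knowing who links to it.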
Unfortunately, we do not know too much about ENQUIRE today. Berners-Lee left CERN in 1980 and didn’t take the system with him. He gave it to a colleague, who lost it. But it is a good starting idea for organizing information in a hypertextual way.
The Return of Berners-Lee: The Web
Berners-Lee leaves CERN in 1980 but after working elsewhere, comes back in 1984. Things are worse than ever in terms of disorganization, and the difficulty of teams working together but using different systems. But, as we have seen from the reading, Berners-Lee continues with his side-project work of trying to come up with a system to make it all legible.
Indeed, as he notes in the reading from this module, you can see the common element of connection coming across in these projects:
What matters is in the connection. It isn’t the letters, it’s the way they’re strung together into words. It isn’t the words, it’s the way they’re strung together into phrases. It isn’t the phrases, it’s the way they’re strung together in a document.
(Berners-Lee, 2000)
By 1989, however, the situation begins to get better for a “web” of information. ENQUIRE may have just been ahead of the curve. The personal computing revolution has happened, meaning that there are lots of smaller computers instead of one big mainframe computer. TCP/IP networking, as discussed in the previous module, is becoming generalizable and its very approach means that it is intrinsically scalable.
What will bring it all together?
8b: “Vague, but Exciting”: The World Wide Web
The stage is thus set for the Web. Berners-Lee decides that he wants to further develop his idea of an information system, and thus begins to write a proposal so he can purchase a new kind of computer: a NeXT Machine.
It is worth taking a detour to understand why Berners-Lee would need a new kind of technology in order to actually develop the Web: we need to turn to Steve Jobs and a computer that he had developed in the late 1980s. While Jobs is best known as the co-founder of Apple Computer, in 1985 Jobs got into a fight with his Board of Directors and left Apple to found a company called NeXT.
We do not want to go into details about the NeXT model, but in short, it was founded on an understanding that it is not necessarily specs (such as memory or processing cores) that matter, but rather the user experience. The NeXT computer thus combined the ease and intuitiveness of an Apple personal computer with UNIX and the idea of object-oriented programming. UNIX is great for developing software in, and for multitasking, but it is tough to program – this is where the nice interface of a NeXT machine comes in.
So, Berners-Lee feels that he needs funds – and eventually a NeXT station – to develop the Web. What do academics do when they want money?
We write proposals!
Figure 1: Steve Jobs with his NeXT computer. (Day in Tech History, 1988)
Figure 2: The Grant Cycle: Theory and Practice. Image description . (Cham, 2011)
Information Management: A Proposal
Module 8b Individual Activity: How Berners-Lee conceptualizes the Web in his proposal to CERN
One of your readings for this module was the proposal that Berners-Lee wrote in March 1989 to secure funding for this computer: Information Management: A Proposal .
I would like you to read this proposal and respond to the following questions:
Where does the idea of a “Web” come from?
Are there any significant differences in how the Web works today with how it is outlined in the proposal?
What is wrong with trees or keyword-based systems?
For instructions about how to submit this activity, refer to the Individual Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
As Gillies and Cailliau note,
the proposal contained all the ideas that would eventually make the World Wide Web. It even anticipated the sort of problems the Web would encounter as it spread about the globe.
(Gillies and Cailliau, 2000)
The “Information Management” proposal:
outlined the problems with basing a system on keywords – notably that people never choose the same ones – a problem that persists with online “tags” on blogs today,
explained hypermedia and hypertext, and
asked for the support of two people for six to twelve months to realize the project.
Berners-Lee’s boss granted approval with the famous marginal comment, “Vague, but exciting”.
In 1990, the proposal was resubmitted to support the purchase of a NeXT computer. It all came together quickly after that, with a subsequent proposal later that year giving concrete details and timelines for implementation. If you are curious, you can read that proposal here: WorldWideWeb: Proposal for a HyperText Project .
The First Web Server
By Christmas 1990, Berners-Lee had defined the Web’s basic concepts: the Uniform Resource Locator or URL; HTTP or the Hypertext Transfer Protocol; and HTML or the Hypertext Markup Language. And, he had also written the first web browser and software to run a web server.
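To see how these three concepts fit together, here is a simplified, illustrative sketch in Python: the URL names the resource, HTTP is the request a browser sends to fetch it, and HTML is the hyperlinked document that comes back. The URL below is CERN’s first site, but the exchange is only mocked up as strings, not actually sent over the network, and the request line is a simplified version of early HTTP.

```python
from urllib.parse import urlparse

# A URL names a resource: scheme, server, and path.
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

# HTTP is the protocol a browser uses to ask the server for that resource.
# (Simplified request, in the style of the earliest versions of HTTP.)
request = f"GET {parts.path} HTTP/1.0\r\n\r\n"

# HTML is the markup language of the reply: text plus hyperlinks.
html = '<p>See the <a href="Summary.html">project summary</a>.</p>'

print(request.splitlines()[0])  # → GET /hypertext/WWW/TheProject.html HTTP/1.0
```

Each piece is independent of the others, which is part of why the design scaled: any program that can speak the protocol and render the markup can be a browser.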
The first web server was thus launched in December 1990, providing access to info.cern.ch. You can today visit it at World Wide Web . I encourage you to check it out for yourself.
The first catch was making sure that people who didn’t have a rare and expensive NeXT computer could access the Web! Imagine if, today, you could only use the Web on a very high-end Apple iMac or something along those lines. Responding to this problem, a CERN intern, Nicola Pellow, developed the first “line-mode browser”. Since all platforms could display and enter text at a command prompt, this was the simplest way to let the Web run across all kinds of different computers. The Web was now cross-platform.
Figure 3: First Web Server. © Ian Milligan
Figure 4: Screenshot of the Line Mode Browser emulator . Image description . (CERN, 2013)
You can check this out yourself at Line Mode Browser 2013 . Click “Launch Line Mode Browser” at the top of the page and explore it for a bit. Then reflect on what it might have been like to use the early Web:
How is exploring a website different in the Line Mode Browser than in one of today’s browsers?
What do we gain from “using a site” like this?
Should historians begin to view old documents through emulators like this?
I go back and forth on whether we should be using old browsers to view websites, or just explore them using our modern Chrome, Firefox, Safari, or Edge browsers. I suppose, to me, it depends on the question. If the aesthetic really matters, then it is worth looking at a few pages using things like the Line-Mode Browser so we can understand how people used these documents and explored them at the time. But I don’t think it scales: I would not want to do all my reading for a project using these emulators.
It is like earlier historical eras. People read documents and books by candlelight, but today, I read the same books in a well-lit, air- conditioned or heated room. It is worth thinking about the material conditions that surround a historical document, but we should not obsess about it.
The Web was now accessible across multiple platforms, so that most computers could now connect to it. The final step was making sure that people outside of CERN, all around the world, could know about and use the Web. On 6 August 1991, the two browsers (WorldWideWeb and the Line Mode one), as well as the basic code to run a Web server, were placed onto the Internet and publicized via USENET newsgroups. With this step, the Web was arguably born. The potential and strength of the platform was ensured on 30 April 1993 when CERN declared that the technology would be available freely to everybody, without fees or royalties.
The Web was Born!
The Web has a few different birthdays: March 1989, when “Information Management: A Proposal” was submitted; August 6, 1991, when it was made known to the world through newsgroups; and April 30, 1993, when it was declared free.
Figure 5: World Wide Web Timeline. © University of Waterloo
Of course, the Web is only important because non-geeks ended up on it! For the Web to catch on and really grow, it would need graphical user interfaces on all computers, from Mac to PC alike. The line-mode browser is neat, but as described above, it is not the most user-friendly interface. And while CERN was proud of the Web, the Web was also a bit far afield from CERN’s core mission of theoretical physics.
The next step would be engaging with the community and volunteers to get the Web going.
8c: Accessing the Web: From Mosaic to Internet Explorer
From here on, the Web begins to develop rapidly. Figure 1, for example, was uploaded in 1992. Appropriately, given the satirical cultures that would emerge on the Web over the following decades, it was an image of the CERN parody rock group: Les Horribles Cernettes.
Figure 1: The first image sent on the World Wide Web. (Bowden, 2010)
Things were still stuck, however, in the realm of the geeks until early 1993, when the first fully-featured and popular browser, NCSA Mosaic, arrived. This would be the boost needed to propel the Web to its current-day success.
NCSA Mosaic: The Killer Browser
In early 1993, Marc Andreessen and Eric Bina, both at the National Center for Supercomputing Applications (or NCSA, a research unit at the University of Illinois’ campus in Urbana-Champaign), began working on a new graphical web browser. The NCSA assembled formal teams to build a browser for each of the three major computing platforms – UNIX, Apple, and PC.
Figure 2: Marc Andreessen and Eric Bina. (Pugliese, 2015); (Bina, n.d.)
What made these browsers so special?
They were fully tested and supported: Those of you who have worked on open-source projects, or have tried to install unreliable software, can probably appreciate how important testing and customer support is.
They were easy to install: You just had to download the browsers, or get the disk that they were on, and install them as you would any other piece of software.
They were graphical: I will show some screenshots in a second, and give you a chance to try it out yourself, but in short: they did not just display typed characters on a dark screen; rather, they let a user use the Web much like you do today.
There was one major thing left out of Mosaic – it let people browse webpages, but did not let them change them. Editing was left out. While folks like Berners-Lee were confident that this feature might soon be added back in, it largely never was.
But never mind. As the screenshot below indicates, this was a major change in how the web could be used!
Figure 3: Mosaic browser. (Calore, 2010)
This program – compared to Chrome, Safari, Firefox, or Edge today, it would seem antiquated, but also familiar – captured media attention. While people might have been familiar with the rough idea of the “information superhighway”, or the Internet, Mosaic seemed to be a way that anybody could access websites. You would click on hyperlinks to view content and images would load – in short, the Web was here.
You can play with Mosaic today, if you want – visit oldweb.today and you will be brought to an “emulated” version of Mosaic looking at an old version of the University of Waterloo’s website from 1996. We will be returning to these old websites at the end of this module, so stay tuned.
Figure 4: University of Waterloo’s website from 1996. (Rhizome, n.d.)
Fight, Fight, Fight: Mosaic vs. CERN vs. Netscape vs… Microsoft?
Some of the first tensions came from CERN itself. As more and more people used Mosaic to access the Web, it almost became synonymous with the Web itself. Even though Mosaic relied on code libraries that CERN had developed, it seemed like the limelight was being cast on NCSA at Urbana-Champaign rather than the original developers in Geneva! The bigger fight would come as people from NCSA Mosaic began to be recruited to a new company, Mosaic Communications, which wanted to launch its own web browser.
After the obvious threat of a lawsuit (don’t name your company after your competitor’s product!), Mosaic Communications renamed itself Netscape, after its main product, Netscape Navigator. The infamous browser war had begun.
Figure 5: The short-lived Mosaic Communications Netscape Navigator 0.9. (©Netscape/Microsoft, 2008)
Microsoft soon entered the fray with its own browser, Internet Explorer. The goal of Internet Explorer was to leverage Microsoft’s power as the operating system leader to become the leader in how the Web was consumed and understood. Soon, Internet Explorer was bundled with Windows. It quickly began to overtake Netscape: by 2003, it had something like 95% of the browser market share. This led to serious legal issues for Microsoft – the bundling of a web browser with the operating system led the United States government to allege that Microsoft had abused its near-monopoly in operating systems to run other web browsers out of business! The initial ruling suggested that Microsoft should be forcibly broken into two companies (one for the operating system, the other for software), but after appeal, the two parties settled out of court with lesser penalties and remedies.
Netscape lives on today, of course. Netscape was bought by the web portal America Online (AOL), and some of the Netscape team started the Mozilla Foundation, which today brings you Mozilla Firefox.
The Growing Web
All of this was happening while the Web dramatically grew. The percentage of Canadians with Internet access increased from 4% in 1995, to 25% in 1998, 60% in 2001, 71% in 2005, and finally to 88% in 2015.
Figure 6: Percentage of Canadians with Internet access. © University of Waterloo
The Web was here to stay. In the next module, we will be exploring the users of the Web and the cultures they generate: from memes, to trolling, to spam. For now, let’s begin to think ahead to what these early websites looked like, as a way to think about Assignment 2 in this course.
8d. Accessing the Early Web as a Historical Resource
Looking ahead to the Final Assignment
In Module 9, we will return to what the early Web was like. In preparation for the final essay assignment, I want to spend some time explaining how you can access the early web as a historical resource – so you can begin to think about the process of working on your final projects.
Please follow along with me as you work through Module 8d. You may want to have two browser windows open, or two tabs, so you can toggle back and forth between this module and the Internet Archive’s Wayback Machine.
Let’s Look at an Old Website
To visit an old website, you can visit the Internet Archive’s Wayback Machine. The Internet Archive was founded in 1996 with the mission of making old websites – and now, books, television footage, video games, and beyond – accessible for both the general public and researchers. We will talk a bit more about the founding mission of the Internet Archive in a later module when we discuss the ethical implications and consequences of having all of this data. For now, however, let’s just learn how to use it.
The Internet Archive’s Wayback Machine will look something like this:
Figure 1: Screenshot of the Internet Archive’s Wayback Machine. (Internet Archive, n.d.)
Search for “University of Waterloo”, and you will see results such as:
the University of Waterloo Library (lib.uwaterloo.ca),
the University of Waterloo homepage (uwaterloo.ca),
the University of Waterloo’s bookstore (bookstore.uwaterloo.ca), and
perhaps the student newspaper, some research groups, and beyond.
First, you might be wondering why certain things are appearing. Why do the bookstore and the library appear, for example, and not the computer science department or the history department?
The trick is in the structure of the URL that is being searched. The Internet Archive has over 300 billion websites: that is too many to search given their infrastructure (they are not Google, but a non-profit organization on a relatively shoestring budget), and also, as we discuss in a later module, searching all of the websites might lead to some ethical quandaries. The Internet Archive only searches domain homepages. This means that they will search parts of the address that come before the main domain name, but not pages that come after it. This is a bit confusing, so these examples might help show you what is searched and what is not: homepages like uwaterloo.ca, lib.uwaterloo.ca, and bookstore.uwaterloo.ca are all searchable, but individual pages within those sites (anything after the slash in the address) are not.
This is important because you might think you are keyword searching everything, but you are only searching a small amount. However, the keyword search can still help you find websites – you just need to dig a bit deeper! In other words, it can find CNN, but it can’t find specific stories about goats or the University of Waterloo.
Let’s Look at a Page
Now that we have seen this, let’s look at a page.
I would like you to select the University of Waterloo’s home page and take a look at what you see. To select it, you will need to click on the result like so.
Figure 2: Wayback Machine – University of Waterloo. (Internet Archive, n.d.)
You will then see a graph of how often the University of Waterloo has been crawled.
Figure 3: Wayback Machine, University of Waterloo calendar. (Internet Archive, n.d.)
Click on the top graph to select a year – it shows how many times the site has been crawled – and then click on a highlighted date in the calendar below. Click on October 22, 1997, and you will be brought back to the University of Waterloo’s website as it appeared on that date.
Figure 4: University of Waterloo homepage from October 22, 1997. (Internet Archive, n.d.)
Now you can also browse forward and backwards using the arrows in the top navigational bar.
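Those navigation arrows work because the Wayback Machine encodes each snapshot in a predictable URL of the form web.archive.org/web/&lt;timestamp&gt;/&lt;original URL&gt;, where the timestamp uses year-month-day digits and a partial timestamp redirects to the nearest capture. As a small illustrative sketch (the date below is the one from Figure 4; this is a convenience, not an official API):

```python
def wayback_url(original_url, year, month, day):
    # Snapshot URLs follow web.archive.org/web/<YYYYMMDD...>/<url>;
    # a partial timestamp (date only) redirects to the nearest capture.
    timestamp = f"{year:04d}{month:02d}{day:02d}"
    return f"https://web.archive.org/web/{timestamp}/{original_url}"

if __name__ == "__main__":
    # The October 22, 1997 snapshot of the University of Waterloo homepage.
    print(wayback_url("http://www.uwaterloo.ca/", 1997, 10, 22))
```

Knowing this pattern means you can jump straight to a date of interest by editing the address bar, rather than clicking through the calendar each time.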
Figure 5: Wayback Machine navigational bar. (Wayback Machine, “University of Waterloo Homepage”)
Pause and Reflect: The Wayback Machine
I would like you to take a few minutes now to play with this system.
Try to find some sites of interest. Do you see anything that you thought would be collected? Anything that was not? Can you find your favourite band, or restaurant, or school?
Note: This is for individual reflection only; you are not required to submit your answers.
Finding Sites
You will quickly discover that the Internet Archive’s keyword search functionality only gets you so far. To find websites of interest for your project, you may have to use a Web Directory to find information.
In the Wayback Machine, navigate to Yahoo.com circa 1996. As a reminder: type Yahoo.com, and then click on the earliest instance of the site that you can find. The earliest snapshot of Yahoo.com is from October 1996.
Figure 6: Yahoo homepage from October 1996. (Internet Archive, n.d.)
Now try to navigate these directories to find sites of interest. You will notice that there are many gaps, but the information becomes far clearer and more useful the later in time you go. For example, if you go to Yahoo in 2001, you will find far more complete and useful listings.
Figure 7: Yahoo homepage from 2001. (Internet Archive, n.d.)
I would now like you to begin to explore the Wayback Machine and sites of interest for your project.
Now that you have got the hang of the Wayback Machine, I would like you to engage in a discussion in your small groups, as a way to begin to prepare for your final essay.
Module 8 Group Discussion Activity
Before you get started on this week’s discussion activity, first look at some old university webpages from the 1990s:
If you are stumped, choose one like the University of Toronto, Laurier, York, Western, Alberta, Calgary, Dalhousie, or University of British Columbia – but the more creative the more fun! Feel free to explore universities in other countries too.
Try to be creative in what universities you look at!
Then answer the following questions in The Early Web Discussion Topic .
What has changed from what you would see on a modern page today?
What’s the same?
What can we learn about this as a primary source about what life was like in the 1990s?
What can we not learn? What do we need to be careful about thinking about?
These sorts of questions will help get you ready for the final assignment.
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Over the coming week, please begin to think about your essay.
Module 9: Web Cultures: Spam, Trolls, and Memes
9a. Web Cultures
In Modules 6, 7, and 8, we explored the rise of both the Internet and the World Wide Web: a perspective on the creators, the systems, and some of the details around the underpinning ideologies and how to use the Web. We haven’t, however, examined what it was like to be an “everyday user” of these systems.
Partly, that is because it was so hard to get access to the Internet and the Web for those who were not working for the government, or engaged in research, or working at one of the large corporations fortunate enough to be connected to the network.
User cultures would begin in the margins of the Internet before growing to become central to our experience today.
Let’s imagine some of these early users using an example from Internet scholar Finn Brunton’s masterful book Spam . Picture a computer lab in the 1980s (I picture something like the basement of the Psychology, Anthropology, and Sociology building: lots of brick, dark, etc.). It is full of graduate students, playing on computers. They are there because during the day, computers are so official that they need to be used for official purposes; but at night, at 3:00 a.m., the terminals are sitting there and you can have fun with them. So graduate students are hanging out in the basement, playing on these terminals, and they begin to develop new cultures of how to use technology.
If I can make a gross generalization, some of these users are, well, for lack of a better word, nerds. And nerds, of which I am one, really enjoy Monty Python. There is a famous sketch in that show about a “Spam Restaurant” where customers are perusing the menu and discover that almost every item on it contains the processed meat product “Spam”.
Figure 1: Computer lab, PAS Building, University of Waterloo. © University of Waterloo
The details of the clip are not too important, but needless to say, it involves a lot of people repeating the word SPAM over and over again.
If you were working in these basement computer labs, hacking away at the terminal, and you wanted to irritate another user, you would write a script to print SPAM SPAM SPAM SPAM over and over again and send it to your friend. Their screen would fill with the word SPAM; they would be annoyed and frustrated, but ultimately, it was somewhat funny.
Figure 2: What early spam may have looked like. © University of Waterloo
Well, from this initial start, spam has been baked into the culture of the Web.
We see spam in our e-mail clients, we try to make sure it doesn’t interfere with our lives, and yes, if we use forums, Twitter, Reddit, etc., we will see spam pop up into our browsers.
The history of spam, along with the history of memes and trolls, is a good entryway into the wild world of networked communication. Just as computer geeks in the 1980s came up with “spamming” each other, giving rise to a whole way of talking about a form of communication, people who irritated each other generated the idea of “trolling”, and sending, editing, and remixing images became a way of transmitting cultural ideas through “memes”. In this module, we explore the unique user cultures that gave rise to the Web that we know today.
Figure 3: Spam folder. Kenishirotie/iStock/Getty Images
9b. Spam
We have now seen where the word “spam” itself came from, with graduate students and others literally “SPAMMING” each other with annoying messages over and over again, inspired by the Monty Python sketch. In his work on spam, Finn Brunton has defined it as
the manipulation of information technology infrastructure to exploit existing aggregations of human attention.
(Brunton, 2018)
The earliest spam messages predate the name “spam” itself. The first spam message was probably sent on 1 May 1978 from Gary Thuerk, a salesperson for the DEC Corporation. He took the ARPANET directory, found all of the users that lived on the West Coast, and sent a blanket message that there would be an open house for anybody to drop by.
Figure 1: First spam message, advertising a DEC Corporation Open House event, May 1978. Image description. (Templeton, n.d.); tovovan/iStock/Getty Images
The reaction from the network’s administrators was swift:
ON 2 MAY 78 DIGITAL EQUIPMENT CORPORATION (DEC) SENT OUT AN ARPANET MESSAGE ADVERTISING THEIR NEW COMPUTER SYSTEMS. THIS WAS A FLAGRANT VIOLATION OF THE USE OF ARPANET AS THE NETWORK IS TO BE USED FOR OFFICIAL U.S. GOVERNMENT BUSINESS ONLY. APPROPRIATE ACTION IS BEING TAKEN TO PRECLUDE ITS OCCURRENCE AGAIN.
IN ENFORCEMENT OF THIS POLICY DCA IS DEPENDENT ON THE ARPANET SPONSORS, AND HOST AND TIP LIAISONS. IT IS IMPERATIVE YOU INFORM YOUR USERS AND CONTRACTORS WHO ARE PROVIDED ARPANET ACCESS THE MEANING OF THIS POLICY.
(Czahor, 1978)
Of course, this all does show that the ARPANET by the late 1970s is being used for more than just U.S. Government business. Contractors are using it, academics are using it, and many of them are using it for personal things: swapping stories, sharing science fiction, discussing their favourite books, etc. Nobody cared if somebody was using the network to talk about their love of Arthur C. Clarke science fiction novels, but they did really care about this message.
So this led to the first big debate: What is spam? But things would sit for a while until the next big issue…
The Green Card Lottery Spam Message
Enter Laurence Canter and Martha Siegel, two lawyers, who posted a message on the Usenet network on 12 April 1994. It is worth pausing to define Usenet very briefly.
Usenet: It started in 1980 and ran over the Internet. It was basically a large forum where people could read and post messages to newsgroups (e.g. alt.fans.x-files would be a place for people to talk about the show The X-Files); these were threaded messages, distributed among all of the participating servers.
It would be largely eclipsed by the Web by the late 1990s, but not totally – the University of Waterloo, for example, only decommissioned its Usenet server in mid-September 2016.
In any case, on 12 April 1994, Canter and Siegel posted to Usenet about the Green Card Lottery. The Green Card Lottery is an annual event that lets people potentially win a Green Card, or United States permanent residency permit – for people in the developing world, it is a high-stakes event that could change their lives and the lives of their families forever! Canter and Siegel sent their message, advertising their services to help people get a Green Card, to 5,500 discussion groups. This matters because it was the first commercial spam sent out over the Internet (the DEC salesperson had at least just been inviting people to an open house, not directly soliciting business).
Figure 2: Green Card Lottery spam message, April 1994. Image description . oleksii arseniuk/iStock/Getty Images
Figure 3: Return on investment for Green Card spam mail. © University of Waterloo
the manipulation of information technology infrastructure to exploit existing aggregations of human attention.
(Brunton, 2018)
In other words, Canter and Siegel found a place where people were paying attention – Usenet forums – and wasted people’s time in order to capture that attention. The key is the indiscriminate nature of spam. For example, if I get an e-mail from the University of Waterloo talking about what a great place we are, that’s OK, because I work here and it is conceivably of interest to me; if the UWaterloo president, however, e-mails every single person at every university in Canada, that’s spamming.
Spam Helps us Understand the Web Today
It also helps us understand the development of the Web, because as we begin to see the rise of spam, we are also seeing that more and more people are watching the Web (and hence can have their attention diverted!). More generally, we can actually begin to see the history of spam as really the history of the Web. In other words, if it was just a few nerds swapping Monty Python jokes in the basement, you wouldn’t bother spamming them. But once it is a lot of people with money, it makes more sense!
Throughout the 1990s, the Web is growing: from the first website in 1991, there were 20 by 1992, 10,000 by 1995, and millions by 1998.
Figure 4: Websites’ growth from 1991 to 1998. mayrum/iStock/Getty Images
Spam becomes baked into the very essence of the Web as it grows.
Beyond e-mail and forum posts, spam becomes a major problem on the Web as people begin looking for good content to view. As more and more people use the Web, they want to find websites that are interesting to read – and spammers want to create websites to trick them!
Early search engines begin to crawl the Web, as they do today, and find websites based on the words that they use. The first idea was that websites would use the meta keywords tag in their HTML to describe what their website was about, but people quickly realize that those tags can be manipulated to bring people to your site – even if you are not the “best” site on the topic. As a second generation of web crawlers starts looking for words that appear frequently on a page, people begin to repeat words over and over again at the bottom of the site to make sure they appear at the top of search results (e.g. “BMW BMW BMW BMW BMW” at the bottom of a page, so that somebody searching “BMW” would find it).
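To see why this was so easy to game, here is a toy sketch of that kind of word-count ranking. This is purely an illustration – real crawlers were far more elaborate, and the page texts below are invented:

```python
def score(page_text, query):
    # First-generation ranking: count raw occurrences of the query term.
    return page_text.lower().split().count(query.lower())

# Invented example pages, purely for illustration.
honest_page = "BMW dealership in Waterloo with service and parts"
stuffed_page = "used cars cheap " + "BMW " * 50  # keyword stuffing

if __name__ == "__main__":
    print(score(honest_page, "BMW"))   # the honest page mentions BMW once
    print(score(stuffed_page, "BMW"))  # the stuffed page wins by sheer repetition
```

Because the score rewards repetition alone, pasting “BMW” fifty times at the bottom of any page beats a genuinely relevant page every time – which is exactly the manipulation early crawlers fell for.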
Indeed, the magic of Google is that its search engine helped circumvent the problem of spam in search engine results. Its “PageRank” algorithm in part uses hyperlinks to websites as votes for their popularity. To understand, let’s imagine two competing websites: the University of Waterloo’s website and a “University of Waterloo’s Used Car Dealership” website. The first one is what people probably want to see; the second one is a spam site trying to trick people who want to learn about Waterloo into visiting it. Under earlier ranking models, the problem was that the used cars were appearing and the great educational experience we offer was not.
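The intuition of “links as votes” can be sketched in a few lines of power iteration. This is a simplified illustration, not Google’s actual implementation: the link graph below is invented (including the hypothetical spam site name), and the real algorithm also handles complications such as pages with no outgoing links.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # a page splits its vote
            for target in outlinks:
                new[target] += damping * share  # votes flow along links
        rank = new
    return rank

# Invented link graph: the real sites link to each other, while nobody
# links to the hypothetical spam site (it can only link outward).
links = {
    "uwaterloo.ca": ["lib.uwaterloo.ca"],
    "lib.uwaterloo.ca": ["uwaterloo.ca"],
    "spam-used-cars.example": ["uwaterloo.ca"],
}

if __name__ == "__main__":
    for page, r in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {r:.3f}")
```

The spam site can link out all it wants, but with nobody linking back to it, its rank stays at the floor – which is exactly why spammers’ next trick was building link farms to manufacture incoming “votes”.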
Figure 5: Link farms trick Google’s “PageRank” algorithm to rank search results high. Volodymyr Kotoshchuk/iStock/Getty Images (browser); LongQuattro/iStock/Getty Images (link)
Figure 6: Weighting the value of links solves ranking problems in search results. Volodymyr Kotoshchuk/iStock/Getty Images (browser); LongQuattro/iStock/Getty Images (link)
The Evolution of Spam
Spam continues to evolve: when blogging was popular, spammers would post comments. Nowadays, the most common form of spam tends to be phishing e-mails. “Phishing” started in the mid-1990s among spam businesses, where hackers would capture AOL accounts and then mark them on a list with “<><” (which looks a bit like a fish) to show that they had been captured.
Now we get lots of spam messages trying to capture our accounts by making us click on links, which are getting increasingly sophisticated. There are also things like “like farms,” where people – often in places like China – set up thousands of devices to have people or algorithms click “like” in order to inflate sketchy apps in app stores or elsewhere on the Web. Indeed, today, we find ourselves dominated by spam as we increasingly have to prove that we are human.
Figure 7: Proving that we are human. (Magid, 2014)
Conclusions: Spam
So what has spam done for the Web? It has forced search engines to innovate; forced us to prove our basic humanity every single day; and crucially, it helps us understand the Web and early Internet: we can see that spam has been part of this story from almost the first day.
If you are curious, you can check out the Spamdex: The Spam Archive yourself. Indeed, you could see Fake News as the continuation of spam – continually tricking people into believing something that is not true. Perhaps, in the future, this module will have more ruminations on how spam changed not only our understanding of commerce, but the very nature of our democracy itself.
9c. Trolls
If spam was a major problem of the 1990s and 2000s, it may sometimes seem like trolling is the problem of the 2010s.
What is a troll? To call somebody a troll now is basically to refer to
humans who do mischief using electronic communication.
(Nycyk, 2018)
Figure 1: Troll. (Watson, 2015)
In other words, communication that is trying to be provocative, offensive, menacing, and disruptive, done often for the amusement of the person sending it (i.e. “doing it for the lulz”). The word troll itself comes from Norse literature, where trolls are seen as “troublesome, shifting, changing and hard to pin down” (Nycyk, 2018).
There have probably been early forms of trolling for decades, just as there were early forms of spam, but the first high-profile incident happened in 1993. On the multiplayer text-based game LambdaMOO (a “Multi-User Domain Object Oriented” space where you could chat, roleplay, etc.), a user forced themselves on other users with text-based sexual acts. Written up as “A Rape in Cyberspace” in the December 1993 issue of the Village Voice, the incident made users first begin to discuss the idea of “virtual rape” through text; secondly, it divided those who saw it as a serious incident worthy of a rapid response from those who did not; and thirdly, it really began to make people realize how difficult it is to stop this sort of behaviour online. While the troll from the LambdaMOO incident was banned, this was just the beginning of the problem of abusive online behaviour.
Early Trolls
The Usenet network, which we saw in the last sub-module, was witness to widespread trolling. Indeed, one of the first definitions of “trolling” came from a user, aptly named “Troll” on 8 July 1992:
I am called Troll. I didn’t get the name because I’m a fun guy. I am the the [sic] champion of channel +insult on irc and I have thrice defended the title before the channel went down, so I can flame with the best. Flame away if you like, but “I’m gonna deal it back to you in spades. ‘Cause when I’m havin’ fun ya know I can’t conceal it. Because I know you’d never cut it in my game.
(Nycyk, 2018)
Figure 2: An online Troll . dan177/iStock/Getty Images
Usenet groups would even begin to target each other to troll. One group, the Karl Malden group (founded by Harvard students to make fun of the Hollywood actor Karl Malden), decided to attack the Beavis ’N Butthead group (fans of a TV show popular in the 1990s). They came over en masse and trolled the Beavis ’N Butthead fans, who then retaliated and drove the Harvard students off of Usenet altogether. As we can see from this, and from the troll above, people were increasingly disrupting others’ online experiences through trolling.
But it is the move from Web 1.0 to Web 2.0 that really steps things up a notch. If Web 1.0 is defined as an “information resource” (think of the yahoo.com directory that you saw in the last module), Web 2.0 is defined by the social network and user-generated content: people do not just search for content, they create it. This brings user power but also troll power.
Figure 3: Web 1.0 (information resource) vs. Web 2.0 (social network). mayrum/iStock/Getty Images
Trolls in the Age of Social Media
Figure 4: Staying connected in the age of social media. elenabs/iStock/Getty Images
When a social media user dies, their profile can become a memorial on sites like Facebook.
Trolls began to use coding scripts and other tools to mock one deceased girl on her memorial page, as well as the disappearance of another girl online. Much of this came from the site 4chan, where users tend to post images anonymously to communicate with each other, and in particular its /b/ board, which provides an anonymous forum for internet users to organize.
This was the beginning of “mass trolling” on social media sites like Facebook and Twitter. 4chan assumes particular significance here, in part because, unlike almost every other site, it allows people to be anonymous, letting it become the epicentre of anonymous attacks! Another factor has been the rise of state actors such as Russia, which have arguably orchestrated large-scale troll attacks during events such as the 2016 U.S. Presidential election. Indeed, the Washington Post’s timeline of “How Russian Trolls Allegedly Tried to Throw the 2016 Election to Trump” makes for fascinating reading about how trolls are now increasingly central to our lives, both online and off.
I would like you to read this short excerpt from an article, which is a fascinating and provocative defence of trolls, and answer the questions below.
Pause and Reflect: “Into the Wild Online,” A Mild Defence of Trolling
Read the following section from Andreas Birkbak, Into the Wild Online: Learning from Internet Trolls :
Lakoff and Johnson (1980) offer an anecdote about how an Iranian student understood the phrase the ‘solution of our problems’ as referring to not a means of solving problems, but a boiled-down and concentrated version of the problems in question. ‘Solution’ was understood as a chemical metaphor, referring to a liquid mixture, rather than in the everyday sense. This metaphorical accident is suggestive of a way to think about problems not as things to be overcome, but things that can be ‘catalyzed’ or not (to stay with the chemical metaphor). Pursuing this idea, Lakoff and Johnson propose that “the reappearance of a problem is viewed as a natural occurrence rather than a failure on your part to find ‘the right way to solve it’.”[20]
Such an understanding of the role of problems would fit a more issue-oriented version of democratic politics, and it would arguably also shift the role that trolls can play. If trolls can no longer be understood to simply be ‘in the way’ of addressing problems ‘seriously’, trolling might be appreciated as forcing us to slow down reasoning. Such ‘slowing down’ will have to include avoiding hasty conclusions about what Internet trolls are and what motivates them. The resources for doing so can both lie in the richness of thinking about trolls prior to the Internet and in more careful studies of those beings that we refer to as Internet trolls. As an online community manager said after dealing with trolling for a period of time:
“The idea of the basement dweller drinking Mountain Dew and eating Doritos isn’t accurate (…) They would be a doctor, a lawyer, an inspirational speaker, a kindergarten teacher. (…) It’s more complex than just being good or bad.” [21]
Moving beyond simplified versions of good and bad is the first step if we want to live more productively with Internet trolls. Another step is to appreciate how trolls, and monsters more generally, have been generative in the sense of forcing us to slow down reasoning at several points in our history, including questioning what counts as natural and what does not. Such realizations about historical interactions with trolls might inform our current dealings with their online relatives. As Lindow [22] puts it: “The details have changed; these trolls are not large or small, shaggy or misshapen, changers of shapes or perceptions, but they still blur categories, and they still disrupt.”
The recent rise in the use of trolling in the context of political campaigns provides perhaps the most clear example of how particular instantiations of trolling can be perceived as tolerable or even desirable. When U.S. politician Rick Perry’s positions on abortion sparked politically-motivated trolling behaviour on his Facebook page, this activity was understood by respondents as “trolls with whom the public can sympathize and even empathize” [23]. Such dynamics points to a newer tendency to recognize certain kinds of trolling behaviour as ideologically motivated and perhaps best understood within an ethics of political activism (Fichman and Sanfilippo, 2016). The notion of ideological trolling is thus useful for capturing one way in which trolling can be politically productive, but it is somewhat at odds with the potential for disruption put forward in this paper, in so far as ideological trolling often reproduces rather than disturbs existing political categories.
The use of trolling as a form of “soft influence” in international politics, such as the case of the “Kremlin trolls,” is perhaps the most emblematic example of ideological trolling (Zelenkauskaite and Balduccini, 2017). While the systematic use of trolling to shift public opinion (astroturfing) is a significant phenomenon worth more scholarly attention, such dynamics also position trolling as counter to a public sphere of free and sincere public opinion-forming, which might lead away from appreciating trolls as the radical other that challenges existing norms for proper action. Following the framework proposed in this paper, it seems key to insist on the uncertainty as to whether, for instance, Kremlin trolls are in fact ‘hired guns’ or rather tricksters piggybagging on international tension to achieve disruption (Zelenkauskaite and Niezgoda, 2017). Such uncertainty challenges us to think twice about the ideological maps we use to navigate political conflicts and consider what else may be of relevance.
(Birkbak, 2018)
After reading this partial defence of trolling, are you convinced? Why or why not?
Note: This is for individual reflection only; you are not required to submit your answers.
Now, of course, some of the trolling behaviours that we see – including the trollface – are emblematic of “memes”. In the final section of this module, let’s explore Internet meme culture.
9d. Memes
The idea of a “meme” pre-dates the Web. In 1976, Richard Dawkins was trying to meet
the formidable challenge of explaining culture, cultural evolution, and the immense differences between human cultures around the world.
(Dawkins, 2006)
For example, he was curious about why people tended to believe in God across diverse cultures around the world. He postulated that it was not an innate belief, nor was it a genetically-transmitted belief, but rather an idea that “replicated itself and was imitated”.
Sounds a bit like a gene. How about a meme?
We see this definition elsewhere in the literature. Shifman writes that, defined as
Figure 1: Infectious digital content. AntonioGuilleum/iStock/Getty Images
gene‐like infectious units of culture that spread from person to person, memes have been the subject of constant academic debate, derision, and even outright dismissal.
(Shifman, 2013)
So if the idea comes from 1976, when did the first Internet memes appear?
Memes began circulating on Usenet in the early 1990s, and by the 2000s, popular jokes and other content began to circulate on websites like Metafilter and Something Awful. Yet it took a while for memes to really enter popular consciousness. The Wikipedia article on memes, for example, was created in March 2005, but until December 2006 it simply redirected users to another page called “Internet Phenomenon”. In other words, “memes” as we know them today are recent, mostly within the last decade or so.
By the late 2000s, memes were everywhere! History can help us understand where they came from and why they have become so popular.
The First Internet Meme: Godwin’s Law
One of the first Internet memes is Godwin’s Law. Let’s see an example of it in action:
Figure 2: The Godwin’s Law meme. Image description . (xkcd, n.d.)
As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.
(Godwin, 1994)
Who was Mike Godwin? Godwin was a lawyer and frequent contributor to Usenet groups. He was frustrated that in almost every online conversation he joined, the Nazis were invoked. In some ways, the Nazi allegory is a meme in and of itself (“You like highways? You know who else liked highways? Hitler liked highways!!”), and so Godwin’s Law was a popular counter-meme that sought to stop people from constantly invoking Nazis.
Figure 3: Mike Godwin. (Hartwell, 2013)
Module 9d: Individual Activity: The Godwin’s Law Meme
In 1994, he wrote an article in Wired magazine (“ Meme, Counter-Meme ”) about this:
It was back in 1990 that I set out on a project in memetic engineering. The Nazi-comparison meme, I’d decided, had gotten out of hand – in countless Usenet newsgroups, in many conferences on the Well, and on every BBS that I frequented, the labeling of posters or their ideas as “similar to the Nazis” or “Hitler-like” was a recurrent and often predictable event. It was the kind of thing that made you wonder how debates had ever occurred without having that handy rhetorical hammer.
(Godwin, 1994)
Take a few minutes to read the article, and answer the following questions:
What argument does Godwin use in the article?
How did his meme spread?
For instructions about how to submit this activity, refer to the Individual Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
There are other early memes: from the smiley face [ :-) ], to “jumping the shark,” to the Bechdel Test, to the “Know Your Meme” website. Feel free to Google these if you are curious to learn more.
Many of these are text-based memes, of course, and by the mid-1990s, we begin to see the rise of graphical memes . The most famous is probably the “Dancing Baby,” which became one of the first mainstream Internet memes: it appeared on the popular television show Ally McBeal , in advertisements, etc.
Figure 4: Dancing Baby meme. (Lussier et al., 2006)
The Problem of Meme Archives in the Digital Age
Figure 5: Osama Bin Laden and Bert. (Internet Archive, n.d.)
Memes present fascinating problems in the digital age, however.
One of my favourite examples of this occurred in October 2001, when a protest against the American invasion of Afghanistan took to the streets of Bangladesh. Onlookers noticed something peculiar about some of the protest signs: take a close look at Figure 5, an image preserved by the Internet Archive.
Sure, it made sense that Osama Bin Laden – mastermind of the 9/11 attacks in the United States – would feature on the posters. But why did Bert, the lovable muppet from Sesame Street, also grace these posters?
Well, the answer is that in the late 1990s, Bert was the subject of a classic meme: “Bert is Evil”.
Figure 6: “Bert is Evil” meme. (Internet Archive, n.d.)
This meme paired Bert with a series of villains: Bert with Hitler, Bert with the Unabomber, Bert present during the assassination of President Kennedy and, yes, Bert with Osama Bin Laden. When protesters searched the web for images of Osama Bin Laden, they inadvertently used pictures that included Bert. If you want to learn more, you can read the Wired article “ Osama Has a New Friend ”.
Now, this is somewhat funny today, but in the charged climate after the 9/11 attacks, it seemed far less funny to the creator of the “Bert is Evil” site, Dino Ignacio. On his archived website on October 11th, 2001, he wrote:
I have taken down the “Bert is Evil!” site from my server. I would like to thank Sesame Workshop for their patience and restraint all these years. I implore all fans and mirror site hosts of “Bert is Evil” to stop the spread of this site too … I am doing this because I feel this has gotten too close to reality and I choose to be responsible enough to stop it right here.
(Ignacio, 2001)
This, to me, raises several questions:
Who owns Evil Bert?
Should it have been left online?
What should we do about memes?
We will revisit these questions in this module’s discussion activity.
Can You Kill a Meme?
The last question I want to leave this module with is,
Can You Kill a Meme?
Could Ignacio kill “Bert is Evil”?
Or, more recently, should Matt Furie, the cartoonist who created Pepe the Frog, have been able to kill Pepe, who transformed from a funny slacker frog into an alt-right symbol? Furie first tried to #SavePepe in 2016 and then killed Pepe off in 2017. He argued that
Pepe is whatever you say he is, and he and I, the creator, say that he is love.
(Furie, 2016)
This raises a number of questions that I would like you to consider in this module’s discussion activity.
Figure 7: Pepe the frog meme. (Furie, 2005)
Module 9 Group Discussion Activity
Reflect on the following questions, and post your responses in the Memes Discussion Topic .
Who owns a meme?
What do you do if a meme has a person in it who hasn’t given consent?
Should we leave these online for study or take them offline for privacy?
How do we even study such a rapidly-evolving field?
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Conclusions
Throughout this module, we have covered a lot of ground: from spam to trolls to memes, exploring the various ways that users took the Web and made it their own place. Sometimes this has led to wonderful things and creativity… at other times, less so. But we can see that:
through spam, the shape of the Web has evolved to better deliver relevant information, but this has led to a cat-and-mouse game between users and those who would abuse them;
trolling has shaped user experience since the early 1990s; and
memes have become a playful but also occasionally vexing force to share information around the Web.
In the next module, we turn to the international dimensions of the Web: how has it been received in places as varied as Taiwan, China, South Korea, and beyond? Stay tuned!
Module 10: The Web Beyond North America
10a. Introduction
In this course to date, we have advanced a fairly linear narrative:
the development of the ARPANET and other communications networks,
the rise of the TCP/IP protocol,
the development of hypertext as an idea, and
how this all came together in the “killer app” of the World Wide Web which transformed global communications.
Figure 1: ARPANET, the TCP/IP Protocol, and hypertext as leading to the WWW. Yevhenii Dubinko/iStock/GettyImages; mayrum/iStock/GettyImages; rungrote/iStock/Getty Images
Much of this history, of course, has been based in the United States and to some degree in Western Europe. Networked communication has a different history in different parts of the world, and was also implemented or used in different ways elsewhere. In this module, we will aim to do three main things:
First, we will explore the idea of “ pre-web ” technology through the story of the Minitel: an early French communications network. This can help contest some of the “straightforward” history of ARPANET eventually evolving into the Web.
Second, we will explore alternatives to the Internet/Web in other parts of the world, notably, the Soviet Union’s own efforts at creating a communications network; and then
Finally, we will use two case studies in Asia to explore how the Web was received in different ways, in part due to local factors and due to some of the myopic thinking around how characters should be encoded.
The Difficulties of a Global History of the Internet
Figure 2: Sample Internet access statistics from the World Bank database. code6d/E+/Getty Images
Module 10a Individual Activity: The World Bank’s database
Explore the World Bank’s database , and then answer the following questions:
What surprises you?
Do you think it is accurate to speak of the Internet as a global network?
For instructions about how to submit this activity, refer to the Individual Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
If the Internet is a global network today, it was not always so. While some countries, like France (as we discuss in Module 10b), had their own networks, most countries saw the advent of networked communication when they joined the U.S.-led Internet. Accordingly, histories have traditionally focused on what we have seen in the previous modules: the story of the ARPANET and TCP/IP.
This gave North America a critical role in the history of the Internet, and gave English a dominant position on the Web too! As there was such a head start in North America, most early Web and Internet content was in English: so even people accessing the Internet from elsewhere in the world often needed to communicate in English. As we will discuss later in this module, there are legacies of this even today.
Figure 3: Foundations of today’s Internet originate in the U.S. Yevhenii Dubinko/iStock/Getty Images
10b. Web Alternatives: The Case of the Minitel
In the history of the Internet and the Web, we often struggle with technologies that did not come to fruition – treating them like dead ends or stumps on the timeline of Web history. The history of the Web is an easy one to tell, for example, because it culminates in the web browser that you are using to read this content! But what if the Web hadn’t caught on and some other technology had risen in its place – how would we tell that history?
There are important alternative networks to consider. A critical one is Minitel . It has recently received some renewed attention from scholars as we begin to consider the history of the Web and the Internet. As Julien Mailland explained in an article “ Minitel, the Open Network Before the Internet ”, Minitel is critical for a few reasons.
In 1991, most Americans had not yet heard of the internet. But all of France was online, buying, selling, gaming, and chatting, thanks to a ubiquitous little box that connected to the telephone. It was called Minitel.
(Mailland, 2017)
Figure 1: A Minitel terminal in action. Goodshoot/Getty Images
Telephone-Based Computer Networks
Figure 2: A Minitel terminal from 1982. (Edward, 2005)
Minitel was part of an earlier wave of experimentation with networks – a technology that emerged in the late 1970s and in some cases lasted into the early 2010s: telephone-based computer networks . The two most significant examples were the successful Minitel network in France and the failed Prestel network (run by the British Post Office). Both systems were launched in 1978.
Figure 2: Telephone-based computer networks. © University of Waterloo
Minitel was launched in 1978 as Télétel. While other systems experimented with using televisions as displays, dedicated Minitel terminals were distributed free of charge. The trial was successful, and in 1983 the full system was launched. A few features led to this system working really well:
Terminals were given out free . Part of this was because the French could not figure out a business model, so they decided to use the terminals as a loss-leader. Telephone subscribers were given a choice between a traditional paper telephone book or the Minitel (which would always be up to date). This was expensive for the state, however: each terminal cost around 500 francs to build!
It was truly plug and play . In 1983, the few personal computers that existed were often difficult to set up and use. Minitel was different: you unboxed your terminal, plugged it into the phone line, and it just worked!
Minitel servers were independent . The French telecommunications agency oversaw only the network, the infrastructure, and connections, but did not regulate content. This allowed a vibrant culture to emerge.
These three factors were key in driving Minitel adoption. In contrast, the British Prestel system was expensive, censored, and required a specialized engineer to handle some of the installation. This meant that while the Prestel system hung on until 1994, it only hit a peak of 90,000 subscribers. Minitel, on the other hand, had five million terminals in action by 1989!
Figure 3: Minitel (France) is free, plug and play, unregulated content versus Prestel (Britain) that is expensive, required engineer to install, and censored. © University of Waterloo
Vibrant User Cultures
While the British regulated their network (think of censors scrubbing objectionable content), the French more or less did not care what happened on the Minitel network. Think of this as an early form of net neutrality: while illegal activity was still policed, of course, the French more or less saw themselves as running a network that users could do with as they wanted. This opened up a lot of space for experimentation.
Now, there’s a whole argument that the pornography and sex industry have driven quite a bit of networked communication – from advances in streaming video to payment processing – and Minitel is kind of part of this as well. One French Minitel researcher even argued that it was the “prudishness” of the British network that saw Prestel fail as opposed to the French network! The network also did a good job of facilitating commerce: it could collect funds from users and give them to sellers (taking a cut). As Mailland and Kevin Driscoll have noted in their article “ Minitel: The Online World France Built Before the Web ”, this is kind of like the Apple App Store today.
Thanks to the power of emulation, you can even try out Minitel for yourself (similar to the line-mode web browser that you used as an example in Module 9). If you visit sm.3615.live , you can explore one of the most popular forum and chat sites on the Minitel network .
As Stephen Cass explained in an article “Log On Like It’s 1985: A Fragment of Minitel Returns,” featuring an art installation at New York City’s Ace Hotel, “sm.3615.live” was more than just “le serveur medical”:
One of the most popular pay-by-the-minute chat and forum services accessed via Minitel was called 3615 SM. It was created by Hannaby and François Lagarde in 1985. Hannaby had just graduated as a doctor, while Lagarde had graduated a few years earlier. “We wanted to do something about communication at large, so we thought that a service for doctors was a good idea,” says Hannaby. However, they rapidly developed a large mainstream audience. Hannaby suspects the name they chose accidentally helped—the “SM” stood for “serveur médical,” but in French, as in English, the letters “SM” can have, ah, racier connotations. Hannaby and Lagarde were also technical innovators: They developed hardware that let users download files from the Minitel network to PCs and Macs.
(Cass, n.d.)
Try to play around with this interface. If you do not fluently read French (like your instructor), you can always open up a Google Translate tab and translate snippets. In any case, get used to working with the interface by typing your commands rather than clicking on them, and enjoy the vibrant text art!
The Minitel in Action
It is hard to find a good video of how the Minitel works! Let’s see a bit of it in action in this interesting documentary: most of it is devoted to trying to get a Minitel terminal working today, but the first four minutes provide some interesting snippets around the history of the Minitel. (Watch until 4:19)
RetroManCave. (2018, January 29). Minitel - The Rise & Fall of a National Tech Treasure. [Video]. YouTube. https://www.youtube.com/watch?v=HOhK9bgQo8g
You can keep watching if you want, but I think that video gives a very good sense of what the Minitel looked like and some of its cultural impacts.
The Minitel Today
The widespread advent of the Web in France by the mid-to-late 1990s did not immediately signal the end of Minitel. It hung on as people trusted it to handle credit card transactions much more than the early Web, although the rise of encryption and growing consumer confidence soon saw that lead eroded.
Indeed, as Gillies and Cailliau argued, the national – as opposed to the global – character of the Minitel would lead to its end. Writing in 2000, they noted that:
In the long term, however, Minitel is likely to be replaced by the World Wide Web. … The arrival of the Web with its high-quality graphics and multimedia was bound to challenge Minitel’s dominance. The very fact that Minitel didn’t spread beyond France is part of its problem: it may bring France to your home, but the Web brings you the world. It took the World Wide Web until July 1995 to surpass Minitel in terms of numbers of servers, but now it is slowly pulling away.
(Gillies and Cailliau, 2000)
I suppose, after all, it is called the World Wide Web for a reason!
In any case, in 2012, Minitel was finally shut down – it was indeed, as the authors foresaw, replaced by the Web. Yet the importance of the Minitel lies in understanding that people were doing and working with things that were like the Web without the Web – and that the Web would come along and largely supplant these systems!
Figure 4: An abandoned Minitel. Florian Chouya/iStock/Getty Images
10c. The Soviet Internet: A Failed Alternative to the Internet
If the Minitel represented a pre-Web technology that foreshadowed the power of networked communication, it is worth also reflecting on an alternative to the Internet: the abortive networks that were found in the Soviet Union at the height of the Cold War.
After the Second World War, which saw the defeat of Nazi Germany, Fascist Italy, and Imperial Japan by the allied powers led by the United States, the United Kingdom, and the Soviet Union, the world was uneasily carved up into spheres of influence between the capitalist west and the communist east. Part of this contest was over which system was superior: the market-driven west or the command-and-control economies of the east.
Figure 1: NATO member countries (aka Western Bloc). (Ssolbergj, 2008)
Figure 2: Warsaw Pact countries (aka Eastern Bloc). (Bjarmason, 2004)
The Internet that we know emerged out of the west – albeit not out of the vibrant free-market economy, but out of the state-supported defence and research establishment that led to the development of the ARPANET and later TCP/IP. But what about the Soviet Union? If the Internet emerged out of state collaboration with researchers – and the Soviet economy could be directed in a way not possible in the United States – surely there could be potential for the emergence of networked communication there as well?
The story is an interesting one, and until the recent publication of Benjamin Peters’ book How Not to Network a Nation: The Uneasy History of the Soviet Internet , it was a largely unknown one.
Early Attempts at Soviet Networks
The Automation of Machines and Operations Must be Extended to the Automation of Factory Departments and Technological Processes and to the Construction of Fully Automatic Plants.
(Khrushchev, 1956)
Figure 3: Nikita Khrushchev. (Junge, 1963)
In practice, of course, things did not always work so smoothly: a plant manager might not want central planners to have full information about their resources, as managers often preferred to undershoot targets. The consequences for missing targets could be severe, after all, and much of their power came from having some local autonomy.
Between 1959 and 1962, the Soviet Union saw two major initial projects that attempted to coordinate the economy through networked communication:
The first was Anatoly Kitov’s Economic Automatic Management System, or EASU , in 1959. This was an early attempt to take the small computer networks that were emerging in factories – think mainframe computers coordinating factory machinery – and begin to join them together. Imagine all the tractor factories being able to communicate to see how things were working. However, his proposal sought to leverage Soviet military networks by using Ministry of Defence computers to take all of that data and find ways to optimize and streamline production. The Soviet military did not take too kindly to this: Kitov was stripped of his Communist Party membership, removed as the director of a computer centre, and his career was effectively ended. In other words, EASU was destroyed because it was seen as a threat to the Soviet military’s powerful role.
Figure 4: Anatoly Kitov. (Unknown, n.d.)
Figure 5: Aleksandr Kharkevich (Unknown, n.d.)
Three years later, in 1962, there was another attempt at a national network. Aleksandr Kharkevich, a researcher in Soviet cybernetics, proposed a national communications network – the Unified Communication System or ESS . Whereas EASU had the goal of optimizing production, ESS was more similar to the ARPANET: bringing researchers together, but also simply having the technical ambition of experimenting with large-scale networks! There was a lot of potential – a fleshed-out proposal for a distributed network that in some ways echoed Paul Baran’s ideas from our ARPANET module – but the project largely died when Kharkevich himself died in 1965. It paradoxically illustrated how important individuals were in the Soviet system: the project could not survive the loss of one key person.
What did this show? These episodes demonstrated that early attempts at networking in the Soviet Union would be stymied by political processes: you would think that a centralized command-and-control system would make networking straightforward, but the power of the military and of the bureaucracies stood in the way.
The OGAS (National Automated System for Computation and Information Processing)
The major effort at networking the Soviet Union came through the OGAS project, or the National Automated System for Computation and Information Processing . OGAS was conceived by Viktor Glushkov, a leading academic and vice president of the Soviet Academy of Sciences. Work began in 1962 as Glushkov and his team started to conceptualize the project.
His goal was to provide information about the economy: similar to the EASU, it would bring together the individual factory mainframes to allow central planners to work with data as it came in. The major innovation, however, was to empower users at all levels : a worker could put in their experiences, resources, and recommendations; a factory manager could view this information; and in turn, it could be sent all the way to the top . All documents would be made available online, providing information to people at all levels.
If you are trying to run a centralized economy, you can imagine how useful this would be!
Figure 6: Viktor Glushkov. (Peters, 2016)
It had some key differences from the ARPANET. Notably, the ARPANET was all about packet switching , whereas the OGAS fundamentally needed to have a central information processor at the heart of the network. Above I referred to sending information to the “top” – there is no “top” in the ARPANET or the Internet, of course, but there would be in OGAS.
As Benjamin Peters has written in his history of the Soviet Internet:
In America, the ARPANET was designed to resemble a brain of the nation because its visionaries first imagined the nation as a single distributed brain of users. In the Soviet Union, the OGAS was designed to resemble a nervous system for the nation because its visionaries first imagined the nation as a single incorporated body of workers.
(Peters, 2016)
As they laid the groundwork to build OGAS, they ran into the final obstacle: resistance from people within the bureaucracy. As Peters again argued, OGAS
Figure 7: ARPANET represented by a neural network in a brain; OGAS by a nervous system starting at the brain and working throughout the body. magicmine/iStock/Getty Images, elenabs/iStock/Getty Images
threatened to strip their institutions of the thing that justified their existence – the need to manage the command economy in the first place. The OGAS, if effective, would strip those positions of what made them informally beneficial to hold – the potential for corruption and personal gain and power.
(Peters, 2016)
In other words, if the OGAS worked, it would work too efficiently, wiping out the local managers who carved out their autonomy within the system.
Trying to Network the Soviet Union: Failures
So they had an idea of how to network a nation. What would happen when they tried to put it into action? Plans came together over the following years, and finally, on 1 October 1970, the plan was presented to the ruling body of the Soviet Union: the Politburo. The ARPANET had come online in 1969, hastening the Soviet leadership’s decision to review the idea of building a large-scale network to enhance the Soviet economy. It was a fateful meeting. By all accounts, the OGAS was close to approval when it was caught in a fight between the Ministry of Finance and the Central Statistical Administration. The Finance people successfully argued that the Statistics people would become too powerful and control the economy! While some initial steps were approved, the project ultimately went nowhere after this denial.
Ultimately, the Soviet system had too many entrenched interests to give way to networks – in a way that the United States, with its decentralized system, was able to do.
The irony, of course, is that it was the government in the United States that largely funded, financed, and administered the project. While in the Soviet Union, bureaucratic infighting and, indeed, intra-agency competition saw things fall apart.
What it does, however, is help complicate the narrative further and help us understand a bit more about why the ARPANET was ultimately successful!
Figure 8: ARPANET successful in the U.S., while intra-agency competition destroys OGAS in the Soviet Union. MicroStockHub/iStock/Getty Images
10d. What Are the Impacts of the Internet, Starting in North America and Western Europe?
In the past two sections we have discussed competing models to the North American Internet and World Wide Web and how they didn’t work for various reasons. In this section, let’s explore the implications of that: how did North American dominance play out as the Internet went global?
Let’s consider a keyboard.
If you are using a computer right now, look down (or if you are on a mobile device, open your keyboard). Chances are you will see something that looks a lot like this:
Figure 1: Qwerty keyboard with U.S. English layout. Pavlo Stavnichuk/iStock/Getty Images
This layout is called the “QWERTY” layout, because the first six keys on the top row of letters spell out QWERTY. There is some debate about why we have this relatively odd arrangement of letters that we have largely become accustomed to, but generally, the rationale was to distribute frequently occurring pairs of letters around the keyboard so that a typewriter wouldn’t jam if a typist typed too quickly. There are other, more efficient keyboard layouts, but we generally use QWERTY.
Now, if we were to go to an office in Tokyo – a place where people speak and generally write in Japanese – and looked at the keyboards, we would generally find QWERTY keyboards. Herein lies the question of North American dominance in networked communication.
ASCII Imperialism
One of the first places we see the impact of North American and Western European dominance of networks is in how characters are encoded. When computers talk to each other, they communicate in binary strings of bits. A bit can be 0 or 1. So, if you want to send “Hi Ian” over a network, each of those characters needs to be “encoded” into bits, transmitted over the line, and then decoded on the other end. It is a bit like a telegraph message, just done at extremely fast speeds and translated by machines in the background.
The earliest dominant network standard for encoding characters was the American Standard Code for Information Interchange (ASCII), which originated out of the American Standards Association in 1963.
Let’s encode “Hi Ian” into ASCII.
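The original illustration of that encoding is not reproduced here, but we can sketch it ourselves. A short Python snippet (Python is just a convenient choice; any language with character codes would do) shows the seven-bit ASCII number for each character:

```python
# Encode "Hi Ian" into ASCII: each character becomes a seven-bit number,
# printed here in both decimal and binary.
message = "Hi Ian"
for char in message:
    code = ord(char)  # the ASCII code point, e.g. 'H' -> 72
    print(f"{char!r} -> {code:3d} -> {code:07b}")
```

Running this shows, for instance, that “H” is 72 (1001000 in binary) and the space character is 32 (0100000): six characters, forty-two bits in total.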
Each character gets its own seven-bit code – great! But let’s try to encode the name of one of my co-authors, Niels Brügger.
You can’t! There is no encoding for ü. While ASCII is great for English-language communication, it does not do well with foreign character encoding. This was a problem because, for the earliest part of the Internet’s history, the ARPANET and later networks relied largely on ASCII character encoding. Even the Web, which was far more global from its earliest days, inherited this legacy: it wasn’t until 2011 that you could have non-Latin domain names, meaning that Chinese web users needed to visit .cn and Russians .ru, as opposed to versions of those in their own languages.
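You can see the failure for yourself. Here is a minimal Python sketch that attempts the same ASCII encoding on the name:

```python
# ASCII defines only 128 characters, so "ü" has no ASCII code point
# and the encoder raises an error when it reaches it.
name = "Niels Brügger"
try:
    name.encode("ascii")
except UnicodeEncodeError as err:
    print("Cannot encode:", err)
```

Every other character in the name encodes fine; it is the single “ü” that brings the whole thing down.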
The Shift towards the Unicode “Miracle”
By the 1980s, people began to realize that ASCII was not going to be enough, because so many letters were left out – not just Scandinavian characters, but also characters such as the é common in French. The solution was to add an extra bit: ASCII codes are seven bits long, and the next version of the character-encoding standard made them eight. It had an imaginative name – ISO 8859 , after the international standard that outlined the new system – and it was extendable through additional “parts” (e.g., Part 1 was introduced in 1987 to support Western European languages, Turkish support arrived in 1989, and Thai support in 2001).
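With that eighth bit, ü fits. A quick sketch using Python’s built-in “latin-1” codec (the common name for ISO 8859-1, i.e. Part 1) shows the difference:

```python
# ISO 8859-1 uses eight bits per character, making room for
# Western European letters that seven-bit ASCII left out.
encoded = "Brügger".encode("latin-1")
print(list(encoded))        # the "ü" becomes the single byte 252
print(f"{encoded[2]:08b}")  # 11111100 - the leading 1 uses the eighth bit
```

Note that the code for ü starts with a 1: it lives in the upper half of the table that the extra bit opened up, while plain ASCII characters keep their old seven-bit codes.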
This had some limitations, though: while the standard supported many different languages, you had to use “switching codes” to move between languages within a document – i.e., encode part of a document in English, then part in Japanese, and then switch back again. The solution was Unicode , developed in the 1990s.
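To see why Unicode (in its common UTF-8 encoding) removed the need for switching codes, here is a small sketch: a single string can mix scripts freely, with each character simply taking a variable number of bytes:

```python
# One string mixing English, French, and Japanese text:
# no switching codes, just a single encoding (UTF-8).
text = "Hi Ian, café, 日本"
encoded = text.encode("utf-8")
for char in ("H", "é", "日"):
    print(f"{char!r} uses {len(char.encode('utf-8'))} byte(s) in UTF-8")
# Decoding recovers the original string exactly.
assert encoded.decode("utf-8") == text
```

Plain ASCII characters still take one byte (so old English-language documents remain valid UTF-8), while accented Latin letters take two and Japanese characters three.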
People have called this the “Unicode Miracle” as it served almost as a universal translator between languages. Let’s watch a great 10-minute documentary about the Unicode Miracle here:
Computerphile. (2013, September 20). Characters, Symbols and the Unicode Miracle. [Video]. YouTube. https://www.youtube.com/watch?v=MijmeoH9LT4
Pause and Reflect: Unicode
Reflect on and respond to the following question:
Do you think that calling Unicode a “miracle” is warranted?
Note: This is for individual reflection only; you are not required to submit your answers.
We can see in this graph that the standard quickly began to take over, allowing for greater interoperability of texts on the Web!
Figure 2: The usage of the main encodings on the Web from 2001 to 2012 as recorded by Google. (Chris55, 2016)
The Internet Is Still Not Truly Global
Of course, we began this module by discussing how Internet usage rates differ dramatically, from some countries where access is nearly universal to others where only one or two percent of the population has reliable access. As a continent, for example, Africa had only 28.6% of its population with access to the Internet as of 2015. While there have been initiatives towards global internet access, such as those discussed in this CBC article “ Facebook and Google stake claims in developing world with global internet projects ”, there continues to be much discussion of and backlash towards them. As developing areas like parts of Africa are seen as emerging markets, private companies are taking a bigger role in trying to reach them.
China, of course, has seen Internet development along very different lines. The Internet first arrived in mainland China in 1994, and the “Golden Shield” program was launched in 1998, including the Great Firewall that blocks access to many websites around the world. Wikipedia maintains a fairly consistently updated list of “ Websites blocked in mainland China ”, which is a vivid illustration of how Internet access is very different in China!
*The Internet's Global Impact*
10e. Web around the World: Three DIY Case Studies
For this module you may have seen the extensive list of readings and become a bit scared – fear not. This week’s discussion activity largely revolves around three case studies of how the Internet has been received differently in three different Asian countries: Taiwan, Japan, and Korea.
You will be assigned ONE of the following readings:
Li Shao Liang et al, “A Brief History of the Taiwanese Internet”.
Mark McClelland, “Early Computer Networks in Japan, 1984-1994”.
Jo Dongwon, “H-Mail and the Early Configuration of Online User Culture in Korea”.
Here is a brief description of each of the three readings in turn:
Case Study 1: “A Brief History of the Taiwanese Internet”
This chapter, by Li Shao Liang, Lin Yi-Ren, and Arthur Hou-ming Huang, looks at early Internet cultures in Taiwan. We have seen Bulletin Board Systems, or BBSes, in the past. Throughout much of the world, BBSes were largely eclipsed by the World Wide Web and were mostly defunct by the mid-to-late 1990s. The same was not true in Taiwan, however, where BBSes continue to remain relevant as a way to organize and facilitate student activism! We can see how conscious government and university policies shaped this very different culture: faster connections for Taiwanese university students, free access via dorms, and administrators selected by students let the BBSes develop a distinct identity. This is a really useful case study, as we can see how similar technology evolved in different ways in North America and in Taiwan.
Figure 1: Prevalence of BBSes in Taiwan. Thitima Thongkham/iStock/Getty Images
Figure 2: Prevalence of Intranets in Japan. v-graphix/iStock/Getty Images
Case Study 2: “Early Computer Networks in Japan, 1984-1994”
This case study is that of Japan, which we see in the chapter by Mark McClelland. Much of this case deals with the language difficulties of Japanese users interacting with computers that were not terribly amenable to the Japanese language – preferring instead Latin characters. The chapter thus explores how early networked computer use in Japan really took place on “Intranets” – internal corporate networks, rather than the global one. Indeed, Internet adoption lagged throughout the 1990s because the global network was dominated by English. It was only with the arrival of the Internet on mobile phones in the late 1990s that Japan really began connecting to the global network on a large scale!
Case Study 3: “H-Mail and the Early Configuration of Online User Culture in Korea”
Finally, this chapter looks at Korea. Dongwon Jo argues that Internet use in South Korea primarily evolved through a public email service, H-Mail, which was launched in 1987. This was an e-mail service that served as a computer network: while it was originally designed as a broadcast service (i.e., where messages could be sent out to many users), early users managed to manipulate the system so that it became not only a standard email service but also a platform for online communities!
Figure 3: Prevalence of the H-mail service in South Korea. Pohdee/iStock/Getty Images
Once you have read your assigned chapter, you are ready to participate in the Module 10 discussion.
Module 10 Group Discussion Activity
Each discussion group has been assigned to read one of the following three chapters:
Liang, Li Shao et al. “A Brief History of the Taiwanese Internet”.
McClelland, Mark. “Early Computer Networks in Japan, 1984-1994”.
Dongwon, Jo. “H-Mail and the Early Configuration of Online User Culture in Korea”.
Go to your LEARN discussion group to find out which chapter you’ve been assigned to read.
Next, I would like you to address the following questions in a short discussion post. Aim for around 250 words, and then respond to at least one other post this week as well. Post your responses in the Web Around the World Discussion Topic .
What technology is actually being discussed?
What is the role of the user in the development and/or adoption of the technology?
Does the perceived use differ from the actual use?
What role does language play?
How does this history differ from the North American Internet/Web narrative? What similarities do you see?
Does the technology change users’ offline lives in any way? How?
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Conclusions
In this module, we have explored various global implications of the Web:
From the Minitel , which represented an alternative to the Web at a time when the Web was beyond the imagination of even Tim Berners-Lee. It ultimately floundered because it was confined within the borders of one country and couldn’t keep up with the rich, multimedia experience of the World Wide Web;
The Internet efforts of the Soviet Union , which ultimately fell apart due to the internal dynamics of the Soviet Union but which represented an alternative history of how to understand networked communication;
To the character encoding issues of the North American-dominated early Internet, which took decades before character sets could comfortably handle non-English languages!
Ultimately, we conclude with a question: will the Internet ever be truly global?
The answer today is a bit unsettled. We hope that one day it will be, but for now that remains a bit of a dream.
Module 11: Web Archives and Cultural Datasets
11a. Introduction
Please watch “Archive,” a free documentary, in its entirety.
Jonathan Minard. (2013, October 22). The Archive Documentary, Part 1 (The Internet Archive). [Video]. Internet Archive. https://archive.org/details/archive_documentary_internet_archive_sequence.
Pause and Reflect: “Archive” Documentary
After watching the documentary, take some time to reflect on the following questions:
What does the move towards the Internet and the Web mean for the human and historical record?
What are your thoughts about the video in general?
Note: This is for individual reflection only; you are not required to submit your answer.
We have seen the kind of rhetoric referenced in the video before, notably in Module 3, where we encountered the Library of Alexandria, which sought to be a universal repository of knowledge in the western world. The Internet Archive today tries to fill this role, as do other major cultural institutions around the world. In this module, we will explore what this means:
first, what web archives are,
second, why they matter, and
third, what this means for our world today where everything online, potentially, could end up in one of these repositories of knowledge.
This module also relates directly to the essay that you are writing for this course, which uses web archives as the main source. It is an opportunity to reflect on some of the bigger issues you may be encountering during your research and writing process.
11b. A Quick Introduction to Web Archives
Our collective cultural heritage faces a serious problem in the digital age. We used to forget. Now we have the power of recall and retrieval at a scale that will decisively change how our society remembers.
The problem can be summed up in something as innocuous as a personal homepage, hosted on the free GeoCities.com service.
Figure 1: The landing page of GeoCities.com as it appeared on December 19, 1996. (Internet Archive, n.d.)
GeoCities.com, founded in 1994, provided free websites to anybody who wanted to create one. A user would visit GeoCities.com, enter their e-mail address, and receive a free megabyte to stake their own space on the burgeoning Information Superhighway. These sites took many shapes and sizes: a Buffy the Vampire Slayer fan site, a celebration of a favourite sports team, a family tree, and even a young child’s tribute to Winnie the Pooh.
Early Web users flocked to the site, as the following visualization shows:
Figure 2: Growth in GeoCities user base, 1995-1997. © Ian Milligan
By October 1995, the first ten thousand users had created their sites. Two years later, a million had. And by 2009, seven million users had created accounts on GeoCities.com.
This is a truly astounding amount of information. Today, our research group at the University of Waterloo uses the GeoCities archive to ask historical questions about the 1990s: we have some 186 million documents – HTML pages – created by these seven million users.
Thanks to the Internet Archive, we can today “visit” GeoCities.com, as you saw with the Wayback Machine in Module 8.
What can you do with a web archive?
In 1996, Brewster Kahle and Bruce Gilliat founded the San Francisco-based Internet Archive due to their recognition that people were increasingly living their lives online and that this culture was in danger. As more and more people created websites and content, someone needed to preserve them so that one day historians could explore this early period!
This can lead to some significant shifts in how we write and understand history. For example, take the popular 1990s/early-2000s electronic pet, the “Tamagotchi”. Before web archives, a historian might have had to rely on the Globe and Mail or the New York Times to read journalists trying to explain the phenomenon. With Web archives, however, we can now go right back to sites from twenty years ago and explore them! We can then figure out what the trend meant about our relationship to animals, each other, and technology. For example, we can visit “ Tamagotchi World ” from 1999.
Figure 3: Brewster Kahle and Bruce Gilliat. (Telfer, 1998)
Figure 4: Screenshot of the Tamagotchi World website, February 18, 1999. (Internet Archive, n.d.)
The list continues and continues: political histories of the late 1990s, from elections to early Internet censorship to dot-com businesses. For example, check out this early Amazon.com page from 1999 .
Figure 5: Screenshot of Amazon.com, August 28, 1999. (Internet Archive, n.d.)
What does this mean?
It is not just that we can get more historical information – say from the New York Times online or the University of Waterloo’s newspaper – but that we now have the kind of information left by people who never before would have appeared in the historical record.
Think of:
a website that you may have written. Is it in the Internet Archive?
a public tweet you may have sent. It has been preserved by national libraries around the world.
a public Instagram photograph that you have posted. Is it in the Internet Archive?
The list goes on and on. If you want, you can be part of the historical record with the click of a mouse.
Pause and Reflect: A Virtual Time Capsule
Let’s finish this section with a quick thought exercise. Imagine a researcher in fifty years trying to reconstruct what life is like today. What three websites would help reconstruct your world? Keep a record of your selections somewhere, as they will be part of the discussion activity for this week.
Then, think about the following questions:
Why did you pick the sites that you did?
What could you learn from say theglobeandmail.com or the nytimes.com that you would not be able to learn from Twitter or Snapchat? And vice versa?
Note : These questions are for individual reflection only; you are not required to submit your answers.
Excited? So what’s the catch?
There’s too much of this data. We have moved from a model where historians were traditionally shaped by source scarcity – we wish we had more historical information, but it was never saved – to one of abundance. This is the concept that underpins the article I wrote, “The Problem of History in the Age of Abundance”, which you have read this week. In the next section, we will explore the difficulty of working with the Big Data of history: web archives.
11c. The Catch – Cultural Big Data
You have already seen in the exercise in Module 8 just how difficult it can be to work with a web archive. Let’s look at a more specific web archive as an example.
Since 2005, the University of Toronto Libraries have been collecting a web archive of Canadian federal political parties and interest groups. Four times a year, the librarians make sure to catch a snapshot of all the political parties registered with Elections Canada, as well as an assemblage of political groups focusing on issues as varied as banning landmines, combating climate change, or advocating around Indigenous issues.
Check it out at Archive-it . When you are there, try to do a search for webpages about “Stephen Harper” who was the Prime Minister of Canada between 2006 and 2015. Here is what the results look like when I run the query (your results may be slightly different as you are running the search at a different time than I am).
Figure 1: Screenshot of search results for “Stephen Harper” on University of Toronto’s Canadian Political Parties and Political Interest Groups Web Archive. (Archive-it, n.d.)
You will see in this case that I have received 637,078 results for pages that contain “Stephen Harper” (I put the name in quotation marks so it would search it in its entirety). In my case, the results range from Harper’s Twitter webpage to a Facebook page to an archived article in the Walrus , a Canadian magazine.
These are useful results, and the search engine is extremely useful for more targeted queries, but as a starting point for serious research, we are in trouble. With millions of results, we need to know how the search engine ranks them. It also underscores the scale and the questions at play: over half a million results come from this one web archive alone.
One question I often like to ask is who decides what we see ? In this case, the search engine is deciding to put Harper’s Twitter page first; his Facebook page fifth; and pages dedicated to his policies on other websites far, far lower.
Figure 2: Who decides what we see? Volodymyr Kotoshchuk/iStock/GettyImages
The Humanities and the World of Data: From Black Boxes to Understandable Algorithms
When we think of “Big Data” we usually think of computer scientists or statisticians – not historians! Increasingly, as all of this data becomes available for us to use as researchers, things are beginning to change.
How to analyze, interpret, and exploit big data are big problems for the humanities.
(Rockwell & Sinclair, 2016)
In short, these are actually really pressing problems for humanists.
Figure 3: Search engines can operate as a black box . © University of Waterloo
To see why, imagine a researcher using a search engine to explore old websites. They search for “Stephen Harper” and then read the first five pages of results (because, to be honest, very few people ever go beyond the first or second page, let alone the 10,000 th page of results!). They think they are writing their research paper on Stephen Harper, but in reality the algorithm that decided which result would be 1 st or 2 nd – and which would be 15,000 th or 20,000 th – decided what the researcher would see.
Figure 4: Search engine algorithms as co-authors? © University of Waterloo
In other words, in the era of Big Data, humanists and other scholars are going to be using these very large repositories of data – and they need to understand in some ways how they work!
What can we do? A Series of Problems
Let’s imagine, then, that we are researchers and we have begun to explore these web archives at scale. That is, we are beginning to mine old websites for information about what life was like 20 or 25 years ago. We instantly begin to run into some research issues that we need to explore:
the problem of ethics and
the problem of scale
Problem 1: Ethics
A thought experiment can help bring this into relief.
Module 11c Individual Activity: Privacy and research – Where do you draw the line?
Imagine a researcher in 50 years trying to see what life is like in 2018. What three public websites would make you feel like your privacy was violated if they used them for research?
Include the following in your response:
Why did you pick the three sites that you did?
How should we deal with this problem? Just not collect? Anonymize the data? Wait until you are really old, or dead?
For instructions about how to submit this activity, refer to the Individual Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
We sometimes have difficulty understanding how best to use this material in research. One common refrain is that it’s public material, and thus there are no serious ethical concerns with using it. But think of all the times you are in public where it would be inappropriate for somebody to record or use what you are saying. Imagine you are meeting a friend at a coffee shop, and I sit at the next table, eavesdrop, and then publish what you were saying. Even though the meeting was “public”, it isn’t ethical for me to publish your conversation.
Similarly, if you used Twitter or Instagram and suddenly the University of Waterloo used your material in a report, you might also similarly feel violated (and if you do not feel violated, many of your fellow students would).
Expectation of Privacy . Did the person creating this material have a reasonable expectation of privacy? Donald Trump’s Twitter account, or that of Justin Trudeau or even the president of the University of Waterloo, are all clearly public and do not have an expectation of privacy. Someone with 15 followers on Twitter, communicating mostly with their friends, does. Historians and other researchers need to keep this in mind when studying them.
Scale at Work . If I take the tweets or websites of hundreds or even thousands of people, privacy is less of a concern – though not entirely absent – than if I were focusing on just one or two websites.
Pause and Reflect
With these two considerations in mind (expectation of privacy and scale at work), imagine I begin collecting every single webpage that mentions the word “Waterloo”. Then answer the following four questions:
Is this ethical?
Should I try to get consent from people?
How can I store this information?
What does this mean for the study of history?
Note: This is for individual reflection only; you are not required to submit your answers.
Problem 2: Scale
The second problem of doing this sort of Big Data-scale work is that of “scale”. Working with the Wayback Machine, as we did in Module 8, can be tedious, clicking page by page. If you know what you are looking for, you can look for something (like “University of Waterloo” or “Pokémon Official Site”) but if you are not completely sure, it can get tricky.
It may help for you to refamiliarize yourself with the Wayback Machine .
To do research, historians and others are increasingly working with the underlying data files themselves. This means a few different things:
Using computers to read lots and lots of text : At a certain point, we can’t read everything written on a subject. Instead, we need to train computers to read all of the text on that subject for us. One of my projects uses the GeoCities.com community that I introduced at the start of this module – we train computers to read this material. Sometimes it can be as simple as complicated search queries like “Webpages that mention Justin Trudeau and link to the CBC”, but other times we may use machine learning to teach a machine exactly what we are looking for.
Leveraging hyperlinks . In Module 9, we learned about “PageRank”, a metric for evaluating websites based on their hyperlinks. We can use this exact same algorithm to find historically “trustworthy” websites. We can do this by emphasizing the importance of a hyperlink: for example, the more times that a website is linked to, the more curious we might be to look at it.
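As a rough sketch of the idea – using a toy link graph of my own invention, not the actual research code or Google's implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Score pages in a link graph: links maps page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share of rank...
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # ...and passes the rest along its outbound links.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # A page with no outbound links shares its rank with everyone.
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

# Toy graph: three fan sites all link to one "awards" page.
toy = {
    "awards": ["fan-a"],
    "fan-a": ["awards"],
    "fan-b": ["awards"],
    "fan-c": ["awards"],
}
ranks = pagerank(toy)
```

The most-linked-to page bubbles to the top of the ranking, which is exactly how an "awards" hub stands out in a neighbourhood of fan sites.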
A Research Example
One of the research questions that I was exploring was the experience of youth and children in GeoCities. Until 1999, GeoCities was organized along the lines of “neighbourhoods,” which were places where people would thematically cluster websites – for example, if you wanted a website about your pet, you would create it in “Heartland”, your university in “Athens”, and your favourite photographs of cars in “MotorCity”; and so forth. One of these neighbourhoods was the “Enchanted Forest”, which had hundreds of thousands of pages and millions of words largely written by children and youth between 1996 and 1999.
Figure 5: The GeoCities Enchanted Forest Homepage as it appeared April 17, 1997. (Internet Archive, n.d.)
I tried to start this process by using the Wayback Machine, but it was too slow. How else could I find valuable information in this neighbourhood?
I thus decided to do the following:
take all of the websites and find the hyperlinks that they had to each other,
find which of the websites in the “Enchanted Forest” tended to be linked to the most by other websites in the “Enchanted Forest”, and
map them all out on a chart.
It looked a bit like this:
Ian Milligan. (2016, May 24). Enchanted Forest PageRank. [Video]. YouTube. https://www.youtube.com/watch?v=JRpjXe0PsqE
Figure 6: Screenshot of Enchanted Forest Awards page, January 27, 1999. (Internet Archive, n.d.)
We can see that it was an awards page – many people linked to it because when this page gave them an “award”, they would post it on their website and in turn, there would be a link to the site. Aha! This was an interesting characteristic of the Enchanted Forest that I ended up writing about for a peer-reviewed publication.
Even more tellingly, we could see what this website became:
Figure 7: Screenshot of Enchanted Forest site, February 11, 2006, after it was acquired by Yahoo! Image description . (Internet Archive, n.d.)
*Cultural Big Data and Web Archives*
11d. Creating our Own Cultural Datasets
In Module 9 we explored some of the hands-on ways to explore web archives. In this section, I will present a mixture of hands-on and conceptual examples to see how these data are created – which can help inform the discussion activity that we will be having later.
Let’s begin with an example of social media data collecting. One popular way to collect Twitter data is using an application called twarc .
Figure 1: twarc screenshot. (Documenting the Now, c. 2016)
With twarc, you can:
collect all tweets that contain a given word or string (i.e. #blacklivesmatter),
collect all tweets within a given location,
collect all tweets with a given language, and
collect all tweets from a given user.
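Once collected, tweets can be filtered along exactly these lines. Here is a minimal Python sketch; the field names ("text", "lang", "user") are illustrative placeholders, not twarc's actual output format:

```python
def filter_tweets(tweets, contains=None, lang=None, user=None):
    """Keep only the tweets that match every criterion given.

    `tweets` is a list of dicts with hypothetical "text", "lang",
    and "user" fields.
    """
    kept = []
    for t in tweets:
        if contains and contains.lower() not in t["text"].lower():
            continue  # keyword/hashtag filter
        if lang and t.get("lang") != lang:
            continue  # language filter
        if user and t.get("user") != user:
            continue  # account filter
        kept.append(t)
    return kept

sample = [
    {"text": "#BlackLivesMatter rally today", "lang": "en", "user": "alice"},
    {"text": "Nice breakfast", "lang": "en", "user": "bob"},
]
matches = filter_tweets(sample, contains="#blacklivesmatter")
```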
You can save them all to your system. You can do some fun things with Twitter! For example, we collect a lot of data with an eye to one day making them available to historians to use. One dataset we have been collecting is every single tweet that has been sent to the user @RealDonaldTrump! (Spoiler alert: leaders get a lot of swear words tweeted at them).
Think about the Twitter-collecting that we are doing, and take some time now to reflect on the following questions:
Is this ethical?
Should I try to get consent from people (who are just tweeting at Donald Trump, for example)?
How can I store this information?
What does this mean for the study of history?
Note: This is for individual reflection only; you are not required to submit your answers.
Archiving websites is usually difficult. The Internet Archive and national libraries all around the world generally use a program called Heritrix , which is easy to set up and deploy if you are a systems developer. But if you are just a run-of-the-mill person with a degree in history or even computer science, you might find it frustrating.
Why is it hard to archive a website? There are a few reasons:
Technology always changes ! Web developers come up with cool things (like the “infinite scroll”, where content only loads when you scroll to the bottom – Twitter, Instagram, and Facebook all work like this). Great for users, but it means that people trying to record websites have to trick the website into thinking a human is scrolling so that the content loads and can be saved. It is sort of a cat-and-mouse game.
A website really represents the complex interplay of many different things : If you visit a website like the University of Waterloo’s History Department , you might see a Twitter timeline that loads up in the corner. Cool, except this means that part of the website comes from uwaterloo.ca and the other part comes from Twitter.com. A website is really like a “cut and paste”-type environment that brings everything together in your browser.
Websites are big ! They really do start to add up in size the more you grab, especially once you start adding in videos to be saved. One rough estimate is that every two years a website collection doubles in size, mostly because we find inventive ways to add more video, sound, and flashy animations.
Why would you want to save a website? Content can be deleted during a change in government, for example, or during the restructuring of a website . Sometimes this is nefarious (a new government deleting climate change data), but often it is just part of the usual operations of government. The trick is that in the past, reports used to be printed – now they live on websites, so saving them is very important! After the election of Donald Trump, for example, many groups quickly worked to capture as much of the Obama administration’s web presence as they could. The same will undoubtedly be true with the next American president, whoever they are.
Fortunately, you can now save websites yourself. There are two relatively easy ways to do this.
The first is Save Page Now, a service run by the Internet Archive .
You put the URL into the box, click “SAVE PAGE,” and within minutes that page is crawled and made accessible to anybody else. If you are citing a website for an essay, for example, you might want to use the Save Page Now service and then cite the Wayback Machine URL that it will give you. Even if the website changes, the copy that you have preserved in the Internet Archive won’t!
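Under the hood, Save Page Now is triggered by visiting a special URL of the form web.archive.org/save/&lt;page&gt;. A minimal sketch (the network request itself is left commented out so this runs offline):

```python
def save_page_now_url(url):
    """Build the Internet Archive's "Save Page Now" capture URL for a page."""
    return "https://web.archive.org/save/" + url

target = "https://uwaterloo.ca"
print(save_page_now_url(target))

# To actually trigger the capture, you would fetch that URL, e.g.:
# import urllib.request
# urllib.request.urlopen(save_page_now_url(target))
```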
The second is a website called WebRecorder.io , which lets you record a website as it exists on the screen when you see it. For example, you can decide to record a Twitter account with this website and you not only record the tweets, but also the look and feel of the interface.
Figure 2: Screenshot of the Internet Archive’s “Save Page Now” service. (Internet Archive, n.d.)
You can see this in the screenshot below, where I have visited Webrecorder and then pasted the University of Waterloo’s Twitter account into the “URL to record” box.
Figure 3: Screenshot of the University of Waterloo’s Twitter page, captured by WebRecorder.io, May 2019. (Twitter, n.d.)
Figure 4: Screenshot of WebRecorder.io’s “Turn on autoscroll” feature. (©University of Waterloo, 2019)
The page would then automatically archive the entirety of the University of Waterloo’s Twitter feed, as of the moment I was recording it!
Take some time to explore these two tools: try saving a website with the Internet Archive , and then with WebRecorder.io . Any reactions or thoughts as you do so?
Note: This is for individual reflection only; you are not required to submit your answers.
We have had lots of food for thought as we have moved through the module. In the conclusions section, let’s revisit some of these major issues and bring them into relief through a discussion activity.
11e. Conclusions
If you ever wonder about just how important these new and emerging forms of communication are, you can refer back to the main reading for this module: Zeynep Tufekci’s Twitter and Tear Gas: The Power and Fragility of Networked Protest . It really does show that Twitter and other forms of social media are not just for sharing pictures of breakfast, but are major sites to bring diverse and powerful groups of people together to effect meaningful change.
As Tufekci writes:
Thanks to a Facebook page, perhaps for the first time in history, an internet user could click yes on an electronic invitation to a revolution. Hundreds of thousands did so, in full view of their online networks of strong and weak ties, all at once. The rest is history – a complex and still-unfinished one, with many ups and downs. But for Egypt, and for the rest of the world, things would never be the same again.
(Tufekci, 2017)
These are serious matters.
Figure 1: A screenshot of the “We are all Khaled Said” Facebook page, which was instrumental during the 2011 Egyptian Revolution. (Facebook, 2019)
You can read more at the “ Death of Khaled Mohamed Saeed ” Wikipedia page.
In other words, we have seen the following:
Our cultural record is dramatically expanding . Information was rarely preserved in the era of print, but now suddenly we are preserving social media streams, websites, movies, and photographs like never before.
This brings us into new ethical territory , as suddenly much of this material has been collected without the creators even knowing that it was being archived. And
It requires new technical skills and approaches for humanists to work with in order to explore data at scale.
While there has been lots of hesitation, I do want to underscore the advantages. Through the expanding digital record – with ethical caveats and technical limitations kept in mind – historians will be able to garner better insights into lives lived online (and off). Through web archives and the Big Data of our modern cultural record, we should have more voices, and more diverse voices, included in the historical record.
The era of “dead white men” being the main voices in our historical record may be over, and this is a good thing.
Module 11 Group Discussion Activity
For the Module 11 discussion activity, I would like you to reflect on the prompts that you’ve responded to throughout the module. Post your responses to the following questions in the Three Websites Discussion Topic .
Imagine a researcher in fifty years trying to reconstruct what life is like today. What three websites would you select to help reconstruct your world?
What website would you feel most uncomfortable with if it was preserved? Why?
How, in a paragraph, do you think we should be handling the ethical concerns you outlined in your previous answer? Be prepared to discuss with your classmates the ethical pros and cons of preserving and working with this type of material.
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated on the Course Schedule.
Module 12: Conclusions
12a. Conclusion
You have made it to the last module of the course! What we will do in this module is present some closing remarks that summarize where we have been in this course. Then, we will turn to the final exam and some of the methods with which you can approach studying. We hope that you have had as much fun moving your way through this course as we did creating it.
The Information Revolution
We began this course in Module 1 by reflecting on the idea of the “disruptive information age”. There is a lot of hype and discourse suggesting that today we are in an unprecedented age where everything is different from and better than what came before, and that we are living through a revolution the likes of which humanity has never seen. But, as I argued, once we begin thinking historically, a very different picture emerges.
In the previous modules, we were introduced to many of the earlier technologies that had similar rhetoric around disruption and innovation: the printing press, the telegraph, the phone, and the conceptual idea of hypertext. Hopefully, we have now been able to complicate and form a more complete picture of what we mean by the “information age” or the “information revolution”. Remember, if you are calling something a revolution, you are making a historical claim!
At the beginning of the course, you were asked to consider whether it makes sense to speak of an information revolution before we understand where we have come from. Now, after the entire semester of content, you are ready to begin to reflect on this: Do you think it’s proper to understand what’s going on as “the” information revolution? Or is it just “an” information revolution? Or is it not really a revolution at all?
Figure 1: Our connected world: “the” information revolution? Jakarin2521/iStock/Getty Images
The Value of History
In my opening remarks in Module 1, I introduced a quotation from ex-Uber engineer Anthony Levandowski, who said the following in a New Yorker article entitled “Did Uber Steal Google’s Intellectual Property?”
I don’t even know why we study history. It’s entertaining, I guess — the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn’t really matter. You don’t need to know that history to build on what they made. In technology, all that matters is tomorrow.
(Levandowski, 2018)
Module 12 Group Discussion Activity
In your final discussion activity, reflect on the following, and post your responses in the Information Revolution? Discussion Topic.
First, do you think it is proper to understand our contemporary moment as “the” information revolution?
And, do you think we need history to understand today – and tomorrow?
For detailed instructions about how to participate, see the Group Discussion Activities page. Be sure to post your responses by the date indicated in the Course Schedule.
We’ve Come a Long Way…
We have covered a lot of ground in this course. We began with the earliest forms of recorded human “conversation” in the caves of Spain, Argentina, and Indonesia, moved to inscriptions on stone in Mesopotamia, and then on to the Gutenberg Press and printing on parchment and vellum. It was with Gutenberg that we began to muse about the potential of a “revolution”, where we could begin to think of human history as divided into two epochs: before the printing press and after it.
Figure 2: Timeline showing the pivotal role of the printing press (1455) in human history. Buenaventuramariano/iStock/Getty Images, Nadiinko/iStock/Getty Images
We also looked at various methods of transmitting information: first through vision (the optical telegraph, long lines of towers stretching across France and England) and sound (loud bangs when clocks were synchronized), and then of course through electricity. The telegraph would eventually connect much of the world, with transatlantic and transpacific cables linking regions that had previously taken weeks to reach.
And, then, of course – the “Internet” part of the course. As information density began to grow dramatically, ideas of hypertext began to dramatically challenge the linearity of text, allowing us to think conceptually about how computers could present and connect information in novel ways. In some ways, much of this potential saw fruition through the ARPANET, but ultimately through the ideas of the World Wide Web, which has reshaped our world – in a truly global fashion, as we have seen – in the few decades since its availability in 1991. In turn, all of this is further transforming how we as a society understand ourselves, as we record far more historical information than ever before.
Figure 4. Where telegrams took minutes to send, today we take it for granted that a smartphone can communicate almost instantly with people all over the world. Warchi/E+/Getty Images
Our Lives Have Changed
Figure 5: Edgar Allan Poe. ivan-96/iStock/Getty Images
The last thing I want to leave you with, before switching gears to talk about some of the summative activities we will be doing to get ready for the final exam, is an example of the sheer quantity of data that’s now being generated every single day.
Let’s consider Edgar Allan Poe (1809-1849), best known for poems like “The Raven”. He is one of the most celebrated and influential American writers of the nineteenth century. If you are a literary scholar and you want to study Poe, you have 422 letters written by him from which to reconstruct his life.
Just think. Who here has “published” more than he has? All of you. You all publish more than Edgar Allan Poe, and will leave more behind, and in theory, you have the ability to reach more people than him. You live in a world dominated by communication: through text, through images, through videos, and beyond. Of course, we might close by asking: Is that a good thing?
Figure 6: Next time you worry about how much time you spend on your phone, just think of yourself as adding to our collective historical record! skynesher/E+/Getty Images
12b. Preparing for the Final Examination
As we move into examination season, I want to take this opportunity to introduce what the exam will look like.
EDITED: Due to COVID-19, we are doing a TAKE-HOME EXAM! There will be one question worth 100%. I will give you three options for the take-home exam, and you will have to answer one of the three.
I will leave the details about Question 1 and the exam study guide below, as they may still help you review and think about the course. But you do not need to study in the same way: you can simply draw on the course material to write your final essay.
The original exam would have had two questions, each worth 50%:
You will be presented with six terms for Question 1, and you will need to describe two .
You will have three options for Question 2, and you will need to answer one .
Think of this final exam as an opportunity to show off your knowledge of the course – if you have been regularly reading the modules, engaging with the readings, and actively discussing them and other ideas in the discussion forums, then with a bit more studying you should be ready to show off all the hard work you have put into this course.
Let’s talk about the two questions in turn.
Figure 1. “Exams are finally over and you won’t face this sight for at least 4 more months, Warriors. Feeling relieved and ready for the holidays?” (Twitter, n.d.)
Question 1
Question 1 will read as follows:
Pick two (and only two) of the following terms for this question. For each, you need to describe what the term is:
what/who was it,
when or where did it happen (if applicable), and
just as importantly, you need to discuss its (or his/their) significance with respect to the long history of the Internet. Why does it matter?
Then there will be a list of six terms, such as the “Gutenberg Press” or “SPAM”. Because we are nice, you will have seen all of the possible terms that can appear on the exam – if you study them all (or enough of them, I suppose) you will be ready to rock this part of the exam (refer to the Exam Study Guide below).
You will notice that there is no word limit or suggested length. That is because it is hard for us to tell you how much to write: you all take more or fewer words to say the same thing, and we don’t want you worrying about counts and length on the exam. Some answers will be longer than others, but basically we would like you to tell us everything you know about the concept.
At a minimum, you should aim to have two paragraphs . One on what the concept is, and one on whether/why it matters.
Because it is worth 50% of the exam, you should spend about half of your allotted time on Question 1.
Question 2
Question 2 will read as follows:
Answer one (and only one) of the following questions in essay form. These questions should provide an opportunity for you to show off your knowledge. Make sure to have a thesis.
This is the final essay you will write in this course!
You will NOT know the potential questions in advance. This isn’t because we are mean, but rather because we don’t want you to spend the exam study period memorizing an essay that you simply recite on the day of the final exam – memorized answers also raise our grading standards considerably, so trust me, coming in blind is better. You will be able to choose one of the three questions asked. They will be big-picture questions that cover at least two or three of the lectures, and will give you an opportunity to show off your knowledge.
You do not need to provide references as you write this essay. In general, we will know where you got your information. If in doubt, you can mention a source, for example noting “Berners-Lee argued that” or even “The person who invented the World Wide Web wrote that” if you forget the name. We are not interested in tripping you up on names and dates, but rather in getting a sense of your overall grasp of course content.
Suggestions for Question #2
Outlining is your friend. One strategy is to pick your question, get a scratch page in your exam book, and begin to outline what you might argue. While it will take five or ten minutes up front, it can be a lifesaver as you move forward. Of course, you will all have your individual exam studying strategies.
Exam Study Guide
An exam study guide has been provided below with a list of all of the potential terms that might appear in Question 1. There are 20-25 terms provided, of which 6 will appear on the final exam.