STV 302: Information Technology and Society
Scott Campbell
Estimated study time: 40 minutes
Sources and References
Primary texts — Langdon Winner, The Whale and the Reactor (1986); Wiebe Bijker, Thomas Hughes, and Trevor Pinch, The Social Construction of Technological Systems (1987); Bruno Latour, Reassembling the Social (2005); Sheila Jasanoff, The Ethics of Invention (2016); Safiya Umoja Noble, Algorithms of Oppression (2018); Cathy O’Neil, Weapons of Math Destruction (2016); Shoshana Zuboff, The Age of Surveillance Capitalism (2019); Sally Wyatt, “Technological Determinism Is Dead; Long Live Technological Determinism” in The Handbook of Science and Technology Studies (2008); Batya Friedman and David Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination (2019).
Supplementary texts — Neil Postman, Technopoly: The Surrender of Culture to Technology (1992); Evgeny Morozov, To Save Everything, Click Here (2013); Ruha Benjamin, Race After Technology (2019); Virginia Eubanks, Automating Inequality (2018); Kate Crawford, Atlas of AI (2021); Sean Cubitt, Finite Media: Environmental Implications of Digital Technologies (2017); Steven Shapin and Simon Schaffer, Leviathan and the Air-Pump (1985); Judy Wajcman, Pressed for Time: The Acceleration of Life in Digital Capitalism (2015).
Online resources — Stanford Encyclopedia of Philosophy entries on Philosophy of Technology and Values in Design; MIT OpenCourseWare STS lecture materials; Association of Internet Researchers (AoIR) ethics guidelines; UNESCO reports on information and communication technologies; World Economic Forum reports on digital divides.
Chapter 1: General Problems — The Interaction Between IT and Society
1.1 Why Study Information Technology and Society?
The study of information technology (IT) and society sits at the intersection of several disciplines: computer science, sociology, philosophy, political science, communication studies, and environmental science. This interdisciplinary character reflects a fundamental insight: technology is never merely technical. Every technological artifact emerges from a social context, embodies particular values, and reshapes the world in ways that extend far beyond its intended function.
Consider the smartphone. On one level, it is an engineering achievement — a miniaturized computer combining processors, sensors, radios, and screens. On another level, it is a social object that has restructured how billions of people communicate, work, consume media, navigate cities, and understand themselves. It has created new industries and destroyed old ones, enabled new forms of political organizing and new modes of surveillance, connected remote communities and deepened certain inequalities. Understanding any of these dimensions in isolation gives an incomplete picture. The field of Science, Technology, and Society (STS) provides frameworks for studying these entanglements systematically.
Information technology in this course refers broadly to systems for creating, storing, processing, and communicating information through electronic or digital means. This includes hardware (computers, servers, network infrastructure, sensors), software (operating systems, applications, algorithms), networks (the internet, cellular systems, satellite links), and the data they generate and consume. But it also includes the organizations, institutions, norms, and practices that surround these technical systems.
1.2 Framing the Relationship: Determinism, Construction, and Co-Production
One of the oldest and most consequential debates in thinking about technology concerns the direction of influence between technology and society. Three broad positions structure this debate.
Technological determinism holds that technology is the primary driver of social change. In its strongest form, this view claims that technological development follows its own internal logic and that society must adapt to it. Marshall McLuhan’s famous dictum “the medium is the message” captures a version of this idea: the form of a communication technology shapes human experience more profoundly than any particular content conveyed through it. Sally Wyatt has argued that technological determinism, despite being repeatedly critiqued, persists because it captures something real about the felt experience of living through rapid technological change. People routinely speak as though “technology” were an autonomous force — “the internet changed everything,” “AI will transform the economy” — and this language reflects genuine experiences of disruption and constraint.
Social constructivism emerged partly as a corrective to determinism. Scholars such as Trevor Pinch and Wiebe Bijker developed the Social Construction of Technology (SCOT) framework, which insists that technologies do not develop along a single inevitable path. Instead, different social groups — engineers, users, regulators, marketers — interpret a technology differently, and the “successful” design is the one that achieves closure among relevant groups, not the one that is technically superior in some objective sense. Bijker’s study of the bicycle is a canonical example: the design of the bicycle was not settled by engineering logic alone but by the competing demands of racers (who wanted speed), women riders (who needed designs compatible with Victorian dress), and safety advocates (who feared the high-wheeled penny-farthing).
Co-production, a concept developed by Sheila Jasanoff, attempts to move beyond both determinism and constructivism. Jasanoff argues that scientific knowledge, technological systems, and social order are produced together — each is constitutive of the other. The way a society organizes knowledge production shapes the technologies it builds, and the technologies it builds reshape social arrangements, which in turn affect what counts as knowledge. This is not a compromise position between determinism and constructivism but a fundamentally different ontological claim: the technical and the social are not two separate domains that influence each other but are always already entangled.
1.3 Key Analytical Concepts
Several concepts recur throughout the study of IT and society and will be developed in subsequent chapters.
Affordances refer to the possibilities for action that an artifact offers to users. The concept, borrowed from ecological psychology, helps analysts describe what a technology makes easy, difficult, or impossible without falling into determinism. Twitter’s 280-character limit affords brevity and rapid sharing; it does not determine that political discourse will become shallow, but it does make certain communicative practices more likely than others.
Values are embedded in technologies through design decisions. Langdon Winner’s landmark essay “Do Artifacts Have Politics?” (1980) argued that some technologies are inherently political: they embody and enforce particular arrangements of power. His example of Robert Moses’s low-clearance overpasses on Long Island — allegedly designed to prevent buses (and thus low-income, predominantly Black, riders) from reaching Jones Beach — illustrates how physical infrastructure can enact social exclusion. Whether or not the historical details of Winner’s example are fully accurate, the conceptual point stands: design is never neutral.
Infrastructure refers to the large-scale technical systems that underlie daily life and typically become visible only when they fail. Susan Leigh Star and Geoffrey Bowker’s work on classification systems and infrastructure emphasizes that these systems are deeply political: they determine what gets counted, who gets served, and whose experiences are made legible.
Chapter 2: What Is Information Technology?
2.1 Defining IT: Beyond Computers
While common usage often equates “information technology” with digital computers and the internet, a broader perspective is essential. Writing, printing, telegraphy, telephony, radio, and television are all information technologies. Each of these, upon its introduction, provoked debates strikingly similar to current debates about digital technology: concerns about information overload, anxieties about the erosion of established authority, hopes for democratic empowerment, fears about moral corruption.
The invention of the printing press in fifteenth-century Europe, for example, disrupted the Catholic Church’s monopoly on textual interpretation, enabled the Reformation, and eventually contributed to the rise of modern science and the nation-state. As Neil Postman argued in Technopoly, each new information technology does not simply add something to the culture; it changes the culture’s relationship to information itself.
What distinguishes contemporary digital IT from its predecessors is a set of specific technical properties:
| Property | Description |
|---|---|
| Digitality | Information is encoded as discrete binary values (0s and 1s), enabling perfect copying, lossless transmission, and computational processing |
| Programmability | Digital systems can be reconfigured through software, making them general-purpose in a way earlier technologies were not |
| Networked connectivity | The internet and related protocols allow digital devices to communicate globally at negligible marginal cost |
| Data intensity | Digital systems generate vast quantities of data as a byproduct of their operation, enabling new forms of analysis, prediction, and control |
| Speed and scale | Computation and communication occur at speeds and scales that qualitatively change what is possible |
These properties interact to produce the phenomena that define the contemporary information landscape: social media, algorithmic decision-making, big data analytics, the platform economy, the Internet of Things, and artificial intelligence.
2.2 Hardware, Software, and Systems
A complete analysis of IT requires attention to multiple layers. Hardware includes the physical devices and infrastructure: silicon chips, fiber-optic cables, cell towers, data centers, satellites, undersea cables, and the devices people hold in their hands. Hardware has material costs — it requires minerals, energy, water, and labor to produce, operate, and dispose of. These material costs are examined in Chapter 7.
Software is the set of instructions that tells hardware what to do. Software operates at multiple levels: operating systems manage hardware resources; applications provide functionality to users; algorithms process data according to specified rules. Because software is abstract and easily copied, it is tempting to think of it as immaterial. This is misleading. Software requires hardware to run, and the design choices embedded in software have profound material and social consequences.
Systems are combinations of hardware, software, data, people, and institutions organized to accomplish particular purposes. A hospital’s electronic health records system is not just a database and an interface; it includes the data standards used to encode diagnoses, the workflow expectations built into the interface, the training protocols for staff, the regulatory frameworks governing data privacy, and the economic incentives shaping what gets documented and how. Understanding IT requires analyzing systems, not just components.
2.3 The Internet and the Web
The internet is a global network of networks, connected through standardized protocols (principally TCP/IP) that allow diverse hardware and software to communicate. Its origins in the U.S. Department of Defense’s ARPANET project illustrate the co-production of technology and social order: military funding, academic research cultures, and countercultural ideals of decentralization and openness all shaped the internet’s architecture.
The World Wide Web, developed by Tim Berners-Lee at CERN in 1989, is a system of interlinked hypertext documents accessed via the internet. The web is often confused with the internet itself, but they are distinct: the internet is infrastructure; the web is an application running on that infrastructure. Other applications — email, file transfer, streaming video, messaging apps — also run on the internet.
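This layering can be made concrete in a few lines of code. The sketch below is illustrative, assuming only Python's standard library and network access to a public web server: it first opens a raw TCP connection, which is the internet's job, and then speaks HTTP over that connection, which is the web's job. The same socket plumbing could just as easily carry email or file-transfer protocols.

```python
# Layering sketch: TCP/IP (internet infrastructure) carries HTTP (a web
# application protocol). Requires network access to a public web server.
import socket

HOST = "example.com"   # any public web server works; used here for illustration

# Infrastructure layer: open a TCP connection (IP routing plus TCP reliability).
with socket.create_connection((HOST, 80), timeout=10) as conn:
    # Application layer: speak HTTP over that connection. Other applications
    # (email, file transfer, streaming) run different protocols over the same plumbing.
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    conn.sendall(request.encode("ascii"))

    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:
            break
        chunks.append(data)

# Print the start of the HTTP response (status line and headers).
print(b"".join(chunks)[:300].decode("ascii", errors="replace"))
```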
The contemporary web is dominated by platforms: large-scale digital services (Google, Meta, Amazon, Apple, Microsoft, and their counterparts in other countries) that mediate interactions between users, advertisers, developers, and content creators. Platforms are not neutral intermediaries; they shape interactions through their design, algorithms, terms of service, and business models. The platform economy raises fundamental questions about power, competition, labor, privacy, and public discourse.
Chapter 3: Is Everything Online? — Digital Divides and Access
3.1 The Myth of Universal Access
Popular discourse often speaks as though “everyone” is online, as though the internet has become a universal medium. This assumption is false. As of the mid-2020s, roughly one-third of the world’s population has never used the internet. Even among those who are nominally “connected,” the quality of access varies enormously. The concept of the digital divide captures these disparities.
The digital divide was initially framed as a binary: people were either online or offline. This first-level divide concerned physical access to hardware and connectivity. It mapped onto existing social inequalities — income, geography, age, gender, disability, race, and ethnicity. Rural communities, low-income populations, elderly people, people with disabilities, and populations in the Global South were (and remain) disproportionately disconnected.
3.2 Beyond Access: The Second and Third Digital Divides
Scholars soon recognized that physical access alone was insufficient. The second-level digital divide concerns differences in skills and usage patterns. Even when people have access to the same hardware and connectivity, they use IT differently depending on their education, digital literacy, social networks, and cultural context. Some users engage in creative, empowering, and economically productive activities online; others are limited to passive consumption. Jan van Dijk’s framework identifies four sequential types of access: motivational (wanting to use the technology), material (having the hardware and connectivity), skills (knowing how to use it effectively), and usage (actually employing it for beneficial purposes).
The third-level digital divide concerns differences in outcomes: even among people with similar access and skills, the benefits derived from IT use differ. A well-connected professional using LinkedIn to build a network derives very different benefits than a gig worker whose labor is monitored and controlled through a platform app. The same technology can simultaneously empower some users and exploit others.
3.3 Infrastructure and Inequality
Digital divides are not natural phenomena; they are products of policy decisions, market structures, and historical inequalities. The deployment of broadband infrastructure, for example, follows the logic of profit maximization: telecommunications companies invest in areas with high population density and high income, neglecting rural and low-income areas. Public policy can counteract this logic — as demonstrated by municipal broadband initiatives, universal service obligations, and public investment in connectivity — but such interventions are always contested.
The global dimension of the digital divide reflects and reinforces broader patterns of global inequality. The infrastructure of the internet is not evenly distributed: submarine cables, internet exchange points, data centers, and content delivery networks are concentrated in North America, Europe, and East Asia. Much of the world’s internet traffic routes through a small number of chokepoints, creating dependencies and vulnerabilities for countries in the Global South.
Chapter 4: Disinformation and Epistemic Challenges
4.1 Defining Key Terms
The contemporary information environment has made disinformation a central concern for scholars, policymakers, and the public. Precision in terminology matters:

| Term | Definition |
|---|---|
| Disinformation | False information created or spread deliberately to deceive or cause harm |
| Misinformation | False or misleading information shared without the intent to deceive |
| Malinformation | Genuine information shared out of context, or strategically, to cause harm |
These categories are not always neatly separable. A piece of content may begin as deliberate disinformation, be picked up and shared in good faith as misinformation, and be weaponized through selective contextualization as malinformation.
4.2 The Political Economy of Disinformation
Disinformation is not new — propaganda is as old as politics — but digital IT has transformed its production, distribution, and consumption in several ways.
First, the cost of production has plummeted. Creating convincing false content once required the resources of state propaganda apparatuses or major media organizations. Today, a single individual with a smartphone can create and distribute false narratives to millions. Generative AI tools have further reduced the cost and skill required to produce convincing text, images, audio, and video.
Second, platform architectures amplify disinformation. Social media platforms are designed to maximize engagement, and false, sensational, and emotionally provocative content tends to generate more engagement than accurate, nuanced content. Algorithmic recommendation systems, optimized for clicks and time-on-platform, can create filter bubbles and echo chambers that expose users primarily to content that confirms their existing beliefs.
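A toy illustration, with entirely hypothetical posts and engagement scores, shows how the optimization target shapes what a feed surfaces: the same pool of content looks very different when ordered by recency than when ordered by predicted engagement.

```python
# Toy feed ranking with hypothetical posts and invented engagement scores.
# Ordering by a predicted-engagement score, rather than by recency, promotes
# whatever the engagement model scores highly, regardless of accuracy or value.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    age_hours: float
    predicted_engagement: float   # stand-in for a learned click/dwell-time model

posts = [
    Post("Nuanced explainer on the local budget", 1.0, 0.05),
    Post("Outrage-bait rumour about a rival group", 6.0, 0.60),
    Post("A friend's vacation photos", 2.0, 0.20),
    Post("Correction of yesterday's false claim", 3.0, 0.04),
]

chronological = sorted(posts, key=lambda p: p.age_hours)
engagement_ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print("Chronological:", [p.text for p in chronological])
print("Engagement:   ", [p.text for p in engagement_ranked])
```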
Third, the epistemic authority of traditional gatekeepers — journalists, scientists, experts, institutions — has been challenged. The democratization of publishing that the internet enables is genuinely valuable, but it also means that authoritative and non-authoritative sources compete on increasingly equal terms for attention. The result is what scholars have called an epistemic crisis: not just disagreement about facts but disagreement about how to determine facts, whom to trust, and what counts as evidence.
4.3 Responses and Their Limitations
Responses to disinformation include fact-checking organizations, platform content moderation policies, media literacy education, and regulatory proposals. Each has significant limitations.
Fact-checking is valuable but faces scale problems: the volume of false content vastly exceeds the capacity of fact-checkers. Moreover, research suggests that fact-checks have limited persuasive power, especially among those most committed to the false beliefs being corrected. The backfire effect — the phenomenon in which corrections can actually strengthen belief in the original falsehood — has been documented, though its prevalence and magnitude are debated.
Content moderation by platforms raises questions about power and accountability. Private companies making decisions about what speech is permissible exercise enormous influence over public discourse, yet they operate with limited transparency, inconsistent standards, and minimal democratic accountability. The global reach of platforms compounds the problem: content moderation policies developed primarily by American companies in Silicon Valley are applied to vastly different cultural, political, and linguistic contexts.
Media literacy education aims to equip individuals with the skills to critically evaluate information. While valuable, this approach risks placing the burden on individuals to navigate a structurally dysfunctional information environment. The problem is not only that people lack skills but that the systems within which they seek information are designed in ways that systematically undermine informed judgment.
Chapter 5: IT and Values
5.1 Values in Design: The Value Sensitive Design Framework
Technologies are not value-neutral instruments that can be used for good or ill depending on the intentions of their users. Values are embedded in technologies through the design process, often unconsciously. The Value Sensitive Design (VSD) framework, developed by Batya Friedman and colleagues, provides a systematic methodology for identifying and addressing human values throughout the design process.
VSD employs three types of investigation:

Conceptual investigations identify the direct and indirect stakeholders of a design and analyze the values at stake, including tensions among them (for example, privacy versus security, or autonomy versus safety).

Empirical investigations use social-science methods to study how stakeholders actually understand, prioritize, and experience those values in specific contexts of use.

Technical investigations examine how existing technical features support or hinder particular values, and how alternative designs might support them better.
VSD integrates these three types of investigation iteratively throughout the design process. It does not prescribe which values should prevail but insists that values be made explicit and subjected to deliberation.
5.2 Winner’s Political Artifacts
Langdon Winner’s argument that artifacts have politics remains one of the most influential contributions to thinking about values in technology. Winner distinguishes two ways in which technologies can be political.
First, some technologies are inherently political: they are strongly compatible with, or even require, particular forms of social organization. Winner argues that nuclear power, for example, requires centralized, hierarchical, and secretive institutional arrangements for its safe management, making it inherently authoritarian in tendency. Solar power, by contrast, is compatible with (though does not guarantee) more decentralized and democratic arrangements.
Second, some technologies are politically designed: they are configured in ways that serve particular interests, even though the technology itself could have been designed differently. The Moses overpasses example belongs to this category. More contemporary examples abound: algorithms that systematically disadvantage certain demographic groups, interfaces designed to manipulate users into giving up privacy (dark patterns), and platform architectures that concentrate power and profit while distributing risk and cost.
5.3 Latour and Actor-Network Theory
Bruno Latour’s Actor-Network Theory (ANT) offers a different framework for understanding the relationship between technology and values. ANT refuses the distinction between human and non-human actors, insisting that agency is distributed across networks that include people, machines, texts, institutions, and natural phenomena. A speed bump, in Latour’s analysis, is not merely a passive object that humans have placed in a road; it is an actor that shapes behavior — a “sleeping policeman” that enforces a speed limit more effectively than a sign could.
ANT’s symmetrical treatment of humans and non-humans is controversial. Critics argue that it obscures morally relevant differences between people and things, and that it can deflect attention from power and inequality. But ANT’s insistence on tracing associations — on following the actors rather than starting from predetermined categories — has proven enormously productive for analyzing the complex networks through which IT operates.
5.4 Surveillance and Privacy
The question of values in IT design becomes especially urgent in the domain of surveillance. Shoshana Zuboff’s concept of surveillance capitalism describes a new economic logic in which the systematic collection and analysis of behavioral data becomes the primary source of profit. In Zuboff’s analysis, companies like Google and Facebook do not simply collect data to improve their services; they extract behavioral surplus — data beyond what is needed for service improvement — and use it to create prediction products that are sold to advertisers and other clients.
Surveillance capitalism, Zuboff argues, represents a fundamental threat to human autonomy. It operates through asymmetries of knowledge: companies know vastly more about users than users know about companies, and this knowledge is used not merely to predict behavior but to shape it. The shift from prediction to behavioral modification — using data-driven techniques to nudge, prompt, and manipulate people toward desired behaviors — represents what Zuboff calls an “unprecedented” form of power.
Privacy, in this context, is not simply a matter of controlling personal information. It is a precondition for autonomy, dignity, and democratic participation. When every action generates data that is collected, analyzed, and used to influence future action, the boundary between self-determination and external manipulation becomes difficult to locate.
Chapter 6: IT and Fairness
6.1 Algorithmic Bias: Definitions and Examples
As algorithmic systems increasingly make or inform consequential decisions — who gets a loan, who gets hired, who gets parole, who sees which advertisements, who gets flagged for additional security screening — the question of algorithmic fairness has become urgent.
Algorithmic bias refers to systematic and unfair discrimination in the outputs of algorithmic systems. Bias can enter these systems at multiple points: through the data used to train them, through the design choices made by their creators, through the ways they are deployed, and through the social contexts in which they operate.
6.2 Fairness as a Technical and Social Problem
Cathy O’Neil’s Weapons of Math Destruction popularized the concept of WMDs — widespread, mysterious, and destructive mathematical models that encode prejudice, reinforce inequality, and resist accountability. O’Neil identifies several features that make algorithms dangerous: they operate at scale, they are opaque (often protected as trade secrets), and they create feedback loops (for example, predictive policing algorithms direct police to neighborhoods with high recorded crime rates, which generates more arrests in those neighborhoods, which confirms the algorithm’s predictions).
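The feedback-loop mechanism can be made concrete with a stylized simulation. The sketch below uses invented numbers and, by construction, identical underlying crime rates in two neighbourhoods; because patrols are allocated in proportion to recorded incidents, the neighbourhood that happens to start with more records keeps receiving more patrols, and the extra records it accumulates appear to confirm the allocation.

```python
# Stylized simulation of the feedback loop (illustrative numbers; the two
# neighbourhoods have identical underlying crime rates by construction).
# Patrols are allocated in proportion to *recorded* incidents, and each patrol
# records an incident with the same underlying probability in both places,
# so the initial disparity in records reproduces itself.
import random

random.seed(0)

TRUE_RATE = 0.10                     # same underlying rate in both neighbourhoods
recorded = {"A": 12, "B": 8}         # A happens to start with more recorded incidents
TOTAL_PATROLS = 100

for year in range(10):
    total_records = sum(recorded.values())
    patrols = {n: round(TOTAL_PATROLS * recorded[n] / total_records) for n in recorded}
    for n in recorded:
        recorded[n] += sum(random.random() < TRUE_RATE for _ in range(patrols[n]))
    print(f"year {year}: patrols={patrols}, cumulative records={recorded}")
```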
Safiya Umoja Noble’s Algorithms of Oppression focuses specifically on how search engines — particularly Google — reproduce and amplify racism and sexism. Noble demonstrates that Google’s search results for terms related to Black women historically returned pornographic and degrading content, reflecting and reinforcing racist and sexist stereotypes. These results were not the product of explicit programming but emerged from the interaction between search algorithms, advertising markets, and the broader culture of white supremacy.
6.3 Defining Fairness: Competing Criteria
One of the central challenges in algorithmic fairness is that there is no single, universally accepted definition of fairness. Several competing mathematical criteria have been proposed:
| Fairness Criterion | Definition |
|---|---|
| Demographic parity | The proportion of positive outcomes should be equal across groups |
| Equalized odds | True positive rates and false positive rates should be equal across groups |
| Predictive parity | The positive predictive value (precision) should be equal across groups |
| Individual fairness | Similar individuals should receive similar outcomes |
| Counterfactual fairness | An individual’s outcome should be the same in a counterfactual world where their protected attribute had been different |
As the COMPAS case illustrates (a recidivism risk-assessment tool whose developers defended it as satisfying predictive parity while critics showed it produced higher false positive rates for Black defendants), these criteria are generally incompatible: satisfying one often requires violating another. This is not a technical problem to be solved but a reflection of genuine moral disagreements about what fairness requires. Different fairness criteria embody different philosophical commitments — to equality of outcome, equality of treatment, or equality of process — and choosing among them is a political and ethical decision, not a mathematical one.
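A minimal sketch, using hypothetical labels and predictions, shows how some of the criteria above are computed and why they conflict: when two groups have different base rates, even an error-free classifier satisfies equalized odds and predictive parity while violating demographic parity.

```python
# Hypothetical labels and predictions for two groups with different base rates.
# The classifier below makes no errors, yet it cannot satisfy demographic
# parity while also satisfying equalized odds and predictive parity.

def group_metrics(labels, preds):
    tp = sum(1 for y, p in zip(labels, preds) if y and p)
    fp = sum(1 for y, p in zip(labels, preds) if not y and p)
    fn = sum(1 for y, p in zip(labels, preds) if y and not p)
    tn = sum(1 for y, p in zip(labels, preds) if not y and not p)
    pos_rate = sum(preds) / len(preds)                  # demographic parity input
    tpr = tp / (tp + fn) if (tp + fn) else 0.0          # equalized odds inputs
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0          # predictive parity input
    return pos_rate, tpr, fpr, ppv

labels_a = [1] * 6 + [0] * 4    # group A: 6 of 10 truly positive
labels_b = [1] * 3 + [0] * 7    # group B: 3 of 10 truly positive
preds_a = labels_a[:]           # an error-free classifier, for illustration
preds_b = labels_b[:]

for name, labels, preds in [("A", labels_a, preds_a), ("B", labels_b, preds_b)]:
    pos_rate, tpr, fpr, ppv = group_metrics(labels, preds)
    print(f"group {name}: positive rate={pos_rate:.2f}, "
          f"TPR={tpr:.2f}, FPR={fpr:.2f}, PPV={ppv:.2f}")
```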
6.4 Structural Approaches to Algorithmic Justice
Ruha Benjamin’s concept of the New Jim Code draws attention to the ways in which ostensibly race-neutral technologies can function as instruments of racial domination. Benjamin argues that focusing on individual instances of bias or discrimination misses the systemic character of the problem: algorithmic systems operate within, and reinforce, structures of racial capitalism. Addressing algorithmic injustice therefore requires not just technical fixes (debiasing data, adjusting algorithms) but structural transformation of the social arrangements within which these systems operate.
Virginia Eubanks’s Automating Inequality examines how automated decision-making systems — welfare eligibility algorithms, coordinated entry systems for homeless services, predictive risk models in child protective services — disproportionately affect poor and working-class communities. Eubanks argues that these systems function as a digital poorhouse, extending and intensifying long-standing practices of surveillance, punishment, and exclusion directed at the poor.
Chapter 7: Myths of Immateriality
7.1 The Cloud Is Not Ethereal
One of the most pervasive myths about digital technology is that it is immaterial — that digital information exists in a weightless, placeless realm detached from the physical world. The language of technology reinforces this myth: we speak of “the cloud,” of “virtual” reality, of “cyberspace” as though these were non-physical domains. In reality, every digital interaction depends on a vast material infrastructure with significant environmental costs.
Data centers are the physical backbone of cloud computing. These facilities — some the size of several football fields — house thousands of servers that store and process data. They consume enormous quantities of electricity, both for computation and for the cooling systems required to prevent overheating. As of the mid-2020s, data centers account for approximately 1-2% of global electricity consumption, a figure that is growing rapidly as demand for cloud services, streaming video, and AI training increases.
Networks require physical infrastructure: fiber-optic cables (including hundreds of thousands of kilometers of undersea cables), cell towers, satellites, routers, and switches. The construction and maintenance of this infrastructure involves mining, manufacturing, construction, and energy consumption.
Devices — smartphones, laptops, tablets, servers, IoT sensors — require minerals (lithium, cobalt, rare earth elements, tantalum, gold) whose extraction involves environmentally destructive mining operations, often in the Global South. The manufacturing of semiconductors is extraordinarily resource-intensive, requiring vast quantities of purified water, toxic chemicals, and energy. Kate Crawford’s Atlas of AI powerfully documents the material supply chains that underlie AI systems, tracing connections from lithium mines in Nevada and cobalt mines in the Democratic Republic of Congo to the data centers and offices of technology companies.
7.2 E-Waste and the Geography of Disposal
The material lifecycle of IT does not end with manufacture and use. Electronic waste (e-waste) is one of the fastest-growing waste streams globally. When devices are discarded, they become hazardous waste: they contain lead, mercury, cadmium, brominated flame retardants, and other toxic substances. Much of the world’s e-waste is shipped — often illegally — from wealthy countries to poorer ones, where it is processed under dangerous conditions by informal workers, including children.
The geography of e-waste mirrors the geography of extraction: the environmental and health costs of the digital economy are disproportionately borne by communities in the Global South, by poor and marginalized communities, and by future generations. This pattern challenges any narrative that digital technology is “clean” or environmentally benign.
7.3 Energy Consumption and Carbon Emissions
The energy consumption of IT is substantial and growing. Training a single large AI model can consume as much electricity as several homes use in a year and can generate carbon emissions equivalent to multiple transatlantic flights. Streaming video accounts for a significant and growing share of internet traffic and associated energy consumption. Cryptocurrency mining, particularly for proof-of-work systems like Bitcoin, consumes more electricity than many countries.
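Claims of this kind rest on straightforward arithmetic. The back-of-envelope sketch below uses purely illustrative assumed figures (accelerator count, power draw, training time, data-centre overhead, and household consumption all vary widely in practice) to show how such an estimate is assembled, not to characterize any actual system.

```python
# Back-of-envelope arithmetic with purely illustrative assumed figures; real
# training runs and households vary by orders of magnitude. The goal is to
# show how such an estimate is assembled, not to measure any actual system.
NUM_ACCELERATORS = 256            # assumed number of GPUs/TPUs
POWER_PER_ACCELERATOR_KW = 0.4    # assumed average draw per device, in kW
TRAINING_DAYS = 21                # assumed wall-clock training time
PUE = 1.2                         # assumed data-centre overhead (cooling, power delivery)
HOUSEHOLD_KWH_PER_YEAR = 10_000   # rough illustrative annual household consumption

training_kwh = NUM_ACCELERATORS * POWER_PER_ACCELERATOR_KW * TRAINING_DAYS * 24 * PUE
print(f"Estimated training energy: {training_kwh:,.0f} kWh")
print(f"Roughly {training_kwh / HOUSEHOLD_KWH_PER_YEAR:.1f} household-years of electricity")
```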
Sean Cubitt’s Finite Media argues that the environmental costs of digital technology are not incidental but structural: the business models of technology companies depend on constant growth in data generation, processing, and consumption, which in turn requires constant growth in energy and material resource use. Sustainability in the digital economy, Cubitt suggests, requires not just incremental efficiency improvements but fundamental changes in the logic of production and consumption.
7.4 Toward Material Accountability
Recognizing the materiality of digital technology has important implications for policy and practice. It challenges the common assumption that digitization is inherently more environmentally sustainable than physical alternatives (e.g., that e-books are always greener than printed books, or that remote work always reduces carbon emissions). It highlights the need for extended producer responsibility, right-to-repair legislation, and international regulation of e-waste flows. And it connects the politics of IT to broader struggles over environmental justice, resource sovereignty, and climate change.
Chapter 8: Myths of Inevitability
8.1 Technological Determinism Revisited
The myth of inevitability holds that technological development follows a predetermined path and that society must adapt to whatever technologies emerge. This is technological determinism in its popular form: the idea that “you can’t stop progress,” that resistance to new technologies is futile, and that the appropriate response to any technological disruption is to adapt.
This myth serves powerful interests. If technological change is inevitable, then there is no point in democratic deliberation about which technologies to develop, how to deploy them, or whom they should serve. Questions of power, justice, and values are foreclosed: the only relevant question is how to adapt efficiently. Technology companies, in particular, benefit from the perception that their products and business models are inevitable: it discourages regulation, deflects criticism, and frames resistance as backward or irrational.
8.2 Social Construction and Contingency
STS scholarship has thoroughly dismantled the myth of inevitability. The SCOT framework, as discussed in Chapter 1, demonstrates that technologies are shaped by social processes: by the interests, values, and negotiations of diverse social groups. The bicycle, the fluorescent lamp, Bakelite plastic — in each case, Bijker and Pinch show that the “successful” technology was not the only possible outcome but the result of contingent social processes.
Interpretive flexibility is a key concept in SCOT: different social groups interpret the same artifact differently, and these different interpretations lead to different design trajectories. The internet, for example, was interpreted by its early academic users as a tool for open collaboration, by military planners as a resilient communication system, by entrepreneurs as a platform for commerce, and by governments as both an opportunity and a threat. Each interpretation shaped the technology’s development. The internet we have today is not the only internet that could have existed; it is the product of specific historical contingencies.
Closure and stabilization are the processes by which interpretive flexibility is reduced and a dominant interpretation of a technology emerges. Closure can occur through rhetorical strategies (redefining the problem so that the current design appears to solve it), through institutional mechanisms (standards-setting bodies, regulations), or through the sheer momentum of installed base and network effects. Once closure is achieved, a technology can appear inevitable in retrospect, even though it was deeply contingent in its development.
8.3 Path Dependence and Lock-In
The concept of path dependence helps explain why technologies that are not optimal can persist. Once a technology is widely adopted, the costs of switching to an alternative increase: users have invested in learning, complementary technologies have been built around it, institutions have adapted to it. The QWERTY keyboard layout is a frequently cited (if debated) example: whether or not it was originally designed to slow typists down, it has persisted not because it is the best possible layout but because the costs of switching are prohibitive.
Lock-in describes the extreme case of path dependence: a situation in which the costs of switching are so high that a technology becomes effectively permanent, even if superior alternatives exist. Carbon-intensive energy systems, automobile-dependent urban planning, and certain software platforms exhibit lock-in. Understanding lock-in is essential for understanding why “better” technologies do not automatically replace existing ones and why transitions away from harmful technologies are so difficult.
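A stylized adoption model, in the spirit of increasing-returns analyses of path dependence, makes the lock-in mechanism concrete. All numbers below are illustrative assumptions: each new adopter weighs intrinsic quality against the benefit of joining the larger installed base, and with strong network effects an early lead for the intrinsically inferior option becomes self-reinforcing.

```python
# Stylized adoption model (illustrative assumptions only): each adopter picks
# the technology with the higher perceived utility, where utility combines
# intrinsic quality, a network benefit proportional to current market share,
# and a small idiosyncratic taste term.
import random

random.seed(1)

quality = {"Better": 1.0, "Worse": 0.8}     # intrinsic quality of each option
NETWORK_WEIGHT = 2.0                        # strength of network effects
installed = {"Better": 5, "Worse": 20}      # "Worse" happens to start ahead

def utility(tech: str, total: int) -> float:
    return (quality[tech]
            + NETWORK_WEIGHT * installed[tech] / total
            + random.gauss(0, 0.1))         # idiosyncratic taste

for _ in range(5_000):
    total = sum(installed.values())
    choice = max(installed, key=lambda t: utility(t, total))
    installed[choice] += 1

total = sum(installed.values())
print({tech: f"{installed[tech] / total:.0%}" for tech in installed})
```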
8.4 Reclaiming Agency
If technology is not inevitable, then societies have meaningful choices to make about which technologies to develop, how to govern them, and what values they should serve. This does not mean that societies can choose freely — material constraints, existing infrastructure, economic pressures, and power relations all limit the range of feasible options. But it does mean that the fatalistic posture encouraged by the myth of inevitability is both empirically wrong and politically disabling.
Langdon Winner argues that the appropriate response to new technologies is not uncritical acceptance but technological citizenship: informed, deliberative engagement with questions about what technologies a society should build and how they should be governed. This requires democratic institutions capable of meaningful technology assessment, public participation in technology governance, and a broadly shared understanding of the social dimensions of technology.
Chapter 9: Myths of Progress
9.1 Techno-Utopianism and Its Origins
The myth of progress holds that technological development is inherently beneficial — that new technologies automatically improve human welfare, solve social problems, and expand freedom. This belief has deep roots in Enlightenment thought and in the particular historical experience of industrialized societies, where technological change has indeed been associated (unevenly and ambiguously) with improvements in material living standards.
In the context of IT, techno-utopianism has taken several forms. In the 1990s, internet pioneers and commentators proclaimed that the internet would democratize information, empower individuals, flatten hierarchies, and render authoritarian control obsolete. The Californian Ideology — a term coined by Richard Barbrook and Andy Cameron — described the peculiar blend of countercultural libertarianism and free-market capitalism that characterized Silicon Valley’s self-understanding: technology as liberation, disruption as virtue, and regulation as obstacle.
More recently, techno-utopian rhetoric has attached itself to AI, blockchain, and other emerging technologies. Each is presented as a solution to complex social problems: AI will eliminate disease, blockchain will end corruption, social media will spread democracy. The pattern is remarkably consistent across different technologies: extravagant promises, dismissal of concerns, and a persistent inability to distinguish between technical capability and social benefit.
9.2 Techno-Pessimism and Critique
At the opposite pole, techno-pessimism holds that technology is inherently dehumanizing, that it destroys traditional values and social bonds, and that it concentrates power in dangerous ways. Martin Heidegger’s critique of technology as Enframing (Gestell) — a way of revealing the world that reduces everything to standing reserve, available for exploitation — represents one philosophical expression of this view. Jacques Ellul’s concept of technique — the relentless drive toward efficiency that subordinates all other values — represents another.
Contemporary techno-pessimism focuses on specific harms: social media’s effects on mental health (especially among young people), algorithmic discrimination, surveillance, attention manipulation, job displacement through automation, and the environmental costs discussed in Chapter 7. Evgeny Morozov’s critique of solutionism — the ideology that frames complex social and political problems as engineering challenges amenable to technological fixes — is a pointed contemporary expression of skepticism about technological progress.
9.3 Beyond the Binary: Critical Engagement
Neither uncritical utopianism nor blanket pessimism provides an adequate framework for engaging with IT. What is needed is critical engagement: the capacity to analyze specific technologies in their specific social contexts, attending to questions of power, values, distribution, and justice.
Several principles guide critical engagement with IT:
Distributive analysis: Who benefits from a technology and who bears its costs? Technologies rarely affect everyone equally; their benefits and harms are distributed along lines of class, race, gender, geography, and other social divisions. A technology that is net beneficial in aggregate may be deeply harmful to particular communities.
Historical awareness: Current technologies exist in historical context. Understanding the history of earlier information technologies — their promises, their disappointments, their unintended consequences — provides essential perspective on current developments. The claim that “this time is different” should always be met with skepticism, even when it is sometimes true.
Institutional analysis: Technologies operate within institutional contexts that shape their effects. The same technology deployed within different institutional arrangements will produce different outcomes. Algorithmic decision-making tools, for example, may be relatively benign when subject to robust oversight and accountability mechanisms, and deeply harmful when deployed without such safeguards.
Democratic governance: Technologies that affect the public should be subject to democratic governance. This does not mean that every technical decision should be made by popular vote, but it does mean that the fundamental choices about which technologies to develop, how to regulate them, and what values they should serve are properly political questions that should be decided through inclusive, deliberative processes.
9.4 The Responsibility of Designers and Users
The STS perspective insists that technology is the product of human choices and that humans therefore bear responsibility for the technologies they create, deploy, and use. This responsibility extends to designers, who make the initial decisions that embed values in artifacts; to organizations, which deploy technologies in particular contexts for particular purposes; to policymakers, who create the regulatory and institutional frameworks within which technologies operate; and to users, who make choices about which technologies to adopt and how to use them.
This does not mean that responsibility is equally distributed. Designers and deployers of powerful technologies bear greater responsibility than individual users, because they make decisions that affect millions of people and because they possess knowledge and resources that individual users lack. The rhetoric of individual responsibility — “users should read the terms of service,” “people should protect their own privacy” — can function as a deflection from the structural responsibilities of powerful actors.
Chapter 10: Integration and Reflection
10.1 Connecting the Themes
The topics examined in this course are deeply interconnected. The myths of immateriality, inevitability, and progress reinforce one another: if technology is immaterial (and therefore environmentally harmless), inevitable (and therefore beyond democratic control), and inherently progressive (and therefore not requiring critical scrutiny), then there is no reason to question the current trajectory of technological development. Dismantling any one of these myths creates space for questioning the others.
Similarly, the analysis of values in design connects directly to the analysis of algorithmic fairness: bias in algorithms is a specific instance of the broader phenomenon of values embedded in technology. Disinformation connects to questions of platform power, which connects to surveillance capitalism, which connects to questions of democratic governance.
The digital divide is not separate from these concerns but shapes who gets to participate in the debates. If access to IT is unequal, then the communities most affected by algorithmic bias, surveillance, and environmental harm may have the least voice in shaping the technologies that affect them.
10.2 Developing a Personal Technological Perspective
One of the learning outcomes of this course is the development of a personal technological perspective — a coherent, critically informed way of understanding one’s own relationship to IT. This involves several steps:
First, identifying one’s own assumptions about technology. Most people hold some combination of deterministic, constructivist, and co-productionist views without being fully aware of it. Making these assumptions explicit is a prerequisite for examining them critically.
Second, recognizing the values embedded in the technologies one uses daily. Every app, platform, device, and service embodies particular assumptions about what is important, what is normal, and who matters. Learning to read these values — to ask “whose interests does this design serve?” — is a fundamental skill for technological citizenship.
Third, engaging with diverse perspectives. The study of IT and society draws on multiple disciplines and multiple traditions of thought. Engaging seriously with perspectives different from one’s own — including the perspectives of communities most marginalized by current technological arrangements — is essential for developing a well-grounded view.
Fourth, connecting personal experience to structural analysis. Individual experiences with technology — frustration with opaque algorithms, delight at convenient services, anxiety about privacy, dependence on connectivity — are not merely personal but are shaped by the structures analyzed in this course. Connecting personal experience to structural analysis makes it possible to move from individual complaint to collective understanding and, potentially, collective action.
10.3 Research in STS: Asking Good Questions
The final learning outcome of this course concerns the ability to formulate independent research questions and communicate findings. Good research questions in the STS of IT share several characteristics:
They are specific enough to be tractable. “Is AI good or bad?” is not a research question. “How does the use of automated resume-screening tools affect hiring outcomes for applicants with non-Western names in Canadian technology companies?” is.
They are empirically grounded. STS research is not armchair philosophy; it involves gathering and analyzing evidence about how technologies actually work and how they actually affect people.
They are theoretically informed. Good STS research does not just describe what happens; it uses theoretical frameworks — SCOT, ANT, VSD, co-production, surveillance capitalism, algorithmic fairness — to explain why it happens and what it means.
They attend to power and values. STS research asks not just “how does this work?” but “who benefits?” “who is harmed?” “whose values are embedded?” and “who gets to decide?”
They are reflexive. The researcher is not a neutral observer but a participant in the social world being studied. Good STS research acknowledges the researcher’s own position, values, and limitations.
10.4 Looking Forward
The interaction between IT and society will continue to evolve in ways that are difficult to predict. Artificial intelligence, quantum computing, biotechnology, brain-computer interfaces, and other emerging technologies will raise new questions and intensify existing ones. The frameworks and concepts developed in this course — co-production, interpretive flexibility, values in design, algorithmic fairness, the myths of immateriality, inevitability, and progress — provide durable tools for engaging with these developments critically and constructively.
The central lesson of STS is that technology is a human product, shaped by human choices, and therefore amenable to human governance. This is neither an optimistic nor a pessimistic claim; it is a claim about responsibility. If the technologies that shape our world are the products of human decisions, then we are collectively responsible for making those decisions well — with attention to evidence, to diverse perspectives, to justice, and to the interests of those who cannot yet speak for themselves.