CS 492: The Social Implications of Computing

Carmen Bruni

Estimated study time: 1 hr 43 min

Sources and References

Primary textbook — No required textbook; the course is built around curated readings from journalism, academic papers, and professional codes of conduct.

Supplementary texts — Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2016); Neil Postman, Technopoly: The Surrender of Culture to Technology (Vintage, 1993); Shoshana Zuboff, The Age of Surveillance Capitalism (PublicAffairs, 2019); Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s, 2018); Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018).

Online resources — ACM Digital Library (dl.acm.org); IEEE Xplore; Stanford Encyclopedia of Philosophy (plato.stanford.edu); MIT SERC Case Studies (mit-serc.pubpub.org, Creative Commons licensed); Harvard Embedded EthiCS Module Repository (embeddedethics.seas.harvard.edu/modules/); MIT OCW 6.805: Ethics and the Law on the Electronic Frontier; Stanford CS 181: Computers, Ethics, and Public Policy (cs.stanford.edu/people/eroberts/courses/cs181/); UC Berkeley CS 195: Social Implications of Computer Technology (cs195.org); CMU Machine Learning, Ethics, and Society; Harvard Berkman Klein Center for Internet and Society publications; Electronic Frontier Foundation (eff.org); The Brookings Institution technology policy reports; European Data Protection Board (edpb.europa.eu).


Chapter 1: The Social Implications of Computing — An Overview

1.1 Technology as a Faustian Bargain

Every new technology confers benefits and simultaneously exacts costs. Neil Postman, in his influential address Five Things We Need to Know About Technological Change, articulated this as the first principle of technological change: every technology is a Faustian bargain in which advantages are always accompanied by corresponding disadvantages. The automobile granted unprecedented personal mobility but introduced air pollution, urban sprawl, and tens of thousands of annual traffic fatalities. The printing press democratized knowledge yet weakened oral tradition and the authority structures that had sustained medieval society. Computing is no different. The same networks that enable instantaneous global communication also enable mass surveillance, cyber-attacks, and the erosion of privacy.

Postman’s framework provides a powerful lens for evaluating every topic in this course. His five principles are worth internalizing:

  1. Technology is a trade-off. For every advantage a new technology offers, there is a corresponding disadvantage. The question is never simply “What will the technology do?” but always “What will the technology undo?”

  2. Advantages and disadvantages are not distributed equally. New technologies create winners and losers. The personal computer enriched Silicon Valley engineers and venture capitalists, but it also rendered entire categories of clerical and manufacturing work obsolete. When evaluating any computing innovation, we must ask: who benefits, and who pays the price?

  3. Every technology embeds an ideology. Technologies are not neutral tools; they carry implicit assumptions about the world. A database system treats human beings as collections of data fields. A recommendation algorithm treats human preferences as optimizable functions. As Marshall McLuhan argued, “the medium is the message” — the form of a technology shapes what can be communicated through it.

  4. Technological change is ecological, not additive. A new technology does not simply add something to an existing situation; it changes everything. Television did not merely add visual entertainment to American culture; it fundamentally restructured politics, education, religion, and family life. Similarly, the smartphone has not merely given us a portable telephone; it has restructured attention, social interaction, journalism, commerce, and democracy.

  5. Technology tends to become mythic. Over time, human inventions come to be perceived as natural, inevitable features of the world, making them resistant to questioning or criticism. We treat the internet as if it were a force of nature rather than a human creation subject to human choices about governance, access, and design.

1.2 A Brief History of Computing and Society

The relationship between computing and society is not new. As early as 1965, Edward David Jr. and Robert Fano published “Some Thoughts About the Social Implications of Accessible Computing,” arguing that “technical means are now available for bringing computing and information service within easy reach of every individual.” They framed computers as “thinking tools” analogous to power tools, enhancing intellectual capability — but warned that “the daily activities of each individual could become open to scrutiny” and that automation threatened not just income but psychological dignity, as displaced workers lose “self respect” and a sense of contribution. Their proposed solutions — hierarchical file systems to protect privacy, educational reform, and responsible stewardship by the technical community — remain strikingly relevant. A 2016 retrospective confirmed that the 1965 predictions largely held, but that the consequences proved more complex and troubling than anticipated: identity theft, data breaches, and information overload rather than scarcity.

The history of computing’s social impact can be organized into several eras:

The mainframe era (1950s–1970s) introduced computers as instruments of institutional power. Governments and large corporations used them for census processing, payroll, military logistics, and scientific computation. Social concerns centered on depersonalization — the reduction of individuals to punch cards and identification numbers — and the growing power of bureaucratic institutions.

The personal computing revolution (1980s–1990s) democratized access to computation. With it came new concerns: software piracy, the digital divide between those who could afford computers and those who could not, and early questions about the impact of screen time on children and education.

The internet era (1990s–2000s) connected personal computers into a global network. The emergence of the World Wide Web, electronic commerce, and social networking platforms introduced a cascade of new social challenges: online privacy, cybercrime, digital intellectual property, the decline of traditional journalism, the rise of algorithmic gatekeeping, and the emergence of entirely new forms of social interaction and community.

The mobile and platform era (2010s–present) placed powerful networked computers in nearly every pocket. Smartphones, social media platforms, cloud computing, and artificial intelligence have intensified every earlier concern while adding new ones: algorithmic bias, deepfakes, surveillance capitalism, the environmental footprint of data centres, and the existential risks posed by advanced AI systems.

1.3 Unintended Consequences

A recurring theme in the study of computing’s social implications is the phenomenon of unintended consequences. Technologies designed for one purpose routinely produce effects that were neither anticipated nor desired by their creators. Facebook was designed as a social networking platform for college students; it became a vector for political manipulation and a catalyst for genocide in Myanmar. GPS was developed for military navigation; it enabled ride-sharing services, but also stalking via tracking devices. Facial recognition was developed for security and convenience; it has been deployed for mass surveillance by authoritarian regimes and has been shown to exhibit racial and gender bias.

The ACM’s 2018 analysis of unintended consequences provided striking examples. The U.S. ban on Huawei, intended to protect national security, had the unintended effect of leaving Huawei phones running unpatched, insecure Android applications — creating what analysts called a “ticking security time bomb” — while U.S. chip companies lost significant sales and Huawei threatened to close its U.S. research laboratory, eliminating 850 jobs. Hardware trojans comprising just a few hundred transistors among billions in a chip can act as kill switches or data leakage mechanisms in critical infrastructure, including smart power grids. These examples demonstrate that purely technical or purely political thinking about technology is insufficient; careful systemic analysis is needed because interventions in complex technological ecosystems ripple unpredictably.

Computing professionals have a responsibility to anticipate, to the extent possible, the secondary and tertiary effects of the systems they build. This requires not just technical skill but also humanistic imagination — the ability to think beyond the intended use case and consider how a system might be misused, who might be harmed, and what social structures might be disrupted.

1.4 Technological Determinism versus Social Constructivism

Scholars who study the relationship between technology and society generally fall along a spectrum between two poles:

Technological determinism holds that technology is the primary driver of social change. In this view, the invention of the printing press caused the Protestant Reformation, the automobile caused suburbanization, and the internet caused the decline of traditional gatekeepers in media and politics. This perspective emphasizes the autonomous force of technology and tends to view social adaptation to technology as inevitable.

Social constructivism holds that society shapes technology at least as much as technology shapes society. Technologies are not autonomous forces; they are the products of human choices, economic incentives, political structures, and cultural values. The internet did not have to evolve into a platform dominated by advertising-driven business models; that was a consequence of specific regulatory decisions (or the absence of regulation), venture capital incentives, and design choices made by particular companies.

Most contemporary scholars of science, technology, and society (STS) adopt a position between these poles, recognizing that technology and society co-evolve in complex feedback loops. This co-evolutionary perspective is perhaps the most productive framework for analyzing the topics covered in this course: it takes technology’s power seriously without treating social outcomes as inevitable.

1.5 Human Values and Technology Design

A complementary framework to Postman’s critique is the “Declaration of Empowerment” presented at an ACM SIGCAS conference in 1990, which argued that computer technology must be designed to serve human values and empower users rather than de-skill or control them. The declaration proposed a “Social Impact Statement” for major computing projects — paralleling environmental impact statements — and envisioned systems in which users experience “competence, clarity, control, and comfort and feelings of mastery and accomplishment.” The high-level value goals enumerated included peace, excellent healthcare, adequate nutrition, accessible education, freedom of expression, and support for creative exploration. The declaration’s central insight — that the computing profession bears responsibility for proactively embedding human values into system design, rather than treating efficiency and capability as the sole design goals — remains foundational to the field of value-sensitive design.

1.6 Emerging Frontiers

As Geoffrey Hinton, often called the “Godfather of AI,” has warned, the rapid development of artificial intelligence poses novel risks to society that may differ in kind from previous technological transformations. Unlike earlier technologies that augmented human physical capabilities, AI systems are beginning to augment and in some cases replace human cognitive capabilities. This raises profound questions about the future of work, the nature of expertise, the reliability of information, and the distribution of power between humans and machines.

Researchers have also begun to argue that the scope of computing ethics should expand beyond human welfare to include animal welfare, as AI systems increasingly affect non-human species through environmental monitoring, agricultural automation, and habitat modelling.


Chapter 2: The Internet — Architecture, Trust, and Governance

2.1 The Architecture of the Internet

The internet is not a single technology but a layered system of protocols, infrastructure, and services. At its foundation lies a packet-switched network architecture, originally developed as ARPANET by the United States Department of Defense in the late 1960s. The internet’s layered protocol stack — with the Internet Protocol (IP) providing addressing and routing, the Transmission Control Protocol (TCP) providing reliable data delivery, and the Hypertext Transfer Protocol (HTTP) enabling the World Wide Web — was designed with certain values embedded in its architecture.
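
From an application’s point of view, this layering is visible in even the simplest networked program: the application speaks HTTP, while the operating system supplies TCP’s reliable byte stream and IP’s addressing and routing underneath. The following minimal Python sketch (using the illustrative domain example.com) makes the division of labour concrete.

    import socket

    HOST = "example.com"  # application-layer hostname, resolved to an IP address via DNS

    # Transport layer: open a TCP connection (a reliable, ordered byte stream).
    # Network layer: the operating system wraps TCP segments in IP packets and routes them.
    with socket.create_connection((HOST, 80)) as conn:
        # Application layer: speak HTTP over the TCP stream.
        request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        conn.sendall(request.encode("ascii"))

        # Read the raw HTTP response until the server closes the connection.
        response = b""
        while chunk := conn.recv(4096):
            response += chunk

    print(response.decode("utf-8", errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"

Nothing in this exchange identifies the person at the keyboard or vets the content being transferred; those omissions reflect the design choices described below.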

The original design philosophy of the internet embodied a commitment to end-to-end connectivity, decentralization, and openness. Data packets were treated equally regardless of their source, destination, or content. There were no built-in gatekeepers, no central authority approving or filtering content, and no mechanism for identifying users. These design choices reflected the values of the academic and military researchers who built the system, but they also created an environment ripe for both innovation and abuse.

The World Wide Web, built on top of the internet by Tim Berners-Lee in 1989, added a layer of hyperlinked documents accessible through web browsers. The web’s openness fuelled explosive growth, but it also meant that anyone could publish anything — from scholarly research to dangerous misinformation, from creative art to illegal weapons designs.

2.2 Trust on the Internet

The original internet was built on a foundation of institutional trust among a small community of researchers. As Kevin Kelly argued in “The Web Runs on Love, Not Greed,” the early web was sustained by a spirit of voluntary cooperation, open-source development, and gift-economy sharing of knowledge. However, as the internet expanded to billions of users and became the backbone of global commerce, the question of trust became acute.

As Ananda Mitra argued in “Trust, Authenticity, and Discursive Power in Cyberspace” (2002), the internet has redistributed discursive power by enabling marginalized people to speak for themselves rather than being spoken for by the powerful. However, this redistribution complicates questions of trust, since in cyberspace “how something is said and the fact that something can be said at all, could become more powerful than what is being said.” Mitra identified a fundamental tension: cyberspace can be seen either as a place where “no one can be trusted and nothing is authentic,” or as an environment where voices can be validated through dialectical examination of multiple perspectives.

Trust on the internet operates at multiple levels:

Infrastructure trust concerns whether the underlying systems are reliable and secure. Can users trust that their communications will not be intercepted? That their data will not be lost? That the systems they depend on will remain operational? Events such as the Capital One data breach (2019), in which a former Amazon Web Services employee exploited a misconfigured firewall to access the personal data of over 100 million customers, demonstrate the fragility of infrastructure trust.

Content trust concerns whether the information encountered online is accurate and authentic. The problem of misinformation — false information spread without malicious intent — and disinformation — false information deliberately created and spread to deceive — has become one of the defining challenges of the internet age. The PizzaGate conspiracy theory, which originated on internet forums and led a man to fire an assault rifle in a Washington, D.C. pizza restaurant, illustrates the real-world consequences of online misinformation.

Research on “the state of fakery” has documented how AI-powered tools are rapidly increasing the ability to create convincing fake audio, video, and images, outpacing the ability to detect such fakes. Adobe’s VoCo, a “Photoshop of speech,” demonstrated the ability to edit recorded speech to replicate and alter voices. Face2Face, developed by researchers at the University of Erlangen-Nuremberg, Max Planck Institute, and Stanford, performs real-time video reenactment — tracking one person’s facial expressions and translating them onto another person’s face with photorealistic results. As digital forensics expert Hany Farid warned, “More and more, we’re living in a digital world where that underlying digital media can be manipulated and altered, and the ability to authenticate is incredibly important.”

Institutional trust concerns whether the companies and organizations that mediate online interaction can be trusted to act in users’ interests. Surveys consistently show declining public trust in internet companies and platforms. The BBC reported that trust in the internet is “now missing” among significant portions of the population, while TIME magazine noted that “almost everyone doesn’t trust the internet” according to polling data.

2.3 The Power of Platforms

A small number of technology companies — principally Google (Alphabet), Amazon, Meta (Facebook), Apple, and Microsoft — exercise enormous power over the internet ecosystem. These companies function as digital gatekeepers, controlling access to information, commerce, communication, and entertainment for billions of people.

Google processes over 8.5 billion searches per day, making it the primary intermediary between internet users and information. Nicholas Carr’s influential essay “Is Google Making Us Stupid?” argued that the cognitive habits fostered by internet search — skimming, scanning, and jumping between sources — are eroding our capacity for deep, sustained reading and contemplation. Carr drew on historical examples, noting that Socrates worried about the effects of writing on memory, and that Friedrich Nietzsche’s prose style changed when he began composing on a typewriter. The deeper concern is not that any particular technology is harmful, but that each new medium reshapes the cognitive processes of its users in ways that are often invisible.

Google’s dominance raises additional concerns about the right to be forgotten — the principle, recognized in European law, that individuals should be able to request the deletion of personal information from search results when that information is outdated, irrelevant, or harmful. This right reflects a tension between individual privacy and the public’s interest in access to information.

Amazon has evolved from an online bookstore into a dominant force in retail, cloud computing, logistics, and increasingly healthcare and entertainment. Investigations have revealed the extent to which Amazon’s marketplace operates as a complex ecosystem where third-party sellers compete under rules set and enforced by Amazon, which simultaneously competes with them using privileged access to sales data.

Wikipedia represents a different model of internet power — one based on collective, volunteer-driven knowledge production. Studies of Wikipedia’s use in professional newsrooms have shown that journalists routinely rely on it as a starting point for research, raising questions about the reliability and biases of crowd-sourced encyclopedic knowledge.

2.4 Internet Addiction

The concept of internet addiction has been debated by psychologists since the late 1990s. Reporting in The New Yorker has explored whether excessive internet use constitutes a genuine addiction analogous to substance dependence, or whether it is better understood as a symptom of underlying conditions such as depression, anxiety, or social isolation.

The question matters because the answer determines how society should respond. If internet overuse is a true addiction, it may warrant clinical treatment, warning labels, and regulatory intervention similar to those applied to gambling or tobacco. If it is primarily a symptom of other conditions, the appropriate response is to address those underlying conditions rather than to regulate internet use itself.

The design of many internet platforms deliberately exploits psychological mechanisms — variable reward schedules, social approval signals, and infinite scrolling — that are known to promote compulsive use. This raises ethical questions about the responsibilities of platform designers, which will be explored further in the chapters on ethics and on video games.

2.5 Net Neutrality

Net neutrality is the principle that internet service providers (ISPs) should treat all internet traffic equally, without blocking, throttling, or prioritizing particular content, applications, or services. The debate over net neutrality is fundamentally a debate about the internet’s character as a public utility versus a private marketplace.

Arguments in favour of net neutrality emphasize that the internet has become essential infrastructure — as fundamental to modern life as roads, electricity, and water. Proponents argue that allowing ISPs to create “fast lanes” for companies that can afford premium fees would undermine competition, stifle innovation by startups that cannot afford to pay, and create a two-tiered internet in which wealthy corporations enjoy superior access while smaller organizations and individual voices are marginalized. A 2019 study found that wireless carriers were systematically throttling video streaming services around the clock, not just during periods of network congestion, demonstrating that absent regulation, ISPs will use their gatekeeping power strategically.

Arguments against net neutrality contend that regulation stifles investment and innovation. ISPs argue that they need the flexibility to manage network traffic, invest in infrastructure expansion, and develop new business models. They contend that differential pricing reflects legitimate differences in the cost of delivering different types of content and that market competition will prevent abusive practices.

The legal debate centres on whether ISPs should be classified as Title I “information services” (subject to minimal regulation) or Title II “common carriers” (subject to obligations to serve all customers on equal terms). The common carrier model, historically applied to telephone companies, railroads, and other essential services, would give regulators significant authority over ISP practices. The information services model would leave ISPs largely free to set their own terms.
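
Mechanically, the practices at issue (blocking, throttling, and prioritization) amount to applying different forwarding rules to different classes of traffic. The sketch below is a toy token-bucket shaper with purely illustrative class names and rate limits; real carrier equipment is far more elaborate, but the principle is the same: traffic the operator classifies as “video” is capped regardless of congestion.

    import time

    # Illustrative per-class caps (Mbps); a neutral network would apply one rule to all traffic.
    RATE_LIMITS_MBPS = {"video": 1.5, "default": 50.0}

    class TokenBucket:
        def __init__(self, rate_mbps: float):
            self.rate = rate_mbps * 1e6 / 8       # refill rate in bytes per second
            self.tokens = self.rate               # start with one second of credit
            self.last = time.monotonic()

        def allow(self, nbytes: int) -> bool:
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False                          # packet is delayed or dropped

    buckets = {cls: TokenBucket(rate) for cls, rate in RATE_LIMITS_MBPS.items()}

    def forward(packet_bytes: int, traffic_class: str) -> bool:
        """Return True if the packet may be forwarded immediately under its class's cap."""
        bucket = buckets.get(traffic_class, buckets["default"])
        return bucket.allow(packet_bytes)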

2.6 The Dependence of Cyberspace

In “A Declaration of the Dependence of Cyberspace” (2018), Moshe Y. Vardi offered a powerful rebuttal to the cyber-libertarianism of the 1990s. John Perry Barlow’s 1996 manifesto had declared that “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind,” arguing that cyberspace should be free from government regulation. Vardi demonstrated that two decades of evidence had proven this vision naive: “Just as you cannot separate the mind and the body, you cannot separate cyberspace and physical space.” The migration of advertising revenue to the web caused a “retail apocalypse” devastating physical stores. Growing fears of cyberattacks on critical infrastructure revealed the physical stakes of cyber insecurity. The core insight is that “what happens in cyberspace does not stay in cyberspace” — the digital and physical worlds are inextricably intertwined, requiring governance frameworks that acknowledge this interdependence.

2.7 Digital Arms and Criminal Innovation

The internet has enabled new forms of criminal activity and lowered barriers to the production of dangerous goods. The development of consumer 3D printing technology has made it possible to manufacture functional firearms outside the traditional supply chain, raising concerns about untraceable weapons. Similarly, the emergence of “auto switches” — small devices that convert semi-automatic handguns into fully automatic weapons — illustrates how digital fabrication can circumvent weapons regulations.

Artificial intelligence has further expanded the toolkit available to criminals. Reports have documented the use of AI for financial fraud, voice cloning, and the creation of convincing phishing attacks. The convergence of AI capability with criminal intent represents a qualitative shift in the threat landscape, as automated systems can produce sophisticated attacks at scale.

2.8 Intellectual Property and Software Piracy

The internet has profoundly disrupted traditional models of intellectual property. The ease of digital copying means that software, music, video, and text can be reproduced and distributed at near-zero marginal cost, undermining the economic model that historically supported creative and intellectual work.

The debate over software piracy pits two perspectives against each other:

Those who view piracy as essentially harmless argue that software companies earn sufficient profits, that copying does not deprive the original owner of their copy (unlike physical theft), that high prices create artificial scarcity of valuable tools, and that enforcement is impractical in a globally connected digital environment.

Those who oppose piracy argue that it undermines the economic incentives for software development, that it is a form of theft regardless of whether the original is physically taken, that lower piracy rates could lead to lower software prices as companies recover costs across a larger paying customer base, and that a culture of disregard for intellectual property has broader implications for the security of all digital information.

The legal landscape has evolved significantly, with cases such as Viacom’s lawsuit against Google over YouTube clips establishing important precedents for the responsibilities of platforms that host user-uploaded content.


Chapter 3: Social Media — Influence, Disinformation, and Platform Responsibility

3.1 The Rise of Social Media

Social media platforms have become among the most powerful institutions in the modern world. Facebook (now Meta) has over three billion monthly active users. YouTube, TikTok, Instagram, X (formerly Twitter), and Snapchat collectively reach the vast majority of internet users worldwide. These platforms are not merely communication tools; they are the primary environments in which billions of people encounter news, form political opinions, maintain social relationships, and construct their identities.

The business model that drives most social media platforms is advertising revenue, which depends on maximizing user engagement. This creates a structural incentive to promote content that triggers strong emotional reactions — content that is surprising, outrageous, or divisive — because such content generates more clicks, shares, and comments than content that is nuanced, accurate, or boring. As Facebook whistleblower Frances Haugen testified before the U.S. Congress in 2021, the company’s algorithms systematically prioritized engagement over safety, amplifying harmful content because it was profitable to do so.
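
A toy ranking function makes this structural incentive visible. The signals and weights below are purely hypothetical, but any ranker whose sole objective is predicted engagement will, by construction, push the most reactive content to the top of the feed.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_clicks: float
        predicted_shares: float
        predicted_comments: float

    def engagement_score(post: Post) -> float:
        # Hypothetical weights; production systems combine many more signals,
        # but nothing here measures accuracy, nuance, or downstream harm.
        return post.predicted_clicks + 3.0 * post.predicted_shares + 2.0 * post.predicted_comments

    def rank_feed(posts: list[Post]) -> list[Post]:
        return sorted(posts, key=engagement_score, reverse=True)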

3.2 The Cambridge Analytica Scandal

The Facebook–Cambridge Analytica scandal represents one of the most significant breaches of user trust in the history of social media. In 2013, data scientist Aleksandr Kogan developed a personality quiz application called “This Is Your Digital Life” that collected personal data not only from the approximately 270,000 people who completed the quiz but also from their Facebook friends, ultimately harvesting data from up to 87 million Facebook profiles without informed consent.
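
The amplification mechanism can be sketched in a few lines. The fetch_profile and fetch_friend_ids calls below are hypothetical stand-ins rather than the actual Facebook API, but they capture the structural flaw: a single consenting user’s access fans out to every one of their friends, none of whom installed the app. At roughly 270,000 installs, an average of a few hundred reachable friends per user is enough to account for tens of millions of harvested profiles.

    def harvest(consenting_user_id: str, api) -> dict[str, dict]:
        """Collect data for one consenting user and, via the friends edge, for all of their friends."""
        collected = {consenting_user_id: api.fetch_profile(consenting_user_id)}
        for friend_id in api.fetch_friend_ids(consenting_user_id):
            # The friend granted no consent; the permission model of the era
            # nevertheless exposed their profile data to the third-party app.
            collected[friend_id] = api.fetch_profile(friend_id)
        return collected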

This data was shared with Cambridge Analytica, a political consulting firm that used it to build psychographic profiles of voters. By analyzing individuals’ Facebook activity — their likes, shares, and interactions — the firm claimed to be able to determine personality traits and then craft individually targeted political messages designed to influence voting behaviour. Cambridge Analytica worked for the Donald Trump presidential campaign in 2016, using a Facebook advertising feature called “Dark Posts” — personalized advertisements visible only to specific targeted individuals, appearing in their news feeds at strategically chosen times and disappearing within hours.

The scandal exposed several structural failures:

  • Platform design: Facebook’s Open Graph API allowed third-party applications to access the data of users’ friends, dramatically expanding the scope of data collection beyond what any reasonable user would have expected from taking a personality quiz.
  • Consent mechanisms: The consent provided by the 270,000 quiz-takers could not meaningfully extend to their millions of friends, who had no knowledge that their data was being collected.
  • Regulatory gaps: Existing privacy regulations were inadequate to address the scale and sophistication of data harvesting on social media platforms.
  • Enforcement: Although Facebook’s policies prohibited the transfer of user data to third parties, these policies were not effectively enforced.

The consequences were significant. Facebook was fined $5 billion by the Federal Trade Commission — the largest privacy fine in history at that time. Cambridge Analytica filed for bankruptcy. Multiple jurisdictions launched investigations and strengthened data protection regulations.

3.3 Disinformation and State-Sponsored Manipulation

Social media platforms have become battlegrounds for information warfare. State actors have exploited these platforms to interfere in democratic processes, sow social division, and advance geopolitical objectives.

China has deployed AI-powered disinformation campaigns targeting both U.S. voters and Taiwan, using sophisticated techniques including deepfake videos, coordinated inauthentic accounts, and algorithmically amplified narratives. Taiwan, which faces a particularly intense and sustained disinformation campaign from mainland China, has become a laboratory for democratic resilience, developing innovative fact-checking systems and media literacy programmes.

The relationship between social media platforms and state-sponsored disinformation creates a fundamental tension. Platforms are simultaneously venues for free expression, tools for democratic participation, and vectors for manipulation. Efforts to combat disinformation risk suppressing legitimate speech, while failure to act allows the corruption of public discourse.

3.4 Content Moderation and Censorship

The question of how social media platforms should moderate content is among the most consequential issues in contemporary technology policy. Platform companies must navigate a complex landscape of competing demands:

  • Free expression advocates argue that platforms should err on the side of allowing speech, removing only content that clearly violates the law.
  • Safety advocates argue that platforms should proactively remove harmful content including hate speech, harassment, and misinformation, even where such content is technically legal.
  • Government regulators in various jurisdictions have enacted or proposed widely varying rules, from the European Union’s Digital Services Act to authoritarian regimes that demand the suppression of political dissent.

The U.S. Supreme Court has heard landmark cases addressing whether social media platforms have First Amendment rights to curate content, or whether they function as common carriers that must provide equal access to all speakers. These cases have the potential to reshape the legal framework governing online speech.

YouTube’s 2023 decision to stop removing content making false claims about presidential election fraud illustrates the practical difficulty of content moderation at scale. The company argued that its previous policy, while well-intentioned, was having the unintended effect of suppressing legitimate political debate. Critics argued that the reversal would facilitate the spread of election-denying conspiracy theories.

3.5 Influencer Culture and Algorithmic Persuasion

Social media has created new categories of public figures — influencers — who build large followings and derive income from advertising, sponsorships, and content creation. The influencer economy raises questions about transparency, authenticity, and the commercialization of personal relationships.

Research has shown that many social media users, particularly younger users, trust influencers more than they trust traditional news sources. This shift in the locus of trust from institutional gatekeepers to individual personalities has profound implications for how information circulates in society and how opinions are formed.

The platforms themselves function as persuasion architectures. Through algorithmic curation, notification design, and interface choices, they shape what users see, how they feel, and what they do. The extent to which users are aware of, or can resist, these persuasive mechanisms remains an open question.

3.6 The TikTok Controversy

TikTok, owned by the Chinese company ByteDance, has become a focal point for concerns about the intersection of social media, national security, and geopolitical competition. The U.S. Congress passed legislation in 2024 that would require ByteDance to divest TikTok’s U.S. operations or face a ban, citing concerns that the Chinese government could compel the company to share user data or manipulate the content shown to American users.

Defenders of TikTok argue that the proposed ban would infringe on the free expression rights of its 170 million American users, that the security concerns are speculative, and that the real motivation is protectionism. Critics point to TikTok’s privacy policy, which grants the company broad rights to collect user data including biometric information, and to China’s national security laws, which require companies to cooperate with government intelligence operations.


Chapter 4: Privacy, Social Control, and Data Protection

4.1 The Concept of Privacy

Privacy is not a single, unitary concept but a cluster of related values including:

  • Informational privacy: the right to control who has access to personal information about you.
  • Decisional privacy: the right to make personal decisions free from interference.
  • Physical privacy: the right to be free from unwanted physical intrusion or surveillance.
  • Associational privacy: the right to associate with others without being observed or monitored.

In the digital age, informational privacy has become the most contested dimension, as the volume, granularity, and accessibility of personal data have increased exponentially. Every online transaction, every search query, every GPS location, and every social media interaction generates data that can be collected, aggregated, analysed, and monetized.

4.2 Privacy by Design

Ann Cavoukian, the former Information and Privacy Commissioner of Ontario, developed the framework of Privacy by Design (PbD), which holds that privacy protections should be built into the design and architecture of information systems and business practices from the outset, rather than added as afterthoughts. The framework rests on seven foundational principles:

  1. Proactive not reactive; preventative not remedial. Anticipate and prevent privacy-invasive events before they happen.
  2. Privacy as the default setting. Personal data should be automatically protected; no action should be required from the individual to protect their privacy.
  3. Privacy embedded into design. Privacy is integral to the system, not an add-on.
  4. Full functionality — positive-sum, not zero-sum. It is possible to have both privacy and security, privacy and functionality; the choice is not either/or.
  5. End-to-end security — full lifecycle protection. Data is securely managed throughout its entire lifecycle.
  6. Visibility and transparency. Component parts and operations remain visible and transparent to users and providers alike.
  7. Respect for user privacy — keep it user-centric. The interests of the individual should be kept uppermost.

Cavoukian’s broader argument, articulated in “Privacy and Radical Pragmatism: Change the Paradigm,” is that the traditional approach to privacy — which treats it as a regulatory burden to be minimized — must give way to a paradigm in which privacy is understood as a source of competitive advantage and public trust.
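
Principle 2 translates directly into code. In the hypothetical settings object below, every data-sharing option is opt-in and off by default, so a user who never opens the settings screen receives the most protective configuration.

    from dataclasses import dataclass

    @dataclass
    class PrivacySettings:
        # All sharing is opt-in; doing nothing protects the user by default.
        share_usage_analytics: bool = False
        personalized_ads: bool = False
        location_history: bool = False
        public_profile: bool = False

    settings = PrivacySettings()        # privacy-protective with no action required
    settings.personalized_ads = True    # any data sharing requires an explicit choice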

4.3 The General Data Protection Regulation (GDPR)

The European Union’s General Data Protection Regulation, implemented on 25 May 2018, represents the most comprehensive data protection framework in the world. The GDPR establishes seven key principles for data processing:

  1. Lawfulness, fairness, and transparency: Data must be processed lawfully, fairly, and in a transparent manner.
  2. Purpose limitation: Data should be collected for specified, explicit, and legitimate purposes.
  3. Data minimization: Only the minimum data necessary should be collected.
  4. Accuracy: Personal data must be accurate and kept up to date.
  5. Storage limitation: Data should be kept no longer than necessary.
  6. Integrity and confidentiality: Data must be processed with appropriate security measures.
  7. Accountability: Organizations must demonstrate compliance with all principles.

The GDPR grants individuals eight fundamental rights over their personal data:

  • The right to be informed about data collection and use.
  • The right of access to copies of personal data held by an organization.
  • The right to rectification of inaccurate data.
  • The right to erasure (the “right to be forgotten”).
  • The right to restrict processing in certain circumstances.
  • The right to data portability — to receive data in a structured, machine-readable format.
  • The right to object to certain types of processing.
  • Rights related to automated decision-making and profiling.

Organizations that violate the GDPR face penalties of up to 20 million euros or 4% of global annual revenue, whichever is greater. Meta was fined a record $1.3 billion in 2023 for transferring European user data to the United States without adequate privacy protections.
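
Several of these rights translate directly into engineering obligations. The sketch below, with a hypothetical in-memory store and illustrative field names, shows the minimum shape of erasure and portability handling; a production system would also have to propagate these operations to backups, logs, and third-party processors.

    import json

    # Hypothetical in-memory user store; field names are illustrative only.
    user_store: dict[str, dict] = {
        "u123": {"name": "Alice", "email": "alice@example.com", "order_ids": [17, 42]},
    }

    def handle_erasure_request(user_id: str) -> None:
        """Right to erasure: delete the data subject's personal data on request."""
        user_store.pop(user_id, None)

    def handle_portability_request(user_id: str) -> str:
        """Right to data portability: export the data in a structured, machine-readable format."""
        return json.dumps(user_store.get(user_id, {}), indent=2)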

4.4 Workplace Privacy

The question of employee privacy in the digital workplace involves a tension between employers’ legitimate interests in monitoring productivity, protecting trade secrets, and maintaining security, and employees’ reasonable expectations of dignity, autonomy, and freedom from surveillance.

Modern workplace monitoring technologies include email and internet usage tracking, keystroke logging, screen recording, GPS tracking of company vehicles, and increasingly, AI-powered tools that analyse employee behaviour patterns. The COVID-19 pandemic accelerated the adoption of remote work monitoring tools, raising new questions about the boundary between professional and personal space when the office is the home.

Companies such as JPMorgan Chase have invested heavily in employee surveillance systems, raising concerns about the creation of a panoptic workplace in which every action is observed and recorded. The chilling effect of pervasive monitoring on employee creativity, risk-taking, and morale is well-documented in organizational psychology research.

4.5 Tracking Technologies

The proliferation of tracking technologies has blurred the line between surveillance and convenience. RFID (Radio Frequency Identification) tags, first discussed as privacy threats in the early 2000s, have evolved into ubiquitous components of retail, logistics, and access control systems. The privacy concerns they raised — the possibility of tracking individuals’ movements and purchases without their knowledge — seem prescient in light of more recent developments.

Apple’s AirTag, designed as a personal item tracker, has been repurposed by stalkers and criminals to track unwitting victims. The device’s small size, long battery life, and integration with Apple’s massive Find My network make it an effective surveillance tool, despite Apple’s efforts to add anti-stalking safeguards.

The use of personal data by companies in unexpected contexts raises further concerns. Amazon’s expansion into healthcare through Amazon Clinic requires patients to sign privacy waivers that grant the company broad access to health data, raising questions about the adequacy of informed consent when the alternative is forgoing medical care.

4.6 Universal Identification Systems

The proposal for a universal identification system — a single, lifetime identifier for every resident that would be used for everything from tax returns to grocery transactions — has been debated for decades. The arguments reflect fundamental tensions between efficiency and liberty.

Proponents argue that a universal ID would reduce identity fraud, streamline government services, lower administrative costs, and eliminate the confusion and inefficiency of maintaining multiple identification systems for different purposes. They point to the successful adoption of national ID systems in many countries as evidence that such systems can function effectively.

Opponents raise several concerns. A universal ID system creates a single point of failure: if the system is compromised, every aspect of an individual’s life is exposed. It enables unprecedented surveillance, as a single identifier makes it trivial to correlate information across databases and track an individual’s activities across all domains of life. It shifts the balance of power from individuals to institutions, as the ability to revoke or restrict access to a universal ID gives the issuing authority enormous coercive power. Historical experience with identity systems, from the Social Security Number in the United States to the Aadhaar system in India, demonstrates that identifiers designed for limited purposes inevitably expand in scope — a phenomenon known as “function creep.”


Chapter 5: Cybersecurity and National Security

5.1 The Threat Landscape

Cybersecurity threats have evolved from the mischievous hacking of the 1980s into a sophisticated ecosystem of state-sponsored espionage, organized cybercrime, ransomware operations, and ideologically motivated hacktivism. The threat landscape encompasses:

  • State-sponsored actors who conduct espionage, intellectual property theft, and infrastructure sabotage. Russia, China, North Korea, and Iran are the most frequently cited state sponsors of offensive cyber operations. A 2024 report revealed that a state actor was behind a cyberattack on British Columbia’s government systems.
  • Organized cybercrime groups that operate ransomware-as-a-service platforms, conduct financial fraud, and traffic in stolen data. The attack on Indigo Books and Music, which cost the Canadian retailer over $50 million, and the ransomware attack on the Toronto Public Library system, which took four months to recover from, illustrate the real-world impact of cybercrime on institutions and communities.
  • Individual hackers with varying motivations, from financial gain to ideological activism to the simple thrill of breaking into systems.

5.2 The Encryption Debate

Encryption is the mathematical foundation of digital security, enabling confidential communication, secure commerce, and data protection. Strong end-to-end encryption ensures that only the intended sender and recipient can read a message; not even the service provider can access the content.
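
A minimal sketch using the open-source PyNaCl library (assuming it is installed) shows the essential property: encryption and decryption require the parties’ key pairs, so an intermediary relaying the ciphertext learns nothing about the plaintext.

    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; only the public halves are exchanged.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # Only Bob, holding his private key, can decrypt; the relaying service cannot.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"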

Governments around the world have sought to weaken or circumvent encryption in the name of law enforcement and national security. The most prominent confrontation came in 2016, when Apple refused an FBI demand to create a tool to unlock an iPhone used by one of the San Bernardino shooters, with CEO Tim Cook characterizing the demand as a threat to civil liberties. Apple argued that creating a backdoor for one device would inevitably compromise the security of all devices.

The encryption debate involves a genuine dilemma:

  • Law enforcement perspective: Strong encryption can prevent investigators from accessing evidence of serious crimes, including terrorism, child exploitation, and organized crime. Without the ability to obtain the contents of encrypted communications, police and intelligence agencies may be unable to prevent or prosecute dangerous activities.
  • Security and privacy perspective: Any mechanism that allows government access to encrypted communications creates a vulnerability that can be exploited by malicious actors, including hostile foreign governments and cybercriminals. The security of the entire digital ecosystem depends on the integrity of encryption, and any compromise affects everyone, not just criminal suspects.

The United Kingdom’s Online Safety Act and the European Court of Human Rights have arrived at different conclusions on this question. The UK law includes provisions that could compel platforms to scan encrypted messages, though critics have called this clause technically unenforceable. The European Court of Human Rights ruled in 2024 that mandating encryption backdoors violates the right to privacy under the European Convention on Human Rights, a ruling widely summarized as establishing that “backdoored encryption is illegal.”

5.3 Edward Snowden and Mass Surveillance

In June 2013, Edward Snowden, a contractor for the National Security Agency (NSA), leaked thousands of classified documents revealing the existence and scope of mass surveillance programmes operated by the United States and its allies. The revelations included:

  • PRISM: A programme through which the NSA collected communications data from major technology companies including Microsoft, Yahoo, Google, Facebook, YouTube, Skype, and Apple. PRISM was described as “the number one source of raw intelligence used for NSA analytic reports,” accounting for 91% of the NSA’s internet traffic acquired under the Foreign Intelligence Surveillance Act (FISA) Section 702.
  • Upstream collection: The NSA’s direct tapping of fibre-optic cables carrying internet traffic.
  • Metadata collection: The bulk collection of telephone metadata — records of who called whom, when, and for how long — for millions of Americans who were not suspected of any crime.
  • International cooperation: The Five Eyes alliance (the United States, United Kingdom, Canada, Australia, and New Zealand) operated a coordinated global surveillance infrastructure.

An analysis by The Washington Post found that roughly 90% of the account holders whose communications were intercepted were not the intended foreign intelligence targets; many were ordinary internet users, including large numbers of Americans. The scale of collection was staggering — the NSA was estimated to be intercepting and storing 1.7 billion electronic communications per day.

The Snowden revelations catalysed a global debate about the balance between national security and civil liberties. Snowden himself became a polarizing figure: to supporters, he is a heroic whistleblower who exposed unconstitutional government overreach; to critics, he is a traitor who compromised national security and endangered intelligence operatives.

In 2020, the U.S. Ninth Circuit Court of Appeals ruled that the NSA’s bulk metadata collection programme was illegal, vindicating many of Snowden’s concerns. However, the broader surveillance infrastructure remains largely intact, and many of the issues Snowden raised — including the tension between security and privacy, the oversight of intelligence agencies, and the role of technology companies in facilitating surveillance — remain unresolved.

In Canada, CSIS (the Canadian Security Intelligence Service) was found to have illegally retained sensitive personal data for over a decade, further illustrating the challenges of oversight when intelligence agencies operate in secret.

5.4 Canadian Security Legislation

Canada has grappled with its own version of the security-versus-liberty debate through a series of legislative measures:

Bill C-51 (the Anti-Terrorism Act, 2015) significantly expanded the powers of Canadian intelligence and law enforcement agencies, including the power to share information across government departments, the authority for CSIS to take “measures” to disrupt threats (rather than merely collecting intelligence), and lowered thresholds for preventive detention and peace bonds. Critics argued that the bill’s broad language could capture legitimate protest and dissent.

Bill C-59 (the National Security Act, 2017, which received royal assent in 2019) was introduced as a response to criticisms of C-51, adding new oversight mechanisms including a National Security and Intelligence Review Agency and an Intelligence Commissioner. However, critics argued that these reforms did not go far enough to address the fundamental expansion of surveillance powers enacted under C-51.

5.5 Emerging Threats: AI-Powered Surveillance

The convergence of artificial intelligence with surveillance technology has created new capabilities that challenge existing legal and ethical frameworks. Research on using Wi-Fi routers for human detection through body pose estimation (DensePose from Wi-Fi) demonstrates that AI can extract sensitive information — including the positions and movements of people inside buildings — from the electromagnetic signals that permeate modern environments. This technology requires no cameras, no specialized sensors, and no cooperation from the subject; ordinary Wi-Fi routers that are already present in most buildings are sufficient.

The implications are profound. Traditional privacy protections are premised on the idea that certain spaces — homes, offices, private rooms — are shielded from outside observation. If AI can reconstruct human activity from ambient electromagnetic signals, the concept of a “private space” requires fundamental re-examination.


Chapter 6: Professional Ethics in Computing

6.1 The ACM Code of Ethics

The Association for Computing Machinery (ACM) Code of Ethics and Professional Conduct, most recently updated in 2018, is the most widely referenced ethical framework for computing professionals. It is organized around four sections:

1. General Ethical Principles:

  • Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
  • Avoid harm. Computing professionals must minimize negative consequences, including threats to health, safety, personal security, and privacy.
  • Be honest and trustworthy.
  • Be fair and take action not to discriminate.
  • Respect the work required to produce new ideas, inventions, creative works, and computing artifacts.
  • Respect privacy.
  • Honour confidentiality.

2. Professional Responsibilities:

  • Strive to achieve high quality in both the processes and products of professional work.
  • Maintain high standards of professional competence, conduct, and ethical practice.
  • Know and respect existing rules pertaining to professional work.
  • Accept and provide appropriate professional review.
  • Give comprehensive and thorough evaluations of computer systems and their impacts.
  • Perform work only in areas of competence.
  • Foster public awareness and understanding of computing.
  • Access computing and communication resources only when authorized.

3. Professional Leadership Principles:

  • Ensure that the public good is the central concern during all professional computing work.
  • Articulate, encourage acceptance of, and evaluate fulfilment of social responsibilities by members of the organization.
  • Manage personnel and resources to enhance the quality of working life.
  • Articulate, apply, and support policies and processes that reflect the principles of the Code.
  • Create opportunities for members of the organization to grow as professionals.

4. Compliance with the Code:

  • Uphold, promote, and respect the principles of the Code.
  • Treat violations of the Code as inconsistent with membership in the ACM.

6.2 The IEEE Code of Ethics

The Institute of Electrical and Electronics Engineers (IEEE) Code of Ethics complements the ACM Code with additional emphasis on the responsibilities of engineers. Key commitments include:

  • To hold paramount the safety, health, and welfare of the public.
  • To avoid real or perceived conflicts of interest.
  • To be honest and realistic in stating claims or estimates based on available data.
  • To reject bribery in all its forms.
  • To improve the understanding by individuals and society of the capabilities and societal implications of conventional and emerging technologies.
  • To treat all persons fairly and not engage in discrimination.
  • To avoid injuring others, their property, reputation, or employment by false or malicious actions.

6.3 The Rome Call for AI Ethics

The Vatican’s “Rome Call for AI Ethics,” signed in 2020, represents an attempt to bring ethical principles to the development of artificial intelligence from a humanistic and rights-based perspective. The document calls for AI that is:

  • Transparent: People should be able to understand how AI systems reach their decisions.
  • Inclusive: AI should serve all people, not just the privileged few.
  • Responsible: Those who design and deploy AI should be accountable for its effects.
  • Impartial: AI should not create or reinforce bias and discrimination.
  • Reliable: AI systems should be trustworthy and perform as intended.
  • Secure and private: AI should protect users’ data and privacy.

6.4 Ethical Dilemmas in Practice

The abstract principles of professional ethics become concrete in the messy reality of professional life. Computing professionals regularly face situations in which ethical principles conflict with each other, with corporate directives, or with economic pressures.

Berenbach and Broy, in “Professional and Ethical Dilemmas in Software Engineering” (2009), argued that headline-making ethical failures typically result not from a single lapse but from a cascade of smaller ethical failures that produce a magnification effect. They identified and named nine specific everyday dilemmas, including Mission Impossible (accepting tasks with unrealistic deadlines, creating pressure to cut corners), Fictionware/Vaporware (misrepresenting product capabilities to clients or releasing products known to be defective), Canceled Vacation (systemic organizational pressure forcing professionals to sacrifice personal commitments, creating burnout and ethical erosion), Rush Job (pressure to skip proper testing and documentation), and Red Lies (falsifying status reports to avoid difficult conversations). By naming these patterns — analogous to naming software design anti-patterns — the authors made them recognizable and discussable, enabling professionals to identify and resist them.

Algorithmic sentencing: The use of algorithms to inform criminal sentencing decisions, as documented in “An Algorithm That Grants Freedom, or Takes It Away,” raises questions about fairness, accountability, and the appropriate role of automated systems in decisions that profoundly affect human lives. When a computer system recommends a longer prison sentence for a defendant, who is responsible for that recommendation? Can the defendant challenge the algorithm’s reasoning? Are the data used to train the system free from historical biases?

The Volkswagen emissions scandal exemplifies how organizational pressure can lead engineers to compromise ethical principles. As Simon Rogerson argued in “Ethics Omission Increases Gases Emission” (2018), VW installed “defeat device” software in approximately 11 million diesel vehicles worldwide. The software could detect test conditions — vehicle stationary, wheels turning, steering wheel inactive — and switch between a “test mode” with full emissions compliance and a “normal mode” with emissions up to 40 times the legal limit. Rogerson mapped the VW engineers’ conduct against the Software Engineering Code of Ethics, identifying violations related to acting in the public interest, failing to report deceptive software, and failing to act with integrity. The environmental and public health consequences were massive: excess NOx emissions contributed to respiratory disease, premature deaths, and environmental degradation affecting millions. Rogerson argued that while the engineers likely faced intense organizational pressure, professional codes exist precisely to provide a framework for resisting such pressure — and that the entire computing profession’s reputation is at stake when engineers build systems whose core purpose is deception.

The Boeing 737 MAX and the Bradley Fighting Vehicle (depicted in the film The Pentagon Wars) illustrate how institutional dynamics — cost pressures, schedule demands, bureaucratic inertia, and diffusion of responsibility — can override the professional judgment of individual engineers with catastrophic consequences.

Platform complicity: The question of when a computing professional should refuse to participate in projects they find ethically objectionable is explored in accounts like “Why I Quit GitHub,” where an employee resigned over the company’s contract with U.S. Immigration and Customs Enforcement (ICE). The Facebook whistleblower case further illustrates this tension: Frances Haugen leaked internal documents revealing that the company knew its products harmed teenagers yet failed to act; she went public because she concluded that internal advocacy was futile.

6.5 Echo Chambers and Algorithmic Responsibility

The design of recommendation algorithms that create echo chambers — environments in which users are primarily exposed to information that reinforces their existing beliefs — raises questions about the ethical responsibilities of the engineers who build these systems. When an algorithm is designed to maximize engagement, and engagement is maximized by showing people content that confirms their biases, the algorithm is effectively undermining the conditions for informed democratic deliberation.

The concept of the “filter bubble,” coined by Eli Pariser, describes how algorithmic personalization can isolate individuals from diverse perspectives. Whether engineers have a professional obligation to design systems that promote exposure to diverse viewpoints, even at the cost of reduced engagement metrics, is an open and urgent question.

6.6 The Internet of Things and Ethical Design

The proliferation of internet-connected devices — from smart home assistants to fitness trackers to connected vehicles — has expanded the scope of ethical responsibility for computing professionals. As argued in “The Internet of Things Needs a Code of Ethics,” the IoT creates new categories of risk: devices that monitor intimate aspects of daily life, that can be hijacked for surveillance or attack, and that collect data whose future uses cannot be anticipated at the time of collection.

The case of vending machines on a Canadian university campus that were found to contain hidden cameras collecting facial recognition data illustrates the ways in which connected devices can be used for surreptitious surveillance. The incident prompted the question: who is responsible when a seemingly innocuous device turns out to be a surveillance tool? The manufacturer? The operator? The institution that allowed it to be installed?


Chapter 7: Artificial Intelligence and Large Language Models

7.1 The AI Revolution

Artificial intelligence has transitioned from a specialized research field to a general-purpose technology with applications across virtually every domain of human activity. The development of large language models (LLMs) such as GPT-4, Claude, and their successors represents a qualitative shift in AI capability, as these systems can generate human-quality text, engage in complex reasoning, write computer code, and pass professional licensing examinations.

Geoffrey Hinton, who won the Nobel Prize in Physics for his foundational work on neural networks, has publicly warned that AI poses existential risks to humanity, arguing that current AI systems may be closer to genuine understanding than most people realize. An open letter signed by prominent AI researchers called for a pause in the development of systems more powerful than GPT-4, citing “profound risks to society and humanity.”

7.2 Risks and Misuse of AI

The misuse of AI systems has already produced significant harm across multiple domains:

Deepfakes and non-consensual imagery: AI-generated deepfake pornography has become a widespread tool of harassment, with victims including public figures, classmates, and children. Cases include U.S. high school students using AI to generate nude images of classmates, and a Quebec man who was sentenced to prison for creating AI-generated child sexual abuse material. The creation of synthetic, non-consensual intimate imagery represents a new category of harm that existing legal frameworks are only beginning to address.

AI-powered fraud: Scammers have used AI voice cloning technology to impersonate family members in distress, tricking victims into sending money. AI-generated phishing attacks are more sophisticated and harder to detect than traditional ones. Microsoft and OpenAI have documented cases of state-sponsored hackers using ChatGPT to improve their cyberattack capabilities.

AI-generated influencers: The creation of entirely synthetic social media influencers with over 100,000 followers raises questions about authenticity, disclosure, and the nature of trust in digital spaces.

Military applications: The use of AI in military targeting, as documented in reporting on the “Lavender” system used by the Israeli military, raises profound questions about autonomous weapons, civilian casualties, and the delegation of life-and-death decisions to algorithmic systems.

7.3 On the Dangers of Stochastic Parrots

The paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (listed as “Shmargaret Shmitchell,” a pseudonym adopted after Google demanded that its researchers remove their names from the paper) became one of the most consequential academic papers in AI ethics, not only for its content but for the circumstances surrounding its publication.

The paper argued that large language models pose several underappreciated risks:

  1. Environmental costs: Training large models requires enormous computational resources, with correspondingly large carbon footprints. Training a single BERT-base model produces CO2 emissions equivalent to a trans-American flight; larger models like GPT-3 required orders of magnitude more energy. This burden falls disproportionately on marginalized communities who bear the brunt of climate change while benefiting least from English-centric models.

  2. Training data: Large models are trained on massive datasets scraped from the internet, which encode the biases, stereotypes, and toxic content present in that data. Internet-sourced data skews toward developed nations, younger users, and majority demographics. Content moderation and “bad word” filtering inadvertently suppress LGBTQ+ discourse and minority community voices. Static training datasets encode a snapshot of social values that becomes outdated as norms evolve, creating “value-lock.” Models absorb and amplify hegemonic viewpoints, stereotypes, and microaggressions.

  3. The coherence illusion: Language models generate text that appears fluent and coherent but is produced through statistical pattern matching rather than genuine comprehension. The authors characterize these systems as “stochastic parrots” — systems that manipulate linguistic form without grounded communicative intent. Humans project meaning onto fluent text, but fluency masks fundamental emptiness. When these systems are deployed in high-stakes contexts — healthcare, law, education — the gap between apparent competence and actual understanding can lead to serious harm.

  4. Opportunity costs: The resources devoted to building ever-larger models could be directed toward research that more directly addresses pressing social needs, including support for diverse languages and meaning-sensitive approaches rather than scale-focused benchmark competitions.

Timnit Gebru’s departure from Google after the company objected to the paper’s publication sparked a broader reckoning about the relationship between corporate AI research and academic freedom. The incident raised questions about whether companies that profit from AI can be trusted to support honest assessment of AI’s risks.

7.4 Responsible AI Development

The debate over responsible AI development centres on a fundamental tension between innovation and precaution:

Those who argue for rapid deployment contend that AI advice and decision-support tools fill a genuine social need, helping people navigate complex decisions in medicine, finance, and law. They argue that a “caveat emptor” model — deploying AI with appropriate disclaimers and allowing users to decide whether to rely on it — respects individual autonomy and allows the benefits of AI to reach those who need them most quickly.

Those who argue for caution contend that deploying AI systems for high-stakes decisions without guarantees of accuracy is irresponsible, particularly when users may lack the expertise to evaluate the system’s reliability. They worry that people without sufficient background knowledge will misinterpret AI outputs, leading to harmful outcomes, and that the availability of AI advice may erode investment in human expertise.

7.5 AI and Privacy

Large language models raise novel privacy concerns. As documented in “ChatGPT Has a Big Privacy Problem,” these systems are trained on vast quantities of data that may include personal information, and they can reproduce or recombine that information in their outputs. Unlike a database, where specific records can be identified and deleted, the “knowledge” in a neural network is distributed across billions of parameters in ways that make it impossible to surgically remove particular pieces of information.

The Italian data protection authority temporarily banned ChatGPT in 2023 over concerns about its compliance with the GDPR, setting a precedent for regulatory scrutiny of language models.

7.6 AI Labelling and Authentication

The proliferation of AI-generated content has created an urgent need for systems that can distinguish human-created content from AI-generated material. Several approaches are being explored:

  • Watermarking: Embedding imperceptible signals in AI-generated content that can be detected by specialized tools (see the detection sketch after this list).
  • Cryptographic provenance: Using digital signatures and blockchain technology to create verifiable chains of custody for content, as explored in “Cryptography may offer a solution to the massive AI-labeling problem.”
  • AI detection tools: Software that analyses text or images for statistical patterns characteristic of AI generation. However, these tools have significant limitations, including high false-positive rates and an ongoing arms race between generation and detection.
  • Platform labelling: TikTok and other platforms have begun requiring labels on AI-generated content, though enforcement remains challenging.
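
To make the watermarking approach above concrete, the sketch below detects a “green list” style text watermark in the spirit of the scheme published by Kirchenbauer et al. (2023): a hash of each preceding token pseudo-randomly marks part of the vocabulary as “green,” a watermarking generator prefers green tokens, and a detector checks whether green tokens are statistically over-represented. This is a minimal sketch under stated assumptions; the hash, the green-list fraction, and the sample text are illustrative, not any vendor’s actual implementation.

```python
# Hedged sketch: "green list" watermark detection in the spirit of
# Kirchenbauer et al. (2023). The hash, GAMMA, and the sample sentence are
# assumptions for illustration, not a production detector.
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary marked "green" for any context

def is_green(prev_token: str, token: str) -> bool:
    # Pseudo-randomly assign `token` to the green list, seeded by the
    # preceding token, so the split is reproducible without storing it.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GAMMA

def detection_z_score(tokens):
    # Count green tokens and compare against the GAMMA baseline expected in
    # unwatermarked text; watermarked text should score far above zero.
    green = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = GAMMA * n
    stddev = math.sqrt(n * GAMMA * (1 - GAMMA))
    return (green - expected) / stddev

sample = "the quick brown fox jumps over the lazy dog".split()
print(f"detection z-score: {detection_z_score(sample):+.2f}")
```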

7.7 The Ethics of AI-Generated Writing

The use of AI for academic writing raises fundamental questions about authorship, originality, and the purpose of education. If a student submits an essay generated by ChatGPT, have they committed plagiarism? The answer depends on how we define plagiarism and what we believe the purpose of writing assignments is.

If writing assignments exist primarily to produce a product — a well-argued essay on a given topic — then using AI may be analogous to using a calculator for arithmetic: a tool that produces the desired output more efficiently. If writing assignments exist primarily to develop a process — the skills of critical thinking, argument construction, evidence evaluation, and clear expression — then using AI undermines the entire purpose, as the student bypasses the cognitive work that the assignment was designed to require.


Chapter 8: Digital Communication, Relationships, and Community

8.1 The Transformation of Communication

Digital communication technologies have fundamentally altered how human beings relate to one another. Email, instant messaging, social media, video calling, and collaborative platforms have created new modes of interaction that supplement and in some cases replace face-to-face communication. These changes bring both opportunities and risks.

The permanence of digital communication creates a new kind of vulnerability. As explored in articles about the consequences of deleting (or failing to delete) emails, digital communications can be subpoenaed, leaked, hacked, or forwarded in ways that the original sender never anticipated. A careless email or an ill-considered tweet can have consequences years or even decades after it was sent. The essay “One Stupid Tweet” documented how a single tasteless joke, posted to 170 followers, destroyed a woman’s career and subjected her to a global campaign of harassment.

8.2 Contract Cheating and Academic Integrity

“The Shadow Scholar,” published in The Chronicle of Higher Education in 2010 under the pseudonym Ed Dante (later revealed as Dave Tomar), provided a first-person account of a professional academic ghostwriter. Tomar reported writing nearly 5,000 pages of scholarly work in a single year, earning over $65,000. His work spanned master’s theses in cognitive psychology, PhD dissertations in sociology, and essays in ethics, philosophy, marketing, and cinema. He described three client archetypes: ESL students struggling with English, lazy students who could afford to pay, and students overwhelmed by demands they were unprepared for. He noted that educators were, ironically, his most frequent clients. The article became the most commented-on piece in the Chronicle Review’s history.

The phenomenon of contract cheating raises several concerns:

  • Educational integrity: Students who submit purchased work do not develop the skills and knowledge that their degrees are supposed to represent.
  • Credential devaluation: If significant numbers of graduates have not actually done the work required for their degrees, the value of those credentials is undermined for everyone.
  • Inequality: Contract cheating services are expensive, meaning that wealthier students can more easily purchase academic success.
  • Detection difficulty: Unlike plagiarism from published sources, custom-written papers cannot be detected by traditional plagiarism-detection software. The emergence of AI writing tools has further complicated detection.

8.3 Cyberbullying

Cyberbullying — the use of digital communication technologies to harass, intimidate, or humiliate others — has become a significant social problem, particularly among young people. The suicide of Rehtaeh Parsons, a 17-year-old Nova Scotia student who was cyberbullied after an alleged sexual assault, prompted legislative action in Canada.

Nova Scotia’s original Cyber-Safety Act (2013) was one of the first pieces of legislation specifically targeting cyberbullying. It defined cyberbullying broadly as any electronic communication intended or reasonably expected to cause fear, intimidation, humiliation, distress, or damage to health, well-being, self-esteem, or reputation. The Act created the CyberSCAN unit — the first of its kind in Canada — with five investigators dedicated to cyberbullying complaints.

However, the Nova Scotia Supreme Court struck down the Act in 2015, ruling that its broad definition of cyberbullying infringed on Charter-protected freedom of expression. The replacement legislation, the Intimate Images and Cyber-Protection Act (2018), adopted a more targeted approach, focusing on the non-consensual distribution of intimate images while maintaining protections against cyberbullying through a system of protection orders.

The legislative evolution illustrates a recurring tension in technology regulation: the desire to prevent harm must be balanced against the protection of fundamental rights, and broadly drafted laws risk capturing legitimate expression along with genuinely harmful behaviour.

8.4 Loneliness and Digital Connection

The relationship between internet use and loneliness is more complex than popular narratives suggest. Robert Waldinger’s research at Harvard, based on the longest-running study of adult development, has shown that the quality of personal relationships is the single strongest predictor of health and happiness across the lifespan. The question is whether digital communication enhances or diminishes the quality of those relationships.

Research published in the AMA Journal of Ethics (2023) suggests that the relationship between internet use and loneliness is bidirectional and dynamic. When the internet is used as a “way station” toward building and strengthening existing relationships — through video calls with distant family, messaging with close friends, or coordination of in-person social activities — it effectively reduces loneliness. For older adults specifically, a one-point increase in the frequency of going online was associated with a 0.147-point decrease in loneliness scores. Conversely, when social technologies are used to withdraw from real-world interaction and avoid the discomfort of social engagement, loneliness increases. Excessive use, particularly when categorized as “internet addiction,” is consistently associated with poorer well-being outcomes.

The concept of “digital intimacy,” explored in “Brave New World of Digital Intimacy,” describes how online interactions can create feelings of closeness and connection that may or may not correspond to the depth of the underlying relationship. Social media creates an illusion of social abundance — hundreds of “friends,” thousands of “followers” — that can mask genuine loneliness.

8.5 Censorship and the Free Flow of Information

The internet has created both new opportunities and new challenges for the free flow of information. On one hand, platforms like the Minecraft library of banned journalism — in which reporters whose work is censored in their home countries republish their articles within a virtual world — demonstrate the internet’s potential to circumvent authoritarian censorship.

On the other hand, the same tools that enable free expression also enable harassment, disinformation, and incitement to violence, creating pressure for platforms and governments to restrict speech. The debate over censorship of electronic communication pits two perspectives against each other:

Those who oppose corporate compliance with censorship demands argue that technology companies should collectively refuse to operate in nations with poor human rights records rather than facilitating government suppression of speech and surveillance of dissidents. They contend that when companies comply with authoritarian censorship, they become complicit in human rights violations.

Those who favour compliance argue that companies must respect the laws of the jurisdictions in which they operate, that withdrawal from restrictive markets would leave users without access to valuable services, and that competitors willing to comply would simply fill the gap, resulting in no improvement for users and lost market share for the principled company.


Chapter 9: Surveillance, Monitoring, and Social Control

9.1 The Surveillance Society

The proliferation of surveillance technologies has created what scholars call the “surveillance society” — an environment in which monitoring is pervasive, systematic, and often invisible. The surveillance infrastructure encompasses:

  • Government surveillance: Intelligence agencies, law enforcement, and regulatory bodies use electronic surveillance for national security, criminal investigation, and regulatory enforcement. The Snowden revelations (discussed in Chapter 5) demonstrated the scale and scope of government surveillance capabilities.
  • Corporate surveillance: Companies collect, aggregate, and analyse vast quantities of data about their customers, employees, and the general public. This data is used for targeted advertising, product development, risk assessment, and increasingly, behavioural prediction and manipulation.
  • Peer surveillance: Social media, review platforms, and messaging applications enable individuals to monitor each other’s activities, locations, and communications.
  • Automated surveillance: Cameras, sensors, algorithms, and AI systems perform continuous monitoring without human operators.

9.2 Physical Surveillance Technologies

The deployment of physical surveillance technologies in public and semi-public spaces has accelerated:

Speed cameras and red-light cameras have been deployed in cities including Ottawa and Hamilton, Ontario. While proponents argue they improve road safety, critics contend they represent a shift from policing (which involves human judgment) to automated enforcement (which treats all violations identically regardless of context).

Facial recognition technology is increasingly used by law enforcement, despite documented problems with accuracy, particularly for people of colour. Reports have documented cases of wrongful arrest based on faulty facial recognition, including instances where police used AI-generated composite faces to run facial recognition searches — layering one unreliable technology upon another.

Sports venue surveillance: The University of Alabama tracked students’ locations during football games to enforce attendance requirements, raising questions about the appropriateness of location tracking by educational institutions.

Noise monitoring: New York City has deployed noise radar systems capable of identifying the sources of excessive noise, representing the extension of automated surveillance from visual to auditory domains.

9.3 Corporate Surveillance and Retail Monitoring

Retail environments have become increasingly surveilled spaces. Self-checkout systems that use cameras and computer vision to detect shoplifting have raised concerns about the presumption of innocence and the collection of biometric data in commercial transactions.

Amazon’s “Just Walk Out” technology, which promised a surveillance-powered shopping experience in which customers could simply take items and leave while cameras and sensors tracked their purchases, was quietly discontinued after it was revealed that the system relied heavily on human reviewers in India rather than the AI technology Amazon had promoted.

The gap between the marketed promise of AI surveillance (autonomous, efficient, accurate) and its reality (dependent on human labour, error-prone, and privacy-invasive) is a recurring theme across the surveillance technology landscape.

9.4 Government Censorship and Platform Shutdowns

Authoritarian governments have used internet shutdowns and platform bans as tools of social control. India’s government threatened to shut down Twitter and raid employees’ homes when the platform refused to comply with demands to remove content related to farmer protests. The pattern is consistent across authoritarian contexts: governments use the threat of market exclusion to compel platforms to serve as instruments of state censorship.

The case of Cambodia’s state-developed messaging application, designed as a WhatsApp alternative, illustrates a different approach: rather than forcing compliance from foreign platforms, the government created its own platform that it controls entirely, raising concerns that it could be used for comprehensive surveillance of citizens’ communications.

9.5 Browser Monoculture and Structural Surveillance

The consolidation of the web browser market creates a form of structural surveillance that operates at the infrastructure level. Chrome alone commands approximately 62.7% of global browser usage; including other Chromium-based browsers (Samsung Internet, Opera, Microsoft Edge), the figure reaches approximately 70.8%, leaving Firefox and Safari with roughly 21% combined. An Atlantic Council analysis warned that an attack targeting Chrome could render roughly 60% of internet users unable to use their primary browser, while an attack on the underlying Chromium codebase could be even more widespread.

Beyond security, browser monoculture has implications for standards and power. Google has created applications that “work best with Chrome,” and developers increasingly optimize exclusively for Chromium, effectively abandoning cross-browser compatibility — echoing the Internet Explorer 6 era when dominance enabled a single company to dictate web standards without competitive consequences. Proposed solutions range from transferring Chromium governance to an independent nonprofit foundation to regulatory intervention, reflecting growing concern that the current trajectory concentrates too much power over the open web in a single company’s hands.


Chapter 10: Video Games, Gambling, and Digital Entertainment

10.1 The Video Game Industry

The global video game industry generates revenues exceeding $180 billion annually, surpassing the combined revenues of the film and music industries. Video games are played by billions of people across all age groups and demographics. The social implications of this medium extend far beyond entertainment, encompassing questions of addiction, violence, gambling, economic exploitation, and cultural influence.

10.2 Benefits of Video Games

Research reviewed by the American Psychological Association has identified several potential benefits of video game play:

  • Cognitive benefits: Action games can improve spatial reasoning, visual attention, and decision-making speed. Strategy games may enhance planning and resource management skills.
  • Social benefits: Multiplayer games create social environments in which players develop teamwork, communication, and leadership skills. For some players, especially those with social anxiety or disabilities that limit in-person interaction, online gaming communities provide important social connections.
  • Emotional benefits: Games can serve as tools for stress relief, emotional regulation, and creative expression.
  • Educational benefits: Serious games and game-based learning environments can enhance engagement and retention across a range of educational contexts.

However, the research also indicates that these benefits are contingent on the type of game, the duration and context of play, and the characteristics of the player. The same medium that can enhance cognition and social connection can also foster compulsive behaviour, social isolation, and financial exploitation.

10.3 Video Game Addiction

The question of whether excessive video game play constitutes a genuine addiction remains debated. The World Health Organization included “gaming disorder” in the ICD-11 (International Classification of Diseases) in 2018, defining it as a pattern of gaming behaviour characterized by impaired control over gaming, increasing priority given to gaming over other activities, and continuation or escalation of gaming despite negative consequences.

Critics of the addiction framework argue that pathologizing heavy gaming medicalizes what may be a normal variation in leisure preferences, that the diagnostic criteria are too vague, and that the inclusion of gaming disorder was driven more by moral panic than by scientific evidence. They point out that the vast majority of gamers, even heavy gamers, do not experience clinically significant impairment.

Proponents respond that a small but significant minority of players does experience genuine loss of control, with severe consequences for education, employment, relationships, and mental health. They argue that the existence of a harm-causing subset justifies clinical recognition, just as the recognition of alcohol use disorder does not imply that all drinking is pathological.

10.4 Loot Boxes and Gambling Mechanics

Perhaps the most pressing concern about the video game industry is the incorporation of gambling-like mechanics, particularly loot boxes, into mainstream games. Loot boxes are virtual items that can be purchased with real money and contain randomized rewards of varying value. They share several key features with slot machines:

  • Random reward distribution: The contents are determined by a random number generator, not by skill.
  • Variable value: Some items are common and essentially worthless; others are rare and highly valued.
  • Near-miss effects: Visual and auditory cues simulate the “almost won” experience that is known to drive continued gambling.
  • Psychological arousal: The opening animation is designed to trigger anticipation and excitement.

Research has established a significant correlation between loot box spending and problem gambling, with meta-analyses finding effect sizes of 0.26–0.27, which is considered small-to-moderate but clinically relevant. A UK Gambling Commission report estimated that 31% of children aged 11–16 had opened a loot box.

The concept of “losses disguised as wins” (LDWs), identified by Dixon et al. at the University of Waterloo in their research on multi-line video slot machines, is directly applicable to loot boxes. In an experiment with 40 novice players, skin conductance response (SCR) amplitudes were measured across three outcome types. SCR amplitudes for LDWs were statistically similar to those for genuine wins, and both were significantly larger than for regular losses — meaning the body responds to a net loss as though it were a win. On a 20-line machine with 5-cent bets, each spin costs $1.00; a payout of $0.75 is a net loss of $0.25, yet the machine celebrates with musical fanfare and animation. Follow-up research showed that 58% of participants preferred games with LDWs, and LDWs extended play duration during losing streaks. Players who experienced LDWs significantly overestimated the number of genuine wins that occurred. These deceptive design mechanisms are now embedded in loot box systems across the gaming industry.
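
The arithmetic behind a loss disguised as a win is simple enough to spell out. The sketch below uses the 20-line, 5-cent configuration described above; the payout values are illustrative.

```python
# Worked example of "losses disguised as wins" on a 20-line, 5-cent machine:
# every spin wagers $1.00, so any payout below $1.00 is a net loss even when
# the machine plays celebratory sounds. Payout values are illustrative.
BET_PER_LINE = 0.05
LINES = 20
WAGER = BET_PER_LINE * LINES  # $1.00 per spin

def classify(payout: float) -> str:
    if payout == 0:
        return "loss"
    if payout < WAGER:
        return "loss disguised as a win (LDW)"
    return "win (or break-even)"

for payout in (0.00, 0.75, 1.00, 2.50):
    net = payout - WAGER
    print(f"payout ${payout:.2f}: net ${net:+.2f} -> {classify(payout)}")
```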

The regulatory response has been fragmented. Belgium and the Netherlands have classified some loot boxes as gambling and banned them. A UK parliamentary inquiry concluded that loot boxes that can be purchased with real money and do not reveal their contents in advance are “games of chance played for money’s worth” and recommended bringing them under gambling regulation. Most other jurisdictions have not yet acted, meaning that loot boxes typically lack the consumer protections — age restrictions, odds disclosure, spending limits — that apply to regulated gambling.

10.5 Gaming Industry Practices

The gaming industry has faced criticism for a range of business practices:

  • Predatory monetization: Epic Games paid $520 million to settle FTC charges related to children’s privacy violations and the use of deceptive design patterns (“dark patterns”) that tricked players into making unintended purchases.
  • Social casino exploitation: Research has documented how social casino games use Facebook user data to identify and target vulnerable gamblers with personalized advertising.
  • Regulatory arbitrage: Some companies exploit the gap between jurisdictions, designing monetization systems that would be regulated as gambling in some countries but are unregulated in others.
  • Gaming and China: Chinese regulators have imposed strict limits on gaming time for minors and on in-game spending, while some American gaming companies have been criticized for acting as censors on behalf of the Chinese government to maintain market access.

Chapter 11: Gender, Diversity, and Inclusion in Technology

11.1 The Gender Gap in Computer Science

Women’s representation in computer science has followed a paradoxical trajectory. As Clive Thompson documented in “The Secret History of Women in Coding” (2019), women were foundational to computing and dominated early software development. Ada Lovelace conceptualized what we now call coding in her 1843 notes on Charles Babbage’s Analytical Engine. In the 1940s, men regarded hardware as the prestigious work while programming was considered menial and secretarial — so women filled the role and excelled. By 1960, more than one in four programmers were women. In 1967, Cosmopolitan magazine published “The Computer Girls,” noting women could earn $20,000 per year. By the 1983–84 academic year, 37.1% of computer science graduates were women — the historic peak.

From 1984 onward, the percentage plunged, eventually halving to approximately 18% before modest recent increases. This decline is unique to computer science; in most other STEM fields, women’s representation has increased over the same period. Thompson traced the decline to a convergence of causes; understanding it requires examining multiple reinforcing factors:

  • Cultural narratives: Personal computers (Commodore 64, TRS-80) were marketed almost entirely to boys; boys were twice as likely as girls to receive computers as gifts. As programming gained professional prestige, it ceased to be seen as “women’s work.” The first generation of young men with home computing experience arrived at college with a head start, and a “cultural schism” emerged where girls internalized the message that “computers were for boys.”
  • Hostile environments: Reports from companies including Riot Games, Activision Blizzard, Ubisoft, and Electronic Arts have documented cultures of sexual harassment, discrimination, and retaliation that drive women out of the industry. Riot Games paid $100 million to settle a gender discrimination lawsuit. Activision Blizzard was sued by the California Department of Fair Employment and Housing over “a culture of constant sexual harassment.” The SEC fined Activision Blizzard $35 million for failing to maintain disclosure controls related to workplace misconduct complaints.
  • Bias in evaluation: Research has documented both implicit and explicit bias in the evaluation of women’s contributions in STEM fields. Studies have found that identical work is rated lower when attributed to a woman, and that men’s implicit bias is associated with women’s career costs in STEM.
  • Network exclusion: Research on social networks in STEM has shown that women in less powerful network positions tend to avoid integrating other women, a pattern that can perpetuate women’s marginalization.
  • Imposter syndrome — and its limits: While the concept of imposter syndrome has been widely applied to explain women’s experiences in male-dominated fields, critics have argued that the framework places the burden on individual women to overcome feelings of inadequacy rather than on institutions to address the structural conditions that produce those feelings. As argued in “Stop Telling Women They Have Imposter Syndrome” (Harvard Business Review), the problem is often not that women doubt themselves; it is that the environments in which they work systematically undervalue and marginalize their contributions.

11.2 Racial Bias in Technology

Algorithmic systems have been shown to encode and amplify racial biases present in their training data. Several high-profile cases have brought this issue to public attention:

  • Amazon’s AI recruiting tool: Amazon developed an AI system beginning in 2014 to rate job candidates on a 1–5 star scale, automating resume screening. The system was trained on 10 years of resumes submitted to Amazon. Because the tech industry is male-dominated, the vast majority of those resumes came from men — and the AI learned to prefer male candidates. It specifically penalized resumes containing the word “women’s” (as in “women’s rugby team”), downgraded graduates of all-women’s colleges, and favoured action verbs disproportionately found on male engineers’ resumes, such as “executed” and “captured.” Amazon modified the system to make specific gender-related terms neutral but ultimately “lost confidence that the program was indeed gender neutral in all other areas” and scrapped the project. The ACLU warned that similar flawed tools are “spreading” across hundreds of companies, and beyond gender, these systems risk discrimination by race through proxies like zip codes, fraternal affiliations, and linguistic patterns. As analysts noted, AI does not eliminate human bias — it “launders it through software.”
  • Facial recognition disparities: Joy Buolamwini discovered the problem firsthand when commercial AI systems could not detect her darker-skinned face at the MIT Media Lab but worked when she wore a white mask. Her subsequent research with Timnit Gebru (the “Gender Shades” project, 2018) created the Pilot Parliaments Benchmark dataset of 1,270+ images from three African and three European countries, classified using the Fitzpatrick dermatological skin-tone scale. The results were stark: across IBM, Microsoft, and Face++ systems, lighter-skinned men had error rates of 0.8% or better, while darker-skinned women had error rates of 20.8%, 34.5%, and 34.7%. For the darkest-skinned women (Fitzpatrick VI), error rates reached 46.5% and 46.8%. One major company claiming 97% overall accuracy was trained on data that was over 77% male and 83% white, masking catastrophic failures on underrepresented groups. Buolamwini coined the term “coded gaze” to describe how algorithmic systems reflect the biases and blind spots of their creators. These disparities have real-world consequences: a New Jersey man was wrongfully arrested based on a faulty facial recognition match, one of several documented cases of wrongful arrest linked to the technology.
  • Instagram filters and colourism: Analysis has shown that Instagram and other social media filters frequently lighten skin tones and alter facial features in ways that reflect and reinforce Eurocentric beauty standards, a phenomenon described as “the quiet racism of Instagram filters.”
  • Measurement of bias in language models: Research has demonstrated that word embeddings — the mathematical representations of words used in AI systems — encode gender and racial stereotypes. Words associated with women are more likely to be associated with domesticity and emotion, while words associated with men are more likely to be associated with career and competence.
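
The measurement technique behind these embedding findings can be illustrated in a few lines of code. The sketch below computes a WEAT-style association score: the difference in average cosine similarity between a target word and two attribute sets. The four-dimensional vectors are invented purely for illustration; published studies use pretrained embeddings such as word2vec or GloVe.

```python
# Hedged sketch of a WEAT-style bias measurement over word embeddings.
# The toy 4-d vectors below are made up for illustration only.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attrs_a, attrs_b):
    # Mean similarity to attribute set A minus mean similarity to set B;
    # a positive score means the word sits "closer" to A than to B.
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

emb = {  # hypothetical embeddings, not real data
    "engineer": np.array([0.9, 0.1, 0.3, 0.0]),
    "nurse":    np.array([0.1, 0.9, 0.2, 0.1]),
    "he":       np.array([0.8, 0.2, 0.1, 0.0]),
    "she":      np.array([0.2, 0.8, 0.1, 0.1]),
}
male_terms, female_terms = [emb["he"]], [emb["she"]]
for word in ("engineer", "nurse"):
    score = association(emb[word], male_terms, female_terms)
    print(f"{word}: male-vs-female association {score:+.3f}")
```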

11.3 LGBTQ+ Inclusion in Technology

LGBTQ+ individuals in the technology industry face particular challenges including:

  • Workplace cultures that may be hostile or unwelcoming to non-heterosexual and non-cisgender individuals.
  • AI systems that may embed heteronormative or cisnormative assumptions.
  • Platform governance decisions that disproportionately affect LGBTQ+ content through automated content moderation that cannot reliably distinguish between the discussion of LGBTQ+ identities and violations of community standards.
  • The amplification of anti-LGBTQ+ content by engagement-maximizing algorithms.

The tension between platform neutrality and the protection of vulnerable communities is illustrated by events at X (formerly Twitter), where Elon Musk’s response to anti-transgender content led to significant controversy and prompted LGBTQ+ community organizations to consider leaving the platform.

11.4 Gender, Voice, and AI Design

The observation that most voice assistants — Alexa, Siri, Cortana — default to female voices has prompted analysis of the gender dynamics embedded in AI design. The choice of a female voice for a subservient, always-available digital assistant reinforces traditional gender roles, associating femininity with service, compliance, and availability.

This design choice reflects broader patterns in the technology industry, where products are overwhelmingly designed by men and where the perspectives and experiences of women are often treated as secondary considerations. The question “Why is it Alexa, not Alex?” encapsulates the concern that AI design choices, even those that may seem trivial, carry cultural meaning and reinforce social hierarchies.


Chapter 12: Work, Automation, and the Digital Economy

12.1 The Productivity Paradox

The relationship between computing technology and economic productivity has puzzled economists for decades. In 1987, Nobel laureate Robert Solow famously observed that “you can see the computer age everywhere but in the productivity statistics.” Despite massive investment in computing technology throughout the 1970s and 1980s — a hundredfold increase in computing capacity — labour productivity growth actually declined, from over 3% annually in the 1960s to roughly 1% in the 1980s.

Erik Brynjolfsson and Lorin Hitt, in “Beyond the Productivity Paradox” (1998), provided the most influential analysis of this phenomenon, arguing that the paradox was not evidence that IT is unproductive but rather reflected measurement difficulties, time lags, and the critical need for complementary organizational changes. Their explanations included:

  1. Measurement problems: Traditional productivity metrics failed to capture quality improvements. The mismeasurement hypotheses centred on the idea that real output estimates overestimated inflation and understated productivity because they did not account for qualitative improvements in IT goods and services.
  2. Time lags: Firm-level data showed that short-term IT benefits matched “normal” returns, but long-term benefits were 2 to 8 times as large, indicating significant lags of 2–5 years before returns materialized. Historical parallels with the steam engine and electricity confirmed that transformative technologies require decades of organizational adaptation.
  3. Redistribution rather than creation: Some firms invested in IT for competitive advantage without expanding total industry output.
  4. Organizational barriers: The critical finding was that firms that combined IT investment with decentralized work practices and organizational redesign showed approximately 5% higher productivity, while those investing in IT without organizational change actually performed worse. $1 of computer hardware was associated with approximately $10 in market value, far exceeding the typical $1-to-$1 ratio, suggesting massive hidden value in complementary organizational assets — new processes, training, and restructured hierarchies. Computers, Brynjolfsson and Hitt concluded, are “the catalyst for bigger changes” whose value comes not from the hardware itself but from the organizational restructuring they enable.

The paradox was partially resolved in the late 1990s, when a few sectors — technology, retail, and wholesale — led an acceleration of U.S. productivity growth. However, the broader question of whether technology investment translates into productivity gains remains relevant. With the emergence of artificial intelligence, Solow’s paradox is back in the spotlight: a 2026 study of thousands of CEOs found that AI had no measurable impact on employment or productivity, echoing the patterns observed four decades earlier.

12.2 Automation and Worker Displacement

The displacement of workers by automation is one of the oldest concerns about computing technology and one that has acquired new urgency with the development of AI systems capable of performing cognitive tasks previously considered uniquely human.

The COVID-19 pandemic accelerated the adoption of automation technologies, as companies invested in systems that could replace workers who were unable to come to work or who posed infection risks. Research has shown that jobs lost during the pandemic were more likely to be permanently eliminated through automation than jobs lost in previous recessions.

The debate over professional responsibility for worker displacement involves two positions:

Those who argue that computing professionals should focus solely on technical excellence contend that displacement is a natural consequence of technological progress, that attempting to prevent it would be futile and counterproductive, and that new technologies ultimately create more jobs than they destroy. They maintain that the responsibility for managing the social consequences of automation lies with governments and policymakers, not with individual engineers.

Those who argue for professional engagement contend that computing professionals have special knowledge about the likely impacts of the systems they build and a corresponding obligation to advocate for affected workers within their organizations. They argue that the ACM and IEEE codes of ethics, which require professionals to consider the social consequences of their work, imply a duty to anticipate and mitigate displacement.

12.3 Weapons of Math Destruction

Cathy O’Neil’s Weapons of Math Destruction articulated a powerful critique of the way algorithmic systems perpetuate and amplify inequality. O’Neil, a mathematician and data scientist, argued that many widely used algorithms function as “weapons of math destruction” — opaque, unaccountable systems that operate at scale, encode biases, and create destructive feedback loops.

O’Neil defined three characteristics that make an algorithm a “weapon of math destruction”: (1) opacity — methods hidden from scrutiny, (2) scale — applied to large populations, and (3) damage — creating hardship or deepening inequality. Key examples include:

  • Predictive policing algorithms (such as PredPol) that direct police resources to neighbourhoods with high historical crime rates, leading to more arrests in those neighbourhoods, which produces data showing more crime, which directs more resources to the same neighbourhoods — a self-reinforcing cycle that disproportionately affects communities of colour (see the simulation sketch after this list).
  • Teacher evaluation models: Sarah Wysocki, a well-regarded Washington, D.C. teacher, was fired because an opaque value-added model rated her poorly despite positive evaluations from parents and administrators — illustrating how algorithmic decisions can override human judgment with devastating personal consequences.
  • Credit scoring algorithms that use proxies for race and class (such as zip code, internet browsing history, or social media activity) to make lending decisions, modernizing discriminatory redlining.
  • Employment screening: Kyle Behm could not access employment because opaque personality assessment algorithms screened out his profile, with no mechanism for understanding or contesting the decision.
  • University ranking algorithms that incentivize institutions to manipulate the metrics used in rankings rather than to improve actual educational quality.
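
The predictive policing feedback loop can be made explicit with a toy simulation. In the sketch below, two neighbourhoods have identical underlying crime rates, but patrols are allocated according to last year’s arrest counts; because arrests occur only where patrols are sent, the initial skew in deployment reproduces itself in the data year after year. Every number is hypothetical.

```python
# Toy simulation of the predictive-policing feedback loop: identical true
# crime rates, but next year's patrols follow this year's arrest statistics.
# All rates and the two-neighbourhood setup are hypothetical.
true_crime = {"A": 10.0, "B": 10.0}   # identical underlying crime rates
patrols = {"A": 70.0, "B": 30.0}      # historically skewed deployment (of 100)
DETECTION_RATE = 0.01                 # arrests per unit crime per patrol

for year in range(1, 6):
    arrests = {n: true_crime[n] * patrols[n] * DETECTION_RATE
               for n in true_crime}
    total_arrests = sum(arrests.values())
    # Re-allocate patrols in proportion to recorded arrests: the data
    # "show" more crime in A only because that is where police looked.
    patrols = {n: 100.0 * arrests[n] / total_arrests for n in arrests}
    print(f"year {year}: arrests {arrests} -> next-year patrols {patrols}")
```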

O’Neil argued that these systems are “propping up the lucky and punishing the downtrodden, creating a toxic cocktail for democracy.” The mathematical veneer — their apparent objectivity and precision — makes them particularly dangerous, as it obscures the subjective choices embedded in their design. Algorithms, she demonstrated, lack any concept of fairness; they optimize for efficiency and proxy metrics, not justice.

12.4 The Gig Economy and Remote Work

The rise of platform-mediated gig work and remote employment has transformed the nature of work for millions of people. Digital platforms such as Uber, DoorDash, and Fiverr match workers with short-term tasks, offering flexibility but often without the protections and benefits of traditional employment.

The shift to remote work, accelerated by the pandemic, has raised questions about career development, social isolation, and the blurring of boundaries between work and personal life. Reports from young professionals describe the challenges of building professional networks and acquiring tacit knowledge when working entirely remotely.

The concept of “shadow work” — unpaid labour performed by consumers that was previously done by paid workers, such as self-checkout at grocery stores, online check-in at airports, and self-service customer support — represents another dimension of technology-driven changes in the nature of work.

12.5 E-Commerce and Algorithmic Trading

The algorithmization of commerce has transformed markets in ways that raise ethical concerns:

  • Algorithmic trading: Wall Street firms have developed systems that analyse the Twitter posts of political figures, including presidents, and automatically execute trades based on the anticipated market impact of those statements. The speed and scale of algorithmic trading raise questions about market fairness and stability.
  • College admissions algorithms: The use of algorithms to screen college applicants has been criticized for potentially steering admissions in directions that do not align with educational values.
  • AI in content creation: The emergence of AI news anchors and AI-generated journalism raises questions about the future of human creative and intellectual work.
  • Electronic voting: The use of computer systems in elections creates vulnerabilities that could undermine democratic processes, as explored in analyses of e-voting challenges to democracy.

12.6 Digital Art and Creative Workers

The development of AI image generation systems such as DALL-E, Midjourney, and Stable Diffusion has provoked a fierce debate about the impact of AI on creative workers. Digital artists have pushed back against AI systems trained on their work without consent or compensation, arguing that these systems constitute a form of automated plagiarism that threatens their livelihoods.

The debate touches on fundamental questions about the nature of creativity, the meaning of originality, and the economic rights of creative workers in an age of automated content generation.


Chapter 13: Cryptocurrency, Cybercrime, and Digital Finance

13.1 Cryptocurrency: Promise and Peril

Cryptocurrencies — digital assets secured by cryptography and typically operated on decentralized blockchain networks — have evolved from an obscure technical experiment into a significant force in global finance. Bitcoin, the first and most well-known cryptocurrency, was introduced in 2009 by the pseudonymous Satoshi Nakamoto as an alternative to government-controlled monetary systems.

The promise of cryptocurrency includes financial autonomy (freedom from government and institutional control of money), financial inclusion (access to financial services for the unbanked), transparency (public blockchains provide verifiable transaction records), and censorship resistance (the ability to transact without requiring permission from any authority). The story of a Ukrainian refugee who fled to Poland carrying $2,000 in bitcoin on a USB drive — value that would have been difficult to transport through traditional financial channels — illustrates the potential of cryptocurrency to serve as a portable store of value in crisis situations.

The perils of cryptocurrency include its facilitation of criminal activity through pseudo-anonymous transactions, its extreme price volatility, the environmental costs of energy-intensive mining operations, and the proliferation of fraud and scams. The collapse of the FTX cryptocurrency exchange, which lost billions of dollars of customer funds, demonstrated the risks of unregulated financial intermediaries. The story of Jimmy Zhong, who stole roughly 50,000 bitcoin from the Silk Road darknet marketplace, a haul worth over $3 billion by the time he was caught years later, illustrates both the potential for large-scale digital theft and the limitations of cryptocurrency anonymity.

13.2 Stablecoins and Systemic Risk

The collapse of the TerraUSD and Luna cryptocurrencies in 2022 revealed the fragility of algorithmic stablecoins — cryptocurrencies designed to maintain a stable value through automated market mechanisms rather than backing by reserves. When TerraUSD lost its peg to the U.S. dollar, a “death spiral” ensued in which the mechanism designed to restore the peg instead accelerated the collapse, destroying approximately $40 billion in value within days.
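
A toy model helps show why a peg-restoring mechanism can become a death spiral. In the sketch below, each stablecoin redeemed mints one dollar’s worth of the volatile sister token at its current price; as redemptions dilute the token and confidence erodes, ever more tokens must be minted per redemption, so the supply balloons while the price collapses. All quantities and the confidence factor are invented for illustration and are not the actual Terra parameters.

```python
# Toy model of an algorithmic-stablecoin death spiral. Each stablecoin
# redeemed mints $1 of the volatile token at the current price; minting
# dilutes the token while overall confidence (market cap) erodes. All
# numbers are hypothetical, not Terra's actual figures.
volatile_supply = 350e6       # volatile-token supply
stable_supply = 18e9          # stablecoin supply
market_cap = 28e9             # dollar value the market assigns to the token
price = market_cap / volatile_supply

for day in range(1, 8):
    redeemed = 0.15 * stable_supply        # 15% of holders exit each day
    minted = redeemed / price              # $1 of volatile token per coin
    volatile_supply += minted
    stable_supply -= redeemed
    market_cap *= 0.7                      # confidence erodes as the run grows
    price = market_cap / volatile_supply   # dilution plus lost confidence
    print(f"day {day}: price ${price:.4f}, "
          f"volatile supply {volatile_supply:.3e}, "
          f"stablecoin left {stable_supply:.3e}")
```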

The episode demonstrated that the complexity and interconnection of cryptocurrency systems can create systemic risks analogous to those in traditional finance, but without the regulatory safeguards and institutional backstops that have developed over centuries of financial regulation.

13.3 China’s Relationship with Cryptocurrency

China’s evolving approach to cryptocurrency provides an instructive case study in the tension between technological innovation and state control. China has imposed a comprehensive ban on cryptocurrency mining and trading, driven by concerns about financial stability, capital flight, environmental impact, and the threat to the government’s monetary sovereignty. At the same time, China has developed its own digital yuan — a central bank digital currency that provides many of the technological efficiencies of cryptocurrency while maintaining full government control and surveillance capability.

13.4 Ransomware and Cyber Extortion

Ransomware — malware that encrypts a victim’s data and demands payment for the decryption key — has become one of the most damaging forms of cybercrime. The attacks on Indigo Books and Music (which cost over $50 million) and the Toronto Public Library (which disrupted service for four months) illustrate the real-world impact on institutions and the communities they serve.

Cryptocurrency has been a key enabler of ransomware, as it provides a mechanism for anonymous payment that is difficult for law enforcement to trace. The relationship between cryptocurrency and cybercrime creates a feedback loop: the availability of anonymous payment encourages ransomware attacks, and the demand for ransomware payments drives adoption of cryptocurrency.

13.5 Hacking and State-Sponsored Cyber Operations

The cybersecurity threat landscape includes sophisticated state-sponsored operations that blur the line between espionage, sabotage, and warfare:

  • NSO Group and Pegasus spyware: The Israeli company NSO Group developed Pegasus, a sophisticated spyware tool capable of compromising smartphones and accessing all their data, including messages, emails, photos, and real-time location. Investigations revealed that Pegasus was used to hack the cellphones of journalists, activists, and political figures worldwide, including associates of murdered Washington Post journalist Jamal Khashoggi. Apple sued NSO Group, characterizing it as a “hacker-for-hire” operation, and Citizen Lab, based at the University of Toronto, played a key role in identifying and documenting the abuse.
  • The Jeff Bezos hack: The hack of Amazon CEO Jeff Bezos’s phone, allegedly using spyware associated with Saudi Arabia, and the subsequent extortion attempt by the National Enquirer, illustrated the intersection of state-sponsored hacking, corporate espionage, and personal privacy.
  • Investment fraud: Canada has been identified as a major target for international investment scammers, with fraud operations originating from multiple countries using sophisticated digital techniques to target Canadian victims.

Chapter 14: Computing and the Environment

14.1 The Energy Footprint of Computing

The environmental impact of computing has grown from a minor concern to a major sustainability challenge. Data centres, which house the servers that power cloud computing, social media, streaming video, and AI systems, consume approximately 1–2% of global electricity and are projected to increase dramatically.

The rise of artificial intelligence has accelerated this trend. AI model training requires enormous computational resources. The paper “Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model” by Luccioni, Viguier, and Ligozat (2022) provided the first comprehensive lifecycle carbon footprint assessment of an LLM. The study found that BLOOM’s dynamic power consumption during training produced approximately 24.7 tonnes of CO2eq (433,196 kWh at 57 gCO2eq/kWh on France’s nuclear-heavy grid). However, the full lifecycle cost — including embodied emissions from hardware manufacturing (11.2 tonnes, 22.2%) and idle infrastructure consumption (14.6 tonnes, 28.9%) — totalled 50.5 tonnes. By comparison, GPT-3’s training produced an estimated 502 tonnes of CO2eq, primarily because the U.S. grid’s carbon intensity (429 gCO2eq/kWh) is far higher than France’s. Critically, deployment costs were also significant: running BLOOM via API emitted approximately 19 kg CO2eq per day, with 75% of energy consumed merely keeping the model in GPU memory between requests. Only 54% of total power consumption during training was dynamic computation; 46% was idle overhead — a finding that reveals how standard reporting of only GPU computation dramatically underestimates true environmental costs.
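
The arithmetic behind these figures is straightforward to reproduce. The sketch below simply combines the quantities reported in the paper; no new data are introduced.

```python
# Back-of-the-envelope reproduction of the BLOOM carbon accounting reported
# by Luccioni et al. (2022). The inputs are the paper's published estimates;
# the code only shows how the headline numbers relate to one another.
TRAINING_KWH = 433_196      # dynamic power consumption during training
GRID_G_PER_KWH = 57         # carbon intensity of France's grid (gCO2eq/kWh)
EMBODIED_T = 11.2           # emissions embodied in hardware manufacturing
IDLE_T = 14.6               # idle infrastructure consumption

dynamic_t = TRAINING_KWH * GRID_G_PER_KWH / 1e6   # grams -> tonnes
total_t = dynamic_t + EMBODIED_T + IDLE_T

print(f"dynamic training emissions: {dynamic_t:.1f} t CO2eq")
print(f"full lifecycle total:       {total_t:.1f} t CO2eq")
for label, tonnes in (("embodied", EMBODIED_T), ("idle", IDLE_T)):
    print(f"  {label}: {tonnes} t ({100 * tonnes / total_t:.1f}% of lifecycle)")
```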

Key environmental concerns include:

Water consumption: AI and data centres consume billions of litres of water for cooling. Reports have documented that in water-stressed regions such as Arizona, data centre water consumption is exacerbating existing shortages.

Energy demand: Data centre power consumption is projected to surge six-fold over the next decade. AI processing alone could consume as much electricity as the entire nation of Ireland. Google’s emissions have climbed nearly 50% in five years due to AI energy demand, jeopardizing the company’s climate commitments.

Microsoft’s paradox: Microsoft has invested in innovative approaches to data centre sustainability, including underwater data centres (Project Natick) that use ocean water for cooling. However, the company’s AI push has simultaneously caused its emissions to surge, illustrating the tension between efficiency improvements and the rebound effect of increasing demand.

14.2 Cryptocurrency and Energy

The energy consumption of cryptocurrency mining, particularly Bitcoin’s proof-of-work consensus mechanism, has been a major environmental concern. Bitcoin mining was estimated to consume more electricity annually than many medium-sized countries. The White House released a fact sheet on the “Climate and Energy Implications of Crypto-Assets in the United States,” and proposals have been made to tax cryptocurrency mining to internalize its environmental costs.
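To see why proof-of-work is intrinsically energy-hungry, it helps to look at the shape of the mining loop itself: miners repeatedly hash candidate blocks with different nonces until a hash falls below a difficulty target, so the only way to compete is to perform (and pay for) more hashes. The toy sketch below illustrates the principle; the block contents and difficulty are invented for demonstration and bear no relation to real Bitcoin parameters.

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash begins with `difficulty`
# zero hex digits. Real Bitcoin mining applies the same idea at a difficulty
# requiring on the order of 10^20+ hashes per block, hence its energy demand.
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Return the first nonce whose hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block", difficulty=4)  # illustrative difficulty
print(nonce, digest)
# Each additional zero of difficulty multiplies the expected number of hashes
# by 16; energy use scales with hashes attempted, not with useful work done.
```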

The transition of Ethereum from proof-of-work to proof-of-stake in 2022 (the “Merge”) reduced that blockchain’s energy consumption by approximately 99.95%, demonstrating that less energy-intensive consensus mechanisms are technically feasible. However, debates continue about whether proof-of-stake provides equivalent security guarantees and whether Bitcoin will or should make a similar transition.

Some have argued that cryptocurrency mining can be environmentally beneficial when it uses otherwise wasted energy, such as the case of Texas entrepreneurs who mine bitcoin using flare gas from oil drilling that would otherwise be burned off into the atmosphere. However, critics argue that such cases are the exception rather than the rule, and that the overall environmental impact of cryptocurrency mining remains deeply negative.

The relationship between cryptocurrency mining and water scarcity has emerged as a concern in regions where mining operations and water-stressed communities compete for limited resources.

14.3 Electronic Waste

Electronic waste (e-waste) is one of the fastest-growing waste streams in the world. The rapid obsolescence cycle of consumer electronics — smartphones, laptops, tablets, and accessories — generates millions of tonnes of e-waste annually, much of which contains hazardous materials including lead, mercury, cadmium, and brominated flame retardants.

Research reviews on e-waste management have documented the disproportionate impact on developing countries, where much of the world’s e-waste is processed under conditions that expose workers and communities to toxic substances. Informal recycling operations, in which workers disassemble electronics by hand and burn circuit boards to recover precious metals, create severe health and environmental hazards.

The concept of planned obsolescence — designing products with artificially limited lifespans to encourage replacement purchases — has been challenged through “right to repair” movements and regulatory action. Apple agreed to pay $14.4 million in a Canadian settlement related to the deliberate throttling of older iPhone models, a practice that critics characterized as designed to push consumers toward purchasing new devices.

The EPA and other regulatory agencies have established e-waste recycling programmes, but compliance and enforcement remain challenging, particularly given the global nature of electronic supply chains.

14.4 The Environmental Impact of Streaming

The shift from physical media to streaming for music, video, and gaming has been widely assumed to be environmentally beneficial — no plastic discs, no packaging, no shipping. However, research has shown that the environmental impact of streaming is more complex than this simple narrative suggests.

Streaming music and video requires data to be transmitted across networks and processed in data centres every time content is played. For frequently replayed content, the cumulative energy cost of streaming can exceed the one-time energy cost of producing and distributing a physical copy. The environmental impact of music streaming has been estimated to exceed that of the vinyl and CD eras in terms of greenhouse gas emissions, even though it produces less plastic waste.
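The comparison can be framed as a simple break-even calculation: a physical copy incurs a one-time production and distribution cost, while each stream incurs a small recurring network and data-centre cost. The sketch below uses invented placeholder figures purely to illustrate the structure of the argument, not measured energy values.

```python
# Hypothetical break-even between a physical copy and streaming.
# Both energy figures are invented placeholders for illustration only.

PHYSICAL_COPY_KWH = 1.0   # one-time cost: pressing and shipping a disc (hypothetical)
PER_STREAM_KWH = 0.05     # recurring cost: network transfer + data-centre work (hypothetical)

def streaming_total_kwh(plays: int) -> float:
    """Cumulative energy for streaming a title `plays` times."""
    return plays * PER_STREAM_KWH

break_even = PHYSICAL_COPY_KWH / PER_STREAM_KWH
print(f"Streaming overtakes the physical copy after ~{break_even:.0f} plays")

# The qualitative conclusion does not depend on the exact numbers: heavily
# replayed content accumulates recurring streaming costs past the one-time
# cost of a physical copy, while rarely played content favours streaming.
```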

14.5 AI for Environmental Good

While the environmental costs of AI are significant, AI is also being deployed to address environmental challenges:

  • Illegal fishing detection: AI systems are being used to monitor vessel tracking data and identify patterns consistent with illegal fishing, helping to protect marine ecosystems.
  • Battery technology: AI has been used to discover new materials that could reduce the use of lithium in batteries, potentially addressing one of the key bottleneck resources in the transition to renewable energy.
  • Energy optimization: AI systems are being used to optimize energy distribution, building efficiency, and industrial processes, reducing waste and emissions.
  • Food and water waste: Startups presented at technology conferences have demonstrated AI solutions to reduce food and water waste throughout the supply chain.

The challenge is to ensure that the environmental benefits of AI applications outweigh the environmental costs of the AI systems themselves — a calculus that requires careful accounting and honest assessment.

14.6 Toward Sustainable Computing

The environmental sustainability of computing requires action at multiple levels:

  • Hardware efficiency: Continued improvement in the energy efficiency of processors, memory, and storage.
  • Software efficiency: Writing code that minimizes computational resources, rather than relying on ever-more-powerful hardware to compensate for inefficient software.
  • Renewable energy: Powering data centres with renewable energy sources.
  • Circular economy: Designing electronics for longevity, repairability, and recyclability rather than for planned obsolescence.
  • Regulation: Establishing standards and requirements for the environmental reporting and performance of technology companies.
  • User awareness: Educating users about the environmental consequences of their digital consumption choices.

Chapter 15: Debates in Computing Ethics — Key Controversies

15.1 Censorship of Electronic Communication

The censorship debate in the context of computing centres on the responsibilities of technology companies operating in countries with restrictive speech regimes. When companies like Google, Apple, or Meta operate in countries that demand the suppression of political speech, the blocking of news, or the surveillance of dissidents, they face a choice between complying with local law (and facilitating repression) and refusing to comply (and potentially losing access to the market).

The arguments for compliance emphasize pragmatism: companies are subject to the laws of the jurisdictions in which they operate; withdrawal would harm users who depend on the services; and competitors willing to comply would simply replace them. The arguments for refusal emphasize moral responsibility: companies that facilitate censorship and surveillance become complicit in human rights violations, and collective action by major technology companies could pressure governments to reform.

15.2 Misinformation and Platform Responsibility

The question of whether platform owners should be held responsible for detecting and preventing misinformation is one of the most contested issues in technology policy.

Those who favour platform responsibility argue that the scale and speed of online misinformation — amplified by engagement-maximizing algorithms — pose a genuine threat to democratic governance, public health, and social cohesion. They argue that platforms profit from engagement driven by false and inflammatory content and should bear responsibility for the consequences.

Those who oppose platform responsibility argue that defining “misinformation” is inherently subjective and politically contentious, that requiring platforms to be arbiters of truth would result in the suppression of legitimate but controversial speech, and that the technical challenge of accurately identifying misinformation at scale makes comprehensive content removal impractical. They advocate instead for education and media literacy, teaching users to evaluate sources and identify questionable information rather than relying on platforms to filter their information environment.

15.3 Computers and Children

The debate over children’s relationship with computing technology reflects broader anxieties about childhood in the digital age. On one side, concerned parents and educators argue that children are spending too much time in front of screens, at the expense of outdoor physical activity, unstructured play, and face-to-face social interaction. They point to research linking excessive screen time to attention problems, sleep disruption, and reduced physical fitness.

On the other side, advocates for digital literacy argue that restricting children’s access to technology is counterproductive in a world where digital skills are essential for education and employment. They contend that the current generation is fundamentally a “wired generation” and that efforts should focus on helping children manage the potential negative consequences of technology use rather than attempting to eliminate it.

The growing concern about children’s privacy online is reflected in regulatory action: Epic Games’ $520 million FTC settlement over children’s privacy violations and deceptive design practices, and proposals by U.S. regulators for new online privacy safeguards for children, signal increasing recognition that the technology industry has failed to adequately protect young users.


Appendix: Analytical Frameworks for Technology Ethics

A.1 Stakeholder Analysis

When evaluating the social implications of a computing technology, a useful starting point is stakeholder analysis: identifying all parties affected by the technology and assessing the impact on each.

Stakeholder | Potential Benefits                | Potential Harms
Users       | Convenience, access, connection   | Privacy loss, addiction, manipulation
Non-users   | Indirect benefits from innovation | Digital divide, exclusion from services
Companies   | Revenue, market position          | Reputation risk, legal liability
Workers     | New job categories                | Displacement, surveillance
Society     | Economic growth, efficiency       | Inequality, democratic erosion
Environment | Optimization, monitoring          | Energy use, e-waste

A.2 Nissenbaum’s Contextual Integrity

Helen Nissenbaum’s framework of contextual integrity (widely taught at Harvard and Stanford) provides a rigorous approach to privacy analysis. The central insight is that privacy violations occur not when information is shared, but when information flows violate the norms of the social context in which they occur. Every social context — healthcare, education, friendship, commerce — has established norms governing what information is appropriate to share, with whom, and under what conditions. A doctor sharing your medical information with another physician treating you respects contextual norms; the same doctor sharing that information with your employer violates them, even though the information itself is identical.

This framework is more nuanced than simple “public vs. private” dichotomies because it recognizes that the same information can be appropriate in one context and inappropriate in another. It has been particularly influential in analyzing digital privacy, where information routinely crosses contextual boundaries — data collected for one purpose (a personality quiz) being used for another (political targeting), as in the Cambridge Analytica scandal.
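One way to make the framework operational is to represent an information flow as a tuple of sender, recipient, subject, information type, and transmission principle, and to check that tuple against the norms of the context in which the information originated. The sketch below is a minimal illustration; the contexts and norms are invented examples, not Nissenbaum’s own formalization.

```python
# Minimal sketch of contextual integrity: a flow is appropriate only if it
# matches the informational norms of its originating context.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    info_type: str
    principle: str  # transmission principle, e.g. "as required for care"

# Invented example norms: allowed (recipient, info_type, principle) triples
# for information that originates in each context.
NORMS = {
    "healthcare": {("treating physician", "medical record", "as required for care")},
    "commerce":   {("merchant", "purchase history", "with consent")},
}

def respects_contextual_integrity(context: str, flow: Flow) -> bool:
    return (flow.recipient, flow.info_type, flow.principle) in NORMS.get(context, set())

ok  = Flow("doctor", "treating physician", "patient", "medical record", "as required for care")
bad = Flow("doctor", "employer", "patient", "medical record", "without consent")
print(respects_contextual_integrity("healthcare", ok))   # True:  stays within context norms
print(respects_contextual_integrity("healthcare", bad))  # False: same data, different flow
```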

A.3 The Suresh-Guttag Taxonomy of ML Bias

Harini Suresh and John Guttag at MIT developed a taxonomy of harm sources across the full machine learning lifecycle, providing a systematic framework for identifying where bias enters AI systems:

  1. Historical bias: Pre-existing societal biases reflected in training data (e.g., hiring data reflecting decades of gender discrimination).
  2. Representation bias: Training data that fails to represent the full population (e.g., facial recognition datasets that are 83% white).
  3. Measurement bias: Proxies used to measure a concept that systematically differ across groups (e.g., using arrest records as a proxy for criminality).
  4. Aggregation bias: A one-size-fits-all model that fails to account for meaningful differences between subgroups.
  5. Evaluation bias: Benchmark datasets that are unrepresentative of the deployment population.
  6. Deployment bias: A system used in contexts or for purposes different from those for which it was designed.

This taxonomy is valuable because it reveals that bias is not a single problem with a single fix; it can enter at any stage and requires different interventions at each stage.

A.4 Nguyen’s Value Capture

C. Thi Nguyen’s concept of value capture (discussed in an MIT SERC case study on social media) describes how platforms’ metric systems replace users’ richer, more nuanced values with simplified quantifiable proxies. A researcher who values nuanced understanding may find that value captured and replaced by citation counts. A person who values genuine social connection may find that value captured and replaced by follower counts and likes. This framework transfers directly to analyzing loot boxes and engagement mechanics in gaming, where intrinsic motivations for play are captured and replaced by scores, ranks, and virtual rewards.

A.5 The PAPA Framework

Richard Mason’s PAPA framework identifies four key ethical issues in information technology:

  • Privacy: What information about oneself should individuals be required to reveal to others?
  • Accuracy: Who is responsible for the authenticity, fidelity, and accuracy of information?
  • Property: Who owns information? What are the just prices for its exchange?
  • Accessibility: What information does a person or organization have a right to obtain, and under what conditions?

A.6 Consequentialism, Deontology, and Virtue Ethics

Three major ethical traditions provide complementary frameworks for evaluating technology decisions:

Consequentialism (including utilitarianism) evaluates actions based on their outcomes. A consequentialist analysis of facial recognition technology would weigh the security benefits (crimes prevented, suspects identified) against the harms (wrongful arrests, chilling effects on freedom of assembly, disproportionate impact on racial minorities).

Deontology (including Kantian ethics) evaluates actions based on adherence to moral rules and respect for rights. A deontological analysis of mass surveillance would focus on whether bulk data collection violates individuals’ rights to privacy, regardless of whether the surveillance produces net benefits for society.

Virtue ethics evaluates actions based on the character of the agent. A virtue ethics analysis of a computing professional’s decision to participate in building a surveillance system would ask: is this the kind of person I want to be? Does this decision reflect the virtues of honesty, courage, justice, and prudence?

These frameworks often yield different conclusions when applied to the same situation, which is why ethical analysis of technology requires facility with multiple perspectives rather than rigid adherence to a single theory.

A.7 Impossibility Theorems in Algorithmic Fairness

A critical result from the algorithmic fairness literature, widely taught at CMU and Harvard, is that common mathematical definitions of fairness are mutually incompatible. Three widely used definitions include:

  • Demographic parity: The proportion of positive outcomes should be equal across groups.
  • Equalized odds: The true positive rate and false positive rate should be equal across groups.
  • Calibration: Among those assigned a given risk score, the actual rate of the outcome should be equal across groups.

Mathematical proofs have shown that, except in degenerate cases, it is impossible to satisfy all three simultaneously. This impossibility result has profound implications: it means that the choice of which fairness criterion to optimize is inescapably a value judgment, not a technical decision. There is no “objectively fair” algorithm — only algorithms that are fair according to one definition at the expense of others. This insight reinforces O’Neil’s argument that the mathematical veneer of algorithmic systems obscures the subjective choices embedded in their design.
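One compact way to see the tension, following Chouldechova’s analysis, is an identity relating a classifier’s error rates to its positive predictive value and the group’s base rate; it follows from counting positive predictions in two ways. The notation below is introduced here for illustration.

```latex
% For a group with base rate (prevalence) p, false positive rate FPR,
% false negative rate FNR, and positive predictive value PPV:
\[
  \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)
\]
% If two groups have different base rates p, then holding PPV equal across
% groups (a calibration-style condition) and holding FNR equal forces FPR to
% differ, so calibration and equalized odds cannot hold together except when
% base rates coincide or prediction is perfect.
```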

A.8 The Collingridge Dilemma

David Collingridge’s dilemma of social control of technology states that:

  • Early in a technology’s development, it is relatively easy to change its trajectory, but the social consequences are difficult to predict because the technology is not yet widely deployed.
  • Late in a technology’s development, the social consequences are evident, but the technology is deeply embedded in social and economic structures, making it resistant to change.

This dilemma is directly applicable to contemporary debates about AI regulation: acting early (before the consequences are clear) risks stifling beneficial innovation, while waiting until the consequences are apparent may mean that the technology is too entrenched to redirect.

A.9 Questions for Ethical Analysis

When confronting any issue at the intersection of computing and society, the following questions provide a structured approach to analysis:

  1. Who benefits? Who gains from this technology, and how?
  2. Who is harmed? Who bears the costs, and what form do those costs take?
  3. Who decides? Who has the power to make decisions about the technology’s design, deployment, and governance?
  4. What are the alternatives? Could the same goals be achieved with less harmful means?
  5. What are the long-term consequences? How might the technology’s effects compound over time?
  6. What values are embedded in the technology’s design? What assumptions about the world does it encode?
  7. Is consent informed and meaningful? Do affected individuals understand and genuinely agree to the technology’s impact on their lives?
  8. What happens when things go wrong? What are the failure modes, and who bears the consequences of failure?