LS 413: Surveillance and Society
Jennifer Whitson
Estimated study time: 53 minutes
Sources and References
Primary texts — Michel Foucault, Discipline and Punish: The Birth of the Prison (1975); Gilles Deleuze, “Postscript on the Societies of Control” (1992); Kevin D. Haggerty and Richard V. Ericson, “The Surveillant Assemblage” (2000); Shoshana Zuboff, The Age of Surveillance Capitalism (2019); David Lyon, Surveillance Society: Monitoring Everyday Life (2001) and The Culture of Surveillance (2018); Gary T. Marx, Windows into the Soul: Surveillance and Society in an Age of High Technology (2016); Oscar Gandy, The Panoptic Sort: A Political Economy of Personal Information (1993); Roger Clarke, “Information Technology and Dataveillance” (1988); Simone Browne, Dark Matters: On the Surveillance of Blackness (2015); Mark Andrejevic, iSpy: Surveillance and Power in the Interactive Era (2007); Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (2019).
Supplementary texts — Jeremy Bentham, Panopticon; or, The Inspection-House (1791); James Rule, Private Lives and Public Surveillance (1973); Greg Elmer, Profiling Machines: Mapping the Personal Information Economy (2004); Kirstie Ball, Kevin Haggerty, and David Lyon, eds., Routledge Handbook of Surveillance Studies (2012); Virginia Eubanks, Automating Inequality (2018); Safiya Umoja Noble, Algorithms of Oppression (2018); Ruha Benjamin, Race After Technology (2019); Torin Monahan, Surveillance in the Time of Insecurity (2010); Daniel Solove, Understanding Privacy (2008); Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (2010); Steve Mann, Jason Nolan, and Barry Wellman, “Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments” (2003).
Online resources — Surveillance Studies Centre, Queen’s University; Electronic Frontier Foundation (EFF) publications; The Surveillance Studies Network; Privacy International reports; ACLU technology and civil liberties publications; Surveillance & Society journal (open access).
Chapter 1: Foundations of Surveillance Studies
1.1 Defining Surveillance
Surveillance, derived from the French sur (over) and veiller (to watch), refers fundamentally to the focused, systematic, and routine attention to personal details for purposes of influence, management, protection, or direction. This definition, offered by David Lyon, captures the breadth of surveillance practices while distinguishing them from casual observation. Surveillance is not merely watching; it is purposeful watching embedded within relations of power, institutions, and technologies.
The study of surveillance as a distinct field of inquiry emerged in the late twentieth century, though the practices it examines are as old as organized society itself. Ancient empires conducted censuses, medieval churches kept parish records, and early modern states developed elaborate bureaucracies to track populations. What changed in the modern era was the scale, systematicity, and technological mediation of these practices. The rise of the nation-state, industrialization, and eventually digital computing transformed surveillance from an episodic activity into a continuous, pervasive feature of social life.
Gary T. Marx distinguishes several dimensions along which surveillance practices vary: whether the subject is aware of being watched, whether consent has been given, whether the surveillance is visible or hidden, whether it targets individuals or categories of people, and whether it is personal or impersonal in character. These dimensions reveal that surveillance is not a monolithic phenomenon but a family of practices with different ethical implications and social effects.
A critical distinction in the field is that between targeted surveillance and mass surveillance. Targeted surveillance focuses on specific individuals who have come to the attention of authorities — suspects in criminal investigations, for example. Mass surveillance, by contrast, collects information about entire populations, sorting and filtering to identify patterns, risks, or opportunities. The shift from targeted to mass surveillance is one of the defining transformations of the digital age and raises profound questions about the relationship between citizens and the state, consumers and corporations.
1.2 Historical Development: From Bureaucracy to Big Data
The history of surveillance is inseparable from the history of modern governance. James Rule’s pioneering study Private Lives and Public Surveillance (1973) traced how bureaucratic record-keeping systems — driver’s licensing, national insurance, banking records — constituted a form of surveillance that was already deeply embedded in liberal democratic societies. Rule demonstrated that surveillance was not an aberration or excess of state power but a routine, structural feature of modern administration.
The development of national identification systems, tax records, census-taking, and population registries all represent what Lyon calls bureaucratic surveillance — the systematic collection of personal information by state agencies for purposes of administration and governance. Max Weber’s analysis of bureaucracy as the quintessential form of modern rational organization provides a theoretical foundation for understanding how surveillance became woven into the fabric of institutional life. Bureaucracies require information to classify, sort, and process individuals, and this requirement generates an inherent drive toward ever more comprehensive data collection.
The twentieth century saw several watershed moments in surveillance history. The totalitarian regimes of Nazi Germany and the Soviet Union demonstrated the lethal potential of state surveillance when combined with ideological fanaticism. The Stasi — the East German Ministry for State Security — maintained files on an estimated six million of East Germany’s sixteen million citizens, employing a vast network of informants and deploying sophisticated techniques of psychological manipulation. These historical examples remain powerful reference points in surveillance discourse, though scholars caution against reducing all surveillance to its most extreme manifestations.
The Cold War era saw the expansion of signals intelligence and electronic surveillance capabilities in liberal democracies as well. The revelations about programs like ECHELON — a signals intelligence network operated by the Five Eyes alliance (United States, United Kingdom, Canada, Australia, and New Zealand) — revealed that mass electronic surveillance was not confined to authoritarian states. The post-9/11 period dramatically accelerated this trajectory, with programs such as the NSA’s PRISM and the bulk collection of telephone metadata becoming publicly known through the 2013 disclosures by Edward Snowden.
1.3 Key Concepts and Terminology
Several foundational concepts structure the field of surveillance studies and recur throughout this course:
Social sorting refers to the process by which surveillance systems categorize individuals and populations into groups, assigning them to different risk categories, consumer profiles, or administrative classifications. Oscar Gandy’s concept of the panoptic sort captures how personal information is collected, processed, and used to coordinate and control access to goods, services, and life chances. Social sorting is not neutral; it reflects and reproduces existing social inequalities along lines of race, class, gender, and citizenship status.
Dataveillance, a term coined by Roger Clarke in 1988, refers to the systematic monitoring of people’s actions or communications through the application of information technology. Clarke distinguished dataveillance from traditional forms of surveillance by emphasizing its reliance on data trails — the digital traces left by transactions, communications, and movements — rather than direct physical observation.
Lateral surveillance or peer-to-peer surveillance describes how ordinary individuals monitor one another through social media, search engines, and other digital tools. Mark Andrejevic has explored how the interactive character of digital media transforms consumers into both subjects and agents of surveillance, blurring the boundary between watchers and watched.
Function creep refers to the gradual expansion of a surveillance system or technology beyond its original purpose. A database created for one specific function — say, tracking library borrowing — may over time be linked to other databases and used for purposes never originally envisioned. Function creep is a recurring pattern in the history of surveillance and a central concern in privacy regulation.
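The mechanics of function creep can be made concrete with a small sketch. The Python fragment below uses entirely hypothetical data and field names: a library system built only to track borrowing shares a patron identifier with an unrelated address registry, and joining the two produces a profile neither system was designed to create.

```python
# Illustrative sketch (hypothetical data): a shared identifier lets a
# database built for one purpose be linked to another, enabling uses
# never envisioned by the original system.

# Library system: built only to track borrowing.
library_loans = [
    {"patron_id": "P-1041", "title": "Coping with Chronic Illness"},
    {"patron_id": "P-1041", "title": "Bankruptcy Law Basics"},
    {"patron_id": "P-2236", "title": "Gardening for Beginners"},
]

# Unrelated registry: built only for address verification.
address_registry = {
    "P-1041": {"name": "A. Example", "postcode": "K7L 3N6"},
    "P-2236": {"name": "B. Example", "postcode": "M5V 2T6"},
}

# Function creep: the join attaches a reading profile to a named,
# located individual.
profiles = {}
for loan in library_loans:
    person = address_registry[loan["patron_id"]]
    profiles.setdefault(person["name"], []).append(loan["title"])

print(profiles)
```

Nothing in either source database is sensitive on its own; it is the linkage, enabled by a common identifier, that creates the new surveillance capacity.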
Chapter 2: Theoretical Frameworks
2.1 Foucault and Panopticism
No theoretical framework has been more influential in surveillance studies than Michel Foucault’s analysis of the Panopticon and the disciplinary society. In Discipline and Punish (1975), Foucault examined the historical transition from sovereign power — which operated through spectacular displays of violence such as public executions — to disciplinary power, which operates through continuous observation, normalization, and the internalization of the gaze.
The Panopticon, originally designed by the utilitarian philosopher Jeremy Bentham in 1791, was an architectural plan for a prison in which a single watchman positioned in a central tower could observe all inmates in surrounding cells without the inmates being able to tell whether they were being watched at any given moment. The genius of the design, Foucault argued, was that it rendered the actual exercise of surveillance unnecessary: because inmates could never be certain they were not being observed, they would regulate their own behavior as if they were always under scrutiny. Power became automatic and deindividualized.
Foucault generalized the panoptic principle beyond the prison to describe the operation of power across a wide range of modern institutions — schools, hospitals, factories, military barracks. In each of these settings, disciplinary power operates through three mechanisms:
- Hierarchical observation: The architectural and organizational arrangement of space to enable continuous surveillance of individuals by those in positions of authority.
- Normalizing judgment: The establishment of norms and standards against which individuals are measured, with deviations from the norm identified, recorded, and corrected.
- The examination: The combination of observation and normalization into a ritualized procedure — the school exam, the medical examination, the personnel review — that simultaneously observes, classifies, and documents individuals.
The Panopticon has become the most widely cited metaphor in surveillance studies, but it has also been subject to extensive critique. Critics argue that Foucault’s model presupposes a centralized, hierarchical mode of surveillance that does not adequately capture the dispersed, networked, and participatory character of contemporary surveillance. Others note that Foucault himself was analyzing an ideal type rather than making empirical claims about actual institutions, and that the value of panopticism lies in its analytical illumination of a particular logic of power rather than its literal applicability to every surveillance situation.
2.2 Deleuze and Societies of Control
Gilles Deleuze’s brief but enormously influential essay “Postscript on the Societies of Control” (1992) proposed that disciplinary societies, as analyzed by Foucault, were giving way to a new form of social organization. Where disciplinary power operated through enclosure — confining individuals within institutions (the school, the factory, the barracks, the prison) and subjecting them to regimes of training and normalization — control operates through modulation, continuous variation, and the management of flows.
In the society of control, individuals are no longer confined within institutional enclosures but are instead tracked, monitored, and managed as they move through open systems. The factory is replaced by the corporation, wages by performance bonuses and stock options, the mold by the modulation. Deleuze suggested that individuals in control societies are no longer discrete subjects but rather “dividuals” — divisible entities reducible to data points, codes, passwords, and access levels.
Deleuze’s framework has proven remarkably prescient in describing the logic of digital surveillance. Contemporary tracking technologies — GPS, RFID tags, cookies, device fingerprinting, biometric systems — do not require the confinement of individuals within institutional walls. Instead, they monitor movement, transactions, communications, and behaviors across the fluid spaces of everyday life. The data produced by these technologies is then processed algorithmically to generate profiles, risk scores, and predictions that determine access to spaces, services, and opportunities.
The contrast between Foucault and Deleuze can be summarized as follows: panoptic surveillance watches individuals in fixed spaces; control surveillance tracks dividuals through fluid networks. Both frameworks remain essential for understanding contemporary surveillance, which typically combines elements of both disciplinary enclosure (prisons, schools, workplaces) and networked control (digital profiling, algorithmic governance, platform surveillance).
2.3 The Surveillant Assemblage
Kevin Haggerty and Richard Ericson’s concept of the surveillant assemblage, introduced in their influential 2000 article, draws on the philosophy of Deleuze and Guattari to offer a more complex picture of contemporary surveillance than either panopticism or the societies of control alone can provide. Rather than imagining surveillance as a unified system directed by a single powerful agent (the state, a corporation, Big Brother), Haggerty and Ericson describe it as a heterogeneous assemblage — a convergence of multiple surveillance systems, technologies, and practices that were originally discrete but have become increasingly interconnected.
The surveillant assemblage produces what Haggerty and Ericson call data doubles — virtual representations of individuals composed of data drawn from multiple sources. Your data double is the composite picture that emerges when your credit card transactions, social media activity, location data, health records, browsing history, and purchasing patterns are aggregated and analyzed together. This data double has real consequences: it determines the advertisements you see, the credit you are offered, the prices you are quoted, and potentially whether you are flagged for additional security screening at an airport.
A key insight of the assemblage framework is that surveillance is rhizomatic rather than hierarchical — it grows horizontally, making connections between previously separate systems, rather than being imposed top-down from a single center of power. This means that resistance to surveillance cannot be directed at a single target but must contend with a dispersed and constantly evolving network of monitoring practices.
2.4 Lyon and the Surveillance Society
David Lyon, one of the founding figures of surveillance studies, has developed a comprehensive sociological framework for understanding what he calls the surveillance society — a society in which surveillance has become a routine, everyday, taken-for-granted feature of social life. Lyon emphasizes that surveillance is inherently ambiguous: it can be both caring and controlling, protective and invasive, enabling and constraining.
Lyon’s work is notable for its insistence on the social dimensions of surveillance. Where much public discourse focuses on technology — cameras, databases, algorithms — Lyon argues that the central questions are about social relationships, power, and justice. Technologies do not determine social outcomes; rather, they are deployed within specific social, political, and economic contexts that shape how they are used and who benefits or suffers from their use.
In The Culture of Surveillance (2018), Lyon extends his analysis to examine how surveillance has become not merely something done to people but something people actively participate in and even embrace. The proliferation of social media, wearable fitness trackers, smart home devices, and self-quantification practices reflects what Lyon calls a culture of surveillance in which monitoring has become normalized and even desired as a source of connection, self-knowledge, and security.
2.5 Gary T. Marx: A Framework for Analyzing Surveillance
Gary T. Marx’s Windows into the Soul (2016) offers one of the most systematic analytical frameworks for the empirical study of surveillance. Rather than beginning from grand theory, Marx develops a set of analytical dimensions and ethical criteria that can be applied to any specific surveillance situation. His framework asks: Who is doing the surveillance? Who is the subject? What is the means? What information is collected? How is it used? What are the social contexts and consequences?
Marx identifies several key principles for evaluating surveillance practices ethically. These include whether the surveillance is based on legitimate goals, whether the means are proportionate to those goals, whether individuals have given informed consent, whether there is adequate oversight and accountability, and whether the harms of surveillance fall disproportionately on marginalized populations. This framework provides a practical toolkit for the critical analysis of surveillance that complements the more abstract theoretical perspectives discussed above.
Chapter 3: Dataveillance and Big Data
3.1 From Physical Surveillance to Digital Traces
The digital revolution has fundamentally transformed the nature and scope of surveillance. Where earlier forms of surveillance typically required the physical presence of an observer or informant — a guard in a watchtower, a spy in a meeting, a plainclothes officer on a street corner — contemporary surveillance increasingly operates through the automatic collection and analysis of digital data. Every email sent, every website visited, every purchase made with a credit card, every journey taken with a transit card, every search query entered, and every social media post published generates digital traces that can be collected, stored, aggregated, and analyzed.
Roger Clarke’s concept of dataveillance captures this transformation. Clarke defined dataveillance as “the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons.” Writing in 1988, before the widespread adoption of the internet, Clarke was remarkably prescient in identifying the surveillance potential of networked information systems. He argued that dataveillance was more cost-effective, more comprehensive, and more difficult to detect than traditional physical surveillance, and that it therefore posed distinctive challenges for privacy and civil liberties.
The volume of data generated by everyday activities has grown exponentially. The concept of big data refers not merely to large datasets but to a qualitative shift in the capacity to collect, store, process, and analyze information. Big data is typically characterized by the “three Vs”: volume (the sheer quantity of data), velocity (the speed at which data is generated and processed), and variety (the diversity of data types and sources). Some analysts add a fourth V — veracity — to capture questions about data quality and reliability.
3.2 Algorithmic Surveillance and Predictive Analytics
The collection of vast quantities of data is, by itself, of limited use. What gives big data its power — and its surveillance potential — is algorithmic analysis: the application of computational techniques to identify patterns, make classifications, and generate predictions from data. Algorithms are sets of rules or procedures for solving problems or performing computations. In the context of surveillance, algorithms process personal data to generate risk scores, behavioral predictions, consumer profiles, and social classifications.
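A minimal sketch can show what such a scoring algorithm looks like and where the human judgments hide. The variables, weights, and threshold below are all hypothetical; the point is that each weight, and the cutoff separating "high risk" from "low risk," is a design decision, not a neutral mathematical fact.

```python
# Minimal sketch of an algorithmic risk score (all variables, weights,
# and thresholds are hypothetical). Each weight and the cutoff itself
# are human design choices, not neutral mathematics.

WEIGHTS = {
    "late_payments": 0.5,      # why 0.5 and not 0.2? a designer decided
    "address_changes": 0.3,
    "age_under_25": 0.2,
}
THRESHOLD = 0.6                # who counts as "high risk" is a choice

def risk_score(record):
    """Weighted sum of selected attributes, capped at 1.0."""
    score = sum(WEIGHTS[k] * record.get(k, 0) for k in WEIGHTS)
    return min(score, 1.0)

def classify(record):
    return "high risk" if risk_score(record) >= THRESHOLD else "low risk"

applicant = {"late_payments": 1, "address_changes": 1, "age_under_25": 0}
print(risk_score(applicant), classify(applicant))  # 0.8 high risk
```

Even this toy example embeds the choices critical scholars point to: which attributes are selected, how they are weighted, and where the classification boundary falls — all of which determine who is sorted into which category.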
Predictive analytics uses statistical techniques and machine learning to forecast future events or behaviors based on historical data. In law enforcement, predictive policing systems such as PredPol (now Geolitica) analyze historical crime data to identify geographic areas where crimes are statistically likely to occur, directing patrol resources accordingly. In insurance, predictive models assess risk based on demographic data, behavioral patterns, and increasingly granular personal information. In employment, algorithmic screening systems evaluate job applicants based on resume keywords, personality assessments, and even video interviews analyzed for facial expressions and vocal patterns.
The use of algorithms in surveillance raises profound questions about transparency, accountability, and justice. Algorithms are often treated as objective and neutral — mathematical tools that simply process data — but critical scholars have demonstrated that they embed human decisions, values, and biases at every stage of their design and deployment. The selection of training data, the choice of variables, the definition of target outcomes, and the interpretation of results all involve human judgment, and these judgments can encode and amplify existing social inequalities.
3.3 Metadata and the Mosaic Theory
One of the key insights of contemporary surveillance studies is that metadata — data about data — can be as revealing as content data and in many cases more so. Metadata from communications includes information such as who communicated with whom, when, for how long, and from what location, without capturing the content of the communication itself. Governments and intelligence agencies have argued that metadata collection is less intrusive than content surveillance and therefore requires less stringent legal oversight.
However, research has demonstrated that metadata can reveal extraordinarily intimate details about individuals’ lives. A person’s pattern of phone calls — the numbers dialed, the timing, the duration — can reveal their social network, their religious practices, their health conditions, their political affiliations, and their romantic relationships. This insight underpins what is known as the mosaic theory, articulated in U.S. court decisions and analyzed by legal scholars such as Orin Kerr, which holds that pieces of information that are innocuous in isolation can, when aggregated, compose a picture of an individual’s life as revealing as, or more intrusive than, direct surveillance.
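The mosaic effect is easy to demonstrate computationally. The sketch below uses invented call-metadata records — no content at all, only who was called, when, and for how long — and shows how simple aggregation surfaces a pattern (repeated pre-dawn calls to a helpline) that no single record reveals.

```python
from collections import Counter

# Hypothetical call-metadata records: no content, just who/when/how long.
call_log = [
    {"to": "helpline-555-0100", "hour": 2,  "minutes": 41},
    {"to": "helpline-555-0100", "hour": 3,  "minutes": 55},
    {"to": "clinic-555-0199",   "hour": 9,  "minutes": 12},
    {"to": "clinic-555-0199",   "hour": 9,  "minutes": 8},
    {"to": "pizza-555-0142",    "hour": 18, "minutes": 2},
]

# Each record is innocuous; the aggregate is not.
calls_per_number = Counter(rec["to"] for rec in call_log)
late_night = {rec["to"] for rec in call_log if rec["hour"] < 5}
total_minutes = Counter()
for rec in call_log:
    total_minutes[rec["to"]] += rec["minutes"]

print(calls_per_number.most_common(2))
print(late_night)                          # repeated pre-dawn helpline calls
print(total_minutes["helpline-555-0100"])  # 96 minutes, all after midnight
```

Three lines of aggregation are enough to suggest a health crisis, its timing, and its severity — without ever touching the content of a single call.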
The Snowden revelations of 2013 brought the collection of metadata to the center of public debate. The disclosure that the NSA was systematically collecting telephone metadata for virtually all calls made within the United States — under the authority of Section 215 of the USA PATRIOT Act — provoked widespread concern about the scope of government surveillance and the adequacy of existing legal frameworks to protect privacy in the digital age.
3.4 The Internet of Things and Ambient Surveillance
The proliferation of networked sensors and connected devices — collectively known as the Internet of Things (IoT) — represents a further expansion of the surveillance infrastructure. Smart thermostats, voice-activated assistants, fitness trackers, connected appliances, smart meters, and autonomous vehicles all generate continuous streams of data about the environments they inhabit and the people who use them. This creates what scholars have called ambient surveillance — a pervasive, environmental form of monitoring that is woven into the physical fabric of everyday life.
The IoT challenges traditional conceptions of surveillance as an intentional, directed activity. Many connected devices collect data as a byproduct of their primary function rather than as a deliberate surveillance measure. A smart thermostat learns occupancy patterns to optimize energy use; a fitness tracker monitors heart rate and movement to provide health insights; a voice assistant listens for wake words to respond to commands. Yet the data generated by these devices can be repurposed, shared, sold, or subpoenaed, transforming mundane household objects into potential surveillance tools.
Chapter 4: Surveillance Capitalism
4.1 Zuboff’s Framework
Shoshana Zuboff’s The Age of Surveillance Capitalism (2019) offers the most comprehensive theoretical account of the economic logic driving contemporary digital surveillance. Zuboff defines surveillance capitalism as a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales. Under surveillance capitalism, the products and services offered by technology companies are not the business itself but rather the means by which human behavioral data is harvested and processed into predictions of future behavior — what Zuboff calls prediction products — that are sold on behavioral futures markets.
Zuboff traces the origins of surveillance capitalism to Google’s discovery, in the early 2000s, that the “exhaust data” generated by users’ search queries — data that was not directly needed to improve the search engine — could be analyzed to predict which advertisements users were most likely to click on. This insight transformed Google from a company that served users into a company that served advertisers, with users’ behavioral data as the raw material. The model has since been adopted across the technology industry and beyond.
4.2 Behavioral Surplus and Prediction Products
Central to Zuboff’s analysis is the concept of behavioral surplus — the portion of user-generated data that exceeds what is needed to improve products and services for users. In the early days of Google, all user data was recycled into improving the search algorithm. But as the company faced pressure to generate revenue, it began harvesting data beyond what was necessary for product improvement and using it to build predictive models for targeted advertising.
The logic of behavioral surplus drives what Zuboff calls the extraction imperative: the relentless expansion of data collection into ever more domains of human experience. As competition intensifies among surveillance capitalists, the quest for more predictive data leads to increasingly invasive forms of data harvesting — from online behavior to physical-world movements, emotional states, social relationships, and physiological processes. The extraction imperative explains the proliferation of free services (search engines, social media platforms, email, maps) that function as data collection infrastructure, as well as the push into physical environments through smart devices, wearable technology, and urban sensing systems.
Prediction products are the outputs of surveillance capitalism’s manufacturing process. These are not predictions in the traditional sense of forecasts based on known patterns; rather, they are probabilistic assessments of individual behavior that are sold to business customers on behavioral futures markets. An advertiser buys a prediction about which users are most likely to purchase a particular product; an insurance company buys predictions about which individuals are most likely to file claims; an employer buys predictions about which candidates are most likely to succeed in a role.
4.3 Instrumentarian Power
Zuboff distinguishes surveillance capitalism’s mode of power from both totalitarianism and traditional market power. She calls it instrumentarian power — a form of power that operates not through force or coercion but through the shaping of behavior at scale through computational means. Instrumentarian power does not seek to possess or destroy human subjects; it seeks to predict and modify their behavior, often without their knowledge or meaningful consent.
The concept of instrumentarianism draws on B.F. Skinner’s behaviorist psychology, which proposed that human behavior could be understood and controlled through the systematic manipulation of environmental stimuli and reinforcement schedules. Zuboff argues that surveillance capitalists have realized Skinner’s vision at a previously unimaginable scale, using digital platforms to create behavioral modification systems that nudge, tune, herd, and condition human behavior in the service of commercial objectives.
4.4 Platform Economies and Data Extraction
The business model of surveillance capitalism is instantiated most clearly in platform economies — digital infrastructures that mediate interactions between users, advertisers, content creators, and service providers while extracting data from all participants. Platforms like Google, Facebook (Meta), Amazon, Apple, and Microsoft — sometimes collectively referred to as Big Tech or by the acronym GAFAM — occupy positions of extraordinary economic and informational power.
Nick Srnicek’s Platform Capitalism (2017) identifies several types of platforms — advertising platforms, cloud platforms, product platforms, lean platforms, and industrial platforms — each with distinct business models but all sharing a common reliance on data extraction. The platform model is inherently surveillant: it works by positioning itself as an intermediary through which data-generating interactions must pass, capturing the data generated by those interactions, and using it to improve services, target advertising, and develop new products.
The economics of platforms tend toward monopoly or oligopoly because of network effects — the value of a platform increases with the number of users, creating a self-reinforcing cycle of growth — and because of the data advantage that comes from having more users and therefore more behavioral data from which to derive predictions. These dynamics concentrate surveillance power in the hands of a small number of extremely large corporations, raising questions about democratic governance, market competition, and individual autonomy.
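One common heuristic for network effects is Metcalfe's law, which treats a network's potential value as proportional to the number of possible pairwise connections, roughly n squared. The toy calculation below uses this heuristic only as an illustration of why platform value grows faster than linearly with users; real platform value is far messier.

```python
# Network effects, sketched with Metcalfe's heuristic (value ~ n^2).
# Numbers are illustrative only.

def metcalfe_value(users):
    """Potential pairwise connections, n*(n-1)/2, as a crude value proxy."""
    return users * (users - 1) // 2

small, large = 1_000, 10_000   # the large platform has 10x the users...
ratio = metcalfe_value(large) / metcalfe_value(small)
print(round(ratio, 1))          # ...but roughly 100x the connections
```

A tenfold lead in users yields roughly a hundredfold lead in connections — and, by extension, in the behavioral data generated by those connections — which is why the advantage is self-reinforcing.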
Chapter 5: Racialized Surveillance
5.1 Surveillance and Race: Historical Foundations
Surveillance has never been applied equally across populations. From the surveillance of enslaved peoples in the Atlantic slave trade to the monitoring of Indigenous communities under colonial regimes, surveillance has historically been an instrument of racial domination. Simone Browne’s groundbreaking work Dark Matters: On the Surveillance of Blackness (2015) demonstrates that surveillance studies cannot be adequately understood without centering questions of race.
Browne introduces the concept of racializing surveillance to describe “a technology of social control where surveillance practices, policies, and performances concern the production of norms pertaining to race and exercise a ‘power to define what is in or out of place.’” She traces how surveillance technologies and practices — from the lantern laws of eighteenth-century New York City, which required enslaved Black people to carry candle lanterns after dark, to contemporary biometric identification systems — have been shaped by and deployed in the service of racial classification and control.
The historical record reveals numerous examples of surveillance as racial governance: the pass systems that controlled the movement of Black South Africans under apartheid, the FBI’s COINTELPRO program targeting civil rights and Black liberation organizations in the United States, the registration and internment of Japanese Americans during World War II, and the post-9/11 surveillance of Muslim communities through programs like the NYPD’s Demographics Unit. These are not aberrations but recurring patterns that reveal the deep entanglement of surveillance with racial power.
5.2 Algorithmic Bias and Racial Classification
The digital age has not eliminated racialized surveillance but rather has given it new technological forms. Algorithmic systems that process data about individuals reproduce and amplify racial inequalities when they are trained on biased data, designed around biased assumptions, or deployed in contexts shaped by structural racism.
Facial recognition technology provides a particularly stark example. Research by Joy Buolamwini and Timnit Gebru (2018) demonstrated that commercial facial recognition systems had significantly higher error rates for darker-skinned faces, particularly for darker-skinned women, compared to lighter-skinned men. These disparities mean that facial recognition systems used in law enforcement are more likely to produce false matches — and therefore wrongful arrests — for Black and brown individuals. Multiple cases of wrongful arrest based on faulty facial recognition matches have been documented in the United States.
Virginia Eubanks’s Automating Inequality (2018) examines how automated decision-making systems in public services — welfare eligibility determination, child protective services risk scoring, coordinated entry systems for homeless services — disproportionately harm poor and working-class communities, which in the United States are disproportionately communities of color. Eubanks argues that these systems constitute a “digital poorhouse” that automates long-standing practices of surveillance and punishment directed at the poor.
Ruha Benjamin’s Race After Technology (2019) introduces the concept of the New Jim Code to describe “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era.” Benjamin’s work demonstrates that technological innovation does not automatically produce social progress; without deliberate attention to equity and justice, new technologies can encode and accelerate existing patterns of discrimination.
5.3 Border Surveillance and Immigration Enforcement
National borders are sites where surveillance, race, and state power converge with particular intensity. Contemporary border surveillance regimes combine physical barriers, human patrols, and an array of technological systems — remote sensors, drones, biometric databases, predictive analytics — to monitor, classify, and control the movement of people across national boundaries.
The U.S.-Mexico border has become a laboratory for advanced surveillance technology, with billions of dollars invested in sensor networks, camera towers, ground-based radar, and aerial surveillance platforms. These technologies do not operate in a race-neutral manner; they are deployed in the context of immigration enforcement policies that disproportionately target Latinx communities and are justified through racialized discourses of threat and criminality.
Biometric identification systems — fingerprinting, iris scanning, facial recognition — are increasingly central to border control and immigration enforcement worldwide. The collection of biometric data from refugees, asylum seekers, and migrants raises particular ethical concerns given the vulnerability of these populations and the potential for biometric data to be shared with authorities in countries from which people have fled persecution.
Chapter 6: Workplace Surveillance
6.1 Historical Context: From Factory Discipline to Digital Monitoring
The workplace has been a primary site of surveillance since the beginning of industrialization. Frederick Winslow Taylor’s scientific management movement in the early twentieth century exemplified the application of systematic observation and measurement to the labor process. Taylor’s time-and-motion studies broke work down into discrete tasks, measured the time required for each, and prescribed the most efficient methods and pace. Workers were to be observed, timed, evaluated, and optimized — not as autonomous agents but as components of a productive machine.
Foucault’s analysis of disciplinary power is directly applicable to the workplace. The factory floor, with its spatial organization, supervisory hierarchies, time-keeping systems, and production quotas, exemplifies the panoptic principle. The architectural layout enables visual oversight; time clocks and production records provide documentary evidence of worker performance; and the combination of observation and evaluation disciplines workers into compliance with managerial expectations.
Digital technologies have dramatically expanded the scope and granularity of workplace surveillance. Contemporary employers can monitor employees’ email communications, internet browsing, keystroke patterns, screen activity, application usage, physical location, telephone calls, and even biometric data. Software products marketed under labels such as “employee monitoring,” “workforce analytics,” or “productivity management” provide managers with detailed dashboards showing how workers spend their time, down to the minute.
6.2 The Gig Economy and Algorithmic Management
The rise of the gig economy — platform-mediated work arrangements in which workers are classified as independent contractors rather than employees — has introduced new forms of surveillance that are simultaneously more pervasive and less visible than traditional workplace monitoring. Platform companies like Uber, Lyft, DoorDash, and Amazon Flex use algorithmic systems to assign tasks, set pay rates, evaluate performance, and discipline workers, creating what scholars have termed algorithmic management.
Gig workers are subject to intensive surveillance through the apps they must use to receive and complete work. GPS tracking monitors their location and routes in real time; customer ratings provide continuous performance evaluation; algorithms analyze their acceptance rates, completion times, and behavioral patterns to assign a “score” that determines access to future work opportunities. Workers who fall below algorithmic thresholds may be “deactivated” — effectively fired — without human review, explanation, or opportunity for appeal.
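The logic described above can be made concrete with a small sketch. This is not any platform's actual algorithm, which is opaque to workers by design; the metric weights, field names, and the 0.8 cutoff are all invented for illustration of how a composite score plus an automatic threshold can produce a "deactivation" with no human in the loop.

```python
# Illustrative sketch of threshold-based algorithmic management.
# All weights, field names, and cutoffs are hypothetical.

def driver_score(acceptance_rate, avg_rating, completion_rate):
    """Combine behavioral metrics into a single 0-1 composite score."""
    # Hypothetical weights; real platforms do not disclose theirs.
    return 0.3 * acceptance_rate + 0.4 * (avg_rating / 5.0) + 0.3 * completion_rate

DEACTIVATION_THRESHOLD = 0.8  # invented cutoff

def review(worker):
    """Return the worker's status under the automated rule."""
    score = driver_score(worker["acceptance_rate"],
                         worker["avg_rating"],
                         worker["completion_rate"])
    # Note what is absent here: no human review, no explanation, no appeal.
    return "active" if score >= DEACTIVATION_THRESHOLD else "deactivated"

print(review({"acceptance_rate": 0.95, "avg_rating": 4.9, "completion_rate": 0.98}))  # active
print(review({"acceptance_rate": 0.40, "avg_rating": 4.6, "completion_rate": 0.90}))  # deactivated
```

The second worker has a 4.6-star rating and a 90 percent completion rate, yet a low acceptance rate alone pushes the composite below the cutoff, illustrating how a single opaque weighting choice can end someone's livelihood.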
Alex Rosenblat’s Uberland (2018) documents how ride-hailing platforms use information asymmetries to exercise control over drivers while maintaining the legal fiction that they are independent contractors. Drivers receive less information about rides than the platform possesses, creating a panoptic dynamic in which the platform sees everything while workers see only what the platform chooses to reveal.
6.3 Remote Work and the Expansion of Surveillance
The COVID-19 pandemic accelerated the adoption of remote work and, with it, the expansion of workplace surveillance into workers’ homes. With employees working outside the physical office, employers turned to technological solutions to monitor productivity and ensure accountability. Employee monitoring software — sometimes called “bossware” — proliferated, with products like Hubstaff, Time Doctor, ActivTrak, and Teramind offering features such as random screenshot capture, webcam monitoring, keystroke logging, application tracking, and “idle time” detection.
This extension of workplace surveillance into the home represents a significant blurring of the boundary between work and private life. When an employer’s monitoring software runs on a personal device in a worker’s living room, capturing screenshots that may include family photographs, personal messages, or health information, the traditional spatial separation between the sphere of employment and the sphere of domestic privacy dissolves. The home, historically imagined as a space of refuge from institutional surveillance, becomes another node in the surveillant assemblage.
Chapter 7: Domestic Surveillance and the Smart Home
7.1 The Connected Home as Surveillance Infrastructure
The proliferation of Internet-connected devices in domestic spaces — smart speakers, security cameras, video doorbells, smart televisions, connected appliances, baby monitors, and home automation systems — has transformed the home from a presumptive sanctuary of privacy into a densely surveilled environment. These devices, marketed as providing convenience, security, and efficiency, simultaneously function as data collection infrastructure that generates continuous streams of information about the intimate details of domestic life.
Amazon’s Echo (Alexa), Google Home (Google Assistant), and Apple’s HomePod (Siri) exemplify the dual character of smart home devices. Voice-activated assistants must listen continuously for their “wake word,” which means they are always processing ambient audio, even though they are designed to record and transmit only after the wake word is detected. Investigations have revealed that human reviewers at Amazon, Google, and Apple have listened to recordings captured by these devices, sometimes including private conversations, arguments, and intimate encounters that users did not intend to share.

Ring, Amazon’s smart doorbell and home security company, illustrates the convergence of domestic surveillance and law enforcement. Ring has developed partnerships with hundreds of police departments across the United States, enabling law enforcement agencies to request video footage from Ring doorbells in the vicinity of crimes. The company’s Neighbors app — a social media platform for sharing surveillance footage and crime alerts — has been criticized for reinforcing racial profiling and promoting a culture of suspicion, particularly toward people of color in predominantly white neighborhoods.
7.2 Children and Surveillance
The surveillance of children raises distinctive ethical and developmental questions. Parents have always monitored their children, but digital technologies have expanded the scope and intensity of parental surveillance in unprecedented ways. GPS tracking devices, phone monitoring apps, social media monitoring services, and school-based surveillance systems subject children to levels of monitoring that would have been unthinkable a generation ago.
Sharenting — the practice of parents sharing information about their children on social media — creates digital footprints for children before they are old enough to consent to or understand the implications of online exposure. A child born today may have hundreds of photographs, status updates, and personal details posted about them online before they are old enough to speak, creating a comprehensive digital record that will persist indefinitely.
School-based surveillance has also intensified, particularly in the United States. Metal detectors, security cameras, school resource officers (police stationed in schools), random drug testing, and digital monitoring of students’ online activity have become common, especially in schools serving predominantly low-income and minority communities. These practices reproduce racialized patterns of surveillance, subjecting Black and brown students to heightened scrutiny and creating what some scholars describe as a school-to-prison pipeline in which the surveillance and disciplinary apparatus of schools mirrors that of the criminal justice system.
7.3 Intimate Partner Surveillance and Technology-Facilitated Abuse
Digital technologies have also created new vectors for intimate partner surveillance and abuse. Stalkerware — software that can be installed on a partner’s phone to track their location, read their messages, monitor their calls, and even activate their camera and microphone — represents one of the most direct forms of technology-facilitated intimate partner violence. Studies estimate that stalkerware is used in a significant proportion of domestic violence cases, and organizations such as the National Network to End Domestic Violence have developed resources to help survivors detect and remove these tools.
Beyond dedicated stalkerware, the everyday surveillance capabilities of smartphones, social media, and connected devices can be weaponized in the context of abusive relationships. An abuser may demand access to a partner’s passwords, monitor their social media activity, use shared cloud accounts to track their location, or exploit smart home devices to exercise control from a distance — dimming lights, locking doors, adjusting thermostats, or monitoring through security cameras.
Chapter 8: Social Media and Platform Surveillance
8.1 The Architecture of Social Media Surveillance
Social media platforms are among the most sophisticated surveillance systems ever created, yet they are rarely experienced by their users as surveillance technologies. Facebook (Meta), Twitter (X), Instagram, TikTok, YouTube, and other platforms collect vast quantities of data about their users — not only the content users deliberately post but also metadata about their interactions, the time they spend viewing different types of content, their scrolling patterns, their location, their device information, and their networks of connections.
This data is the raw material from which platforms derive their economic value. As the security technologist Bruce Schneier has summarized: “Surveillance is the business model of the internet.” The services that platforms provide to users — social connection, self-expression, information sharing, entertainment — are the inducements that attract users and generate the behavioral data that platforms monetize through targeted advertising and data licensing.
Mark Andrejevic’s concept of the digital enclosure captures the dynamics of social media surveillance. Just as the enclosure of common lands in early modern England forced peasants to work on landlords’ terms, the digital enclosure of communication, social interaction, and cultural production within commercial platforms forces users to submit to surveillance as a condition of participation in contemporary social life. The choice to abstain from social media platforms becomes increasingly costly as more social, professional, and civic activities migrate online.
8.2 Content Moderation and Platform Governance
Social media platforms exercise surveillance not only for commercial purposes but also through content moderation — the practices by which platforms review, filter, and remove user-generated content that violates their terms of service or community guidelines. Content moderation involves both automated systems (algorithms that detect prohibited content) and human reviewers (workers who evaluate flagged content and make decisions about removal).
Content moderation is a form of surveillance that determines what can and cannot be said in the digital public sphere. Platform decisions about which content to allow, which to remove, which to downrank, and which to amplify have enormous consequences for public discourse, political organizing, and the distribution of attention and influence. These decisions are made by private companies with limited transparency, accountability, or democratic oversight.
The labor of content moderation itself raises surveillance concerns. Human content moderators — often low-paid workers in the Philippines, India, Kenya, and other countries in the Global South — are required to review streams of disturbing content, including graphic violence, child exploitation, and hate speech, under conditions of intensive surveillance. Their screens are monitored, their productivity is tracked, and they face termination if they fall below performance targets. Studies have documented high rates of post-traumatic stress among content moderators.
8.3 Lateral Surveillance and the Participatory Panopticon
Social media has democratized surveillance, transforming ordinary individuals into both subjects and practitioners of surveillance. The concept of lateral surveillance describes how people monitor one another through social media, search engines, and other digital tools — checking a potential date’s social media profiles, googling a job applicant, monitoring an ex-partner’s online activity, or tracking a neighbor’s movements through a neighborhood surveillance app.
The futurist Jamais Cascio’s concept of the participatory panopticon (2005) describes a society in which ubiquitous recording devices — smartphones with cameras, wearable cameras, dashcams, bodycams — mean that virtually any public interaction may be recorded and shared. The participatory panopticon differs from Bentham’s original in that the gaze is not unidirectional (from guard to prisoner) but omnidirectional — everyone can watch everyone.
The implications are deeply ambiguous. On one hand, the participatory panopticon enables citizens to document police brutality, corporate misconduct, and other abuses of power — what Mann calls sousveillance (watching from below). The video recording of George Floyd’s murder by Minneapolis police officer Derek Chauvin in May 2020, captured on a bystander’s smartphone, galvanized a global movement for racial justice and police accountability. On the other hand, the proliferation of recording and sharing can facilitate harassment, doxxing, mob justice, and the erosion of expectations of privacy in public spaces.
Chapter 9: Resistance, Privacy, and Counter-Surveillance
9.1 Privacy: Concepts and Debates
Privacy is the value most commonly invoked in opposition to surveillance, yet it is a deeply contested and culturally variable concept. There is no single, universally accepted definition of privacy; rather, scholars have identified multiple dimensions and conceptions.
Samuel Warren and Louis Brandeis, in their seminal 1890 Harvard Law Review article “The Right to Privacy,” defined privacy as “the right to be let alone.” Alan Westin (1967) defined it as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.” Daniel Solove has argued against seeking a single definition of privacy, proposing instead a taxonomy of privacy-related harms organized around four categories: information collection, information processing, information dissemination, and invasion.
Helen Nissenbaum’s influential theory of contextual integrity holds that privacy is violated not by any particular kind of information collection per se but by information flows that violate the norms appropriate to the social context in which they occur. Information that is freely shared in one context — a medical condition disclosed to a doctor, for instance — may constitute a privacy violation if transferred to another context, such as an employer or an advertising platform. This framework shifts the focus from abstract notions of secrecy or control to the concrete social norms that govern information flows in particular contexts.
9.2 Legal and Regulatory Frameworks
Legal responses to surveillance vary significantly across jurisdictions but can be broadly categorized into constitutional protections, statutory regulations, and international human rights frameworks.
In the United States, the Fourth Amendment to the Constitution protects against “unreasonable searches and seizures,” but its application to digital surveillance has been contested and inconsistent. The third-party doctrine, established by the Supreme Court in Smith v. Maryland (1979), held that individuals have no reasonable expectation of privacy in information voluntarily shared with third parties — such as telephone numbers dialed, which are shared with the phone company. This doctrine has been used to justify extensive government access to digital records without a warrant, though the Supreme Court’s 2018 decision in Carpenter v. United States signaled a partial retreat from the third-party doctrine in the context of cell-site location information.
The European Union’s General Data Protection Regulation (GDPR), which took effect in 2018, represents the most comprehensive regulatory framework for data protection in the world. The GDPR establishes principles of lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability. It grants data subjects rights including the right of access, the right to rectification, the right to erasure (“the right to be forgotten”), and the right to data portability. The GDPR applies to any organization that processes the personal data of EU residents, regardless of where the organization is located, giving it significant extraterritorial reach.
Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and the Privacy Act provide a framework for data protection in the Canadian context. PIPEDA applies to private-sector organizations and is based on ten fair information principles: accountability; identifying purposes; consent; limiting collection; limiting use, disclosure, and retention; accuracy; safeguards; openness; individual access; and challenging compliance.
9.3 Sousveillance and Counter-Surveillance
Sousveillance — a term coined by Steve Mann from the French sous (below) and veiller (to watch) — refers to the practice of monitoring those in positions of power from below. Where surveillance is the observation of individuals by institutions, sousveillance is the observation of institutions by individuals. Mann conceived of sousveillance as both a technological practice (using wearable cameras and recording devices to document the behavior of authority figures) and a political practice (asserting the right of citizens to watch the watchers).
Sousveillance is practiced in many forms: citizen journalism that documents police violence, whistleblowing that exposes government or corporate surveillance programs (as in the cases of Edward Snowden, Chelsea Manning, and Reality Winner), freedom of information requests that compel transparency about surveillance practices, and the use of body cameras or smartphone cameras to create evidentiary records of encounters with authority.
9.4 Technical Counter-Surveillance
Beyond social and political practices of resistance, individuals and communities employ technical tools to evade, obfuscate, or subvert surveillance systems. Encryption — the mathematical transformation of data into a form that can only be read by someone possessing the decryption key — is the most fundamental technical defense against surveillance. End-to-end encryption, as implemented in messaging applications like Signal, ensures that messages can be read only by the sender and the intended recipient, not by the platform operator, internet service providers, or government agencies.
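The mathematical transformation described above can be shown with a deliberately simplified toy. XOR with a shared random key is NOT a usable cipher (real systems like Signal use vetted protocols built on modern cryptography), but it captures the core property: without the key, the ciphertext is unreadable; with it, decryption is exact.

```python
# Toy illustration of symmetric encryption (NOT secure, for teaching only):
# XOR the message with a shared random key. Only a party holding the key
# can invert the transformation; an eavesdropper sees only the ciphertext.
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
key = os.urandom(len(message))       # shared secret, as long as the message

ciphertext = xor_bytes(message, key)     # what an interceptor would see
recovered = xor_bytes(ciphertext, key)   # decryption with the same key

print(recovered == message)  # True: the key holder recovers the plaintext
```

Because XOR is its own inverse, applying the key twice returns the original bytes; the asymmetry of knowledge between key holder and interceptor is the entire defense.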
Virtual private networks (VPNs) encrypt internet traffic and mask users’ IP addresses, making it more difficult for internet service providers, governments, and websites to track users’ online activities. The Tor network provides even stronger anonymity by routing internet traffic through multiple encrypted layers and volunteer-operated nodes, making it extremely difficult to trace communications back to their origin.
Obfuscation — the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection — represents another strategy of technical resistance. Browser extensions like TrackMeNot generate randomized search queries to prevent search engines from building accurate user profiles; AdNauseam clicks on every advertisement to render advertising profiles meaningless; and CV Dazzle uses makeup and hairstyling techniques to defeat facial recognition algorithms.
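The strategy behind tools like TrackMeNot can be sketched in a few lines: dilute a user's genuine queries in a stream of randomized decoys so that any profile built from the stream is unreliable. The decoy vocabulary and mixing ratio below are invented for illustration; the real extension draws decoys from evolving seed lists and RSS feeds.

```python
# Sketch of query obfuscation: hide real queries among random decoys.
# Decoy terms and the 4:1 mixing ratio are invented for illustration.
import random

DECOY_TERMS = ["weather", "recipes", "football", "gardening", "history",
               "music", "travel", "astronomy", "poetry", "chess"]

def obfuscated_stream(real_queries, decoys_per_query=4, rng=None):
    """Return the real queries shuffled into a stream of random decoys."""
    rng = rng or random.Random()
    stream = list(real_queries)
    for _ in range(decoys_per_query * len(real_queries)):
        stream.append(rng.choice(DECOY_TERMS))
    rng.shuffle(stream)
    return stream

stream = obfuscated_stream(["divorce lawyer", "oncologist near me"],
                           rng=random.Random(0))
print(len(stream))  # 10: two real queries hidden among eight decoys
```

The sensitive queries are still sent, so this is not concealment in the cryptographic sense; it is noise injection, trading bandwidth for plausible deniability about which queries reflect the user's actual interests.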
9.5 The Limits of Individual Resistance
While technical counter-surveillance tools are valuable, scholars have cautioned against overstating the efficacy of individual resistance to surveillance. Surveillance is a structural phenomenon embedded in institutional arrangements, economic systems, and power relations; it cannot be adequately addressed through individual technical fixes alone.
David Lyon has argued that effective responses to surveillance require collective action and political mobilization, not merely individual opt-out strategies. The burden of privacy protection should not fall on individuals to adopt complex technical measures; rather, it should be addressed through democratic governance, legal regulation, and institutional reform. The emphasis on individual responsibility for privacy — exemplified by the injunction to “read the terms of service” — obscures the structural power asymmetries that make meaningful consent to surveillance largely impossible.
Chapter 10: Surveillance Futures — Emerging Technologies and Challenges
10.1 Biometric Surveillance
Biometric surveillance — the use of biological characteristics to identify, track, and classify individuals — represents one of the fastest-growing frontiers of contemporary surveillance. Biometric technologies include fingerprint recognition, facial recognition, iris scanning, voice recognition, gait analysis, and even behavioral biometrics such as typing patterns and mouse movements.
Facial recognition technology has attracted particular controversy due to its capacity for mass, real-time surveillance of public spaces. China’s deployment of facial recognition systems for population management — including the surveillance of Uyghur Muslims in the Xinjiang region — has been widely condemned as an instrument of authoritarian control. But facial recognition is also widely deployed in liberal democracies: in airports for immigration control, in retail stores for loss prevention, and by law enforcement agencies for criminal identification.
The biometric body is increasingly the key to accessing social institutions and resources. Biometric data is used to unlock phones, authorize payments, verify identity for government services, control access to workplaces, and cross international borders. This raises profound questions about bodily autonomy, the commodification of the body, and the consequences of biometric data breaches — unlike a password, a fingerprint or iris pattern cannot be changed if compromised.
10.2 Artificial Intelligence and Automated Decision-Making
The integration of artificial intelligence (AI) into surveillance systems represents a qualitative intensification of surveillance capacity. AI-powered systems can process, analyze, and act on surveillance data at speeds and scales that far exceed human cognitive abilities. Machine learning algorithms can identify faces in crowds, detect anomalous behavior in video feeds, predict criminal activity, assess creditworthiness, evaluate job applicants, and moderate online content — all with minimal human intervention.
The automation of surveillance raises concerns about accountability gaps. When decisions that affect individuals’ lives — whether to grant bail, whether to approve a loan, whether to flag a traveler for additional screening — are made by algorithmic systems, it can be difficult to determine who is responsible when those decisions are wrong or unjust. The opacity of machine learning systems — often described as “black boxes” — makes it challenging to explain why a particular decision was made, to identify errors, or to challenge outcomes.
Calls for algorithmic accountability have led to proposals for impact assessments, audit requirements, and transparency mandates for automated decision-making systems. The EU’s AI Act, adopted in 2024, establishes a risk-based regulatory framework that imposes stricter requirements on AI systems classified as “high risk,” including those used in law enforcement, border control, and employment. Whether such regulatory frameworks will prove adequate to the challenges posed by AI-powered surveillance remains an open question.
10.3 The Future of Surveillance Studies
Surveillance studies as a field faces the ongoing challenge of keeping pace with rapid technological change while maintaining theoretical depth and critical perspective. Several emerging areas demand sustained scholarly attention.
Surveillance and climate change intersect as environmental monitoring systems generate vast quantities of data about landscapes, ecosystems, and human settlements, and as climate-related migration creates new populations subject to border surveillance and biometric registration.
Surveillance and public health, thrust into prominence by the COVID-19 pandemic, raises questions about the use of contact tracing apps, health passports, and quarantine enforcement technologies — and about whether surveillance measures introduced as temporary emergency responses become permanent features of public health infrastructure.
Surveillance and geopolitics — the competition between states, particularly the United States and China, over the development and deployment of surveillance technologies — shapes global norms about acceptable surveillance practices and creates new dynamics of technological dependence and resistance.
The field’s founding commitment remains essential: to understand surveillance not merely as a technical phenomenon but as a social, political, and ethical one, embedded in relations of power, shaped by cultural values, and differentially experienced across lines of race, class, gender, and geography. As Lyon has written, the fundamental question of surveillance studies is not “How does surveillance technology work?” but rather “What kind of society is being produced by surveillance, and what kind of society do we want?”