SOC 232: Technology and Social Change

Sources and References

This document draws primarily on Anabel Quan-Haase, Technology and Society: Social Networks, Power, and Inequality, 3rd ed. (Oxford University Press, 2020). Supplementary material is informed by Manuel Castells, The Rise of the Network Society (Blackwell, 2010); Daniel Bell, The Coming of Post-Industrial Society (Basic Books, 1973); Sherry Turkle, Alone Together (Basic Books, 2011); danah boyd, It’s Complicated: The Social Lives of Networked Teens (Yale University Press, 2014); Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (Yale University Press, 2017); Virginia Eubanks, Automating Inequality (St. Martin’s Press, 2018); Safiya Umoja Noble, Algorithms of Oppression (NYU Press, 2018); Nick Srnicek, Platform Capitalism (Polity, 2017); Christian Fuchs, Social Media: A Critical Introduction (Sage, 2021); David Lyon, The Culture of Surveillance (Polity, 2018); and Langdon Winner, “Do Artifacts Have Politics?” (Daedalus, 1980).


Chapter 1: Introduction to Technology and Society

1.1 What Is Technology?

The word “technology” derives from the Greek techne (craft, art) and logos (word, study), but its meaning in sociological inquiry extends far beyond simple tools or machines. Technology (技术) encompasses not only physical artifacts — hardware, devices, infrastructure — but also the knowledge systems, organizational practices, and social arrangements that give those artifacts meaning and function. A smartphone is a piece of technology, but so is the set of protocols governing how cellular networks route data, the business model that subsidizes handset costs through service contracts, and the social norms that dictate when it is acceptable to check one’s phone during a conversation.

Technology (技术): The application of scientific knowledge, tools, techniques, and organizational systems for practical purposes. In sociology, technology is understood as inseparable from the social contexts in which it is designed, deployed, and used.

Quan-Haase distinguishes between technology as artifact, technology as activity, and technology as knowledge. As artifact, technology refers to tangible objects — the printing press, the telegraph, the internet router. As activity, technology describes the processes by which humans design, build, and maintain these objects. As knowledge, technology refers to the expertise, skills, and understanding required to create and use artifacts effectively. Sociological analysis insists that all three dimensions are irreducibly social: artifacts embody design decisions shaped by power relations, activities are organized through institutions, and knowledge is distributed unequally across populations.

1.2 Technological Determinism

One of the most persistent frameworks for understanding the relationship between technology and society is technological determinism (技术决定论), the thesis that technology is the primary driver of social change. In its strong form, technological determinism holds that technical innovation follows an autonomous logic and reshapes society in predictable, unavoidable ways. Marshall McLuhan’s famous dictum that “the medium is the message” exemplifies this perspective: communication technologies do not merely transmit content but fundamentally restructure human perception, cognition, and social organization.

Technological determinism (技术决定论): The view that technology is an autonomous force that determines social structures, cultural values, and historical trajectories. Strong determinism treats technology as the independent variable; soft determinism treats it as a powerful but not sole causal factor.

Deterministic arguments appear frequently in popular discourse. Claims that “the internet democratizes information,” “automation will eliminate jobs,” or “social media is destroying democracy” all rest on deterministic assumptions — they attribute causal power to technology itself rather than to the social actors, institutions, and power structures that shape how technologies are developed and deployed.

Sociologists have raised several objections. First, determinism conflates correlation with causation: the fact that the printing press preceded the Reformation does not mean it caused the Reformation in any simple sense. Second, determinism ignores the role of human agency: people adopt, resist, modify, and repurpose technologies in ways that inventors never anticipated. Third, determinism tends toward teleology, narrating technological change as a story of inevitable progress or inevitable decline, obscuring the political choices that shape technological trajectories.

1.3 Social Construction of Technology

The leading alternative to determinism in the sociology of technology is the social construction of technology (技术的社会建构, often abbreviated SCOT), developed by Wiebe Bijker, Trevor Pinch, and their collaborators. SCOT holds that technologies do not have inherent properties that determine their social effects; rather, the meaning, function, and significance of any technology are constructed through social processes of negotiation among relevant social groups (相关社会群体).

Social construction of technology (SCOT) (技术的社会建构): A framework arguing that the design, adoption, and meaning of technologies are shaped by social groups through processes of negotiation, conflict, and closure. Technology does not determine society; society shapes technology.

A classic SCOT case study examines the development of the bicycle in the late nineteenth century. Early bicycle designs varied enormously — high-wheelers, tricycles, safety bicycles — and different social groups attached different meanings to each design. Young men valued the high-wheeler for its speed and daring; women, older riders, and safety advocates preferred designs that prioritized stability. The eventual dominance of the safety bicycle was not the inevitable triumph of superior engineering but the outcome of social negotiation among groups with competing interests.

SCOT introduces two key concepts. Interpretive flexibility (解释灵活性) refers to the fact that different social groups can attribute different meanings and uses to the same artifact. Closure (闭合) describes the process by which interpretive flexibility diminishes and a dominant meaning stabilizes — either through rhetorical closure (when the relevant social groups come to agree on a single interpretation) or through redefinition of the problem (when the terms of debate shift so that disagreement dissolves).

1.4 Actor-Network Theory

A third influential framework is actor-network theory (行动者网络理论, ANT), associated with Bruno Latour, Michel Callon, and John Law. ANT rejects any a priori distinction between human and nonhuman actors, insisting that technologies, texts, natural phenomena, and organizations all participate in networks of action. A speed bump, for Latour, is as much an actor as the traffic engineer who designed it or the driver who slows down upon encountering it.

ANT's principle of generalized symmetry (广义对称性) requires the analyst to describe human and nonhuman actors using the same analytical vocabulary, refusing to grant explanatory privilege to either social or technical factors. This has proven both productive and controversial: productive because it draws attention to the material infrastructure that sustains social life, controversial because critics argue it obscures the distinctive moral and political capacities of human beings.

1.5 Do Artifacts Have Politics?

Langdon Winner’s seminal 1980 essay “Do Artifacts Have Politics?” poses a question that cuts across all three frameworks. Winner argues that technologies can embody political arrangements in two senses. First, some technologies are designed to produce particular social outcomes: Robert Moses’s low overpasses on Long Island, built too low for public buses, effectively excluded low-income and predominantly Black residents who relied on public transit from accessing public beaches. Second, some technologies are inherently political in the sense that they require or are strongly compatible with particular forms of social organization: nuclear power, Winner argues, demands centralized, hierarchical, and security-intensive institutions in ways that solar power does not.

Winner's overpass example illustrates how politics by design (设计中的政治) operates: the physical characteristics of a built artifact encode social values and power relations. Contemporary parallels include algorithmic systems that embed racial bias in criminal sentencing recommendations, or platform architectures that privilege engagement-maximizing content regardless of its veracity.

Chapter 2: Theoretical Perspectives on Technology

2.1 Classical Sociological Foundations

The classical sociological tradition did not address digital technology, but it developed analytical frameworks that remain indispensable for understanding technology’s social dimensions. Karl Marx’s analysis of the labor process under capitalism emphasized that machinery is never neutral: it is a means by which capital extracts surplus value from labor, deskills workers, and consolidates control over the production process. The factory system’s division of labor, in Marx’s account, is not simply an efficient way to organize production but a social relation of domination.

Alienation (异化), in Marx’s framework, describes the estrangement of workers from the products of their labor, from the labor process itself, from their fellow workers, and from their own human potential. The introduction of machinery intensifies alienation by reducing workers to appendages of the machine, subordinating human rhythm and creativity to mechanical imperatives.

Alienation (异化): Marx's concept describing the estrangement experienced by workers under capitalism, wherein they lose control over the products, process, and social relations of their labor. Technology under capitalism deepens alienation by subordinating human activity to mechanical and algorithmic logics.

Max Weber’s concept of rationalization (理性化) provides another foundational lens. Weber argued that modernity is characterized by the progressive extension of instrumental rationality — the calculation of means-ends efficiency — into all domains of social life. Bureaucracy, standardized procedures, and quantitative measurement are expressions of this tendency. Technology, in the Weberian framework, is both a product and an instrument of rationalization: it embodies the drive toward efficiency, calculability, predictability, and control that George Ritzer later termed the McDonaldization (麦当劳化) of society.

2.2 Marxist and Neo-Marxist Approaches

The Frankfurt School extended Marx’s analysis into the domain of culture and communication. Theodor Adorno and Max Horkheimer’s concept of the culture industry (文化工业) argued that mass media technologies — radio, film, recorded music — do not simply transmit culture but industrialize its production, standardizing cultural products and pacifying audiences. Culture becomes a commodity, and consumption of mass-produced entertainment substitutes for genuine critical thought and political engagement.

Herbert Marcuse’s One-Dimensional Man (1964) extended this critique, arguing that advanced industrial society uses technology to create false needs (虚假需求) — desires for consumer goods that serve the interests of capital rather than genuine human fulfillment. Technology in Marcuse’s analysis is not inherently oppressive, but under conditions of capitalist domination it functions as an instrument of social control, integrating potential opposition into the system by satisfying material wants while foreclosing radical alternatives.

Contemporary digital platforms exhibit dynamics that resonate with Frankfurt School analysis. Social media platforms industrialize the production and circulation of cultural content, reducing user-generated expression to standardized formats (posts, stories, reels) optimized for algorithmic distribution and advertising revenue. The concept of the attention economy (注意力经济) — in which human attention is the scarce resource that platforms compete to capture and monetize — extends the culture industry thesis into the digital age.

Christian Fuchs has developed a sustained Marxist analysis of digital labor, arguing that social media users perform unpaid digital labor (数字劳动) that generates surplus value for platform companies. Every post, like, share, and click produces data that platforms sell to advertisers. Users are simultaneously consumers and producers — prosumers (产消者) — whose activity is the raw material of platform capitalism.

2.3 Feminist Perspectives on Technology

Feminist scholars have challenged the assumption that technology is gender-neutral, revealing how technologies are designed by and for dominant groups and how they reproduce gender inequalities. Judy Wajcman’s concept of technofeminism (技术女性主义) argues that gender relations are materialized in technology: the design of artifacts, the organization of technical knowledge, and the culture of engineering all reflect and reinforce patriarchal power relations.

Technofeminism (技术女性主义): A theoretical approach arguing that gender and technology are mutually constitutive — that technologies are shaped by gender relations and simultaneously reshape those relations. Technofeminism rejects both technological determinism and social determinism, insisting on the co-production of gender and technology.

Donna Haraway’s “Cyborg Manifesto” (1985) offered a radically different feminist engagement with technology, proposing the figure of the cyborg (赛博格) — a hybrid of organism and machine — as a political myth for feminist politics. Haraway argued that the boundaries between human and machine, nature and culture, male and female are constructed rather than given, and that technology offers possibilities for transgressing these boundaries in emancipatory ways.

Contemporary feminist technology studies examine how algorithmic systems reproduce gender bias (as when image-recognition systems perform more poorly on women’s faces, or when hiring algorithms trained on historical data perpetuate gender discrimination), how platform design shapes gendered experiences of harassment and visibility, and how the overwhelmingly male composition of the technology workforce shapes the artifacts it produces.

2.4 Postmodernist Approaches

Postmodernist theorists challenge the grand narratives of progress and rationalization that underpin much thinking about technology. Jean Baudrillard’s concept of simulation (仿真/模拟) argues that in contemporary media-saturated societies, the distinction between reality and representation has collapsed. We inhabit a world of simulacra (拟像) — copies without originals, signs that refer only to other signs. Television, advertising, and digital media do not represent reality but produce a hyperreality (超真实) that is experienced as more real than the real.

Hyperreality (超真实): Baudrillard's concept describing a condition in which simulations and representations become indistinguishable from, and ultimately replace, the reality they purport to depict. Digital environments — virtual reality, deepfakes, algorithmically curated news feeds — intensify hyperreality.

Michel Foucault’s work on discourse (话语) and power/knowledge (权力/知识) has been enormously influential in technology studies, particularly in analyses of surveillance, data, and algorithmic governance. Foucault argued that power operates not primarily through coercion but through the production of knowledge, the classification of populations, and the normalization of behavior. Technologies of surveillance, record-keeping, and statistical analysis are central instruments of modern power.

2.5 Science and Technology Studies (STS)

The interdisciplinary field of Science and Technology Studies (科学与技术研究, STS) provides much of the conceptual infrastructure for contemporary sociology of technology. Programs at MIT, Stanford, Cornell, and Edinburgh have shaped the field’s development. STS emphasizes the co-production of scientific knowledge and social order, the material agency of artifacts and infrastructures, and the importance of attending to practices — the everyday, embodied activities through which technologies are made and used.

Key STS concepts include sociotechnical systems (社会技术系统), which insist that technology and society are not separate domains but mutually constitutive elements of integrated systems; boundary objects (边界对象), artifacts that are flexible enough to be used by different communities but robust enough to maintain coherence across them; and infrastructure (基础设施), the often-invisible technical systems (power grids, internet protocols, standards) that underpin social life.


Chapter 3: The Information Society and Network Society

3.1 Post-Industrial Society

The concept of the post-industrial society (后工业社会) emerged in the 1960s and 1970s as sociologists sought to characterize the transformation of advanced economies from manufacturing to services and knowledge production. Daniel Bell’s The Coming of Post-Industrial Society (1973) argued that the central resource of the emerging social order was no longer capital or labor but theoretical knowledge (理论知识) — the systematic, codified knowledge produced by universities and research institutions and applied to the organization of production and governance.

Post-industrial society (后工业社会): A social formation in which the economy shifts from goods-producing to service-providing, theoretical knowledge becomes the central resource, and a new technical-professional class assumes strategic importance. Coined by Daniel Bell (1973).

Bell identified five dimensions of post-industrial transformation: (1) the shift from a goods-producing to a service economy; (2) the preeminence of a professional and technical class; (3) the centrality of theoretical knowledge as the source of innovation and policy formation; (4) the orientation toward the future through technology assessment and forecasting; and (5) the rise of a new “intellectual technology” — decision-making tools such as simulation, systems analysis, and linear programming.

Critics have argued that Bell’s framework overstates the discontinuity between industrial and post-industrial economies, ignores the persistence of manufacturing (much of it relocated to the Global South), and underestimates the role of power and inequality in shaping the knowledge economy.

3.2 The Information Society

The concept of the information society (信息社会) extends Bell’s analysis, emphasizing the role of information and communication technologies (ICTs) in restructuring economic production, social organization, and cultural life. Several features distinguish the information society thesis: the exponential growth of information production and circulation; the declining cost and increasing capacity of digital storage and processing; the economic centrality of information-intensive industries (finance, media, software, biotechnology); and the spread of computing into everyday objects and practices.

The information society thesis has been criticized on multiple grounds. Frank Webster argues that proponents often fail to specify what quantity or quality of information would distinguish an information society from its predecessors — all societies, after all, depend on information. Others note that the concept tends toward technological determinism, treating ICT diffusion as an autonomous driver of social change rather than examining the political and economic decisions that shape technological trajectories.

3.3 Castells and the Network Society

Manuel Castells’s monumental trilogy The Information Age (1996-1998) offers the most comprehensive sociological account of the transformations associated with information technology. Castells argues that we are witnessing the emergence of a new social structure — the network society (网络社会) — organized around electronic information networks rather than the hierarchical bureaucracies that characterized industrial society.

Network society (网络社会): Castells's concept describing a social structure in which key social, economic, political, and cultural activities are organized through electronic information networks. Power in the network society derives from the ability to program and switch networks rather than from control of hierarchical organizations.

For Castells, the network society is produced by the convergence of three independent processes: the information technology revolution (microelectronics, computing, telecommunications); the restructuring of capitalism (deregulation, privatization, globalization); and the emergence of new social movements (feminism, environmentalism, identity politics). The interaction of these three processes generates a new social morphology in which networks (网络) become the fundamental unit of social organization.

Castells introduces several key concepts. The space of flows (流动空间) describes the material infrastructure — fiber-optic cables, satellite links, server farms, airport hubs — through which information, capital, and people circulate in real time, superseding the space of places (地方空间) as the dominant spatial logic. Timeless time (无时间的时间) describes the compression and desequencing of temporality enabled by digital technologies: financial transactions occur in nanoseconds, news cycles collapse, and the biological rhythms of sleep and waking are disrupted by always-on connectivity.

3.4 Power in the Network Society

Castells distinguishes between two forms of power specific to the network society. Networking power (网络化权力) refers to the capacity of actors within a network to set the rules of inclusion and exclusion — to determine who and what is admitted to the network. Network-making power (网络构建权力) refers to the ability to program the goals of a network and to connect or disconnect different networks. Those who control the switches between networks — media moguls, political elites, financial regulators — wield decisive power.

Crucially, Castells argues that exclusion from dominant networks constitutes a new form of social disadvantage. The fourth world (第四世界) — populations, territories, and activities that are structurally irrelevant to the network economy — is defined not by exploitation (as in classical Marxism) but by disconnection. The homeless in American cities, subsistence farmers in sub-Saharan Africa, and displaced workers in deindustrialized regions share a condition of structural irrelevance to the networks that organize global capitalism.

The concept of the fourth world illuminates contemporary debates about digital exclusion (数字排斥). Communities without broadband internet access are not merely inconvenienced; they are structurally excluded from the networks through which employment, education, healthcare, and civic participation are increasingly organized. The COVID-19 pandemic made this exclusion starkly visible when schooling and work shifted online, revealing that millions of households lacked adequate connectivity.

Chapter 4: Social Media and Social Networks

4.1 Defining Social Media

Social media (社交媒体) refers to internet-based platforms that enable users to create, share, and interact with content and with one another. Quan-Haase distinguishes social media from earlier forms of computer-mediated communication (email, bulletin boards, chat rooms) by emphasizing three characteristics: user-generated content, profile-based identity, and networked sociality. Social media platforms are not merely communication channels; they are sociotechnical systems that structure social interaction through their architectures, algorithms, and business models.

Social media (社交媒体): Internet-based applications built on the ideological and technological foundations of Web 2.0 that allow the creation and exchange of user-generated content. Characterized by profile construction, articulated social networks, and algorithmic curation of content.

José van Dijck’s concept of platformed sociality (平台化社交) captures a crucial transformation: as social interaction migrates to commercial platforms, it becomes subject to the logics of datafication, commodification, and algorithmic selection. Platforms do not simply host social interaction; they shape it through design choices — what actions are possible (liking, sharing, commenting), what content is visible (algorithmic timelines), and what data is collected (behavioral tracking).

4.2 Social Network Theory

The sociological study of social networks (社交网络) predates digital technology by decades. Georg Simmel’s early twentieth-century analyses of dyads, triads, and group size anticipated network thinking. Jacob Moreno’s sociometry in the 1930s developed techniques for mapping interpersonal relationships. But the systematic study of social networks accelerated from the 1970s onward with the work of Harrison White, Mark Granovetter, and their students.

Mark Granovetter’s 1973 paper “The Strength of Weak Ties” is foundational. Granovetter distinguished between strong ties (强关系) — close, frequent, emotionally intense relationships such as those with family and close friends — and weak ties (弱关系) — infrequent, less emotionally intense relationships such as those with acquaintances. His counterintuitive finding was that weak ties are often more valuable than strong ties for accessing novel information and opportunities, because they bridge otherwise disconnected social clusters.

Weak ties (弱关系): Social connections characterized by infrequent interaction and low emotional intensity. Granovetter's theory argues that weak ties serve as bridges between dense clusters of strong ties, facilitating the flow of novel information and expanding individuals' access to diverse resources.

Bridging social capital (桥接型社会资本) and bonding social capital (结合型社会资本), concepts developed by Robert Putnam, map onto the strong/weak tie distinction. Bonding social capital refers to the resources embedded in close, homogeneous networks — emotional support, mutual aid, solidarity. Bridging social capital refers to the resources accessible through diverse, heterogeneous networks — information, opportunities, exposure to different perspectives. Digital social networks, with their capacity to maintain large numbers of weak ties across geographic distance, are particularly effective at generating bridging social capital.
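Granovetter’s bridge argument can be made concrete with a toy network. The sketch below is purely illustrative — the node names and tie structure are invented, not drawn from the text. Two dense clusters of strong ties are joined by a single weak tie; a simple breadth-first search shows that removing that one tie disconnects the clusters entirely, which is why weak ties matter for the flow of novel information.

```python
from collections import deque

# Hypothetical toy network: two dense friendship clusters (strong ties)
# joined by one acquaintance link (the weak tie C-D).
graph = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},   # cluster 1
    "D": {"C", "E", "F"},                                      # C-D is the bridge
    "E": {"D", "F"}, "F": {"D", "E"},                          # cluster 2
}

def connected(graph, start, goal):
    """Breadth-first search: is there any path from start to goal?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nbr in graph[node] - seen:
            seen.add(nbr)
            queue.append(nbr)
    return False

print(connected(graph, "A", "F"))   # True: the weak tie C-D carries the path

# Remove the single weak tie and the two clusters fall apart.
graph["C"].discard("D")
graph["D"].discard("C")
print(connected(graph, "A", "F"))   # False: no bridge, no novel information flow
```

The strong ties within each cluster are redundant (several paths connect any two members), while the lone weak tie is the only route between clusters — exactly the bridging role Granovetter describes.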

4.3 Network Effects and Platform Dynamics

Network effects (网络效应) describe the phenomenon whereby the value of a platform to each user increases as more users join. A telephone is useless if only one person owns one; its value grows with every additional subscriber. Social media platforms exhibit particularly strong network effects: Facebook is valuable precisely because one’s friends, family, and colleagues are on it. Network effects tend to produce winner-take-all markets (赢者通吃市场) in which a small number of platforms achieve dominant positions, making it difficult for competitors to enter and for users to exit.
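The superlinear growth behind network effects is often summarized by Metcalfe’s heuristic, under which the number of possible pairwise connections among n users is n(n−1)/2. A minimal sketch (the function name is ours, and the heuristic is an illustrative simplification rather than a claim from the text):

```python
def potential_links(n: int) -> int:
    """Distinct pairwise connections among n users: n(n-1)/2 (Metcalfe's heuristic)."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the number of possible connections,
# which is one stylized way to see why value grows faster than membership.
for n in (10, 20, 40):
    print(n, potential_links(n))   # 10 45, 20 190, 40 780
```

The jump from 45 to 190 to 780 possible links as membership doubles illustrates why early leads compound into winner-take-all dominance: each new user makes the incumbent platform more valuable to everyone already on it.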

The concept of lock-in (锁定效应) describes the difficulty users face in leaving a platform once they have invested time, content, and social connections in it. Lock-in operates through switching costs: migrating to a new platform means rebuilding one's social network, losing archived content, and learning new interfaces. Platform companies deliberately increase switching costs through proprietary data formats, restrictive APIs, and network-dependent features.

Nick Srnicek’s analysis of platform capitalism (平台资本主义) situates these dynamics within the political economy of digital capitalism. Srnicek argues that platforms are a new type of firm organized around the extraction and control of data. He identifies five types of platforms: advertising platforms (Google, Facebook), cloud platforms (Amazon Web Services), industrial platforms (Siemens), product platforms (Spotify), and lean platforms (Uber). All share a common logic: they position themselves as intermediaries, extract data from the interactions they facilitate, and use that data to generate competitive advantages.

4.4 Algorithms and Content Curation

Social media platforms do not present content chronologically; they use algorithms (算法) to select, rank, and recommend content based on predictions about what each user is most likely to engage with. Algorithmic curation is not neutral: it reflects the business imperatives of platforms (maximizing engagement to sell advertising), the biases embedded in training data, and the design decisions of engineers.

Algorithmic curation (算法策展): The use of automated computational procedures to select, filter, rank, and recommend content to users. On social media platforms, algorithms determine what appears in users' feeds, shaping their information environments and influencing their perceptions of the social world.
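To see how engagement-driven ranking departs from chronology, consider a deliberately simplified feed ranker. Everything here is hypothetical — the scoring weights, the exponential recency decay, and the sample posts are invented for illustration and do not describe any real platform’s algorithm:

```python
import math
import time

def score(post: dict, now: float) -> float:
    """Hypothetical ranking: predicted engagement, discounted by post age."""
    age_hours = (now - post["posted_at"]) / 3600
    recency = math.exp(-age_hours / 24)   # decays over roughly a day
    return post["predicted_engagement"] * recency

now = time.time()
feed = [
    {"id": "calm_news",  "predicted_engagement": 0.2, "posted_at": now - 600},
    {"id": "outrage",    "predicted_engagement": 0.9, "posted_at": now - 7200},
    {"id": "friend_pic", "predicted_engagement": 0.5, "posted_at": now - 3600},
]
ranked = sorted(feed, key=lambda p: score(p, now), reverse=True)
print([p["id"] for p in ranked])
# The two-hour-old high-engagement post outranks the ten-minute-old calm one:
# chronology plays no role once predicted engagement dominates the score.
```

Even this toy version exhibits the dynamic the text describes: because the objective is predicted engagement rather than recency or accuracy, content optimized to provoke reactions systematically rises above newer or more sober material.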

The consequences of algorithmic curation are significant. Filter bubbles (过滤气泡), a concept introduced by Eli Pariser, describe the tendency of personalization algorithms to show users content that confirms their existing preferences and beliefs, potentially narrowing their exposure to diverse perspectives. Echo chambers (回音室) describe a related phenomenon in which users interact primarily with like-minded others, reinforcing shared beliefs and amplifying polarization.

Empirical research on filter bubbles and echo chambers has produced mixed findings. Some studies confirm that algorithmic curation reduces exposure to cross-cutting political content; others find that most users encounter more ideological diversity online than offline. The relationship between algorithmic personalization and political polarization remains actively debated.


Chapter 5: Digital Divide and Digital Inequality

5.1 The First-Level Digital Divide

The digital divide (数字鸿沟) refers to inequalities in access to and use of information and communication technologies. The concept emerged in the 1990s as researchers and policymakers recognized that the benefits of the internet were unevenly distributed across populations. The first-level digital divide focuses on access (接入) — the basic question of whether individuals and communities have the hardware, software, and connectivity necessary to go online.

Digital divide (数字鸿沟): Systematic inequalities in access to, use of, and benefits derived from information and communication technologies. The concept encompasses multiple levels: access (first level), skills and usage patterns (second level), and tangible outcomes (third level).

Research consistently shows that the first-level digital divide maps onto existing axes of social inequality. Income is the strongest predictor of internet access: lower-income households are significantly less likely to have broadband connections. Age, education, race and ethnicity, disability status, and geography (urban/rural) are also significant predictors. Globally, the divide between the Global North and Global South remains enormous: while internet penetration exceeds 90 percent in most wealthy nations, it remains below 30 percent in many countries in sub-Saharan Africa and South Asia.

5.2 The Second-Level Digital Divide

As internet access became more widespread in wealthy nations, researchers recognized that access alone does not ensure equitable participation in digital society. The second-level digital divide (第二层次数字鸿沟) refers to inequalities in digital skills (数字技能) and usage patterns among those who have access. Eszter Hargittai’s research has demonstrated that even among young, well-connected populations, significant differences exist in the ability to navigate the internet effectively, evaluate online information critically, and use digital tools for economic and civic purposes.

Hargittai distinguishes several dimensions of digital skill: operational skills (操作技能, the ability to use hardware and software), formal skills (形式技能, the ability to navigate digital environments), information skills (信息技能, the ability to search for and evaluate online information), and strategic skills (策略技能, the ability to use digital technologies to achieve personal and professional goals). Inequalities at each level compound: those with lower operational skills are less able to develop information and strategic skills.

5.3 The Third-Level Digital Divide

The third-level digital divide (第三层次数字鸿沟) focuses on tangible outcomes (有形成果) — the real-world benefits that individuals derive from their internet use. Even among those with comparable access and skills, the returns to internet use vary systematically by social position. Higher-income, more educated users are more likely to use the internet for activities that enhance their economic, social, and cultural capital — job searching, professional networking, health information, civic engagement — while lower-income, less educated users are more likely to use the internet primarily for entertainment and communication.

Third-level digital divide (第三层次数字鸿沟): Inequalities in the tangible social, economic, and political benefits that individuals and groups derive from their use of digital technologies, even when access and skills are comparable. Also termed the digital outcome divide (数字结果鸿沟).

5.4 Intersectionality and Digital Inequality

An intersectional analysis reveals that digital inequalities are not simply additive but multiplicative. A low-income, elderly, rural, Indigenous woman does not simply face four separate disadvantages; these categories interact to produce a distinctive experience of digital exclusion. Race and gender intersect with class to shape not only who has access to technology but what kinds of technology are designed for whom, whose concerns are reflected in platform policies, and whose voices are amplified or suppressed by algorithmic systems.

Research on algorithmic discrimination (算法歧视) demonstrates how digital inequality operates through ostensibly neutral technical systems. Latanya Sweeney found that Google searches for names more commonly given to Black Americans were significantly more likely to display ads suggesting an arrest record than searches for names more commonly given to white Americans. Safiya Umoja Noble's Algorithms of Oppression documented how Google search results for "Black girls" returned hypersexualized and degrading content. These are not random glitches but systematic patterns rooted in the data on which algorithms are trained and the social contexts in which they operate.

5.5 Global Digital Inequality

At the global level, digital inequality intersects with longstanding patterns of economic dependency and geopolitical power. The infrastructure of the global internet — undersea cables, root servers, domain name systems — is overwhelmingly controlled by corporations and institutions based in the United States and Europe. The dominant platforms (Google, Facebook, Amazon, Apple) are American companies whose designs, policies, and algorithms reflect American cultural assumptions and business models.

Data colonialism (数据殖民主义), a concept developed by Nick Couldry and Ulises Mejias, describes how the extraction of data from populations in the Global South by platform companies based in the Global North reproduces colonial patterns of resource extraction. Just as colonial powers extracted raw materials from colonized territories for processing in metropolitan factories, digital platform companies extract behavioral data from users worldwide for processing in Silicon Valley algorithms.


Chapter 6: Surveillance and Privacy

6.1 Foucault and the Panopticon

Michel Foucault’s analysis of the panopticon (全景监狱) provides the foundational metaphor for sociological analyses of surveillance. Jeremy Bentham’s panopticon was an architectural design for a prison in which a central observation tower allowed a single guard to monitor all inmates, who could never be certain whether they were being watched at any given moment. Foucault argued that the panopticon exemplified a distinctly modern form of power: disciplinary power (规训权力) operates not through spectacular punishment but through constant visibility, inducing individuals to internalize surveillance and regulate their own behavior.

Panopticon (全景监狱): Bentham's prison design, reinterpreted by Foucault as a metaphor for modern disciplinary power. The key mechanism is the internalization of surveillance: the subject, aware of the possibility of being observed at any moment, begins to monitor and discipline their own behavior, rendering external coercion unnecessary.

Foucault’s insight was that the panopticon is not merely a building but a diagram of power — a principle applicable to schools, hospitals, factories, barracks, and any institution organized around the imperative to observe, classify, and normalize populations. The extension of this principle to digital environments has generated a rich body of scholarship.

6.2 Surveillance Society and Dataveillance

David Lyon argues that we live in a surveillance society (监控社会) in which monitoring has become a routine, pervasive feature of everyday life rather than an exceptional measure directed at suspected criminals or political dissidents. Surveillance is embedded in mundane activities: swiping a loyalty card at the grocery store, passing through an automated toll booth, carrying a smartphone that continuously transmits location data.

Dataveillance (数据监控): The systematic monitoring of individuals through their digital data trails. Coined by Roger Clarke, dataveillance describes the use of personal data systems to monitor the actions or communications of individuals, replacing or supplementing direct physical observation.

Dataveillance (数据监控), a term coined by Roger Clarke, describes the systematic use of personal data for monitoring purposes. Unlike traditional surveillance, which requires direct observation, dataveillance operates through the aggregation and analysis of digital traces — transaction records, communication metadata, location data, browsing histories, social media activity. The shift from surveillance to dataveillance represents a qualitative transformation: monitoring becomes continuous rather than episodic, automated rather than labor-intensive, and aggregated rather than targeted.

6.3 Platform Surveillance and Surveillance Capitalism

Shoshana Zuboff’s concept of surveillance capitalism (监控资本主义) describes a new economic logic in which the extraction and analysis of behavioral data is the primary source of profit. Zuboff argues that Google pioneered surveillance capitalism when it discovered that the “behavioral surplus” generated by users’ search queries — data beyond what was needed to improve the search product — could be fed into predictive algorithms and sold to advertisers as predictions about future behavior.

Surveillance capitalism (监控资本主义): An economic system organized around the extraction, analysis, and sale of behavioral data. Zuboff argues that surveillance capitalists claim human experience as free raw material for translation into behavioral predictions, which are then sold in behavioral futures markets (行为期货市场).

Surveillance capitalism differs from earlier forms of capitalism in that its raw material is not labor or natural resources but human behavior. Platform companies deploy ever more sophisticated instruments of behavioral extraction — sensors, cameras, microphones, location trackers, keystroke loggers — to capture data about what people do, where they go, whom they communicate with, and how they feel. This data is processed by machine-learning algorithms to generate predictions about future behavior, which are sold to advertisers and other institutional clients.

6.4 Privacy in the Digital Age

Privacy (隐私) has traditionally been understood as the right to control information about oneself and to be free from unwanted observation. Helen Nissenbaum’s framework of contextual integrity (语境完整性) argues that privacy is violated not simply when personal information is collected but when information flows violate the norms appropriate to a particular context. Sharing health information with a doctor is expected; the doctor sharing that information with an employer violates contextual integrity.

The privacy paradox (隐私悖论) describes the well-documented discrepancy between individuals' stated concern about privacy and their actual disclosure behavior. Surveys consistently show that people value privacy highly, yet they routinely share personal information on social media, accept lengthy terms of service without reading them, and trade personal data for convenience. Explanations include information asymmetry (users do not understand what data is collected or how it is used), rational calculation (the immediate benefits of platform use outweigh the diffuse costs of data extraction), and structural coercion (participation in digital society is effectively mandatory, leaving individuals no realistic option to withhold their data).

6.5 State Surveillance and Resistance

The Snowden revelations of 2013 exposed the scale of state surveillance programs, demonstrating that agencies such as the NSA and GCHQ were systematically collecting metadata from millions of phone calls, tapping undersea fiber-optic cables, and compelling technology companies to provide access to user data. These revelations catalyzed public debate about the relationship between security and privacy, the oversight of intelligence agencies, and the complicity of technology companies in state surveillance.

Resistance to surveillance takes multiple forms: technical (encryption, VPNs, Tor), legal (privacy legislation such as the GDPR), political (advocacy organizations such as the Electronic Frontier Foundation), and cultural (art, literature, and performance that make surveillance visible and contestable). The concept of sousveillance (逆监控), coined by Steve Mann, describes the practice of monitoring authorities from below — citizens filming police encounters, for instance — as a counter to top-down surveillance.


Chapter 7: Online Communities and Virtual Identities

7.1 Community in the Digital Age

The question of whether meaningful community (社区/社群) can exist online has been debated since the earliest days of the internet. Howard Rheingold’s The Virtual Community (1993) offered an enthusiastic account of online community formation, describing the WELL (Whole Earth ‘Lectronic Link) as a genuine community characterized by mutual support, shared norms, and emotional bonds. Critics such as Hubert Dreyfus argued that online interaction is too disembodied, too anonymous, and too ephemeral to sustain genuine community.

Virtual community (虚拟社区): A social aggregation that emerges from sustained computer-mediated interaction among individuals who share interests, values, or purposes, and who develop interpersonal relationships and a sense of belonging. Whether virtual communities constitute "real" communities remains a subject of sociological debate.

Barry Wellman’s research on networked individualism (网络化个人主义) reframes the debate. Wellman argues that community has been transformed rather than destroyed by digital technology. The traditional model of community — place-based, group-based, densely interconnected — has given way to a network model in which individuals maintain personal networks of ties that cut across geographic, organizational, and social boundaries. The individual, rather than the group, is the primary unit of connectivity.

7.2 Identity Performance Online

Erving Goffman’s dramaturgical theory, developed long before the internet, provides a foundational framework for understanding online identity. Goffman argued that social life is a kind of theater in which individuals perform roles, manage impressions, and maintain front stage (前台) and back stage (后台) regions. Social media platforms extend and complicate this analysis: profiles are curated performances, posts are impression management strategies, and the boundary between front and back stage becomes blurred when audiences from different social contexts (employers, family, friends) converge on a single platform.

Context collapse (语境坍塌): The flattening of multiple, distinct social contexts into a single context on social media platforms. When an individual's parents, employer, college friends, and political acquaintances all see the same post, the careful audience segregation that characterizes offline social life breaks down, creating anxiety and strategic self-censorship.

danah boyd’s research on context collapse (语境坍塌) demonstrates how social media disrupts the audience segregation that Goffman identified as essential to impression management. In face-to-face interaction, individuals tailor their self-presentation to specific audiences; on social media, diverse audiences are collapsed into a single, undifferentiated public. This generates anxiety — the difficulty of crafting a post that is simultaneously appropriate for one’s boss, one’s mother, and one’s college friends — and strategic responses, including the creation of multiple accounts, the use of privacy settings to segment audiences, and the adoption of ambiguous, coded language.

7.3 Sherry Turkle and the Self Online

Sherry Turkle’s early work, Life on the Screen (1995), explored how online environments — particularly MUDs (Multi-User Dungeons) — enabled users to experiment with identity, adopting multiple personae and exploring aspects of the self that were constrained in offline life. Turkle saw this as potentially liberating: digital environments could serve as identity workshops in which individuals developed richer, more flexible understandings of themselves.

Her later work, Alone Together (2011), struck a more critical tone. Turkle argued that constant connectivity was producing a new form of social isolation: people were physically present but psychologically absent, tethered to their devices, substituting the controlled, asynchronous interactions of texting and social media for the riskier, more demanding encounters of face-to-face conversation. The result was a diminished capacity for solitude (独处) — the ability to be comfortable alone with one’s thoughts — and for empathy (共情) — the ability to attend fully to another person’s experience.

Turkle's critique resonates with broader concerns about the quantified self (量化自我) — the practice of using digital devices to track and optimize bodily and psychological processes (steps walked, calories consumed, sleep quality, mood). The quantified self movement exemplifies the extension of instrumental rationality into the most intimate domains of human experience, raising questions about whether self-knowledge mediated by algorithms enhances or diminishes autonomy.

7.4 Race, Gender, and Identity Online

Online spaces are not post-racial or post-gender utopias. Research consistently shows that offline inequalities of race, gender, sexuality, and class are reproduced and sometimes amplified online. Lisa Nakamura’s concept of cybertypes (网络刻板印象) describes how racial and ethnic stereotypes are reproduced in digital environments through visual representations, interaction patterns, and platform design. André Brock’s work on critical technocultural discourse analysis examines how Black users of Twitter (Black Twitter) create distinctive cultural spaces within a platform not designed for them, using the platform’s affordances in creative ways that reflect African American communicative traditions.

Gender-based online harassment (网络骚扰) — including doxxing, revenge pornography, death threats, and coordinated abuse campaigns — disproportionately targets women, people of color, and LGBTQ+ individuals. The scale and intensity of online harassment reflect not only individual pathology but structural features of platform design (anonymity, lack of accountability, engagement-maximizing algorithms that amplify outrage) and broader patterns of misogyny and racism.


Chapter 8: Work, Automation, and the Gig Economy

8.1 Technology and the Transformation of Work

The relationship between technology and work has been a central concern of sociology since Marx’s analyses of the factory system. Harry Braverman’s Labor and Monopoly Capital (1974) extended Marx’s analysis to the twentieth century, arguing that the application of scientific management (Taylorism) and automation to the labor process produces a systematic deskilling (去技能化) of workers. Tasks that once required judgment, discretion, and craft knowledge are broken into simple, repetitive components that can be performed by less skilled (and therefore cheaper) workers or by machines.

Deskilling (去技能化): The process by which the introduction of new technologies or management practices reduces the skill requirements of jobs, transferring knowledge and control from workers to management or to machines. Braverman argued that deskilling is an inherent tendency of capitalist production.

The deskilling thesis has been qualified by research showing that technological change can also produce upskilling (技能提升) — the creation of new, more complex tasks requiring higher levels of education and training — and reskilling (技能重塑) — the transformation of existing jobs in ways that require new but not necessarily lesser skills. The net effect of technological change on skill levels is an empirical question that varies across industries, occupations, and institutional contexts.

8.2 The Gig Economy and Platform Labor

The gig economy (零工经济) describes an economic arrangement in which workers perform short-term tasks or projects, often mediated by digital platforms, rather than holding traditional full-time employment. Platforms such as Uber, Lyft, DoorDash, TaskRabbit, and Upwork connect workers with clients, manage transactions, and evaluate performance through rating systems, while classifying workers as independent contractors rather than employees.

Gig economy (零工经济): An economic system characterized by short-term, flexible, freelance, or on-demand work arrangements, often mediated by digital platforms that match workers with tasks. The gig economy raises questions about labor rights, economic security, and the erosion of the standard employment relationship.

Platform labor raises profound questions about employment classification, labor rights, and economic security. By classifying workers as independent contractors, platforms avoid obligations to provide minimum wages, benefits, overtime pay, unemployment insurance, and workplace protections. Workers bear the costs and risks — vehicle maintenance, fuel, insurance, health care — while platforms capture the surplus generated by their labor.

8.3 Algorithmic Management

Algorithmic management (算法管理) refers to the use of algorithms and data analytics to direct, evaluate, and discipline workers. In the gig economy, algorithms determine which tasks workers are assigned, how their performance is rated, and whether they are permitted to continue working on the platform. Uber’s algorithm, for example, assigns rides, sets prices through surge pricing, monitors driver behavior (speed, route, acceptance rate), and deactivates drivers whose ratings fall below a threshold — all without direct human managerial intervention.

Algorithmic management extends managerial control in ways that are simultaneously more pervasive and less visible than traditional supervision. Workers are subject to continuous, automated evaluation based on metrics they often do not fully understand and cannot negotiate. The opacity of algorithmic decision-making — what Frank Pasquale calls the black box (黑箱) — means that workers cannot easily identify, challenge, or appeal managerial decisions. This represents a significant shift in the balance of power between workers and employers.
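
The kind of rule such systems automate can be sketched in a few lines. This is an illustrative toy, not Uber's actual algorithm: the rating threshold and evaluation window are invented values, but the logic — continuous, automated evaluation against a metric, with deactivation and no human review — is the pattern the text describes.

```python
# Toy sketch of algorithmic management (hypothetical threshold and window,
# not any real platform's values): a worker's status is decided automatically
# by a rolling average rating, with no human managerial intervention.

def update_driver_status(ratings, threshold=4.6, window=100):
    """Return 'active' or 'deactivated' based on the mean of recent ratings."""
    recent = ratings[-window:]              # only the most recent ratings count
    mean = sum(recent) / len(recent)
    return "active" if mean >= threshold else "deactivated"

print(update_driver_status([5, 5, 4, 5, 5]))     # mean 4.8 -> active
print(update_driver_status([4, 4, 5, 4, 4, 4]))  # mean ~4.17 -> deactivated
```

Even this trivial version shows the asymmetry the chapter emphasizes: the threshold and window are set unilaterally by the platform, and the worker experiences only the output.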

8.4 Automation and the Future of Work

Debates about automation (自动化) and its effects on employment have intensified with advances in artificial intelligence and robotics. The automation anxiety thesis holds that machines will displace human workers on an unprecedented scale, producing mass unemployment and social disruption. A widely cited 2013 study by Carl Benedikt Frey and Michael Osborne estimated that 47 percent of U.S. jobs were at high risk of automation within one to two decades.

Critics of the automation anxiety thesis point out that past predictions of technological unemployment have consistently failed to materialize. While technology eliminates some jobs, it also creates new ones — in technology development, maintenance, and in entirely new industries — and increases productivity in ways that generate economic growth and demand for labor. The key question is not whether technology destroys jobs but whether the pace and nature of job creation will match the pace and nature of job destruction, and whether the workers displaced by automation will have the skills needed for the new jobs created.

The automation debate illustrates the tension between technological determinism (技术决定论) and social shaping (社会塑造) perspectives. Deterministic accounts treat automation as an autonomous technical process with predetermined social consequences (mass unemployment). Social shaping accounts emphasize that the effects of automation depend on political choices — labor market regulations, education and training systems, social safety nets, corporate governance structures — that are subject to democratic deliberation and contestation.


Chapter 9: Political Participation and Digital Activism

9.1 The Internet and Democratic Participation

Early discourse about the internet was marked by utopian optimism about its democratic potential. The internet, proponents argued, would lower barriers to political participation, enable citizens to access information and deliberate on public issues, bypass gatekeeping institutions (media, political parties), and empower marginalized voices. This optimism was grounded in the internet’s decentralized architecture, low cost of publication, and capacity for many-to-many communication.

E-democracy (电子民主): The use of information and communication technologies to enhance democratic processes, including citizen access to information, public deliberation, and direct participation in governance. E-democracy encompasses a spectrum from incremental enhancements (online voting, digital petitions) to transformative visions (digital direct democracy, participatory budgeting).

Empirical research has tempered this optimism. Studies of online political participation consistently find that digital engagement reproduces rather than equalizes offline patterns of participation: those who are already politically active, well-educated, and well-resourced are more likely to participate politically online. The internet has not produced a new class of engaged citizens from previously disengaged populations; it has given already-active citizens new tools.

9.2 Digital Activism and Social Movements

The role of digital technologies in social movements has been a major focus of research since the Arab Spring of 2010-2011, which demonstrated the capacity of social media to facilitate rapid mobilization, coordinate collective action across geographic distances, and attract international attention to protest movements.

Zeynep Tufekci’s Twitter and Tear Gas (2017) offers a nuanced analysis. Tufekci argues that digital technologies enable movements to grow rapidly by lowering the costs of coordination and communication, but that this rapid growth can be a weakness as well as a strength. Movements that scale quickly via social media often lack the organizational infrastructure — leadership development, strategic capacity, tactical flexibility — that earlier movements built through years of face-to-face organizing. Tufekci calls this the capacity paradox (能力悖论): the ease of digitally enabled mobilization can produce movements that appear powerful but lack the organizational depth to sustain themselves or to translate protest into political change.

Hashtag activism (标签行动主义): The use of social media hashtags to raise awareness, express solidarity, or advocate for social and political causes. Examples include #BlackLivesMatter, #MeToo, and #FridaysForFuture. Hashtag activism has been both celebrated for its capacity to amplify marginalized voices and criticized as superficial engagement that substitutes symbolic expression for substantive action.

9.3 Slacktivism and the Critique of Digital Activism

The concept of slacktivism (懒人行动主义) — a portmanteau of “slacker” and “activism” — captures the concern that digital activism substitutes low-cost, low-risk symbolic actions (signing online petitions, sharing hashtags, changing profile pictures) for the sustained, demanding engagement that produces political change. Evgeny Morozov’s The Net Delusion (2011) argued that Western enthusiasm for internet-enabled democracy was naive, ignoring how authoritarian regimes use the same technologies for surveillance, propaganda, and repression.

The slacktivism critique has itself been criticized for imposing an artificial hierarchy of political action and for dismissing the consciousness-raising and solidarity-building functions of symbolic expression. Research by Sandra González-Bailón and others suggests that online activism is not necessarily a substitute for offline action but can serve as a gateway — that those who engage in low-cost digital actions are more, not less, likely to engage in higher-cost forms of political participation.

9.4 Misinformation, Disinformation, and Platform Governance

The optimistic vision of the internet as a democratic public sphere has been further complicated by the proliferation of misinformation (错误信息, false information spread without intent to deceive) and disinformation (虚假信息, false information deliberately created and spread to deceive). The 2016 U.S. presidential election, the Brexit referendum, and subsequent electoral events around the world demonstrated how social media platforms could be weaponized to spread false claims, amplify polarization, and undermine trust in democratic institutions.

Disinformation (虚假信息): False or misleading information intentionally created and disseminated to deceive, manipulate public opinion, or achieve strategic objectives. Distinguished from misinformation (错误信息), which is false but spread without deliberate intent to deceive, and from malinformation (恶意信息), which is true but shared with intent to cause harm.

Platform governance — the rules, norms, and enforcement mechanisms through which platforms regulate content and conduct — has emerged as a critical site of political contestation. Decisions about what constitutes acceptable speech, who is permitted to speak, and how content moderation is enforced have enormous consequences for democratic discourse. These decisions are made primarily by private corporations, raising questions about the accountability and legitimacy of platform governance.


Chapter 10: Artificial Intelligence and Society

10.1 Defining Artificial Intelligence

Artificial intelligence (人工智能, AI) refers broadly to computational systems designed to perform tasks that would ordinarily require human intelligence — recognizing patterns, making predictions, understanding language, navigating physical environments. Contemporary AI is dominated by machine learning (机器学习), a family of techniques in which algorithms learn to perform tasks by identifying patterns in large datasets rather than following explicitly programmed rules.

Machine learning (机器学习): A subset of artificial intelligence in which algorithms improve their performance at a task through exposure to data, without being explicitly programmed for that task. Machine learning encompasses supervised learning (training on labeled data), unsupervised learning (identifying patterns in unlabeled data), and reinforcement learning (learning through trial and error with feedback signals).

The sociological analysis of AI focuses not on the technical details of algorithms but on the social conditions of their production, the social consequences of their deployment, and the power relations they encode and reproduce. AI systems are not neutral tools; they are social artifacts that reflect the values, assumptions, and biases of their creators and the data on which they are trained.

10.2 Algorithmic Bias and Discrimination

Algorithmic bias (算法偏见) refers to systematic errors in algorithmic outputs that produce unfair outcomes for particular social groups. Bias can enter algorithmic systems at multiple points: through biased training data (historical data that reflects existing patterns of discrimination), through biased problem formulation (defining the problem in ways that embed discriminatory assumptions), through biased feature selection (using variables that serve as proxies for race, gender, or class), and through biased evaluation metrics (measuring success in ways that ignore disparate impacts).

The COMPAS recidivism prediction algorithm, widely used in the U.S. criminal justice system, was found by ProPublica to be significantly more likely to falsely classify Black defendants as high risk for reoffending and to falsely classify white defendants as low risk. The algorithm did not use race as an explicit input, but it relied on variables (criminal history, employment status, neighborhood) that are correlated with race due to structural racism in policing, employment, and housing. This case illustrates how algorithmic discrimination (算法歧视) can operate through formally race-neutral mechanisms.
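
The proxy mechanism can be demonstrated with a toy simulation. The data below is wholly invented (it is not the COMPAS model or ProPublica's dataset): a risk rule that never sees group membership still produces unequal false-positive rates, because its input variable — prior arrests — is itself shaped by differential policing of the two groups.

```python
# Toy illustration with invented data: a formally group-blind risk rule
# produces disparate false-positive rates through a proxy variable.
# Each record: (group, prior_arrests, actually_reoffended)
records = [
    # Group A is policed more heavily, so even non-reoffenders accumulate arrests.
    ("A", 3, False), ("A", 4, False), ("A", 5, True), ("A", 2, False),
    ("B", 0, False), ("B", 1, False), ("B", 4, True), ("B", 0, False),
]

def high_risk(prior_arrests, threshold=2):
    """Group-blind rule: flag anyone with more than `threshold` prior arrests."""
    return prior_arrests > threshold

def false_positive_rate(group):
    """Share of non-reoffenders in `group` wrongly flagged as high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if high_risk(r[1])]
    return len(flagged) / len(negatives)

print(false_positive_rate("A"))  # 2 of 3 non-reoffenders flagged (~0.67)
print(false_positive_rate("B"))  # 0 of 3 non-reoffenders flagged (0.0)
```

The rule is identical for everyone, yet the groups experience it very differently — which is the sense in which algorithmic discrimination can operate through formally race-neutral mechanisms.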

Virginia Eubanks’s Automating Inequality (2018) examines how automated decision-making systems in public services — welfare eligibility, child protective services, homeless services — systematically disadvantage poor and working-class communities. Eubanks argues that these systems automate existing biases, criminalizing poverty and subjecting marginalized populations to heightened surveillance and punitive interventions.

10.3 Transparency, Accountability, and Explainability

The opacity of algorithmic decision-making raises fundamental questions about accountability (问责性) and transparency (透明性). When an algorithm denies someone a loan, flags someone as a security threat, or determines someone’s eligibility for government benefits, the affected individual typically cannot understand why the decision was made, cannot identify who is responsible, and cannot effectively challenge the outcome.

Algorithmic accountability (算法问责): The principle that the designers, deployers, and operators of algorithmic systems should be answerable for the outcomes those systems produce. Algorithmic accountability requires mechanisms for transparency (making algorithmic processes visible), explainability (making algorithmic decisions comprehensible), and redress (providing avenues for challenging algorithmic decisions).

The call for explainable AI (可解释人工智能, XAI) reflects the demand that algorithmic decisions be comprehensible to the people affected by them. The European Union’s General Data Protection Regulation (GDPR) includes provisions that have been interpreted as establishing a “right to explanation” for automated decisions, though the scope and enforceability of this right remain debated.

10.4 AI and the Labor Market

The impact of AI on the labor market extends beyond the automation debates discussed in Chapter 8. AI is not only replacing routine manual and cognitive tasks but is beginning to encroach on tasks previously considered uniquely human — creative writing, artistic composition, medical diagnosis, legal analysis, scientific research. The development of large language models and generative AI systems has accelerated these concerns.

The sociological question is not simply whether AI will displace human workers but how the costs and benefits of AI-driven productivity gains will be distributed. Historical patterns suggest that technological change tends to increase overall wealth while concentrating its benefits among capital owners and highly skilled workers, exacerbating inequality. Without deliberate policy interventions — progressive taxation, social safety nets, education and retraining programs, labor market regulations — AI-driven productivity gains are likely to reproduce and amplify existing inequalities.

10.5 AI Ethics and Governance

The rapid deployment of AI systems in high-stakes domains — criminal justice, healthcare, hiring, lending, education — has generated an emerging field of AI ethics (人工智能伦理). Key principles include fairness (algorithmic systems should not discriminate), transparency (algorithmic processes should be open to scrutiny), accountability (those who deploy AI should be answerable for its effects), and beneficence (AI should be designed to benefit humanity).

Critics of the AI ethics framework argue that principles without enforcement mechanisms are insufficient, that ethics discourse can serve as a substitute for binding regulation, and that the framing of AI governance as an ethical rather than political question obscures the power relations at stake. Kate Crawford’s Atlas of AI (2021) argues that AI must be understood not as an abstract technology but as a material infrastructure with enormous environmental, labor, and social costs — from the mining of rare earth minerals to the exploitation of data labelers in the Global South.


Chapter 11: Health, Well-being, and Technology

11.1 Technology and Health Information

Digital technologies have transformed the landscape of health information, enabling individuals to access medical knowledge, track their health, communicate with providers, and participate in health communities online. The concept of the e-patient (电子患者) describes individuals who use digital tools to manage their health actively, seeking information, connecting with others who share their conditions, and engaging with healthcare providers as informed partners rather than passive recipients of care.

Health informatics (健康信息学): The interdisciplinary field concerned with the design, development, and deployment of information technologies in healthcare settings. Health informatics encompasses electronic health records, telemedicine, mobile health applications, and the use of data analytics for public health surveillance and clinical decision support.

The benefits of digital health information are significant: patients can access information about conditions, treatments, and providers; individuals in rural or underserved areas can consult with specialists via telemedicine; mobile health applications can support chronic disease management; and social media health communities can provide emotional support and practical advice. However, these benefits are unevenly distributed along the axes of digital inequality discussed in Chapter 5.

11.2 Screen Time and Mental Health

The relationship between screen time (屏幕时间) and mental health, particularly among adolescents, has generated intense public concern and scholarly debate. Jean Twenge’s research argues that the rise of smartphones and social media is causally linked to increases in depression, anxiety, loneliness, and suicidality among American teenagers, particularly girls. Twenge identifies 2012 — the year smartphone ownership among American teens exceeded 50 percent — as an inflection point after which mental health indicators deteriorated sharply.

The screen time debate illustrates the challenges of establishing causality in the social sciences. Critics of Twenge’s thesis, including Andrew Przybylski and Amy Orben, argue that the statistical associations between screen time and mental health outcomes are extremely small — comparable in magnitude to the association between wearing glasses and mental health — and that the causal direction is ambiguous: troubled adolescents may turn to screens for comfort rather than screens causing their distress. The debate underscores the need for longitudinal research, attention to the specific activities that screen time encompasses (which vary enormously), and resistance to moral panics about new technologies.
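The scale of the disputed effect sizes can be made concrete with a line of arithmetic: a correlation coefficient r explains r² of the variance in an outcome, so the small associations at issue account for well under one percent of differences in well-being. The r value below is an illustrative round number, not an exact estimate from Orben and Przybylski’s analyses.

```python
# Variance explained (r squared) by a small correlation, to make concrete
# why critics describe the screen-time association as "extremely small".
# r = 0.05 is an illustrative round number, not an exact study estimate.
r = 0.05
variance_explained = r ** 2
print(f"r = {r:.2f} explains {variance_explained:.2%} of the variance")
# prints: r = 0.05 explains 0.25% of the variance
```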

11.3 Social Comparison and Social Media

Social media platforms facilitate social comparison (社会比较) — the process by which individuals evaluate themselves relative to others. Leon Festinger’s social comparison theory (1954) identified two types of comparison: upward comparison (comparing oneself to those perceived as superior, which can produce motivation or envy) and downward comparison (comparing oneself to those perceived as inferior, which can produce gratitude or contempt). Social media intensifies upward social comparison by presenting curated, idealized versions of others’ lives: vacations, accomplishments, physical appearance, relationship milestones.

Research suggests that exposure to idealized content on social media is associated with lower body image satisfaction, particularly among young women; increased feelings of envy and inadequacy; and a phenomenon known as FOMO (错失恐惧症, fear of missing out) — the anxiety that others are having rewarding experiences from which one is absent.

11.4 Technology Addiction and Digital Wellness

The concept of technology addiction (技术成瘾) — sometimes termed internet addiction disorder, smartphone addiction, or social media addiction — describes patterns of compulsive technology use that interfere with daily functioning, relationships, and well-being. The World Health Organization’s inclusion of “gaming disorder” in the International Classification of Diseases (ICD-11) in 2018 lent institutional legitimacy to the concept, though significant debate continues about whether technology addiction constitutes a genuine clinical disorder or a moral panic.

Persuasive design (说服性设计): The intentional design of digital interfaces to influence user behavior, often by exploiting cognitive biases and psychological vulnerabilities. Techniques include variable reward schedules (unpredictable notifications and content refreshes), social validation (likes, comments, follower counts), infinite scrolling, and autoplay features. Former Google design ethicist Tristan Harris has described these techniques as the product of a “race to the bottom of the brain stem.”
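The variable reward schedule named above borrows from behavioral psychology: rewards that arrive unpredictably sustain checking behavior more effectively than predictable ones, because no individual check can be ruled out in advance as futile. A toy simulation of feed refreshes under such a schedule follows; the 30 percent reward rate is an arbitrary illustrative figure, not a value drawn from any real platform.

```python
import random

random.seed(42)  # reproducible illustration

def check_feed(n_checks: int, reward_prob: float) -> list[bool]:
    """Simulate n_checks feed refreshes under a variable-ratio schedule:
    each check independently yields rewarding content with reward_prob."""
    return [random.random() < reward_prob for _ in range(n_checks)]

rewards = check_feed(20, 0.3)  # 30% reward rate: arbitrary illustrative figure
print(f"{sum(rewards)} of {len(rewards)} checks were rewarding")
print("reward pattern:", [int(r) for r in rewards])
```

On a fixed schedule a user could learn exactly when checking pays off and stop checking in between; the unpredictability is the design choice that keeps attention engaged.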

The digital wellness (数字健康) movement advocates for more intentional, balanced relationships with technology, including practices such as screen time limits, notification management, device-free zones and times, and digital sabbaths. Critics note that framing technology overuse as an individual behavioral problem obscures the structural forces — persuasive design, the attention economy, the commodification of engagement — that produce compulsive use.


Chapter 12: Globalization and Technology

12.1 Globalization and ICTs

Globalization (全球化) — the intensification of worldwide social relations linking distant localities such that events in one place are shaped by and shape events in places far away — has been profoundly accelerated and transformed by information and communication technologies. The internet, mobile telephony, satellite communication, and digital logistics systems have enabled the compression of time and space that David Harvey termed time-space compression (时空压缩), facilitating the global circulation of capital, goods, information, images, and people at unprecedented speed and scale.

Time-space compression (时空压缩): Harvey’s concept describing the reduction of the time required to traverse space, made possible by innovations in transportation and communication technology. Digital technologies have intensified time-space compression to the point where information can traverse the globe instantaneously and economic transactions can be executed in microseconds.

Castells’s concept of the network society, discussed in Chapter 3, is fundamentally a theory of globalization: the network society is a global social structure organized around flows of information, capital, and technology that transcend national boundaries. The global internet, global financial networks, and global media systems constitute the infrastructure of a new form of social organization that is simultaneously global in reach and uneven in its distribution.

12.2 Global Flows and Cultural Globalization

Arjun Appadurai’s framework of global scapes (景观) provides a useful vocabulary for analyzing the cultural dimensions of technology-mediated globalization. Appadurai identifies five dimensions of global cultural flow: technoscapes (技术景观, the global configuration of technology), mediascapes (媒体景观, the distribution of media capacities and images), ideoscapes (意识形态景观, the global flow of ideologies), financescapes (金融景观, the global movement of capital), and ethnoscapes (族群景观, the global movement of people). These scapes are not synchronized; they move at different speeds, in different directions, and with different consequences, producing complex, unpredictable cultural configurations rather than simple homogenization.

The question of whether globalization produces cultural homogenization (文化同质化) — the worldwide spread of a uniform, typically Western/American, culture — or cultural heterogenization (文化异质化) — the proliferation of diverse cultural forms through processes of hybridization, indigenization, and resistance — remains actively debated. The dominance of American technology platforms (Google, Facebook, YouTube, Netflix) in global markets lends plausibility to homogenization arguments, but research consistently shows that global media are consumed, interpreted, and adapted in locally specific ways.

12.3 Digital Colonialism

The concept of digital colonialism (数字殖民主义) extends postcolonial critique to the digital age. Scholars including Sareeta Amrute, Michael Kwet, and Nick Couldry and Ulises Mejias argue that the global technology industry reproduces colonial patterns of domination through multiple mechanisms: the extraction of data from populations in the Global South by corporations based in the Global North; the imposition of technological standards, platforms, and business models designed in Silicon Valley on diverse global contexts; the exploitation of cheap labor in the Global South for manufacturing (electronics assembly in China, cobalt mining in the Congo) and digital piecework (content moderation in the Philippines, data labeling in Kenya); and the concentration of AI research and development in wealthy nations.

Digital colonialism (数字殖民主义): The use of digital technologies and data extraction to extend and reproduce colonial-era patterns of domination, exploitation, and dependency between the Global North and Global South. Manifested through data extraction, platform imperialism, labor exploitation, and the concentration of technological power and knowledge in wealthy nations.

Facebook’s Free Basics program — which offers limited, Facebook-curated internet access in developing countries — exemplifies the tensions of digital colonialism. Proponents argue that Free Basics extends connectivity to populations that would otherwise have no internet access. Critics argue that it positions Facebook as a gatekeeper controlling what the internet looks like for millions of users, entrenches dependence on a single American corporation, and undermines the principle of net neutrality (网络中立性) by creating a two-tiered internet in which those who can afford full access enjoy the open web while those who cannot are confined to a Facebook-curated walled garden.

12.4 Technology and Global Inequality

The relationship between technology and global inequality is complex and contested. Optimistic accounts emphasize the potential of ICTs to promote economic development in the Global South: mobile banking (M-Pesa in Kenya), telemedicine, agricultural information services, and e-commerce can connect previously excluded populations to economic opportunities. Pessimistic accounts emphasize the ways in which technology reinforces existing inequalities: the digital divide concentrates the benefits of ICTs among already-advantaged populations, data extraction transfers value from the Global South to the Global North, and automation threatens to displace the low-wage manufacturing jobs on which many developing economies depend.

The case of M-Pesa in Kenya illustrates both the potential and the limits of technology as a development tool. Launched in 2007, M-Pesa enabled mobile phone users to send and receive money, pay bills, and access financial services without a bank account. Research suggests that M-Pesa has lifted an estimated 194,000 Kenyan households out of poverty and has been particularly beneficial for women. However, M-Pesa’s success depended on specific social, economic, and regulatory conditions that may not be replicable elsewhere, and the platformization of financial services raises concerns about data extraction, financial surveillance, and the privatization of economic infrastructure.

12.5 Technology Governance and Global Politics

The governance of global technology raises fundamental questions about sovereignty, democracy, and power. Key issues include: the regulation of transnational technology corporations that operate across jurisdictions with different legal frameworks; the governance of the internet itself (domain name management, technical standards, content regulation); the geopolitics of technology competition (U.S.-China rivalry in AI, 5G, and semiconductor manufacturing); the protection of data flows across borders; and the regulation of emerging technologies (AI, biotechnology, autonomous weapons) that pose novel governance challenges.

The concept of technological sovereignty (技术主权) has gained prominence as nations seek to reduce dependence on foreign technology providers, protect domestic data from foreign access, and develop indigenous technological capabilities. The European Union’s pursuit of technological sovereignty through regulations such as the GDPR, the Digital Services Act, and the AI Act represents one model; China’s construction of a largely self-contained digital ecosystem (the Great Firewall, domestic platforms such as WeChat, Baidu, and Alibaba) represents another. The tension between global connectivity and national sovereignty is a defining challenge of technology governance in the twenty-first century.

Conclusion: Technology, Power, and the Sociological Imagination

The sociological study of technology resists simple narratives of progress or decline, liberation or domination. Technology is neither an autonomous force that determines social outcomes nor a neutral tool that merely reflects the intentions of its users. It is a social phenomenon — shaped by power relations, embedded in institutions, productive of inequalities, and always open to contestation and transformation.

The frameworks examined in this text — technological determinism, social construction, actor-network theory, Marxist political economy, feminist technoscience, postmodernist critique, and network society theory — offer complementary lenses for analyzing the complex, contradictory, and consequential relationships between technology and society. No single framework is sufficient; each illuminates dimensions that the others obscure.

What C. Wright Mills called the sociological imagination (社会学想象力) — the capacity to connect personal troubles to public issues, to situate individual experience within historical and structural contexts — is perhaps more urgently needed in the age of digital technology than ever before. When we experience the anxiety of constant connectivity, the frustration of algorithmic opacity, the exhilaration of global communication, or the injustice of digital exclusion, the sociological imagination invites us to ask: What social structures produce these experiences? Whose interests do they serve? And how might they be otherwise?
