SOC 324: Digital Cultures: Technology & Society
Jennifer Whitson
Estimated study time: 43 minutes
Sources and References
Primary textbook — Quan-Haase, Anabel. 2020. Technology and Society: Social Networks, Power, and Inequality. Third Edition. Oxford University Press.
Supplementary texts —
Lindgren, Simon. 2017. Digital Media & Society. SAGE Publications.
Norman, Don. 2013. The Design of Everyday Things. Revised and Expanded Edition. Basic Books.
Shariat, Jonathan, and Cynthia Savard Saucier. 2017. Tragic Design: The Impact of Bad Product Design and How to Fix It. O’Reilly Media.
Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. University of Chicago Press.
Bijker, Wiebe E., Thomas P. Hughes, and Trevor Pinch, eds. 1987. The Social Construction of Technological Systems. MIT Press.
MacKenzie, Donald, and Judy Wajcman, eds. 1999. The Social Shaping of Technology. Second Edition. Open University Press.
Online resources —
Coldwell, Will. “What Happens When an AI Knows How You Feel?” Wired.
Radiolab. “The Trust Engineers.” WNYC Studios.
Blum, Andrew. “What is the Internet, Really?” TED Talk.
Bishop, Sophie. “Influencer Creep.” AoIR Selected Papers of Internet Research.
Gregg, Melissa. “The Pipeline Metaphor.”
Vassallo, Trae, et al. “Elephant in the Valley.” elephantinthevalley.com.
Sanders, Megan, and Catherine Ashcraft. “Confronting the Absence of Women in Technology Innovation.” NCWIT.
99% Invisible. “Invisible Women.” Episode.
Tufekci, Zeynep. “The Social Internet: Frustrating, Enriching, but Not Lonely.” Public Culture.
Hicks, Marie. “Computer Love: Replicating Social Order Through Early Computer Dating.” Ada: A Journal of Gender, New Media, and Technology.
Chapter 1: Introduction to Technology and Society
What Is the Sociology of Technology?
The study of technology and society sits at the intersection of multiple disciplines, but sociology brings a distinctive lens to the conversation. While engineers ask how technology works and economists ask what technology costs, sociologists ask a more fundamental set of questions: Who builds technology, and for whom? How does technology reproduce or challenge existing social hierarchies? What assumptions about human life are embedded in the tools we design? These questions form the backbone of the field known as Science and Technology Studies (STS), also sometimes called Science, Technology, and Society. STS emerged in the latter half of the twentieth century as scholars grew dissatisfied with two competing but equally simplistic narratives about technology: that it is an autonomous force driving human progress, and that it is a neutral instrument whose effects depend entirely on how people choose to use it.
Quan-Haase’s Technology and Society provides a sociological framework for understanding these dynamics. The textbook emphasizes that technology is not merely a collection of devices but a social institution, shaped by the same forces of power, inequality, and culture that shape every other domain of human life. To study technology sociologically is to refuse the idea that invention happens in a vacuum. Every technology emerges from a particular social context, reflects the interests and assumptions of its creators, and produces effects that ripple unevenly across different groups of people.
Key Concepts and Definitions
Several foundational concepts recur throughout the study of digital cultures. Technology itself is a contested term. In everyday language, it tends to refer to electronic devices and digital platforms, but scholars define it more broadly as the application of knowledge for practical purposes, encompassing everything from stone tools to algorithms. The narrower focus on digital or information technologies reflects a historical moment in which computing, networking, and data processing have become central to economic production, social interaction, and cultural life.
Society refers not just to a collection of individuals but to the structured patterns of relationships, institutions, and power dynamics that organize collective life. The relationship between technology and society is therefore not a simple cause-and-effect chain but a complex process of mutual shaping. Technologies shape societies by enabling new forms of communication, work, surveillance, and entertainment. At the same time, societies shape technologies by determining which inventions receive funding, which designs become dominant, and which users are prioritized or ignored.
Digital culture refers to the norms, values, practices, and identities that emerge in and around digital technologies. It encompasses everything from how people present themselves on social media to how corporations extract value from user data, from the subcultures that form in online forums to the political movements that organize through encrypted messaging apps.
Why Study Technology Sociologically?
One of the most important reasons to study technology through a sociological lens is to counter the prevailing ideology of technological determinism, the belief that technology develops according to its own internal logic and that society must simply adapt to its consequences. Technological determinism appears in both utopian and dystopian forms. Utopian determinists celebrate each new invention as a step toward human liberation, while dystopian determinists warn that technology is destroying authentic human connection. Both versions share the assumption that technology is the primary driver and society is the passive recipient.
STS scholars challenge this assumption by demonstrating that technology is always the product of human choices, and that those choices are shaped by social, economic, and political forces. The design of a search engine, the architecture of a social media platform, the layout of a city’s broadband infrastructure: none of these are inevitable. They result from decisions made by particular people in particular institutional contexts, and they could have been made differently. Understanding this opens the door to critical engagement with technology, not merely as consumers or users but as citizens with the capacity to demand that technologies serve broader public interests.
Chapter 2: Defining Technology
Technological Determinism and Its Critics
Chapter 1 of Quan-Haase’s textbook introduces several theoretical frameworks for understanding the relationship between technology and society. The most influential, and most widely critiqued, is technological determinism. In its strongest form, technological determinism holds that the characteristics of a technology determine its social effects. The printing press caused the Protestant Reformation; the automobile caused suburban sprawl; the internet caused the decline of traditional media. These narratives have a seductive simplicity, but they collapse under scrutiny. The printing press was one factor among many in the Reformation, alongside theological disputes, political conflicts, and economic transformations. The automobile’s role in suburbanization depended on government policies like highway construction and mortgage subsidies, corporate lobbying by the auto industry, and racial segregation that made suburban development profitable for white families and developers.
Scholars distinguish between hard determinism and soft determinism. Hard determinism treats technology as the sole cause of social change, while soft determinism allows that social factors mediate technology’s effects but still grants technology a privileged causal role. Even soft determinism, however, tends to treat technology as an independent variable that acts upon society rather than emerging from it.
Social Construction of Technology (SCOT)
The most influential alternative to technological determinism is the Social Construction of Technology (SCOT) framework, developed by Trevor Pinch and Wiebe Bijker in the 1980s. SCOT argues that technologies do not have inherent properties that determine their use. Instead, different relevant social groups interpret and use technologies in different ways, and the meaning and function of a technology are the product of social negotiation rather than technical necessity.
Pinch and Bijker’s classic example is the bicycle. In the late nineteenth century, the bicycle was not a single, stable technology but a contested artifact. Young men saw the high-wheeled “penny-farthing” as a symbol of daring and athleticism. Women and older riders saw it as dangerous and impractical. Engineers proposed competing designs, and the eventual dominance of the “safety bicycle” with two equal-sized wheels was not the inevitable triumph of the best design but the result of social processes including changing gender norms, shifting consumer markets, and the development of new materials.
SCOT introduces the concept of interpretive flexibility, meaning that the same technology can mean different things to different groups. This flexibility eventually closes through a process of stabilization, in which one interpretation becomes dominant and the technology’s meaning and form become taken for granted. Once stabilized, a technology appears natural and inevitable, but SCOT reminds us that it could have developed differently.
Actor-Network Theory (ANT)
Actor-Network Theory (ANT), associated with Bruno Latour, Michel Callon, and John Law, takes a more radical approach to the technology-society relationship. ANT rejects the distinction between human and nonhuman actors, arguing that both participate in networks of action. A speed bump, for example, is an actor that slows down cars, enforcing traffic rules more reliably than a posted sign. A computer virus is an actor that disrupts networks and compels human responses. By treating technologies as actors rather than passive instruments, ANT draws attention to the ways in which material objects shape human behaviour.
ANT uses the concept of translation to describe how actors enroll other actors into networks, aligning their interests and creating stable associations. A successful technology, in ANT terms, is one that has enrolled a sufficient network of human and nonhuman allies: users, investors, regulations, infrastructure, and complementary technologies. This framework is useful for understanding why some technologies succeed and others fail, and why the same technology can thrive in one context and collapse in another.
Emotion-Sensing AI and the Problem of Neutrality
Will Coldwell’s article “What Happens When an AI Knows How You Feel?” illustrates the stakes of these theoretical debates. Emotion-sensing AI systems claim to detect human emotions from facial expressions, voice patterns, and physiological signals. If technology were truly neutral, these systems would simply measure objective emotional states. But research has shown that emotion recognition is deeply shaped by cultural context, that facial expressions do not map neatly onto internal emotional states, and that these systems perform differently across racial and gender groups. The technology is not a neutral mirror reflecting human emotions but a social construction that embeds particular assumptions about what emotions are and how they can be measured. This case study demonstrates why SCOT and ANT provide more adequate frameworks than technological determinism for understanding the relationship between technology and society.
Chapter 3: Social Media Platforms and the Social Internet
The Architecture of Social Media
Simon Lindgren’s chapter on social media platforms in Digital Media & Society provides a framework for understanding these systems not merely as communication tools but as infrastructures that shape the conditions of public life. Social media platforms are built on particular architectures: technical designs that determine what users can do, what information is visible, and how content circulates. Facebook’s News Feed algorithm, for example, does not simply display posts in chronological order but selects and ranks content based on predicted engagement, creating a curated experience that varies from user to user. Twitter’s (now X’s) character limit and retweet function encourage particular forms of expression: brevity, wit, and virality. Instagram’s visual orientation privileges certain kinds of self-presentation and rewards particular aesthetic conventions.
These architectural choices are not neutral. They are designed to maximize user engagement, which in turn generates advertising revenue. The attention economy describes a condition in which human attention is the scarce resource that platforms compete to capture and monetize. Every design decision, from the infinite scroll to the notification badge, is calibrated to keep users on the platform as long as possible.
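To make this logic concrete, consider a minimal sketch, in Python, of how an engagement-driven feed ranker might work. The scoring weights and field names below are invented for illustration; no platform’s actual ranking code is public, and real systems combine thousands of signals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_clicks: float    # model's estimate that the user will click
    predicted_comments: float  # model's estimate that the user will comment
    predicted_shares: float    # model's estimate that the user will share

def engagement_score(post: Post) -> float:
    """Score a post by predicted engagement, not by recency or accuracy.

    The weights are invented for illustration; the point is that whatever
    the model predicts will hold attention rises to the top of the feed.
    """
    return (1.0 * post.predicted_clicks
            + 4.0 * post.predicted_comments
            + 8.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Return posts ordered by predicted engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Note that nothing in this scoring function asks whether a post is accurate, civil, or good for the user’s wellbeing; whatever the model predicts will provoke a reaction rises to the top.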
Platforms as Intermediaries and Governors
Lindgren emphasizes that platforms are not merely conduits for user-generated content but active intermediaries that shape what content is created, how it circulates, and who sees it. Platform companies have historically positioned themselves as neutral intermediaries, claiming that they simply provide tools and that users are responsible for what they do with them. This claim of neutrality has come under increasing scrutiny as researchers, journalists, and regulators have documented the ways in which platform design amplifies misinformation, facilitates harassment, and concentrates economic power.
The concept of platform governance refers to the rules, policies, and algorithmic systems through which platforms regulate user behaviour. Content moderation policies determine what speech is permitted and what is removed. Algorithmic ranking systems determine what content is amplified and what is suppressed. Terms of service agreements establish the legal framework within which users interact with the platform. These governance mechanisms are largely opaque to users, creating significant asymmetries of power between platforms and their publics.
The Trust Engineers
The Radiolab episode “The Trust Engineers” provides a vivid illustration of how platform design shapes social behaviour. The episode focuses on Facebook’s data science team, which conducted experiments on millions of users to understand how design changes affected their behaviour. The most controversial of these experiments involved manipulating users’ News Feeds to test whether exposure to more positive or negative content affected users’ own emotional expression, a study that raised profound ethical questions about informed consent and the power of platforms to conduct social experiments at scale.
The episode reveals that the engineers who design these platforms are making consequential social decisions, often without the training or accountability structures that would be expected in other domains of social governance. They are, in effect, engineers of trust, designing the systems through which billions of people form impressions of one another, evaluate information, and make decisions. The sociological significance of this role cannot be overstated: platform engineers are among the most influential actors in contemporary social life, yet their decisions are shaped primarily by corporate imperatives rather than democratic deliberation.
Chapter 4: Is Technology Neutral?
The Politics of Design
Quan-Haase’s Chapter 3 challenges the widespread assumption that technology is neutral, a mere tool whose effects depend entirely on how people use it. This “instrumentalist” view treats technology as value-free, but STS scholars have demonstrated convincingly that technologies embody particular values, assumptions, and power relations. Langdon Winner’s famous essay “Do Artifacts Have Politics?” provides the classic statement of this position. Winner argued that the overpasses on Long Island, New York, were deliberately designed with low clearances to prevent buses, which were used primarily by low-income people and people of colour, from reaching the beaches. Whether or not this particular historical claim is entirely accurate, the broader point is well established: design choices have political consequences, and those consequences are often distributed unevenly across social groups.
The Psychopathology of Everyday Things
Don Norman’s chapter “The Psychopathology of Everyday Things” from The Design of Everyday Things approaches the question of design from a different but complementary angle. Norman argues that when people have difficulty using a technology, the problem usually lies not with the user but with the design. Affordances, a concept Norman borrows from the psychologist James Gibson, refer to the properties of an object that suggest how it can be used. A door handle affords pulling; a flat plate affords pushing. When affordances are well designed, users intuitively understand how to interact with the object. When they are poorly designed, users become confused, frustrated, and prone to error.
Norman introduces several key concepts that are essential for understanding the relationship between design and social life. Signifiers are the cues that communicate where action should take place. A button that looks pressable is a signifier; an underlined word on a webpage that suggests a hyperlink is a signifier. Mapping refers to the relationship between controls and their effects. A well-mapped stove has burner controls arranged in the same spatial pattern as the burners themselves. Feedback refers to the information that a system provides about the results of an action. A click sound when a button is pressed, a progress bar during a file download: these are forms of feedback that help users understand what is happening.
These concepts may seem narrowly technical, but they have profound sociological implications. Design choices determine who can use a technology and who is excluded. A website that relies on small text and complex navigation may be unusable for people with visual impairments or limited digital literacy. A smartphone app that requires a high-speed internet connection excludes users in areas with poor connectivity. Every design choice is an implicit statement about who the intended user is, what their capabilities are, and what they are expected to do.
Values in Design
The concept of values in design (sometimes called value-sensitive design) formalizes the insight that technologies are never neutral. Scholars in this tradition argue that designers should explicitly consider the values their technologies embody and the social consequences they produce. This means asking questions like: Who benefits from this design? Who is harmed? Whose perspectives are included in the design process, and whose are excluded? What assumptions about human behaviour, identity, and social life are built into the technology?
Technologies embed scripts, which are the behavioural patterns that designers inscribe into artifacts. A script defines the expected user and the expected use. When a technology works well, its script is invisible because users’ behaviour conforms to the designer’s expectations. When it fails, the disconnect between the script and actual user behaviour becomes apparent. The concept of scripts draws attention to the fact that design is a form of social regulation: by shaping the material environment, designers shape the range of possible human actions.
Chapter 5: Hubs, Tubes, and Valleys
The Physical Infrastructure of the Internet
Quan-Haase’s Chapter 4 addresses a dimension of technology that is often overlooked in discussions of digital culture: its physical materiality. The internet is commonly imagined as a placeless, immaterial “cloud,” but it is in fact a vast physical infrastructure of undersea cables, data centres, server farms, cell towers, and fibre-optic networks. Understanding this materiality is essential for understanding the social dynamics of digital culture, because the physical infrastructure of the internet is unevenly distributed, privately owned, and shaped by the same geographic, economic, and political forces that structure all other forms of infrastructure.
Andrew Blum’s TED talk “What is the Internet, Really?” provides a compelling introduction to the material internet. Blum describes visiting the physical locations where the internet exists: the nondescript buildings that house internet exchange points, the landing stations where undersea cables come ashore, the massive data centres that store and process the world’s information. These sites are concentrated in particular places, usually near major cities and in regions with cheap electricity and favourable climates for cooling. The geography of the internet is not the flat, borderless space of popular imagination but a highly uneven landscape shaped by the logic of capital investment.
Undersea Cables and Global Connectivity
Approximately 99 per cent of intercontinental data traffic travels through undersea fibre-optic cables, a fact that belies the wireless imagery of “the cloud.” These cables are owned and operated by a small number of telecommunications companies and, increasingly, by major technology firms like Google, Facebook, Microsoft, and Amazon. The routes of these cables follow historical patterns of colonial trade and geopolitical influence, connecting major financial centres while bypassing large parts of the Global South. This means that the physical infrastructure of the internet reproduces and reinforces existing global inequalities.
The vulnerability of this infrastructure is also significant. Undersea cables are susceptible to damage from anchors, earthquakes, and sabotage. When cables are cut, entire countries can lose internet connectivity, with devastating consequences for economies and communication systems that have become dependent on digital networks. The concentration of internet infrastructure in a small number of physical locations also creates points of vulnerability that can be targeted by state actors or exploited for surveillance.
Data Centres and Environmental Impact
Data centres are the physical facilities that house the servers, storage systems, and networking equipment that power the internet. They consume enormous amounts of electricity, both to run the computing equipment and to cool it. By some estimates, data centres account for approximately 1 to 2 per cent of global electricity consumption, a figure that is growing as demand for cloud computing, streaming media, and artificial intelligence increases.
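A rough back-of-envelope calculation shows the scale implied by that percentage. The global consumption figure below is a round-number assumption used only for illustration, not a precise statistic:

```python
# Back-of-envelope estimate of data-centre electricity use.
# Both inputs are round-number assumptions for illustration only.
GLOBAL_ELECTRICITY_TWH = 25_000   # rough global consumption, TWh per year
DATA_CENTRE_SHARE = 0.015         # midpoint of the 1-2 per cent range

data_centre_twh = GLOBAL_ELECTRICITY_TWH * DATA_CENTRE_SHARE
print(f"Data centres: roughly {data_centre_twh:.0f} TWh per year")
# -> roughly 375 TWh per year, on the order of the annual electricity
#    consumption of a mid-sized industrialized country.
```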
The environmental impact of digital technologies extends beyond energy consumption. The production of computing devices requires the extraction of rare earth minerals and other materials, often under conditions of extreme exploitation. Cobalt, essential for lithium-ion batteries, is mined in the Democratic Republic of Congo under conditions that include child labour and environmental devastation. The disposal of electronic devices generates e-waste, much of which is shipped to developing countries where it is processed under unsafe conditions, exposing workers and communities to toxic materials.
Silicon Valley and the Geography of Innovation
The concentration of the technology industry in Silicon Valley and a small number of other innovation hubs reflects and reinforces broader patterns of geographic inequality. The success of Silicon Valley is not the result of some natural advantage but of a specific constellation of historical factors: military investment in electronics during and after World War II, the proximity of Stanford University and its entrepreneurial culture, the availability of venture capital, and the development of a local labour market with deep expertise in computing and engineering.
This geographic concentration has important social consequences. It means that the people who design the technologies used by billions are drawn from a narrow demographic and geographic base. The values, assumptions, and life experiences of Silicon Valley engineers and entrepreneurs are inscribed into the technologies they create, shaping the digital experiences of users around the world who may have very different needs, contexts, and perspectives.
Chapter 6: Digital Divides
Defining the Digital Divide
Quan-Haase’s Chapter 6 examines the concept of the digital divide, which refers to the unequal distribution of access to, use of, and benefits from digital technologies. The digital divide is not a single gap but a complex set of overlapping inequalities shaped by income, education, age, gender, race, geography, and disability. Early discussions of the digital divide focused primarily on the first-level divide, the gap between those who have physical access to computers and the internet and those who do not. While this gap has narrowed in many countries, it remains significant globally, with billions of people still lacking reliable internet access.
More recent scholarship has focused on the second-level divide, which concerns not access but usage. Even among people who have access to the internet, there are significant differences in how they use it. People with higher levels of education and income tend to use the internet for activities that enhance their social and economic capital, such as searching for information, acquiring new skills, and networking. People with lower levels of education and income are more likely to use the internet primarily for entertainment and social communication. These usage differences mean that the internet can actually reinforce existing inequalities rather than reducing them.
A third-level divide has also been identified, concerning the outcomes or tangible benefits that people derive from their internet use. Even when people have similar access and usage patterns, they may derive different benefits depending on their pre-existing social capital, digital literacy, and the quality of the online resources available to them.
Design and Exclusion
Jonathan Shariat and Cynthia Savard Saucier’s “Design Can Exclude” chapter from Tragic Design provides concrete examples of how design decisions can exclude particular groups of users. The chapter documents cases in which poor design has caused real harm: medical devices with confusing interfaces that led to patient deaths, voting machines with misleading layouts that caused voters to select the wrong candidate, and websites that are inaccessible to people with disabilities.
The concept of inclusive design responds to these problems by arguing that technologies should be designed to be usable by the widest possible range of people, including people with disabilities, people who are not native speakers of the dominant language, and people with limited digital literacy. Inclusive design is not merely a matter of adding accessibility features after the fact but of centring diverse users throughout the design process.
Consider the design of CAPTCHA systems, which require users to identify distorted text or select images matching a description. These systems were designed to distinguish human users from automated bots, but they systematically exclude people with visual impairments, cognitive disabilities, and limited literacy. Audio alternatives are often difficult to understand. The “design problem” of preventing spam is solved in a way that creates a new problem of excluding vulnerable users. This is a microcosm of the broader dynamic that Shariat and Saucier describe: design decisions that seem technical and neutral often have significant social consequences.
Disability and Technology
The relationship between disability and technology is complex. On one hand, digital technologies have created new opportunities for people with disabilities: screen readers enable blind users to access text, voice recognition enables people with mobility impairments to control computers, and online communities provide spaces for connection and mutual support. On the other hand, many digital technologies are designed without consideration for users with disabilities, creating new forms of exclusion. The Web Content Accessibility Guidelines (WCAG) provide standards for accessible web design, but compliance remains uneven, and many websites and applications fail to meet basic accessibility requirements.
The social model of disability is relevant here. This model, which emerged from disability rights activism, argues that disability is not an inherent property of individuals but a product of social and environmental barriers. A wheelchair user is not disabled by their body but by the absence of ramps, elevators, and accessible transportation. Similarly, a blind person is not disabled by their vision but by websites that lack alt text, screen reader compatibility, and keyboard navigation. Applying this model to digital technologies shifts the focus from individual impairment to design choices that either enable or disable particular users.
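The alt-text example can be made concrete with a short sketch. The following hypothetical Python checker, using only the standard library, flags image tags that give a screen reader nothing to announce; it is an illustration of the principle, not a substitute for a full WCAG audit:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags that lack an alt attribute, leaving screen
    readers with nothing to describe to blind users."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            if "alt" not in attr_dict:
                self.missing_alt.append(attr_dict.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<img src="chart.png"><img src="logo.png" alt="Company logo">')
print(checker.missing_alt)  # ['chart.png'] -- invisible to a screen reader
```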
Chapter 7: Digital Work
The Transformation of Work in the Digital Age
Quan-Haase’s Chapter 7 examines how digital technologies have transformed the nature of work. The concept of digital labour encompasses a wide range of activities, from the highly paid work of software engineers and data scientists to the precarious gig work of ride-share drivers and food delivery couriers, to the unpaid labour of social media users who generate the content that platforms monetize.
The platform economy or gig economy describes a model of work in which platforms like Uber, TaskRabbit, and Deliveroo connect workers with customers, taking a cut of each transaction. Platform companies typically classify workers as independent contractors rather than employees, avoiding obligations like minimum wage, benefits, and job security. This classification has been challenged in courts and legislatures around the world, with workers arguing that the degree of control platforms exercise over their work makes them employees in all but name.
Algorithmic management is a key feature of platform work. Instead of human supervisors, platform workers are managed by algorithms that assign tasks, monitor performance, and determine pay. Uber’s algorithm, for example, decides which rides drivers are offered, tracks their acceptance rates, and adjusts fares based on demand. This form of management is often experienced as opaque and arbitrary, since workers cannot see or challenge the logic behind algorithmic decisions.
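A toy sketch in Python can illustrate the logic of algorithmic management. The pricing formula and performance thresholds below are invented for illustration; real platform algorithms are proprietary and far more complex, which is precisely the opacity that workers describe:

```python
def surge_multiplier(ride_requests: int, available_drivers: int) -> float:
    """Raise fares when demand outstrips driver supply.

    A toy version of demand-based pricing, invented for illustration;
    real platform pricing models are proprietary and far more complex.
    """
    if available_drivers == 0:
        return 3.0  # cap the multiplier when no drivers are free
    ratio = ride_requests / available_drivers
    return min(3.0, max(1.0, ratio))

def should_deactivate(acceptance_rate: float, rating: float) -> bool:
    """A toy performance rule: the 'manager' is a threshold, not a person.

    The cutoffs are invented; the point is that workers typically cannot
    see thresholds like these or appeal the outcomes they produce.
    """
    return acceptance_rate < 0.8 or rating < 4.6

print(surge_multiplier(ride_requests=120, available_drivers=60))  # 2.0
print(should_deactivate(acceptance_rate=0.75, rating=4.9))        # True
```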
Influencer Labour and Affective Work
Sophie Bishop’s “Influencer Creep” examines the labour of social media influencers, people who build audiences on platforms like Instagram, YouTube, and TikTok and monetize their personal brands through sponsorships, advertising, and merchandise sales. Bishop argues that influencer labour has “crept” beyond the boundaries of traditional work, blurring the distinction between work and leisure, professional and personal life, producer and consumer.
Influencer work is a form of affective labour, meaning that it involves the production and management of emotions, relationships, and personal identity. Influencers must constantly perform authenticity, sharing intimate details of their lives to build trust with their audiences. At the same time, they must strategically manage their self-presentation to attract sponsors and maintain their brand. This dual imperative creates a form of labour that is both deeply personal and thoroughly commercialized.
The economics of influencer work are highly unequal. A small number of top influencers earn substantial incomes, while the vast majority earn little or nothing. Platforms’ algorithmic systems play a crucial role in determining who succeeds, since visibility depends on algorithmic amplification rather than the intrinsic quality of content. This creates a winner-take-all dynamic in which a few creators capture most of the attention and revenue while most remain invisible.
The Myth of the Passion Economy
The rhetoric of the passion economy, in which people are encouraged to “do what they love” and monetize their hobbies and interests, obscures the exploitative dynamics of digital labour. When work is framed as self-expression and personal fulfilment, workers are less likely to recognize and resist exploitation. The expectation that creative workers should be motivated by passion rather than compensation justifies low pay, long hours, and the absence of labour protections.
This dynamic is not unique to the digital economy but is intensified by it. Social media platforms profit from the free labour of their users, who create content, generate data, and provide attention that is sold to advertisers. The concept of free labour, developed by Tiziana Terranova, describes the ways in which internet users perform productive work for which they are not compensated. Every post, like, and comment is a form of labour that generates value for platform owners.
Chapter 8: Technology and Gender
The Gendering of Technology
Quan-Haase’s Chapter 8 examines the relationship between technology and gender, arguing that technology has historically been constructed as a masculine domain and that this construction has significant consequences for who participates in technological development, who benefits from technological change, and how technologies are designed.
The association between masculinity and technology is not natural or inevitable but is the product of historical processes. In the early history of computing, many of the pioneering programmers were women. During World War II, women performed complex mathematical calculations and programmed early computers like the ENIAC. As computing became more prestigious and lucrative, however, men increasingly dominated the field, and programming was reconstructed as a masculine activity requiring abstract mathematical reasoning rather than the clerical and communicative skills that had characterized early computing work.
The Pipeline Problem and Its Limits
Melissa Gregg’s analysis of the pipeline metaphor challenges the dominant explanation for women’s underrepresentation in technology. The pipeline metaphor suggests that the solution to gender inequality in tech is to increase the number of women entering the field, by encouraging girls to study STEM subjects, providing scholarships and mentoring programs, and recruiting women into technology companies. While these interventions are valuable, Gregg argues that the pipeline metaphor is inadequate because it focuses on supply (the number of women entering the field) while ignoring demand (whether the industry is structured in ways that retain and promote women).
Research consistently shows that women leave the technology industry at much higher rates than men, citing hostile workplace cultures, sexual harassment, lack of advancement opportunities, and the difficulty of balancing demanding work schedules with caregiving responsibilities. The pipeline metaphor treats these as individual problems that women must overcome rather than structural problems that the industry must address.
The Elephant in the Valley
The “Elephant in the Valley” survey by Trae Vassallo and colleagues provides stark evidence of the experiences of women in Silicon Valley. The survey found that 60 per cent of women in tech reported receiving unwanted sexual advances, and of those, 65 per cent reported advances from a superior. Nearly 90 per cent reported witnessing sexist behaviour at company events or industry conferences. One in three reported feeling afraid for their personal safety because of work-related circumstances.
These findings demonstrate that the underrepresentation of women in technology is not primarily a pipeline problem but a culture problem. The technology industry has developed a workplace culture that is often hostile to women and other marginalized groups, and this culture is reflected in the technologies the industry produces. When design teams lack diversity, the technologies they create are more likely to embed biases and exclude the perspectives of underrepresented groups.
Chapter 9: Human Bias in Technology Design
Confronting Bias
Megan Sanders and Catherine Ashcraft’s “Confronting the Absence of Women in Technology Innovation” extends the analysis of gender and technology by examining how the underrepresentation of women in technology design leads to products that fail to serve women’s needs. They argue that diversity in design teams is not just a matter of fairness but of functionality: more diverse teams produce better products because they bring a wider range of perspectives and experiences to the design process.
The concept of algorithmic bias refers to the ways in which automated systems produce systematically unfair outcomes for particular groups. Algorithmic bias can arise from several sources: biased training data that reflects historical patterns of discrimination, design choices that fail to account for the needs of diverse users, and evaluation metrics that prioritize some outcomes over others. Facial recognition systems, for example, have been shown to perform significantly less accurately on darker-skinned faces, particularly darker-skinned women, because they were trained primarily on datasets of lighter-skinned faces.
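A small simulation can show the mechanism by which imbalanced training data produces unequal error rates. Everything below is synthetic and invented for illustration: a one-feature classifier picks the decision threshold that minimizes error on pooled training data dominated by one group, and the underrepresented group, whose scores are distributed differently, pays the price.

```python
import random

random.seed(0)

def sample(pos_mean: float, neg_mean: float, n: int) -> list[tuple[float, int]]:
    """Synthetic match scores: (score, true_label) pairs for one group."""
    data = [(random.gauss(pos_mean, 1.0), 1) for _ in range(n)]
    data += [(random.gauss(neg_mean, 1.0), 0) for _ in range(n)]
    return data

def error_rate(data: list[tuple[float, int]], threshold: float) -> float:
    """Fraction of points the threshold classifier gets wrong."""
    wrong = sum((score > threshold) != bool(label) for score, label in data)
    return wrong / len(data)

# Group A dominates the training data; group B's scores are distributed
# differently. All numbers here are invented for illustration.
train = sample(pos_mean=2.0, neg_mean=-1.0, n=9_500)   # group A: 95%
train += sample(pos_mean=1.0, neg_mean=-2.0, n=500)    # group B: 5%

# Choose the threshold that minimizes error on the pooled training data.
threshold = min((t / 10 for t in range(-30, 31)),
                key=lambda t: error_rate(train, t))

test_a = sample(2.0, -1.0, 10_000)
test_b = sample(1.0, -2.0, 10_000)
print(f"threshold: {threshold:.1f}")
print(f"group A error: {error_rate(test_a, threshold):.3f}")   # lower
print(f"group B error: {error_rate(test_b, threshold):.3f}")   # ~2x higher
```

Because the threshold is tuned to the majority group’s distribution, the system is accurate “on average” while being markedly less accurate for the minority group, mirroring the facial recognition findings described above.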
Invisible Women
The 99% Invisible episode “Invisible Women,” drawing on Caroline Criado Perez’s research, documents the pervasive consequences of designing for a default male user. From crash test dummies modelled on male bodies (resulting in higher injury rates for women in car accidents) to smartphones too large for average female hands, to medical research that excludes female subjects, the episode reveals a systematic pattern of gender data gaps in which women’s bodies, experiences, and needs are treated as deviations from a male norm.
In the digital realm, these data gaps manifest in technologies that fail to recognize women’s voices (voice recognition systems trained on male voices), health apps that lack menstruation tracking, and AI systems that associate women with domestic roles. The solution is not simply to “add women” to existing designs but to fundamentally rethink the design process, centring the experiences of marginalized users from the outset rather than treating them as afterthoughts.
The concept of intersectionality, developed by legal scholar Kimberlé Crenshaw, is essential for understanding how bias operates in technology design. Intersectionality recognizes that people experience multiple, overlapping forms of marginalization based on race, gender, class, disability, sexuality, and other dimensions of identity. A Black woman, for example, does not experience racism and sexism as separate, additive forces but as a combined form of discrimination that is qualitatively different from either alone. Technologies that fail to account for intersectionality may address one dimension of bias while ignoring others, producing solutions that benefit privileged members of marginalized groups while leaving the most vulnerable behind.
Chapter 10: Alone Together
Social Relationships in the Digital Age
Quan-Haase’s Chapter 9 examines how digital technologies shape interpersonal relationships, challenging both utopian and dystopian narratives. Sherry Turkle’s concept of being “alone together” captures the paradox of digital social life: we are more connected than ever, with constant access to social networks, messaging apps, and video calls, yet many people report feeling lonelier and more isolated than before.
Turkle argues that digital communication encourages performative interaction, in which people carefully curate their self-presentation rather than engaging in authentic, spontaneous conversation. Texting and social media allow people to control the timing, content, and emotional register of their communication in ways that face-to-face interaction does not. This control can reduce the vulnerability and risk that are essential to deep human connection.
However, Turkle’s critique has been challenged by scholars who argue that she romanticizes face-to-face interaction and underestimates the value of online communication for people who face barriers to in-person sociality. For people with disabilities, people in rural areas, people with stigmatized identities, and people who are geographically separated from friends and family, digital communication can be a lifeline rather than a substitute for “real” connection.
The Social Internet
Zeynep Tufekci’s essay “The Social Internet: Frustrating, Enriching, but Not Lonely” provides a more nuanced account of online social life. Tufekci argues that the internet has not replaced face-to-face interaction but has added new dimensions to social life. People use digital technologies to maintain existing relationships, form new ones, and participate in communities that would not be possible without networked communication. Online communities of interest, support groups for people with rare diseases, activist networks that span national borders: these are genuinely new forms of social connection that the internet has enabled.
Tufekci also acknowledges the frustrations of online social life: the toxicity, the misinformation, the algorithmic manipulation. But she argues that these problems are not inherent to digital communication but are the product of particular platform designs and business models. A social internet designed to serve public interests rather than maximize advertising revenue would produce very different social outcomes.
Networked Individualism
The sociologist Barry Wellman has proposed the concept of networked individualism to describe the social structure that has emerged in the digital age. In contrast to the traditional model of social life organized around tight-knit, geographically bounded communities, networked individualism describes a world in which individuals are the primary unit of social organization, maintaining diverse, far-flung networks of weak and strong ties. Digital technologies facilitate this shift by enabling people to maintain connections across geographic distances, to participate in multiple communities simultaneously, and to customize their social environments.
Networked individualism has both liberating and isolating dimensions. It gives individuals more autonomy and choice in their social lives, allowing them to connect with people who share their interests and values rather than being limited to the people in their immediate physical environment. But it also places greater demands on individuals to actively maintain their social networks, and it can leave people without the deep, reliable support structures that characterized more traditional forms of community.
Chapter 11: Online Identity and Self-Presentation
Digital Identity and Impression Management
Quan-Haase’s Chapter 10 examines the construction of identity in digital environments. Erving Goffman’s concept of impression management, originally developed to describe face-to-face interaction, has been widely applied to online self-presentation. Goffman argued that social life is a kind of performance in which individuals present particular versions of themselves to different audiences, managing their appearance, behaviour, and speech to create desired impressions.
Digital technologies transform impression management by providing new tools for self-presentation and new audiences to present to. Social media profiles, for example, allow users to carefully curate their identities, selecting flattering photos, crafting witty bios, and sharing content that projects a desired image. The asynchronous nature of much online communication gives users time to compose and edit their self-presentations in ways that are not possible in face-to-face interaction.
You Looked Better on Facebook
The gap between online self-presentation and offline reality is a recurring theme in discussions of digital identity. The curated nature of social media profiles fuels what researchers call upward social comparison, in which users measure their own lives against the idealized presentations of others. Because people tend to share positive experiences and conceal negative ones, social media can create the impression that everyone else is happier, more successful, and more attractive than oneself. This phenomenon has been linked to increased rates of depression, anxiety, and low self-esteem, particularly among young people.
However, the relationship between social media use and mental health is complex and contested. Some researchers argue that the negative effects of social media have been overstated and that the evidence for a causal relationship between social media use and mental health problems is weak. Others point out that social media use is not a monolithic activity: passive consumption (scrolling through others’ posts) appears to have different effects than active engagement (posting, commenting, messaging).
Computer Love
Marie Hicks’s article “Computer Love: Replicating Social Order Through Early Computer Dating” examines the history of computer dating services, which emerged in the 1960s as one of the earliest applications of computing to personal life. Hicks shows that these early dating systems were not neutral matchmaking tools but reflected and reinforced the social hierarchies of their time. They were designed primarily for heterosexual matching, embedded conventional assumptions about gender roles and attractiveness, and reproduced racial segregation through their matching algorithms.
Hicks’s historical analysis reveals a pattern that continues in contemporary dating apps. Algorithms that learn from user behaviour tend to reproduce existing patterns of discrimination, since users’ preferences are shaped by the same social forces that produce inequality in the offline world. Swipe-based dating apps, for example, have been shown to disadvantage people of colour, who receive fewer matches than white users, not because of the app’s explicit design but because the app amplifies existing patterns of racial preference.
Algorithmic discrimination occurs when automated systems produce outcomes that systematically disadvantage particular social groups. Unlike intentional discrimination, algorithmic discrimination often arises from the interaction between biased training data, design choices that fail to account for social context, and feedback loops that amplify existing inequalities. Addressing algorithmic discrimination requires not only technical interventions (such as debiasing training data) but also social and institutional changes (such as diversifying design teams and establishing regulatory oversight).
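The feedback-loop mechanism can be illustrated with a toy simulation. The numbers below are invented: two groups of profiles start with identical visibility, users match with one group marginally more often, and each round the platform reallocates visibility toward whatever “performed” in the previous round.

```python
# A toy feedback loop, with invented numbers: the platform has not
# introduced any bias of its own, but it learns and amplifies a small
# disparity in users' behaviour.
exposure = {"group_a": 1.0, "group_b": 1.0}        # relative visibility
match_rate = {"group_a": 0.105, "group_b": 0.095}  # users' initial bias

for _ in range(10):
    total = sum(exposure[g] * match_rate[g] for g in exposure)
    for g in exposure:
        # Each group's new visibility is proportional to the matches it
        # generated last round, so early advantages compound.
        exposure[g] = 2.0 * exposure[g] * match_rate[g] / total

print(exposure)  # group_a ends up with roughly 2.7x group_b's visibility
```

After ten rounds, a one-percentage-point gap in users’ initial behaviour has grown into a visibility gap of nearly three to one; the algorithm has not created the bias, but it has amplified it.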
Chapter 12: Technology Futures
Emerging Technologies and Social Futures
Quan-Haase’s Chapter 12 turns to the future, examining emerging technologies and their potential social implications. Artificial intelligence (AI), the Internet of Things (IoT), big data, and automation are among the technologies that are likely to reshape social life in the coming decades. Each of these technologies raises fundamental questions about power, inequality, privacy, and democratic governance.
AI systems are increasingly used to make consequential decisions about hiring, lending, criminal sentencing, and healthcare. These systems promise efficiency and objectivity, but as we have seen throughout this course, the claim of technological neutrality is misleading. AI systems are trained on historical data that reflects existing patterns of discrimination, and their outputs can reinforce and amplify those patterns. The opacity of many AI systems, often described as the black box problem, makes it difficult to identify and challenge discriminatory outcomes.
Surveillance and Privacy
The proliferation of digital technologies has created unprecedented capacities for surveillance. Governments use digital technologies to monitor citizens, from CCTV cameras with facial recognition to metadata collection programs that track communication patterns. Corporations collect vast amounts of data about consumers’ behaviour, preferences, and social connections, using this data to target advertising, personalize services, and predict future behaviour.
Shoshana Zuboff’s concept of surveillance capitalism describes a new economic logic in which the extraction and analysis of behavioural data is the primary source of profit. Under surveillance capitalism, human experience is treated as raw material to be mined for data, which is then used to predict and modify human behaviour. Zuboff argues that this represents a fundamental threat to human autonomy and democratic self-governance, since it concentrates the power to shape human behaviour in the hands of a small number of corporations.
The concept of the panopticon, borrowed from Michel Foucault’s analysis of Jeremy Bentham’s prison design, is often invoked to describe the social effects of pervasive surveillance. In the panopticon, prisoners can be observed at any time without knowing whether they are actually being watched, leading them to internalize the surveillance and regulate their own behaviour. Digital surveillance creates a similar dynamic, in which the knowledge that one’s online activities are being monitored leads to self-censorship and conformity.
Automation and the Future of Work
The automation of work through AI, robotics, and algorithmic systems raises profound questions about the future of employment, economic inequality, and human purpose. Optimistic accounts suggest that automation will free humans from tedious and dangerous work, creating new opportunities for creativity and leisure. Pessimistic accounts warn of mass unemployment, as machines replace human workers in an ever-expanding range of tasks.
The historical record suggests that technological change does not simply eliminate jobs but transforms the nature of work, creating new occupations while destroying old ones. However, the distribution of costs and benefits is highly unequal. Workers with high levels of education and specialized skills are more likely to benefit from automation, while workers in routine manual and cognitive tasks are more likely to be displaced. Without deliberate policy interventions, such as education and retraining programs, social safety nets, and potentially universal basic income, automation could exacerbate existing economic inequalities.
Ethics of Technology Design
The ethical dimensions of technology design have become increasingly prominent in public discourse. The concept of responsible innovation calls on technologists to anticipate the social consequences of their work, engage with diverse stakeholders, and take responsibility for the impacts of the technologies they create. This requires moving beyond the narrow technical focus of engineering education to incorporate ethical reasoning, social awareness, and democratic accountability into the design process.
The central insight of STS, and of this course, is that technology is not an autonomous force but a social product. Technologies are designed by particular people, in particular institutional contexts, reflecting particular values and interests. This means that technologies can be redesigned. The digital futures we create are not predetermined but are the result of choices we make, individually and collectively, about what kinds of technologies to build, how to regulate them, and whose interests they should serve. Understanding the social dimensions of technology is the first step toward shaping technological futures that are more just, more inclusive, and more democratic.
Looking Forward: Critical Digital Citizenship
The knowledge gained from studying digital cultures equips students not merely to use technologies more effectively but to engage with them critically. Critical digital citizenship involves understanding the social, economic, and political forces that shape digital technologies; recognizing the values and assumptions embedded in the tools we use; and participating in public deliberation about how technologies should be designed, regulated, and governed.
This means asking questions that go beyond individual user experience: Who profits from this platform? Whose labour makes it possible? What data is being collected, and how is it being used? Who is included in the design process, and who is excluded? What alternative designs might serve public interests more effectively? These are not merely academic questions but urgent practical ones, as digital technologies become ever more deeply integrated into every dimension of social life.
The sociological study of technology does not provide definitive answers to these questions, but it provides the conceptual tools and empirical evidence needed to ask them well. It reveals the contingency of technological development, the politics of design, and the possibilities for democratic engagement with the technologies that shape our world. In doing so, it opens space for imagining and building digital futures that are not merely efficient or profitable but genuinely humane.