COMMST 491: How to Survive Toxic Media Ecologies
Shana MacDonald
Estimated study time: 39 minutes
Sources and References
Primary textbook: Whitney Phillips & Ryan M. Milner, You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape (MIT Press, 2021).
Supplementary texts: Alice Marwick & Rebecca Lewis, Media Manipulation and Disinformation Online; Safiya Umoja Noble, Algorithms of Oppression; Ruha Benjamin, Race After Technology; Sander van der Linden, Foolproof; Neil Postman, Amusing Ourselves to Death.
Online resources: Data & Society Research Institute; Center for Countering Digital Hate; Stanford Internet Observatory; Oxford Internet Institute.
Chapter 1 — Media Ecologies: A Framework for Survival
The phrase “media ecology” was popularized by Marshall McLuhan and developed in detail by Neil Postman, who defined it as the study of media as environments — the way communication technologies structure the conditions under which people think, feel, remember, and act. McLuhan’s famous aphorism “the medium is the message” insisted that the content delivered by a medium matters less than the medium’s structural effects on perception. A printing press does not simply circulate text; it reorganizes the mind around linearity, abstraction, and private reading. Television does not merely transmit news; it reorganizes politics around face, voice, and entertainment. Postman, in Amusing Ourselves to Death, extended this argument to warn that the televisual environment had converted serious public discourse into spectacle, and that a culture that cannot tell the difference between argument and entertainment will lose the ability to govern itself.
Phillips and Milner build on this tradition but update it for the algorithmic age. In You Are Here, they ask readers to imagine the media environment as a literal ecosystem — watershed, forest, atmosphere — where every actor is downstream of every other. The metaphor is deliberate. Ecologies are not controlled by any single agent; they emerge from countless interactions among organisms, resources, and flows. A toxin introduced at one point travels through the system. A species that collapses in one region has effects in distant regions. Health and sickness are always systemic, not individual.
This framework matters because the dominant alternative — thinking about media “effects” as discrete arrows from senders to receivers — misses most of what actually happens online. A YouTube radicalization pipeline is not a single message that converts a single viewer. It is a cascade in which recommender systems, monetization incentives, subcultural in-jokes, cross-platform amplification, mainstream press coverage, and user identity work all interact. To understand it you need to see the whole watershed at once.
For survival, the ecological framing has three immediate consequences. First, responsibility is distributed. Blaming “bad users” or “bad platforms” alone misreads a system in which ordinary people, journalists, algorithms, and advertisers all contribute to the flow. Second, everything is connected. The joke you forward from a fringe forum today becomes the rhetoric of a cable-news host next month. Third, individual choices still matter, but only when understood as participation in a shared environment. You are never just a lone consumer; you are a node in a network whose actions ripple outward.
The seminar therefore begins by replacing the consumer metaphor with the citizen-of-an-ecology metaphor. You are not picking items off a shelf. You are drinking from a river that you and others also pour into.
Chapter 2 — Network Pollution and the Information Environment
In You Are Here, Whitney Phillips and Ryan Milner develop the concept of network pollution to describe the cumulative degradation of the information environment by toxic communication. Their analogy is precise. Industrial pollution is not the product of any single malicious actor. It is the aggregate externality of thousands of legal, rational, profit-seeking activities. No factory owner sets out to poison the watershed, yet the watershed is poisoned nonetheless. Network pollution works the same way: spam, clickbait, harassment, disinformation, manipulative optimization, engagement-bait outrage, and platform-amplified conspiracism together produce an environment in which it becomes harder and harder to breathe.
The concept has real explanatory power because it directs attention away from individual villains and toward the structural conditions that generate pollution. Social platforms are optimized for attention. Attention is maximized by strong emotional reactions. Strong emotional reactions are most reliably produced by outrage, fear, and identity threat. So the equilibrium behavior of the system — absent countervailing design — is the steady accumulation of the toxic material that produces those reactions. Nobody in particular has to be evil for the river to end up full of heavy metals.
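The “nobody has to be evil” point can be made concrete with a toy simulation, written for this seminar rather than drawn from Phillips and Milner. In the sketch below, posts carry an arbitrary “outrage” level, users react probabilistically, and a ranker promotes whatever gets reactions; all numbers are illustrative assumptions.

```python
import random

# Toy model: posts have an "outrage" level in [0, 1]. Users react more
# strongly to higher-outrage posts, and the ranker promotes whatever got
# reactions. No actor in the loop intends pollution; it emerges anyway.

random.seed(42)
posts = [{"outrage": random.random(), "score": 1.0} for _ in range(1000)]

def engagement_probability(outrage: float) -> float:
    # Assumption: probability of reacting rises with emotional intensity.
    return 0.05 + 0.6 * outrage

for step in range(50):
    # Rank by accumulated score and show only the top slice of the feed.
    feed = sorted(posts, key=lambda p: p["score"], reverse=True)[:100]
    for post in feed:
        if random.random() < engagement_probability(post["outrage"]):
            post["score"] += 1.0   # engagement feeds the ranker

visible = sorted(posts, key=lambda p: p["score"], reverse=True)[:100]
print("mean outrage, whole pool :", sum(p["outrage"] for p in posts) / len(posts))
print("mean outrage, top of feed:", sum(p["outrage"] for p in visible) / len(visible))
```

Run it and the top of the feed ends up markedly more outrageous than the pool it was drawn from, even though no user, no post, and no line of the ranking code intends that outcome.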
Phillips and Milner identify several kinds of network pollution. There is deliberate pollution, created by actors who know they are seeding falsehood, harassment, or hate. There is accidental pollution, produced by well-intentioned journalists, educators, and users who amplify toxic content in the process of debunking or criticizing it. And there is ambient pollution, the low-grade background haze of context collapse, out-of-context screenshots, and engagement-optimized rage that fills every feed regardless of any particular intent. A single tweet can pass through all three forms in a day.
The most important practical insight is that debunking and “calling out” can themselves be vectors of contamination. Phillips’s earlier work, This Is Why We Can’t Have Nice Things, showed how trolling subcultures used mainstream journalistic outrage as a megaphone — the more the press condemned the troll, the more visibility the troll received, the more the spectacle spread. The same dynamic now shapes fact-checking. Repeating a false claim in order to deny it can imprint the claim on readers who no longer remember the denial. Van der Linden’s Foolproof summarizes the evidence on the “continued influence effect”: a refuted myth can keep shaping people’s reasoning even after they accept that it has been refuted. This does not mean debunking is futile, but it does mean that careless debunking is part of the problem.
A public health analogy helps. You would not respond to a cholera outbreak by giving every family a stronger immune system; you would clean the water. Network pollution asks the same of the information environment. Individual skills matter, but no population can skill its way out of a polluted watershed.
Chapter 3 — Critical Ignoring as a Survival Skill
Traditional media literacy in the twentieth century taught people to engage critically with a finite set of authoritative sources. You read the newspaper, watched the evening news, and thought carefully about what they told you. This model fails in the twenty-first century because the limiting resource is no longer the availability of information — it is the availability of attention. Attention is finite. Information is infinite. Engaging critically with every piece of content you encounter would consume a lifetime before lunch.
The cognitive scientists Anastasia Kozyreva, Sam Wineburg, Stephan Lewandowsky, and Ralph Hertwig have argued that the corresponding twenty-first-century skill is critical ignoring: the deliberate, disciplined practice of not engaging with most of what crosses your feed. Critical ignoring is not the same as closing your eyes. It is the strategic allocation of cognitive resources, based on an understanding of how the environment is designed to capture them.
Three techniques structure the practice. First, self-nudging: rearranging your own information environment so that toxic inputs are harder to reach. Turning off notifications, uninstalling apps that produce compulsive scrolling, using feed readers or email newsletters that you choose rather than algorithms that choose for you — all are forms of self-nudging. They acknowledge that willpower is an unreliable defense and that environmental design is stronger.
Second, lateral reading. Developed and tested empirically by Sam Wineburg’s Stanford History Education Group, lateral reading is the habit used by professional fact-checkers: when you land on an unfamiliar source, you do not stay on the page trying to assess it from its own self-presentation. You open new tabs and read about the source across the web. The counterintuitive finding is that reading laterally for two minutes produces better accuracy judgments than reading vertically for twenty, because the page itself is optimized to look trustworthy whether or not it is.
Third, do-not-feed-the-trolls discipline. Engagement with bad-faith provocation is usually what the provocateur wants. Critical ignoring means learning to recognize bait for what it is and refusing the bite, not because the content does not deserve condemnation but because public condemnation amplifies it. This is hard because the psychological reward structure of social media strongly favors posting the clever dunk. The survival skill is learning that the rewards are a trap.
Critical ignoring is unglamorous. It does not produce the feeling of having bravely engaged. But in an environment where attention is the scarcest resource and where pollution is continuously manufactured, it may be the single most protective habit available.
Chapter 4 — The Anatomy of Online Hate
Online hate is not a spontaneous expression of individual prejudice amplified by microphones. It is an organized communication system with recognizable tactics, infrastructures, and feedback loops. The Center for Countering Digital Hate, in a series of reports on extremist communities, antisemitic content, and platform enforcement failure, has repeatedly shown that a small number of coordinated actors generate a large fraction of the hateful content that reaches mainstream visibility. The “ten percent of users producing ninety percent of the toxicity” pattern is familiar to researchers of every platform.
The anatomy can be described as a pipeline. At the entry point are gateway communities: gaming servers, meme pages, self-improvement forums, trading communities. These are not explicitly hateful. They are places where lonely people — often young men — find belonging, humor, and status. Gateway communities draw attention precisely because they are fun. Over time, a subset of members encounter increasingly “edgy” content: transgressive jokes, ironic racism, “just asking questions” about protected groups. The irony provides deniability both to poster and reader. Phillips’s earlier research on trolling documented in detail how “lulz” — the pleasure of transgression for its own sake — supplies cover and rationalization for content that, when stripped of the ironic frame, is straightforwardly cruel.
Further along the pipeline come identitarian communities that offer explicit explanations for the reader’s dissatisfaction: the loss of a masculine ideal, demographic threat, cultural humiliation. These communities provide ready-made cognitive frames and in-group solidarity. The furthest nodes are explicitly extremist: neo-Nazi forums, violent accelerationist networks, terroristic manifestos. Most users never travel the entire pipeline. But the pipeline exists, and researchers such as Marwick and Lewis have shown how it is actively curated by a range of actors with different goals — from cynical entrepreneurs to sincere ideologues.
Hate is rewarded by platform design. Outrage generates engagement. Engagement generates reach. Reach generates advertising revenue. A piece of hateful content that provokes angry replies from its targets and celebratory replies from its allies wins twice: both reactions feed the algorithm. The targets of hate are then forced to choose between silence (which concedes the space) and response (which amplifies the abuser). This is the double bind at the center of online hate research.
Counter-speech matters but is not a complete answer. Individuals cannot outspeak a torrent. Structural remedies — de-platforming of serial offenders, de-monetization of hateful publishers, design friction on harassment campaigns — have been shown repeatedly, including in CCDH’s own evaluations, to reduce the reach of hateful communities significantly. “Just counter it with more speech” assumes a level playing field that does not exist when one side is organized, amplified, and financially rewarded.
Chapter 5 — Disinformation and Media Manipulation
Alice Marwick and Rebecca Lewis, in their foundational 2017 Data & Society report Media Manipulation and Disinformation Online, provided the first comprehensive map of the actors, tactics, and motives that produce contemporary disinformation. Their framework remains the common vocabulary of the field.
They distinguish sharply between misinformation (false or misleading content circulated without intent to deceive) and disinformation (false or misleading content circulated deliberately). The distinction matters because the two require different responses. Misinformation is primarily a literacy and ecosystem problem. Disinformation is primarily a security and governance problem. Most contaminated content mixes the two: a deliberate seeder at the source and countless sincere spreaders downstream.
Marwick and Lewis catalogued the actors. They include ideologues (true believers advancing a political project), entrepreneurs (hoaxers and content farms chasing clicks), state actors (foreign and domestic intelligence services running influence operations), trolls (users who enjoy the chaos), and activists (those attempting to hijack narratives for social movements). These categories overlap. The same account can be all five at once.
They also catalogued tactics. Trading up the chain is the practice of planting a story in a fringe outlet, then using its visibility there to argue that mainstream outlets must cover it. Source hacking exploits journalists’ need for quotes by impersonating experts or feeding pre-packaged narratives to reporters on deadline. Keyword squatting floods a search term with pre-prepared content so that anyone researching the term finds the manipulator’s framing. Memeification packages claims in humorous formats that resist fact-checking because they operate as jokes. Each tactic exploits a legitimate feature of the media environment — journalistic openness, search indexing, humor — and turns it against the system.
Cailin O’Connor and James Owen Weatherall, in The Misinformation Age, complement this picture with formal models. They show, using simple network simulations, that propagandists can substantially shift collective belief in a population even when the population is composed of rational Bayesian reasoners. All that is required is selective sharing of genuine but unrepresentative evidence — for instance, publishing every study that finds no link between a product and harm, while suppressing every study that finds one. The lesson is that disinformation does not require lying. It only requires selective truth.
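The flavor of these models can be gestured at in a few lines of code. The sketch below is a simplified homage rather than O’Connor and Weatherall’s actual model: honest communication pools every study, while the propagandist shares only the genuine studies that happened to find nothing. The sample sizes, rates, and hypotheses are illustrative assumptions.

```python
import math
import random

# Toy version of a selective-sharing model. A "study" runs n Bernoulli trials
# with true harm rate 0.6. Agents compare H1 (rate = 0.6) against H0
# (rate = 0.5) by Bayesian updating on every study they are shown. The
# propagandist never lies: it shares only the genuine studies that happened
# to find k <= n/2 "harm" outcomes.

random.seed(1)
N_TRIALS, N_STUDIES = 10, 200
P_TRUE, H1, H0 = 0.6, 0.6, 0.5

def log_likelihood(k: int, n: int, p: float) -> float:
    return k * math.log(p) + (n - k) * math.log(1 - p)

def posterior_h1(shared_studies) -> float:
    log_odds = 0.0   # prior odds 1:1
    for k in shared_studies:
        log_odds += log_likelihood(k, N_TRIALS, H1) - log_likelihood(k, N_TRIALS, H0)
    return 1 / (1 + math.exp(-log_odds))

studies = [sum(random.random() < P_TRUE for _ in range(N_TRIALS))
           for _ in range(N_STUDIES)]

everything   = studies                                    # honest communication
cherrypicked = [k for k in studies if k <= N_TRIALS / 2]  # real, but unrepresentative

print("belief in harm, all studies shared:", round(posterior_h1(everything), 4))
print("belief in harm, selective sharing :", round(posterior_h1(cherrypicked), 6))
```

With all the studies pooled, the agents’ belief in harm approaches certainty; fed only the cherry-picked subset, the very same Bayesian update drives belief toward zero. Every study the propagandist shared was real.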
The strategic implication for defenders is that fact-checking is necessary but insufficient. You cannot out-correct a propaganda machine whose comparative advantage is volume. The more effective interventions are those that reduce the conditions under which manipulation flourishes: slower sharing, friction on virality, transparency about ad targeting, and public education in how manipulation actually operates.
Chapter 6 — Polarization and Moral Outrage on Social Media
The claim that social media polarizes us is one of the most common pieces of common sense about the modern internet. The more interesting question, which researchers such as William Brady, Molly Crockett, Jay Van Bavel, and Jamie Carpenter have tried to answer, is how polarization happens and whose behavior actually changes.
Carpenter and collaborators, in a paper on political polarization and moral outrage on social media, examined how outrage content travels through networks. Their findings replicate what Brady and Crockett had shown earlier: posts that contain moral-emotional language (“disgust,” “evil,” “betrayal”) are shared at substantially higher rates than neutral posts, and this effect is concentrated within ideological in-groups. That is, outrage travels well — but mostly through people who already share the poster’s worldview. The result is a feedback loop in which each side is exposed to increasingly extreme characterizations of the other side’s behavior. The out-group is rarely experienced directly; it is experienced through the worst quotations your own side chooses to circulate.
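The measurement behind these findings is dictionary-based: count moral-emotional words per post and relate the count to sharing rates. A toy version of the idea follows; the word list is an illustrative stand-in, not the lexicon used in the published studies.

```python
# Sketch of the dictionary-based approach used in moral-contagion research:
# count moral-emotional words per post. The tiny word list below is an
# illustrative stand-in, not the published research lexicon.

MORAL_EMOTIONAL = {
    "evil", "disgust", "disgusting", "betrayal", "betray", "shame",
    "shameful", "corrupt", "wicked", "hate", "traitor", "vile",
}

def moral_emotional_score(post: str) -> int:
    words = post.lower().replace(",", " ").replace(".", " ").split()
    return sum(word in MORAL_EMOTIONAL for word in words)

posts = [
    "The committee voted 7-2 to amend the zoning rules.",
    "This corrupt, disgusting betrayal shows how much they hate you.",
]

for post in posts:
    print(moral_emotional_score(post), "|", post)
```

A reader can apply the same test informally: when a post scores high on this kind of vocabulary, its virality says more about its emotional engineering than about its accuracy.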
This process has been given several names in the literature. Affective polarization is the growing emotional hostility between partisan groups, which has increased dramatically in many democracies over the past two decades. Importantly, affective polarization has grown faster than issue polarization — people have not actually become further apart on most policy questions as quickly as they have come to despise members of the other party. Social media is not the only cause, but its incentive structure directly rewards the production of affect.
A second key concept is false polarization: the widespread overestimation of how extreme the other side is. Survey research from the “Perception Gap” projects has consistently found that both left and right believe the other side holds views substantially more radical than its members actually hold, and that the heaviest social media users are the most mistaken. The very people most exposed to “the other side’s” opinions through social feeds are the ones with the least accurate picture of them, because what they are exposed to is not a representative sample but a curated selection of the most outrageous examples.
Eli Pariser’s The Filter Bubble, written in 2011, anticipated part of this story by arguing that personalized algorithms would produce epistemic enclosures in which users were rarely exposed to dissenting views. Subsequent research has complicated the picture. Users of social platforms are in fact exposed to cross-cutting content — often more than in their offline lives — but the exposure is hostile in tone and serves to entrench rather than soften positions. The problem is not that we never encounter the other side; it is that we encounter its worst caricature, repeatedly.
The survival implication is that one-click sharing of outrage content is almost never an act of principled communication. It is, functionally, a contribution to collective misperception. A practical discipline is to ask, before amplifying outrage: am I about to show my in-group the worst member of the out-group, as if that person represented the whole? If so, the share is a small act of pollution.
Chapter 7 — Influencers, Propaganda, and Political Persuasion
The Oxford Internet Institute’s Computational Propaganda Project, led by Philip Howard and with Samantha Bradshaw among its principal researchers, has documented a global pattern: in country after country, political actors have moved from broadcast advertising through automated bots to the use of human influencers as the dominant instrument of organized persuasion. Influencer-driven propaganda is effective precisely because it does not look like propaganda. It looks like a lifestyle vlogger sharing an authentic opinion, a fitness personality mentioning a policy, a mommy-blogger reposting a meme.
The shift is structural. Audiences have become skilled at ignoring overt advertising and government messaging. Platform algorithms reward content with high engagement and “authenticity” signals. Influencers are optimized for exactly those signals. Political operators — foreign states, domestic campaigns, corporate interests, wellness grifters — have therefore learned to route messages through influencer networks rather than paid spots. The influencer may be paid, coordinated, or ideologically aligned; often the user cannot tell which.
Roscini’s analysis of how influencer culture polarizes national politics shows that the incentive structure of the influencer economy selects for content that performs tribal identity. An influencer builds an audience by being legible as “one of us,” which means making claims that flatter in-group assumptions. The audience does not experience this as persuasion; it experiences it as recognition. The political content rides on the parasocial bond — the feeling that the influencer is a friend — which is far more powerful than any traditional campaign advertisement.
Three features make this form of propaganda especially difficult to resist. First, disclosure fails. Labels such as “#ad” or “sponsored” are widely ignored, especially when the sponsoring relationship is ideological rather than commercial. Second, persuasion is indirect. The viewer is not being told what to think; they are being invited to identify with someone who happens to think that way. Direct refutation is socially costly because it feels like attacking a friend. Third, authenticity is an economic category. Influencers who appear most authentic are often the most carefully managed. “Authenticity” in the attention economy is not a property of persons but a product of branding.
The defense against influencer-mediated propaganda is not to distrust all influencers. It is to develop a habit of asking who benefits if this position spreads. When an influencer takes a political stance, the relevant questions are: What is the incentive structure around this post? Whose campaign, product, or coalition gains from it? Would the influencer keep saying it if the incentives reversed? Those questions do not always yield definitive answers, but asking them begins to rebuild the analytic distance that parasocial intimacy dissolves.
Chapter 8 — Distrust, Radicalization, and the Manosphere
Research by Padda and colleagues on the so-called manosphere — Andrew Tate, Nick Fuentes, incel forums, pickup-artist communities, and adjacent grievance networks — emphasizes a common template. These communities do not begin by offering ideology. They begin by offering an explanation. They tell a young man that the dissatisfaction he feels in his life has a name, a source, and a remedy. The name is usually feminism, wokeness, or demographic change; the source is usually a specific out-group; the remedy is usually submission to the influencer’s program.
The sequence matters. First comes the diagnosis — “you feel lost because society has lied to you.” Then comes the community — “there are others who see what you see.” Then comes the ideology — “here is why they are to blame.” Then comes the behavioral program — “here is how real men act.” Each stage binds the user more tightly. Leaving requires not just changing one’s mind but losing one’s community and admitting that the diagnosis that felt like a revelation was a trap.
The “Fragmented Beliefs, United Threats” framework describes the corresponding strategic puzzle. Radical online communities do not share a coherent doctrine. They share an affective orientation — distrust of mainstream institutions, contempt for perceived enemies, loyalty to in-group leaders — while disagreeing about almost everything else. This makes them resistant to argument because there is no single doctrine to falsify. It also makes them capable of coalition: anti-feminists, crypto enthusiasts, religious traditionalists, accelerationists, and free-speech absolutists may all flock to the same influencer despite having little in common, because they share a common enemy.
Radicalization is powered by the same algorithmic logic that powers influencer propaganda, plus a specific psychological mechanism: the dignity repair offered to humiliated young men. Andrew Tate succeeded because his core promise was not misogyny per se — it was that his followers could become men whom the world would respect. The misogyny was the by-product of the dignity repair. Any intervention that only challenges the misogyny, without addressing the dignity deficit, fails because it leaves intact the reason the follower signed up.
Effective responses, supported by exit-program research in the deradicalization literature, focus on the relational entry point rather than the ideology. They provide alternative sources of community, status, and meaning. They avoid public humiliation of the follower, which tends to deepen commitment. They take seriously the wound that made the ideology attractive. None of this requires sympathy with the ideology itself; it requires strategic understanding of how the ideology took root.
Chapter 9 — AI Slop and the Dead Internet
“AI slop” is the informal term for the flood of low-quality, generatively produced content now filling search results, social feeds, image boards, and retail listings. The defining properties of slop are that it is cheap to produce, indistinguishable at a glance from human content, optimized for algorithmic visibility rather than reader value, and generated in volumes that overwhelm traditional moderation. A single prompt chain can produce thousands of articles about almost any topic in an afternoon, each plausible enough to rank in search but none anchored to reality or verifiable expertise.
Slop matters because it changes the base rate of contamination in the information environment. Before 2022, a reader’s default assumption could be that a written article reflected some human intention to say something — true or false, but intended. Slop erodes that default. An increasing fraction of text, images, and video is produced without any author at all, in the sense that no human stands behind the claims. Van der Linden’s work on “pre-bunking” becomes newly urgent in this context: if we cannot reliably trace content to accountable sources, we must train the population to recognize the manipulation techniques independently of whether a human is behind them.
The informal “Dead Internet Theory” is an overstatement of a real trend. It proposes that most of the internet is now bots talking to bots, scripts generating engagement to game advertising and recommendation systems. The strong form is wrong; most users are still human. But the weak form — that the baseline density of machine-generated content has risen to the point that a naive user cannot assume authorship — is empirically defensible and getting more so.
Jean Baudrillard’s concept of the simulacrum is worth recalling here. Baudrillard described a sequence in which representations first reflect reality, then mask it, then mask the absence of reality, and finally become “pure simulacra” with no relationship to any original. Generative AI moves the media environment further along that sequence. A machine-generated news article is not a distortion of reporting; there was no reporting. A machine-generated product review is not an exaggeration of experience; there was no experience. The map is no longer unfaithful to the territory because there is no territory.
Survival in a slop environment requires a habit of provenance-first reading. Before asking whether a claim is true, ask whether there is a traceable person, institution, or record accountable for it. If not, treat the content as decorative rather than informative. This reverses the older literacy model in which claims were evaluated on their merits; the volume of slop makes merit-first evaluation impossible because there are too many claims to evaluate.
Chapter 10 — AI as a Tool of Targeted Harm
Generative AI is not only a source of ambient slop. It is also a weapon that can be aimed. The Center for Countering Digital Hate, Stanford Internet Observatory, and numerous news investigations have documented how open and semi-open generative systems have been used to produce non-consensual intimate imagery (often called “deepfake porn”), harassment campaigns targeted at specific individuals, fraud aimed at the elderly, and fabricated evidence designed to damage reputations.
The release of increasingly capable video models — referenced in the course under the shorthand “Sora 2” — and audio cloning systems makes the problem structurally worse in two ways. First, the floor of production has fallen: creating a realistic fabricated video once required a film studio’s resources; it now requires a laptop and a prompt. Second, the ceiling of verification has risen only partly: forensic tools for detecting synthetic content exist, but they lag behind generation and are not available to ordinary users who encounter the content on their phones.
The most important insight from the harm literature is that the burden of proof has shifted. In the pre-generative environment, a person accused by a photograph had to explain away the photograph. In the generative environment, a person who is telling the truth about what they did must compete with plausible fabrications about what they did not do; a person who has been victimized by a fabrication must prove a negative. The epistemic asymmetry favors the attacker.
Targeted AI harm falls heavily on the already vulnerable. Gendered violence online — threats, harassment, stalking, image-based abuse — has been transformed by generative image tools that allow abusers to produce synthetic intimate imagery of any target from a few public photos. Surveys by organizations working on image-based sexual abuse have documented sharp increases in such content and a corresponding expansion of the set of possible victims: the technology makes every woman with any photographic presence online a potential target. The UN Women “Power On” initiative has placed this form of abuse at the center of its digital violence agenda.
Defensive measures exist but are imperfect. Watermarking and content provenance standards (such as the C2PA specification) can help when systems adopt them, but are easily stripped. Platform policies against synthetic abusive imagery have been unevenly enforced. Legal reform — criminalizing the creation and distribution of non-consensual synthetic intimate imagery — has advanced in some jurisdictions and is the most promising long-term lever. Meanwhile, practical defense for individuals includes reducing one’s public image footprint where possible, using reverse-image monitoring, and — crucially — building social networks that will believe you if a fabrication is deployed against you. Credibility among peers is, in the end, the most reliable form of protection.
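For readers unfamiliar with how provenance standards work, the sketch below illustrates the general signed-manifest idea and why stripping is trivial. It is emphatically not the C2PA wire format (real standards use certificate-based asymmetric signatures, not a shared secret), and every name in it is a hypothetical stand-in.

```python
import hashlib, hmac, json

# Toy illustration of the signed-manifest idea behind provenance standards.
# This is NOT the C2PA format; it only shows the shape of the guarantee.
# SIGNING_KEY stands in for a publisher's real key infrastructure.

SIGNING_KEY = b"publisher-secret-key"   # hypothetical key for the sketch

def make_manifest(content: bytes, source: str) -> dict:
    digest = hashlib.sha256(content).hexdigest()
    manifest = {"source": source, "sha256": digest}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(content).hexdigest() == manifest["sha256"])

image = b"...original image bytes..."
manifest = make_manifest(image, "newsroom.example")
print(verify(image, manifest))                      # True: provenance intact
print(verify(b"...re-encoded bytes...", manifest))  # False: binding broken
# A screenshot or re-upload carries no manifest at all; there is nothing to verify.
```

The manifest proves something only while it travels with the exact original bytes. A crop, screenshot, or re-encode arrives with no manifest, which is why provenance helps honest publishers establish authenticity more than it constrains attackers.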
Chapter 11 — Algorithmic Bias and Gendered Violence Online
Safiya Umoja Noble’s Algorithms of Oppression documented, in meticulous detail, how search engines and related automated systems embed and amplify racist and sexist patterns of thought. Her central example is the way search queries about Black women and girls returned sexualized and degrading results at rates that could not be explained by representative content on the web. Noble’s argument was not that engineers wanted those results; it was that commercial search optimizes for clicks and that clicks, in a society with existing prejudices, track those prejudices. The algorithm learned the bias because its training signal was the bias.
Ruha Benjamin’s Race After Technology generalizes this insight into what she calls the “New Jim Code”: the way ostensibly neutral technical systems reproduce historical patterns of discrimination by encoding past outcomes as future defaults. Facial recognition systems trained on predominantly white faces misidentify Black faces. Risk-assessment tools trained on past arrest data predict higher risk for Black defendants. Hiring algorithms trained on past successful hires recommend people who look like past successful hires. Each system is described as objective. Each is in fact a machine for automating yesterday’s injustices at the speed and scale of software.
Gendered harm works through similar mechanisms. Translation models default to masculine pronouns for professions coded as high-status and feminine pronouns for professions coded as domestic. Generative image models, when prompted for “CEO” or “doctor,” overwhelmingly produce white men; when prompted for “nurse” or “cleaner,” they produce women. These defaults then feed back into the culture as visual reinforcements of the stereotypes that generated them. The loop is tight and self-reinforcing unless deliberately broken.
The literature on algorithmic audits — systematic testing of systems to uncover disparate outcomes — offers a partial remedy. Public-interest audits by journalists, academics, and civil-society groups have exposed biases that internal testing missed, and in some cases forced model providers to retrain or filter outputs. Audits alone cannot fix underlying data biases, but they make the problems legible and create political pressure for correction. The Alan Turing Institute’s “Doing AI Differently” program and parallel initiatives are attempts to shift the default posture of AI development from “release now, audit later” to “audit as a precondition of release.”
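The basic shape of such an audit is simple enough to sketch. Everything below is a hypothetical stand-in: fake_model represents the black-box system under test, and real audits use thousands of labeled probes per group rather than four.

```python
from collections import defaultdict

# Minimal shape of a disparate-outcome audit: probe a black-box system with
# labeled examples and compare error rates across groups. The records and
# fake_model are hypothetical stand-ins for real audit data and a real system.

def fake_model(record: dict) -> bool:
    # Stand-in for the system under audit (e.g., a face matcher or risk score).
    return record["score"] > 0.5

audit_set = [
    {"group": "A", "score": 0.7, "truth": True},
    {"group": "A", "score": 0.4, "truth": False},
    {"group": "B", "score": 0.6, "truth": False},   # false positive
    {"group": "B", "score": 0.3, "truth": True},    # false negative
    # ... a real audit would include thousands of examples per group ...
]

errors, totals = defaultdict(int), defaultdict(int)
for record in audit_set:
    totals[record["group"]] += 1
    if fake_model(record) != record["truth"]:
        errors[record["group"]] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"group {group}: error rate {rate:.0%} over {totals[group]} probes")
```

A large gap between per-group error rates is exactly the kind of disparity that public-interest audits have surfaced in deployed systems, and that purely internal testing has repeatedly missed.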
Individual survival in the biased-system environment requires a working skepticism: when an automated system gives you a result, especially about a person or group, ask what the training data looked like and whose outcomes it was built to predict. When you produce content with generative tools, notice the defaults you inherit and push against them consciously. When you encounter claims that a technology is “objective,” treat the word as a warning rather than a reassurance.
Chapter 12 — Resistant Media: Memes, Counter-Speech, and Solidarity
If the chapters so far have been relentlessly diagnostic, this one turns to resistance. The internet is not only a source of pollution. It is also the site of genuinely creative counter-practices: memes that deflate propaganda, counter-speech that interrupts harassment, mutual-aid networks that route around hostile platforms, and archival projects that preserve what manipulators try to erase.
MacDonald’s The Art of Memes is a useful entry point because it takes seriously the craft of meme-making as a rhetorical form. Memes are often dismissed as frivolous or treated only as vectors of disinformation. Both views are wrong. Memes are compact, iterative, remixable units of argument. They travel faster than articles because they are formally designed to travel. They can be used for pollution, but they can also be used for lucid counter-messaging, for solidarity, and for the naming of conditions that do not yet have vocabulary. A good meme is not a substitute for careful argument, but it is a way to get the stakes of an argument to spread past the people already willing to read carefully.
Counter-speech research, most thoroughly developed by the Dangerous Speech Project and allied organizations, distinguishes effective from ineffective responses to hateful content. The most effective counter-speech is usually calm, specific, and directed at third parties rather than at the hateful speaker. Trying to change the mind of the person producing the hate is almost always a losing battle; showing the silent audience that the hate does not represent community norms is much more achievable. Humor helps when it is good; sarcasm often fails because it can read as endorsement. Reporting mechanisms, used together with public counter-speech, are stronger than either alone.
Solidarity matters structurally. A single person confronted with harassment is isolated; a group that has prepared to support one another distributes the cost. The most durable counter-forces against online abuse have been networks — of journalists, researchers, activists, survivors — who agree in advance to verify, amplify, and defend. These networks function not because their members are individually heroic but because they have institutionalized the ethic of nobody faces this alone.
Mutual-aid infrastructures, pioneered during crises such as the COVID-19 pandemic and natural disasters, show what it looks like to use the same digital tools that host harassment to instead coordinate care. These projects typically rely on smaller, higher-trust platforms; on transparent norms; on some form of gatekeeping that filters bad actors without becoming a tool of repression; and on slow rhythms that discourage the engagement-optimized reactivity of mainstream feeds. They are a proof of concept. A healthier ecology is possible on the same wires.
Chapter 13 — Toward Better Media Futures
The seminar’s final movement asks what it would mean to build an information environment that does not routinely poison those who live in it. This is not a question with a single technical answer. It is a question about design, governance, culture, and individual practice working together.
At the platform design level, proposals include: friction on virality (small delays or confirmations before mass sharing); transparency in recommendation (telling users why they are being shown a post); provenance metadata (cryptographic signatures binding content to sources); interoperable moderation (letting communities set their own rules without being locked into any one platform’s politics); and advertising reform that decouples revenue from engagement intensity. None of these is a silver bullet. Together they would significantly change the equilibrium behavior of the system.
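As a concreteness check, here is what one of these proposals, friction on virality, could look like at the code level. This is a hypothetical sketch for discussion, not a description of any platform’s implementation; the window and threshold values are arbitrary assumptions.

```python
import time
from collections import defaultdict, deque

# Sketch of "friction on virality": once a post's share velocity crosses a
# threshold, the handler stops sharing instantly and asks for confirmation.
# The handler, window, and threshold are hypothetical design parameters.

WINDOW_SECONDS = 600          # look-back window for measuring share velocity
VELOCITY_THRESHOLD = 1000     # shares per window that triggers friction

share_log: dict[str, deque] = defaultdict(deque)

def recent_shares(post_id: str, now: float) -> int:
    # Drop timestamps older than the window, then count what remains.
    log = share_log[post_id]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    return len(log)

def handle_share(post_id: str, confirmed: bool = False) -> str:
    now = time.time()
    if recent_shares(post_id, now) >= VELOCITY_THRESHOLD and not confirmed:
        # The pause is the intervention: reflection before amplification.
        return "pending: this post is spreading fast; confirm to share anyway"
    share_log[post_id].append(now)
    return "shared"
```

The point of the design is not to block sharing but to reintroduce a pause at exactly the moment the system would otherwise reward reflexive amplification.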
At the governance level, proposals range from targeted transparency mandates (forcing platforms to release structured data about the spread of high-stakes content) to stronger accountability for large-scale synthetic media abuse to antitrust action against the concentration of speech infrastructure in a handful of firms. The UN Women “Power On” initiative has pressed for international recognition of technology-facilitated gender-based violence as a human-rights issue. The Alan Turing Institute’s “Doing AI Differently” program has pressed for participatory, public-interest approaches to AI development. Neither initiative imagines that governance alone will suffice, but both understand that individual heroism cannot substitute for structural rules.
At the cultural level, the most important shift is the restoration of an ethic of care in communication. Phillips and Milner use the language of watershed stewardship: just as a healthy river depends on the countless small decisions of people who live along it, a healthy information environment depends on the everyday choices of participants to share carefully, to correct gently, to refuse to amplify what they cannot verify, to check before forwarding, and to remember that the person on the other side of a post is usually a person. None of this requires special skill. It requires the consistent application of ordinary decency under the hard conditions the ecology imposes.
At the individual level, the survival curriculum distilled from the seminar can be summarized as a small set of habits. Practice critical ignoring; engage fully with fewer, better sources. Read laterally; check what the web says about a source before trusting what the source says about itself. Assume that content engineered to provoke you is doing exactly what it was designed to do, and respond by pausing rather than sharing. Ask who benefits if a claim spreads. Notice the provenance of images, videos, and audio; assume synthetic origin is possible. Build and protect relationships with people whose credibility you trust and who trust yours. Do not outsource your moral reactions to your feed. Be suspicious of the feeling of certainty that follows a minute of scrolling. Cultivate boredom as a defense against manipulation — the feed cannot move someone who is willing to put the phone down.
Appendix A — A Working Vocabulary
Network pollution. The cumulative degradation of the information environment by toxic communication, produced as an externality of ordinary platform operation rather than by any single actor. Concept from Phillips & Milner, You Are Here.
Critical ignoring. The deliberate, skilled practice of not engaging with most online content, in recognition that attention — not information — is the scarce resource. Includes self-nudging, lateral reading, and refusal of bait.
Lateral reading. Evaluating a source by reading about it on other sites rather than staying on the source itself. Empirically superior to vertical reading for judging credibility.
Disinformation vs. misinformation. Disinformation is deliberately deceptive; misinformation is accidentally false. Marwick & Lewis’s categorical distinction from the Data & Society report.
Trading up the chain. Planting a story in fringe media to create visibility, then using that visibility to argue mainstream outlets should cover it.
Affective polarization. Growing emotional hostility between partisan groups, often outpacing actual policy disagreement.
Filter bubble / echo chamber. Terms for epistemic enclosures produced by algorithmic personalization and self-selected networks. Empirically complicated: users encounter the other side, but usually in its worst form.
Parasocial bond. The one-directional feeling of intimacy that audiences develop toward performers, influencers, or streamers they do not personally know; a key channel for modern influencer propaganda.
Manosphere. The loose network of male-grievance communities — pickup artists, MGTOW, incels, hyper-masculine influencers — that function as gateways into broader radicalization pipelines.
AI slop. Low-quality, generatively produced content flooding search and social feeds, optimized for visibility rather than for readers.
Simulacrum. Baudrillard’s term for a representation with no original; invoked to describe generative media that has no anchoring in a reported reality.
Algorithmic bias. Systematic disparate outcomes produced by automated systems trained on historical data that reflect historical injustice; concept elaborated by Noble and Benjamin.
Counter-speech. Public communication intended to interrupt, contextualize, or undermine hateful or manipulative speech. Most effective when calm, specific, and addressed to onlookers rather than to the hateful speaker.
Provenance. The traceable origin of a piece of content; in the generative era, asking whether anyone is accountable for a claim is increasingly necessary.
Dignity repair. The psychological promise that radical communities offer to humiliated followers: a path to renewed self-respect. Understanding the repair is necessary to understand why the ideology sticks.
Appendix B — Practice Prompts
These prompts are designed for seminar discussion or private reflection; they do not have single correct answers.
Identify content that recently made you feel strong anger or disgust. Trace its path: who posted it, who amplified it, and what algorithmic decisions put it in front of you. What would it have taken for it not to reach you?
Choose a source you trust. Read laterally about it for ten minutes. Does your trust survive, and on what grounds?
Pick a claim in your feed containing moral-emotional language. Re-describe it in neutral language. Does it still hold interest and urgency?
Observe an influencer you follow in a domain adjacent to politics. Catalogue the political and moral claims they make in a week. Whose interests are served if those claims spread?
Recall a time you shared content to “show” how bad the other side is. Was the example representative? What picture of the other side did your sharing contribute to?
Imagine designing an alternative platform for public conversation. What frictions would you build in? What signals would you reward?
Consider a person you know who has moved toward a radical online community. Without argument, identify the wounds the community is answering. What non-radical alternatives could address those needs?
Design a personal week-long critical-ignoring experiment. What inputs will you remove, what will you retain, and what would count as success?
Appendix C — Recommended Readings Beyond the Primary
Siva Vaidhyanathan’s Antisocial Media critiques Facebook as an infrastructure. Shoshana Zuboff’s The Age of Surveillance Capitalism provides the political-economic frame for platform behavior. Kate Crawford’s Atlas of AI traces the material underpinnings of machine learning. Francesca Tripodi’s The Propagandists’ Playbook gives an ethnographic account of how conservative audiences do “research” in ways manipulators exploit. Joan Donovan and colleagues’ Meme Wars traces specific manipulation campaigns from fringe origin to mainstream impact. Tarleton Gillespie’s Custodians of the Internet studies moderation as infrastructure. These texts converge on the seminar’s claim: the toxicity of contemporary media environments is the predictable output of systems designed to do what they are doing. Surviving such environments begins with seeing the systems clearly, continues with refusing to contribute to their worst tendencies, and ends, if it ends well, with the slow collective work of building something better on the same wires.