PHIL 363: Lies, Misinformation, and Their Spread

Jennifer Saul

Estimated study time: 1 hr 13 min

Sources and References

  • Cappelen, H. & Dever, J. (2019). Bad Language. Oxford University Press. (Extract on non-ideal philosophy of language.)
  • Saul, J.M. (2012). Lying, Misleading, and What is Said: An Exploration in Philosophy of Language and in Ethics. Oxford University Press.
  • Webber, J. (2013). “Liar!” Analysis, 73(4), 651–659.
  • Rees, C.F. (2014). “Better Lie!” Analysis, 74(1), 59–64.
  • Carson, T.L. (2006). “The Definition of Lying.” Noûs, 40(2), 284–306.
  • Frankfurt, H. (2005). On Bullshit. Princeton University Press.
  • Cassam, Q. (2021). “Bullshit, Post-Truth, and Propaganda.” In Political Epistemology, Oxford University Press.
  • Habgood-Coote, J. (2019). “Stop Talking about Fake News!” Inquiry, 62(9–10), 1033–1065.
  • Dougherty, T. (2013). “Sex, Lies, and Consent.” Ethics, 123(4), 717–744.
  • Yancy, G. & Jones, J. (eds.). Various works on racist deception and epistemologies of ignorance.
  • Anderson, L. (2017). “Racist Humor.” Philosophy Compass, 12(1).
  • Saul, J.M. (2018). “Negligent Falsehood, White Ignorance, and False News.” In E. Michaelson & A. Stokke (eds.), Lying: Language, Knowledge and Ethics. Oxford University Press.
  • Nguyen, C.T. (2020). “Echo Chambers and Epistemic Bubbles.” Episteme, 17(2), 141–161.
  • Levy, N. (2023). “Echoes of Covid Misinformation.” Philosophical Psychology, 36(5), 875–896.
  • Stanford Encyclopedia of Philosophy entry on “The Definition of Lying and Deception.”

Chapter 1: Non-Ideal Philosophy of Language

1.1 The Idealized Picture of Language

Traditional philosophy of language has operated under a broadly optimistic set of assumptions about how speakers use language. Drawing on the influential work of H.P. Grice, philosophers have typically assumed that participants in a conversation are cooperative (合作的), sincere (真诚的), informative (信息丰富的), and relevant (相关的). This is encapsulated in Grice’s Cooperative Principle (合作原则), which holds that speakers contribute to conversation in ways that align with the mutually accepted purpose of the exchange.

Cooperative Principle (合作原则): Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged. (Grice, 1975)

Under this idealized picture, speakers say what they mean, mean what they say, and listeners can rely on a battery of inferential mechanisms—such as conversational implicature (会话含义)—to recover what is communicated beyond the literal content of utterances. The result is a theory of language that is elegant, formally precise, and profoundly disconnected from the messy realities of actual human communication.

The idealized framework has deep roots. The Ordinary Language Philosophy (日常语言哲学) movement of the mid-twentieth century—associated with J.L. Austin and Gilbert Ryle at Oxford, and with the later Wittgenstein at Cambridge—did attempt to take seriously the full variety of linguistic behavior. Austin catalogued speech acts, illocutionary forces, and the conditions under which utterances misfire. Yet even this tradition, for all its attention to the particulars of language use, operated within an essentially benign picture of communication. Austin’s examples are drawn from promises, verdicts, and christenings—not from propaganda, slurs, or sexual coercion. The tradition assumed that the relevant norms of language use were generally honored and that failures were accidental or marginal rather than systematic.

1.2 Cappelen and Dever’s Challenge

Herman Cappelen and Josh Dever’s Bad Language mounts a systematic challenge to this idealized approach. Their central contention is that philosophers of language have been studying an imaginary world—one in which speakers cooperate, tell the truth, and respect conversational norms. The real world, however, is populated by liars, bullshitters, manipulators, propagandists, and people who use language to oppress, silence, and deceive.

Key Insight: By focusing exclusively on idealized language use, philosophy of language has rendered itself incapable of addressing the most socially and politically urgent features of actual communication.

Cappelen and Dever identify three broad categories of non-ideal language (非理想语言):

  1. Messing with truth: lying, misleading, bullshitting—speakers who deliberately violate the norms of honest communication.
  2. Harmful speech: slurs, pejoratives, hate speech, and generics that encode and transmit prejudice.
  3. Non-ideal speech acts: silencing, coercion, and the failure of speech acts that should succeed (e.g., when a woman’s refusal of consent is not recognized as a refusal).

Their book thus serves as a roadmap for the kind of applied philosophy of language (应用语言哲学) that this course pursues: one that takes seriously the ways language is used to harm, deceive, and oppress.

Each of these three categories deserves elaboration. The first category—messing with truth—covers the territory that will occupy most of this course. Lying is its most obvious instance, but the category extends to subtler violations: the bullshitter who is indifferent to truth rather than actively opposing it, the propagandist who frames and selects information to mislead without making any single false statement, the misleader who exploits conversational implicature to produce false beliefs while uttering only truths. Standard philosophy of language analyzed these as exotic or marginal cases; Cappelen and Dever insist they are central features of human communication.

The second category—harmful speech—encompasses slurs, pejoratives, and what are called generics (泛化表述). Generics are statements like “Pit bulls are dangerous” or “Immigrants commit crimes”—generic claims about a kind that purport to characterize the kind as a whole. Philosophers like Sarah-Jane Leslie have argued that generics are cognitively powerful and resilient: once a generic belief is formed, it is resistant to statistical counterevidence. A single salient instance of a pit bull attacking someone can reinforce the generic, even for someone who is told that the vast majority of pit bulls are gentle. This makes generics particularly effective vehicles for transmitting and entrenching prejudice.

The third category—non-ideal speech acts—covers cases where the standard machinery of speech act theory breaks down. Austin analyzed how utterances can “misfire”: a promise does not bind if the speaker was coerced; a verdict is not valid if the judge was bribed. But feminist philosophers like Rae Langton have identified a more troubling form of misfire: the case in which a woman says “No” to a sexual advance and the utterance simply does not register as a refusal. The speech act is attempted, but it fails to secure uptake—the hearer’s recognition of its intended force—which, on Austin’s account, is partly constitutive of illocutionary success. Langton calls this illocutionary silencing (以言行事的沉默): the speaker’s words are heard but not recognized as having the force they are intended to have.

1.3 The Parallel to Political Philosophy: Ideal vs. Non-Ideal Theory

The shift from ideal to non-ideal philosophy of language parallels a similar movement in political philosophy. The distinction between ideal theory (理想理论) and non-ideal theory (非理想理论) was introduced by John Rawls, who argued that we should begin by specifying the principles of justice that would govern a perfectly just, fully compliant society, and then derive guidance for less-than-ideal circumstances. Critics of this approach, most prominently Charles Mills, have argued that ideal theory is not merely incomplete but actively distorting.

Mills’s critique in The Racial Contract and subsequent work is that ideal theory, by assuming a world of fully rational, fully just, fully informed citizens, systematically erases the actual conditions of racial oppression. If you begin by imagining a world without racism, you are ill-equipped to theorize about—let alone remedy—a world saturated by it. Worse, ideal theory can serve as an ideological instrument: it allows theorists to avoid confronting the actual injustices of existing society by retreating to a comfortable abstraction.

Cappelen and Dever make the analogous case for language: if our theory of meaning assumes that speakers are always cooperative and sincere, we will have no resources for understanding propaganda, dogwhistles, gaslighting, or the linguistic mechanisms through which oppression operates.

Example: A politician who says "We need to protect our neighborhoods" may be exploiting the gap between what is literally said and what is communicated. In an idealized framework, we would simply compute the conversational implicature. In a non-ideal framework, we ask: who benefits from the ambiguity? Who is harmed? What prejudices does the utterance activate?

The non-ideal turn in philosophy of language is thus simultaneously a political and a methodological move. It insists that the analysis of language cannot be separated from the analysis of power.

1.4 Conceptual Engineering as a Method

One of the most important responses to the problems identified by Cappelen and Dever is the approach known as conceptual engineering (概念工程). Rather than simply analyzing our existing concepts, conceptual engineering asks whether our concepts are fit for purpose—and proposes to revise or replace them when they are not.

The project has two main variants. Sally Haslanger argues for a form of ameliorative analysis (改良性分析): we should ask what work we want our concepts to do, and then articulate the concept that best serves that purpose. If our existing concept of “race” encodes biologistic falsehoods, we should revise it to capture the genuine social reality that the concept is meant to track. If our existing concept of “woman” is exclusionary or harmful, we should work to construct a better one.

Herman Cappelen, in his book Fixing Language, defends a more deflationary view: our concepts are constantly being revised by usage, and this process is largely uncontrolled and unpredictable. We cannot simply decide what our concepts should mean; we can, however, make targeted interventions in the social practices that govern concept use.

The connection to non-ideal philosophy of language is direct. Many of the harmful phenomena discussed in this course—slurs, propaganda, fake news, gaslighting—can be understood as involving conceptual distortions. Propaganda exploits the concepts of “security,” “community,” and “threat” in ways that serve the interests of the powerful. Fake news exploits the concept of “news” itself. Correcting these distortions requires not just providing accurate information but engaging in the harder work of conceptual repair.

Debate: Critics of conceptual engineering worry that it is both more difficult and more politically contested than its proponents acknowledge. Who gets to decide which concepts need engineering? How do we prevent conceptual engineering from becoming just another form of ideological manipulation, with different parties trying to capture contested concepts for their own purposes?

Chapter 2: Lying vs. Misleading

2.1 Defining Lying

One of the oldest and most persistent questions in philosophy of language is: what exactly is a lie (谎言)? The most widely accepted definition requires four conditions:

Traditional Definition of Lying: A person lies if and only if they (1) make a statement, (2) believe the statement to be false, (3) address it to another person, and (4) intend the addressee to believe the statement is true.

This definition has roots in Augustine (奥古斯丁) and Thomas Aquinas (托马斯·阿奎那), both of whom held that lying consists in asserting what one believes to be false with the intention of deceiving. Immanuel Kant (康德) famously argued that lying is always morally wrong, even when lying to a murderer who asks where your friend is hiding.

The crucial philosophical debate concerns whether the fourth condition—intention to deceive (欺骗意图)—is truly necessary.

2.2 Deceptionists vs. Non-Deceptionists

Philosophers are divided into two camps on this question:

Deceptionists hold that a statement counts as a lie only if the speaker intends to deceive. On this view, if you say something false but have no expectation that your audience will believe it, you have not lied. Chisholm and Feehan, Charles Fried, and others defended versions of this view, as did Thomas Carson in his earlier formulations (his mature account, discussed in Chapter 3, drops the deception condition). Fried, for instance, emphasized that assertion (断言) involves an implicit warranty of truth, and that lying breaks this warranty in a way that constitutes a breach of trust.

Non-Deceptionists argue that you can lie without intending to deceive. Jennifer Saul, Roy Sorensen, and Andreas Stokke all defend versions of this position. On Saul’s account, lying requires that the speaker believe they are in a warranting context (保证语境)—a context in which their assertion is taken to guarantee the truth of what is said—even if they do not specifically intend to deceive. This allows Saul’s definition to handle cases like bald-faced lies (厚颜无耻的谎言), which are discussed in Chapter 3.

2.3 Grice’s Maxims and the Mechanics of Misleading

To understand how misleading works, it is essential to understand Grice’s full account of conversational cooperation. Grice proposed that the Cooperative Principle is implemented through four specific maxims (准则):

Grice's Maxims (格莱斯准则):
  • Quantity: Be as informative as required; do not give more or less information than needed.
  • Quality: Do not say what you believe to be false; do not say that for which you lack adequate evidence.
  • Relation: Be relevant.
  • Manner: Be perspicuous; avoid obscurity and ambiguity; be brief and orderly.

These maxims are not merely descriptive regularities; they are normative standards that competent speakers are expected to follow. Because listeners know that speakers are expected to follow these maxims, they draw inferences—conversational implicatures (会话含义)—when a speaker’s utterance seems to fall short of what the maxims would require.

The machinery of implicature is the key to understanding misleading. A skilled misleader exploits the listener’s assumption that the speaker is being cooperative. By saying something literally true but informationally incomplete (violating Quantity) or contextually unexpected (exploiting Relation), the misleader triggers an inference in the listener that goes beyond—and contradicts—the literal content of what was said.

Consider the Maxim of Quantity. If someone says “Some politicians are honest,” the listener will typically infer that not all politicians are honest—because if the speaker knew all were honest, cooperation would require them to say so. The implicature (not all are honest) is generated by the listener’s assumption that the speaker said as much as they knew. A misleader can exploit this by deliberately saying “some” when they know “all” is true, in order to create a false impression.
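This inference pattern can be made computationally explicit. The sketch below is in the spirit of the Rational Speech Acts framework, a probabilistic model of Gricean reasoning from computational pragmatics; it is a later formalization offered here for illustration, not part of Grice’s own account or of the assigned readings, and the worlds, utterances, and uniform priors are invented assumptions.

```python
# Minimal Rational Speech Acts (RSA) sketch of the Quantity implicature.
# Illustrative assumptions: three worlds, three utterances with standard
# literal meanings, a uniform prior, and a maximally cooperative speaker.

WORLDS = ["none honest", "some but not all honest", "all honest"]

# Literal truth conditions: the worlds in which each utterance is true.
MEANINGS = {
    "none": {"none honest"},
    "some": {"some but not all honest", "all honest"},  # "some" is literally
    "all":  {"all honest"},                             # compatible with "all"
}

def literal_listener(utterance):
    """P(world | utterance) given only literal meaning and a uniform prior."""
    true_in = MEANINGS[utterance]
    return {w: (1 / len(true_in) if w in true_in else 0.0) for w in WORLDS}

def speaker(world):
    """P(utterance | world): a cooperative speaker favors utterances that
    steer the literal listener toward the actual world (Quantity at work)."""
    scores = {u: literal_listener(u)[world] for u in MEANINGS}
    total = sum(scores.values())
    return {u: s / total for u, s in scores.items()}

def pragmatic_listener(utterance):
    """P(world | utterance): the hearer reasons about why the speaker chose
    this utterance rather than a more informative alternative."""
    scores = {w: speaker(w)[utterance] for w in WORLDS}
    total = sum(scores.values())
    return {w: round(s / total, 2) for w, s in scores.items()}

print(pragmatic_listener("some"))
# {'none honest': 0.0, 'some but not all honest': 0.75, 'all honest': 0.25}
```

Hearing "some", the modeled hearer shifts most probability to "some but not all": the implicature falls out of the assumption that a cooperative speaker who knew "all" would have said "all". The misleader of the previous paragraph exploits precisely this inference.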

2.4 Lying vs. Misleading: The Distinction

The distinction between lying and misleading (误导) is central to Jennifer Saul’s book Lying, Misleading, and What is Said. The core idea:

  • Lying involves asserting something one believes to be false.
  • Misleading involves saying something one believes to be true, but in a way calculated to produce a false belief in the hearer—typically through conversational implicature (会话含义).

Example: Suppose you ask whether Mary has been seeing her ex-boyfriend Valentino. Mary replies, truthfully, "Valentino has been sick with mononucleosis for weeks." She says this knowing that you will infer that she has not been seeing him. In fact, she has—but what she said was true. She has misled you, but has she lied?

On the standard analysis, Mary has not lied. She has exploited the Gricean mechanism of conversational implicature to communicate something false while saying something true. This is a case of paltering (利用真话误导)—using truthful statements to create false impressions.

2.5 Saul’s Taxonomy: Lying, Misleading, and Asserting-Something-False

Saul’s work introduces a careful three-way taxonomy that is worth dwelling on. She distinguishes:

  1. Lying: making a statement one believes to be false with the intention of warranting it as true.
  2. Misleading: saying something true (or at least, something one does not believe to be false) in a way designed to produce a false belief via implicature, presupposition, or other pragmatic mechanisms.
  3. Asserting something false: making a statement one believes to be false, but without the full structure of a lie—for instance, in a context where no warranty of truth is in play (as in fiction, hypothetical reasoning, or acknowledged brainstorming).

This taxonomy is especially illuminating when applied to real-world examples. Politicians routinely mislead without lying, precisely because they (and their advisors) know that deceptionist definitions of lying give them cover. Lawyers are trained to ask questions that produce false implicatures while remaining literally true. Public relations professionals craft statements that are technically accurate but deeply misleading.

Political Example: A politician is asked about their involvement in a scandal. They reply, "I have never been convicted of any wrongdoing." This is literally true (let us assume), but it implicates that there was no wrongdoing—when in fact there was, but charges were never brought. The politician has not lied, on most definitions; they have exploited the Maxim of Quantity (the audience expects them to share relevant information about the scandal) to generate a false impression.

Legal Example: A lawyer, asked in a deposition whether their client met with a certain individual, replies, "My client has no memory of meeting that individual." This may be literally true (the client claims not to remember) while strongly implicating that no meeting took place. The carefully crafted assertion exploits the listener's cooperative assumptions.

2.6 Saul on the Moral (In)Significance of the Distinction

The most striking claim in Saul’s work is that the moral distinction between lying and misleading is far less significant than most people believe. Many speakers (and many philosophers) have the strong intuition that merely misleading someone is less morally objectionable than outright lying. Saul challenges this intuition.

Her central argument: in most contexts, the moral wrongness (道德错误) of lying and misleading is equivalent, because both involve the deliberate attempt to produce a false belief in the hearer. Whether you produce that false belief by asserting a falsehood or by exploiting an implicature, the harm to the hearer is the same.

Qualification: Saul acknowledges that in certain adversarial contexts—such as courtrooms or negotiations—misleading may be more defensible than lying, because these contexts do not carry the same expectation of full disclosure. But in ordinary conversational contexts, she argues, the liar and the misleader are on equal moral footing.

2.7 Objections: Kant and Carson

Two important objections to the view that lying and misleading are morally equivalent pull in opposite directions.

Kant’s objection insists that lying is always and absolutely wrong, regardless of context or consequences. For Kant, lying is a violation of the categorical imperative (定言命令): one cannot universalize the maxim “lie when convenient” without destroying the very institution of truth-telling on which communication depends. Crucially, however, Kant’s absolute prohibition applies specifically to lying—to the assertion of what one believes to be false. On Kant’s view, misleading through technically true statements may not be lying at all, and he has sometimes been interpreted as permitting (or at least not explicitly prohibiting) deliberate misleading. This reading of Kant has troubled many commentators, since it seems to allow a loophole that undermines the spirit of his prohibition.

Carson’s objection comes from the opposite direction. Carson argues that, far from being a lesser wrong than lying, deliberate misleading can be every bit as bad as lying—or worse. Carson’s reasoning: the moral wrongness of lying and misleading alike derives from the harm they do to the victim. If misleading successfully produces a false belief and causes the same harm as lying, then the misleader is as blameworthy as the liar. The mechanism—implicature vs. assertion—is morally irrelevant; what matters is the deliberate production of epistemic harm.

2.8 Webber and Rees: A Debate

Jonathan Webber and Clea Rees offer competing positions on the moral comparison.

Webber argues that lying is generally worse than misleading. His argument turns on the concept of credibility (可信度). When someone lies, they damage the credibility of their assertions—the most basic form of linguistic communication. When someone misleads, they damage only the credibility of their implicatures. Since assertions are more fundamental than implicatures, lying does more damage to the social fabric of communication.

Rees turns Webber’s argument on its head. She argues that misleading is often worse than lying, precisely because misleading exploits a deeper form of trust. The misleader relies on the hearer’s willingness to draw inferences, to be cooperative, and to fulfill epistemic obligations. By exploiting these dispositions, the misleader abuses a more generous and more vulnerable form of trust than the liar does.

Upshot: The three-way debate between Saul, Webber, and Rees illustrates a fundamental question about the ethics of communication: does the mechanism of deception matter morally, or only its effects?

Chapter 3: Bald-Faced Lies

3.1 What Are Bald-Faced Lies?

A bald-faced lie (厚颜无耻的谎言) is a lie told in circumstances where both the speaker and the audience know that the statement is false. The falsity of the assertion is, in a sense, common knowledge—yet the speaker makes the assertion anyway.

Example: A student is caught cheating on an exam. The professor confronts the student with clear evidence. The student says, "I did not cheat." Both parties know this is false. The student does not expect to be believed. Yet intuitively, the student has lied.

Example: An Iraqi doctor, during a political inspection, tells the inspector, "There are no soldiers here," while uniformed soldiers are plainly visible in the ward. Both parties know the statement is false. The doctor speaks under duress, performing a ritual assertion demanded by the political context.

What is philosophically puzzling about bald-faced lies is precisely that they seem to be genuine lies despite the absence of deception. This challenges the widespread assumption that deception is the essential function—and the essential wrong—of lying.

3.2 The Role of Mutual Knowledge

The concept of mutual knowledge (共同知识) is central to understanding what makes a bald-faced lie distinctive. In a standard lie, the speaker knows that p is false, and intends the listener not to know this. In a bald-faced lie, however, both parties know that p is false—and both parties know that both know. This is not merely shared knowledge but mutual or common knowledge: a state of affairs that everyone knows, and everyone knows that everyone knows.
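The iterated structure can be written out formally. In the standard notation of epistemic logic (a formalization assumed here for illustration rather than drawn from the assigned readings), K_i p means "agent i knows that p", and:

```latex
% E_G p: everyone in group G knows p; C_G p: p is common knowledge in G.
E_G\,p \;=\; \bigwedge_{i \in G} K_i\,p
\qquad\qquad
C_G\,p \;=\; E_G\,p \,\wedge\, E_G E_G\,p \,\wedge\, E_G E_G E_G\,p \,\wedge\, \cdots
```

In a bald-faced lie, the falsity of p is common knowledge in this iterated sense, which is why the assertion cannot plausibly be read as an attempt to deceive.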

This mutual knowledge is what makes the bald-faced lie philosophically paradoxical. If both parties know the statement is false, in what sense is the speaker asserting it? Is it really an assertion at all, or is it something else—a performance, a ritual, a display of power?

Roy Sorensen has argued that bald-faced lies reveal that assertion is not fundamentally about communication or the transmission of beliefs. Rather, assertion is a normative act—an act that commits the speaker to the truth of what is said, regardless of whether the speaker expects to be believed. On this view, the bald-faced liar asserts p and is thereby committed to p’s truth, even though no communicative transmission occurs.

3.3 Carson’s Analysis (Sections 1–6)

Thomas Carson’s treatment of bald-faced lies is one of the most influential in the literature. In the first six sections of his paper, Carson develops an account of lying that can accommodate bald-faced lies while maintaining that lying is a philosophically distinct and morally significant category.

Carson’s key move is to replace the traditional requirement of intention to deceive with the notion of warranting the truth (保证真实性). On Carson’s definition:

Carson's Definition: A person lies if and only if they make a statement that they believe to be false, in a context in which they warrant the truth of the statement—that is, in a context in which their assertion invites the audience to rely on the statement as true.

This definition allows bald-faced lies to count as genuine lies. Even when the audience knows the statement is false, the speaker is still operating within a context where assertions carry a truth-warranty. The bald-faced liar violates this warranty, and it is this violation—not the production of a false belief—that constitutes the lie.
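The contrast between the two definitions can be displayed schematically. The toy sketch below reduces each definition to boolean conditions purely for illustration; the dataclass and its predicates are invented simplifications, and neither definition really reduces to a handful of flags.

```python
# Toy contrast: the traditional (deceptionist) definition of lying vs.
# Carson's warranting definition, applied to a bald-faced lie.
# The boolean predicates are invented simplifications for illustration.

from dataclasses import dataclass

@dataclass
class Utterance:
    makes_statement: bool      # (1) a statement is made
    believed_false: bool       # (2) the speaker believes it to be false
    addressed_to_other: bool   # (3) it is addressed to another person
    intends_to_deceive: bool   # (4) the contested deception condition
    warranting_context: bool   # Carson: the assertion invites reliance on its truth

def traditional_lie(u: Utterance) -> bool:
    """Deceptionist definition: all four traditional conditions must hold."""
    return (u.makes_statement and u.believed_false
            and u.addressed_to_other and u.intends_to_deceive)

def carson_lie(u: Utterance) -> bool:
    """Carson's definition: warranting the truth replaces intent to deceive."""
    return u.makes_statement and u.believed_false and u.warranting_context

# The cheating student of Section 3.1: the falsity is common knowledge, so
# there is no intent to deceive, yet the denial is offered as a warranted assertion.
bald_faced = Utterance(makes_statement=True, believed_false=True,
                       addressed_to_other=True, intends_to_deceive=False,
                       warranting_context=True)

print(traditional_lie(bald_faced))  # False: not a lie on the deceptionist view
print(carson_lie(bald_faced))       # True: a lie on Carson's definition
```

The bald-faced lie comes out as a lie on Carson's definition but not on the deceptionist one, which is the verdict the intuitions of Section 3.1 demand.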

3.4 Speech Act Theories and Bald-Faced Lies

Different theories of speech acts handle bald-faced lies in different ways, revealing deep commitments in the philosophy of language.

On an expressivist account of assertion (associated with Williamson’s knowledge norm), to assert p is to express one’s knowledge that p. The bald-faced liar, on this view, is performing an assertion that they know they cannot properly perform—a kind of normative violation even in the absence of deception.

On a communicativist account (closer to Grice’s view), the point of assertion is to transmit information and produce beliefs in the hearer. On this view, bald-faced lies are puzzling: if no belief transmission occurs, it is hard to see how they count as assertions at all, let alone lies.

On a commitment-theoretic account (associated with Robert Brandom’s inferentialist program), to assert p is to undertake a commitment to be able to justify p and to the downstream consequences of p. The bald-faced liar undertakes this commitment while knowing they cannot fulfill it—a form of normative default that constitutes lying regardless of epistemic effects.

Implication: The debate about bald-faced lies is not merely a debate about an exotic edge case. It forces us to decide what assertion fundamentally is—a communicative act, a normative commitment, or an expressive performance—and different answers yield different verdicts about bald-faced lies.

3.5 Why Bald-Faced Lies Matter

Bald-faced lies pose a direct challenge to Deceptionist definitions of lying, since no deception occurs (the audience is not fooled). If lying requires the intention to deceive, bald-faced lies are not lies at all—which conflicts with the strong intuition that they are.

Carson argues that this conflict reveals a deep truth about lying: the wrongness of lying is not primarily about producing false beliefs. It is about violating the norm of assertion (断言规范)—the social norm according to which making an assertion commits you to the truth of what you say. Bald-faced lies violate this norm just as thoroughly as ordinary lies, even though they fail to deceive.

Moral Severity: Strangely, we often condemn bald-faced lies more severely than ordinary lies. The bald-faced liar displays a brazen contempt for the truth and for the audience that goes beyond ordinary deception. This suggests that the wrongness of lying involves more than epistemic harm—it involves a kind of disrespect for the social practices that make communication possible.

3.6 Political Bald-Faced Lies and Alternative Facts

Bald-faced lies are especially significant in political contexts. Authoritarian regimes routinely produce statements that everyone knows to be false—official propaganda that contradicts the evidence of citizens’ own eyes. These statements function not as attempts to deceive but as demonstrations of power: the regime shows that it can compel public affirmation of falsehoods.

Example: A government spokesperson asserts that a peaceful protest was actually a violent riot, despite widely circulated video evidence to the contrary. No one is deceived. But the assertion serves to establish the official narrative, to signal that dissent will not be tolerated, and to force citizens into complicity with falsehood.

The most politically salient recent example is Kellyanne Conway’s use of the phrase “alternative facts” (另类事实) in January 2017. When challenged about demonstrably false claims regarding the size of the crowd at the presidential inauguration, Conway did not retract the claims but instead proposed that they constituted a different but equally valid set of facts. This formulation is philosophically interesting: it performs a kind of epistemic relativism, suggesting that truth is not a single standard but a perspective-dependent one.

Philosophers have noted that “alternative facts” is not merely a confused expression but a strategically deployed one. By framing false claims as “alternative facts” rather than lies, Conway was doing several things simultaneously: refusing to acknowledge a norm of truth, demonstrating that the speaker will not be bound by standard epistemic accountability, and signaling to supporters that loyalty to the group trumps correspondence with external reality. This is closer to the logic of bald-faced lying than to ordinary deception.

3.7 Are Bald-Faced Lies a New Phenomenon?

It is worth asking whether bald-faced lies are a distinctively modern political phenomenon or whether they have always existed. Hannah Arendt, writing about totalitarianism in the mid-twentieth century, identified the systematic use of publicly known falsehoods as a characteristic instrument of authoritarian power. The aim is not to deceive but to demonstrate that the regime controls what counts as truth. Arendt called this the “image-making” function of propaganda: the creation of a parallel reality that citizens are required to publicly affirm, even when they privately disbelieve it.

The modern information environment, however, has given bald-faced lies new reach and new functions. Social media allows bald-faced lies to spread rapidly, to be amplified by supporters, and to create an illusion of mass assent. The visible falsity of the lie becomes, paradoxically, part of its appeal to in-group members: affirming a lie that outsiders recognize as false is a powerful act of group solidarity.


Chapter 4: Bullshit

4.1 Frankfurt’s Analysis

Harry Frankfurt’s On Bullshit (first published as an essay in 1986 and reissued as a book in 2005) provides the foundational philosophical analysis of bullshit (胡说八道). Frankfurt’s central question is deceptively simple: what distinguishes bullshit from lying?

Bullshit (Frankfurt): Bullshit is speech produced with no concern for whether what is said is true or false. The bullshitter is distinguished from the liar by their indifference to truth (对真相的漠视). The liar knows the truth and deliberately says the opposite; the bullshitter does not care about the truth at all.

Frankfurt draws on Max Black’s earlier discussion of “humbug” and develops a precise taxonomy. The key distinctions:

  1. The liar is aware of the truth and tries to lead the audience away from it. The liar is thus, in a perverse way, guided by the truth.
  2. The bullshitter is not guided by the truth at all. The bullshitter’s statements may be true or false—this is beside the point. What matters is that the bullshitter’s aim is not to describe reality but to create an impression.

4.2 The Structural Analysis of Bullshit

Frankfurt’s account involves a deeper structural claim about the aims of bullshit. The liar’s goal is to produce a specific false belief—to make the hearer believe that p is true when it is false. The bullshitter’s goal is quite different: to produce a certain impression of the speaker (关于说话者的印象). The bullshitter is not primarily concerned with what the hearer believes about the world; they are concerned with what the hearer believes about them—their expertise, confidence, authority, or competence.

This is why Frankfurt says the bullshitter is trying to “get away with something.” The politician who bullshits about economic policy is not primarily trying to make voters believe specific false claims about the economy; they are trying to create the impression of someone who knows what they are talking about, someone who should be trusted, someone who has the situation under control.

Example: A student who has not read the assigned texts writes an essay full of plausible-sounding but vague claims. They are not asserting specific falsehoods; they have produced the essay without regard to whether its claims are true or false. Their aim is to appear knowledgeable. This is paradigmatic bullshit on Frankfurt's analysis.

4.3 Why Bullshit Is Worse Than Lies

Frankfurt’s most provocative claim is that bullshit is a greater enemy of truth than lying. His reasoning:

  • The liar at least acknowledges the authority of truth. By attempting to conceal the truth, the liar implicitly recognizes that the truth matters.
  • The bullshitter, by contrast, rejects the very framework in which truth and falsity are relevant. The bullshitter’s indifference to truth corrodes something more fundamental than any particular false belief: it corrodes the very ideal of truth-seeking.

Implications: Frankfurt warns that excessive indulgence in bullshitting can gradually erode the bullshitter's own capacity to distinguish truth from falsehood. The liar must maintain a clear sense of what is true in order to avoid saying it; the bullshitter need not maintain this capacity, and may eventually lose it.

4.4 Why There Is So Much Bullshit

Frankfurt identifies a structural cause: modern life places enormous pressure on people to have opinions about matters they know nothing about. Politicians are expected to pronounce on every issue; pundits must fill 24-hour news cycles; social media users feel compelled to comment on everything. When people are required to speak about matters beyond their knowledge, bullshit is the inevitable result.

Example: A television commentator is asked about a complex geopolitical crisis. They have no genuine expertise but are expected to fill airtime. What they produce is not lying (they are not deliberately saying the opposite of what they know) but bullshit: speech produced without regard for its truth or falsity, aimed at creating an impression of knowledgeability.

4.5 Cassam’s Extension: Bullshit Artists in Politics

Quassim Cassam extends Frankfurt’s account by focusing specifically on what he calls bullshit artists (胡言乱语者) in political life. Where Frankfurt’s bullshitter is primarily characterized by indifference to truth, Cassam’s bullshit artist is more actively engaged: they are concerned not just with creating an impression of themselves but with undermining the epistemic standing of their opponents and of institutions they wish to discredit.

The political bullshit artist employs a range of techniques: making confident assertions without evidence, moving quickly between claims so that no single claim can be pinned down and refuted, attacking the credibility of fact-checkers and journalists, and generating a general atmosphere of epistemic chaos in which it becomes difficult to know what to believe. The cumulative effect is not the transmission of specific false beliefs but the degradation of the information environment as a whole.

4.6 Post-Truth as Institutionalized Bullshit

Cassam further argues that the concept of post-truth (后真相) is best understood as describing a condition in which bullshit has become institutionalized—in which the norms of public discourse have been systematically loosened, so that speakers no longer feel obligated to make their assertions answerable to the evidence.

The post-truth condition is not simply a matter of more people lying or being deceived. It is a structural shift in the norms governing public communication. When a politician’s factual inaccuracies go consistently unpunished by their supporters—when the social sanctions that normally attach to public lying are suspended or redirected—then the Frankfurt-Cassam account suggests we are witnessing the institutionalization of indifference to truth.

Post-Truth vs. Lying: It is a mistake to describe post-truth politics as simply involving "a lot of lying." This framing misses the Frankfurt-Cassam point: the problem is not that specific false statements are being made, but that the normative framework—the expectation that speakers will try to be accurate—is being dismantled. Post-truth is bullshit as a political institution.

4.7 Critique: Is Frankfurt’s Account Too Narrow?

Frankfurt’s analysis has been criticized on several grounds. One important objection concerns accidentally true bullshit. Suppose a bullshitter makes a claim without regard for its truth, and the claim happens to be true. On Frankfurt’s account, this is still bullshit—the truth or falsity of the statement is irrelevant to its classification. But some critics argue that this makes the category of bullshit too wide: it includes not only the paradigm cases Frankfurt has in mind but also cases of lucky guessing, brainstorming, and even some forms of creative speculation.

A related objection is that Frankfurt’s focus on the individual speaker’s indifference to truth may miss the more important phenomenon: institutionalized and systematic bullshit, which does not require any individual actor to be indifferent to truth but which emerges from structural incentives and organizational dynamics. Advertising, public relations, and political spin may involve speakers who care deeply about specific effects (persuading, selling, framing) while being systematically indifferent to truth as a standard—a collective phenomenon not reducible to individual psychology.


Chapter 5: Propaganda and Post-Truth

5.1 Cassam: From Bullshit to Propaganda

Quassim Cassam’s “Bullshit, Post-Truth, and Propaganda” argues that the concepts of bullshit and post-truth (后真相), while widely used, are inadequate tools for understanding contemporary political manipulation. Cassam contends that the more accurate and analytically useful concept is propaganda (宣传).

5.1.1 Cassam’s Critique of Frankfurt

Cassam offers a modified account of bullshitting: bullshitters are people who conceal their own ignorance by pretending to know what they do not know. This differs subtly from Frankfurt’s account. For Frankfurt, the bullshitter is indifferent to truth; for Cassam, the bullshitter is actively concealing their ignorance—which involves a specific kind of deception about one’s own epistemic state.

But Cassam’s main argument is that neither Frankfurt’s account nor his own captures what is most dangerous about contemporary political discourse. The problem is not merely that politicians bullshit. The problem is that they engage in propaganda: systematic, deliberate attempts to manipulate public belief and action through language that exploits cognitive biases, emotional vulnerabilities, and social divisions.

5.1.2 Why “Post-Truth” Is Inadequate

Cassam argues that the label “post-truth” trivializes and misdescribes the tactics of political manipulators. To call our era “post-truth” suggests that truth has simply lost its authority—that people no longer care about truth. But Cassam insists that the reality is more sinister: propagandists actively work to undermine specific truths while promoting specific falsehoods. They are not indifferent to truth; they are hostile to truths that threaten their power.

Propaganda (宣传): Systematic communication designed to manipulate public belief, attitude, or action, typically in the service of political power. Unlike bullshit, propaganda is purposeful and strategic; unlike mere lying, it operates through a range of mechanisms including emotional appeal, framing, repetition, and the exploitation of cognitive biases.

5.2 Stanley’s Account: Propaganda and Democracy

Jason Stanley’s work, particularly How Propaganda Works, develops a philosophically rigorous account of propaganda that complements Cassam’s. Stanley’s central thesis is that propaganda is not merely a form of dishonest communication; it is a mechanism that undermines the epistemic conditions for democracy (破坏民主的认识论条件).

Democratic legitimacy depends on citizens being able to form beliefs through rational deliberation—to weigh evidence, evaluate arguments, and hold their government accountable. Stanley argues that propaganda attacks this capacity by exploiting what he calls flawed ideological beliefs (有缺陷的意识形态信念): beliefs that are deeply held, emotionally charged, and resistant to rational revision. By triggering these beliefs, propaganda short-circuits rational deliberation and replaces it with an appeal to identity, fear, or tribal loyalty.

Stanley distinguishes two main forms of political propaganda:

  1. Supporting propaganda: communication that promotes the interests of the propagandist while purporting to advance the common good.
  2. Undermining propaganda: communication that contributes to political goals by causing the audience to abandon epistemic rationality—by making them distrust institutions, experts, or each other.

The second form is particularly dangerous because it does not require the audience to believe any specific false claim; it only requires them to become epistemically disoriented and to lose confidence in the sources of information that might hold power accountable.

5.3 Habgood-Coote: Stop Talking About Fake News

Joshua Habgood-Coote mounts a related but distinct argument about the concept of fake news (假新闻). His thesis is bold: we should stop using the term “fake news” altogether.

5.3.1 Three Arguments for Abandonment

Habgood-Coote offers three reasons:

  1. Semantic instability: The term “fake news” has no stable public meaning. It is used to describe deliberate disinformation, satire, biased reporting, and news one simply disagrees with. This semantic chaos renders the term analytically useless.

  2. Redundancy: We already have a rich vocabulary for the phenomena “fake news” purports to describe—disinformation (虚假信息), misinformation (错误信息), propaganda, fraud, satire, and so on. The term “fake news” adds nothing that these more precise terms do not already capture.

  3. Propagandistic use: Most dangerously, “fake news” has itself become a vehicle for propaganda. When politicians label legitimate journalism as “fake news,” they exploit the term’s ambiguity to discredit the press and undermine public trust in reliable sources of information.

Example: A political leader routinely dismisses unfavorable news coverage as "fake news." The term functions not as a description of the news but as a rhetorical weapon—a way of discrediting the messenger without engaging with the message. This is precisely the propagandistic use that Habgood-Coote warns against.

5.3.2 Conceptual Engineering

Habgood-Coote’s argument connects to the broader project of conceptual engineering (概念工程)—the idea that we should not simply analyze our existing concepts but actively improve or replace them when they prove defective. On his view, “fake news” is a defective concept that should be retired in favor of more precise and less manipulable alternatives.

Debate: Critics of Habgood-Coote contend that despite its ambiguity, "fake news" picks out a genuine and distinctive phenomenon—deliberately fabricated content designed to look like legitimate journalism—that is not fully captured by older terms like "propaganda" or "disinformation." The debate remains active.

5.4 Dog Whistles: Coded Communication and Political Manipulation

One of the most important mechanisms of contemporary propaganda is the dog whistle (隐语暗号)—coded language that carries one meaning for a general audience and a different, more specific meaning for a targeted sub-audience. The term derives from the analogy of a whistle pitched at a frequency only dogs can hear: the general public hears a neutral or innocuous message, while the intended audience receives a politically charged one.

Jennifer Saul has contributed significantly to the philosophical analysis of dog whistles. She distinguishes between:

  • Explicit dog whistles: cases where the coded meaning is acknowledged within the in-group but deniable in public. A political phrase like “law and order” or “welfare queen,” used in contexts saturated with racial coding, may communicate racial animus to one audience while appearing to address concerns about crime or public spending to another.
  • Implicit dog whistles: cases where even the speaker may not be fully aware of the coded meaning their language carries. The communication of racial resentment may occur through mechanisms (activation of stereotypes, priming of in-group identity) that do not require conscious intent.

The philosophical significance of dog whistles is threefold. First, they illustrate the limits of a purely literal analysis of language: to understand what a dog whistle communicates, one must attend to the context, the audience, and the social history of the expression. Second, they raise questions about moral responsibility: if the speaker can plausibly deny the coded meaning, are they accountable for its effects? Third, they illustrate a form of strategic ambiguity (战略性模糊)—the deliberate cultivation of a gap between literal and communicated meaning—that is central to the propagandist’s toolkit.

5.5 Gaslighting as Epistemic Harm

Gaslighting (煤气灯操控) is a form of psychological manipulation in which a person or group causes the target to question their own perceptions, memories, or sanity. The term derives from the 1944 film Gaslight, in which a husband manipulates his wife into doubting her own perception of reality. In recent years, the concept has been taken up by philosophers as a form of epistemic harm (认识论伤害).

Gaslighting differs from ordinary lying or misleading in that its primary target is not a specific belief but the victim’s epistemic capacities themselves. The gaslighter does not simply try to make the victim believe false things; they try to undermine the victim’s confidence in their own ability to distinguish true from false, real from imagined.

Kate Abramson, one of the philosophers who has analyzed gaslighting most carefully, distinguishes gaslighting from other forms of manipulation by its constitutive features: it involves a systematic attempt to undermine the victim’s rational agency, and it typically involves the manipulation of the victim’s emotional responses as well as their beliefs. The gaslighter may express concern for the victim’s well-being while simultaneously causing them to doubt their own sanity—a combination of apparent care and actual harm that makes the manipulation particularly difficult to resist.

The political dimension of gaslighting connects to Cassam’s analysis of propaganda. A regime that systematically denies the evidence of citizens’ own experience—that insists peaceful protests were violent, that crises do not exist, that victims of oppression are the real aggressors—is engaging in a form of collective gaslighting.

5.6 The Frankfurt School: Structural Conditions of Propaganda

A broader intellectual context for the analysis of propaganda is provided by the Frankfurt School (法兰克福学派), and in particular by Max Horkheimer and Theodor Adorno’s Dialectic of Enlightenment (1944). Their concept of the culture industry (文化工业) anticipates many of the concerns of contemporary propaganda analysis.

Horkheimer and Adorno argued that the mass media of their era—cinema, radio, popular music—had been integrated into the capitalist system of production and distribution in a way that transformed them from vehicles of enlightenment into instruments of social control. The culture industry does not primarily deceive through false statements; it shapes the cognitive and emotional dispositions of its audience so that critical thought becomes difficult and conformity becomes attractive. Entertainment functions as a form of ideological pacification: audiences are kept occupied and satisfied with standardized products that reinforce existing social arrangements.

The Frankfurt School analysis suggests that propaganda need not take the form of individual false assertions. It can operate through the structure of the information environment itself—through what is made salient, what is made entertaining, what is rendered unthinkable. This structural analysis of propaganda connects directly to Stanley’s account of how propaganda undermines the epistemic conditions for democracy.


Chapter 6: Deception and Sexual Consent

6.1 Dougherty’s Argument

Tom Dougherty’s “Sex, Lies, and Consent” argues that deceiving someone into sex is seriously morally wrong (严重的道德错误) whenever the deception concerns a deal-breaker (决定性因素)—a fact that, if known, would have led the deceived party to refuse consent.

Deal-Breaker (决定性因素): A fact about a person or situation such that, if the other party were aware of it, they would not consent to sexual activity. Examples include: being married to someone else, having a sexually transmitted infection, or misrepresenting one's identity.

6.1.1 The Core Argument

Dougherty’s argument proceeds in three steps:

  1. Deception vitiates consent (欺骗使同意无效): When a person consents to sex based on false information deliberately provided by their partner, their consent is not genuine. They have consented to sex with a person (as described), not to sex with the person as they actually are.

  2. Sex without genuine consent is seriously wrong: This premise draws on the widely accepted principle that sexual activity requires informed consent (知情同意).

  3. Therefore, deception about deal-breakers is seriously wrong: If the deception concerns a fact that would have been decisive for the other party, then the deceiver has effectively bypassed the other party’s autonomous decision-making about their own body and sexual activity.

6.1.2 Scope of the Argument

Dougherty defends a broad version of this argument: culpably deceiving another person into sex is seriously wrong regardless of the content of the deception. What matters is not whether the deal-breaker is “reasonable” or “important” by some external standard, but whether it is genuinely decisive for the person who was deceived.

Example: A person lies about their profession to secure a sexual encounter. If the other party would not have consented had they known the truth, then the deception vitiates their consent, and the sexual activity is seriously morally wrong—even though the deception concerns a seemingly trivial matter.

6.2 Disclosure Obligations and Conditions on Valid Consent

A key conceptual distinction in the literature is between disclosure obligations (披露义务) and conditions on valid consent (有效同意的条件). These two concepts are related but distinct:

  • A disclosure obligation is a duty to share relevant information with a potential sexual partner. Violating a disclosure obligation is morally wrong, but its wrongness may be independent of whether it vitiates consent.
  • A condition on valid consent is a feature of the consent situation such that, if it is not met, the consent that results is not genuine or binding.

Dougherty’s argument involves both concepts but focuses primarily on the second. He argues that deception about deal-breakers does not merely violate a disclosure obligation; it undermines the validity of the consent that results. This is a stronger and more controversial claim.

6.3 Bromwich and Manson’s Conflation Objection

Bromwich and Manson raise what they call the conflation objection: Dougherty conflates the disclosure requirement and the understanding requirement in a way that leads to implausible results.

Their argument: there may be many pieces of information that a potential partner would want to know and that, if known, would lead them to refuse consent—but this does not mean that withholding such information vitiates the consent that results. The validity of consent depends on whether the consenting party understood what they were consenting to, not on whether they had access to all information that might have influenced their decision.

On Bromwich and Manson’s view, deception may violate a duty of disclosure without vitiating consent. Consent is vitiated only when the deception concerns the very nature of the act or the basic identity of the actor—not whenever it concerns a fact that the deceived party would have found decisive.

Dougherty’s response is that this distinction is less clear than it appears. It is difficult to specify, independently of the victim’s values, what counts as “the nature of the act” or “basic identity.” If the deceived party’s conception of who they were having sex with is partly constituted by facts that turn out to be false—profession, relationship status, health status, shared values—then their consent was indeed not consent to the act as it actually occurred.

6.4 Identity-Based Deception and Catfishing

A category of cases that has attracted increasing philosophical attention is identity-based deception (身份欺骗), of which catfishing (网络伪装) is the most familiar contemporary example. Catfishing involves creating a false online identity in order to deceive a target into a romantic or sexual relationship.

Identity-based deception raises specific philosophical questions. First, there is the question of which aspects of identity are relevant to consent. Dougherty’s deal-breaker framework seems well suited to handle catfishing: if the catfisher misrepresents their gender, age, appearance, or basic biographical facts, and if the target would not have consented had they known the truth, then the consent is vitiated.

But more complex cases arise. Suppose a person misrepresents not their biological identity but their social or cultural identity—their religion, ethnicity, or nationality—and the target would not have consented had they known the truth. Does this vitiate consent? The deal-breaker framework says yes, but this implication troubles some theorists, who worry that it reinforces exclusionary or prejudicial identity requirements.

6.5 West’s Challenge: Does Consent Set the Bar Too Low?

Robin West and other feminist theorists have raised a deeper challenge to consent frameworks: even when consent is genuinely given, freely and informedly, sexual interactions may be harmful and wrong. This is the claim that consent sets the bar too low (同意设定了过低的标准).

West’s argument is that consent frameworks focus on the minimum condition for permissibility (absence of coercion and deception) rather than on the positive conditions for genuinely good sexual relations (mutual respect, genuine desire, equality of power). A focus on consent can obscure the many ways in which sexual interactions can be harmful, demeaning, or exploitative while remaining “consensual” in the technical sense.

This critique has implications for Dougherty’s analysis. Dougherty’s framework is designed to identify when deception renders otherwise permissible sex impermissible. But West’s challenge suggests that this framing already concedes too much: it assumes that sexual activity is permissible whenever consent (properly understood) is present, and that the only wrong worth analyzing is the failure of consent. West’s view suggests that we need a richer normative framework that goes beyond consent.

Connection to Non-Ideal Theory: West's critique of consent frameworks mirrors Cappelen and Dever's critique of idealized philosophy of language. Just as idealized language theory cannot account for the ways language is used to harm and oppress, consent-focused sexual ethics cannot account for the full range of ways in which sexual interactions can be harmful. Both critiques call for a move to a richer, more socially embedded normative framework.

Chapter 7: Racist Deception and Negligent Falsehood

7.1 Yancy, Jones, and Epistemologies of Ignorance

George Yancy and Janine Jones explore the ways in which racism operates through systematic deception—not just individual lies, but entire structures of knowledge and ignorance that conceal the reality of racial oppression.

7.1.1 White Ignorance: Mills’s Epistemology

Drawing on Charles Mills’s concept of white ignorance (白人无知), Yancy argues that white people in racist societies are systematically shielded from knowledge about the reality of racial oppression. This ignorance is not accidental; it is produced and maintained by social structures, educational institutions, media representations, and everyday conversational practices.

White Ignorance (白人无知): A form of structurally produced and maintained ignorance about the realities of racial oppression, experienced by members of dominant racial groups. It involves not merely a lack of knowledge but an active resistance to acquiring such knowledge. (Mills, 2007)

Mills develops this concept in his essay “White Ignorance” as part of his broader project of non-ideal epistemology (非理想认识论). Standard epistemology, Mills argues, assumes an idealized knower: a rational, unbiased individual who forms beliefs in accordance with evidence. Non-ideal epistemology recognizes that actual knowing agents are embedded in social structures that systematically shape what they are exposed to, what they are motivated to know, and what they are permitted to believe.

White ignorance is produced through several mechanisms. First, there is selective attention: white-dominated media and educational institutions highlight stories and facts that are consistent with racial innocence narratives and suppress or marginalize stories that reveal racial injustice. Second, there is motivated inattention: members of the dominant group have psychological and material incentives not to acknowledge racial injustice, since acknowledgment would require moral and political response. Third, there is testimonial injustice: the testimony of members of oppressed groups is systematically given less credibility than the testimony of members of dominant groups, so that reports of racist harm are discounted or disbelieved.

White ignorance is the product of an epistemology of ignorance (无知的认识论)—an inverted epistemology in which the cognitive resources of the dominant group are systematically oriented away from racial truths. This is not mere individual bias but a collective epistemic failure with deep structural roots.

7.1.2 Fricker’s Hermeneutical Injustice

Miranda Fricker’s concept of hermeneutical injustice (诠释性不公正) provides an important complement to Mills’s white ignorance. Hermeneutical injustice occurs when a gap in collective interpretive resources puts someone at an unfair disadvantage when trying to make sense of their own social experience.

The classic example Fricker gives is sexual harassment before the concept existed: women who experienced harassing behavior by supervisors often struggled to articulate what was happening to them, because the collective interpretive vocabulary did not yet include a concept that accurately captured their experience. This was not merely a matter of lacking a word; it was a matter of lacking the conceptual resources to understand their own situation.

Applied to racial epistemology, hermeneutical injustice explains why white ignorance is so resistant to correction. Even when members of dominant groups are exposed to testimony about racial harm, they may lack the conceptual resources to properly interpret it—to understand what structural racism is, how it operates, and how it differs from individual prejudice. The concepts available in mainstream discourse may systematically distort their understanding.

7.1.3 Racist Deception as Structural Phenomenon

Yancy emphasizes that racist deception is not limited to individual acts of lying. Rather, entire discourses, narratives, and systems of representation function to conceal racial injustice. The “deception” is distributed across institutions, media, and everyday interactions, making it difficult for any single individual to identify or resist.

Example: The widespread narrative that racial inequality is the result of individual failures of effort or character, rather than systemic discrimination, functions as a form of racist deception—one that is reproduced across news media, political rhetoric, educational curricula, and everyday conversation.

7.2 Anderson on Racist Humor and Implicit Communication

Luvell Anderson examines how racist humor (种族歧视幽默) and other forms of implicit communication serve as vehicles for the transmission and normalization of racist attitudes. Racist jokes, even when they are not explicitly asserted as true claims, can function to reinforce stereotypes, signal in-group membership, and create hostile environments for members of targeted groups.

Anderson’s analysis connects to the broader theme of non-ideal philosophy of language: the harm of racist humor cannot be captured by looking only at the literal content of what is said. It operates through implicature, presupposition, and the activation of background stereotypes—precisely the mechanisms that standard philosophy of language has analyzed in sanitized, idealized terms.

7.3 Saul on Negligent Falsehood

Jennifer Saul’s “Negligent Falsehood, White Ignorance, and False News” introduces the concept of negligent falsehood (过失性虚假陈述) to capture a category of untruth that falls between deliberate lying and innocent error.

7.3.1 The Concept

Negligent Falsehood (过失性虚假陈述): A false assertion made by a speaker who does not know it is false, but who should know it is false—who would know it is false if they had exercised the epistemic diligence that their social position and communicative role demand.

Saul’s concept fills an important gap in the existing taxonomy of deception. Traditional analyses focus on the distinction between lying (knowingly asserting falsehood) and honest error (unknowingly asserting falsehood). But there is a morally significant category between these two: the speaker who asserts falsehood out of culpable ignorance—ignorance that could and should have been avoided.

7.3.2 Epistemic Obligations of Speakers

Negligent falsehood implies that speakers have epistemic obligations (认识论义务) that vary with their social position and communicative role. A private individual who repeats a false claim they heard from a friend may not be culpably negligent. A journalist, politician, or public intellectual who makes false claims about social groups without checking the evidence is culpably negligent, because their communicative role places special demands on epistemic diligence.

This is a substantive claim: it holds that the epistemic obligations of speech are not uniform but are indexed to social position and power. Those who speak from positions of authority and reach bear a heightened obligation to verify claims, especially claims about vulnerable groups who lack the power to easily correct false narratives about themselves.

7.3.3 The Connection to White Ignorance

Saul explicitly connects negligent falsehood to Mills’s concept of white ignorance. Many of the false claims made about race—about crime rates, about the causes of inequality, about the character of different racial groups—are made by speakers who do not know they are false but who would know they are false if they had bothered to examine the evidence. Their ignorance is itself a product of the structures of white ignorance that Yancy describes.

Example: A politician asserts that a particular minority group is responsible for a disproportionate share of violent crime. The politician may genuinely believe this claim. But the claim is false, and the politician's belief is the product of selective exposure to misleading statistics, biased media coverage, and a failure to consult reliable sources. The assertion is not a lie, but it is a negligent falsehood—and it is morally blameworthy precisely because the speaker's ignorance is culpable.

7.3.4 Color-Blind Racism as Negligent Falsehood

Color-blind racism (色盲种族主义) is the ideology that holds that race should be and largely is ignored in social life—that the best way to address racism is to stop talking about race, to treat all individuals as individuals, and to refuse to use race as a category of analysis or policy. Eduardo Bonilla-Silva and others have argued that color-blind racism is itself a form of racism, because it operates to maintain racial inequality by denying the existence of the structural factors that produce it.

Color-blind racism is particularly susceptible to analysis as negligent falsehood. Those who sincerely hold color-blind racist beliefs typically do not see themselves as racist; they genuinely believe that racial disparities are explained by non-racial factors (culture, individual effort, economic incentives). But the evidence for structural racism is substantial and widely available. The color-blind racist’s ignorance of this evidence is culpable: it is sustained by the same mechanisms of white ignorance that Mills and Yancy describe, and it could be corrected by the very epistemic diligence that the speaker’s social position and communicative role demand.

7.3.5 Negligent Falsehood and False News

Saul extends her analysis to the phenomenon of false news (虚假新闻). Much of the misinformation that circulates in contemporary media is not produced by deliberate liars but by negligent speakers—people who share and amplify false claims without exercising basic epistemic diligence. Saul argues that this negligence is morally blameworthy and that we need conceptual resources beyond “lying” and “honest mistake” to capture its distinctive character.

Connection to Habgood-Coote: Saul's preference for the term "false news" over "fake news" aligns with Habgood-Coote's argument that "fake news" is a semantically unstable and propagandistically exploitable term. "False news" is more precise: it simply describes news that is false, without the connotations of deliberate fabrication or political weaponization.

Chapter 8: Echo Chambers and Epistemic Bubbles

8.1 Nguyen’s Distinction

C. Thi Nguyen’s “Echo Chambers and Epistemic Bubbles” introduces a crucial distinction between two phenomena that are routinely conflated in public discourse and academic research alike.

Epistemic Bubble (认知泡沫): A social epistemic structure in which relevant voices and sources of information are absent—not actively excluded, but simply not heard. Members of an epistemic bubble lack exposure to relevant information and arguments, but they have not been given reasons to distrust outside sources.

Echo Chamber (回音室): A social epistemic structure in which members have been brought to actively distrust all outside sources of information. In an echo chamber, other voices are not merely absent—they have been systematically discredited.

8.1.1 The Structural Difference

The key structural difference between the two phenomena is this:

  • In an epistemic bubble, the problem is one of omission: relevant voices are missing. The bubble is maintained by algorithmic filtering, social homophily, or simple lack of access to diverse sources.
  • In an echo chamber, the problem is one of active exclusion and discrediting: members have been taught to regard outside sources as untrustworthy, biased, or malicious. The chamber is maintained by a systematic manipulation of trust.

Nguyen’s more precise formulation focuses on the source-vetting process (信源评估过程). In a healthy epistemic community, members evaluate sources on the basis of track record, expertise, transparency, and method. In an echo chamber, this process has been corrupted: members have been given misleading information about which sources are reliable, and this corruption makes the echo chamber difficult to escape even when members are exposed to accurate information.

8.1.2 Different Solutions

This structural difference has profound practical consequences. An epistemic bubble can be burst simply by exposing its members to the missing information. Present someone in an epistemic bubble with relevant evidence, and they may well update their beliefs.

An echo chamber, however, cannot be escaped so easily. Presenting new evidence to someone inside an echo chamber may actually reinforce the chamber, because the member has been primed to interpret outside evidence as hostile, biased, or fabricated. The very act of presenting counter-evidence confirms what the echo chamber told them: that outsiders are trying to manipulate them.

Example: A person who only follows certain news sources on social media may be in an epistemic bubble—they simply have not encountered alternative viewpoints. Exposing them to a well-sourced article from a different perspective may change their mind.

A person who has been told that mainstream media is systematically lying, that scientists are part of a conspiracy, and that anyone who disagrees is either naive or malicious is in an echo chamber. Showing them a well-sourced article from a mainstream outlet may only confirm their suspicion that "they" are trying to deceive them.
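The structural difference can be made concrete with a toy Bayesian model. The sketch below is my illustration, not a formalism from Nguyen’s paper: it assumes, for simplicity, that each agent assigns some prior probability that the outside source is “hostile” (i.e., that it asserts the mainstream line no matter what the truth is), and all of the numbers are invented.

```python
# A minimal Bayesian sketch (invented for illustration; not from Nguyen's
# paper) of why the same report moves a bubble member but barely moves an
# echo-chamber member. H = "the mainstream consensus is false"; the
# outside source reports that the consensus is true.

def posterior_h(prior_h, prior_hostile):
    """P(H | the source reports the consensus is true), marginalizing over
    whether the source is reliable or hostile. A reliable source tracks
    the truth 90% of the time; a hostile source (as the echo chamber
    characterizes outsiders) asserts the consensus no matter what."""
    p_reliable = 1.0 - prior_hostile
    like_h     = prior_hostile * 1.0 + p_reliable * 0.1  # report, given H true
    like_not_h = prior_hostile * 1.0 + p_reliable * 0.9  # report, given H false
    numerator = prior_h * like_h
    return numerator / (numerator + (1.0 - prior_h) * like_not_h)

# Both agents start undecided (P(H) = 0.5); they differ only in how far
# they trust the source, which is exactly Nguyen's structural difference.
print(f"bubble member:  P(H) = {posterior_h(0.5, prior_hostile=0.05):.2f}")  # 0.14
print(f"chamber member: P(H) = {posterior_h(0.5, prior_hostile=0.95):.2f}")  # 0.49
```

On these invented numbers, the bubble member’s credence in the conspiracy hypothesis falls from 0.50 to about 0.14, while the chamber member’s barely moves. And if distrust of outsiders were itself evidentially tied to the hypothesis, as conspiracy narratives typically arrange, the same report could push the chamber member’s credence upward.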

8.2 Algorithmic Echo Chambers and Social Media

Nguyen’s analysis was originally framed in terms of social communities, but it applies with particular force to the algorithmically mediated information environments of social media platforms. Recommendation algorithms on platforms like Facebook, YouTube, and Twitter are designed to maximize engagement. They do this by learning what content a user responds to positively and serving more of it.

The result is a form of algorithmic echo chamber (算法回音室): not a community of people who have explicitly agreed to distrust outsiders, but a technological system that progressively filters the information environment toward content that confirms existing beliefs and emotions. Research by Renée DiResta and others has documented how recommendation algorithms can lead users from moderate content toward increasingly extreme positions in a matter of weeks, by following the path of maximum engagement.

The philosophical point is that algorithmic echo chambers corrupt the source-vetting process just as effectively as community-based echo chambers. Users who have been algorithmically steered toward a set of sources may develop strong beliefs in the reliability of those sources, not because they have been explicitly told to trust them but because repeated positive reinforcement has shaped their epistemic dispositions.
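To see how little user bias is needed, consider the following toy recommender—a hypothetical sketch rather than any platform’s actual system; the click probabilities and the greedy serving rule are invented for illustration.

```python
# A toy engagement-maximizing recommender, invented for illustration.
# The user's only bias: they are modestly likelier to engage with
# belief-confirming content. The recommender greedily serves whichever
# topic has the higher observed engagement rate.

CLICK_PROB = {"confirming": 0.6, "challenging": 0.4}

clicks = {t: 1.0 for t in CLICK_PROB}  # smoothed running click totals
shown  = {t: 2.0 for t in CLICK_PROB}  # smoothed running serve counts

for _ in range(500):
    # Serve the topic that currently looks most engaging.
    topic = max(CLICK_PROB, key=lambda t: clicks[t] / shown[t])
    shown[topic] += 1
    clicks[topic] += CLICK_PROB[topic]  # expected clicks; deterministic sketch

share = shown["confirming"] / (shown["confirming"] + shown["challenging"])
print(f"share of feed that confirms existing beliefs: {share:.1%}")  # 99.6%
```

A 60/40 preference hardens into a feed that is more than 99% belief-confirming. No one instructed the system to exclude challenging content; the exclusion falls out of optimizing for engagement, which is how a technological system can reshape epistemic dispositions without ever explicitly discrediting outside sources.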

8.3 Sunstein and Group Polarization

Cass Sunstein’s earlier work on group polarization (群体极化) provides important background for understanding echo chambers. Sunstein documented the empirical phenomenon that groups of like-minded individuals, when they deliberate together, tend to reach more extreme positions than the average of the individual members’ initial positions.

Group polarization occurs through two main mechanisms:

  • The informational mechanism: in a homogeneous group, most of the arguments presented favor the group’s initial position, so members encounter more arguments on one side than the other and update accordingly toward more extreme versions of their initial view.
  • The social comparison mechanism: members of a group want to be perceived favorably by other members, and when the group’s position is known, members have an incentive to position themselves at or beyond the group’s norm, creating a dynamic of escalating extremity.

Sunstein’s analysis suggests that echo chambers are not merely a matter of lacking information; they are actively belief-distorting environments. The deliberative dynamics within an echo chamber are systematically biased toward the most extreme versions of the group’s existing views.
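The informational mechanism can be simulated in a few lines. The model below is a hypothetical sketch, not anything from Sunstein’s empirical work: the positions, update rule, and step size are all invented.

```python
import statistics

# A toy persuasive-arguments model of group polarization, invented for
# illustration. Positions run from 0 (strongly anti) to 1 (strongly pro),
# with 0.5 neutral. Every member starts mildly pro, so every argument
# voiced in the room favors the pro side.

members = [0.55, 0.60, 0.65, 0.70, 0.60]
print(f"initial mean position: {statistics.mean(members):.2f}")  # 0.62

for _ in range(10):  # ten rounds of deliberation
    # Share of pro arguments voiced = share of members on the pro side.
    pro_share = sum(m > 0.5 for m in members) / len(members)  # here: 1.0
    # Each member takes a small step toward the argument mix they heard.
    members = [m + 0.2 * (pro_share - m) for m in members]

print(f"mean after deliberation:  {statistics.mean(members):.2f}")  # 0.96
```

The group ends up more extreme than any member began (the most extreme starting position was 0.70). What drives the shift is the one-sidedness of the argument pool, not any individual’s initial extremity, which is precisely the sense in which like-minded deliberation is itself belief-distorting.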

8.4 Echo Chambers and Epistemic Autonomy

Nguyen argues that echo chambers are particularly insidious because they undermine epistemic autonomy (认知自主性)—the capacity to form beliefs through one’s own rational engagement with evidence. Echo chamber members may feel that they are thinking independently, precisely because they have been taught to distrust the mainstream. But their apparent independence is illusory: their beliefs are shaped by the systematic manipulation of their trust structures.

Connection to Propaganda: Nguyen's analysis of echo chambers connects directly to Cassam's analysis of propaganda. Propaganda creates and maintains echo chambers by systematically discrediting alternative sources of information. The echo chamber, in turn, makes its members more susceptible to further propaganda. The result is a self-reinforcing cycle of manipulation and isolation.

8.5 Can Epistemic Bubbles Be Rational?

A challenge to the entirely negative picture of epistemic bubbles is that filtering information (过滤信息) can be a rational cognitive strategy. We are all exposed to far more information than we can process; selective attention is not merely a bias but a cognitive necessity. The filter bubble (过滤泡沫)—Eli Pariser’s term for the personalized information environment created by recommendation algorithms—can be understood as an extension of rational cognitive efficiency.

On this view, there is nothing inherently irrational about forming beliefs primarily on the basis of a curated subset of available information, as long as the curation process is not systematically biased against true beliefs. The problem with algorithmic filter bubbles is not that they filter, but that they filter on the basis of engagement rather than accuracy—selecting for emotionally resonant and confirmatory content rather than reliable content.

This suggests that the right response to epistemic bubbles is not to demand that individuals expose themselves to all available information (which is impossible) but to design curation systems that are aligned with epistemic values—that select for accuracy, diversity, and reliability rather than engagement.
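The design point can be made concrete with a small sketch; the items and both sets of scores below are invented for illustration.

```python
# One content pool, two curation objectives. Each item is
# (headline, accuracy, expected engagement); all scores are invented.

items = [
    ("Outrage-bait rumor",          0.20, 0.90),
    ("Careful investigative piece", 0.90, 0.40),
    ("Emotive conspiracy thread",   0.10, 0.80),
    ("Dry agency report",           0.95, 0.20),
    ("Balanced explainer",          0.80, 0.50),
]

def top3(score):
    """Return the three highest-scoring headlines under a given objective."""
    return [name for name, _, _ in sorted(items, key=score, reverse=True)[:3]]

print("ranked by engagement:", top3(lambda item: item[2]))
print("ranked by accuracy:  ", top3(lambda item: item[1]))
# engagement surfaces the rumor and the conspiracy thread;
# accuracy surfaces the report, the investigation, and the explainer.
```

Both feeds filter, and both are “curated” in exactly the same sense; they differ only in the objective function. That is why the epistemic complaint attaches to engagement-based curation specifically, not to curation as such.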

8.6 Why This Matters for Misinformation

Nguyen’s distinction explains why misinformation is so resistant to correction. If the problem were merely one of epistemic bubbles, then fact-checking and media literacy would be sufficient remedies. But if the problem is one of echo chambers, then these remedies may be counterproductive: they may be perceived as further evidence of the conspiracy that the echo chamber posits.

This analysis suggests that addressing misinformation requires more than simply providing correct information. It requires rebuilding trust—a much more difficult and long-term project.


Chapter 9: Rationality and Misinformation

9.1 Levy’s Challenge

Neil Levy’s “Echoes of Covid Misinformation” mounts a provocative challenge to the standard narrative about echo chambers and irrationality. The standard narrative holds that people who believe Covid misinformation—who oppose vaccines, masks, and lockdowns—are irrational, trapped in echo chambers, and need to be made more rational. Levy argues that this narrative is wrong on almost every count.

The puzzle Levy begins with is one that any honest observer of the Covid-19 information landscape must confront: why did educated, intelligent, apparently reasonable people come to hold beliefs that contradicted the scientific consensus? Standard explanations appeal to irrationality, motivated reasoning, or cognitive bias. Levy argues that these explanations are both factually inadequate and morally counterproductive.

9.2 The Social Epistemology of Testimony

To understand Levy’s argument, it is necessary to begin with the social epistemology of testimony (证言). One of the most fundamental facts about human knowledge is that most of what we believe, we believe on the basis of what other people tell us. We have not personally verified that climate change is human-caused, that vaccines are safe, that the earth is billions of years old, or that germs cause disease. We believe these things because we trust the relevant experts and institutions.

This reliance on testimony is not a cognitive weakness; it is a rational response to the division of epistemic labor in a complex society. No individual can verify even a small fraction of the claims they need to hold in order to function. Trust in reliable informants is epistemically rational, not merely expedient.

The crucial question is: how do we identify which informants are reliable? This is the question of source vetting (信源评估), and it is here that the social epistemology of testimony intersects with the problem of misinformation.

9.3 The Rationality of Echo Chamber Belief

Levy’s central thesis: belief formation within echo chambers is often rational (回音室中的信念形成通常是理性的). His argument proceeds as follows:

  1. Human beings are epistemically social animals (社会性认知动物). We cannot individually verify most of the things we believe. We rely on testimony, on the judgments of experts, and on the epistemic communities in which we are embedded.

  2. This reliance on higher-order evidence (高阶证据)—evidence about the reliability of sources, rather than direct evidence about the world—is not a failure of rationality. It is a rational response to our epistemic situation. We ought to defer to genuine experts, and we ought to update our beliefs when our trusted sources update theirs.

  3. Within an echo chamber, the same rational processes operate. If your epistemic community contains people who present themselves as experts, and if those “experts” change their views, it is rational (given your evidence) to change yours.

Higher-Order Evidence (高阶证据): Evidence about the reliability, expertise, or trustworthiness of a source, as opposed to first-order evidence about the subject matter itself. Example: knowing that a particular scientist has a strong track record is higher-order evidence that bears on how much weight to give their claims.
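A worked Bayesian sketch can show how higher-order evidence does this work. This is my gloss rather than Levy’s own formalism, and the numbers are invented: p_reliable stands for the agent’s credence, grounded in their community’s vouching, that the source tracks the truth rather than emitting noise.

```python
# How much an assertion should move you depends on your higher-order
# evidence about the source. Invented model: a truth-tracking source
# asserts claim C with probability 0.9 if C is true and 0.1 if C is
# false; a noise source asserts C with probability 0.5 either way.

def update_on_testimony(prior_c, p_reliable):
    """P(C | the source asserts C)."""
    like_true  = p_reliable * 0.9 + (1 - p_reliable) * 0.5
    like_false = p_reliable * 0.1 + (1 - p_reliable) * 0.5
    numerator = prior_c * like_true
    return numerator / (numerator + (1 - prior_c) * like_false)

print(update_on_testimony(0.5, p_reliable=0.9))  # 0.86: trusted source, large update
print(update_on_testimony(0.5, p_reliable=0.2))  # 0.58: distrusted source, small update
```

The computation is identical whether the trust is well placed or not. An agent whose community has miscalibrated p_reliable at 0.9 for an unreliable source will end up confidently wrong by a perfectly standard update, which is Levy’s point: the failure lies in the higher-order evidence, not in the reasoning applied to it.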

9.4 The Problem Is Not Irrationality

Levy draws a surprising conclusion: the problem of Covid misinformation is not a problem of irrationality. People who believe misinformation are often responding rationally to the (misleading) higher-order evidence available to them. They trust certain sources—sources that happen to be unreliable—and they update their beliefs in light of what those sources say. This is exactly what rational epistemic agents do.

The implication is that calling people “irrational” for believing misinformation is both inaccurate and counterproductive. It is inaccurate because their belief-formation processes are structurally rational. It is counterproductive because it further alienates them and reinforces the echo chamber’s narrative that outsiders are dismissive and hostile.

9.5 Inoculation Theory: Pre-Bunking vs. Debunking

One important implication of Levy’s analysis concerns the relative effectiveness of different strategies for combating misinformation. The traditional approach is debunking (事后辟谣): providing accurate information to correct false beliefs after they have formed. Levy’s analysis suggests that debunking is limited in effectiveness, precisely because the beliefs have been formed through rational deference to trusted sources. Providing contrary information from outside the trusted network may simply fail to penetrate, or may be rejected as hostile.

Inoculation theory (接种理论) proposes an alternative: pre-bunking (事前预防), or exposing people to weakened forms of misinformation and the techniques used to spread it before they encounter the real thing. The analogy to vaccination is deliberate: just as a vaccine introduces a weakened pathogen to build immune resistance, pre-bunking introduces weakened misinformation to build cognitive resistance.

Research by Sander van der Linden and colleagues has found that inoculation is more effective than debunking in a number of contexts. By explaining the techniques of manipulation—false experts, emotional appeals, conspiracy framing—before people encounter them, inoculation helps people resist manipulation without requiring them to evaluate specific claims against their existing source networks.

The connection to Levy’s analysis is that inoculation works by improving people’s higher-order evidence—their evidence about which sources and techniques to trust. Rather than trying to correct specific false beliefs (which may be rationally held given existing source networks), inoculation changes the source network itself by building awareness of manipulation tactics.

9.6 What Should We Do?

If the problem is not irrationality, then the solution is not to make people more rational. Levy argues that there is no “special problem of echo chambers”—rather, there is a general problem of misleading higher-order evidence. The solution is to supplant unreliable higher-order evidence with better evidence. This means:

  1. Improving the information environment: Making reliable sources more accessible, visible, and trustworthy.
  2. Addressing the root causes of distrust: Understanding why people have come to distrust mainstream institutions (media, science, government) and working to rebuild that trust.
  3. Not attempting to dismantle echo chambers directly: Direct attacks on echo chambers may be perceived as further evidence of the conspiracy. The goal should be to improve the overall epistemic environment, not to confront individuals.

Connection to Nguyen: Levy's analysis complements Nguyen's distinction between echo chambers and epistemic bubbles. Nguyen shows that echo chambers are harder to escape than epistemic bubbles because they manipulate trust; Levy shows that the beliefs formed within echo chambers are often rationally held given the available evidence. Together, they suggest that the problem of misinformation is fundamentally a problem of trust and evidence quality, not of individual cognitive failure.

9.7 Partisan Divergence and Covid

Levy uses the specific case of Covid-19 misinformation (新冠疫情错误信息) to illustrate his argument. Public support for pandemic responses (lockdowns, mask mandates, vaccines) diverged sharply along partisan lines in many countries. Conservatives tended to oppose these measures; liberals tended to support them.

On the standard narrative, this divergence reflects conservative irrationality or susceptibility to misinformation. Levy argues instead that it reflects different epistemic communities with different trusted sources. Conservative media figures and politicians cast doubt on pandemic measures; liberal media figures and politicians supported them. Individuals on both sides updated their beliefs in response to their trusted sources—a rational process applied to different bodies of (higher-order) evidence.

Example: A person whose trusted news sources consistently feature interviews with scientists who question vaccine efficacy may rationally conclude that vaccines are less effective than mainstream science claims. The problem is not that this person is irrational, but that their higher-order evidence—their information about which sources to trust—is misleading.

9.8 Synthesizing Conclusion: Language, Power, and Epistemic Harm

The arc of this course reveals a common thread running through all nine chapters. Philosophy of language, when it takes the real world seriously, is inescapably a philosophy of power and harm.

Chapter 1 established the methodological foundation: idealized theories of language cannot account for the ways language is actually used to deceive, manipulate, and oppress. Cappelen and Dever’s call for a non-ideal philosophy of language, paralleling Mills’s critique of ideal theory in political philosophy, is the organizing principle of the course.

Chapters 2 and 3 refined the basic taxonomy of dishonest communication. Saul’s distinction between lying and misleading showed that the mechanism of deception matters for our conceptual taxonomy but may not matter morally. Carson’s analysis of bald-faced lies showed that deception is not even necessary for lying: the violation of the norm of assertion is sufficient. Together, these analyses reveal that the wrongness of dishonest speech is more complex and more various than simple accounts of lying can capture.

Chapter 4 introduced bullshit as a category distinct from lying: speech produced not in defiance of truth but in indifference to it. Frankfurt’s analysis of the bullshitter’s structural indifference to truth identified a threat to epistemic culture more corrosive than lying, because it undermines the very framework within which truth matters.

Chapter 5 scaled up the analysis from individual speech acts to systematic political communication. Propaganda, post-truth, dog whistles, gaslighting, and the structural analysis of the culture industry all illustrate the ways in which the mechanisms of language and communication can be weaponized at scale. Stanley’s account of propaganda’s attack on democratic epistemology showed that the stakes of these linguistic phenomena extend beyond individual harm to the conditions of political legitimacy itself.

Chapter 6 applied the framework to the ethics of sexual consent, showing how deception in intimate contexts connects the philosophy of language to the philosophy of bodily autonomy. Dougherty’s deal-breaker analysis, contested by Bromwich and Manson and challenged by West’s feminist critique of consent frameworks, illustrates how non-ideal philosophy of language intersects with feminist ethics.

Chapter 7 turned to the specifically racial dimensions of epistemic harm. Mills’s white ignorance, Saul’s negligent falsehood, and the analysis of color-blind racism as a form of culpable ignorance all show how the structures of racist society are reproduced through the mechanisms of language and knowledge—not only through deliberate lies but through the systematic production and maintenance of ignorance.

Chapter 8 analyzed the social epistemological structures that amplify and entrench misinformation. Nguyen’s distinction between echo chambers and epistemic bubbles, combined with Sunstein’s account of group polarization and the analysis of algorithmic amplification, showed that the problem of misinformation is not merely a problem of individual irrationality but of corrupted social epistemic structures.

Chapter 9 completed the picture by showing, through Levy’s analysis, that the beliefs produced within these corrupted structures are often rationally formed given the available evidence. The problem is not that people are irrational; the problem is that the information environment has been systematically corrupted by the mechanisms—lying, bullshitting, propaganda, dog whistles, gaslighting, echo chambers—that the course has been analyzing throughout.

The conclusion is one of qualified pessimism combined with practical urgency. The mechanisms of epistemic harm are powerful, structural, and self-reinforcing. They cannot be addressed merely by providing more accurate information or by denouncing irrationality. They require the more demanding work of conceptual engineering, institutional reform, and the patient rebuilding of epistemic trust.

Final Reflection: The arc of this course—from non-ideal philosophy of language through lying, bullshit, propaganda, and echo chambers—reveals that language is not merely a tool for transmitting information. It is a site of power, manipulation, and struggle. Understanding how language can be used to deceive, oppress, and mislead is not a peripheral concern for philosophy of language. It is, as Cappelen and Dever argue, central to any philosophy of language that takes the real world seriously. The analysis of lies, misinformation, and their spread is not an application of philosophy of language to practical concerns; it is philosophy of language done properly.