SOC 312: Nature of Scientific Knowledge

Matthew Silk

Estimated study time: 1 hr 24 min

Sources and References

This document synthesizes material from A.F. Chalmers, What Is This Thing Called Science? (4th ed.), supplemented by works from Heather Douglas, Helen Longino, Stuart Firestein, George Nicholas, Kyle Whyte, David Harker, Harry Collins and Robert Evans, Kathryn Plaisance and Ethan Kennedy, S. D. Eigenbrode et al., Maya Goldenberg, Stefano Mammola and Ruben Martinez, and Kumar et al. Additional context draws on the Stanford Encyclopedia of Philosophy entries for scientific realism, the problem of induction, Thomas Kuhn, Karl Popper, feminist epistemology, and philosophy of science more broadly, as well as resources from Cambridge History and Philosophy of Science and MIT OpenCourseWare philosophy of science materials. Further references include Peter Lipton’s Inference to the Best Explanation, Imre Lakatos’s The Methodology of Scientific Research Programmes, Paul Feyerabend’s Against Method, Bas van Fraassen’s The Scientific Image, David Bloor’s Knowledge and Social Imagery, Ian Hacking’s Representing and Intervening, Sandra Harding’s The Science Question in Feminism, Donna Haraway’s Situated Knowledges, Linda Tuhiwai Smith’s Decolonizing Methodologies, and Francis Bacon’s Novum Organum.


Chapter 1: What Is Science?

1.1 The Common View of Science

The question “What is science?” appears deceptively simple. In popular culture, science (科学) is often equated with a body of established facts, a collection of truths about the natural world discovered through careful observation and experiment. This naive picture treats scientific knowledge as uniquely authoritative precisely because it is grounded in hard evidence rather than opinion, tradition, or authority. On this account, what distinguishes science from other human endeavors is its method (方法论) — a systematic procedure for gathering data, formulating hypotheses, and testing them against experience.

As Chalmers emphasizes at the outset of What Is This Thing Called Science?, this popular image deserves scrutiny. The idea that science simply reads facts off nature through unbiased observation conceals deep philosophical puzzles. What counts as a relevant observation? How do we move from a finite collection of data to general laws? What makes one theory superior to a rival? These questions have occupied philosophers, historians, and sociologists of science for over a century, and the answers are far less straightforward than the popular image suggests.

1.2 Science as Distinguished by Method

One influential answer holds that science is distinguished not by its subject matter but by its methodology (方法论). Physics, biology, chemistry, and the social sciences all study different domains, yet they supposedly share a common approach: the scientific method (科学方法). In its textbook version, this method involves formulating a hypothesis, deriving testable predictions, conducting experiments or observations, and revising the hypothesis in light of the results.

Remark. The notion of a single, unified scientific method is itself contested. Historians of science such as Peter Galison have argued that different scientific disciplines employ markedly different methods, and that the unity of science is more a philosophical aspiration than a historical reality. The idea that there is one method governing all of science is an idealization that does not survive close contact with actual scientific practice.

1.3 The Demarcation Problem

The attempt to specify what makes science science — and to distinguish it from non-science, pseudo-science, and other forms of inquiry — is known as the demarcation problem (划界问题). This problem has proven remarkably difficult. Early logical positivists proposed that scientific statements are meaningful if and only if they are empirically verifiable. Karl Popper countered that the hallmark of science is not verifiability but falsifiability (可证伪性): a theory is scientific if and only if it makes predictions that could, in principle, be shown false. Thomas Kuhn argued that neither criterion adequately captures how science actually works, proposing instead that science is characterized by the existence of a shared paradigm (范式) guiding research within a community.

Each of these proposals captures something important, yet none is entirely satisfactory on its own. The demarcation problem remains an active area of philosophical inquiry, and the course as a whole can be understood as an extended exploration of different answers to it.

1.4 Why the Philosophy of Science Matters

Understanding the nature of scientific knowledge is not merely an academic exercise. Disputes about what counts as science shape public policy on climate change, vaccination, and environmental regulation. They influence which research programs receive funding and which knowledge traditions are recognized as legitimate. The sociology of scientific knowledge (科学知识社会学) examines how social, cultural, and institutional factors shape the production of scientific knowledge, raising questions about objectivity, trust, and the role of values in science that have profound practical consequences.


Chapter 2: Observation, Induction, and Experience

2.1 Naive Inductivism and the Priority of Observation

The simplest account of scientific knowledge holds that science begins with observation (观察). According to naive inductivism (朴素归纳主义), the scientist approaches nature with an open mind, carefully records what is observed, and then generalizes from particular observations to universal laws. On this view, scientific knowledge is secure because it rests on the solid foundation of sense experience, and the process of generalization follows the logic of induction (归纳法) — reasoning from particular instances to general conclusions.

Induction. A form of reasoning in which one moves from a finite number of particular observations to a general conclusion. For example, having observed that the sun has risen every morning in recorded history, one concludes that the sun will rise tomorrow. Inductive reasoning is ampliative: the conclusion goes beyond the information contained in the premises.

Chalmers identifies several tenets of naive inductivism: (1) science starts with observation; (2) observation provides a secure basis for knowledge; (3) scientific laws are derived from observation statements by induction; and (4) the more observations that support a generalization, the more probable it becomes.

Naive inductivism has an intuitive appeal, but it rests on assumptions that crumble under scrutiny. The claim that scientists approach nature with a completely open mind, free from theoretical preconceptions, is psychologically implausible and philosophically untenable. As we shall see, observations are always shaped by prior expectations, and the step from particular observations to general laws is logically fraught. Moreover, naive inductivism provides no guidance about which observations to make: the instruction “observe!” is meaningless unless one already has some idea of what to look for and why.

2.2 The Problem of Induction (Hume)

Even if we set aside concerns about theory-ladenness, induction faces a fundamental logical problem first articulated by David Hume in the eighteenth century. The problem of induction (归纳问题) is this: no finite number of observations can logically guarantee a universal conclusion. No matter how many white swans we have observed, we cannot deduce that all swans are white — and indeed, the discovery of black swans in Australia refuted precisely this generalization.

Hume’s argument has two prongs. First, inductive inferences are not deductively valid: the premises (a finite set of observation statements) do not entail the conclusion (a universal generalization). Second, any attempt to justify induction by appeal to past success is circular: it assumes that the future will resemble the past, which is itself an inductive inference requiring justification. The argument applies with equal force to the claim that natural laws are uniform across space and time — an assumption that undergirds all scientific prediction but that cannot itself be established by observation or experiment without begging the question.

Hume concluded that our expectation that the future will resemble the past is grounded not in reason but in custom and habit. This does not mean that induction is useless — we plainly could not survive without it — but that its justification lies outside the domain of pure logic. The problem of induction has never been solved to the satisfaction of all philosophers, and it poses a serious challenge to any account of science that treats induction as the foundation of scientific knowledge.

Remark. The problem of induction should not be confused with skepticism about science. The point is not that scientific knowledge is worthless, but that its justification cannot rest solely on the accumulation of confirming instances. More sophisticated accounts of scientific reasoning are needed.

2.3 Goodman’s New Riddle of Induction

Nelson Goodman sharpened the problem of induction in Fact, Fiction, and Forecast (1955) by introducing the predicate “grue” (绿蓝). Define an object as “grue” if it is observed before some future time t and is green, or is not observed before t and is blue. All emeralds observed to date are green, but they are equally “grue.” The inductive evidence supports both “all emeralds are green” and “all emeralds are grue” equally well, yet the two generalizations make incompatible predictions about emeralds observed after time t.
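The definition can be made concrete in a few lines of code. The sketch below is purely illustrative: the cutoff date chosen for t and the sample "emeralds" are invented, and serve only to show that every observation made before t satisfies both predicates at once.

```python
from datetime import date

T = date(2100, 1, 1)  # an arbitrary future time t (hypothetical choice)

def is_grue(color, observed_on):
    """Goodman's predicate: an object is grue iff it is observed
    before t and is green, or is not observed before t and is blue."""
    if observed_on is not None and observed_on < T:
        return color == "green"
    return color == "blue"

# Every emerald observed so far is green -- and therefore also grue:
emeralds = [("green", date(1900, 6, 1)), ("green", date(2020, 3, 14))]
assert all(color == "green" for color, _ in emeralds)
assert all(is_grue(color, when) for color, when in emeralds)

# Yet the two generalizations diverge for emeralds first observed
# after t: "all emeralds are grue" then predicts they will be blue.
assert is_grue("blue", None)            # unobserved-before-t and blue: grue
assert not is_grue("green", date(2200, 1, 1))  # observed after t, green: not grue
```

Running the assertions confirms the point of the riddle: nothing in the evidence gathered before t distinguishes the "green" hypothesis from the "grue" hypothesis.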

Goodman’s riddle shows that the problem of induction is even more radical than Hume recognized. It is not merely that we cannot justify the step from observed instances to universal generalizations; we cannot even specify which generalizations the evidence supports without appealing to some prior distinction between “projectible” and “non-projectible” predicates. Goodman proposed that the relevant distinction is one of entrenchment: predicates that have been successfully used in past inductions are more projectible than novel or gerrymandered predicates. But this response itself relies on inductive reasoning, and the adequacy of entrenchment as a solution remains debated.

2.4 Logical Positivism and the Vienna Circle

The philosophical movement known as logical positivism (逻辑实证主义), centered on the Vienna Circle (维也纳学派) of the 1920s and 1930s, attempted to place science on an absolutely secure epistemological foundation. Members of the Circle — including Moritz Schlick, Rudolf Carnap, Otto Neurath, and others — proposed the verification principle (验证原则): a statement is cognitively meaningful if and only if it is either analytically true (true by definition) or empirically verifiable. On this criterion, the statements of metaphysics, theology, and ethics are literally meaningless, since they cannot be verified by observation.

The Vienna Circle drew heavily on the new logic of Frege and Russell and on Wittgenstein’s Tractatus Logico-Philosophicus. They envisioned a unified science expressed in a single logical language, grounded in elementary observation statements (protocol sentences) from which all scientific knowledge could be derived. This ambitious program foundered on several difficulties: the verification principle itself is neither analytic nor empirically verifiable, raising the question of its own status; the distinction between observation statements and theoretical statements proved impossible to draw sharply; and the project of reducing all scientific theories to observation statements turned out to be unworkable. Nevertheless, logical positivism profoundly shaped twentieth-century philosophy of science, and many of its central questions — about meaning, verification, and the demarcation of science — remain alive in contemporary debate.

2.5 The Theory-Ladenness of Observation

A decisive objection to naive inductivism concerns the claim that observation is prior to and independent of theory. Philosophers of science since N. R. Hanson and Thomas Kuhn have argued that observation is theory-laden (理论负载的): what we observe depends in part on the theoretical framework we bring to experience. Two scientists looking at the same phenomenon may literally see different things depending on their training, expectations, and conceptual commitments.

Hanson, in Patterns of Discovery (1958), drew on Gestalt psychology to argue that observation is not a two-stage process of first seeing and then interpreting, but a single act in which perception is already structured by theoretical knowledge. His famous example involves two astronomers at dawn: Tycho Brahe, a geocentrist, and Johannes Kepler, a heliocentrist. Both see the sun, but Tycho sees a moving object traversing the sky while Kepler sees a stationary object around which the earth rotates. The sensory input is the same, but the perceptual experience differs because it is shaped by different theoretical commitments.

Example. Consider the Müller-Lyer illusion (缪勒-莱尔错觉): two lines of equal length appear unequal when one has inward-pointing arrowheads and the other has outward-pointing arrowheads. Even when one knows the lines are equal, the illusion persists. This demonstrates that perception is not a straightforward registration of objective reality but is shaped by cognitive and perceptual processes that can lead us astray. In science, analogous effects mean that trained observers may perceive features of data — a faint signal in a noisy dataset, a subtle pattern in a microscope image — that untrained observers miss entirely.
Example. Consider a trained radiologist and a layperson examining the same chest X-ray. The radiologist perceives tumors, fractures, and abnormalities; the layperson sees only shadows. The radiologist's observations are structured by extensive theoretical knowledge of anatomy and pathology. Observation, in this case, is not a passive reception of data but an active, theory-informed process.

This insight has far-reaching consequences. If observation is theory-laden, then it cannot serve as the neutral foundation on which all scientific knowledge is built. The relationship between theory and observation is not one-directional (from observation to theory) but reciprocal: theories shape what we observe, and observations in turn constrain and revise our theories.

2.6 Instruments and the Extension of Observation

A further complication for naive accounts of observation concerns the role of scientific instruments (科学仪器). Much of modern science depends on instruments — telescopes, microscopes, spectrometers, particle detectors — that extend the range of human perception far beyond unaided sense experience. But instruments do not simply transmit observations passively; they produce data that must be interpreted in light of theories about how the instrument works. The images produced by an electron microscope, for instance, are not photographs in the ordinary sense; they are artifacts of a complex physical process, and understanding what they show requires substantial theoretical knowledge of electron optics, specimen preparation, and image formation.

This raises the question of whether instrument-mediated observations are genuinely observations at all, or whether they are better understood as theory-laden inferences. Ian Hacking has argued that our confidence in the reality of the entities observed through instruments is grounded in the convergence of multiple independent methods: if different instruments, based on different physical principles, all point to the same entity, that convergence provides strong evidence that the entity is real rather than an artifact of any particular instrument or theory.


Chapter 3: Experimentation and the Production of Evidence

3.1 The Role of Experiment in Science

While observation provides much of the raw material for scientific inquiry, experimentation (实验) plays an equally central role. An experiment differs from mere observation in that it involves a deliberate intervention in nature: the scientist actively manipulates variables, controls for confounding factors, and creates artificial conditions designed to test specific hypotheses.

Chalmers argues that experimentation is not a straightforward matter of reading facts off nature. Experiments are theory-laden in the same way that observations are. The design of an experiment — which variables to manipulate, which to control, how to measure outcomes — presupposes a theoretical framework. Moreover, experimental results require interpretation, and that interpretation is itself shaped by background theories and assumptions.

3.2 Bacon’s Idols and the Discipline of Experiment

Francis Bacon, often regarded as one of the founders of the experimental method, was acutely aware of the obstacles to genuine knowledge. In the Novum Organum (1620), Bacon identified four categories of cognitive distortion, which he called idols (偶像):

  • Idols of the Tribe (族类偶像): distortions inherent in human nature itself, such as the tendency to perceive patterns where none exist, to attend selectively to confirming evidence, and to impose order on random phenomena.
  • Idols of the Cave (洞穴偶像): distortions arising from the individual’s particular education, temperament, and experience, which lead each person to interpret the world through a personal lens.
  • Idols of the Marketplace (市场偶像): distortions arising from the imprecision and ambiguity of language, which can lead to confusion and pseudo-problems.
  • Idols of the Theatre (剧场偶像): distortions arising from received philosophical systems and dogmas, which impose artificial frameworks on nature.

Bacon’s idols anticipate many concerns of contemporary philosophy of science: confirmation bias, theory-ladenness, the role of language in shaping thought, and the danger of unexamined assumptions. His proposed remedy — careful, systematic experiment guided by a methodical process of elimination — laid the groundwork for the experimental tradition, even though his specific inductive method has been superseded by more sophisticated approaches.

3.3 Experiment as Intervention: Hacking’s Argument

The interventionist conception of experiment, developed by philosophers such as Ian Hacking, emphasizes that experimentation involves doing things to the world, not merely looking at it. In Representing and Intervening (1983), Hacking argues that our confidence in the reality of theoretical entities like electrons is grounded not in the theories that describe them but in our ability to manipulate them — to use them as tools in further experiments. The ability to intervene successfully provides a form of evidence that goes beyond mere observation.

Hacking’s central argument for experimental realism (实验实在论) rests on the following insight: when scientists use electrons to investigate other phenomena — spraying them at targets, deflecting them with electric fields, manipulating their spin — they treat electrons as tools rather than as theoretical posits. The fact that these manipulations succeed, producing predictable effects, constitutes powerful evidence that electrons are real entities, not merely convenient fictions. “If you can spray them, they are real,” Hacking memorably declared.

This argument has important implications for the scientific realism debate (discussed in Chapter 17). It suggests that realism about theoretical entities can be grounded not in the truth of scientific theories but in the success of experimental practice. Even if our theories about electrons are revised or replaced, the fact that we can reliably manipulate them provides a theory-independent reason for believing in their existence.

Controlled experiment. An experimental design in which two or more conditions are compared, with all variables held constant except the one under investigation (the independent variable). The condition without the intervention is called the control group. This design aims to isolate the causal effect of the independent variable on the dependent variable.

3.4 Replication and Reliability

A hallmark of good experimental science is replicability (可重复性): the expectation that an experiment, if repeated under sufficiently similar conditions, will yield the same results. Replication serves as a check against error, bias, and fraud. However, in practice, replication is more complicated than it appears. The ongoing replication crisis (重复性危机) in psychology and other fields has revealed that many published experimental results fail to replicate, raising important questions about the reliability of the experimental method itself.

Example. The Open Science Collaboration's 2015 effort to replicate 100 published psychology studies found that only about 36% of the replications produced statistically significant results, compared to 97% of the original studies. This finding has prompted widespread discussion about statistical methodology, publication bias, and the incentive structures of academic science.

The replication crisis has multiple roots. Publication bias (发表偏倚) — the tendency of journals to publish positive results and reject null findings — creates a distorted literature in which effects appear more robust than they are. P-hacking (p值操纵), the practice of selectively analyzing data until a statistically significant result is obtained, inflates false-positive rates. Small sample sizes, flexible experimental protocols, and the pressure to publish novel findings all contribute to the problem. The crisis has prompted reforms including pre-registration of studies, registered reports, and open data policies, but the structural incentives that generate unreliable results have proven difficult to change.
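The mechanism by which flexible analysis inflates false positives can be shown with a toy simulation. The sketch below is illustrative only, using invented parameters: each "study" flips a fair coin (so the null hypothesis is true and every significant result is a false positive), and the "peeking" analyst applies a simple form of optional stopping, testing for significance every 10 flips and stopping as soon as p < 0.05.

```python
import math
import random

random.seed(0)

def z_test_p(heads, n):
    """Two-sided p-value for H0: fair coin, using the normal approximation."""
    z = (heads - n / 2) / math.sqrt(n / 4)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def study(peeking):
    """One simulated null study of up to 100 fair-coin flips.
    With peeking=True, the analyst tests after every 10 flips and
    reports success the moment p < 0.05 (optional stopping)."""
    heads = 0
    for n in range(1, 101):
        heads += random.random() < 0.5
        if peeking and n % 10 == 0 and z_test_p(heads, n) < 0.05:
            return True
    return z_test_p(heads, 100) < 0.05

trials = 2000
honest = sum(study(False) for _ in range(trials)) / trials
peeked = sum(study(True) for _ in range(trials)) / trials
print(f"false-positive rate, single planned test: {honest:.3f}")
print(f"false-positive rate, with repeated peeking: {peeked:.3f}")
```

The single planned test produces false positives at roughly the nominal 5% rate, while the peeking strategy produces them several times more often, even though the data in both cases contain no real effect. Real p-hacking involves many more degrees of freedom (dropping outliers, switching outcome measures, adding covariates), but the logic of the inflation is the same.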

In medicine, the replication problem has particularly grave consequences. Ioannidis’s influential 2005 paper “Why Most Published Research Findings Are False” argued that the majority of published biomedical findings are likely false positives, especially in fields with small effect sizes, low prior probabilities, and high flexibility in study design. This argument has prompted soul-searching in biomedicine and has contributed to the movement toward evidence-based medicine and systematic reviews.

3.5 The Duhem-Quine Problem

When an experimental prediction fails, what exactly has been refuted? The Duhem-Quine thesis (迪昂-蒯因论题) holds that individual hypotheses cannot be tested in isolation. Any experimental test involves not just the hypothesis under investigation but also auxiliary assumptions about the experimental apparatus, background theories, and initial conditions. When a prediction fails, logic alone cannot determine which of these many assumptions is at fault. This problem, sometimes called holism (整体论) about confirmation and refutation, has profound implications for the logic of scientific testing.

Remark. Pierre Duhem articulated this point with respect to physics in the early twentieth century; W. V. O. Quine generalized it to all of knowledge. The thesis implies that no single experiment can conclusively refute a theory, since the blame can always be shifted to an auxiliary assumption.

Chapter 4: Scientific Reasoning

4.1 Deduction and Induction Revisited

Scientific reasoning involves both deductive reasoning (演绎推理) and inductive reasoning (归纳推理), and the relationship between them is central to understanding scientific method.

Deduction is truth-preserving: if the premises are true, the conclusion must be true. In science, deduction is used to derive predictions from hypotheses and theories. If theory T predicts that under conditions C, phenomenon P will occur, and we observe that P does not occur, then (assuming the deduction is valid and the auxiliary assumptions are correct) we can conclude that T is false. This is the logical core of falsificationism (证伪主义), as we shall see in detail below.

Induction, by contrast, is ampliative but not truth-preserving. Inductive reasoning includes inference to the best explanation, statistical inference, and analogical reasoning, in addition to simple enumerative induction (generalizing from particular cases).

4.2 The Hypothetico-Deductive Method

The hypothetico-deductive method (假说-演绎法, HD method) represents the most widely taught model of scientific reasoning. It proceeds as follows: (1) formulate a hypothesis H; (2) deduce from H, together with auxiliary assumptions A and initial conditions C, a prediction P; (3) test P by observation or experiment; (4) if P is observed, H is confirmed (to some degree); if P is not observed, H is disconfirmed (subject to the Duhem-Quine caveat about auxiliary assumptions).

The HD method captures an important feature of scientific practice: the deductive derivation of testable predictions from theoretical hypotheses. However, it faces several well-known difficulties. First, it does not specify how hypotheses are generated in the first place — it concerns only the testing of hypotheses, not their discovery. Second, it suffers from the tacking problem: if H entails P, then (H & Q) also entails P for any arbitrary Q, so a successful prediction confirms not only H but also any conjunction of H with an irrelevant hypothesis. Third, the HD method does not readily account for the comparative evaluation of rival hypotheses: observing P confirms both H1 and H2 if both entail P, but the method provides no guidance for choosing between them.

These difficulties have motivated the development of alternative accounts of scientific reasoning, including inference to the best explanation and Bayesian epistemology.

4.3 Inference to the Best Explanation

One of the most important forms of scientific reasoning is inference to the best explanation (最佳解释推理), also known as abduction (溯因推理). Given a set of data, the scientist infers the hypothesis that, if true, would best explain the data. “Best” here is typically understood in terms of criteria such as explanatory power, simplicity, coherence with background knowledge, and fertility (the ability to generate new predictions and research questions).

Inference to the best explanation (IBE). A form of reasoning in which one infers, from the fact that a hypothesis would, if true, provide the best explanation of the evidence, that the hypothesis is (probably) true. IBE is widely used in science and everyday life but is not deductively valid: the best available explanation may still be false.

Charles Sanders Peirce first identified abduction as a distinct form of reasoning alongside deduction and induction. Peter Lipton’s Inference to the Best Explanation (2004) provides the most developed contemporary defense, arguing that IBE is the predominant form of reasoning in mature science. Lipton distinguishes between the likeliest explanation (the one most probably true) and the loveliest explanation (the one that would, if true, provide the deepest and most satisfying understanding). He argues that scientists are guided by loveliness in forming hypotheses and by likeliness in evaluating them, and that the two criteria tend to converge because of a deep connection between explanatory virtue and truth.

Lipton’s account faces the challenge of explaining why explanatory virtue should be a guide to truth. Why should the universe be organized in a way that rewards our aesthetic sense of what constitutes a good explanation? Lipton responds that evolution and cultural selection have shaped our explanatory instincts to track real patterns in the world, but this response itself relies on a form of IBE.

4.4 Bayesian Reasoning

Bayesian epistemology (贝叶斯认识论) offers a formal framework for understanding how evidence should update our beliefs. According to Bayesianism, a scientist’s degree of belief in a hypothesis can be represented as a probability, and Bayes’ theorem (贝叶斯定理) specifies how this probability should be updated in light of new evidence:

P(H|E) = P(E|H) * P(H) / P(E)

where P(H) is the prior probability of the hypothesis, P(E|H) is the likelihood of the evidence given the hypothesis, P(E) is the probability of the evidence, and P(H|E) is the posterior probability of the hypothesis given the evidence.
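A minimal numerical sketch makes the update rule concrete. The numbers below are invented for illustration: a hypothesis with a low prior, evidence the hypothesis predicts strongly, but evidence that could also arise at a modest rate if the hypothesis is false. P(E) is expanded by the law of total probability.

```python
def posterior(prior, likelihood, false_positive_rate):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H)P(H) + P(E|not-H)P(not-H)."""
    p_e = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_e

# Hypothetical numbers: prior P(H) = 0.01, likelihood P(E|H) = 0.9,
# and P(E|not-H) = 0.1 (the evidence sometimes occurs anyway).
p = posterior(prior=0.01, likelihood=0.9, false_positive_rate=0.1)
print(round(p, 3))  # prints 0.083
```

Even though the evidence is nine times more probable under the hypothesis than under its negation, the posterior remains low because the prior was low: a compact rendering of the dictum that extraordinary claims require extraordinary evidence.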

Bayesianism has several attractive features as an account of scientific reasoning. It provides a precise, quantitative framework for updating beliefs. It naturally handles the comparative evaluation of rival hypotheses: the hypothesis that makes the evidence most probable (has the highest likelihood) will receive the largest boost in probability. It accommodates both the hypothetico-deductive insight that successful predictions confirm a theory and the insight from IBE that explanatory power is evidentially relevant (since explanatory hypotheses typically assign higher likelihoods to the observed evidence). And it captures the intuition that extraordinary claims require extraordinary evidence: hypotheses with very low prior probabilities require very strong evidence to achieve high posterior probabilities.

Remark. Bayesianism offers a powerful and flexible framework for scientific reasoning, but it faces challenges. The choice of prior probabilities is often subjective, and it is unclear how to assign priors in cases where we have little or no background knowledge. The problem of old evidence — how to account for the confirmatory impact of evidence that was already known when the hypothesis was formulated — has also proven difficult. Nevertheless, Bayesian methods are increasingly influential in statistics, machine learning, and the philosophy of science.

4.5 The Role of Ignorance

Stuart Firestein, in Ignorance: How It Drives Science, argues that science is not primarily about accumulating knowledge but about cultivating productive ignorance (无知). Good science, on this view, is driven not by what we know but by what we do not know — by the identification of well-formulated questions and gaps in understanding. Firestein contends that the popular image of science as a march toward certainty is misleading; in practice, every answer generates new questions, and the frontier of ignorance expands with knowledge.

This perspective reframes the relationship between knowledge and uncertainty. Rather than viewing uncertainty as a deficiency to be eliminated, Firestein suggests that it is the engine of scientific inquiry. The ability to identify what one does not know — and to formulate that ignorance as a tractable research question — is a core scientific skill.

Example. Firestein recounts how neuroscientists studying olfaction (the sense of smell) found that their growing knowledge of olfactory receptors opened up vast new areas of ignorance about how the brain processes smell information, how different odorants are discriminated, and how smell interacts with memory and emotion. Each discovery raised more questions than it answered.

Chapter 5: Falsificationism

5.1 Popper and the Logic of Falsification

Karl Popper developed falsificationism (证伪主义) as an alternative to inductivism. Popper accepted Hume’s argument that induction cannot be logically justified and concluded that science does not proceed by accumulating confirming instances. Instead, the hallmark of scientific rationality is the willingness to subject theories to severe tests and to abandon them when they are falsified.

Falsifiability. A theory is falsifiable (可证伪的) if and only if there exist possible observation statements that would, if true, be inconsistent with the theory. Falsifiability is, for Popper, the criterion of demarcation between science and non-science. A theory that is compatible with every possible observation — that cannot, even in principle, be shown false — is not scientific.

On Popper’s account, science progresses through a process of conjecture and refutation (猜想与反驳). Scientists propose bold conjectures — hypotheses that make risky predictions — and then attempt to refute them through observation and experiment. Theories that survive severe tests are said to be corroborated (经受住检验的), but corroboration is not confirmation: a corroborated theory has not been shown to be true, merely that it has not yet been shown to be false.

Popper’s demarcation criterion was motivated in part by his desire to distinguish genuine science from what he regarded as pseudo-sciences. He was particularly critical of Marxism and Freudian psychoanalysis, both of which, he argued, were compatible with virtually any possible observation. Marxists could explain both revolution and the absence of revolution; Freudians could explain both the presence and the absence of any behavioral pattern. These theories, Popper argued, were irrefutable not because they were true but because they were empty: their apparent explanatory power was achieved at the cost of making no risky predictions.

5.2 The Asymmetry Between Verification and Falsification

Popper’s key insight is that there is a logical asymmetry (不对称性) between verification and falsification. No finite number of observations can verify a universal statement (“all swans are white”), but a single counter-example can falsify it (one black swan). This asymmetry gives falsification its logical power and explains why Popper insisted on falsifiability as the criterion of demarcation.

Example. Consider the theory "all metals expand when heated." This theory is falsifiable because we can specify in advance what observation would refute it: discovering a metal that contracts when heated. Contrast this with a vague claim like "things happen for a reason." No possible observation could refute such a claim, which is why Popper would classify it as non-scientific.
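The logical asymmetry can be sketched in a few lines of code. This is only an illustration with invented swan data, not anything from Popper: no finite run of confirming instances verifies the universal claim, while a single counterexample falsifies it.

```python
# Toy illustration of Popper's asymmetry (hypothetical data).

def is_falsified(universal_claim, observations):
    """Return True as soon as any observation contradicts the claim."""
    return any(not universal_claim(obs) for obs in observations)

# Universal claim: "all swans are white."
def all_swans_are_white(swan):
    return swan == "white"

# Ten thousand confirming instances leave the claim unrefuted, but unproven:
confirming = ["white"] * 10_000
assert not is_falsified(all_swans_are_white, confirming)

# ...while a single black swan suffices to falsify it.
assert is_falsified(all_swans_are_white, confirming + ["black"])
```

The asymmetry lives in the logic, not the code: `any` needs only one contradicting case to return `True`, whereas no loop over finitely many cases can establish a claim about all possible swans.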

5.3 Bold Conjectures vs. Ad Hoc Modifications

Popper distinguished sharply between bold conjectures (大胆猜想) and ad hoc modifications (特设修改). A bold conjecture is a hypothesis that goes beyond what is already known, makes precise and risky predictions, and is highly falsifiable. An ad hoc modification, by contrast, is a change made to a theory solely to accommodate anomalous data, without generating any new testable predictions. Popper regarded ad hoc modifications as methodologically impermissible because they reduce the falsifiability of the theory and represent a retreat from the spirit of critical inquiry.

The distinction between bold conjectures and ad hoc modifications is central to Popper’s normative ideal of science. Scientists should seek not to protect their theories from refutation but to expose them to the most severe tests possible. A theory that survives a severe test is more impressive than one that survives only a weak test, and a theory that is continually modified to evade refutation is scientifically worthless, no matter how apparently comprehensive it becomes.

5.4 Advantages of Falsificationism

Falsificationism has several attractive features. It explains why science progresses: by eliminating false theories, we move closer to the truth (or at least away from error). It provides a clear criterion of demarcation. It accounts for the provisional character of scientific knowledge: since theories are never verified, they are always open to revision. And it offers a normative ideal for scientific practice: scientists should actively seek to refute their own theories rather than seeking only confirming evidence.

5.5 Degrees of Falsifiability

Popper recognized that not all falsifiable theories are equally good. He introduced the notion of degrees of falsifiability (可证伪度): theories that make more precise predictions and are compatible with fewer possible observations are more falsifiable and, if they survive testing, more impressive. A theory that predicts the position of a planet to within an arc-second is more falsifiable than one that predicts merely that the planet will be “somewhere in the sky.”


Chapter 6: The Limits of Falsificationism and Lakatos

6.1 The Duhem-Quine Problem Revisited

As noted in Chapter 3, the Duhem-Quine thesis poses a serious challenge to falsificationism. If hypotheses cannot be tested in isolation, then a failed prediction does not unambiguously falsify the hypothesis under test. The scientist can always save the hypothesis by modifying an auxiliary assumption. Popper acknowledged this problem but argued that scientists should avoid such evasive maneuvers, which he called ad hoc hypotheses (特设假说) — modifications made solely to protect a theory from refutation, without any independent testable consequences.

Ad hoc hypothesis. A modification to a theory introduced solely to accommodate anomalous data, without generating any new testable predictions. Popper regarded ad hoc modifications as methodologically impermissible because they reduce the falsifiability of the theory.

6.2 The Role of Auxiliary Hypotheses

In practice, scientists routinely modify auxiliary assumptions in response to anomalous results, and this is often perfectly reasonable. When Uranus’s orbit deviated from Newtonian predictions, astronomers did not reject Newtonian mechanics; instead, they postulated the existence of an undiscovered planet (Neptune), which was subsequently observed. This was a spectacularly successful modification of an auxiliary assumption. The challenge for falsificationism is to distinguish between legitimate and illegitimate modifications — a task that is easier to state in principle than to carry out in practice.

6.3 Lakatos and the Methodology of Scientific Research Programmes

Imre Lakatos attempted to resolve the difficulties of both Popper and Kuhn by developing the methodology of scientific research programmes (科学研究纲领方法论). A research programme consists of a hard core (硬核) of fundamental assumptions that practitioners are committed to defending, a protective belt (保护带) of auxiliary hypotheses that can be modified in response to anomalies, and a heuristic (启发法) that guides the development of the programme.

Lakatos distinguished between progressive and degenerating research programmes. A programme is progressive if its modifications of the protective belt lead to new predictions that are subsequently confirmed — if, that is, the programme continues to generate novel empirical content. A programme is degenerating if its modifications are purely ad hoc, serving only to accommodate anomalies without predicting anything new. On Lakatos’s account, the rational response to a failed prediction is not to abandon the hard core immediately (as Popper would advise) but to modify the protective belt. The programme should be abandoned only when it has become persistently degenerating and a progressive rival is available.

Example. The Newtonian research programme had as its hard core Newton’s three laws of motion and the law of universal gravitation. When anomalies arose — such as the perturbations of Uranus’s orbit — researchers modified the protective belt (by postulating the existence of Neptune) rather than abandoning the hard core. This modification was progressive because it led to a novel prediction (the existence and location of Neptune) that was confirmed. By contrast, the later discovery that Mercury’s perihelion precession could not be accommodated within the Newtonian programme, while Einstein’s general relativity could explain it, contributed to the eventual replacement of the Newtonian programme.

Lakatos’s framework offers several advantages over both Popper and Kuhn. It accounts for the historical fact that scientists do not abandon theories at the first sign of falsification. It provides a rational criterion for evaluating research programmes (progressive vs. degenerating) without invoking Kuhn’s controversial notion of irrational paradigm shifts. And it captures the important insight that scientific rationality operates over extended research programmes, not individual theories.

6.4 The Problem of Underdetermination

A related difficulty is the problem of underdetermination (欠确定性). At any given time, the available evidence may be compatible with multiple, mutually incompatible theories. If the evidence does not uniquely determine the correct theory, then additional criteria — simplicity, elegance, coherence, fruitfulness — must be invoked. But these criteria go beyond the purely logical framework of falsificationism and introduce considerations that are, in varying degrees, subjective and value-laden.

Remark. The underdetermination of theory by evidence is sometimes distinguished into two forms: transient underdetermination (which may be resolved by future evidence) and permanent underdetermination (which persists no matter how much evidence is gathered). The stronger, permanent form raises deep questions about the limits of scientific knowledge.
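The point can be made concrete with a toy sketch. Both "theories" below are invented for illustration: they are incompatible models that nonetheless agree on every observation collected so far, so the evidence alone cannot decide between them.

```python
# Toy illustration (invented functions) of underdetermination: two
# incompatible "theories" that agree on all available evidence.

def theory_a(x):
    return x                          # e.g., a simple linear law

def theory_b(x):
    # Agrees with theory_a wherever the correction term vanishes.
    return x + x * (x - 1) * (x - 2)

evidence = [0, 1, 2]                  # the only x-values observed so far

# Both theories fit every observation exactly:
assert all(theory_a(x) == theory_b(x) for x in evidence)

# Yet they diverge on unobserved cases; a new observation at x = 3 would
# resolve this *transient* underdetermination.
assert theory_a(3) == 3 and theory_b(3) == 9
```

In this transient case a new measurement settles the matter; the permanent form of underdetermination concerns rivals that agree on every possible observation, which no such measurement could separate.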

6.5 Historical Counterexamples

Chalmers and others have noted that strict falsificationism cannot account for much of the history of science. Many of the most successful theories — including Newtonian mechanics, the kinetic theory of gases, and early versions of the atomic theory — were born refuted: they were known to be inconsistent with some available evidence from the outset. Had scientists followed Popper’s prescription to abandon theories at the first sign of falsification, much of modern science would never have developed.

Example. When Copernicus proposed his heliocentric model, it was inconsistent with the observed absence of stellar parallax. A strict falsificationist would have rejected the theory. In fact, the parallax was too small to be detected with the instruments available at the time; it was not observed until 1838, nearly three centuries later. Copernicus's theory was "born refuted" yet was ultimately vindicated.

Chapter 7: Scientific Revolutions and Kuhn’s Paradigm Shifts

7.1 Thomas Kuhn and the Structure of Scientific Revolutions

Thomas Kuhn’s The Structure of Scientific Revolutions (1962) fundamentally transformed the philosophy and sociology of science. Kuhn rejected the cumulative, progressive picture of science shared by both inductivists and falsificationists. Instead, he argued that science alternates between periods of normal science (常规科学) — in which researchers work within a shared framework, or paradigm (范式) — and periods of revolutionary science (革命性科学), in which the prevailing paradigm is overthrown and replaced by a new one.

Paradigm. In Kuhn’s usage, a paradigm is the set of shared commitments — theoretical principles, experimental methods, exemplary problem-solutions, and values — that define a scientific community and guide its research. A paradigm determines which problems are worth solving, which methods are appropriate, and what counts as a satisfactory solution.

Kuhn’s concept of the paradigm has been enormously influential but also notoriously ambiguous. Margaret Masterman identified over twenty different senses in which Kuhn used the term in Structure. In response to this criticism, Kuhn later refined his concept, distinguishing between the disciplinary matrix (学科范式) — the entire constellation of beliefs, values, techniques, and shared commitments of a scientific community — and exemplars (范例) — the concrete problem-solutions that serve as models for further research. Exemplars are, for Kuhn, the most fundamental component of a paradigm: students learn to be scientists by working through exemplary problems and learning to see new problems as analogous to them. This process of learning by example cannot be fully captured in explicit rules or algorithms; it involves the acquisition of tacit knowledge and perceptual skills.

7.2 Normal Science and Puzzle-Solving

During periods of normal science, researchers do not question the fundamental assumptions of the paradigm. Instead, they engage in puzzle-solving (解谜): extending the paradigm to new domains, refining its predictions, and resolving anomalies within its framework. Normal science is highly productive precisely because researchers share a common framework and do not constantly debate fundamentals.

Kuhn compared normal science to puzzle-solving because, like a crossword puzzle, it involves applying known techniques to problems that are expected to have solutions. The paradigm provides the rules of the game, the criteria for success, and the standards by which solutions are evaluated. Failure to solve a puzzle reflects on the competence of the researcher, not on the validity of the paradigm. This feature of normal science explains why anomalies are typically tolerated or set aside rather than treated as refutations: within normal science, an unsolved puzzle is a challenge, not a crisis.

The productivity of normal science depends on the narrowing of focus that a paradigm provides. By taking fundamental questions off the table, the paradigm frees researchers to pursue highly specialized investigations that accumulate detailed knowledge. This is why Kuhn argued that normal science, despite its conservative character, is responsible for the vast majority of scientific achievement.

7.3 Anomalies, Crisis, and Revolution

Normal science inevitably produces anomalies (反常) — results that resist explanation within the prevailing paradigm. When anomalies accumulate and resist resolution, the scientific community enters a period of crisis (危机), characterized by disagreement, proliferation of alternative theories, and philosophical reflection on fundamentals. If a new paradigm emerges that resolves the accumulated anomalies and opens up new research directions, a scientific revolution (科学革命) occurs.

The transition from crisis to revolution is not a purely rational process, according to Kuhn. It involves a collective judgment by the scientific community that the old paradigm has failed and that the new one is more promising. This judgment is influenced by many factors — the severity and number of the anomalies, the availability of a viable alternative, the age and temperament of the scientists involved — and cannot be reduced to a simple algorithm. Kuhn described the adoption of a new paradigm as an act of faith, analogous to a religious conversion, which drew sharp criticism from rationalists.

Example. The transition from Newtonian mechanics to Einstein's theory of relativity is a paradigmatic example of a scientific revolution. Newtonian mechanics had been the dominant paradigm in physics for over two centuries. Anomalies such as the precession of Mercury's perihelion and the null result of the Michelson-Morley experiment resisted explanation within the Newtonian framework. Einstein's special and general theories of relativity resolved these anomalies and fundamentally restructured the conceptual framework of physics.

7.4 Incommensurability

One of Kuhn’s most controversial claims is that successive paradigms are incommensurable (不可通约的): they employ different concepts, ask different questions, and appeal to different standards of evaluation, making direct comparison difficult or impossible. Proponents of different paradigms may literally see the world differently, interpret the same data differently, and talk past each other because they do not share a common language.

Kuhn identified three dimensions of incommensurability: (1) methodological incommensurability — different paradigms employ different standards for evaluating theories and different methods of investigation; (2) observational incommensurability — scientists working within different paradigms perceive the world differently, even when looking at the same phenomena; and (3) semantic incommensurability — key terms change their meaning across paradigms, so that proponents of different paradigms are, in a sense, speaking different languages even when they use the same words.

Remark. Kuhn’s thesis of incommensurability has been widely debated. Critics argue that it leads to relativism: if paradigms cannot be compared, then there is no rational basis for preferring one to another, and the history of science is a series of irrational conversions rather than a rational progression. Kuhn denied that incommensurability implies irrationality, but the precise relationship between the two remains contested. In later work, Kuhn softened his position, speaking of “local incommensurability” — partial failures of translation between paradigms — rather than total incomparability.

7.5 The Function of Paradigms

Beyond their role in guiding research, paradigms serve important social functions. They define the boundaries of a scientific community, determine who counts as a competent practitioner, and establish the criteria for evaluating research. The socialization of new scientists through graduate training involves, in large part, the internalization of a paradigm — learning to see the world through its conceptual lens, to apply its methods, and to accept its standards.

7.6 Paradigm Shift as Gestalt Switch

Kuhn famously compared the experience of a paradigm shift (范式转换) to a gestalt switch (格式塔转换): a sudden, holistic reorientation of perception. Just as the famous duck-rabbit figure can be seen as either a duck or a rabbit but not both simultaneously, a scientist who adopts a new paradigm comes to see the world in a fundamentally different way. This metaphor captures the discontinuous, revolutionary character of paradigm change, but it has also been criticized for suggesting that paradigm choice is irrational or merely a matter of perception.

7.7 Reception and Criticism of Structure

Kuhn’s Structure of Scientific Revolutions has been one of the most widely cited academic books of the twentieth century, influencing not only philosophy and history of science but also sociology, political science, literary theory, and popular culture. The term “paradigm shift” has entered everyday language, often in ways far removed from Kuhn’s original meaning. Within philosophy of science, however, the reception was sharply divided. Popper and the critical rationalists accused Kuhn of irrationalism and relativism. Lakatos attempted to synthesize the insights of both Popper and Kuhn through his methodology of scientific research programmes. Larry Laudan argued that Kuhn overstated the discontinuity of scientific revolutions and underestimated the role of problem-solving effectiveness as a criterion for theory choice. Despite these criticisms, Kuhn’s central insight — that the history of science involves fundamental conceptual ruptures, not merely the steady accumulation of truths — has become a permanent feature of the philosophical landscape.


Chapter 8: Values and Science

8.1 The Ideal of Value-Free Science

A long-standing ideal in the philosophy of science holds that science should be value-free (价值中立的): scientists should pursue truth without allowing moral, political, or social values to influence their conclusions. On this view, science describes the world as it is, not as we would like it to be; values belong to the realm of ethics and politics, not to the realm of facts.

This ideal has deep roots in the positivist tradition, which drew a sharp distinction between facts and values. The fact-value distinction (事实-价值区分) holds that descriptive statements about what is the case are logically independent of normative statements about what ought to be the case. Science, on this view, deals only with facts.

8.2 Douglas and the Rejection of Value-Free Science

Heather Douglas, in Science, Policy, and the Value-Free Ideal, mounts a powerful challenge to the value-free ideal. Douglas argues that values inevitably and legitimately play a role in scientific reasoning, particularly at the stage of evaluating evidence and accepting or rejecting hypotheses. Her argument centers on the concept of inductive risk (归纳风险): the risk of error inherent in any decision to accept or reject a hypothesis.

Inductive risk. The risk of drawing an incorrect conclusion from the evidence — either accepting a hypothesis that is false (a false positive) or rejecting one that is true (a false negative). Douglas argues that when the consequences of error are significant, scientists must take those consequences into account, and doing so necessarily involves value judgments.

Consider a toxicologist evaluating whether a chemical is safe for human consumption. Setting the threshold for statistical significance — the level of evidence required before concluding that the chemical is harmful — involves a trade-off between the risk of falsely declaring the chemical dangerous (which could deprive people of a useful product) and the risk of falsely declaring it safe (which could expose people to harm). This trade-off cannot be resolved by the evidence alone; it requires a judgment about which error is more serious, and that judgment is inherently value-laden.

Douglas’s inductive risk argument extends far beyond toxicology. In pharmaceutical trials (药物试验), the decision about how much evidence is required before approving a new drug involves weighing the risk that an ineffective or harmful drug reaches patients against the risk that a beneficial drug is delayed. In climate science (气候科学), the decision about what level of confidence is required before declaring that anthropogenic climate change is occurring involves weighing the risk of costly premature action against the risk of catastrophic inaction. In both cases, the standard of evidence cannot be set by the evidence alone; it requires value judgments about the relative seriousness of different types of error.
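A minimal simulation can make the trade-off vivid. The numbers below are invented and the "measurement" is just Gaussian noise, but the structure mirrors the toxicology case: raising the evidential threshold reduces false alarms at the cost of missed harms, and the data themselves do not say where the threshold should sit.

```python
# A toy simulation (invented numbers) of Douglas's point: the evidential
# threshold trades one kind of error against the other, and the evidence
# alone cannot tell us where to set it.
import random

random.seed(0)

def trial(is_harmful, n=30):
    """Mean of n noisy measurements; harmful chemicals score higher on average."""
    mu = 1.0 if is_harmful else 0.0
    return sum(random.gauss(mu, 2.0) for _ in range(n)) / n

def error_rates(threshold, trials=2000):
    """Declare 'harmful' whenever the measured mean exceeds the threshold."""
    false_pos = sum(trial(False) > threshold for _ in range(trials)) / trials
    false_neg = sum(trial(True) <= threshold for _ in range(trials)) / trials
    return false_pos, false_neg

# A stricter threshold protects useful products but exposes people to harm,
# and a laxer one does the reverse:
fp_strict, fn_strict = error_rates(threshold=0.9)
fp_lax, fn_lax = error_rates(threshold=0.1)
assert fp_strict < fp_lax   # stricter: fewer safe chemicals falsely called harmful
assert fn_strict > fn_lax   # ...but more harmful chemicals falsely called safe
```

The simulation reproduces the trade-off, but choosing between the two thresholds requires deciding which error is worse, which is precisely the value judgment Douglas has in mind.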

8.3 Direct and Indirect Roles of Values

Douglas distinguishes between the direct role and the indirect role of values in science. Values play a direct role when they serve as reasons for accepting a claim independently of the evidence — for example, accepting a hypothesis because it is politically convenient. Douglas agrees that this is illegitimate. Values play an indirect role when they influence how much evidence is required before a conclusion is drawn — for example, demanding stronger evidence before accepting a claim with potentially catastrophic consequences. Douglas argues that this indirect role is not only legitimate but unavoidable.

Remark. Douglas's argument does not imply that science is merely politics by other means. Rather, she contends that the relationship between science and values is more complex than the value-free ideal allows. Values should not determine what the evidence says, but they inevitably shape how we respond to the evidence, particularly under conditions of uncertainty.

8.4 Science Advisors and Policy

The relationship between science and policy is mediated by science advisors (科学顾问) who translate scientific findings into policy-relevant recommendations. Douglas argues that science advisors must navigate the tension between their role as epistemic authorities (reporting what the evidence shows) and their role as citizens with moral obligations (acknowledging the consequences of different policy choices). The traditional model, in which scientists report facts and policymakers decide values, is untenable because, as Douglas demonstrates, the assessment of evidence itself involves value judgments. A more honest model acknowledges that science advisors inevitably exercise judgment about inductive risk, and that this judgment should be transparent and subject to democratic scrutiny.

8.5 Case Study: Algorithmic Evaluation

The role of values in science extends beyond the natural sciences. Kumar et al. examine how algorithms used for evaluation and decision-making embed value judgments in their design. Choices about which variables to include, how to weight them, and what outcomes to optimize reflect assumptions about what matters — assumptions that are inherently normative. This case illustrates that the value-free ideal is as problematic in computational and social-scientific contexts as it is in the natural sciences.

Example. A hiring algorithm trained on historical data may perpetuate existing biases if the data reflects past discrimination. The choice to use such data, and the choice of which metrics to optimize (e.g., short-term productivity vs. long-term diversity), are value-laden decisions embedded in the algorithm's design. Treating the algorithm as a neutral, value-free tool obscures these choices.
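A minimal sketch of this point, using invented records and deliberately simplistic rules in place of real trained models: a rule that optimizes agreement with historical hiring decisions reproduces the past bias, while one that thresholds on qualification alone does not. The choice between these objectives is exactly the kind of value-laden design decision described above.

```python
# Toy sketch (invented data): which target an algorithm optimizes is a
# normative choice. Records are (group, qualification, hired_historically);
# group "b" was under-hired despite equal qualifications.
history = [
    ("a", 0.9, True), ("a", 0.8, True), ("a", 0.4, False),
    ("b", 0.9, False), ("b", 0.8, False), ("b", 0.4, False),
]

def hire_rate(records, predict):
    """Fraction of candidates each group's rule would hire."""
    by_group = {}
    for group, qual, hired in records:
        by_group.setdefault(group, []).append(predict(group, qual, hired))
    return {g: sum(v) / len(v) for g, v in by_group.items()}

def imitate(group, qual, hired):
    return hired            # stand-in for a model fit to past decisions

def merit(group, qual, hired):
    return qual >= 0.7      # stand-in for a model fit to qualification

assert hire_rate(history, imitate) == {"a": 2 / 3, "b": 0.0}    # bias reproduced
assert hire_rate(history, merit) == {"a": 2 / 3, "b": 2 / 3}    # bias removed
```

Neither rule is "value-free": the first tacitly endorses past practice as the standard, the second tacitly endorses a particular qualification metric. The point is that some such choice is unavoidable.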

Chapter 9: Objectivity and Feminist Epistemology

9.1 The Problem of Objectivity

Objectivity (客观性) is often regarded as the cardinal virtue of science. But what exactly does objectivity mean? At a minimum, it seems to require that scientific conclusions be determined by the evidence and not by the personal biases, interests, or values of individual scientists. Yet as we have seen, observation is theory-laden, evidence underdetermines theory, and values inevitably play a role in scientific reasoning. Can science still be objective?

9.2 Longino and Contextual Empiricism

Helen Longino, in Science as Social Knowledge, develops an account of objectivity that locates it not in the individual scientist but in the social practices of the scientific community. Longino’s contextual empiricism (语境经验主义) holds that individual scientists inevitably bring background assumptions and values to their research. Objectivity is achieved not by eliminating these assumptions (which is impossible) but by subjecting them to critical scrutiny within a community of inquirers.

Contextual empiricism. Longino's view that scientific knowledge is shaped by both empirical evidence and the background assumptions (or "contextual values") of the researcher. Objectivity is a property of communities, not individuals, and is achieved through critical social interaction.

Longino identifies four conditions that a scientific community must satisfy for its knowledge-production processes to be objective — her four criteria for transformative criticism (转化性批评的四个标准):

  1. Recognized avenues for criticism. There must be publicly recognized forums (journals, conferences, peer review) for the presentation and critique of scientific claims. Without such forums, background assumptions remain invisible and unchallenged.
  2. Uptake of criticism. The community must be responsive to criticism: beliefs and methods must be capable of modification in light of critical discussion. A community that ignores or suppresses criticism cannot achieve objectivity, no matter how many forums for criticism it provides.
  3. Public standards. There must be publicly shared standards — of evidence, reasoning, and methodology — by which theories and claims are evaluated. These standards provide the common ground necessary for productive disagreement and rational resolution of disputes.
  4. Tempered equality of intellectual authority. All qualified members of the community must have an equal opportunity to contribute to the critical discussion, regardless of social position. If certain perspectives are systematically excluded — due to gender, race, institutional status, or other factors — then the community’s critical resources are impoverished and its claims to objectivity are undermined.

Longino’s framework implies that objectivity is not an all-or-nothing property but a matter of degree: the more fully a scientific community satisfies these four conditions, the more objective its knowledge. It also implies that the exclusion of marginalized voices is not merely a social injustice but an epistemic failure, since it reduces the diversity of perspectives available for critical scrutiny.

9.3 Feminist Epistemology and Standpoint Theory

Feminist epistemology (女性主义认识论) has made important contributions to the philosophy of science by highlighting how gender and other social categories shape the production of knowledge. Standpoint theory (立场论), associated with Sandra Harding and others, argues that the social position of the knower — including their gender, race, and class — affects what they can know. Marginalized standpoints may offer epistemic advantages because those who occupy them are more likely to perceive the biases and limitations of dominant frameworks.

Harding’s concept of strong objectivity (强客观性) pushes this argument further. She argues that the traditional notion of objectivity — which requires the scientist to be detached and disinterested — actually produces a weak form of objectivity, because it fails to interrogate the background assumptions that the dominant group brings to research. Strong objectivity, by contrast, begins from the lives and experiences of marginalized groups and extends the requirement of critical scrutiny to the assumptions of the researcher. By starting from the margins, strong objectivity reveals features of social and natural reality that are invisible from the center.

Remark. Standpoint theory does not claim that oppressed groups have automatic access to the truth. Rather, it argues that starting research from the perspective of marginalized groups can reveal assumptions and biases that are invisible from the dominant perspective. This is a methodological claim, not a claim about infallibility.

9.4 Haraway’s Situated Knowledges

Donna Haraway, in her influential 1988 essay “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective,” develops an alternative to both the “god trick” of traditional objectivism — the pretense of a view from nowhere — and the relativism that denies any grounding for knowledge claims. Haraway argues for situated knowledges (情境知识): all knowledge is produced from a particular social, material, and embodied location, and responsible knowledge claims acknowledge their situatedness rather than pretending to transcend it.

Haraway’s account rejects the dichotomy between objectivism and relativism, proposing instead a vision of objectivity as partial, located, and critical. Objectivity is achieved not by abstracting away from one’s position but by being accountable for it — by making one’s location visible and available for critical scrutiny. This vision has been enormously influential in feminist science studies and has contributed to a broader rethinking of objectivity across the humanities and social sciences.

9.5 Diversity and Scientific Knowledge

Phillips argues that diversity (多样性) within the scientific community is not merely a matter of social justice but an epistemic good: diverse communities produce better science. When researchers bring different backgrounds, perspectives, and assumptions to a problem, the community is better equipped to identify hidden assumptions, generate novel hypotheses, and subject claims to rigorous critique. This argument extends Longino’s point that objectivity is a social achievement: the more diverse the community, the more robust the critical scrutiny, and the more objective the resulting knowledge.

Example. Research on cardiovascular disease long focused on male subjects, leading to diagnostic criteria and treatments that were less effective for women. The inclusion of diverse perspectives in the research community helped identify this bias and led to more inclusive research designs, ultimately improving medical care for all patients.

Chapter 10: Indigenous Knowledge Systems

10.1 What Is Indigenous Knowledge?

Indigenous knowledge (原住民知识), also called traditional ecological knowledge (传统生态知识, TEK), refers to the knowledge systems developed by Indigenous peoples over centuries of close interaction with their environments. These systems encompass detailed understanding of local ecosystems, weather patterns, medicinal plants, agricultural techniques, and natural resource management, as well as the cultural, spiritual, and ethical frameworks within which this understanding is embedded.

Traditional ecological knowledge (TEK). The cumulative body of knowledge, practice, and belief, evolving by adaptive processes and handed down through generations by cultural transmission, about the relationship of living beings (including humans) with one another and with their environment. TEK is characteristically holistic, relational, and embedded in cultural practices.

10.2 Nicholas on Intellectual Property and Indigenous Knowledge

George Nicholas raises critical questions about the relationship between Western science and Indigenous knowledge, particularly concerning intellectual property and the appropriation of Indigenous knowledge by researchers and corporations. Nicholas argues that Indigenous knowledge has frequently been extracted, decontextualized, and commodified without the consent or benefit of the communities that produced it. This raises ethical questions about who owns knowledge, who has the right to use it, and what obligations researchers have to the communities they study.

10.3 Whyte on Indigenous Knowledge and Environmental Justice

Kyle Whyte situates Indigenous knowledge within the broader context of environmental justice (环境正义) and colonialism. Whyte argues that the marginalization of Indigenous knowledge systems is not merely an intellectual oversight but a consequence of colonial power structures that have systematically devalued non-Western ways of knowing. Recognizing Indigenous knowledge as legitimate science requires reckoning with this colonial history and restructuring the institutions of knowledge production to be more inclusive.

Remark. The question of whether Indigenous knowledge qualifies as "science" depends on how one defines science. If science is defined narrowly in terms of Western institutional structures and methods, then Indigenous knowledge may not qualify. But if science is defined more broadly as systematic, empirically grounded knowledge of the natural world, then many Indigenous knowledge systems clearly meet the criteria. The debate reveals that the demarcation problem is not merely philosophical but deeply political.

10.4 Two-Eyed Seeing and Epistemic Pluralism

Some scholars advocate for epistemic pluralism (认识论多元主义) — the view that multiple knowledge systems can coexist and complement each other. The concept of Two-Eyed Seeing (双眼视角), known in Mi’kmaw as Etuaptmumk, was developed by Mi’kmaw Elder Albert Marshall. It proposes learning to see from one eye with the strengths of Indigenous knowledge and from the other eye with the strengths of Western science, and using both eyes together for the benefit of all. This approach does not subordinate one knowledge system to the other but seeks to integrate their respective strengths.

Two-Eyed Seeing has been particularly influential in environmental science and health research in Canada. It resists both assimilation (in which Indigenous knowledge is absorbed into Western frameworks and loses its distinctive character) and separation (in which the two systems remain isolated). Instead, it calls for a creative weaving together that respects the integrity of both traditions while generating richer understanding than either could achieve alone.

10.5 Traditional Ecological Knowledge in Practice

The practical value of TEK has been demonstrated in numerous domains. In fire management (火灾管理), Indigenous peoples across Australia, North America, and elsewhere have practiced controlled burning (cultural burning) for millennia. These practices reduce fuel loads, promote biodiversity, regenerate plant communities, and prevent catastrophic wildfire. Western fire management, which long suppressed fire, has increasingly recognized the value of Indigenous burning practices, and collaborations between Indigenous fire practitioners and Western ecologists have produced more effective land management strategies.

In fisheries management (渔业管理), Indigenous communities possess detailed knowledge of fish behavior, migration timing, spawning habitats, and ecosystem dynamics, accumulated over generations of observation. Integrating this knowledge with Western stock assessment methods can yield management strategies that neither knowledge system could produce alone.

Example. In fisheries management in Atlantic Canada, collaborations between Indigenous communities and marine biologists have integrated traditional knowledge of fish behavior, migration patterns, and ecosystem dynamics with Western scientific methods of stock assessment. The resulting management strategies have been more effective and culturally appropriate than those based on either knowledge system alone.

10.6 Decolonizing Methodologies and OCAP Principles

Linda Tuhiwai Smith’s Decolonizing Methodologies: Research and Indigenous Peoples (1999) has been profoundly influential in reshaping the relationship between research and Indigenous communities. Smith, a Māori scholar, argues that Western research has historically been an instrument of colonialism, serving to classify, categorize, and control Indigenous peoples while extracting their knowledge. Decolonizing methodologies require centering Indigenous perspectives, priorities, and protocols in the research process, ensuring that research serves Indigenous communities rather than exploiting them.

In the Canadian context, the OCAP principles (OCAP原则) — Ownership, Control, Access, and Possession — have been developed by First Nations to govern research involving Indigenous data and knowledge. Ownership means that Indigenous communities own their cultural knowledge, data, and intellectual property. Control means that communities have the right to control all aspects of research affecting them. Access means that communities must have access to data about themselves and their communities. Possession refers to the physical control of data and the mechanisms for protecting it. OCAP represents a fundamental reorientation of the research relationship, placing Indigenous communities at the center of knowledge production about their own lives and territories.


Chapter 11: Trust in Science

11.1 Why Trust Matters

Scientific knowledge is, for most people, known only at second hand. Few of us have the expertise to evaluate the evidence for climate change, the safety of vaccines, or the efficacy of medical treatments. We rely on the testimony of scientists and scientific institutions. This reliance raises the question of trust (信任): what grounds do we have for trusting science, and when, if ever, is distrust warranted?

11.2 Harker on Trust and Epistemic Dependence

David Harker examines the epistemology of trust in science, arguing that our trust in scientific claims is grounded in the reliability of the social and institutional processes by which those claims are produced. Peer review, replication, the norm of transparency, and the competitive structure of academic science all serve as checks on error and fraud. Harker argues that trust in science is rational insofar as these institutions function well, but he acknowledges that institutional failures — such as the replication crisis, publication bias, and conflicts of interest — can undermine that trust.

Epistemic dependence. The condition of relying on others for knowledge that one cannot independently verify. In modern societies, most of our knowledge is epistemically dependent: we rely on experts, institutions, and testimony for the vast majority of our beliefs.

11.3 The Credibility of Scientific Claims

Trust in science depends not only on the reliability of scientific institutions but also on the perceived credibility of individual scientists and scientific communities. Credibility is influenced by factors such as track record, transparency, institutional affiliation, and the absence of conflicts of interest. It is also shaped by social factors such as race, gender, and cultural background, raising important questions about whose testimony is taken seriously and whose is dismissed.

Remark. Miranda Fricker's concept of epistemic injustice (认识论不公正) is relevant here. Fricker argues that certain speakers are systematically given less credibility than they deserve due to prejudice related to their social identity. This injustice can distort the production of scientific knowledge by silencing marginalized voices and perspectives.

11.4 Anti-Vaccination Movements and Climate Denial

The erosion of public trust in science is starkly illustrated by the anti-vaccination movement (反疫苗运动) and climate change denial (气候变化否认). In both cases, well-established scientific consensus is rejected by significant segments of the public, often on the basis of misinformation, ideological commitments, or distrust of institutional authority.

The anti-vaccination movement, galvanized by the discredited 1998 Wakefield study linking the MMR vaccine to autism, persists despite overwhelming scientific evidence of vaccine safety and efficacy. The movement illustrates how a single fraudulent study, amplified by social media and exploiting existing distrust of pharmaceutical companies and government, can undermine public confidence in a well-established scientific consensus. Addressing vaccine hesitancy requires not merely correcting misinformation but rebuilding the institutional trust that has been damaged.

Climate change denial similarly involves the rejection of a robust scientific consensus supported by multiple independent lines of evidence. Research has shown that climate denial is sustained not by legitimate scientific disagreement but by organized disinformation campaigns, often funded by fossil fuel industries, that exploit the appearance of scientific uncertainty to delay policy action. The case illustrates how manufactured doubt can erode public trust in science and obstruct evidence-based policy.

11.5 Public Trust and Science Communication

Declining public trust in science on issues such as climate change and vaccination has prompted urgent reflection on science communication. Scholars have argued that the “deficit model” — the assumption that public distrust stems from ignorance and can be remedied by providing more information — is inadequate. Trust involves not only understanding but also values, interests, and relationships. Effective science communication must address not only what scientists know but also how they know it, what they are uncertain about, and why the public should care.


Chapter 12: Scientific Expertise

12.1 What Is an Expert?

The concept of scientific expertise (科学专业知识) is central to contemporary debates about the authority of science. An expert, in the relevant sense, is someone who possesses specialized knowledge and skills that most people lack. But expertise raises difficult questions: how do we identify genuine experts? How much authority should we grant them? And what happens when experts disagree?

12.2 Whyte and Crease on Expertise and Trust

Whyte and Crease explore the relationship between expertise and public trust, arguing that expertise is not merely a matter of individual knowledge but is embedded in social relationships and institutions. They examine how the authority of experts is constructed, maintained, and challenged, and how the erosion of trust in expertise can have serious social consequences.

12.3 Collins and Evans on the Sociology of Expertise

Harry Collins and Robert Evans, in Rethinking Expertise, develop an influential taxonomy of expertise. They propose what they call a periodic table of expertise (专业知识周期表), distinguishing between several levels:

  • No expertise. The layperson who has no relevant knowledge.
  • Beer-mat knowledge. Superficial familiarity with a topic, sufficient for casual conversation but not for serious engagement.
  • Popular understanding. A deeper engagement with the topic, typically gained from reading popular science.
  • Primary source knowledge. The ability to read and understand the primary scientific literature.
  • Interactional expertise. The ability to engage competently in the discourse of a specialized field without being able to contribute to it through practice — for example, a sociologist who can converse fluently with physicists about quantum mechanics without being able to do the experiments.
  • Contributory expertise. Full practical competence in a field, including the ability to contribute new knowledge through research.

Interactional expertise. The ability to converse competently in the language of a specialist domain, acquired through sustained interaction with practitioners, without possessing the practical skills needed to contribute to the domain directly. Collins and Evans argue that interactional expertise is a genuine and important form of expertise, distinct from both popular understanding and contributory expertise.

The distinction between contributory expertise (贡献型专业知识) and interactional expertise (互动型专业知识) is one of Collins and Evans’s most important contributions. Contributory experts can actually do the science — design experiments, analyze data, produce new results. Interactional experts understand the science well enough to engage meaningfully with practitioners, ask probing questions, and evaluate claims, but cannot themselves produce new scientific knowledge. Collins and Evans argue that interactional expertise plays a crucial role in interdisciplinary collaboration, science policy, and science journalism, and that its existence challenges simplistic models of expertise as a binary (expert/non-expert) distinction.

12.4 The Expertise Problem in Democracy

Collins and Evans identify what they call the problem of extension (扩展问题): in a democratic society, how far should the authority of technical experts extend into the domain of public policy? On one hand, complex technical questions — about nuclear safety, genetic engineering, or climate change — require specialized expertise that most citizens lack. On the other hand, democratic legitimacy requires that policy decisions be accountable to the public, not delegated to an unelected technocratic elite.

Collins and Evans propose that the solution lies in distinguishing between the technical phase of a decision (which should be guided by contributory and interactional experts) and the political phase (which should be decided by democratic processes). But the boundary between the technical and the political is itself contested, and Douglas’s inductive risk argument suggests that it may be impossible to draw a clean line between them.

12.5 The Problem of Expert Disagreement

When experts disagree, non-experts face a difficult epistemic situation. One strategy is to defer to the majority view among qualified experts. Another is to evaluate the credentials, track records, and potential biases of the competing experts. A third is to examine the institutional and social factors that may explain the disagreement. None of these strategies is foolproof, but together they provide a reasonable basis for navigating expert disagreement.

Example. The debate over the safety of genetically modified organisms (GMOs) illustrates the challenge of expert disagreement. While the vast majority of biologists and agricultural scientists consider GMOs safe for human consumption, a vocal minority disagrees. Non-experts must assess not only the scientific evidence but also the institutional affiliations, funding sources, and potential conflicts of interest of the competing experts.

Chapter 13: Science, Ethics, and Society

13.1 The Ethical Dimensions of Scientific Practice

Science is not only an epistemic enterprise but also a moral one. Scientists make ethical decisions at every stage of research: in the choice of research questions, the treatment of human and animal subjects, the reporting of results, and the application of findings. The relationship between science and ethics (科学与伦理) has become increasingly complex as scientific research raises novel ethical challenges — from genetic engineering and artificial intelligence to climate change and pandemic response.

13.2 Research Ethics and Responsible Conduct

The modern framework for research ethics (研究伦理) emerged in response to historical abuses — the Nazi medical experiments, the Tuskegee syphilis study, and others — that demonstrated the dangers of unregulated research. Key principles include:

  • Informed consent. Research subjects must be fully informed about the nature and risks of the research and must freely consent to participate.
  • Beneficence and non-maleficence. Researchers must aim to maximize benefits and minimize harms.
  • Justice. The burdens and benefits of research must be distributed fairly.
  • Integrity. Researchers must report their findings honestly and transparently.

Remark. Silk and MacDonald examine the intersections of science, ethics, and society, arguing that ethical considerations are not external constraints imposed on science but are intrinsic to the practice of good science. A science that ignores the ethical implications of its work is not only morally deficient but epistemically impoverished, since it fails to consider the full range of relevant consequences.

13.3 Science and Social Responsibility

The question of social responsibility (社会责任) in science has become increasingly urgent. Should scientists be held responsible for the applications of their research, even when those applications were not foreseen? Should they refuse to conduct research that may be used for harmful purposes? These questions do not have easy answers, but they reflect the growing recognition that science does not operate in a social vacuum.

Example. The development of nuclear weapons during the Manhattan Project raised profound questions about the social responsibility of scientists. Many of the physicists involved, including J. Robert Oppenheimer and Leo Szilard, subsequently expressed deep ambivalence about their role in creating weapons of mass destruction. The case illustrates the tension between the pursuit of knowledge and the potential for catastrophic misuse.

Chapter 14: Science and the Public

14.1 Public Understanding of Science

The relationship between science and the public is a central concern of the sociology of scientific knowledge (科学知识社会学). How does scientific knowledge reach the public? How is it understood, misunderstood, and contested? And what role should the public play in shaping the direction of scientific research?

14.2 The Deficit Model vs. the Dialogue Model

For much of the twentieth century, science communication was dominated by the deficit model (缺陷模型): the assumption that public skepticism toward science stems from a deficit of scientific knowledge, and that the remedy is simply to provide more and better information. Research in science communication has thoroughly undermined this model. Studies consistently show that providing more information does not reliably increase public trust or acceptance of scientific findings; in some cases, it can even backfire, entrenching existing beliefs through the mechanisms of motivated reasoning and identity-protective cognition.

The alternative is the dialogue model (对话模型), which treats science communication not as a one-way transmission from expert to layperson but as a two-way exchange in which the public’s concerns, values, and local knowledge are taken seriously. The dialogue model recognizes that public engagement with science is shaped not only by cognitive factors (what people know) but also by social and affective factors (whom they trust, what they value, how they relate to scientific institutions).

14.3 Goldenberg on Vaccine Hesitancy and Public Trust

Maya Goldenberg examines the phenomenon of vaccine hesitancy (疫苗犹豫), arguing that it cannot be adequately understood as a failure of public knowledge. Many vaccine-hesitant individuals are well-informed about the scientific evidence; their hesitancy stems not from ignorance but from distrust of pharmaceutical companies, government regulators, and the medical establishment. Goldenberg argues that addressing vaccine hesitancy requires rebuilding trust through transparency, community engagement, and genuine responsiveness to public concerns.

Remark. Goldenberg's analysis illustrates the limitations of the deficit model of science communication, which assumes that public skepticism is simply a matter of insufficient information. Trust, Goldenberg argues, is relational: it depends on the perceived integrity, competence, and responsiveness of institutions, not merely on the transmission of facts.

14.4 Martinez and Mammola on Science Communication

Martinez and Mammola examine the practices of contemporary science communication, including the use of social media, data visualization, and public engagement initiatives. They argue that effective science communication requires not only translating technical findings into accessible language but also acknowledging uncertainty, presenting limitations, and engaging with the public as active participants in the process of knowledge production rather than passive recipients of expert pronouncements.

14.5 Citizen Science and Public Participation

The growing movement toward citizen science (公民科学) represents an effort to involve the public directly in the process of scientific research. Citizen science projects range from bird-watching surveys and environmental monitoring to distributed computing and data analysis. Proponents argue that citizen science democratizes the production of knowledge, engages the public in meaningful ways, and generates valuable data that professional scientists could not collect on their own.

Example. The Galaxy Zoo project invites members of the public to classify galaxies based on their morphology. Volunteers have classified millions of galaxies, and their contributions have led to genuine scientific discoveries, including the identification of a new class of astronomical objects ("green pea" galaxies). The project demonstrates that non-experts can make substantive contributions to scientific knowledge.

14.6 The Post-Truth Era

The concept of a post-truth (后真相) era — in which emotional appeals and personal beliefs carry more weight than objective facts in shaping public opinion — poses distinctive challenges for science communication. The proliferation of social media, the fragmentation of epistemic communities into ideological echo chambers, and the deliberate manufacture of doubt by interest groups have created an information environment in which scientific findings must compete with misinformation, conspiracy theories, and alternative narratives for public attention and trust.

Responding to the post-truth challenge requires more than better communication strategies. It requires attending to the institutional and social conditions that foster trust: transparency, accountability, inclusivity, and the demonstrated willingness of scientific institutions to serve the public good. The philosophy of science, by clarifying the nature and limits of scientific knowledge, can contribute to a more realistic and resilient public understanding of what science can and cannot do.


Chapter 15: Expertise, Interdisciplinarity, and Collaboration

15.1 Interactional Expertise in Practice

Plaisance and Kennedy extend Collins and Evans’s concept of interactional expertise (互动型专业知识) to examine how it functions in interdisciplinary collaboration. They argue that interactional expertise is essential for communication across disciplinary boundaries: researchers who understand the language, methods, and standards of multiple disciplines can serve as bridges, translating concepts and mediating disputes.

Interdisciplinary collaboration. Research that integrates methods, concepts, and perspectives from two or more disciplines to address problems that cannot be adequately addressed by any single discipline alone. Successful interdisciplinary collaboration requires not only technical knowledge but also the social skills needed to navigate different disciplinary cultures, norms, and expectations.

15.2 Eigenbrode et al. on Philosophical Dialogue in Interdisciplinary Teams

Eigenbrode et al. propose a framework for facilitating communication in interdisciplinary research teams by making the philosophical assumptions underlying different disciplines explicit. They identify several dimensions along which disciplines differ:

  • Ontological assumptions: What kinds of entities and processes exist?
  • Epistemological assumptions: What counts as knowledge, and how is it obtained?
  • Methodological assumptions: What methods are appropriate for investigating the phenomena of interest?
  • Value commitments: What outcomes are considered desirable?

By making these assumptions explicit and subjecting them to collective scrutiny, interdisciplinary teams can identify points of convergence and divergence, negotiate shared frameworks, and avoid misunderstandings that arise from unstated differences.

Example. An interdisciplinary team studying invasive species might include ecologists, economists, sociologists, and policy analysts. The ecologists might frame the problem in terms of ecosystem function and biodiversity; the economists in terms of costs and benefits; the sociologists in terms of community impacts; and the policy analysts in terms of regulatory effectiveness. Making these different framings explicit allows the team to develop a more comprehensive understanding of the problem and to design interventions that address multiple dimensions simultaneously.

15.3 The Challenge of Integration

Interdisciplinary collaboration faces significant institutional and intellectual challenges. Academic reward structures (hiring, promotion, tenure) tend to favor disciplinary specialization over interdisciplinary breadth. Different disciplines have different standards of evidence, different publication norms, and different conceptions of what constitutes a significant contribution. Overcoming these barriers requires institutional support, mutual respect, and a willingness to learn from unfamiliar perspectives.

Remark. The growing complexity of contemporary scientific problems — climate change, pandemic preparedness, artificial intelligence governance — increasingly demands interdisciplinary approaches. The philosophy of science can contribute to these efforts by clarifying the conceptual and methodological differences between disciplines and by facilitating productive dialogue across disciplinary boundaries.

15.4 Transdisciplinarity and Knowledge Co-Production

Beyond interdisciplinarity, some scholars advocate for transdisciplinarity (跨学科性) — an approach that integrates not only different academic disciplines but also non-academic knowledge holders, including practitioners, policymakers, Indigenous communities, and the public. Transdisciplinary research aims to co-produce knowledge that is both scientifically rigorous and socially relevant, breaking down the traditional boundary between knowledge producers and knowledge users.


Chapter 16: Synthesis — The Nature of Scientific Knowledge

16.1 No Simple Answer

The question with which we began — “What is science?” — does not admit of a simple, unitary answer. Science is not defined by a single method, a single logic, or a single set of values. It is a complex, multifaceted social practice that encompasses observation, experimentation, theorizing, modeling, and argumentation, all embedded in institutional structures and shaped by social, cultural, and political forces.

16.2 Key Tensions

The course has explored several persistent tensions in the philosophy and sociology of science:

  • Empiricism vs. theory-ladenness. Science is grounded in empirical evidence, but that evidence is always interpreted through a theoretical lens.
  • Logic vs. history. Logical reconstructions of scientific method (inductivism, falsificationism) often fail to account for the actual history of science.
  • Objectivity vs. values. Science aspires to objectivity, but values inevitably play a role in shaping research and interpreting evidence.
  • Universalism vs. pluralism. Western science claims universal validity, but other knowledge systems — including Indigenous knowledge — offer alternative, and sometimes complementary, ways of understanding the natural world.
  • Expertise vs. democracy. Scientific expertise is essential for addressing complex problems, but the concentration of epistemic authority in experts raises questions about democratic accountability and public trust.

16.3 Science as a Social Enterprise

Perhaps the most important lesson of the philosophy and sociology of science is that science is not a solitary pursuit of truth but a social enterprise (社会事业). Scientific knowledge is produced, evaluated, and revised within communities of inquirers, and the quality of that knowledge depends on the quality of the social processes — peer review, critical discussion, replication, diversity — that sustain it. Understanding the nature of scientific knowledge requires understanding not only its logical structure but also its social organization.

Remark. The recognition that science is a social enterprise does not entail that scientific knowledge is merely a social construction with no connection to reality. Rather, it means that the objectivity and reliability of scientific knowledge depend on the functioning of social institutions and practices that facilitate critical scrutiny, error correction, and the inclusion of diverse perspectives. Science is our best tool for understanding the natural world, but it works best when it is practiced in communities that are open, diverse, critical, and responsive to both evidence and values.

16.4 Looking Forward

As science confronts new challenges — artificial intelligence, gene editing, climate change, pandemics — the questions raised in this course become ever more pressing. How should we evaluate the claims of algorithms and machine learning models? How should we integrate Indigenous knowledge with Western science? How can we rebuild public trust in an era of misinformation? How should we balance the pursuit of knowledge with ethical responsibility? These are not merely academic questions; they are questions that will shape the future of science and, with it, the future of human society.


Chapter 17: Scientific Realism, Anti-Realism, and Social Constructivism

17.1 The Realism Debate

One of the deepest and most persistent questions in the philosophy of science concerns the relationship between scientific theories and reality. Scientific realism (科学实在论) holds that successful scientific theories are approximately true descriptions of the world, including its unobservable aspects, and that the theoretical entities postulated by science — atoms, electrons, genes, quarks — really exist. The core realist intuition is captured by the no miracles argument (无奇迹论证): the empirical success of science would be a miracle if our best theories were not at least approximately true. Hilary Putnam memorably declared that realism is “the only philosophy that does not make the success of science a miracle.”

Scientific realism. The view that (1) the aim of science is to produce true theories about both observable and unobservable aspects of the world; (2) successful scientific theories are approximately true; and (3) the theoretical entities postulated by successful theories genuinely exist. Realists hold that science progressively approximates the truth about the structure of reality.

17.2 Van Fraassen’s Constructive Empiricism

Bas van Fraassen, in The Scientific Image (1980), developed the most influential anti-realist alternative to scientific realism: constructive empiricism (建构经验主义). Van Fraassen argues that the aim of science is not truth but empirical adequacy (经验充分性): a theory is empirically adequate if what it says about the observable world is true. We need not believe that the unobservable entities postulated by our theories (electrons, quarks, dark matter) actually exist; we need only believe that our theories “save the phenomena” — that they accurately predict and organize our observations.

Van Fraassen’s position rests on a sharp distinction between the observable and the unobservable. He argues that while we can have good reasons for believing claims about observable entities (those that can, in principle, be perceived by the unaided human senses), we have no comparable justification for believing in unobservable entities, since our evidence for them is always indirect. The success of a theory in predicting observable phenomena does not require that its claims about unobservable entities be true; an empirically adequate theory that posits different unobservable mechanisms would serve equally well.

Remark. The realism debate has generated a rich ecosystem of positions. Entity realism (Hacking, Cartwright) is realist about entities that can be manipulated but agnostic about the theories that describe them. Structural realism (Worrall) holds that science captures the structure of reality even when its theoretical ontology is revised. Pessimistic meta-induction (Laudan) argues against realism by pointing to the long history of successful theories that were later abandoned, suggesting that our current best theories may likewise be false. Each position illuminates different aspects of the relationship between science and reality.

17.3 Feyerabend’s Epistemological Anarchism

Paul Feyerabend, in Against Method (1975), mounted the most radical challenge to the idea that science follows a distinctive methodology. Feyerabend argued that the history of science reveals no fixed methodological rules that have been consistently followed; every methodological rule has been violated at some point, and some of the most important scientific advances were made by violating the accepted rules of the time. His conclusion: the only methodological principle that does not inhibit scientific progress is “anything goes” (怎么都行) — epistemological anarchism (认识论无政府主义).

Feyerabend was not arguing that all methods are equally good. Rather, he was arguing against the codification of scientific methodology into fixed rules. He contended that the complexity and unpredictability of scientific discovery mean that no set of rules can anticipate every situation, and that rigid adherence to methodological norms can be just as harmful to scientific progress as their violation. Galileo, for instance, violated the prevailing methodological norms of his time (by relying on the telescope before its reliability had been established and by arguing against the evidence of the unaided senses), yet his violations proved enormously productive.

Feyerabend also raised political concerns about the authority of science, arguing that granting science a privileged epistemic status in democratic societies is a form of intellectual tyranny analogous to the authority once exercised by the Church. He advocated for the separation of science and state, allowing citizens to choose freely among competing knowledge traditions — including astrology, traditional medicine, and Indigenous knowledge — rather than having Western science imposed on them as the sole arbiter of truth.

17.4 Social Constructivism and the Strong Programme

The sociology of scientific knowledge (科学知识社会学, SSK) emerged in the 1970s as a research programme that applied sociological analysis not merely to the institutional structures of science but to the content of scientific knowledge itself. The most influential formulation was David Bloor’s strong programme (强纲领), developed at the Edinburgh School (爱丁堡学派).

Bloor proposed four principles for the sociological study of scientific knowledge:

  1. Causality. The sociology of scientific knowledge should be causal: it should identify the social causes that lead to the production of particular beliefs and knowledge claims.
  2. Impartiality. It should be impartial with respect to truth and falsity, rationality and irrationality: both true and false beliefs require sociological explanation.
  3. Symmetry. The same types of causes should explain both true and false beliefs. One should not explain true beliefs by appeal to rational evidence and false beliefs by appeal to social factors; the same explanatory resources should be applied to both.
  4. Reflexivity. The programme should be applicable to itself: the sociology of scientific knowledge must acknowledge that its own claims are subject to sociological explanation.

The strong programme. Bloor’s methodological framework for the sociology of scientific knowledge, which holds that both true and false scientific beliefs should be explained by the same types of social causes. The strong programme rejects the traditional asymmetry that explains true beliefs by appeal to evidence and reason while explaining false beliefs by appeal to social or psychological factors.

The strong programme was enormously controversial. Critics accused it of relativism — of denying that scientific knowledge has any special claim to truth or objectivity. Defenders argued that the programme does not deny the reality of the natural world but merely insists that the social processes by which knowledge is produced deserve analysis regardless of whether the resulting beliefs turn out to be true. The symmetry principle, in particular, generated intense debate: does treating true and false beliefs symmetrically imply that truth is merely a social convention?

The influence of SSK extended beyond philosophy into the interdisciplinary field of science and technology studies (STS, 科学技术学), which examines the production, dissemination, and social implications of scientific knowledge. Key contributions include laboratory studies (Latour and Woolgar, Knorr-Cetina), which examined the day-to-day practices through which scientific facts are constructed in the laboratory, and actor-network theory (Latour, Callon), which analyzed science as a network of human and non-human actors.

17.5 Beyond the Science Wars

The debates between realists and constructivists — sometimes called the science wars (科学大战) — reached their peak in the 1990s, culminating in the Sokal affair, in which physicist Alan Sokal published a deliberately nonsensical paper in a cultural studies journal to expose what he saw as the intellectual bankruptcy of postmodernist approaches to science. The science wars generated more heat than light, but they also clarified important distinctions and prompted more nuanced positions on both sides.

The most productive legacy of these debates has been the recognition that the relationship between science and society is complex and multidirectional. Scientific knowledge is shaped by social factors, but it also constrains and transforms social reality. The task for contemporary philosophy and sociology of science is not to choose between realism and constructivism but to develop frameworks that do justice to both the social embeddedness and the epistemic achievements of science.