PHIL 258: Introduction to the Philosophy of Science
Doreen Fraser
Estimated study time: 42 minutes
Sources and References
- Peter Godfrey-Smith, Theory and Reality: An Introduction to the Philosophy of Science, 2nd ed. (Chicago: University of Chicago Press, 2021).
- Thomas S. Kuhn, The Structure of Scientific Revolutions, 4th ed. (Chicago: University of Chicago Press, 2012).
- Rudolf Carnap, “The Elimination of Metaphysics Through Logical Analysis of Language” (1932).
- Karl Popper, The Logic of Scientific Discovery (1934/1959).
- Karl Popper, Conjectures and Refutations (London: Routledge, 1963).
- Imre Lakatos, “Falsification and the Methodology of Scientific Research Programmes,” in Criticism and the Growth of Knowledge (1970).
- Larry Laudan, Progress and Its Problems (Berkeley: University of California Press, 1977).
- Carl Hempel and Paul Oppenheim, “Studies in the Logic of Explanation,” Philosophy of Science 15 (1948): 135–175.
- Wesley Salmon, Scientific Explanation and the Causal Structure of the World (Princeton: Princeton University Press, 1984).
- Philip Kitcher, “Explanatory Unification,” Philosophy of Science 48 (1981): 507–531.
- Bas van Fraassen, The Scientific Image (Oxford: Clarendon Press, 1980).
- Helen Longino, Science as Social Knowledge (Princeton: Princeton University Press, 1990).
- Donna Haraway, “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective,” Feminist Studies 14 (1988): 575–599.
- Stanford Encyclopedia of Philosophy entries: “Scientific Realism,” “Thomas Kuhn,” “The Demarcation Problem,” “Scientific Explanation,” “Bayesian Epistemology,” “Feminist Epistemology and Philosophy of Science.”
Chapter 1: What Is Science? The Demarcation Problem
1.1 The Central Question
Philosophy of science begins with a deceptively simple question: what distinguishes science from non-science? This is the demarcation problem (科学划界问题). It asks whether there is some criterion or set of criteria that separates genuine scientific inquiry — physics, chemistry, biology — from pseudo-science, metaphysics, religion, and common sense. The stakes are high: public policy, education, and legal disputes (such as creationism vs. evolution in the courtroom) all depend on our ability to draw this line.
1.2 Historical Approaches
1.2.1 The Verificationist Criterion
The logical positivists of the Vienna Circle proposed that the meaning of a statement is given by the method of its verification. A statement is scientifically meaningful if and only if it can, in principle, be verified by observation or experiment. Statements that fail this test — such as “the Absolute is beyond time” — are dismissed as literally meaningless, not merely false.
1.2.2 Popper’s Falsificationism
Karl Popper rejected verificationism as a criterion of meaning but retained the demarcation question as central. For Popper, what makes a theory scientific is not that it can be confirmed, but that it is falsifiable (可证伪的): it makes predictions that could, in principle, be shown to be false. Popper regarded Marxist historiography and Freudian psychoanalysis as pseudo-scientific precisely because they seemed capable of accommodating any evidence whatsoever.
1.2.3 Kuhn and Beyond
Thomas Kuhn argued that the demarcation problem, as traditionally conceived, may be misguided. Science is distinguished not by a single logical criterion but by the social and cognitive practice of puzzle-solving (解谜) within a shared paradigm. Later philosophers, most prominently Larry Laudan, argued that the demarcation problem is intractable and that “pseudo-science” is a hollow label that does no real philosophical work. The contemporary consensus tends to favor a cluster of features — testability, progress, peer review, methodological rigor — rather than any single necessary and sufficient condition.
1.3 Science and Non-Science: A Spectrum?
Rather than a sharp boundary, many contemporary philosophers see a continuum between clear paradigm cases of science (particle physics, molecular biology) and clear cases of non-science (astrology, numerology), with genuinely contested cases in between (some forms of economics, certain branches of psychology). This does not mean the distinction is worthless, only that it is best understood as a matter of degree rather than kind.
Chapter 2: Logical Positivism and the Vienna Circle
2.1 Origins and Motivations
Logical positivism (逻辑实证主义), also known as logical empiricism, emerged in the 1920s and 1930s from a group of philosophers, mathematicians, and scientists centered in Vienna — the Vienna Circle (维也纳学派). Key members included Moritz Schlick, Rudolf Carnap, Otto Neurath, Hans Hahn, and (on the periphery) Ludwig Wittgenstein and Karl Popper. The movement sought to place philosophy on a rigorous scientific footing by applying the tools of modern logic, developed by Frege, Russell, and Whitehead, to the analysis of scientific language.
2.2 The Verification Principle
The cornerstone of logical positivism is the verification principle (证实原则), or verifiability criterion of meaning. In its strong form: a synthetic (non-analytic) statement is meaningful if and only if it is conclusively verifiable by experience. Carnap and others quickly recognized that this strong form was too restrictive — it would rule out universal scientific laws (which cannot be conclusively verified by any finite number of observations). Successive weakenings included:
- Weak verifiability: a statement is meaningful if experience could be relevant to its truth or falsehood.
- Confirmability: a statement is meaningful if observations could raise or lower its probability (Carnap’s later view).
Despite these revisions, critics argued that the verification principle is self-undermining: the principle itself is neither analytic nor empirically verifiable, so by its own criterion it is meaningless.
2.3 The Analytic-Synthetic Distinction
Logical positivists relied on a sharp distinction between analytic (分析性的) statements (true by virtue of meaning or logic, e.g., “all bachelors are unmarried”) and synthetic (综合性的) statements (true or false by virtue of how the world is, e.g., “water boils at 100 degrees Celsius at standard pressure”). W.V.O. Quine’s 1951 paper “Two Dogmas of Empiricism” famously attacked this distinction, arguing that no principled line can be drawn between statements true by convention and those true by fact. This critique was one of several that led to the decline of logical positivism as a dominant movement.
2.4 Carnap’s Rational Reconstruction
Rudolf Carnap proposed that the philosophy of science should engage in rational reconstruction (理性重建): taking the informal reasoning of actual scientists and recasting it in precise logical terms. The goal was not to describe how scientists actually think (a task for psychology) but to exhibit the logical structure of scientific knowledge. Carnap developed formal systems of inductive logic (归纳逻辑), attempting to define a precise notion of the degree of confirmation that a body of evidence confers on a hypothesis.
2.5 The Legacy of Logical Positivism
By the 1960s, logical positivism in its strict form was widely considered refuted. Yet its legacy is enormous. It established the philosophy of science as a rigorous, technical discipline; it focused attention on the logical structure of theories, the nature of confirmation, and the relationship between theory and observation. Many subsequent debates — about scientific realism, the theory-ladenness of observation, the nature of explanation — were reactions to positivist positions.
Chapter 3: Popper: Falsificationism and the Growth of Knowledge
3.1 The Problem of Induction
Induction (归纳法) is the process of drawing general conclusions from particular observations. David Hume showed in the eighteenth century that induction cannot be logically justified: no finite number of observations of white swans can logically guarantee that all swans are white. This is the problem of induction (归纳问题). Popper accepted Hume’s critique as decisive and concluded that science does not, and should not, rely on induction.
3.2 Falsificationism
In place of induction, Popper proposed falsificationism (证伪主义) as both a criterion of demarcation and a methodology for science. A theory is scientific if and only if it is falsifiable — that is, if it makes predictions that could, in principle, be refuted by observation. The scientific method, on this view, consists of bold conjectures followed by severe tests. Scientists should try their hardest to refute their own theories; a theory that survives such testing is corroborated (受检验的), though never proven.
3.2.1 Asymmetry of Verification and Falsification
Popper emphasized a logical asymmetry: while no number of confirming instances can verify a universal law, a single counter-instance can refute it. “All swans are white” cannot be conclusively verified by any finite number of observations, but it is decisively refuted by a single black swan. Falsificationism exploits this asymmetry: science progresses by eliminating false theories.
3.3 Degrees of Falsifiability and the Growth of Knowledge
Popper argued that more falsifiable theories are better — they are bolder, they say more about the world, and they are more testable. Einstein’s general relativity, which made precise quantitative predictions about the bending of starlight, was more falsifiable than vaguer theories that could accommodate almost any observation. Scientific progress consists in replacing less falsifiable theories with more falsifiable ones that survive testing.
3.4 Problems with Falsificationism
3.4.1 The Duhem-Quine Problem
The Duhem-Quine thesis (迪昂-蒯因论题) holds that hypotheses are never tested in isolation. Any prediction derived from a theory relies on auxiliary hypotheses (辅助假设) — assumptions about initial conditions, the reliability of instruments, and the absence of interfering factors. When a prediction fails, logic alone cannot determine whether the main hypothesis or an auxiliary assumption is at fault. This means that falsification is never as clean as Popper’s model suggests.
3.4.2 The Role of Confirmation
Critics argue that Popper underestimates the importance of confirmation. Scientists do not merely try to falsify; they also seek positive evidence. Moreover, the notion of corroboration turns out to be hard to distinguish clearly from confirmation, despite Popper’s insistence on the difference.
3.5 Popper’s Influence
Despite these criticisms, Popper’s influence on the philosophy of science and on scientists themselves has been immense. Many working scientists describe themselves as Popperians. The emphasis on testability, boldness, and the critical attitude remains a central part of the self-image of science.
Chapter 4: Kuhn: Paradigms, Normal Science, and Revolutions
4.1 The Structure of Scientific Revolutions
Thomas Kuhn’s The Structure of Scientific Revolutions (1962) is arguably the most influential work in the philosophy of science in the twentieth century. Kuhn challenged the cumulative, logic-driven picture of science offered by both positivists and Popper, replacing it with a historically grounded account emphasizing discontinuity, community, and the social dimensions of scientific practice.
4.2 Paradigms
The most famous (and most contested) concept in Kuhn’s work is the paradigm (范式). In its broadest sense, a paradigm is a shared framework — including theories, methods, standards, exemplary problem-solutions, and values — that defines a scientific community and its practice. Kuhn later distinguished two senses:
- Disciplinary matrix (学科基质): the entire constellation of beliefs, techniques, and values shared by a scientific community.
- Exemplars (范例): the concrete puzzle-solutions that serve as models for future work (e.g., the inclined plane in Newtonian mechanics).
4.3 Normal Science
Most scientific activity, Kuhn argued, is normal science (常规科学): the routine work of solving puzzles within the framework set by the reigning paradigm. Normal scientists do not question the paradigm’s fundamental assumptions; instead, they articulate it, extend its range, and improve its precision. The paradigm defines what counts as a legitimate problem and what counts as an acceptable solution.
4.3.1 Puzzle-Solving
Kuhn likened normal science to puzzle-solving. A crossword puzzle has a guaranteed solution (assuming it was constructed properly), and the solver works within established rules. Similarly, the normal scientist assumes the paradigm is correct and that any difficulty is a puzzle to be solved, not evidence against the paradigm. This expectation of solvability is what distinguishes puzzles from unsolvable mysteries.
4.4 Anomalies and Crisis
When normal science encounters persistent failures — phenomena that resist explanation within the paradigm — these become anomalies (反常现象). A few anomalies are tolerable and even expected. But when anomalies accumulate, when the best scientists cannot resolve them, and when the paradigm begins to seem inadequate, a state of crisis (危机) ensues. Crisis loosens the grip of the paradigm, making the community receptive to radical alternatives.
4.5 Scientific Revolutions
A scientific revolution (科学革命) occurs when one paradigm is replaced by another. Kuhn described this process as involving a fundamental shift in the conceptual framework: the new paradigm does not merely add to the old one but reorganizes the field’s concepts, problems, and standards. Kuhn drew an analogy with political revolutions: just as political revolutions change the institutions by which political change is debated, scientific revolutions change the standards by which theories are evaluated.
4.5.1 Incommensurability
Kuhn’s most controversial thesis is that successive paradigms are incommensurable (不可通约的). This means that they employ different concepts, ask different questions, and apply different standards. Proponents of rival paradigms, Kuhn suggested, may talk past each other because the meaning of key terms changes across paradigms. “Mass” in Newtonian mechanics is not the same concept as “mass” in Einsteinian mechanics, even though the same word is used.
4.5.2 Gestalt Switches and World Change
Kuhn compared paradigm shifts to gestalt switches (格式塔转换) — the sudden perceptual reorganization that occurs when one sees, say, the duck-rabbit figure shift from one interpretation to another. He went further, suggesting (provocatively) that scientists working under different paradigms live in “different worlds.” This claim has been interpreted in stronger and weaker ways, from a radical metaphysical thesis to a more modest claim about perceptual and conceptual frameworks.
4.6 The Rationality of Science
Kuhn’s account raised deep questions about whether scientific change is rational. If paradigm choice cannot be settled by logic and evidence alone, does the triumph of a new paradigm depend on rhetoric, power, or generational change? Kuhn denied that he was an irrationalist, insisting that there are good reasons for paradigm change — reasons involving accuracy, scope, simplicity, and fruitfulness — but that these reasons do not logically compel agreement.
4.7 The Legacy of Kuhn
Kuhn’s influence extends far beyond the philosophy of science. The phrase “paradigm shift” has entered everyday language. Kuhn inspired the sociology of scientific knowledge, science and technology studies, and the “strong programme” in the sociology of science. Yet his work also provoked powerful criticisms from philosophers who sought to preserve a more rationalist account of scientific progress.
Chapter 5: Lakatos and Laudan: Research Programs and Traditions
5.1 Lakatos: Methodology of Scientific Research Programs
Imre Lakatos sought to combine the best insights of Popper and Kuhn. He rejected Popper’s naive falsificationism (theories are refuted by single observations) but also rejected Kuhn’s apparent irrationalism. His solution was the concept of the scientific research program (科学研究纲领).
5.1.1 Hard Core and Protective Belt
The hard core (硬核) contains the fundamental assumptions of the program — for Newtonian mechanics, these include Newton’s three laws and the law of universal gravitation. The hard core is, by methodological decision, irrefutable. Anomalies are dealt with by modifying the protective belt (保护带) of auxiliary hypotheses, initial conditions, and observational interpretations.
5.1.2 Progressive and Degenerating Programs
Lakatos distinguished between progressive (进步的) and degenerating (退化的) research programs. A program is theoretically progressive if successive modifications of the protective belt lead to novel predictions; it is empirically progressive if some of those predictions are confirmed. A program that only adjusts ad hoc to accommodate known anomalies, without predicting new facts, is degenerating.
5.1.3 Criticism of Lakatos
Critics have pointed out that Lakatos gives no clear criterion for how long one should wait before declaring a program degenerating. At any given moment, a temporarily degenerating program might recover. Lakatos himself acknowledged this difficulty, stating that his methodology applies only “in hindsight” — but this concession weakens the methodology’s prescriptive force.
5.2 Laudan: Research Traditions and Problem-Solving
Larry Laudan offered yet another framework. He rejected the focus on truth and confirmation, proposing instead that science aims at problem-solving effectiveness (问题解决效力). A theory is better than its rivals if it solves more empirical and conceptual problems while generating fewer anomalies.
5.2.1 Empirical and Conceptual Problems
Laudan distinguished between empirical problems (经验问题) — puzzles about the natural world that call for explanation — and conceptual problems (概念问题) — internal inconsistencies or tensions with other accepted theories. A theory’s progressiveness is measured by the ratio of solved problems to generated anomalies and conceptual problems.
5.2.2 Criticism of Laudan
Laudan’s framework faces challenges of its own. How do we individuate and count problems? Is the distinction between empirical and conceptual problems always clear? And does the rejection of truth as a goal of science undermine the motivation for doing science at all?
5.3 Comparing Kuhn, Lakatos, and Laudan
All three thinkers rejected the logical positivist picture of science as a purely logical enterprise. All emphasized the historical and social dimensions of scientific practice. But they differed on the role of rationality: Kuhn was most willing to acknowledge extra-rational factors; Lakatos tried to preserve a rational methodology; Laudan sought a middle path by redefining rationality in terms of problem-solving rather than truth-seeking.
Chapter 6: Bayesian Approaches to Confirmation
6.1 The Basic Idea
Bayesianism (贝叶斯主义) offers a formal framework for understanding how evidence confirms or disconfirms hypotheses. It uses probability theory — specifically Bayes’ theorem (贝叶斯定理) — to model the rational updating of beliefs in response to evidence.
P(H | E) = [P(E | H) × P(H)] / P(E)
where P(H) is the prior probability (先验概率) of H, P(E | H) is the likelihood (似然度), and P(H | E) is the posterior probability (后验概率).
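The theorem can be run as arithmetic. A minimal sketch of a single Bayesian update, where H is some hypothesis and E some evidence; all of the specific probabilities below are illustrative assumptions, not values from the text:

```python
# Minimal sketch of one Bayesian update; all numbers are illustrative.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem, expanding P(E) by
    the law of total probability over H and not-H."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Prior P(H) = 0.3; the evidence is much likelier if H is true
# (P(E | H) = 0.9) than if H is false (P(E | not-H) = 0.2).
print(round(posterior(0.3, 0.9, 0.2), 3))  # 0.659: E raises P(H) from 0.30
```

Because the posterior (about 0.66) exceeds the prior (0.30), this evidence confirms H in the probability-raising sense defined above.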
6.2 Confirmation as Probability Raising
On the Bayesian account, evidence E confirms hypothesis H just in case E raises the probability of H:
P(H | E) > P(H)
Evidence disconfirms H if it lowers the probability:
P(H | E) < P(H)
This simple idea captures a great deal of intuitive reasoning about evidence.
6.3 Prior Probabilities
A central and controversial feature of Bayesianism is its reliance on prior probabilities (先验概率). Before any evidence is gathered, the agent assigns a probability to each hypothesis. Different agents may assign different priors, leading to different posteriors even given the same evidence. This is often called the problem of the priors (先验概率问题).
6.3.1 Subjective and Objective Bayesianism
Subjective Bayesians (主观贝叶斯主义者) hold that any coherent set of priors is rationally permissible; what matters is that the agent updates correctly via Bayes’ theorem. Over time, as evidence accumulates, agents with different priors will converge on the same posterior probabilities (this is the convergence theorem or “washing out of the priors”). Objective Bayesians (客观贝叶斯主义者) seek constraints on priors — for example, the principle of indifference (无差异原则) assigns equal probability to each possibility when there is no reason to favor one over another.
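The washing out of the priors can be illustrated numerically. In the sketch below, two agents with wildly different priors update on the same evidence stream about a coin; the coin-bias setup, the priors, and the 40-heads/10-tails data are all illustrative assumptions:

```python
# Sketch of "washing out of the priors": two agents, same evidence,
# very different starting points. All numbers are illustrative.

def update(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayesian conditionalization on a piece of evidence."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H: the coin lands heads with probability 0.8; not-H: the coin is fair.
# A fixed evidence stream of 40 heads and 10 tails.
evidence = ["H"] * 40 + ["T"] * 10

optimist, skeptic = 0.9, 0.01  # wildly different prior probabilities for H
for flip in evidence:
    p_h = 0.8 if flip == "H" else 0.2  # likelihood of this flip under H
    p_n = 0.5                           # likelihood under the fair-coin rival
    optimist = update(optimist, p_h, p_n)
    skeptic = update(skeptic, p_h, p_n)

print(optimist, skeptic)  # both near 1: the disagreement has washed out
```

After fifty flips the two posteriors differ by less than a percentage point, even though the priors differed by almost 0.9.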
6.4 Bayesianism and Traditional Problems
6.4.1 The Ravens Paradox
The ravens paradox (乌鸦悖论), due to Carl Hempel, asks: if “all ravens are black” is confirmed by observing a black raven, is it also confirmed by observing a non-black non-raven (e.g., a white shoe)? The paradox arises because the two statements are logically equivalent (by contraposition). Bayesianism offers a resolution: while a white shoe does technically confirm the hypothesis, it does so only by a negligibly small amount, because the prior probability of observing a non-black non-raven is very high.
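The "negligible confirmation" point can be made concrete with a toy finite-world model (the world of 100 objects below, and its counts, are illustrative assumptions): under not-H exactly one raven is white, so a randomly sampled raven and a randomly sampled non-black object both carry information about H, but in very different amounts.

```python
# Toy finite-world model of the Bayesian resolution of the ravens
# paradox. World: 10 ravens, 90 non-ravens, 50 non-ravens non-black.
# H: all ravens are black. not-H: exactly one raven is white.
# All counts and the prior are illustrative assumptions.

PRIOR = 0.5

def posterior(p_e_h, p_e_not_h):
    """P(H | E) from the two likelihoods and the fixed prior."""
    return p_e_h * PRIOR / (p_e_h * PRIOR + p_e_not_h * (1 - PRIOR))

# Case 1: sample a random raven; it turns out black.
#   P(black | raven, H) = 10/10; P(black | raven, not-H) = 9/10.
p_after_raven = posterior(1.0, 9 / 10)

# Case 2: sample a random non-black object; it turns out not to be a raven.
#   Under H all 50 non-black objects are non-ravens: P = 50/50.
#   Under not-H the white raven joins them: P = 50/51.
p_after_shoe = posterior(1.0, 50 / 51)

print(p_after_raven)  # ~0.526: noticeable confirmation
print(p_after_shoe)   # ~0.505: confirmation, but a negligible amount
```

Both observations raise P(H) above the 0.5 prior, but the non-black non-raven barely moves it, because non-black objects vastly outnumber ravens.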
6.4.2 The Problem of Old Evidence
The problem of old evidence (旧证据问题), identified by Clark Glymour, asks: how can evidence that was already known before a theory was formulated confirm that theory? On a strict Bayesian account, if E is already known, then P(E) = 1, and E cannot raise the probability of H. Various solutions have been proposed, including “counterfactual” approaches that consider what the agent’s probabilities would have been had they not known E.
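The difficulty is pure arithmetic: once P(E) = 1, probability theory forces P(E | H) = 1 for any hypothesis with nonzero prior, so Bayes' theorem returns the prior unchanged. A one-screen sketch (the 0.4 prior is an illustrative assumption):

```python
# The old-evidence problem in one calculation: conditionalizing on
# evidence already known with certainty cannot move P(H).

prior_h = 0.4       # illustrative prior probability of H
p_e = 1.0           # E is old evidence, already known for certain
p_e_given_h = 1.0   # certainty in E forces the conditional to 1 as well

posterior_h = p_e_given_h * prior_h / p_e  # Bayes' theorem
print(posterior_h == prior_h)  # True: old evidence confirms nothing
```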
6.5 Strengths and Limitations
Bayesianism provides a powerful, unified framework for reasoning about evidence. It handles many traditional puzzles elegantly and connects philosophy of science to formal decision theory and statistics. However, critics worry about the subjectivity of priors, the computational intractability of real scientific reasoning, and the idealization involved in treating scientists as perfect Bayesian updaters.
Chapter 7: Observation and Theory: The Theory-Ladenness of Observation
7.1 The Empiricist Picture
The traditional empiricist picture assumes a clean separation between observation (观察) and theory (理论). Observations provide the neutral, objective data against which theories are tested. This picture was central to both logical positivism and early forms of falsificationism.
7.2 Theory-Ladenness
The thesis of the theory-ladenness of observation (观察渗透理论) holds that what we observe — and how we describe our observations — is influenced by the theories we already hold. This idea, developed by N.R. Hanson, Kuhn, and Paul Feyerabend, challenges the empiricist picture at its foundation.
7.2.1 Hanson’s Analysis
Norwood Russell Hanson argued in Patterns of Discovery (1958) that seeing is not a purely passive reception of sense data; it is an interpretive act. When Tycho Brahe and Johannes Kepler watched the sunrise, they literally “saw” different things: Tycho saw the sun moving around the earth, while Kepler saw the earth turning to reveal the sun.
7.2.2 Kuhn on Observation
Kuhn extended this analysis, arguing that paradigm shifts involve changes in perception. After a revolution, scientists working under the new paradigm see the world differently. The shift from the phlogiston theory to oxygen chemistry, for example, involved not just a new theory but a new way of seeing combustion.
7.3 The Observation-Theory Distinction
The theory-ladenness thesis does not necessarily destroy the distinction between observation and theory; it complicates it. Several responses have been offered:
- Degrees of theory-ladenness: Some observations are more theory-laden than others. Reading a thermometer is relatively uncontroversial; interpreting a cloud-chamber photograph requires sophisticated theoretical knowledge.
- Intersubjective agreement: Even if all observation is theory-laden, observers with different theoretical commitments can sometimes agree on observational reports.
- Reliable instruments: Scientific instruments can be calibrated and tested independently, providing a form of objectivity that does not depend on theory-neutral perception.
7.4 The Underdetermination of Theory by Evidence
A related issue is the underdetermination thesis (不充分决定论): for any body of evidence, there are in principle multiple theories that are equally well supported. If observation cannot uniquely determine theory, then theory choice must depend on additional factors — simplicity, elegance, explanatory power, or social context.
Chapter 8: Scientific Explanation: DN Model, Causal, Unificationist
8.1 What Is Scientific Explanation?
One of the central aims of science is to explain phenomena. But what exactly constitutes a good scientific explanation? This question has generated several competing accounts.
8.2 The Deductive-Nomological (DN) Model
The deductive-nomological model (演绎-律则模型), developed by Carl Hempel and Paul Oppenheim, is the classic account. An explanation consists of two parts:
- The explanans (解释项): a set of statements including at least one general law and a description of initial conditions.
- The explanandum (被解释项): the statement describing the phenomenon to be explained.
The explanandum must follow deductively from the explanans. The explanation is adequate if it invokes true premises and if the derivation is logically valid.
8.2.1 Problems with the DN Model
The DN model faces well-known counterexamples:
The flagpole problem: From the height of a flagpole and the angle of the sun, we can deduce the length of its shadow, and this counts as a DN explanation. But we can equally deduce the height of the flagpole from the length of the shadow and the sun’s angle. Yet intuitively, the shadow does not explain the flagpole’s height. The DN model cannot distinguish explanatory from non-explanatory derivations.
Irrelevant factors: A man takes birth-control pills and does not become pregnant. We can construct a valid DN argument: all men who take birth-control pills do not become pregnant; this man takes birth-control pills; therefore he does not become pregnant. But the pills are explanatorily irrelevant.
Symmetry: The DN model does not capture the asymmetry of explanation (解释的不对称性) — the intuition that causes explain effects but not vice versa.
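The flagpole and symmetry problems can be seen in a few lines of trigonometry: the very same law licenses a valid derivation in both directions, yet only one direction feels explanatory. The specific height and sun angle below are illustrative numbers.

```python
# Sketch of the DN symmetry problem: one law, two valid derivations.
import math

def shadow_length(height, sun_elevation_deg):
    """Law + conditions -> shadow: the intuitively explanatory direction."""
    return height / math.tan(math.radians(sun_elevation_deg))

def flagpole_height(shadow, sun_elevation_deg):
    """The same law run backwards -> height: deductively just as valid,
    yet intuitively the shadow does not explain the flagpole's height."""
    return shadow * math.tan(math.radians(sun_elevation_deg))

h, angle = 10.0, 30.0  # illustrative flagpole height and sun elevation
s = shadow_length(h, angle)
print(flagpole_height(s, angle))  # recovers 10.0 (up to float rounding)
```

Nothing in the deductive structure distinguishes the two functions; the asymmetry the DN model misses has to come from elsewhere, such as the causal direction discussed in section 8.4.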
8.3 Statistical Explanation
Hempel also developed an inductive-statistical (IS) model (归纳-统计模型) for probabilistic explanation. An event is explained by showing that it was highly probable given certain laws and conditions. But this faces the problem that events with low probability can still be explained (e.g., explaining why a particular atom decayed, given that the probability was low).
8.4 Causal Accounts
Causal theories of explanation (因果解释理论) hold that to explain an event is to identify its cause. Wesley Salmon developed an influential version based on the notion of causal processes (因果过程) — physical processes that can transmit marks or information — and causal interactions (因果交互作用) — spatiotemporal intersections of causal processes that modify them.
8.4.1 Advantages
Causal accounts handle the flagpole problem (the flagpole’s height causes the shadow, not vice versa) and the irrelevance problem (birth-control pills do not cause men’s non-pregnancy). They also capture the intuitive asymmetry of explanation.
8.4.2 Challenges
Defining causation precisely is notoriously difficult. Different accounts of causation (counterfactual, probabilistic, mechanistic) yield different accounts of explanation. Moreover, some scientific explanations — especially in physics — do not seem to invoke causes in any straightforward sense (e.g., explanations citing symmetry principles or conservation laws).
8.5 Unificationist Accounts
Philip Kitcher proposed that explanation is unification (统一化): to explain is to show that diverse phenomena can be derived from a small number of argument patterns. The best explanatory theory is the one that derives the most phenomena from the fewest patterns.
8.5.1 Challenges to Unification
Critics ask whether unification is really sufficient for explanation, or merely a desirable feature of theories. A theory might unify without explaining (e.g., a mere conjunction of two unrelated laws unifies without illuminating). Moreover, the framework requires a precise account of what counts as an “argument pattern,” which has proven elusive.
8.6 Pragmatic and Pluralist Approaches
Bas van Fraassen proposed a pragmatic theory of explanation (语用解释理论): explanations are answers to why-questions, and what counts as a good answer depends on context. This approach embraces the variability of explanation across different contexts and purposes. Many contemporary philosophers adopt a pluralist stance, holding that different accounts capture different aspects of the rich and varied practice of scientific explanation.
Chapter 9: Scientific Realism vs. Anti-Realism
9.1 The Realism Debate
The debate between scientific realism (科学实在论) and anti-realism (反实在论) is one of the most enduring controversies in the philosophy of science. It concerns the status of the unobservable entities — electrons, genes, quarks, dark matter — posited by our best scientific theories.
9.2 Arguments for Realism
9.2.1 The No-Miracles Argument
The most influential argument for realism is the no-miracles argument (非奇迹论证), due to Hilary Putnam: the success of science — its ability to predict novel phenomena, to develop powerful technologies, and to achieve convergence across independent lines of investigation — would be a miracle if our theories were not at least approximately true. Realism, the argument goes, is the best explanation of the success of science.
9.2.2 Inference to the Best Explanation
Realists often invoke inference to the best explanation (最佳解释推理, also known as abduction): we should believe the theory that best explains the evidence. Since scientific theories best explain their domains, we should believe they are approximately true, including their claims about unobservable entities.
9.3 Arguments Against Realism
9.3.1 The Pessimistic Meta-Induction
The pessimistic meta-induction (悲观元归纳), due to Larry Laudan, notes that the history of science is littered with once-successful theories that are now regarded as false: the caloric theory of heat, the phlogiston theory, the luminiferous aether. If past successful theories turned out to be false, what grounds do we have for thinking our current theories are true?
9.3.2 Underdetermination
Anti-realists also invoke underdetermination: if multiple incompatible theories can account for all the evidence, we have no empirical grounds for choosing between them, and no basis for claiming that any one of them is true.
9.3.3 Van Fraassen’s Constructive Empiricism
Bas van Fraassen’s constructive empiricism (建构经验主义), developed in The Scientific Image (1980), is the most influential anti-realist position. Van Fraassen holds that science aims not at truth but at empirical adequacy (经验适当性): a theory is empirically adequate if what it says about observable entities and events is true. We need not believe that what a theory says about unobservable entities is true — we need only accept the theory as empirically adequate.
9.4 Entity Realism and Structural Realism
Several intermediate positions have emerged:
Entity realism (实体实在论), associated with Ian Hacking, holds that we should be realists about entities that we can manipulate (e.g., electrons used in experiments), even if we are skeptical about the theories describing them.
Structural realism (结构实在论), developed by John Worrall, holds that while our theories may be wrong about the nature of unobservable entities, they are right about the structure of the world — the mathematical relations between entities. This position accommodates the pessimistic meta-induction by noting that mathematical structure is often preserved across theory change.
9.5 The Contemporary Debate
The realism debate remains very much alive. Contemporary discussions focus on selective realism (being realist about some theoretical posits but not others), the role of explanation in theory choice, and the relationship between realism and the practice of science.
Chapter 10: Feminism and Philosophy of Science
10.1 The Challenge
Feminist philosophy of science (女性主义科学哲学) asks how gender, and more broadly social identity, affects the production of scientific knowledge. It challenges the traditional view of science as a value-free, purely objective enterprise and examines the ways in which social factors — including gender bias — can shape scientific theories, methods, and institutions.
10.2 Feminist Empiricism
Feminist empiricism (女性主义经验主义) holds that sexism and androcentrism in science are correctable biases. The remedy is not to abandon empiricism but to practice it more rigorously: more diverse scientists, better methodology, more careful attention to evidence. On this view, feminism improves science by helping it live up to its own ideals of objectivity and evidence-responsiveness.
10.3 Standpoint Epistemology
Standpoint epistemology (立场认识论) goes further, arguing that marginalized social positions can provide epistemic advantages. Those who occupy subordinate positions in social hierarchies may see aspects of reality that are invisible to those in positions of privilege. This does not mean that any individual’s perspective is automatically correct; rather, certain social positions generate resources for critical inquiry.
10.4 Longino’s Contextual Empiricism
Helen Longino developed a sophisticated account of contextual empiricism (语境经验主义), arguing that background assumptions (背景假设) inevitably mediate the relationship between evidence and hypothesis. Because these assumptions reflect social values and interests, objectivity cannot be a property of individual scientists. Instead, objectivity is a social achievement: it requires communities with diverse perspectives, venues for criticism, shared standards, and responsiveness to criticism. Longino’s account shows how values can be part of good science, not merely distortions of it.
10.5 Donna Haraway and Situated Knowledges
Donna Haraway argued for the concept of situated knowledges (情境知识): all knowledge is produced from a particular embodied, social, and historical location. The “god trick” of claiming to see everything from nowhere is an illusion. Genuine objectivity requires acknowledging one’s situatedness and engaging in critical dialogue with others who occupy different positions.
10.6 Implications for Scientific Practice
Feminist philosophy of science has practical implications: it supports the diversification of the scientific workforce, the re-examination of research priorities, and the critical scrutiny of background assumptions in research design. It challenges the idea that science can or should be “value-free” while offering tools for distinguishing productive from distorting values in science.
Chapter 11: Science, Values, and Society
11.1 The Value-Free Ideal
The traditional view holds that science should be value-free (价值无涉的): scientists should follow the evidence wherever it leads, without allowing personal, political, or moral values to influence their conclusions. This ideal has deep roots in the Enlightenment and was articulated forcefully by the logical positivists, who distinguished sharply between facts and values.
11.2 Challenges to the Value-Free Ideal
11.2.1 The Role of Values in Theory Choice
Kuhn identified five values that scientists use in choosing between theories: accuracy (精确性), consistency (一致性), scope (范围), simplicity (简单性), and fruitfulness (多产性). These are epistemic values (认识论价值) — they are connected to the goal of finding truth. But Kuhn noted that scientists may weigh these values differently, and that this legitimate disagreement opens space for social and personal factors to influence theory choice.
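Kuhn's point about divergent weightings can be made vivid with a toy calculation. This is entirely our illustration, not Kuhn's: he explicitly denies that theory choice follows any algorithm, and the scores and weights below are hypothetical. The sketch only shows how two scientists sharing the same five values, applied to the same evidence, can still rank rival theories differently.

```python
# Toy illustration of Kuhn's shared-values / different-weights point.
# Scores and weights are hypothetical, invented for this example.

VALUES = ["accuracy", "consistency", "scope", "simplicity", "fruitfulness"]

# Hypothetical scores (0-10) for two rival theories on each of Kuhn's values.
scores = {
    "theory_A": {"accuracy": 9, "consistency": 8, "scope": 5,
                 "simplicity": 4, "fruitfulness": 5},
    "theory_B": {"accuracy": 7, "consistency": 7, "scope": 8,
                 "simplicity": 9, "fruitfulness": 8},
}

def rank(weights):
    """Order the theories by weighted score under one scientist's weights."""
    totals = {name: sum(weights[v] * s[v] for v in VALUES)
              for name, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Scientist 1 prizes accuracy and consistency; scientist 2 prizes scope
# and fruitfulness. Same values, same scores, different weightings.
w1 = {"accuracy": 5, "consistency": 4, "scope": 1, "simplicity": 1, "fruitfulness": 1}
w2 = {"accuracy": 1, "consistency": 1, "scope": 4, "simplicity": 2, "fruitfulness": 4}

print(rank(w1))  # scientist 1 ranks theory_A first
print(rank(w2))  # scientist 2 ranks theory_B first
```

The disagreement is rational on both sides — neither scientist violates the shared values — which is exactly the space Kuhn says social and personal factors can occupy.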
11.2.2 Epistemic vs. Non-Epistemic Values
Philosophers distinguish between epistemic values (认识论价值) — values related to truth-seeking (accuracy, simplicity, explanatory power) — and non-epistemic values (非认识论价值) — moral, political, social, or economic values. The question is whether non-epistemic values have a legitimate role in science.
11.2.3 Inductive Risk
The argument from inductive risk (归纳风险论证), introduced by Richard Rudner in the 1950s and revived by Heather Douglas, holds that scientists must make value judgments in deciding how much evidence is enough to accept or reject a hypothesis. The consequences of error — accepting a false hypothesis or rejecting a true one — depend on the context, and assessing those consequences requires moral and social judgment.
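The structure of the argument can be sketched in simple decision-theoretic terms. The function and cost parameters below are our illustration, not a formalism from Rudner or Douglas: let p be the probability of hypothesis H on the evidence, cost_fp the cost of wrongly accepting H, and cost_fn the cost of wrongly rejecting it.

```python
# Minimal decision-theoretic sketch of inductive risk (our notation,
# not Rudner's or Douglas's own formalism).

def accept(p, cost_fp, cost_fn):
    """Accept H iff accepting has lower expected loss than rejecting.

    Expected loss of accepting:  (1 - p) * cost_fp  (loss only if H is false)
    Expected loss of rejecting:  p * cost_fn        (loss only if H is true)
    Accepting wins exactly when  p > cost_fp / (cost_fp + cost_fn).
    """
    return (1 - p) * cost_fp < p * cost_fn

# Same evidence (p = 0.9), different stakes:
# approving a drug, where a false positive badly harms patients...
print(accept(0.9, cost_fp=100, cost_fn=1))  # False: demand more evidence
# ...versus a low-stakes claim with symmetric error costs.
print(accept(0.9, cost_fp=1, cost_fn=1))    # True
```

The evidential threshold p > cost_fp / (cost_fp + cost_fn) moves with the error costs, and since weighing those costs is a moral and social matter, how much evidence counts as "enough" is itself a value judgment — which is exactly Rudner's point.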
11.3 The Social Structure of Science
11.3.1 Merton’s Norms
Sociologist Robert K. Merton identified four norms (sometimes called the CUDOS norms) that he argued characterize the institution of science:
- Communalism (公有主义): Scientific knowledge is shared, not privately owned.
- Universalism (普遍主义): Claims are evaluated on their merits, regardless of the identity of the scientist.
- Disinterestedness (无私利性): Scientists are motivated by the pursuit of knowledge, not personal gain.
- Organized skepticism (有组织的怀疑主义): All claims are subjected to critical scrutiny.
11.3.2 Challenges to Merton’s Norms
The sociology of scientific knowledge (SSK) and science and technology studies (STS) have questioned whether these norms accurately describe scientific practice. In reality, priority disputes, secrecy (especially in industry-funded research), and bias (against women, minorities, and researchers from less prestigious institutions) are pervasive.
11.4 Science and Public Policy
11.4.1 The Linear Model and Its Critique
The linear model (线性模型) of science-society interaction holds that basic research leads to applied research, which leads to technology, which leads to social benefit. This model has been criticized for oversimplifying the complex feedback loops between science, technology, and society.
11.4.2 Science Advising and Democratic Governance
How should scientific expertise be integrated into democratic decision-making? Philip Kitcher has argued for well-ordered science (良序科学): a framework in which research priorities are set through democratic deliberation, informed by scientific expertise but responsive to public values. This challenges the traditional view that scientists should determine their own research agendas without outside input.
11.5 Trust and the Authority of Science
In an era of climate change denial, vaccine hesitancy, and the politicization of public health, questions about the epistemic authority (认识论权威) of science have become urgent. Philosophers ask: what grounds the authority of scientific claims? How should non-experts evaluate competing scientific claims? What role do trust, credibility, and social institutions play in the dissemination of scientific knowledge?
11.6 Conclusion: The Ongoing Project
The philosophy of science is a living, evolving enterprise. The questions raised in this course — about demarcation, confirmation, explanation, realism, and the role of values — remain open and actively debated. Studying them equips us not only to understand science better but to participate more thoughtfully in the social, political, and ethical decisions that science informs.