PHIL 358b: Scientific Realism & Anti-Realism
Doreen Fraser
Estimated study time: 1 hr 2 min
Sources and References
Primary texts
- van Fraassen, Bas C. The Scientific Image. Oxford: Clarendon Press, 1980.
- Psillos, Stathis. Scientific Realism: How Science Tracks Truth. London: Routledge, 1999.
- Laudan, Larry. “A Confutation of Convergent Realism.” Philosophy of Science 48, no. 1 (1981): 19–49.
- Musgrave, Alan. “The Ultimate Argument for Scientific Realism.” In Relativism and Realism in Science, edited by R. Nola. Dordrecht: Kluwer, 1988.
- Worrall, John. “Structural Realism: The Best of Both Worlds?” Dialectica 43, no. 1–2 (1989): 99–124.
- Ladyman, James. “What Is Structural Realism?” Studies in History and Philosophy of Science 29, no. 3 (1998): 409–424.
- Hacking, Ian. Representing and Intervening. Cambridge: Cambridge University Press, 1983.
- Cartwright, Nancy. How the Laws of Physics Lie. Oxford: Clarendon Press, 1983.
- Fine, Arthur. “The Natural Ontological Attitude.” In Scientific Realism, edited by J. Leplin. Berkeley: University of California Press, 1984.
- Stanford, P. Kyle. Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. Oxford: Oxford University Press, 2006.
- Chang, Hasok. Is Water H₂O? Evidence, Realism and Pluralism. Dordrecht: Springer, 2012.
- Duhem, Pierre. The Aim and Structure of Physical Theory. Translated by P. P. Wiener. Princeton: Princeton University Press, 1954 [1906].
- Lipton, Peter. Inference to the Best Explanation. 2nd ed. London: Routledge, 2004.
- Okasha, Samir. Philosophy of Science: A Very Short Introduction. Oxford: Oxford University Press, 2002.
- Ladyman, James, and Don Ross. Every Thing Must Go: Metaphysics Naturalised. Oxford: Oxford University Press, 2007.
Online resources
- Chakravartty, Anjan. “Scientific Realism.” Stanford Encyclopedia of Philosophy (2017). https://plato.stanford.edu/entries/scientific-realism/
- Ladyman, James. “Structural Realism.” Stanford Encyclopedia of Philosophy (2020). https://plato.stanford.edu/entries/structural-realism/
- Monton, Bradley, and Chad Mohler. “Constructive Empiricism.” Stanford Encyclopedia of Philosophy (2022). https://plato.stanford.edu/entries/constructive-empiricism/
Chapter 1: The Realism Debate — What Is at Stake?
1.1 Framing the Question
Philosophy of science returns again and again to a deceptively simple question: when a scientific theory is empirically successful, what does that success tell us about the world? Does it tell us that the world is more or less the way the theory says it is — electrons, quarks, fields, and all — or does it tell us only that the theory is a reliable instrument for organising experience and generating predictions?
This question defines the contemporary scientific realism debate. It is not merely a debate about semantics, nor is it simply a rehash of older debates between idealism and materialism. It is a live dispute about the epistemic and metaphysical significance of scientific practice, and it has implications for how we understand scientific progress, theory change, and the relationship between evidence and belief.
The debate is typically framed around three interconnected dimensions:
Metaphysical dimension: There is a mind-independent world that science investigates. Scientific theories purport to describe features of that world, including features that are not directly accessible to perception.
Semantic dimension: Scientific theories are genuine truth-apt claims about that world, including claims about unobservable entities. The language of science is not merely instrumental shorthand; theoretical terms genuinely refer to things in the world. The sentence "electrons have negative charge" is true or false in the same way as "the cat is on the mat."
Epistemic dimension: Our best scientific theories are approximately true, and mature science gives us reason to believe this. The epistemic dimension is the most contested: even granting that theoretical claims are truth-apt and that the world exists mind-independently, one might deny that we are in any position to know that our theories are close to the truth about the unobservable world.
Anti-realist positions reject one or more of these dimensions. A constructive empiricist like Bas van Fraassen accepts the metaphysical dimension — the world exists independently of our theorising — but disputes the epistemic claim, arguing that we are only warranted in believing that our best theories are empirically adequate — that is, that they correctly describe observable phenomena — without committing ourselves to claims about the unobservable realm. An instrumentalist might reject the semantic dimension as well, treating theoretical vocabulary as a calculation device rather than genuine description. An idealist might reject the metaphysical dimension entirely.
1.2 Why the Debate Matters
Students approaching this debate for the first time sometimes wonder why it matters. If a theory works, why should it concern us whether the entities it posits “really exist”? Several considerations explain the significance.
First, the debate bears directly on scientific progress. If we adopt the realist picture, then scientific revolutions — such as the replacement of Newtonian mechanics by relativistic mechanics — represent progressive approximations to truth. But if we adopt the anti-realist picture, successive theories need not be seen as converging on anything beyond better empirical coverage. How we understand scientific history depends on our position in this debate.
Second, the debate intersects with questions about scientific explanation. Realists typically hold that genuine explanation requires appeal to real causal mechanisms. Anti-realists may argue that explanation is merely a form of organisation of phenomena, or that explanatory virtues are pragmatic rather than epistemic guides to truth.
Third, the debate touches on the demarcation of science itself. Disputes about whether string theory or multiverse cosmology constitute genuine science often implicitly appeal to realist or anti-realist commitments.
Fourth, the debate has implications for the rationality of intertheoretic reasoning. If a scientist is anti-realist about unobservables, it becomes difficult to explain why she should prefer a theory that posits fewer kinds of fundamental entities, or why theoretical unification should be a scientific goal. The realist has a ready explanation: unification suggests we are carving nature at its joints.
1.3 Mapping the Positions — A Taxonomy
The debate is not a binary choice between naive realism and full instrumentalism. There is a rich landscape of intermediate positions, which this course will explore in detail.
| Position | Metaphysical | Semantic | Epistemic | Key Proponent |
|---|---|---|---|---|
| Scientific Realism | Mind-independent world | Theories literally true/false | Mature theories approximately true | Psillos, Boyd |
| Constructive Empiricism | Mind-independent world | Theories literally true/false | Belief warranted only for observable content | van Fraassen |
| Instrumentalism | World exists | Theories are tools, not truth-apt | Predictive success is the only measure | Mach, Osiander |
| Entity Realism | Mind-independent world | Entities literally exist | Belief in manipulable entities; agnosticism on theories | Hacking, Cartwright |
| Epistemic Structural Realism | Mind-independent world | We know structure, not nature | Structure is approximately known | Worrall |
| Ontic Structural Realism | Structure is all there is | Objects are nodes in structure | Structure is the complete ontology | Ladyman, Ross |
| Natural Ontological Attitude | Accept science at face value | Core acceptance without philosophical gloss | Neither realist nor anti-realist additions | Fine |
| Perspectival Realism | World exists; knowledge is perspectival | Knowledge is framework-relative but genuine | Multiple perspectives yield genuine but partial knowledge | Chang |
Each row represents a distinct philosophical stance, with different commitments along the metaphysical, semantic, and epistemic axes. One important observation is that positions can diverge sharply along the epistemic axis while agreeing on the metaphysical and semantic axes — the most contested battleground is therefore epistemology.
1.4 A Note on the Semantic Dimension
The semantic dimension deserves more attention than it sometimes receives in introductory treatments. The debate between scientific realism and instrumentalism is partly a debate about whether theoretical language has literal reference. The logical empiricists of the Vienna Circle drew a sharp distinction between the theoretical vocabulary and the observational vocabulary of science. Theoretical terms like “electron” or “gene” were, on one view, mere logical constructs defined by their observational consequences — they need not refer to anything beyond those consequences.
The demise of the strict observational/theoretical distinction — pressed most forcefully by Quine and Sellars — removed one motivation for semantic instrumentalism. If we cannot cleanly separate observational from theoretical vocabulary, the instrumentalist’s privileging of observational language becomes difficult to maintain. This shift helps explain why contemporary anti-realism typically takes the form of van Fraassen’s constructive empiricism (which grants literal reference but restricts epistemic warrant) rather than classic instrumentalism (which denies literal reference altogether).
Chapter 2: Scientific Realism — Core Thesis and the No-Miracles Argument
2.1 The Standard Formulation of Scientific Realism
The most influential formulation of scientific realism in the contemporary debate is associated with Richard Boyd and Stathis Psillos, among others. Psillos in Scientific Realism: How Science Tracks Truth (1999) offers a careful articulation: scientific realism is the view that the world has a definite structure that is independent of our minds, that mature scientific theories describe this structure with approximate truth, and that we are warranted in believing this.
(1) The world has a fixed nature independent of our cognitive activity.
(2) Mature scientific theories are approximately true of this world.
(3) The theoretical virtues of a theory (empirical success, coherence, simplicity) are indicators of approximate truth, not merely pragmatic values.
The phrase “mature theories” is important. Realists do not claim that every scientific theory ever proposed is approximately true — many have been false. They claim rather that the theories that have survived sustained empirical testing and achieved predictive and explanatory success in their domain are likely to be tracking something real.
2.2 The No-Miracles Argument
The central argument for scientific realism is the No-Miracles Argument, most famously articulated by Hilary Putnam and later sharpened by Alan Musgrave. Putnam’s formulation runs: realism is the only philosophy of science that does not make the success of science a miracle.
The argument can be reconstructed as follows:
P1. Our best scientific theories are strikingly empirically successful — they make novel predictions that are confirmed, generate technological applications, and cohere with one another across domains.
P2. The best explanation of this empirical success is that these theories are approximately true — they accurately describe the world, including its unobservable aspects.
P3. If a theory were not approximately true, its predictive success would be a "miracle" — a massive, unexplained coincidence.
C. Therefore, we have good reason to believe that our best scientific theories are approximately true.
2.3 Musgrave’s Formulation
Alan Musgrave’s version of the No-Miracles Argument in “The Ultimate Argument for Scientific Realism” (1988) gives the argument a distinctive sharpening. Where Putnam’s formulation can seem like a general appeal to theoretical plausibility, Musgrave reconstructs the argument as a specific instance of Inference to the Best Explanation applied at the meta-level. The explanandum is not individual successful predictions but the overall pattern of scientific success — the systematic character of novel prediction, cross-domain coherence, and technological application.
Musgrave emphasises that the argument gains its force specifically from novel predictions. When a theory predicts phenomena that were not used in its construction — when it yields surprises that are then confirmed — the explanatory burden on the anti-realist grows heavy. Mere accommodation of known data might be explained instrumentally: a clever curve-fit might achieve this without any claim to truth. But when a theory constructed to explain phenomenon P subsequently predicts phenomenon Q — which was unknown and unexpected — the coincidence requires explanation. The realist explanation is that the theory is approximately true of the underlying reality that causes both P and Q. Musgrave argues there is no comparably good anti-realist explanation: the constructive empiricist can note the success but, by her own lights, has no explanation for why the theory should yield novel successes rather than merely fitting the data it was designed to fit.
2.4 Does the NMA Beg the Question?
A significant line of criticism holds that the No-Miracles Argument is question-begging. The charge, pressed most clearly by van Fraassen and elaborated by others, runs as follows: the NMA uses IBE to argue for the truth of realism. But the realist’s ability to use IBE is itself contested by the anti-realist. The anti-realist holds that IBE licences belief only in the empirical adequacy of the best explanation, not in its truth. Accordingly, the NMA cannot be used to establish the stronger realist conclusion without already presupposing the realist interpretation of IBE.
The NMA is itself an application of IBE at the meta-level: we infer realism from the evidence of scientific success, on the grounds that realism best explains that success. But the anti-realist disputes whether IBE delivers truth rather than empirical adequacy. If the anti-realist is right, the NMA establishes only that realism is empirically adequate as a meta-theory — which is precisely the concession the realist wants the anti-realist to make at the object level. The NMA thus presupposes the very point in dispute.
Realists respond in several ways. Some bite the bullet and agree that all justifications of inference rules are to some degree circular — this is simply the human epistemic condition. Psillos appeals to the fact that IBE has a track record of leading to true beliefs that can be independently verified within the observable domain; this track record gives pragmatic, if not purely logical, justification for extending IBE to the unobservable domain.
2.5 Van Fraassen’s Response to the NMA
Van Fraassen’s response to the NMA has two components. First, he offers the selectionist challenge: theories that are empirically successful are successful because they have been selected by a demanding empirical process. Unsuccessful theories have been eliminated. The success of surviving theories is therefore no more surprising than the survival of organisms well-adapted to their environment — and no more requires explanation in terms of correspondence to underlying reality. Success is a selection effect, not evidence of truth.
Second, van Fraassen argues that the anti-realist has her own explanation of success: the theory is empirically adequate. An empirically adequate theory is one that correctly describes the observable phenomena; and a theory that correctly describes the phenomena will predictively succeed. No appeal to unobservable truth is required. The NMA assumes that empirical adequacy is somehow inexplicable without truth, but van Fraassen denies this.
Realists counter that the selectionist response merely pushes the question back: why has the selection process repeatedly converged on theories that are not only empirically adequate but also make novel predictions extending beyond the domain for which they were selected? The convergent success across independent domains seems harder to explain without appeal to approximate truth.
Chapter 3: Inference to the Best Explanation
3.1 The Structure and Varieties of IBE
Philosophers distinguish several varieties of IBE. In its weakest form, IBE is merely a heuristic — it tells us which hypotheses are worth investigating further, not which to believe. In its stronger form, IBE licences genuine doxastic commitment to the best-explaining hypothesis. The realist needs the strong form.
The nature of IBE as an inference rule raises immediate questions. Unlike deductive inference, IBE is ampliative: the conclusion goes beyond what is strictly implied by the premises. Unlike enumerative induction, IBE does not merely project observed regularities; it posits underlying causes or structures to explain them. This ampliative character is both its strength — it can generate genuine knowledge of unobservables — and its vulnerability to anti-realist critique.
3.2 Lipton’s Account of Loveliness
Peter Lipton’s account in Inference to the Best Explanation (2004) is the most detailed philosophical analysis of IBE. Lipton distinguishes between two senses of “best explanation” that are often conflated:
Likeliest explanation: The hypothesis that is, all things considered, most probable given the evidence. This would be the epistemically correct choice if we had reliable prior probabilities and could apply Bayes's theorem.
Loveliest explanation: The hypothesis that would, if true, provide the greatest understanding — the most illuminating, coherent, and unifying account of the phenomena.
Lipton's thesis: We infer the loveliest explanation, not because we conflate loveliness with likelihood, but because loveliness is our best guide to likelihood in domains where prior probabilities are unavailable or unreliable. A lovely explanation is one that is deep, general, and precise; these virtues tend to correlate with truth.
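Lipton’s contrast between likeliest and loveliest can be displayed in rough Bayesian terms (a schematic gloss, not Lipton’s own formalism):

```latex
% The "likeliest" explanation maximises the posterior probability of
% hypothesis H given evidence E, computed via Bayes's theorem:
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
```

Where the prior P(H) is unavailable or unreliable, Lipton’s claim is that the marks of loveliness (depth, generality, precision) serve as a tractable surrogate for the missing prior.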
Lipton identifies several criteria by which the loveliness of an explanation is assessed: depth (the explanation reveals underlying mechanisms rather than merely redescribing the phenomenon), breadth (the explanation unifies previously disparate phenomena), precision (the explanation yields specific, testable predictions), and non-ad-hocness (the explanation does not invoke auxiliary assumptions tailored specifically to accommodate the evidence). These criteria map closely onto the theoretical virtues invoked by scientific realists.
3.3 Van Fraassen’s Circularity Objection
Van Fraassen mounts a sophisticated objection to IBE. In The Scientific Image, he argues that IBE cannot be a fundamental rule of inference because it is not truth-preserving in the way that deductive inference is, and it is not well-calibrated in the way that Bayesian conditionalization is. More pointedly, he argues that IBE, applied reflexively, would force us to conclude that the world is as our theories say it is in domains we have no access to — a conclusion he finds epistemically overreaching.
The reliability of IBE as a route to truth is itself an empirical claim. To justify IBE by saying "IBE is our best explanation of our success in forming true beliefs" is viciously circular: it uses IBE to justify IBE. Without independent justification, IBE cannot support the strong realist conclusion that our theories are approximately true of the unobservable world. At most, IBE can justify beliefs about entities and processes within the observable domain — its track record there provides some grounding. But extending IBE to unobservables requires an additional step that IBE itself cannot authorise without circularity.
Van Fraassen also raises what might be called the best-of-a-bad-lot objection: IBE advises us to infer the best available explanation, but it gives us no guarantee that the best available explanation is a good explanation in any absolute sense. Among a set of false theories, one may be the “best” in relative terms. The history of science, on the anti-realist reading, suggests that scientists have repeatedly inferred the best of a bad lot, only to find that the true theory was not among the options they considered. This objection connects to Stanford’s point about unconceived alternatives (discussed in Chapter 5).
3.4 Okasha’s Treatment
Samir Okasha, in Philosophy of Science: A Very Short Introduction (2002) and related work, provides a useful framework for situating the IBE debate. Okasha emphasises that the controversy over IBE is not primarily about whether scientists use IBE — they clearly do — but about whether IBE is a reliable route to truth, and if so, why.
Okasha notes that the realist and anti-realist can largely agree on the role of IBE in guiding theory choice within science while disagreeing about whether that guidance is truth-tracking. The anti-realist can grant that IBE selects for theories that are simpler, more unified, and more predictively powerful without granting that these properties are truth-conducive at the level of unobservable posits. Simplicity and unification may be pragmatic virtues — features that make theories easier to work with — without being indicators of correspondence to unobservable reality.
The realist response is to argue that the pragmatic value of theoretical virtues is itself best explained by their truth-conduciveness: the reason simpler theories are easier to work with is that the world has a simpler structure than more complicated theories would suggest. This response again invokes IBE at the meta-level, which raises the circularity concern once more.
3.5 Responses from the Realist
Psillos and others respond that van Fraassen’s critique proves too much. If the circularity objection disqualifies IBE, it equally disqualifies all forms of ampliative inference, including the very inductive methods that underpin empirical adequacy judgments. The anti-realist who trusts inductive reasoning for observable predictions but distrusts IBE for unobservable claims owes us an explanation of where this asymmetry is grounded.
Furthermore, realists argue that IBE is not an isolated logical principle but is embedded in the overall practice of scientific reasoning. Its track record — novel predictions confirmed, technologies built on theoretical posits — provides the pragmatic justification it requires. This is not vicious circularity but the normal epistemic situation of any cognitive agent who must use her inference mechanisms to evaluate those same mechanisms.
Chapter 4: Constructive Empiricism (van Fraassen)
4.1 The Central Thesis
Bas van Fraassen’s The Scientific Image (1980) is the most important anti-realist text in contemporary philosophy of science. Van Fraassen does not deny that the world exists or that science investigates it. His target is the epistemic ambition of realism — the claim that we are warranted in believing the theoretical, unobservable content of our best theories.
Science aims to give us theories that are empirically adequate. A theory is empirically adequate if what it says about observable things and events in the world is true — that is, if it "saves the phenomena." Acceptance of a theory involves belief only in its empirical adequacy, not in the truth of its theoretical claims about unobservables.
The central move in van Fraassen’s position is the distinction between acceptance and belief. A scientist who accepts a theory is committed to using it as a basis for explanation and prediction, and to working within its framework. But acceptance does not require belief in the truth of the theory’s claims about electrons, quarks, or fields. Acceptance is a pragmatic commitment; belief is an epistemic one. Van Fraassen holds that acceptance is all science requires and all that scientific rationality demands.
4.2 The Observable/Unobservable Distinction
The pivot of constructive empiricism is the distinction between the observable and the unobservable. Van Fraassen defines observability in terms of what can be directly perceived by the unaided human senses under favourable conditions.
Observable: the moons of Jupiter (detected through a telescope, but observable in van Fraassen’s sense, since a suitably placed observer, such as an astronaut flying close enough, could see them with the naked eye); the bending of light around the sun; colour changes of litmus paper; the track of a charged particle in a cloud chamber (the track itself is visible, even if the particle is not).
Unobservable: Electrons (too small to see even in principle with visible light); quarks (permanently confined); the big bang (no observer could have been present); the interior of a neutron star.
4.3 Where to Draw the Line — and Why It Matters
The observable/unobservable distinction has attracted fierce criticism. Grover Maxwell and others argue that observability is a matter of degree — we observe with the naked eye, with glasses, with microscopes, with electron microscopes, in an unbroken continuum. There is no principled cut-off point. Why should the line fall between the naked eye and the magnifying glass rather than between the magnifying glass and the optical microscope?
Van Fraassen’s response is subtle and has often been misread. He does not claim to offer a precise criterion for observability. He insists only that the line is not drawn by theory — observability is a feature of the relation between objects and human perceivers, not a feature that theories assign to their posits. What counts as observable is fixed by human perceptual capacities, which are contingent biological facts.
"The line between observable and unobservable is a vague one. But there is a line, however vague. And that is all we need. The claim that science aims to save the phenomena — observable phenomena — does not require a sharp line. It requires only that there is a difference between directly perceiving something and merely inferring its existence from theory. That difference is real even if its boundaries are fuzzy."
A deeper challenge concerns the theory-ladenness of observation. If all observation is theoretically mediated — if what we see depends on our conceptual frameworks — then van Fraassen’s appeal to a theory-independent observational base seems compromised. His response is that theory-ladenness of observation does not imply that observational reports have no independent epistemic standing. We can acknowledge that our perceptual reports are influenced by background theory while still maintaining that perceptual experience exerts a constraint on theory that is not itself purely theoretical.
A further and underappreciated problem is that the observable/unobservable distinction, if drawn by human perceptual capacities, seems epistemically arbitrary. Had we evolved with different perceptual systems — sensitive to X-rays, say — different things would count as observable. This species-relative character of observability seems to make van Fraassen’s epistemology parochial. He accepts this consequence: the epistemic distinction is genuinely tied to the kind of creatures we are. What we can directly perceive is what grounds our most secure knowledge; what we can only theoretically infer is epistemically second-order.
4.4 Empirical Adequacy and the Aim of Science
Van Fraassen’s notion of empirical adequacy is more technical than it might appear. A theory is empirically adequate if and only if it has at least one model such that all actual observable phenomena are “isomorphically embeddable” into that model. This model-theoretic formulation matters: it means that empirical adequacy is not just about predicting what we have observed but about correctly representing all phenomena within the structure of the theory’s models.
Empirical adequacy is a stronger criterion than predictive success. A theory that happens to predict all observed data points might do so by overfitting or by coincidence. Empirical adequacy, for van Fraassen, requires that the observable substructure of the world be embeddable into the theory's model — a structural condition with real content. This also means that empirical adequacy is not directly verifiable, since it concerns all actual phenomena, including unobserved ones.
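The model-theoretic condition can be stated schematically (our notation for exposition, not a quotation from van Fraassen):

```latex
% T is empirically adequate iff some model of T has empirical
% substructures isomorphic to all actual observable phenomena:
\mathrm{EA}(T) \;\iff\; \exists M \in \mathrm{Mod}(T)\;
  \forall A \in \mathcal{A}:\ A \text{ is isomorphic to an empirical substructure of } M
```

Here $\mathcal{A}$ ranges over the appearances, all actual observable phenomena, past, present, and future; this is why empirical adequacy outruns any finite body of observations and is not directly verifiable.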
4.5 The Voluntarist Strand
A less-discussed but important aspect of van Fraassen’s constructive empiricism is its voluntarist epistemology. Van Fraassen is a self-described voluntarist about epistemic stances: we are free, within the constraints of consistency and coherence, to adopt whatever doxastic and practical attitudes we choose toward theories. The decision to accept rather than merely use a theory, and the decision to believe or merely accept, are exercises of rational agency rather than forced conclusions of epistemic norms.
This voluntarism connects to van Fraassen’s broader opposition to epistemic rules that purport to mandate belief. He is skeptical of Bayesian conditionalization as a universal rational requirement, and equally skeptical of IBE as a mandatory inference rule. What rationality requires is coherence and non-dogmatism — it does not require a particular stance toward the unobservable content of theories.
4.6 Internal vs. External Questions
Van Fraassen’s distinction between internal questions and external questions provides another dimension of his position. Internal questions are questions asked from within a theoretical framework: “Does the theory posit quarks? Does the theory predict this experimental outcome?” These have determinate answers settled by the theory. External questions are questions asked from outside: “Do quarks really exist? Is the theory true?” These are the philosophical questions. Van Fraassen’s position is that the external question of whether the entities posited by a theory really exist cannot be answered by the theory itself — only by a reflective philosophical stance toward the theory.
This distinction echoes Carnap’s distinction between internal and external questions about the existence of abstract objects. Van Fraassen is not a Carnapian — he does not hold that external questions are meaningless — but he holds that they cannot be answered by further scientific theorising. The decision to treat a theory as approximately true of unobservables is a philosophical addition, not a scientific one.
4.7 Objections to Constructive Empiricism
Several objections have been pressed against van Fraassen’s position.
The symmetry objection: If we are warranted in believing that a theory is empirically adequate — which is itself a claim about unobserved (though in principle observable) phenomena — then why not believe the stronger realist claim? The constructive empiricist believes more than just the observed data; she believes the theory is adequate for all observables. This seems to employ the very IBE the anti-realist rejects.
The pragmatics objection: Scientists do not merely use theories; they assert that electrons exist. The semantic content of scientific discourse cannot be sanitised into acceptance-talk without distorting scientific practice.
The van Fraassen–Fine dialogue: Arthur Fine argues that van Fraassen’s constructive empiricism is itself a metaphysical stance, and one no less metaphysically loaded than scientific realism. The claim that science aims at empirical adequacy, rather than truth, is a normative claim about the goals of science that requires independent justification.
Chapter 5: The Pessimistic Meta-Induction (Laudan)
5.1 Laudan’s Challenge
Larry Laudan’s 1981 paper “A Confutation of Convergent Realism” is perhaps the single most cited piece in the realism debate. Laudan’s strategy is to turn the realist’s own methodology against realism. If IBE justifies believing current successful theories are approximately true, then the same form of reasoning applied to the history of science should make us hesitant about that conclusion.
Many past scientific theories were highly successful by the standards of their time — they generated novel predictions, cohered with other accepted theories, and guided successful technological applications. Yet these theories have subsequently been found to be radically false: they posited entities that do not exist (phlogiston, caloric, the luminiferous ether, vital force). Therefore, by induction, we should expect that our current successful theories are also likely to be radically false in their theoretical posits.
Laudan supplies a lengthy historical catalogue including the crystalline spheres of medieval astronomy, the humoral theory of medicine, the vibratory theory of heat, the phlogiston theory of combustion, the caloric theory, the ether theories of electromagnetic propagation, and many others. Each was empirically successful by the standards of its era; each proved false in its central theoretical posits.
5.2 The Structure of the Argument
The PMI can be made logically precise. Let S(T) mean “theory T is empirically successful” and AT(T) mean “the theoretical posits of T approximately track reality.” The realist’s No-Miracles Argument asserts: S(T) is strong evidence for AT(T). The PMI responds: the historical base rate of AT(T) given S(T) is very low, so even granting IBE, we should not be confident that AT(T) holds for current theories.
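The base-rate point can be made vivid with a toy Bayesian calculation. The sketch below is illustrative only: the prior and the likelihoods are invented numbers, not estimates defended by Laudan or his critics. Even with likelihoods chosen to favour the realist, a low historical base rate keeps the posterior confidence in AT(T) modest.

```python
# Toy Bayesian reading of the PMI, with entirely hypothetical numbers.
# S(T): theory T is empirically successful; AT(T): T's posits approximately track reality.

def posterior_at_given_s(p_at, p_s_given_at, p_s_given_not_at):
    """P(AT | S) by Bayes' rule."""
    p_s = p_s_given_at * p_at + p_s_given_not_at * (1 - p_at)
    return p_s_given_at * p_at / p_s

# Realist-friendly likelihoods: success is much likelier if the posits track reality.
p_s_given_at, p_s_given_not_at = 0.9, 0.2

# A low historical base rate (Laudan's lesson) versus a generous one.
low_prior = posterior_at_given_s(0.05, p_s_given_at, p_s_given_not_at)
high_prior = posterior_at_given_s(0.5, p_s_given_at, p_s_given_not_at)

print(f"P(AT|S) with low historical prior (0.05): {low_prior:.2f}")
print(f"P(AT|S) with generous prior (0.50):      {high_prior:.2f}")
```

The point of the sketch is structural, not numerical: the No-Miracles likelihoods do not by themselves settle the posterior; the historical base rate does real work.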
5.3 Historical Case Studies
Laudan’s argument is powerfully illustrated by a set of detailed historical cases. These are not cherry-picked failures but theories that were, in their time, the best-confirmed and most powerful theories available:
Phlogiston theory (18th century): Successfully explained combustion, calcination, and respiration within its domain, and guided the isolation of several gases. Stahl’s phlogiston was a central working posit of the chemistry of its time, not a peripheral speculation. Yet phlogiston does not exist; Lavoisier’s oxygen theory replaced it, reversing even the direction of the causal story (combustion is combination with oxygen, not release of phlogiston).
Caloric theory (early 19th century): Successfully predicted heat flow, gave the correct form of Carnot’s efficiency formula, and accounted for many thermal phenomena with high precision. Carnot’s derivation of the maximum efficiency of heat engines treated caloric as an indestructible fluid. Yet caloric — a weightless, conserved fluid — does not exist; the kinetic theory replaced it, recasting heat as molecular motion.
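For reference, the efficiency result that survived the demise of caloric is, in its modern statement, η = 1 − T_cold/T_hot for reservoir temperatures in kelvin. A minimal sketch (the reservoir temperatures below are arbitrary illustrative values):

```python
# Carnot's maximum efficiency in its modern form: eta = 1 - T_cold / T_hot,
# with temperatures in kelvin. The form of this result, first derived within
# the (false) caloric framework, survives in modern thermodynamics.

def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of a heat engine between two reservoirs."""
    return 1.0 - t_cold / t_hot

print(f"{carnot_efficiency(500.0, 300.0):.2f}")  # -> 0.40
```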
Luminiferous ether (19th century): Successfully organised all of nineteenth-century electromagnetism and wave optics. The ether was not a peripheral assumption but the central ontological commitment of the electromagnetic programme. Yet special relativity eliminates the ether entirely — not as an approximation, but as a category error.
Vital force (vitalism, 18th–19th centuries): The posit that living matter is governed by a special non-physical vital force was not merely speculative but was used to generate predictions about the behaviour of organic compounds and the limits of chemical synthesis. Yet vitalism is false; organic chemistry proceeds without vital force. The synthesis of urea by Wöhler (1828) was an early sign; molecular biology eliminated the residue.
These entities were not reduced or refined — they were abandoned. The phlogiston theorist was not approximately right in the way the Newtonian physicist is approximately right. There is no phlogiston; there is only oxygen and the process of oxidation. This discontinuity is the core of Laudan’s challenge.
5.4 The Selectivist Response — Psillos’s Divide et Impera
Realists have developed several sophisticated responses to Laudan. The most influential is the selectivist response or divide et impera (“divide and rule”) strategy, associated especially with Psillos.
Not all parts of a theory are equally responsible for its empirical success. A theory's success is generated by specific theoretical components — its "working posits" — while other components are "idle posits" that play no genuine explanatory role. The realist should commit to the working posits and withhold commitment from the idle posits. When we apply this selectivity, Laudan's historical examples are defused: the abandoned entities (ether, phlogiston, caloric) were precisely the idle posits, not the working posits responsible for the theory's successes.
Psillos argues that in the case of the caloric theory, the mathematical structure of heat flow — the equations of thermal conduction, the concept of heat capacity, the form of Carnot’s efficiency result — was preserved in thermodynamics. The idle posit was caloric as a conserved material fluid; the working posit was the mathematical structure of heat exchange, which survives. Similarly, in the case of the ether, the Maxwell equations that described electromagnetic wave propagation were preserved in special relativity; the ether as a material medium was an idle posit.
Critics press the following objection: how do we identify working posits independently of hindsight? If we determine which posits were “working” by checking which ones were retained, the divide et impera strategy risks becoming circular — it predicts retention of what we know was retained. Psillos’s response is that the distinction between working and idle posits can be drawn prospectively by examining which theoretical claims are actually used in the derivation of the theory’s successful predictions. This is a logical/structural analysis, not a historical one.
5.5 The Convergence Response
A second realist response is the convergence argument: the history of science, properly understood, shows increasing accuracy and convergence on a stable set of theoretical claims, rather than the radical discontinuity Laudan suggests. The mathematical structure preserved across the Fresnel-to-Maxwell transition, and the Newtonian-to-relativistic-to-quantum-mechanical transition, shows that something is being accumulated. This point feeds directly into structural realism (Chapter 8).
5.6 Kyle Stanford’s Unconceived Alternatives
P. Kyle Stanford extends the spirit of Laudan’s argument with a distinct challenge. In Exceeding Our Grasp (2006), Stanford argues for the Problem of Unconceived Alternatives (UCA). The fact that scientists could not conceive of the alternatives to their theories that later proved correct is not an accidental historical feature but a systematic pattern.
At each stage in the history of science, scientists have been unable to conceive of the alternative theories that would later displace their best current theories. This is a recurrent pattern, not mere anecdote. Darwin could not conceive of the genetic mechanisms that would explain inheritance; the physicists of 1890 could not conceive of the quantum-mechanical alternative to classical electrodynamics. By induction, we should conclude that we are now in the same situation: there exist alternatives to our current theories that we cannot now conceive. This undermines the realist's confidence that current theories are approximately true, not merely empirically adequate among the alternatives we happen to have considered.
Stanford’s argument is importantly distinct from Laudan’s. Laudan focuses on the falsity of the theoretical posits of past theories; his evidence is the content of abandoned theories. Stanford focuses on the systematic failure of imagination — scientists’ inability to articulate alternatives — as evidence of a deeper cognitive limitation. Even if current theories have true theoretical posits, we cannot know this, because our confidence is based on the absence of known alternatives; and the history of science shows that absence of known alternatives is a poor guide to the absence of actual alternatives.
Stanford’s argument is also distinct from the standard underdetermination thesis. Underdetermination points to the logical availability of alternatives compatible with the evidence. Stanford’s UCA points to the psychological and cognitive unavailability of alternatives that were not logically ruled out but were simply beyond the conceptual horizon of scientists at a given time. This makes the UCA harder for the realist to dismiss as merely a logical possibility.
Realist responses to the UCA: Psillos and others argue that Stanford’s argument proves too much — it would undermine not just scientific realism but any form of inductive confidence, since the logic applies equally to observable-level generalisations. Stanford replies that the observable-level case is disanalogous: the history of theory change is a systematic pattern of unconceived alternatives at the theoretical level that has no comparable parallel at the observational level.
Chapter 6: Underdetermination of Theory by Evidence
6.1 The Duhem Problem
The underdetermination thesis has its roots in Pierre Duhem’s analysis of physical theory. Duhem observed in The Aim and Structure of Physical Theory (1906) that a physical hypothesis cannot be tested in isolation. Hypotheses face evidence only in conjunction with auxiliary assumptions: assumptions about the functioning of instruments, the reliability of background theories, the absence of interfering factors.
It is impossible to subject an isolated scientific hypothesis to empirical test. A physical hypothesis H is always tested in conjunction with a set of auxiliary hypotheses A1, A2, ..., An. When the conjunction H ∧ A1 ∧ ... ∧ An implies a prediction that is disconfirmed, logic tells us only that at least one element of the conjunction is false — it does not tell us which. Therefore, any hypothesis can be saved from refutation by revising one or more of the auxiliary assumptions.
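Duhem’s logical point can be checked by brute force. The sketch below enumerates every truth assignment to a hypothesis H and two auxiliaries A1, A2 after a failed prediction: the refutation eliminates only the single assignment on which the whole conjunction is true, so H itself is never forced to be false.

```python
from itertools import product

# Duhem's point as a brute-force check: we observe not-P, and the theory says
# (H and A1 and A2) -> P. Logic alone rules out only the assignments that make
# all of H, A1, A2 true; every other assignment survives.

hypotheses = ["H", "A1", "A2"]
survivors = []
for values in product([True, False], repeat=3):
    assignment = dict(zip(hypotheses, values))
    if not all(values):  # the full conjunction is refuted; everything else stands
        survivors.append(assignment)

print(f"{len(survivors)} of 8 assignments survive the refutation")
print("H can still be true in", sum(a["H"] for a in survivors), "of them")
```

Seven of the eight assignments survive, and in three of them H remains true — which is just the formal shadow of the claim that any hypothesis can be saved by revising auxiliaries.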
Duhem’s own application of this insight was largely diagnostic: he used it to argue that crucial experiments — experiments designed to definitively refute one of two rival hypotheses — are impossible in physics. The negative result of a supposed crucial experiment can always be attributed to the failure of an auxiliary assumption rather than to the falsity of the hypothesis under test.
6.2 The Quine–Duhem Thesis
W. V. O. Quine extended Duhem’s point into a radical holism: our beliefs confront experience not individually but as a corporate body (the “web of belief”). Any statement can be held true come what may if we make compensating adjustments elsewhere in the web.
The Quine–Duhem thesis has two components relevant to underdetermination:
Contrastive underdetermination: For any body of evidence E, there are multiple incompatible theories T1, T2, … that are each empirically adequate with respect to E. Choosing among them cannot be settled by evidence alone. This is underdetermination at the level of theory choice: the evidence runs out before the choice is made.
Holist underdetermination: For any theory T and any evidence E that apparently refutes T, there exists a set of revisions to auxiliary hypotheses such that T remains consistent with E. This is underdetermination at the level of theory revision: no experimental result can force the abandonment of any particular hypothesis.
Quine’s version of holism is more radical than Duhem’s: Duhem confined his holism to physics and allowed that theoretical choice could eventually be settled by “good sense” and the standards of the scientific community. Quine extended holism to all of language and knowledge, including mathematics and logic, and was more pessimistic about the prospects for principled theory choice.
6.3 Empirically Equivalent Theories
A particularly important form of underdetermination involves empirically equivalent theories. Two theories are empirically equivalent if they make exactly the same predictions about all observable phenomena — past, present, and future.
Consider Newtonian mechanics with universal gravitation, together with the hypothesis that the centre of mass of the universe is at absolute rest. Now construct a variant in which the entire material universe moves with some constant absolute velocity. Because Newtonian dynamics depends only on relative positions and motions, the two theories predict exactly the same observable phenomena; they disagree only about the unobservable fact of absolute motion. The variant is empirically equivalent but ontologically different.
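This kind of equivalence can be illustrated numerically with the classic Newtonian case of worlds differing only by a uniform constant-velocity boost: all relative separations — the observable quantities — agree at every time. The velocities and initial positions below are made-up values chosen for the sketch.

```python
# Sketch: two Newtonian worlds differing only by a uniform absolute velocity.
# Relative separations (the observables) agree at every time; absolute
# positions (the unobservables) differ. All numerical values are illustrative.

v = 7.0  # absolute velocity of the boosted world

def positions(t, initial, velocities, boost=0.0):
    """Positions of free particles at time t, optionally boosted by a constant velocity."""
    return [x0 + (u + boost) * t for x0, u in zip(initial, velocities)]

initial = [0.0, 3.0, 10.0]
velocities = [1.0, -2.0, 0.5]

for t in (0.0, 1.5, 4.0):
    rest = positions(t, initial, velocities)
    moving = positions(t, initial, velocities, boost=v)
    seps_rest = [b - a for a, b in zip(rest, rest[1:])]
    seps_moving = [b - a for a, b in zip(moving, moving[1:])]
    assert seps_rest == seps_moving  # observables agree; absolute positions differ

print("relative separations identical in both worlds")
```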
More philosophically significant examples arise in quantum mechanics: the Bohm pilot-wave interpretation, the Copenhagen interpretation, the Everett many-worlds interpretation, and GRW collapse theories are all empirically equivalent (at least under current experimental conditions) but have radically different ontologies. The realist cannot appeal to empirical evidence alone to choose among them.
Van Fraassen exploits empirical equivalence to support constructive empiricism: since empirically equivalent theories can differ in their claims about unobservables while agreeing on observables, there is no empirical basis for preferring one unobservable story over another. The rational stance is to accept the observable content while remaining agnostic about the unobservable content.
6.4 Significance for the Realism Debate
Underdetermination challenges scientific realism in a direct way. If multiple incompatible theories are equally supported by all possible evidence, the realist cannot appeal to empirical success to choose among them. Whichever theory we happen to work with, the evidence underdetermines whether that theory is approximately true or whether one of its empirically equivalent rivals is.
The anti-realist uses underdetermination as a motivation for epistemic restraint: since the evidence runs out before determining the theoretical ontology, the rational response is to suspend judgment about unobservables and accept only what the evidence directly supports — the observable content of theories.
6.5 Realist Responses to Underdetermination
Realists typically dispute that genuine empirical equivalence is common in science, or that it has the anti-realist significance claimed. Several responses are available:
The reformulation response: Many apparent cases of empirically equivalent theories are merely different formulations of the same theory. Newtonian mechanics formulated using forces, or using Lagrangian methods, or using Hamiltonian methods, are not genuinely distinct theories — they are the same physical content expressed in different mathematical languages. Reformulations do not pose a genuine underdetermination problem.
The non-empirical virtues response: Even if two theories are empirically equivalent, they may differ in non-empirical theoretical virtues — simplicity, coherence, unifying power, mathematical elegance. Realists argue that these virtues are truth-conducive, not merely pragmatic, so they can rationally distinguish among empirically equivalent rivals.
The transient underdetermination response: What appears to be permanent empirical equivalence often dissolves when the range of possible evidence is extended. Quantum interpretations that are currently empirically equivalent may be distinguishable by future experiments (for instance, some versions of GRW spontaneous collapse predict slight deviations from standard quantum mechanics at the level of macroscopic superpositions). Transient underdetermination does not support a stable anti-realist conclusion.
The no-miracles response: Even granting underdetermination, the realist can argue that among the empirically equivalent rivals, the one that we have actually developed — on the basis of independent theoretical constraints, historical development, and practical success — is the one most likely to be approximately true. The underdetermination argument establishes a logical possibility, not a practical epistemic predicament.
Chapter 7: Entity Realism — Hacking and Cartwright
7.1 Hacking’s Experimental Argument
Ian Hacking’s Representing and Intervening (1983) offers a distinctive path through the realism debate. Hacking is skeptical of scientific theories as grand representational systems — he shares something of the anti-realist worry about theoretical ontology. But he is a firm realist about entities. His slogan captures the position: “If you can spray them, they’re real.”
The argument is grounded in experimental practice. When scientists use an entity as a tool to investigate other phenomena — when they manipulate it to produce effects — they demonstrate a kind of practical knowledge of its causal properties that is independent of any particular theoretical description of it.
We are entitled to believe in the real existence of those unobservable entities that we can reliably manipulate to produce effects and that we use as tools in experiments. Belief in entities is grounded not in inference to the best explanation but in the practical success of manipulation. Theoretical descriptions of these entities, however, may be idealized, incomplete, or revisable.
Hacking’s canonical example is the electron. Physicists do not merely postulate electrons in theoretical models; they use electrons: they spray them at targets, fire them through fields, build accelerators around them. The causal precision required for such manipulation establishes that something with those causal powers exists, regardless of which theoretical framework describes that something.
The entity realist’s argument has a distinctive structure. It does not proceed via IBE from explanatory power to existential conclusion. It proceeds from the success of a specific kind of practice — causal manipulation — to the conclusion that the entity manipulated is real. The argument is: you cannot reliably use X as a causal tool unless X exists and unless your working knowledge of its causal properties is approximately correct.
7.2 Nancy Cartwright: Laws as Lies
Nancy Cartwright’s How the Laws of Physics Lie (1983) extends entity realism with a distinctive thesis about the status of laws. Cartwright argues that the fundamental laws of physics — Maxwell’s equations, Schrödinger’s equation, Newton’s laws of motion — are, strictly speaking, false. They describe idealised situations that do not obtain in nature.
Physical laws do not describe real systems; they describe simulacra — idealised models constructed to permit mathematical treatment. The model of a harmonic oscillator does not describe any real spring; it describes an abstract entity whose behaviour is governed by a differential equation. Real systems are brought under such models by a complex process of idealisation, abstraction, and approximation. The explanatory power of laws derives not from their truth but from the pragmatic success of this modelling process.
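Cartwright’s gap between model and system can be dramatised in a few lines. The sketch below is a hypothetical illustration, not drawn from her text: an undamped textbook oscillator (the simulacrum of which the law is exactly true) against a weakly damped “real” spring, with made-up values for the frequency and damping constant.

```python
import math

# Cartwright's point in one toy: the harmonic-oscillator law is exactly true of
# the model, only approximately true of any real (damped) spring.
# omega, gamma, and the sample times are illustrative choices.

omega, gamma = 2.0, 0.3

def ideal(t):
    """The law's simulacrum: undamped simple harmonic motion."""
    return math.cos(omega * t)

def damped(t):
    """A slightly more realistic spring: the same oscillation, weakly damped."""
    return math.exp(-gamma * t) * math.cos(omega * t)

for t in (0.0, 2.0, 8.0):
    print(f"t={t:4.1f}  model={ideal(t):+.3f}  damped={damped(t):+.3f}")
# The model tracks the damped system at first and diverges as the
# idealisation (no friction) bites.
```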
Cartwright distinguishes phenomenological laws, which do describe observable regularities with some accuracy, from fundamental laws, which achieve generality only by losing contact with real systems. The latter lie; the former approximately tell the truth. Her positive realism is about capacities: there are real causal powers or capacities in nature, and the phenomenological laws give us reliable information about them even when the fundamental equations do not.
The simulacra account challenges a common assumption of the realism debate: that the primary epistemic unit is the full theoretical system or its constituent laws. Cartwright argues that this assumption distorts how physics actually works. Physics does not apply laws directly to the world; it constructs models — the frictionless plane, the ideal gas, the point mass — that are deliberate falsifications of reality. The success of these models is a success of the modelling strategy, not of the laws that define the models.
7.3 The Manipulation Criterion’s Problems
The manipulation criterion for entity realism is intuitively compelling but faces serious challenges.
The theory-ladenness problem: Our identification of an entity as an electron, rather than as some other particle, is theory-laden. To spray electrons is to rely on a theoretical framework that tells us what electrons are, how to select for them, how to control their trajectories, and how to detect their effects. Manipulation does not give us theory-free access to entities. The entity realist seems to presuppose the very theoretical descriptions whose approximate truth she claims to be agnostic about.
Hacking’s response is that we need not know which complete theoretical description of electrons is correct to know that electrons exist. We can triangulate from multiple, partially overlapping, partially inconsistent theoretical descriptions to a common causal core — the entity with the mass, charge, and spin that are consistently implicated across all the different frameworks. This “robustness” across frameworks is itself evidence of the entity’s reality.
The individuation problem: When we manipulate entities in an experiment, we must identify them as entities of a particular kind. But kind-membership is theory-dependent. The claim that we are manipulating electrons, not some other type of particle, depends on theoretical criteria for what makes something an electron. The manipulation criterion cannot be fully divorced from theoretical description.
The scope limitation: Manipulation-based entity realism is plausible for entities like electrons, which we have learned to manipulate with precision. But much of science posits entities — the Higgs boson, dark matter particles, gravitational waves — whose existence we infer from indirect evidence without anything like routine manipulation. The manipulation criterion, if taken strictly, would leave us agnostic about a large portion of theoretical physics.
Cartwright’s capacities as unobservables: Cartwright’s positive account of capacities is itself a realist commitment about unobservables — the capacities are not directly observable. This may be seen as a version of the very IBE-based realism she wishes to avoid. She seems to argue: the best explanation of the systematic success of phenomenological laws is the existence of real causal capacities. But this is an inference to the best explanation of the traditional kind.
Chapter 8: Structural Realism — Worrall and Ladyman
8.1 The Motivation: Preserving the Best of Both Worlds
John Worrall’s 1989 paper “Structural Realism: The Best of Both Worlds?” is motivated by a dilemma: the No-Miracles Argument supports realism, but the PMI undermines it. Worrall’s proposal is to accept a modified realism that can honour both the argument from success and the lesson from history.
The key historical observation is that while the ontology of science changes dramatically across revolutions — from the elastic solid luminiferous ether of Fresnel to the electromagnetic field of Maxwell — the mathematical equations are often preserved. Fresnel’s equations for the reflection and refraction of light survived the transition to Maxwell’s theory virtually intact, even though Maxwell’s ontology was utterly different from Fresnel’s.
What is preserved and approximately known across scientific revolutions is the structure of the world, as encoded in the mathematical equations of our theories. The nature of the entities underlying that structure — what it is like to be an electron "from the inside" — is beyond our knowledge. We know the relations; we do not know the intrinsic natures of the relata.
8.2 The Fresnel-Maxwell Continuity Case
Worrall’s central example is the transition from Fresnel’s elastic solid ether theory to Maxwell’s electromagnetic field theory. This case rewards detailed examination.
Fresnel’s theory was built around the hypothesis of a luminiferous ether — a material, elastic medium filling all of space, through which light propagates as a transverse mechanical wave. On this basis, Fresnel derived equations governing the reflection and refraction of polarised light at a boundary between two optical media. These equations — now known as the Fresnel equations — were spectacularly successful. They predicted specific intensity ratios for the reflected and refracted components of polarised light, including the complete polarisation of light reflected at the Brewster angle, and these predictions were confirmed with great precision. Fresnel’s equations yielded genuine novel predictions made on the basis of a theoretical framework.
Maxwell’s electromagnetic theory replaced Fresnel’s entirely at the ontological level. There is no elastic solid ether in Maxwell’s framework; the ether is replaced by the electromagnetic field, which has no mechanical medium of support. The mechanism of wave propagation is entirely different. Yet what happened to Fresnel’s equations? They survive in Maxwell’s theory, derivable from the full electromagnetic equations together with the boundary conditions at an interface between media. The mathematical content of Fresnel’s theory was preserved precisely; the ontological interpretation was replaced entirely.
The Fresnel-Maxwell case illustrates Worrall's central claim: what is preserved across scientific revolutions is mathematical structure — the equations — while the ontological interpretation is replaced. The realist lesson is that we should commit to the mathematical structure and withhold commitment from the ontological gloss. The equations encode genuine knowledge of the world's structural relations; the entities whose interactions supposedly generate those relations are theoretical posits beyond our epistemic reach.
8.3 The Newman Objection to Epistemic Structural Realism
ESR faces a serious objection originating from a 1928 paper by M. H. A. Newman, directed originally against Bertrand Russell’s structuralism, and revived in the context of ESR by Demopoulos and Friedman.
The Newman objection runs as follows: if all we know about the unobservable world is its structural relations — the abstract mathematical structure that our equations encode — then our knowledge is nearly trivial. By a theorem of set theory, any collection of objects of the right cardinality can be given any structural description: for any abstract structure S and any set of objects of the appropriate cardinality, there exists a relation on those objects that realises S. Therefore, the claim that the world has structure S amounts to little more than the claim that the world contains enough objects to instantiate S. This is almost no constraint at all.
If the structural realist knows only the abstract relational structure of the unobservable world — the mathematical form of the equations — without knowing anything about the nature of the relata, then the structural knowledge is trivially satisfiable by any sufficiently large collection of objects. The Fresnel equations, interpreted as pure abstract structure, are compatible with virtually any underlying ontology. This seems to make structural realism epistemically vacuous.
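The cardinality point behind the Newman objection can be sketched concretely. Assuming nothing beyond elementary set theory, the toy function below transports any abstract relation onto any set of the same size via a bijection, so the bare claim “the world instantiates structure S” constrains nothing but cardinality. All names in the example are illustrative.

```python
# Newman's point in miniature: any abstract relational structure can be imposed
# on ANY set of the right cardinality, simply by relabelling through a bijection.

def transport_structure(relation, domain, new_domain):
    """Copy a relation from `domain` onto `new_domain` via an arbitrary bijection."""
    assert len(domain) == len(set(new_domain)) == len(new_domain)
    f = dict(zip(sorted(domain), new_domain))  # any bijection will do
    return {(f[a], f[b]) for (a, b) in relation}

# An abstract "structure": a strict linear order on three nodes.
abstract = {(0, 1), (0, 2), (1, 2)}

# Impose it on fruit, or on planets -- the purely structural facts are identical.
print(transport_structure(abstract, {0, 1, 2}, ["apple", "pear", "plum"]))
print(transport_structure(abstract, {0, 1, 2}, ["Mars", "Venus", "Earth"]))
```

This is why ESR must say that its structural knowledge is physically interpreted: as pure set-theoretic structure, it is satisfied by any three objects whatsoever.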
The standard response on behalf of ESR is that the structural realist does not claim to know only abstract mathematical structure — she claims to know a physically interpreted structure, where the physical interpretation is constrained by the causal role of the posited relations. The relations that ESR is committed to are not arbitrary set-theoretic constructions; they are the physically significant relations that figure in successful causal-explanatory frameworks. This response requires unpacking what “physical interpretation” amounts to without smuggling in knowledge of intrinsic natures — a non-trivial task.
8.4 Ontic Structural Realism (Ladyman)
James Ladyman and Don Ross push structural realism further in Every Thing Must Go (2007). They find Worrall’s epistemic version unstable because it retains the idea that there are underlying entities (with unknown intrinsic natures) that instantiate the structure. But if we cannot know anything about these entities except their structural relations, the postulation of underlying non-structural natures appears idle — an unknowable Kantian thing-in-itself.
Structure is not merely all we can know about the world; structure is all there is. Objects are not prior to relations; rather, objects are merely nodes in a web of relations. There are no intrinsic natures underlying the structural relations because those relations constitute the objects themselves. The ontology of science is irreducibly structural.
Ladyman draws support from physics: quantum field theory posits fields as the fundamental ontology, and quantum particles lack determinate individual identities in ways that undermine the classical notion of an individual object with intrinsic properties. In quantum mechanics, elementary particles of the same kind (electrons, photons) are subject to permutation symmetry — swapping two identical particles produces no physically distinct state. This indistinguishability of quantum particles suggests that the classical notion of an individual with haecceity — a primitive “thisness” — is inapplicable at the quantum level. If there are no individuals in the classical sense, perhaps the ontology of physics is better understood as irreducibly structural.
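The permutation-symmetry point can be put in terms of elementary state counting. The sketch below counts two-particle states over two modes under three statistics: distinguishable (classical) particles give four states, bosons three, and fermions one, because swapping identical particles does not generate a new state.

```python
from itertools import product, combinations, combinations_with_replacement

# State counting for two particles over two modes under three statistics.

modes = ["mode1", "mode2"]

# Classical (distinguishable) particles: ordered assignments.
distinguishable = list(product(modes, repeat=2))

# Bosons: unordered, repetition allowed (a permutation is the same state).
bosons = list(combinations_with_replacement(modes, 2))

# Fermions: unordered, no repetition (Pauli exclusion).
fermions = list(combinations(modes, 2))

print(len(distinguishable), len(bosons), len(fermions))  # -> 4 3 1
```

The collapse from four states to three (or one) is exactly the loss of the classical question “which particle is in which mode?” — the question that presupposes individuals with haecceity.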
8.5 Objections to Structural Realism
The no-structure-without-objects objection: A relation must be a relation between relata. If relata are eliminated, we are left with relations between nothing — an incoherent notion. Worrall’s ESR avoids this: he retains unknown relata. But Ladyman’s OSR must explain how pure structure without underlying objects is metaphysically coherent.
OSR proponents respond that this objection presupposes a traditional substance-based metaphysics. The relata of structural relations in OSR are not independent substances with intrinsic natures; they are themselves constituted by their place in the relational structure. This is metaphysically unusual but not obviously incoherent — it parallels structuralist views in mathematics, where numbers are constituted by their relations in the number system rather than being independently existing entities that happen to stand in numerical relations.
The individuation problem: If all there is are structural relations, what distinguishes one structure from another? Structures must be individuated somehow, and this individuation seems to presuppose something non-structural.
The reference problem: If our theoretical terms refer only to structural roles, how do we identify the same structure across theories? The continuity of mathematical equations across revolutions is real, but identifying it as the preservation of the “same” structure requires assumptions that go beyond bare mathematical form.
The Newman objection: As discussed above, purely abstract structural knowledge is nearly trivially satisfiable. ESR needs to say more about the content of structural knowledge to avoid Newmanian vacuity.
Despite these challenges, structural realism remains one of the most actively developed positions in contemporary philosophy of science, especially given its engagement with modern physics.
Chapter 9: Beyond the Debate — NOA, Perspectivism, and the Contemporary Landscape
9.1 Arthur Fine’s Natural Ontological Attitude
Arthur Fine’s Natural Ontological Attitude (NOA) represents a radical departure from both realism and anti-realism. Fine argues that the realism debate is itself a mistake — that both realists and anti-realists add an interpretive gloss to science that science itself does not require and that cannot be justified.
NOA is the stance of accepting the results of science at face value, without adding any philosophical interpretation. Where science says electrons exist, the NOA-ist says electrons exist — in the same sense and with the same warrant as the scientist. NOA refuses to append either a realist "... and this means the world really is that way, mind-independently and approximately truly" or an anti-realist "... but we should only believe the observable parts." The philosophical additions are the problem, not the solution.
Fine draws an analogy: we accept the results of ordinary empirical inquiry without demanding a philosophical account of what “acceptance” really means or whether the ordinary objects we perceive “really exist.” We extend the same attitude to science. The realism debate arises only when philosophers try to add something to this core acceptance — and both realists and anti-realists add something unjustified.
The NOA position is elusive and has been criticised as unstable. Realists argue that NOA, properly developed, just is scientific realism without the philosophical label. Anti-realists argue that NOA, applied consistently, slides into constructive empiricism: if we accept science at face value, and science practised carefully is instrumentally cautious about unobservables, then NOA supports something like van Fraassen’s acceptance.
Fine’s response to both objections is that they presuppose the adequacy of the very framework he is rejecting. The realist reads NOA as minimal realism; the anti-realist reads it as minimal anti-realism. But these readings project philosophical frameworks onto a stance that is deliberately prior to any such framework. NOA takes science on its own terms; the debate about what those terms mean philosophically is precisely what NOA refuses to enter.
9.2 Hasok Chang and Scientific Pluralism
Hasok Chang’s work, particularly Is Water H₂O? Evidence, Realism and Pluralism (2012), introduces a distinctive approach that complicates the standard realism debate by attending carefully to the history and philosophy of chemistry.
Chang’s detailed case study concerns the seemingly obvious question: is water H₂O? His answer is that this question is more complex than it appears, and that attending to its complexity reveals important features of how scientific knowledge is constructed. The identification of water with H₂O was not a simple discovery but a contested theoretical achievement, one that required the marginalisation of competing frameworks — including the phlogiston system, whose adherents had their own coherent account of the same reactions.
Different theoretical frameworks — including some that were historically suppressed — can each embody coherent empirical knowledge and support successful practice within their domain. The phlogiston framework was not simply false: it was a coherent system that successfully organised a large domain of chemical knowledge, including reactions that the oxygen framework describes differently. Chang argues that scientific pluralism — the sustained coexistence and development of multiple theoretical frameworks — is epistemically valuable. It preserves the cognitive diversity that enables science to respond to anomalies and to recognise phenomena that a single dominant framework might miss.
Chang’s positive thesis is active scientific pluralism: we should deliberately cultivate multiple theoretical frameworks even in domains where one framework is dominant, because the alternatives preserve resources for dealing with future anomalies and keep the scientific community responsive to the full complexity of natural phenomena.
9.3 Perspectival Realism
Building on his historical work, Chang develops perspectival realism as a philosophical position distinct from both standard realism and anti-realism. The label covers a family of related views, some versions of which are associated with Michela Massimi.
Chang argues that scientific knowledge is always knowledge from a particular perspective — a system of practice embedded in a tradition, with specific instruments, concepts, and standards of evidence. This is not relativism. Knowledge gained from one perspective is genuine knowledge; it reliably tracks features of reality as experienced from within that perspective.
The perspectival realist makes two claims that distinguish her position from standard scientific realism:
First, knowledge from different perspectives is genuinely plural: there is no single God’s-eye framework that captures all aspects of reality, and no guarantee that the perspectives generated by different scientific traditions are converging toward such a framework.
Second, the evaluation of knowledge claims should be internal to perspectives, not measured against a theory-independent standard of correspondence to the world. This does not mean that perspectives cannot be criticised or compared; it means that such comparison is itself a practice that takes place within a broader epistemic community with shared standards.
The history of science reveals that different theoretical frameworks — the phlogiston tradition and the oxygen tradition, for instance — can both be epistemically coherent and empirically successful from within their own perspective. This does not mean they are equally true in some global sense; it means that the realism debate should not presuppose a single God’s-eye view from which all theories are assessed. Scientific pluralism — the coexistence of multiple successful frameworks — is not a failure but a resource.
Chang’s perspectival realism has the virtue of taking the history of science seriously without capitulating to the PMI. It acknowledges that theories are abandoned and replaced, but denies that this succession must be understood as progressive convergence toward a unique true theory. Multiple perspectives can each contribute genuine knowledge of a multifaceted world.
9.4 Is the Debate Productive?
A question that any serious student of this debate must confront is whether the debate is productive — whether it is making progress, and whether the terms in which it is conducted are the right ones. Several critical assessments are worth noting.
The pragmatist critique: Some philosophers argue that the debate between realists and anti-realists has become largely sterile because both sides are responding to arguments rather than to the actual practice of science. The positions have become highly refined technically, but the refinements have moved further from the original practical question: what attitude should a working scientist take toward her theoretical posits?
The naturalist critique: Quine’s naturalism — the view that philosophy of science is continuous with science — suggests that the realism debate cannot be settled by purely a priori philosophical argument. What is needed is empirical investigation of the conditions under which scientific inference is reliable and the conditions under which it misleads. This is a project for cognitive science, history of science, and formal epistemology, not for armchair philosophy.
The case for continuing the debate: Despite these critiques, the debate has generated genuine philosophical progress. It has sharpened our understanding of the nature of scientific explanation, the structure of evidential reasoning, the significance of theory change, and the relationship between science and metaphysics. The positions developed in this debate — structural realism, constructive empiricism, entity realism — are philosophically sophisticated contributions that could not have been reached without the dialectical pressure of argument and counter-argument.
9.5 The Contemporary Landscape
The realism–anti-realism debate continues to evolve. Several trends mark the contemporary landscape.
Turn to practice: Following Hacking, Cartwright, and Chang, many philosophers of science have argued that the debate should be informed by detailed attention to scientific practice rather than to the logical structure of theories in isolation. What scientists actually do — how they build models, design experiments, negotiate standards of evidence — should constrain philosophical positions.
Engagement with physics: The development of quantum field theory, quantum gravity, and the interpretation of quantum mechanics has given new urgency to structural realism and has raised questions about whether the concept of an individual entity with determinate properties is coherent at the fundamental level.
Scientific pluralism: Following Chang and others, the idea that a single unified theory of the world should be the goal of science has been challenged. Pluralists argue that the proliferation of models, frameworks, and theories in science is not a temporary imperfection but a permanent and productive feature.
Deployment of formal tools: Bayesian epistemology and formal learning theory have been brought to bear on the debate, offering more precise formulations of the conditions under which empirical success should raise our credence in theoretical claims.
Naturalistic epistemology: The growing influence of cognitive science, evolutionary epistemology, and the history of science on philosophy of science has pushed the debate toward more empirically grounded approaches, challenging the a priori character of much traditional argumentation in this area.
9.6 Synthetic Assessment
The realism–anti-realism debate is unlikely to be definitively resolved by any single argument. The No-Miracles Argument has genuine force: the novel predictive successes of science require explanation, and the realist explanation is the most natural. But the PMI and the problem of unconceived alternatives show that the history of science does not straightforwardly support the robust epistemic optimism the realist needs.
The most defensible positions appear to be those that:
- Take history seriously without being paralysed by it (selective realism, structural realism).
- Attend to scientific practice rather than idealised theory structure (entity realism, perspectivism).
- Calibrate epistemic commitment to the actual contribution of theoretical posits to predictive success (divide et impera).
- Acknowledge the limits of our cognitive situation without collapsing into agnosticism about the existence of any unobservable structure (epistemic structural realism).
The realism debate is not merely a technical dispute within philosophy of science. It bears on how we understand rationality, how we conceive the relationship between human knowledge and the world, and how we evaluate the extraordinary intellectual achievement that constitutes modern science. Neither realism nor anti-realism can simply be read off from the facts of scientific practice; both are philosophical interpretations of those facts. The richness of the debate lies in the fact that each position captures something genuine about science, and the task of philosophy is to articulate, with rigor and humility, what exactly that something is. Perhaps the most honest conclusion is that scientific realism and anti-realism are not doctrines to be adopted wholesale, but frameworks within which specific epistemological questions about specific sciences and specific theoretical posits must be worked out case by case — with close attention to the history, the logic, and the practice of the science in question.