PHIL 205: Philosophy of Economics

Patricia Marino

Estimated study time: 32 minutes

Note: This course is cross-listed as ECON 261 / PHIL 205. The content is identical regardless of which department enrolment you hold. Students enrolled under ECON 261 satisfy economics breadth requirements; those under PHIL 205 satisfy philosophy breadth requirements. Both sections attend the same lectures.

Sources and References

Primary texts — Daniel Hausman, ed., The Philosophy of Economics: An Anthology, 3rd ed. (Cambridge University Press, 2008); Milton Friedman, Essays in Positive Economics (University of Chicago Press, 1953)

Supplementary texts — Robert Nozick, Anarchy, State, and Utopia (Basic Books, 1974); John Rawls, A Theory of Justice (Harvard University Press, 1971); Cathy O’Neil, Weapons of Math Destruction (Crown, 2016); Russ Shafer-Landau, ed., Ethical Theory: An Anthology (Wiley, 2013); Louise Antony, Charlotte Witt, and Margaret Atherton, eds., A Mind of One’s Own: Feminist Essays on Reason and Objectivity (Westview Press, 2001); Drucilla Barker and Edith Kuiper, eds., Toward a Feminist Philosophy of Economics (Routledge, 2003)

Online resources — Stanford Encyclopedia of Philosophy (plato.stanford.edu), PhilPapers (philpapers.org), EconPapers (econpapers.repec.org), NBER Working Papers (nber.org), arXiv economics (arxiv.org/econ)


Chapter 1: Introduction — Economics and Philosophy

1.1 Why Philosophy and Economics?

Philosophy and economics have been intertwined since antiquity. Adam Smith — widely regarded as the founding figure of modern economics — was first and foremost a moral philosopher. His Theory of Moral Sentiments (1759) preceded The Wealth of Nations (1776) and provided its moral foundation. The famous passage on the “invisible hand” cannot be understood apart from Smith’s philosophical conviction that commercial society channels self-interest into socially beneficial outcomes — a claim that is as much normative as it is descriptive.

The division of economics from philosophy in the twentieth century was largely institutional rather than conceptual. Many of the deepest questions in economics — about rationality, welfare, justice, and measurement — remain irreducibly philosophical. This course treats that division as a historical artifact rather than a permanent feature of intellectual life.

The philosophy of economics asks three broad families of questions:

  1. Methodological questions: How does economics produce knowledge? What role do assumptions, models, and idealizations play? Can economics be a value-free science?
  2. Conceptual questions: What do economic concepts like rationality, welfare, preference, and efficiency really mean? Are the formalizations economists use adequate representations of human life?
  3. Normative questions: What makes an economic outcome just or unjust? How should cost, benefit, and inequality be measured and weighed in public policy?

1.2 Adam Smith and the Origins of Political Economy

Smith’s Wealth of Nations opens with the observation that the division of labour is the great engine of economic growth. In a famous example, he shows how a pin factory employing ten workers each specialised in a single step can produce forty-eight thousand pins per day — a feat impossible if each worker made pins from start to finish. This insight — that specialisation and exchange generate prosperity — became the organising idea of classical economics.

Yet Smith was equally alert to pathologies of commercial society. He worried that specialised labour degrades workers intellectually and morally. He observed that merchants and manufacturers systematically lobby for policies that serve their interests at the expense of the public. And he was acutely aware that the benefits of commercial society were unevenly distributed.

For later philosophers of economics, Smith’s legacy raises an immediate methodological puzzle: is the Wealth of Nations a positive theory of how markets work, a normative argument for free trade, or both? This question — whether economics can cleanly separate description from prescription — runs through the entire course.


Chapter 2: What Is Economics? Definitions and Methodology

2.1 Defining the Discipline

Daniel Hausman surveys competing definitions of economics in his introductory essay. The discipline has been defined variously as the science of wealth (Smith), the science of exchange (Jevons), and — most influentially in the twentieth century — the science of choice under scarcity (Robbins). Lionel Robbins’s 1932 definition frames economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses.” This definition is striking for what it excludes: it makes no reference to production, distribution, markets, or money. Economics becomes, on this view, a formal theory of rational allocation.

Roger Backhouse and Steven Medema trace how Robbins’s definition came to dominate the discipline and why it remains contested. Historicists and institutionalists objected that a purely formal definition strips economics of its subject matter — the actual economy, with its institutions, power relations, and historical contingencies. Behaviouralists later objected that the “ends and means” framework presupposes a model of rationality that is empirically false. Feminist economists objected that Robbins’s framework systematically excludes unpaid care work and non-market production.

Cutting across these competing definitions is the distinction between positive and normative economics. Positive economics aims to describe and predict economic phenomena without making value judgements — to say what is rather than what ought to be. Normative economics evaluates outcomes and recommends policies — it engages directly with questions of what is good, fair, or just.

2.2 Mill on Political Economy as a Hypothetical Science

John Stuart Mill argued in “On the Definition of Political Economy” (1844) that political economy is not a fully empirical science but a hypothetical one: it studies what humans would do if they were motivated solely by the desire for wealth, abstracting from other motivations like benevolence, aversion to labour, and love of luxury. Mill called this the method of ceteris paribus abstraction — isolating one causal factor by holding others fixed.

This is both the great strength and the great limitation of economic reasoning. The strength is analytical precision: by isolating a single motive, economists can derive clear predictions. The limitation is that the predictions apply only to hypothetical agents, not necessarily to real human beings acting in complex social contexts.

Mill’s account anticipates a debate that persists today: when is idealisation a legitimate methodological tool, and when does it become a distortion that misleads more than it illuminates?


Chapter 3: Methodology I — Are Economic Assumptions False? Friedman and His Critics

3.1 Friedman’s Instrumentalism

Milton Friedman’s “The Methodology of Positive Economics” (1953) is one of the most cited and contested essays in twentieth-century social science. Friedman argues that the scientific value of a theory should be judged solely by its predictive success, not by the realism of its assumptions. The as-if methodology he defends holds that it does not matter whether economic agents actually maximise utility — what matters is whether they behave as if they do.

Friedman’s famous illustration: we need not assume that expert billiard players consciously calculate angles of incidence and Newtonian mechanics. They play as if they do. Similarly, firms need not consciously maximise profits; competitive pressure selects for firms that behave as if they do.

Friedman's essay was motivated partly by a desire to insulate economics from criticism of its psychological assumptions. If assumptions need not be realistic, then objections to the homo economicus model — that people are not purely self-interested, fully rational, or well-informed — become methodologically irrelevant.

3.2 Caldwell’s Critique

Bruce Caldwell subjects Friedman’s instrumentalism to a detailed internal critique. Caldwell argues that Friedman conflates two distinct methodological positions:

  1. Instrumentalism proper: theories are not true or false; they are merely useful or not useful as predictive instruments.
  2. Descriptivism: good theories are those whose assumptions are approximately true of reality.

Caldwell shows that Friedman’s own arguments slide between these positions. When Friedman defends economics against critics, he appeals to predictive success (instrumentalism). But when he argues that competitive markets select for profit-maximising firms, he implicitly appeals to causal realism (descriptivism). The methodology is internally inconsistent.

More fundamentally, Caldwell questions whether predictive success is really the only criterion for theory choice. Theories also matter for policy. If a theory’s assumptions are wildly false, policies derived from it may fail catastrophically even when the theory’s narrow predictions are locally accurate. The assumption that agents are fully rational may be adequate for predicting aggregate price responses in competitive markets but disastrous as a basis for, say, designing retirement savings policy.


Chapter 4: Methodology II — Economic Models and Economics as a Science

4.1 The Status of Economic Models

Robert Sugden’s “Credible Worlds” (2000) offers a subtle account of what theoretical economic models do. Sugden rejects both the instrumentalist view (models are predictive tools, nothing more) and the naive realist view (models describe the world). Instead, he argues that economic models construct credible worlds — internally consistent possible worlds whose logic economists explore.

The value of a model like the game-theoretic account of oligopoly, or the Edgeworth box representation of exchange, is not that these constructions accurately mirror reality. It is that they are internally coherent and that their logic, once worked out, generates insights that can be applied — with caution and judgment — to the actual economy. Sugden calls this trans-world induction: reasoning from what would happen in the model world to what might happen in the actual world.

A model in economics is an abstract, simplified representation of some economic process or institution. Models typically specify agents, their preferences and information, the rules of interaction, and an equilibrium concept. The justification of a model is partly internal (is it consistent?) and partly external (does it illuminate something about the actual economy?).

4.2 Is Economics a Science?

Raj Chetty argues in defence of economics as an empirical science: modern economists have embraced randomised controlled trials, natural experiments, and large administrative datasets in ways that bring economics close to the experimental sciences. The “credibility revolution” in econometrics has made causal identification a central concern, replacing the older reliance on theoretical assumptions.

Eric Schliesser responds that Chetty’s account works well for microeconomic policy evaluation but neglects macroeconomics, financial economics, and the philosophy of economics itself. The hardest questions — about growth, inequality, and financial stability — do not yield to randomised trials.

Alex Rosenberg takes a more sceptical position, arguing that economics is best understood as a branch of applied mathematics rather than an empirical science. Economic theory is structured like mathematics: it proceeds by deduction from axioms and is evaluated by internal consistency rather than by contact with evidence. When economists do engage with data, they typically test derived predictions rather than fundamental assumptions — a methodological asymmetry that Rosenberg finds troubling.

The debate about whether economics is a science is not merely academic. It affects how much authority economists claim when advising policymakers, how economic research is funded, and whether alternative frameworks (feminist, institutionalist, post-Keynesian) are taken seriously within the discipline.

Chapter 5: Rational Choice Theory — Classical Critiques

5.1 The Standard Model

The standard model of rational choice in economics posits an agent with a preference ordering over outcomes that satisfies completeness (the agent can compare any two outcomes) and transitivity (if A is preferred to B and B to C, then A is preferred to C). A rational agent chooses the option that maximises her utility — a numerical representation of her preference ordering — given her beliefs and budget constraint.

This model has proven extraordinarily versatile. It underlies consumer theory, producer theory, game theory, and much of public economics. Its formal power is undeniable. But its adequacy as a model of real human agency has been challenged from multiple directions.
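
The standard model described above can be made concrete in a few lines of code. The sketch below, with invented outcomes, utilities, and prices, encodes a preference ordering as a utility function, checks the completeness and transitivity axioms, and then selects the utility-maximising option within a budget constraint.

```python
from itertools import combinations

# A toy preference ordering over three outcomes, encoded as a utility
# function (higher number = more preferred). All values are invented.
utility = {"apple": 3, "banana": 2, "carrot": 1}

def prefers(a, b):
    """Weak preference: a is at least as good as b."""
    return utility[a] >= utility[b]

outcomes = list(utility)

# Completeness: any two outcomes can be compared.
complete = all(prefers(a, b) or prefers(b, a)
               for a, b in combinations(outcomes, 2))

# Transitivity: if a is weakly preferred to b and b to c, then a to c.
transitive = all(prefers(a, c)
                 for a in outcomes for b in outcomes for c in outcomes
                 if prefers(a, b) and prefers(b, c))

# Rational choice: maximise utility subject to a budget constraint.
prices = {"apple": 4, "banana": 2, "carrot": 1}   # invented prices
budget = 3
affordable = [x for x in outcomes if prices[x] <= budget]
choice = max(affordable, key=utility.get)

print(complete, transitive, choice)  # True True banana
```

Because the preferences are represented numerically, completeness and transitivity hold automatically — which illustrates why utility representation and the two axioms go hand in hand in the standard model.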

5.2 Sen’s “Rational Fools”

Amartya Sen’s “Rational Fools” (1977) is one of the most influential critiques of the standard model. Sen targets the assumption that all human behaviour can be explained by utility maximisation — that apparent altruism, commitment, and other-regardingness must ultimately be reducible to self-interest.

Sen draws a crucial distinction between sympathy and commitment:

Sympathy occurs when the welfare of another person enters directly into one's own utility function. If I am distressed by your suffering, your suffering reduces my utility; by helping you, I am still maximising my own utility. Sympathy is compatible with the standard model.

Commitment is different: it involves acting on a principle or value in a way that may reduce one’s own welfare (on any reasonable accounting). A whistleblower who sacrifices her career to expose wrongdoing acts from commitment, not sympathy. Commitment cannot be accommodated within a utility-maximisation framework without distorting what commitment means.

Sen’s verdict on homo economicus is pointed: “The purely economic man is indeed close to being a social moron.” A being capable only of self-interested utility maximisation cannot be a moral agent — it cannot make promises, keep obligations, or act from principle. Sen argues that economic theory needs richer accounts of motivation and agency. The failure to provide them does not merely make economics empirically inaccurate; it makes it incapable of understanding important dimensions of social and economic life.


Chapter 6: Feminist Critiques of Rational Choice Theory

6.1 England’s Sociological Critique

Paula England’s “A Feminist Critique of Rational-Choice Theories” (1989) identifies three assumptions embedded in neoclassical rational choice theory that encode androcentric (male-centred) biases:

  1. Separative selfhood: the model assumes fully autonomous individuals whose preferences are formed independently of social relationships. This misses the extent to which preferences, identities, and capacities are formed through relationships — a dimension of social life that feminist scholars argue has been systemically devalued and coded as feminine.

  2. No interpersonal utility comparisons: the standard framework refuses to compare welfare across individuals, making it impossible to evaluate whether one person’s gain justifies another’s loss. This renders the framework normatively silent on questions of distribution and inequality.

  3. Tastes as given and unquestioned: the model treats preferences as exogenous data, not asking how they are formed or whether they are adaptive to unjust social conditions. England argues that preferences shaped by oppression — accepting lower wages, deferring to authority — cannot serve as uncritical normative baselines.

6.2 Cudd’s Defence and Revision

Ann Cudd’s response to feminist critics is more conciliatory than dismissive. Cudd agrees that rational choice theory, as commonly practised, embeds problematic assumptions. But she argues that the framework itself is flexible enough to incorporate relational preferences, adaptive preference formation, and interpersonal comparison — if economists are willing to do the work.

Cudd identifies feminist insights that can enrich rational choice theory: the recognition that preferences are socially conditioned, that cooperation and care are rational responses to social interdependence, and that power asymmetries structure the choices available to different agents. On Cudd’s view, the problem is not rational choice theory per se but its application in impoverished, ideologically convenient forms.

The debate between England and Cudd echoes a broader methodological dispute in feminist philosophy: should scholars work within mainstream frameworks to correct their biases (feminist empiricism), or does the framework itself need to be replaced (feminist standpoint theory or feminist postmodernism)?

Chapter 7: Behavioural Economics and Its Critics

7.1 The Behavioral Challenge

Behavioural economics emerged in the 1980s and 1990s as a systematic programme of testing the predictions of standard rational choice theory against experimental data. Drawing heavily on the work of psychologists Daniel Kahneman and Amos Tversky, behavioural economists documented a wide range of systematic deviations from the rational choice model: loss aversion (losses loom larger than equivalent gains), hyperbolic discounting (preference for immediate rewards is much stronger than standard exponential discounting predicts), framing effects (choices depend on how options are described, not just on their objective properties), and anchoring (irrelevant numerical cues affect judgements).
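
The contrast between exponential and hyperbolic discounting can be shown in a short sketch. With the illustrative parameters below (the discount factors are invented for exposition), the exponential discounter makes the same choice between a smaller-sooner and larger-later reward whether the pair is offered today or thirty days from now, while the hyperbolic discounter reverses her preference — the signature of present bias.

```python
# Exponential vs hyperbolic discounting, with invented parameters.
# Exponential: value = amount * delta**t        (standard model)
# Hyperbolic:  value = amount / (1 + k * t)     (behavioural finding)

def exponential(amount, t, delta=0.999):
    return amount * delta ** t

def hyperbolic(amount, t, k=0.2):
    return amount / (1 + k * t)

# Choice: $100 at time t vs $110 at time t+1, offered now (t=0)
# and offered with a 30-day delay (t=30).
for discount in (exponential, hyperbolic):
    picks_smaller_now = discount(100, 0) > discount(110, 1)
    picks_smaller_later = discount(100, 30) > discount(110, 31)
    print(discount.__name__, picks_smaller_now, picks_smaller_later)
```

The exponential agent is time-consistent (both comparisons come out the same way); the hyperbolic agent takes the immediate $100 but, for the delayed pair, waits the extra day for $110.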

7.2 Jolls, Thaler, and Sunstein

Christine Jolls, Richard Thaler, and Cass Sunstein’s “A Behavioral Approach to Law and Economics” (1998) is a landmark attempt to import behavioural findings into legal analysis. They argue that law and economics, which traditionally assumes fully rational actors, should be reconstructed around three families of behavioural deviation:

Bounded rationality: cognitive limitations mean agents make systematic errors — they use heuristics rather than optimal algorithms, and the heuristics can mislead.

Bounded willpower: agents have self-control problems; they make plans they know they will fail to execute. Standard models treat preferences as stable; real people struggle with weakness of will.

Bounded self-interest: people care about fairness and reciprocity in ways that cannot be reduced to self-interest. Workers accept pay cuts more readily when they perceive them as fair; consumers boycott firms they view as exploitative even at personal cost.

These deviations have legal implications. If consumers systematically underestimate risks (bounded rationality), mandatory disclosure requirements may be more effective than standard economics suggests. If employees systematically over-weight present income relative to deferred compensation (bounded willpower), mandatory pension contributions may be welfare-improving even on liberal premises.

7.3 Posner’s Scepticism

Richard Posner defends the rational choice model against behavioural critiques. His core argument is that deviations from rationality documented in experimental settings may not survive in competitive markets. When the stakes are high, agents consult experts, acquire information, and learn from mistakes. When markets are competitive, irrational actors lose to rational ones. The relevant question is not whether experimental subjects make mistakes in artificial conditions, but whether systematic irrationality persists in consequential real-world decisions.

Posner also questions the policy implications of behavioural economics. If agents are irrational, who designs the “nudges”? The regulators themselves may be subject to cognitive biases, political pressures, and information deficits. Paternalistic interventions justified by behavioural economics may fail worse than the market failures they purport to correct.


Chapter 8: Normative Economics I — Property Rights and Inequality

8.1 Nozick’s Libertarian Framework

Robert Nozick’s “Distributive Justice” (1973) — an excerpt from Anarchy, State, and Utopia — argues for a historical entitlement theory of distributive justice. Nozick holds that a distribution of holdings is just if and only if it arose through a just history: just initial acquisition and just transfers.

Nozick's entitlement theory comprises three principles:
  1. Justice in acquisition: the original appropriation of unowned resources is just if it meets a Lockean proviso — roughly, that it does not worsen the situation of others.
  2. Justice in transfer: a transfer of holdings from one person to another is just if it is voluntary (gift, exchange, etc.).
  3. Justice in rectification: injustices in acquisition or transfer must be corrected.
Any distribution that arises from just acquisitions and transfers — however unequal — is just. Redistribution by the state violates individual rights; taxation of earnings is, Nozick provocatively claims, on a par with forced labour.

Nozick’s target is patterned theories of justice — those that say a distribution is just if and only if it fits some pattern (equal distribution, distribution according to need, distribution according to desert). Against any such theory, Nozick poses the Wilt Chamberlain argument: start from any just patterned distribution; allow people to make voluntary transfers (e.g., paying a talented performer to perform); the result is an unpatterned, unequal distribution that arose through free choice. Maintaining any pattern requires continuous interference with voluntary transactions.

8.2 Rawls on Distributive Justice

John Rawls defends a radically different framework in A Theory of Justice (1971). Rawls argues that principles of justice should be those that rational persons would choose from behind a veil of ignorance — a thought experiment in which they do not know their place in society, their class, their talents, or their conception of the good.

From behind the veil, Rawls argues, rational persons would choose two principles:

  1. Equal basic liberties: each person should have the most extensive system of equal basic liberties compatible with a similar system for all.
  2. The difference principle: social and economic inequalities are just only if they benefit the least advantaged members of society.

The difference principle is a powerful egalitarian constraint: it permits inequality only as an incentive to generate growth that improves the absolute position of the worst-off. It rules out the Nozickian view that any voluntarily produced distribution is just, since distributions that arise from the free market may leave the worst-off far worse than they could be.

8.3 Sen on Property, Entitlement, and Hunger

Amartya Sen’s “Property and Hunger” (1988) bridges normative political philosophy and empirical economics. Sen’s key insight, developed in his earlier Poverty and Famines (1981), is that famines are rarely caused by absolute food shortages. The Bengal famine of 1943, the Ethiopian famines of the 1970s and 1980s, and the Bangladeshi famine of 1974 all occurred in conditions of adequate or near-adequate aggregate food supply. What failed was people’s entitlements — their legal and economic ability to command food.

An entitlement, in Sen's framework, is a person's command over commodities — the set of commodity bundles she can acquire through legally sanctioned means: her own production, exchange of her labour or assets, transfer, or inheritance. Famine occurs when entitlements collapse: wages fall, food prices rise, or employment disappears, leaving people unable to acquire food even when it exists in the market.

Sen’s framework has deep implications for how we think about property rights and justice. Nozick treats property rights as side-constraints — near-absolute entitlements that limit what the state may do. Sen shows that the same property rights framework that protects the holdings of the wealthy can generate the entitlement failures that produce mass starvation. Property rights are not neutral instruments: they structure who can command resources, and that structure has life-or-death consequences.


Chapter 9: Normative Economics II — Problems in Cost-Benefit Analysis

9.1 The Appeal of Cost-Benefit Analysis

Cost-benefit analysis (CBA) is the dominant framework for evaluating public policy in most wealthy democracies. Its basic logic is straightforward: a policy is worth adopting if and only if its total benefits, summed across all affected parties, exceed its total costs. CBA operationalises utilitarian welfare economics: aggregate welfare is what matters, and money provides a common metric for aggregating heterogeneous goods and harms.

9.2 Frank’s Philosophical Critique

Robert Frank’s “Why Is Cost-Benefit Analysis So Controversial?” identifies three families of objection:

  1. The measurement problem: how do we assign monetary values to non-market goods — clean air, human lives, dignity, ecosystems? Willingness-to-pay (WTP) measures are standard: a person’s WTP for a safety improvement reflects how much she values it. But WTP is sensitive to wealth. The rich will pay more to avoid a given risk than the poor, not because they value their lives more but because they have more money. CBA based on WTP systematically undervalues the welfare of the poor.

  2. Incommensurability: some values cannot be reduced to a common monetary metric without distortion. The claim that a human life is worth $9 million (as regulatory agencies sometimes assume) is not a description of how people actually value their lives; it is a modelling convenience that may generate absurd results when taken seriously.

  3. Distributional blindness: standard CBA aggregates costs and benefits regardless of who bears them. A policy that produces $100 of benefit for a billionaire and $99 of cost for a subsistence farmer registers as a net gain. Many critics argue that distributional weights — weighting gains to the poor more heavily than gains to the rich — are essential to any defensible form of CBA.
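
The billionaire/farmer example from point 3 can be run through both an unweighted and a weighted CBA. The sketch below uses one common (and contestable) weighting choice — the marginal value of a dollar taken as inversely proportional to income — with invented incomes and an invented reference income.

```python
# The $100 billionaire-gain / $99 farmer-loss policy from the text,
# evaluated with and without distributional weights. All numbers other
# than the two net benefits are illustrative assumptions.

effects = [
    {"person": "billionaire", "income": 1_000_000_000, "net_benefit": 100},
    {"person": "farmer",      "income": 1_000,         "net_benefit": -99},
]

# Standard CBA: sum net benefits across everyone affected.
unweighted = sum(e["net_benefit"] for e in effects)

# Weighted CBA: weight each dollar by reference_income / income, so a
# dollar to a poor person counts for more than a dollar to a rich one.
reference_income = 50_000  # invented reference point
weighted = sum(e["net_benefit"] * (reference_income / e["income"])
               for e in effects)

print(unweighted > 0)  # True: standard CBA approves the policy
print(weighted > 0)    # False: weighted CBA rejects it
```

The sign flip is the whole point: the policy registers as a $1 net gain under simple aggregation, but as a large net loss once gains and losses are weighted by who bears them.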

9.3 Hansson on Philosophical Problems

Sven Ove Hansson provides a systematic philosophical inventory of difficulties:

  • Future generations: how should CBA discount costs and benefits borne by people not yet born? Any positive discount rate treats future welfare as less important than present welfare, which seems morally arbitrary.
  • Catastrophic and irreversible risks: standard CBA handles these poorly. A small probability of catastrophic, irreversible harm may not register as a large expected cost, but the asymmetry between reversible and irreversible outcomes argues for precautionary weighting.
  • Non-consequentialist values: CBA is consequentialist — it evaluates policies by their outcomes. But many people hold that some outcomes are wrong regardless of their consequences (torture, for instance), and that some distributions are unjust regardless of aggregate welfare.
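
The future-generations problem in the first bullet is easy to quantify. The sketch below computes the present value of a benefit accruing a century from now under a few discount rates (the amount and rates are illustrative); even modest positive rates make distant-future welfare nearly vanish from the analysis.

```python
# Present value of a $1,000,000 benefit accruing 100 years from now,
# at several illustrative discount rates.

def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

for rate in (0.0, 0.01, 0.03, 0.07):
    pv = present_value(1_000_000, rate, 100)
    print(f"rate {rate:.0%}: present value ${pv:,.0f}")
```

At a 7% rate the million-dollar benefit is worth roughly a thousand dollars today — which is why the choice of discount rate, a seemingly technical parameter, carries enormous moral weight in climate and infrastructure policy.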

9.4 Choy’s Indigenous Worldview Critique

Yee Keong Choy challenges CBA from an Indigenous worldview perspective. Standard CBA assumes a Western ontology in which nature is a collection of resources with instrumental value to human beings. Many Indigenous frameworks treat nature as relational and intrinsically valuable — land is not a commodity to be priced but a community to be respected. Choy argues that the framing of environmental decisions in CBA terms forecloses Indigenous perspectives before the analysis begins, making CBA a tool of epistemic as well as material dispossession.


Chapter 10: Algorithmic Bias and Discrimination in Economic Decision-Making

10.1 The Rise of Algorithmic Decision Systems

Contemporary economic life is increasingly governed by algorithmic decision systems: credit scoring algorithms determine who receives loans; hiring algorithms filter job applicants; predictive policing algorithms direct police resources; healthcare algorithms determine treatment recommendations. These systems share a common structure: they use historical data about inputs (characteristics of individuals) and outputs (past decisions and outcomes) to produce predictions or rankings used in high-stakes decisions.

The proliferation of such systems has generated intense debate about algorithmic discrimination. The concern is that algorithms trained on historical data may reproduce, and in some cases amplify, patterns of discrimination embedded in that history.

10.2 O’Neil on Weapons of Math Destruction

Cathy O’Neil argues that many high-stakes algorithms share a dangerous combination of features: they are opaque (their workings are proprietary or technically impenetrable), unaccountable (decisions are attributed to “the algorithm,” diffusing responsibility), and self-fulfilling (predictions shape the very outcomes they purport to predict). O’Neil calls such systems “weapons of math destruction.”

A recidivism prediction algorithm used in criminal sentencing exemplifies the problem. The algorithm uses zip code, employment history, and social network as predictors of reoffending. Each of these variables is correlated with race as a consequence of historical discrimination in housing, employment, and policing. The algorithm thereby perpetuates racial disparities in sentencing while appearing to rest on neutral statistical reasoning.

10.3 Gandy on Rational Discrimination

Oscar Gandy extends the analysis to what he calls rational discrimination — discrimination that is profit-maximising for firms even when it is socially harmful. Decision support systems that classify individuals by predicted profitability may be rational from the firm’s perspective while generating systematic disadvantage for already marginalised groups. Gandy argues that such systems require active regulatory oversight: the fact that discrimination is produced by an algorithm rather than a discriminatory human intent does not make it less damaging.

The concept of rational discrimination poses a direct challenge to the standard economic argument that competitive markets eliminate discrimination. Gary Becker's classic argument held that discriminatory employers bear a cost for their "taste" for discrimination and are therefore outcompeted by non-discriminating firms. But algorithmic systems may produce statistically accurate, profit-maximising discrimination that no competitive pressure will erode.

10.4 Kleinberg et al. on Algorithms as Discrimination Detectors

Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, and Cass Sunstein offer a surprising argument: algorithmic decision systems may be more amenable to legal scrutiny for discrimination than human judgement, not less. When a human decision-maker discriminates, the discrimination is typically invisible — the decision-maker’s reasons are opaque, their data are unrecorded, and establishing discriminatory intent is legally and empirically difficult. An algorithm, by contrast, makes its criteria explicit and auditable. By examining what variables are used and how they are weighted, regulators can assess whether the algorithm produces discriminatory outputs and whether its criteria are justified by predictive validity.

Kleinberg et al. argue that the appropriate regulatory response is therefore not to ban algorithmic decision-making but to require transparency and auditing — to leverage the algorithm’s explicitness as a tool for detecting and remedying discrimination that would otherwise go unnoticed.
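
Because an algorithm’s decisions can be logged and replayed, the kind of audit Kleinberg et al. envisage can be very simple in form. The sketch below is a generic illustration, not the authors’ own procedure: it compares approval rates across two groups in an invented decision log and applies the “four-fifths rule” heuristic used in US employment-discrimination practice as a flagging threshold.

```python
# Invented decision log: (group, approved). 80/100 approvals for group A,
# 50/100 for group B.
log = [("A", True)] * 80 + [("A", False)] * 20 \
    + [("B", True)] * 50 + [("B", False)] * 50

def approval_rate(log, group):
    decided = [approved for g, approved in log if g == group]
    return sum(decided) / len(decided)

rate_a = approval_rate(log, "A")   # 0.80
rate_b = approval_rate(log, "B")   # 0.50

# Four-fifths heuristic: flag if the lower group's approval rate falls
# below 80% of the higher group's rate.
flagged = min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8
print(rate_a, rate_b, flagged)  # 0.8 0.5 True
```

A disparity flag of this kind is only the start of the inquiry — the audit must then ask whether the variables driving the gap are justified by predictive validity — but the point stands: no comparable replay is possible for a human decision-maker’s unrecorded reasons.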


Chapter 11: Feminist Philosophy of Economics — Power, Work, and Marginalized Perspectives

11.1 The Feminist Economics Programme

Feminist economics is not a single unified theory but a broad research programme united by the conviction that mainstream economics systematically misrepresents and undervalues forms of economic activity and social relationships associated with women. The programme has three main components: a critique of the conceptual frameworks of mainstream economics (Chapters 5 and 6), an empirical investigation of gender inequality in labour markets, households, and global supply chains, and a normative argument about how economic frameworks should be reconstructed.

11.2 Charusheela on Work and Empowerment

S. Charusheela’s “Empowering Work?” examines the concept of empowerment as it has been deployed in development economics and feminist development policy. She argues that mainstream frameworks — including those of feminist economists — tend to understand empowerment primarily through increased labour force participation and income generation. This framing reproduces assumptions of liberal individualism: the empowered agent is a market participant, her empowerment measured by her ability to earn and consume.

Charusheela draws on postcolonial and Third World feminist frameworks to challenge this framing. For women in many contexts, empowerment through paid work may coexist with intensified exploitation, loss of community ties, and subjection to new forms of discipline. The concept of empowerment requires situating paid work within broader structures of power — gender, class, race, caste, and colonialism — rather than treating market participation as intrinsically liberatory.

11.3 Zein-Elabdin on the Difficulty of Feminist Economics

Eiman Zein-Elabdin’s “The Difficulty of a Feminist Economics” identifies a deep tension within the feminist economics programme. On one hand, feminist economists want to reform mainstream economics from within — to make its models more accurate by incorporating gender, care, and relational goods. On the other hand, a more radical strand of feminist economics argues that the conceptual apparatus of mainstream economics — methodological individualism, preference satisfaction, market exchange as the paradigm of economic interaction — is so deeply gendered that it cannot be reformed without being replaced.

Zein-Elabdin situates this tension within postcolonial theory: the very categories of economic analysis (individual, property, exchange, development) emerged from European colonial projects and cannot be universalised without epistemic violence. A genuinely feminist economics must also be a postcolonial economics — one that takes seriously the experiences and knowledge systems of women in the Global South.


Chapter 12: Decolonization and Eurocentrism in Economics

12.1 What Decolonization Means

Eve Tuck and K. Wayne Yang’s “Decolonization Is Not a Metaphor” (2012) is essential reading for any course that takes seriously the politics of knowledge production. Tuck and Yang argue that the term “decolonisation” has been widely and irresponsibly adopted as a synonym for general social justice work — “decolonising the curriculum,” “decolonising our methods,” “decolonising student thinking” — in ways that evacuate its specific and unsettling content.

Decolonisation, Tuck and Yang insist, refers specifically to the repatriation of Indigenous land and the restoration of Indigenous sovereignty. It is not a metaphor:

Decolonization doesn’t have a synonym.

The metaphorisation of decolonisation serves what Tuck and Yang call settler moves to innocence — gestures that allow settler-colonial subjects to acknowledge colonialism while avoiding its material implications. Including an Indigenous author on a reading list, or adopting “decolonising methods,” may signal awareness of Indigenous experience while leaving intact the settler-colonial structures — land ownership, governance, legal systems — that are the actual targets of decolonisation.

Tuck and Yang distinguish between external colonialism (the expropriation of resources from colonised territories) and settler colonialism (a structure in which settlers displace Indigenous peoples and claim their land as home). Settler colonialism, they argue, is not an event that ended but a structure that continues — including in contemporary academic institutions built on stolen land.

The implications for the philosophy of economics are significant. When economists propose to “decolonise economics” by diversifying reading lists or acknowledging that economic theory reflects Western assumptions, Tuck and Yang’s framework demands a harder question: does this reform leave intact the property rights regime, the legal structures, and the extractive relationships that constitute settler-colonial economics?

12.2 Kvangraven and Kesar on Decolonising Economics Teaching

Ingrid Harvold Kvangraven and Surbhi Kesar bring Tuck and Yang’s framework into direct dialogue with economics as a discipline. Drawing on a survey of 498 economists across mainstream, heterodox, and non-economics departments, they examine how economists understand the decolonisation agenda and what they believe economics teaching should include.

Their central finding is structural: the prevailing conception of rigour in economics — characterised by mathematical formalism, quantitative methods, and political neutrality — actively forecloses the decolonisation of the discipline. Rigour, as currently understood, is not a neutral methodological standard but a gatekeeping mechanism that privileges particular forms of knowledge production and marginalises others.

Eurocentrism in economics, as Kvangraven and Kesar define it, is the assumption that capitalist development follows a universal trajectory originating in Europe — that the European experience of industrialisation, property rights, and market exchange is the norm from which other economies are deviations. This assumption is not stated explicitly in most economics textbooks; it is embedded in the models and the questions they ask (and fail to ask).

The dominant approach treats economics as an objective social science free from political contestation. Decolonisation, by contrast, requires centering structural power relations, critically examining the vantage point from which theorisation takes place, and unpacking the politics of knowledge production. These are precisely the moves that the economics profession’s self-image as a neutral science tends to exclude.

Kvangraven and Kesar identify several specific reforms that their survey suggests are both necessary and contested:

  • Pluralism in theory: expanding economics teaching to include heterodox frameworks (post-Keynesian, institutionalist, feminist, Marxist) rather than treating neoclassical theory as the uniquely scientific approach.
  • Historical and contextual grounding: teaching economic history, development history, and the history of colonialism as integral to economic analysis rather than as supplementary context.
  • Epistemic justice: taking seriously economic knowledge produced outside elite Western institutions — including Indigenous knowledge systems, economic thought from the Global South, and non-English-language scholarship.

The survey reveals significant variation across department type and geography: economists in heterodox and non-economics departments are far more receptive to the decolonisation agenda than those in mainstream economics departments, and economists in the Global South are more likely to see Eurocentrism as a serious problem.

12.3 Economics, Knowledge, and Power

The readings in this final chapter converge on a challenge to the self-image of economics as a neutral, technical discipline. From Tuck and Yang’s insistence on the materiality of colonialism to Kvangraven and Kesar’s structural analysis of how rigour forecloses pluralism, the argument is that economics is not merely a tool applied to pre-given economic phenomena. It is itself a knowledge-producing institution embedded in — and partly constitutive of — the structures of inequality it purports to analyse.

Bringing feminist philosophy of economics (Chapters 6 and 11) together with decolonial critique suggests that any adequate philosophy of economics must address not only how economic knowledge is produced and evaluated (methodology), and not only what economic concepts mean (conceptual analysis), but also whose knowledge counts, whose experience is rendered invisible, and whose interests are served by the categories and frameworks we take for granted.

Back to top