ENVS 410: AI, Climate, and Environmental Justice

Estimated study time: 53 minutes

Table of contents

  • Why make it up
  • Chapter 1: Two Crises, One Technology — Framing the Relationship
  • Chapter 2: AI for Climate Mitigation — Electricity, Transport, and Buildings
  • Chapter 3: AI for Climate Adaptation — Agriculture, Disaster Response, and Modelling
  • Chapter 4: The Carbon Footprint of AI
  • Chapter 5: Data Centres as Material Infrastructure
  • Chapter 6: Indigenous Data Sovereignty and Climate Justice
  • Chapter 7: Environmental Justice and the AI Infrastructure Chain
  • Chapter 8: Toward Just and Sustainable AI — Governance, Practice, and Futures

Why make it up
UW’s environment and sustainability programs (ENVS 200/205, SFM 101/102, ERS 215) treat technology generally but have nothing specific on AI’s bidirectional relationship to the climate crisis. This course covers both sides: AI for climate (Rolnick et al.’s Tackling Climate Change with Machine Learning and the Climate Change AI community) and AI as a climate problem (Strubell on training energy, Patterson on carbon emissions, Crawford and Bender on extraction, Hogan and Vonderau on data centres, Brevini on whether AI can be reconciled with planetary boundaries). The environmental justice strand follows Whyte on Indigenous climate justice, Loewen et al. on Indigenous data sovereignty, and the geographic distribution of data-centre siting. Drawn from Stanford EARTHSYS 173, MIT 1.S992, UC Berkeley ESPM C167, and Cambridge AI4ER.
  • Rolnick, David, et al. “Tackling Climate Change with Machine Learning.” ACM Computing Surveys 55, no. 2 (2022): 1–96.
  • Strubell, Emma, Ananya Ganesh, and Andrew McCallum. “Energy and Policy Considerations for Deep Learning in NLP.” ACL 2019.
  • Patterson, David, et al. “Carbon Emissions and Large Neural Network Training.” arXiv:2104.10350 (2021).
  • Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.
  • Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT 2021.
  • Brevini, Benedetta. Is AI Good for the Planet? Polity, 2022.
  • Hogan, Mél, and Asta Vonderau. “Perforating Screens: Thinking Through the Materiality of Data Centres.” Media Fields Journal 11 (2016).
  • Whyte, Kyle Powys. “Our Ancestors’ Dystopia Now: Indigenous Conservation and the Anthropocene.” In Routledge Companion to the Environmental Humanities, edited by Ursula Heise, Jon Christensen, and Michelle Niemann. Routledge, 2017.
  • Loewen, Dallas P., et al. “Data Colonialism and Climate Change: When AI Solutionism Meets Indigenous Data Sovereignty.” Ecology and Society 27, no. 4 (2022).
  • Schwartz, Roy, et al. “Green AI.” Communications of the ACM 63, no. 12 (2020): 54–63.
  • Dodge, Jesse, et al. “Measuring the Carbon Intensity of AI in Cloud Instances.” FAccT 2022.
  • Jumper, John, et al. “Highly Accurate Protein Structure Prediction with AlphaFold.” Nature 596 (2021): 583–589.
  • Kelley, Colin P., et al. “Climate Change in the Fertile Crescent and Implications of the Recent Syrian Drought.” PNAS 112, no. 11 (2015): 3241–3246.
  • Online resources: Climate Change AI (climatechange.ai) — organisation and NeurIPS 2019 workshop paper; Stanford EARTHSYS 173 syllabus; MIT 1.S992 materials; UC Berkeley ESPM C167 readings; Cambridge AI for the Environment Research programme (AI4ER); IPCC AR6 synthesis report.

Chapter 1: Two Crises, One Technology — Framing the Relationship

The opening question of this course is deceptively simple: is artificial intelligence good or bad for the climate? In the discourse of technology journalism and corporate sustainability reporting, the answer tends to oscillate rapidly between utopian and dystopian poles — AI will solve climate change, or AI will cook the planet. Neither of these framings survives careful scrutiny. The relationship between artificial intelligence and the climate crisis is genuinely bidirectional, recursive, and shot through with political economy, and the first task of serious analysis is to resist the pressure to resolve the tension too quickly. Both the optimistic and the pessimistic claims are grounded in real evidence. The challenge is to hold them together long enough to understand the conditions under which each applies.

The AI for climate frame has its most rigorous articulation in Rolnick et al.’s landmark 2022 survey in ACM Computing Surveys, a 96-page synthesis involving more than twenty researchers from machine learning and climate science communities. Rolnick et al. identify thirteen distinct domains where machine learning techniques could meaningfully contribute to climate mitigation — reducing the emissions that drive warming — and to climate adaptation — adjusting human and ecological systems to changes that are now unavoidable. The domains range across electricity systems, transportation, buildings, industry, agriculture, carbon capture monitoring, climate modelling, disaster response, and biodiversity conservation. What the survey makes clear is that the relationship between ML and climate is not a single application but an entire ecology of potential interventions, operating at timescales from the millisecond (real-time grid balancing) to the decadal (long-range climate projection). The Climate Change AI organisation, which grew from the NeurIPS 2019 workshop that preceded the survey, has since become the primary forum for this research community, and its output makes a credible empirical case that AI capabilities can be redirected toward climate-relevant ends.

The AI as climate problem frame is equally grounded in evidence, though its quantitative articulation came later and remains more contested methodologically. The seminal work is Strubell, Ganesh, and McCallum’s 2019 paper, which for the first time applied systematic energy accounting to the process of training large neural language models. Their finding — that training a large transformer model with neural architecture search produced CO₂-equivalent emissions approximately five times the lifetime emissions of an average American car — landed as a genuine shock in a research community that had implicitly treated computation as immaterial. Patterson et al.’s 2021 response from Google added methodological refinements around hardware efficiency and grid carbon intensity, but did not dissolve the fundamental concern: that as AI models grow larger and more widely deployed, their aggregate energy and carbon costs are substantial and rising. The manufacturing dimension is equally significant. Crawford’s Atlas of AI documents the extraction of rare earth elements and other minerals required to build AI hardware, the environmental and human rights conditions in those supply chains, and the water consumption of data centre cooling systems. The physical substrate of AI, Crawford argues, is anything but immaterial.

The third strand of the course’s argument is the one that most fundamentally reframes the other two: environmental justice. Neither the benefits of AI climate solutions nor the costs of AI infrastructure are distributed randomly across the global population. They follow patterns that are legible from colonial history, from the political economy of resource extraction, and from the geographic logic of where power plants, data centres, and mines get sited. The communities most exposed to the intensifying impacts of climate change — through flooding, drought, heat, and sea-level rise — are overwhelmingly in the Global South, in Indigenous territories, and in low-income communities of colour in the Global North. These are not the communities primarily developing or deploying AI climate solutions, nor are they the communities that receive the greatest share of AI’s economic benefits. But they are frequently the communities bearing the heaviest costs of AI infrastructure: the water stress created by data centre cooling, the pollution from e-waste processing, the dispossession associated with renewable energy siting on Indigenous lands. Whyte’s analysis of Indigenous peoples as having already experienced their own version of a climate collapse — through centuries of colonial disruption to ecological relationships — insists that any adequate climate justice framework must centre these histories rather than treating climate as a novel and universal problem.

The course proceeds in three movements. The first (Chapters 2 and 3) surveys the AI for climate applications in detail, examining both the technical promise and the equity limitations of using ML for mitigation and adaptation. The second movement (Chapters 4 and 5) turns the critical lens on AI itself, examining the energy and carbon costs of training and inference, and then widening the frame to the full material infrastructure of data centres, supply chains, and hardware cycles. The third movement (Chapters 6 and 7) develops the environmental justice analysis, working through Indigenous data sovereignty, the geography of data centre siting, and the sacrifice-zone logic of AI infrastructure. Chapter 8 asks what a just and sustainable AI for climate would actually require — institutionally, technically, and politically — and why the answer is not merely a matter of switching to renewable energy.

The Rolnick et al. taxonomy organises ML climate applications along two axes: the relevant sector of the economy (electricity, transport, buildings, industry, agriculture, land use, climate science, societies) and the type of intervention (mitigation vs. adaptation vs. fundamental research). The taxonomy is not a claim that all applications are equally promising or equally ready for deployment; the paper is explicit that societal impact depends on factors beyond technical capability. But it provides the most comprehensive map currently available of where ML research effort could be directed toward climate ends.
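To make the two-axis structure concrete, here is a minimal sketch of the taxonomy as a Python data structure. The sector and intervention labels follow the survey’s organisation, but the example entries are paraphrased for illustration rather than an exhaustive reproduction of the paper.

```python
# Illustrative sketch of the Rolnick et al. two-axis taxonomy as a data
# structure: sectors on one axis, intervention types on the other.
# The application entries are paraphrased examples, not a full inventory.
TAXONOMY = {
    "electricity_systems": {
        "mitigation": ["renewable generation forecasting", "grid balancing"],
        "adaptation": [],
        "fundamental_research": ["accelerated materials discovery"],
    },
    "transportation": {
        "mitigation": ["route optimisation", "managed EV charging"],
        "adaptation": [],
        "fundamental_research": [],
    },
    "climate_science": {
        "mitigation": [],
        "adaptation": ["model emulation", "statistical downscaling"],
        "fundamental_research": ["hybrid physics-ML modelling"],
    },
}

def applications(intervention: str) -> list[str]:
    """List all applications of a given intervention type across sectors."""
    return [app for sector in TAXONOMY.values()
            for app in sector.get(intervention, [])]

print(applications("mitigation"))
```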
Throughout this course, a recurring methodological point concerns the difference between technical potential and deployed impact. Many of the AI climate applications surveyed by Rolnick et al. are still at the research or pilot stage; deployment at scale requires not only technical capability but institutional adoption, regulatory frameworks, business model viability, and political support. The history of energy technology is full of technically superior solutions that failed to diffuse because the surrounding sociotechnical system was not aligned with them. Holding the distinction between potential and impact in mind is essential for honest evaluation.

Chapter 2: AI for Climate Mitigation — Electricity, Transport, and Buildings

Among the thirteen application domains identified by Rolnick et al., electricity systems, transportation, and buildings together account for roughly seventy percent of global energy-related CO₂ emissions, which makes them the priority focus for climate mitigation. In each of these sectors, AI capabilities intersect with decarbonisation challenges in ways that are both technically concrete and practically complex. The promise is real; so are the barriers to realising it.

Smart grid optimisation addresses one of the central technical challenges of the energy transition: integrating large amounts of variable renewable generation — solar and wind — into a grid system designed around dispatchable fossil fuels. Because the sun does not always shine and the wind does not always blow, a grid with high renewable penetration requires sophisticated real-time balancing of supply and demand. AI-based forecasting tools can substantially improve the accuracy of predictions for both renewable generation and electricity demand, enabling grid operators to hold less expensive backup capacity in reserve and to schedule dispatch more efficiently. DeepMind’s work applying reinforcement learning to the cooling systems of Google’s data centres provides a well-publicised case study: the system achieved a roughly forty percent reduction in energy used for cooling by learning patterns of heat generation and cooling response that were too complex for rule-based control systems to exploit. The same reinforcement learning approaches are being adapted for district-level HVAC management and for real-time grid frequency regulation.
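A minimal sketch of the forecasting capability described above, using synthetic data in place of real weather and plant telemetry. The feature choices and the toy generation model are illustrative assumptions, not a production forecasting system.

```python
# Sketch of ML-based renewable generation forecasting. All data is synthetic;
# a real system would use numerical weather prediction output and telemetered
# plant data rather than these invented features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(0, 1, n),    # cloud cover fraction
    rng.uniform(0, 90, n),   # solar zenith angle (degrees)
    rng.normal(15, 8, n),    # air temperature (°C)
])
# Synthetic plant output: high sun and low cloud -> high generation, plus noise.
y = np.cos(np.radians(X[:, 1])) * (1 - 0.8 * X[:, 0])
y = 100 * y + rng.normal(0, 3, n)   # MW, with measurement noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"held-out forecast R^2: {model.score(X_test, y_test):.2f}")
```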

In transportation, the potential AI contributions operate at multiple scales. At the vehicle level, AI-assisted powertrain control can improve the efficiency of hybrid and electric vehicles by optimising energy recovery and battery management. At the network level, route optimisation tools — Google Maps’ fuel-efficient routing being the most widely deployed example, which Google reports has prevented more than a million metric tonnes of CO₂-equivalent emissions since launch — reduce unnecessary fuel consumption by directing vehicles away from congested or inefficient paths. At the systems level, AI-assisted scheduling and dispatch for freight and public transit can reduce empty running and improve load factors. The relationship between AI and vehicle electrification is itself bidirectional: electrification creates new demand management challenges for grids (charging millions of electric vehicles at predictable times creates demand spikes), and AI tools for managed charging and vehicle-to-grid integration are part of the proposed solution.
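The managed-charging idea can be illustrated with a short scheduling sketch: given a hypothetical hourly carbon-intensity forecast, charge during the cleanest hours of an overnight window. All numbers below are invented for illustration.

```python
# Sketch of managed ("smart") EV charging: schedule a fixed charging
# requirement into the lowest-carbon hours of a night window.
HOURLY_GCO2_PER_KWH = [  # hypothetical 24h forecast, gCO2eq/kWh
    420, 410, 400, 390, 380, 370, 300, 250, 180, 150, 140, 130,
    135, 150, 180, 240, 330, 450, 520, 540, 530, 500, 470, 440,
]

def schedule_charging(energy_kwh: float, charger_kw: float,
                      window: range) -> list[int]:
    """Pick the lowest-carbon hours in `window` until demand is met."""
    hours_needed = int(-(-energy_kwh // charger_kw))  # ceiling division
    ranked = sorted(window, key=lambda h: HOURLY_GCO2_PER_KWH[h % 24])
    return sorted(ranked[:hours_needed])

# Charge 50 kWh at 11 kW sometime between 18:00 and 08:00 the next morning.
hours = schedule_charging(50, 11, range(18, 32))
print("charge during hours:", [h % 24 for h in hours])
```

A real implementation would also respect contiguity constraints, departure deadlines, and battery limits; the greedy hour-picking here is only the core idea.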

Buildings represent the application domain where the gap between technical potential and actual deployment is perhaps most glaring. Building heating, ventilation, and air conditioning systems account for a substantial fraction of electricity consumption in developed economies, and their control logic in most existing buildings is surprisingly primitive — time-based schedules or simple thermostat rules that take no account of occupancy patterns, weather forecasts, or real-time electricity prices. ML-based building energy management systems, including reinforcement learning controllers that optimise HVAC operation across the multiple interacting variables of comfort, energy cost, and carbon intensity, have demonstrated substantial efficiency improvements in controlled trials. The challenge is that buildings are individually unique, that retrofitting existing building control infrastructure is expensive, and that building owners face split incentives when energy costs are paid by tenants. Rolnick et al. note that the technical potential of AI in buildings is large but that realising it requires engagement with business models and regulatory frameworks, not just algorithmic improvement.
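A sketch of the kind of multi-objective reward such a controller might optimise follows. The weights, comfort band, and units are illustrative assumptions rather than values from any deployed system.

```python
# Sketch of the reward an RL-based HVAC controller might maximise, trading
# off energy cost, carbon intensity, and occupant comfort. All weights and
# the comfort band are illustrative assumptions.
def hvac_reward(energy_kwh: float, price_per_kwh: float,
                gco2_per_kwh: float, indoor_temp_c: float,
                setpoint_c: float = 21.0, comfort_band_c: float = 1.5,
                w_cost: float = 1.0, w_carbon: float = 0.05,
                w_comfort: float = 2.0) -> float:
    cost_term = w_cost * energy_kwh * price_per_kwh
    carbon_term = w_carbon * energy_kwh * gco2_per_kwh / 1000  # kgCO2eq
    discomfort = max(0.0, abs(indoor_temp_c - setpoint_c) - comfort_band_c)
    return -(cost_term + carbon_term + w_comfort * discomfort ** 2)

# One timestep: 4 kWh consumed, moderately clean grid, room slightly warm.
print(hvac_reward(4.0, 0.12, 300, 23.1))
```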

The most important theoretical caveat for this entire chapter is the rebound effect, known in ecological economics as the Jevons paradox after William Stanley Jevons, the nineteenth-century economist who observed that increased efficiency in coal use led to greater rather than less total coal consumption. The mechanism is straightforward: if AI-driven efficiency reduces the cost of energy services, the reduction in cost stimulates increased demand for those services, partially or wholly offsetting the efficiency gain. In the context of AI-assisted building energy management, more efficient HVAC systems may lead building owners to set more aggressive comfort targets; in the context of route optimisation, more efficient freight routing may expand the economic viability of longer-distance supply chains. The rebound effect does not eliminate the value of efficiency improvements, but it does mean that efficiency improvements alone cannot guarantee absolute emissions reductions. Schwartz et al. make an analogous point about Green AI: more computationally efficient AI enables more AI, and whether the net result is lower or higher total emissions depends on whether growth in AI deployment is bounded by other constraints.
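The rebound arithmetic can be made explicit with a small worked example. The 30 percent efficiency gain and the price elasticity of -0.6 are illustrative assumptions, not empirical estimates.

```python
# Worked example of the rebound effect. AI control cuts the energy needed per
# unit of a service (e.g., cooling) by 30%, lowering its effective price, and
# demand responds with an assumed price elasticity of -0.6.
efficiency_gain = 0.30   # 30% less energy per unit of service
elasticity = -0.6        # % change in demand per % change in price

price_change = -efficiency_gain             # the service is 30% cheaper
demand_change = elasticity * price_change   # demand rises by 18%
net_energy = (1 + demand_change) * (1 - efficiency_gain)

print(f"demand rises by {demand_change:+.0%}")
print(f"net energy use: {net_energy:.0%} of baseline")
# With elasticity beyond -1 ("backfire"), net energy use exceeds 100% of
# baseline: the full Jevons case.
```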

Power Usage Effectiveness (PUE) is the standard metric for data centre energy efficiency, defined as total facility energy divided by IT equipment energy. A PUE of 1.0 would mean that all energy consumed by a data centre goes directly to computation, with none wasted on cooling, lighting, or power conversion. Most large hyperscale data centres operate at PUEs between 1.1 and 1.3; older facilities can exceed 2.0. The metric was developed by the Green Grid industry consortium and is widely used in corporate sustainability reporting, though it measures only overhead efficiency and says nothing about the carbon intensity of the electricity supply.
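The definition translates directly into a one-line calculation:

```python
# PUE as defined above: total facility energy divided by IT equipment energy.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# A hyperscale facility: 120 GWh total, 100 GWh to servers -> PUE 1.2,
# i.e. 20% overhead for cooling, lighting, and power conversion.
print(f"PUE = {pue(120e6, 100e6):.2f}")
```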
The AI for mitigation frame tends to focus on use cases where AI is a net contributor to emissions reduction. A complete accounting would also need to include uses of AI that increase emissions: AI-optimised targeting systems that increase the efficiency of fossil fuel exploration and extraction; AI logistics tools that enable faster and more frequent consumer shipping; AI-generated synthetic media that increases consumption of streaming infrastructure. Rolnick et al. note these risks briefly but do not quantify them. The net emissions impact of AI across all its applications — not just the climate-relevant subset — remains genuinely uncertain.

Chapter 3: AI for Climate Adaptation — Agriculture, Disaster Response, and Modelling

Where mitigation involves reducing the emissions that drive climate change, adaptation involves adjusting human societies and ecological systems to the changes that are already locked in. The IPCC AR6 synthesis report makes clear that even under the most optimistic mitigation scenarios, substantial adaptation is now unavoidable: sea levels will continue to rise, precipitation patterns will shift, and extreme weather events will intensify for decades regardless of near-term emissions trajectories. AI applications in adaptation are therefore not an alternative to mitigation but a complement to it — and in many communities in the Global South, adaptation is the more urgent practical priority.

Climate modelling presents one of the most technically mature interfaces between machine learning and climate science. Physics-based general circulation models are computationally expensive: a single high-resolution simulation run covering several centuries of climate evolution requires thousands of processor-hours on supercomputer-class infrastructure. This computational cost creates a fundamental tension with the need for large ensembles of simulations to quantify uncertainty, and for high-resolution regional projections to inform local adaptation planning. ML approaches address this tension in two ways. First, neural network emulators — often called “climate model surrogates” or “climate model emulators” — can learn to reproduce the statistical output of a physics-based model at a fraction of the computational cost, enabling rapid exploration of parameter space and scenario analysis. Second, machine learning downscaling techniques can take coarse-resolution output from global models and produce statistically consistent local projections by learning from historical relationships between large-scale and local climate variables. Both approaches have limitations — emulators can fail outside the distribution of conditions they were trained on, and statistical downscaling can misrepresent the physical mechanisms driving local climate — but they have become important practical tools for expanding the reach of climate science.
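A minimal emulator sketch follows, with a cheap analytic function standing in for the expensive physics-based model. Real emulators are trained on archived simulation output; the functional form and parameter ranges here are purely illustrative.

```python
# Sketch of a climate-model emulator: fit a neural network to reproduce the
# input-output behaviour of an expensive simulator, stood in for here by a
# cheap analytic function.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

def expensive_simulator(params: np.ndarray) -> np.ndarray:
    """Stand-in for a physics model: (co2_ppm, aerosol index) -> warming."""
    co2, aerosol = params[:, 0], params[:, 1]
    return 3.0 * np.log2(co2 / 280.0) - 0.8 * aerosol

# Run the "simulator" a limited number of times to build training data.
params = np.column_stack([rng.uniform(280, 1120, 400),   # CO2 (ppm)
                          rng.uniform(0.0, 1.0, 400)])   # aerosol index
warming = expensive_simulator(params)

emulator = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0),
).fit(params, warming)

# The emulator can now be queried cheaply across the parameter space, though
# extrapolation outside the training range is unreliable, as noted above.
query = np.array([[560.0, 0.3]])
print(f"emulated warming at doubled CO2: {emulator.predict(query)[0]:.2f} °C")
```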

Agriculture is the domain where adaptation AI is most immediately consequential for food security. Crop yield prediction from satellite-derived vegetation indices, weather station data, and soil moisture measurements is now a mature application area, with models capable of producing county-level yield estimates weeks before harvest. Such forecasts enable governments to pre-position food aid, stabilise commodity markets, and plan irrigation allocations. Precision irrigation, which uses soil moisture sensors and weather forecasts to schedule and target water application, can reduce agricultural water use by twenty to fifty percent in controlled trials — a significant adaptation benefit in regions facing increasing drought stress. For smallholder farmers in sub-Saharan Africa and South Asia, the most relevant AI application may be disease and pest detection in individual plants: mobile applications like Plantix use convolutional neural networks trained on images of plant disease to provide diagnosis and treatment recommendations via smartphone camera, making expert agronomic knowledge accessible to farmers who have no other access to it.
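A compact sketch of the vegetation-index pipeline described above, on synthetic reflectance data. The NDVI formula is standard; the yield relationship and all numbers are invented for illustration.

```python
# Sketch of satellite-based crop monitoring: compute NDVI from red and
# near-infrared reflectance, then regress yields on mid-season NDVI.
# Data is synthetic; real pipelines use Sentinel-2 or MODIS imagery and
# surveyed yield statistics.
import numpy as np
from sklearn.linear_model import LinearRegression

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red)

rng = np.random.default_rng(2)
red = rng.uniform(0.05, 0.15, 300)   # red-band reflectance
nir = rng.uniform(0.3, 0.6, 300)     # near-infrared reflectance
vi = ndvi(nir, red)
yield_t_ha = 2.0 + 6.0 * vi + rng.normal(0, 0.4, 300)   # tonnes/hectare

model = LinearRegression().fit(vi.reshape(-1, 1), yield_t_ha)
print(f"predicted yield at NDVI 0.7: {model.predict([[0.7]])[0]:.1f} t/ha")
```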

Disaster risk reduction is a domain where the stakes of AI performance are particularly high. Google’s Flood Forecasting Initiative, deployed initially in India and Bangladesh and subsequently extended to Africa, uses a combination of hydrological modelling and ML-based inundation mapping to issue flood warnings with lead times of hours to days. Early evaluations suggest that the system’s alerts reach tens of millions of people who would previously have had no flood warning at all — a meaningful humanitarian contribution. Wildfire spread prediction combines fuel moisture data, topographic models, and weather forecasts to produce probabilistic maps of fire spread, informing evacuation decisions and resource deployment. ShakeAlert, the earthquake early warning system operational on the US West Coast, uses automated signal processing, increasingly supplemented by machine learning, to distinguish earthquake signals from noise in seismometer data and to rapidly estimate ground shaking intensity. The fundamental limitation in all of these applications is the non-linearity and stochasticity of the physical systems being modelled: even excellent ML systems cannot provide reliable predictions for extreme events that fall outside the historical distribution of training data, and it is precisely such events that are becoming more frequent under climate change.
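The warning-decision logic can be sketched independently of any particular forecasting model: given an ensemble of predicted river stages, alert when the exceedance probability passes a threshold. The stages and threshold below are illustrative assumptions.

```python
# Sketch of turning an ensemble flood forecast into a warning decision:
# issue an alert when the predicted probability of exceeding the danger
# stage passes a threshold.
import numpy as np

def should_alert(ensemble_stages_m: np.ndarray, danger_stage_m: float,
                 prob_threshold: float = 0.3) -> bool:
    """Alert if enough ensemble members exceed the danger stage."""
    p_exceed = float(np.mean(ensemble_stages_m > danger_stage_m))
    return p_exceed >= prob_threshold

rng = np.random.default_rng(3)
forecast = rng.normal(loc=4.6, scale=0.5, size=50)   # 50-member ensemble (m)
print("issue flood warning:", should_alert(forecast, danger_stage_m=5.0))
```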

The climate AI inequality problem runs through all of adaptation AI and deserves explicit analysis. The communities most exposed to intensifying climate hazards — in tropical and subtropical regions, in low-lying coastal areas, in smallholder agricultural communities — are precisely those where data infrastructure is weakest. Historical weather station density in sub-Saharan Africa is a small fraction of station density in Europe and North America; satellite time series adequate for crop monitoring require computational infrastructure for processing that is not uniformly accessible; and ground truth data for model training is sparse in the regions where it is most needed. The result is a systematic bias in adaptation AI toward applications that work well in data-rich Northern contexts and perform poorly or require expensive local adaptation in Southern contexts. This is not merely a technical gap to be filled by better data collection — it reflects underlying political and economic structures that determine who generates data, who archives it, and who has the resources to build systems that use it. Loewen et al. make the sharper point that in many cases the data gap is not accidental but the result of active historical decisions: colonial science extracted environmental knowledge from the Global South without establishing locally managed data archives, and the contemporary AI adaptation agenda often proposes to fill the resulting gap by importing Northern-trained models rather than by building local data infrastructure and modelling capacity.

Downscaling in climate science refers to the process of deriving fine-resolution local climate information from coarser-resolution global or regional model output. Statistical downscaling uses empirical relationships between large-scale atmospheric variables (temperature gradients, pressure patterns) and local surface variables (precipitation, temperature at a specific location) learned from historical observations. Dynamical downscaling runs a high-resolution regional climate model nested within a coarser global model. Machine learning approaches to statistical downscaling, including convolutional neural networks applied to climate model output grids, have shown performance comparable to or exceeding traditional statistical methods in many benchmark comparisons.
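A minimal statistical-downscaling sketch in the spirit of this definition, using a linear model on synthetic data. Real applications train on reanalysis fields and station records; the coefficients and noise model here are invented.

```python
# Sketch of statistical downscaling: learn an empirical mapping from
# coarse-resolution model variables to a local station variable.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n = 2000
# Coarse-grid predictors: large-scale temperature, pressure anomaly, humidity.
X_coarse = np.column_stack([rng.normal(10, 5, n),
                            rng.normal(0, 8, n),
                            rng.uniform(0.2, 0.9, n)])
# Local station temperature depends on the large-scale state plus local noise
# (elevation and land-cover effects the coarse model cannot resolve).
y_local = (0.9 * X_coarse[:, 0] - 0.1 * X_coarse[:, 1]
           + 3.0 * X_coarse[:, 2] + rng.normal(0, 1.2, n))

downscaler = Ridge().fit(X_coarse, y_local)
print(f"in-sample fit (R^2): {downscaler.score(X_coarse, y_local):.2f}")
```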
Jumper et al.'s AlphaFold result — highly accurate prediction of protein three-dimensional structure from amino acid sequence — is occasionally cited in discussions of AI for climate as demonstrating that AI can crack problems of apparently intractable scientific complexity. The analogy requires care. AlphaFold succeeded because protein structure prediction has a clear, well-defined objective function (native structure), massive amounts of training data (the Protein Data Bank), and the ability to verify predictions against crystallographic experiments. Climate prediction faces a more difficult situation: the objective is not a single target state but a probability distribution over many outcomes, training data is limited by the length of the instrumental record, and verification for novel climate regimes is by definition unavailable in advance. The lesson from AlphaFold may be less about AI's ability to solve climate problems than about the importance of investing in the data infrastructure that makes AI effective.

Chapter 4: The Carbon Footprint of AI

The systematic accounting of AI’s energy and carbon costs began in earnest in 2019, when Strubell, Ganesh, and McCallum published what became an immediately controversial paper at the Association for Computational Linguistics annual conference. Their methodology was straightforward: measure electricity consumption during training of large neural language models using hardware power monitoring, multiply by the average US grid carbon intensity, and compare the result to intuitive reference points. The headline findings — that training a single large transformer model produced CO₂-equivalent emissions on the order of a trans-American flight, and that a full neural architecture search experiment, which involves training thousands of candidate models, produced roughly five times the lifetime emissions of an average American car — created a rupture in a research community that had treated computation as effectively weightless. The paper’s limitations were acknowledged by its authors: the calculations used average grid carbon intensity rather than the intensity of specific data centres, and they did not account for hardware efficiency differences across facilities. But the order-of-magnitude finding that large model training has a non-trivial carbon footprint was not seriously contested.
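The underlying accounting is simple enough to state as code: energy is power times time, inflated by facility overhead (PUE) and multiplied by grid carbon intensity. The hardware counts and intensities below are illustrative assumptions, chosen to show how strongly the answer depends on the grid.

```python
# Strubell-style accounting in miniature: energy = power x time, scaled by
# data centre overhead (PUE), then multiplied by grid carbon intensity.
# All parameter values are illustrative assumptions.
def training_emissions_kg(gpu_count: int, gpu_watts: float, hours: float,
                          pue: float, gco2_per_kwh: float) -> float:
    it_kwh = gpu_count * gpu_watts * hours / 1000
    facility_kwh = it_kwh * pue
    return facility_kwh * gco2_per_kwh / 1000

# A hypothetical run: 512 GPUs at 300 W for two weeks, PUE 1.1.
run = dict(gpu_count=512, gpu_watts=300, hours=14 * 24, pue=1.1)
for grid, intensity in [("low-carbon grid", 50), ("coal-heavy grid", 700)]:
    kg = training_emissions_kg(**run, gco2_per_kwh=intensity)
    print(f"{grid}: {kg / 1000:.1f} tCO2e")
```

The fourteen-fold spread between the two grids is the same point Dodge et al. make empirically: where and when you compute matters as much as how much you compute.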

Patterson et al.’s 2021 response, produced by researchers at Google and UC Berkeley, addressed the methodological limitations of the Strubell et al. approach by incorporating hardware-specific efficiency data, actual data centre Power Usage Effectiveness (PUE) measurements, and region-specific grid carbon intensity. Their estimate for GPT-3 training — approximately 552 tonnes of CO₂-equivalent — was accompanied by a recalculation of Strubell et al.’s neural architecture search case that came out far lower than the original grid-average figure, and the paper argued that the use of purpose-built AI accelerators in modern data centres, combined with the increasingly renewable energy procurement of major cloud providers, substantially reduces the carbon intensity of AI training relative to a naive grid-average calculation. The disagreement between the two methodologies reflects a genuine empirical uncertainty: the carbon footprint of any given training run depends on a complex of interacting factors — the specific hardware, the specific facility, the specific time of day and season, and the specific grid region — and none of these are routinely disclosed by AI developers. Dodge et al.’s 2022 FAccT paper makes this methodological challenge explicit, demonstrating that the same computation run at different times and in different regions on the same cloud infrastructure can differ several-fold in emissions as the grid mix shifts between high-renewable and high-fossil periods.

The distinction between training and inference is essential for understanding AI’s aggregate energy footprint but is often elided in popular discussions. Training is energy-intensive but one-time: a large language model may require millions of GPU-hours to train, but once trained, the same weights are used for all subsequent inference. Inference — running the trained model to produce outputs in response to queries — is less energy-intensive per query but scales with usage. For a widely deployed model like the systems underlying ChatGPT or Google’s AI Overviews, the cumulative energy consumption of inference across hundreds of millions of daily queries can substantially exceed the one-time training cost over the model’s operational lifetime. Estimates of ChatGPT’s daily energy consumption range from hundreds of megawatt-hours to a few gigawatt-hours, depending on methodology and assumed query volume — numbers comparable to the daily electricity consumption of tens of thousands of households. As AI systems are embedded in more and more daily workflows, inference energy becomes the dominant term in the lifecycle energy budget, and it scales not with the size of any single training run but with the breadth of deployment.
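A back-of-envelope sketch of the training-versus-inference comparison follows. All three input figures are illustrative assumptions, since none are publicly disclosed with precision.

```python
# Sketch of the training-versus-inference lifecycle comparison. With a large
# deployed user base, cumulative inference energy overtakes the one-time
# training cost quickly. All figures are illustrative assumptions.
TRAINING_MWH = 1_300       # one-time training energy (assumed)
WH_PER_QUERY = 0.3         # inference energy per query (assumed)
QUERIES_PER_DAY = 200e6    # deployment scale (assumed)

inference_mwh_per_day = QUERIES_PER_DAY * WH_PER_QUERY / 1e6
breakeven_days = TRAINING_MWH / inference_mwh_per_day
print(f"inference: {inference_mwh_per_day:.0f} MWh/day; "
      f"training cost overtaken after {breakeven_days:.0f} days")
```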

Schwartz et al.’s Green AI manifesto, published in Communications of the ACM in 2020, articulated the research community’s response to these findings in a form that was both practical and critical. The practical proposal was straightforward: AI research papers should routinely report the computational cost of experiments alongside their accuracy results, using standardised metrics like floating-point operations or equivalent CO₂ emissions, so that the field can optimise for efficiency as well as performance. The deeper critique was of what Schwartz et al. called “Red AI” — the dominant research paradigm in which performance improvements are achieved primarily by scaling up computation, with diminishing marginal returns and rapidly increasing costs. The observation that progress on many benchmark tasks follows a smooth relationship between log-scale compute and performance has led major AI labs to invest in progressively larger models as the primary research strategy, a dynamic that systematically privileges resource-rich institutions and that treats computational efficiency as a secondary concern. Green AI would require research incentive structures that reward finding the same performance improvements with less computation — a reorientation that the paper acknowledges is difficult to achieve without coordinated action by conferences, journals, and funding agencies.

Carbon intensity of electricity refers to the amount of CO₂-equivalent greenhouse gases emitted per unit of electrical energy generated, typically expressed in grams of CO₂-equivalent per kilowatt-hour (gCO₂eq/kWh). Grid carbon intensity varies enormously by region, time of day, and season, depending on the mix of generation sources. The French grid, which is heavily nuclear, has a carbon intensity of approximately 50 gCO₂eq/kWh; the Polish grid, which is heavily coal, exceeds 700 gCO₂eq/kWh. Within a single grid, intensity varies by time of day: in California, midday hours when solar generation peaks have much lower intensity than evening hours when gas peakers provide backup capacity. Dodge et al. demonstrate that this temporal variation is sufficient to reduce the carbon footprint of AI training by factors of two to five if computation is scheduled for low-intensity periods.
The Strubell et al. paper and its aftermath illustrate a general phenomenon in the sociology of quantification: making previously invisible costs visible through measurement creates both political opportunity and political resistance. The resistance took the form of methodological critiques (some legitimate, some motivated), claims that the numbers were being misinterpreted, and arguments that the AI carbon footprint was small relative to other sectors. The political opportunity was the emergence of a research community focused on computational efficiency and carbon accounting, the Climate Change AI organisation's explicit engagement with the energy costs of AI, and an ongoing conversation within major AI labs about the sustainability of the current scaling paradigm. The numbers are genuinely uncertain, but the underlying question — what is the aggregate energy and carbon cost of the AI systems we are building? — is both answerable and important.

Chapter 5: Data Centres as Material Infrastructure

Hogan and Vonderau’s intervention in media studies, developed through their work on what Hogan calls the “perforating” of the cloud metaphor, begins with a simple but rhetorically powerful move: insisting on the physical reality of data centres against the dematerialising tendency of cloud computing rhetoric. When users store data “in the cloud” or run computation “on cloud infrastructure,” the spatial and material referents of these operations are systematically obscured. Data centres are not clouds. They are industrial buildings, typically occupying tens of thousands of square metres of floor space, containing hundreds of thousands of servers and networking components, consuming electricity at the scale of small cities, and requiring continuous cooling to prevent the heat generated by computation from destroying the hardware. The cloud metaphor, Hogan argues, is not merely innocent imprecision — it actively forecloses the political questions that would arise if the infrastructure were visible.

Water consumption is the dimension of data centre environmental impact that receives least attention in popular discourse but that is arguably as significant as energy consumption in water-stressed regions. Evaporative cooling — in which warm air is passed over water-saturated surfaces and the evaporation of water absorbs heat — is the most energy-efficient cooling method available for large data centres in many climates, and it is the dominant technology in hyperscale facilities operated by Amazon, Google, Microsoft, and Meta. A large data centre consuming several hundred megawatts of electrical power may require several million litres of water per day for cooling — comparable to the daily water use of a town of tens of thousands of residents. The expansion of Microsoft’s data centre operations in the Phoenix metropolitan area, one of the fastest-growing AI infrastructure hubs in the United States, has attracted scrutiny because the Phoenix region is simultaneously one of the areas most stressed by the declining availability of Colorado River water. The direct competition between data centre water use and the needs of agriculture, municipal supply, and ecosystems in the Colorado River basin is a concrete manifestation of the material tensions that Hogan’s analysis anticipates.
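The rough arithmetic behind such claims uses Water Usage Effectiveness (WUE), the Green Grid metric expressing litres of water consumed per kWh of IT energy. The WUE value below is an illustrative assumption; reported values range from well under 0.5 to over 2 depending on cooling technology and climate.

```python
# Rough arithmetic behind the cooling-water claim above. The WUE figure is
# an illustrative assumption for an evaporatively cooled facility.
IT_LOAD_MW = 300
WUE_L_PER_KWH = 0.8   # assumed; operators report roughly ~0.2 to ~2+

it_kwh_per_day = IT_LOAD_MW * 1000 * 24
litres_per_day = it_kwh_per_day * WUE_L_PER_KWH
print(f"~{litres_per_day / 1e6:.1f} million litres of water per day")
```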

The geography of data centre location reflects a specific set of economic and technical constraints that have nothing to do with the geographic distribution of users or computational need. Data centre clusters have emerged in particular locations — northern Virginia (the world’s largest concentration), central Iowa (Microsoft, Google, Meta), rural Oregon and Washington (AWS), Ireland (EU headquarters for many hyperscale providers), Singapore — because those locations combine low land costs, access to high-capacity fibre infrastructure, favourable regulatory environments (including tax incentives), and climate conditions that reduce cooling costs. The data centre desert phenomenon refers to the paradoxical siting of hyperscale facilities in drought-prone, water-scarce regions where land is cheap and renewable energy (solar, wind) is abundant — the American Southwest being the primary example. The combination of cheap land, cheap renewable electricity, and the ability to access cheap water (often through priority water rights held by agricultural interests) has made these regions attractive, but the long-term sustainability of large-scale evaporative cooling in drought-stressed regions is increasingly questioned.

Crawford’s Atlas of AI extends the material analysis from energy and water to the supply chain of AI hardware itself. Building the servers, networking equipment, and storage systems that constitute AI infrastructure requires cobalt for lithium-ion batteries; lithium, primarily from the salt flats of the “lithium triangle” spanning Chile, Bolivia, and Argentina; rare earth elements including neodymium, dysprosium, and lanthanum, most of which are currently mined and processed in Inner Mongolia and Jiangxi Province in China; and gold, silver, and platinum for circuit board manufacturing. Each of these supply chains has its own environmental and human rights profile. Cobalt mining in the Katanga region of the Democratic Republic of Congo is extensively documented to rely on artisanal mining involving child labour and to produce severe contamination of soil and water in surrounding communities. Lithium extraction from the Atacama salar ecosystems is in direct conflict with the water needs of Indigenous Atacameño communities and with the fragile wetland ecosystems that support flamingo populations. Crawford’s point is not that AI hardware cannot be manufactured more responsibly — it is that current manufacturing reflects choices made in the context of a competitive global supply chain that externalises environmental and social costs onto the communities and ecosystems least able to resist them.

The corporate response to this material critique has taken the form of ambitious voluntary commitments: Microsoft’s pledge to be carbon-negative by 2030 and to remove all its historical emissions by 2050; Google’s commitment to match all its electricity consumption with carbon-free energy on a 24/7 basis by 2030; Amazon’s Climate Pledge targeting net-zero carbon by 2040. These commitments represent real financial investments in renewable energy procurement and efficiency improvements, and they have accelerated the deployment of utility-scale solar and wind capacity in the regions where data centres are concentrated. However, they also raise methodological questions about the scope of accountability. Most major technology company sustainability reports focus on Scope 1 and Scope 2 emissions — direct combustion and purchased electricity — while Scope 3 emissions, which include the supply chain emissions embodied in hardware manufacturing and the end-of-life emissions from hardware disposal, are either not reported or reported with substantially lower confidence. The gap between the carbon accounting that companies apply to themselves and the full lifecycle emissions of the AI systems they build and operate is a persistent feature of the current disclosure landscape.

Scope 1, 2, and 3 emissions are categories defined by the Greenhouse Gas Protocol, the dominant standard for corporate carbon accounting. Scope 1 covers direct emissions from sources owned or controlled by the company (e.g., combustion in on-site generators). Scope 2 covers indirect emissions from purchased electricity, heat, or steam. Scope 3 covers all other indirect emissions across the value chain — upstream (supply chain, capital goods manufacture) and downstream (use of sold products, end-of-life treatment). For technology companies, Scope 3 typically represents the large majority of total lifecycle emissions, and it is also the most methodologically challenging to measure accurately.
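A toy scope breakdown makes the structural point concrete; every number below is invented purely to illustrate why Scope 3 typically dominates a technology company inventory.

```python
# Sketch of a GHG Protocol scope breakdown for a hypothetical technology
# company. All figures are invented for illustration only.
inventory_tco2e = {
    "scope_1": {"onsite_generators": 12_000},
    "scope_2": {"purchased_electricity": 180_000},
    "scope_3": {"hardware_manufacturing": 950_000,
                "construction": 220_000,
                "end_of_life": 40_000},
}

for scope, sources in inventory_tco2e.items():
    print(f"{scope}: {sum(sources.values()):>9,} tCO2e")
total = sum(sum(s.values()) for s in inventory_tco2e.values())
share3 = sum(inventory_tco2e["scope_3"].values()) / total
print(f"scope 3 share of total: {share3:.0%}")
```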
The political economy of data centre regulation illustrates the dynamics that make voluntary corporate commitments insufficient on their own. Data centre operators have successfully lobbied for tax incentives (sales tax exemptions on equipment purchases, property tax abatements) in most of the jurisdictions where they operate, creating a competitive dynamic in which regions bid against each other by lowering the regulatory and fiscal burdens on facilities. This same dynamic makes it difficult for any single jurisdiction to impose meaningful mandatory environmental standards — the threat of relocation is credible and has been exercised. Effective regulation likely requires either federal standards or coordinated action among multiple jurisdictions, both of which face significant political obstacles in the current environment.

Chapter 6: Indigenous Data Sovereignty and Climate Justice

Whyte’s environmental justice framework begins from a historical observation that fundamentally reframes the standard climate narrative. In most mainstream climate discourse, climate change is presented as an unprecedented threat to a stable baseline: human societies are adapted to the Holocene climate, and greenhouse gas emissions are pushing the system outside the envelope of human experience, creating new vulnerabilities and requiring new adaptations. For Indigenous peoples, Whyte argues, this framing is historically illiterate. Indigenous communities in North America, Australia, the Pacific, and elsewhere have already experienced the social and ecological disruptions that climate change threatens to produce for others — not through atmospheric warming but through colonial land dispossession, the disruption of seasonal rounds of hunting, fishing, and gathering, the forced relocation of communities, and the systematic dismantling of the governance institutions through which Indigenous peoples managed ecological relationships. What Whyte calls “our ancestors’ dystopia now” is the recognition that the social structures climate change threatens to collapse for settler-colonial societies — stable relationships between communities and specific territories, reliable seasonal patterns of ecological productivity, intergenerational transmission of ecological knowledge — were already destroyed for most Indigenous communities in the last two centuries.

Indigenous data sovereignty names the rights of Indigenous peoples to govern the collection, storage, analysis, and application of data about their communities, territories, and cultural heritage. The concept emerged partly as a critique of the FAIR Principles — the Findable, Accessible, Interoperable, Reusable framework that has become the dominant standard for open data in scientific research. FAIR principles assume that the benefits of data sharing are universal and that barriers to access are inherently problems to be overcome. Indigenous data sovereignty challenges this assumption: in many contexts, making Indigenous community data accessible and interoperable with external systems without community consent and control constitutes a form of extraction that reproduces colonial patterns of resource appropriation in digital form. The CARE Principles — Collective Benefit, Authority to Control, Responsibility, Ethics — developed by the Global Indigenous Data Alliance as an explicit counterweight to FAIR, insist that data governance frameworks must be evaluated by whether they enhance the capacity of data subjects to exercise collective self-determination, not merely by whether data is technically accessible.

In Canada, the OCAP Principles — Ownership, Control, Access, Possession — were developed by the First Nations Information Governance Centre as a practical framework for First Nations data governance in the context of health research and government data collection. Ownership recognises that a community or group owns information collectively, in the same way that an individual owns their personal information. Control affirms the right of First Nations to govern all aspects of research and information management that affect them. Access specifies that First Nations must have access to information and data about themselves and their communities. Possession, perhaps the most operationally specific principle, holds that physical or digital control of data is a mechanism to assert and protect sovereignty — the practical concern being that data stored on external servers or in government databases is vulnerable to use without consent. These principles are directly relevant to the AI for climate context, because many of the data sources that AI climate applications depend on — land cover surveys, biodiversity monitoring data, hydrological records — include or rely upon territories and knowledge systems that belong to First Nations communities.
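As a purely hypothetical illustration of how the Possession and Control principles might surface in a data pipeline, consider an access check that denies use of community-governed data by default unless the community has authorised the specific purpose. The field names and policy structure below are invented, not drawn from any FNIGC specification.

```python
# Hypothetical sketch of an OCAP-informed access check: use of a
# community-governed dataset is gated on recorded community authorisation,
# not on technical availability. All names here are invented.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    governing_community: str | None      # None = no community governance claim
    community_authorised_uses: set[str]

def may_use(record: DatasetRecord, purpose: str) -> bool:
    """Deny by default when a governing community has not authorised the use."""
    if record.governing_community is None:
        return True
    return purpose in record.community_authorised_uses

hydro = DatasetRecord("river-flow-monitoring", "Example First Nation",
                      {"community flood planning"})
print(may_use(hydro, "community flood planning"))   # True
print(may_use(hydro, "commercial model training"))  # False
```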

The intersection of renewable energy development and Indigenous land rights presents the environmental justice paradox most sharply. Wind and solar installations are preferentially sited on land with high resource quality and low development conflict — which frequently means semi-arid and arid landscapes in remote locations that are also, very often, Indigenous territories. The pattern is documented across North America, Australia, and sub-Saharan Africa: renewable energy projects that are presented as necessary for climate mitigation are sited on Indigenous land without genuine free, prior, and informed consent, reproducing the logic of colonial resource extraction under a new green flag. Whyte emphasises that this is not an argument against renewable energy but against the specific political economy of renewable energy development that treats Indigenous territories as empty land available for colonisation. AI plays a role in this dynamic both as a consumer of the electricity that renewable energy projects produce and as an analytical tool used to identify optimal siting locations — analyses that may identify high-quality sites without surfacing the political and legal status of the communities whose territories those sites occupy.

Loewen et al.’s analysis of data colonialism and climate AI extends the critique to the extraction of traditional ecological knowledge for training datasets. Indigenous communities have accumulated detailed, place-specific knowledge of ecological dynamics — seasonal variation in species abundance, indicators of climate and weather patterns, the response of ecosystems to disturbance — over generations of observation and practice. This knowledge, when made accessible to AI training pipelines without community control and consent, constitutes what Loewen et al. call the digital analogue of bioprospecting: the extraction of locally generated knowledge for external benefit without recognition, compensation, or ongoing relationship. The Inuit Circumpolar Council’s engagement with sea ice knowledge is one of the few examples of Indigenous-led participation in AI-relevant climate research: Inuit knowledge of sea ice dynamics, developed over thousands of years, is qualitatively different from and in many respects complementary to instrumentally measured sea ice extent, and the Council has insisted on community control over how this knowledge is represented in and used by research systems.

Free, Prior, and Informed Consent (FPIC) is the principle, recognised in the UN Declaration on the Rights of Indigenous Peoples (UNDRIP), that Indigenous peoples have the right to give or withhold their consent to projects affecting their lands, territories, and resources before those projects proceed, having been provided with complete and accurate information about proposed activities and their likely impacts. FPIC is not a right of veto in most legal interpretations, but it does require genuine consultation and negotiation rather than notification, and it requires consent from representative institutions rather than from individuals who may not have authority to speak for a community.
The concept of relational accountability in Indigenous research ethics, developed by Indigenous scholars including Linda Tuhiwai Smith and Shawn Wilson, provides a useful counterpoint to the extractive data model. Relational accountability holds that research relationships — including the relationships between researchers, communities, data, and the land — carry ongoing ethical obligations that do not terminate with the publication of a paper or the deposit of data in a repository. Applied to AI for climate, relational accountability would require not only consent at the point of data collection but sustained relationships between AI development teams and the communities whose knowledge and territories are involved, and mechanisms for those communities to participate in decisions about how AI systems are developed, evaluated, and deployed.

Chapter 7: Environmental Justice and the AI Infrastructure Chain

The siting of AI infrastructure — data centres, cloud computing facilities, high-performance computing campuses — follows a spatial logic that reproduces well-documented patterns in environmental justice. Locally unwanted land uses, including industrial facilities with significant environmental externalities, have historically been sited disproportionately in communities with less political power to resist them: rural communities, low-income communities, communities of colour, and communities with high proportions of recent migrants. The mechanisms driving this pattern are multiple and interacting — lower land costs in lower-income areas, weaker organised political opposition, greater susceptibility to economic incentive arguments — and they have been documented empirically across dozens of facility types over several decades of environmental justice research. Data centres fit this pattern. While the largest hyperscale facilities tend to be in predominantly white rural communities, smaller colocation facilities, backup power infrastructure, and the power plants that serve large data centre campuses are distributed in ways that impose disproportionate costs on lower-income communities. The diesel backup generators that provide power resilience for data centres during grid outages are a particular concern: they are significant sources of particulate matter and nitrogen oxides, and their use during peak demand events or grid emergencies affects surrounding neighbourhoods in ways that fall unequally on communities already facing elevated exposure to air pollution.

E-waste is perhaps the most spatially dispersed and least visible dimension of AI’s environmental justice footprint. The replacement cycle for AI hardware — AI accelerators (GPUs, TPUs, and custom chips) are replaced every two to three years as performance improvements make newer hardware dramatically more cost-efficient — generates substantial quantities of electronic waste. Global e-waste generation, already exceeding fifty million metric tonnes annually before the AI hardware buildout of the early 2020s, is projected to grow substantially as AI-related hardware procurement expands. A small fraction of this waste is formally recycled through certified facilities in countries with strong environmental regulation. The majority travels — often through chains of informal brokers and intermediary markets — to informal processing sites in Ghana (the Agbogbloshie site in Accra, long one of the principal destinations for Europe’s e-waste), Nigeria (the Alaba International Market in Lagos), and China, where labour-intensive dismantling operations extract residual valuable metals. Workers at these sites, many of them children and adolescents, are exposed to the toxic substances released when circuit boards, batteries, and display components are processed — lead, cadmium, mercury, chromium — in conditions without adequate protective equipment or occupational health support. The communities surrounding informal e-waste processing sites experience elevated rates of heavy metal contamination in soil, water, and blood, with documented consequences for child neurological development.

The mineral supply chain that Crawford documents in Atlas of AI demands an environmental justice analysis that follows commodity flows across continents. Cobalt, essential for the lithium-ion batteries that power AI hardware and AI-enabled devices, is predominantly mined in the Katanga province of the Democratic Republic of Congo, where artisanal mining operations — employing hundreds of thousands of people, including children — operate alongside large industrial mines with inadequate environmental management, producing acid mine drainage and heavy metal contamination that affects downstream communities and watersheds. Lithium extraction from the Atacama Desert’s salt flats depletes the brine aquifer that Atacameño Indigenous communities and desert ecosystems depend on, with no mechanism for compensation or remediation that is adequate to the scale of depletion. These supply chain harms are not accidental byproducts of AI development — they are structurally necessary consequences of the current hardware architecture — and they are borne by communities that receive no share of the economic value created by AI systems.

The sacrifice zone concept, developed in environmental justice scholarship to describe communities that bear concentrated environmental costs in exchange for broader economic benefits that largely accrue elsewhere, applies with uncomfortable precision to the AI infrastructure chain. The communities around cobalt mines in Katanga, the Atacameño communities losing water to lithium extraction, the neighbourhoods in Agbogbloshie breathing toxic smoke from burning circuit boards, the rural communities downwind from data centre backup generators — these communities have all been incorporated into the sacrifice zone of the AI economy. The geographic dispersion of these sacrifice zones across multiple continents makes their cumulative impact difficult to perceive from any single vantage point, but the aggregate pattern is consistent: the environmental costs of AI infrastructure are systematically shifted away from the communities that benefit most from AI and onto communities with the least power to resist the imposition.

A just transition framework for AI infrastructure would require addressing each link in this chain, not only the most visible one. Corporate sustainability commitments focused on renewable electricity procurement for data centres address one part of the picture while leaving the supply chain and end-of-life dimensions largely unaddressed. Community benefit agreements — formal negotiations between data centre developers and host communities — can require local hiring, environmental monitoring, and infrastructure investment, but they apply only to the facility level and do not address the distributed harms of hardware manufacturing and disposal. Extended producer responsibility legislation, which would require technology companies to fund responsible recycling of the hardware they manufacture, has been enacted in some jurisdictions but enforced inconsistently and often circumvented by export. The political economy of making comprehensive change is genuinely difficult: the communities bearing sacrifice-zone costs are in different countries, speaking different languages, facing different legal systems, with no single institutional forum in which their collective claims can be heard together.

Extended producer responsibility (EPR) is a policy instrument that makes manufacturers financially and/or operationally responsible for the end-of-life management of the products they produce. In the context of electronics, EPR schemes require manufacturers to fund collection and recycling programmes, typically through fees assessed at point of sale. The European Union's WEEE Directive (Waste Electrical and Electronic Equipment) is the most extensive implementation, though its effectiveness has been limited by enforcement gaps and by the export of e-waste outside the regulatory perimeter. In Canada, provincial EPR programmes for electronics cover consumer products but largely exclude the server and networking equipment that constitutes the bulk of AI hardware e-waste.
The intersection of AI infrastructure siting and the broader phenomenon of "green gentrification" — the displacement of low-income communities by environmental improvement projects that increase property values — represents an emerging area of concern. In some urban contexts, the arrival of high-wage tech employment associated with AI development has accelerated housing cost increases that displace existing residents. The environmental benefits of any associated improvements in urban sustainability infrastructure are then captured by incoming higher-income residents rather than by the communities that bore the costs of previous industrial contamination. This dynamic is distinct from but related to the sacrifice-zone analysis: in both cases, the distribution of AI's environmental costs and benefits follows existing patterns of economic inequality.

Chapter 8: Toward Just and Sustainable AI — Governance, Practice, and Futures

Brevini’s Is AI Good for the Planet? offers the most systematic synthesis currently available of the evidence on both sides of the question posed in the course’s opening chapter. After surveying the literature on AI for climate applications (broadly positive), AI energy consumption (concerning and growing), AI hardware supply chains (significantly problematic), and the political economy of Big Tech’s relationship to environmental sustainability (structurally misaligned with serious constraint), Brevini arrives at a carefully qualified conclusion: AI’s current environmental balance sheet is negative, but the negativity is not a technological necessity. It reflects choices — about which AI to build, how to power it, on whose land to site it, on whose labour and ecological resources to base its hardware supply chain — and those choices can in principle be made differently. The challenge is that making them differently requires overriding the commercial incentives of an industry that currently externalises its environmental costs with considerable success.

Green AI as a research programme, as defined by Schwartz et al. and extended by subsequent work, makes three practical demands: that AI researchers report the computational costs of their experiments as a routine part of publication; that conferences and journals reward efficiency improvements alongside performance improvements; and that AI development institutions invest in efficiency-oriented research as a legitimate alternative to scale-oriented research. Each of these demands is reasonable, and progress has been made on all three since the 2020 manifesto. However, Green AI faces a structural limitation that Brevini’s analysis makes explicit: technical efficiency improvements are necessary but not sufficient for aggregate environmental improvement in a context of rapid overall growth. The AI industry’s energy consumption has continued to grow despite substantial improvements in hardware efficiency per computation, because growth in the volume of computation has outpaced those efficiency gains by a wide margin. This is the Jevons paradox applied to computational efficiency: more efficient AI enables more AI, and the net result is higher total energy consumption unless the volume of AI is bounded by some other mechanism.
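
To make the first of those demands concrete, here is a minimal back-of-envelope sketch, in the spirit of Strubell et al.'s methodology, of how a training run's emissions can be estimated from GPU count, average power draw, runtime, an assumed facility PUE, and an assumed grid carbon intensity. Every input value below is an illustrative assumption, not a measurement from any real training run.

    def training_emissions_kg(gpu_count, avg_gpu_power_w, hours,
                              pue=1.2, grid_gco2_per_kwh=400.0):
        """Estimated CO2-equivalent emissions of a training run, in kilograms.
        PUE and grid intensity defaults are illustrative assumptions."""
        it_energy_kwh = gpu_count * avg_gpu_power_w * hours / 1000.0
        facility_energy_kwh = it_energy_kwh * pue  # cooling and power-delivery overhead
        return facility_energy_kwh * grid_gco2_per_kwh / 1000.0

    # Illustrative only: 64 GPUs averaging 300 W for two weeks on a 400 gCO2/kWh grid.
    print(f"{training_emissions_kg(64, 300.0, 24 * 14):.0f} kg CO2e")  # ~3097 kg

An estimate of this kind omits embodied hardware emissions and the choice between market- and location-based electricity accounting, which is part of why standardised reporting requirements, rather than ad hoc disclosure, are what the Green AI programme asks for.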

The sufficiency principle, drawn from ecological economics and applied to AI by a growing number of researchers, argues that efficiency improvements must be complemented by constraints on overall throughput. In ecological economics, sufficiency refers to a normative commitment to limiting consumption to what is adequate for human flourishing rather than maximising it without bound — an approach that is deeply at odds with the growth imperatives of market economies but that has received increasing attention as the limits of efficiency-only strategies have become apparent. Applied to AI, sufficiency would require asking not only “how can we make AI more efficient?” but “which AI applications are worth the energy and material costs they impose, and which are not?” A sufficiency-oriented approach would distinguish between AI applications that address urgent human needs — medical diagnosis, climate modelling, food security monitoring — and AI applications that primarily serve marginal convenience or entertainment — animated social media filters, higher-resolution product photography, personalised advertisement targeting — and would allocate the limited carbon budget of AI infrastructure accordingly. This is a politically contentious claim, and it raises genuine questions about who decides which uses are worth the cost. But the alternative — treating all AI applications as equally deserving of unlimited expansion — is also a choice, and one with predictable consequences for aggregate energy and carbon consumption.

Regulation and mandatory carbon accounting represent the most direct pathway to changing the structural incentives that drive AI’s environmental trajectory. Several regulatory approaches are under active development. The EU AI Act includes provisions for mandatory disclosure of energy consumption for high-impact AI systems, though the threshold for coverage and the scope of reporting requirements were subject to extensive negotiation, and the final provisions are more limited than advocates had sought. In the United States, the Securities and Exchange Commission’s climate disclosure rules, which require large public companies to report Scope 1 and 2 emissions and, in some cases, Scope 3 emissions, apply to AI companies as they do to other industries — though AI-specific requirements remain undeveloped. Proposals for an “AI energy tax” or for requiring AI systems above a computational threshold to purchase carbon offsets have circulated in policy discussions without achieving legislative traction. Dodge et al.’s finding that carbon intensity varies enormously with the time and location of computation suggests that carbon-aware scheduling — designing AI training and inference systems to shift computation toward times and places where the grid is carbon-light — could be a meaningful near-term mitigation measure that is technically feasible and that would carry a direct financial incentive if combined with carbon pricing.
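
The scale of that variation is worth making concrete. The sketch below compares the same hypothetical 5 MWh training job across three grid mixes; the intensity figures are illustrative round numbers chosen for the comparison, not values taken from Dodge et al. or from any real grid.

    # Same hypothetical 5 MWh job on three grids; intensities are illustrative.
    JOB_ENERGY_KWH = 5_000
    grids = {
        "coal-heavy grid, evening peak": 700,  # gCO2/kWh
        "mixed grid, annual average": 400,
        "hydro-heavy grid, off-peak": 30,
    }
    for name, gco2_per_kwh in grids.items():
        tonnes = JOB_ENERGY_KWH * gco2_per_kwh / 1_000_000  # grams -> tonnes
        print(f"{name}: {tonnes:.2f} t CO2e")  # 3.50, 2.00, 0.15

An identical computation can thus differ by more than a factor of twenty in its emissions, and that spread is precisely what carbon-aware scheduling exploits.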

Whyte’s relational approach to environmental justice, extended to the AI governance context, requires evaluating AI for climate not only by its technical contribution to emissions reduction but by whether it strengthens or weakens the capacity of vulnerable communities to participate in decisions that affect them. This is a more demanding standard than efficiency or even equity in the distribution of benefits and costs — it asks whether AI governance structures are themselves democratising or concentrating, whether they expand or contract the space of legitimate political participation for the communities most affected by both climate change and AI infrastructure. A governance framework consistent with Whyte’s analysis would include mandatory free, prior, and informed consent requirements for AI infrastructure siting on or near Indigenous territories; meaningful community benefit requirements that give host communities a share of the economic value created by data centres on their land; Indigenous co-governance of AI systems that use Indigenous environmental data; and supply chain accountability mechanisms that extend corporate responsibility to cobalt mines and lithium extraction sites. None of these requirements is technically impossible; all of them face significant political and economic resistance from incumbent interests.

The cross-cutting themes of this course — the bilaterality of AI’s relationship to climate, the structural role of environmental justice in shaping both the distribution of AI’s harms and the design of AI’s applications, the limits of technical efficiency as a response to systemic growth dynamics, and the centrality of governance in determining whether AI’s trajectory can be redirected — connect to a broader cluster of questions about the political economy of technology. HIST 415’s historical analysis of how technologies become entangled with extraction and colonial dispossession illuminates the deep roots of patterns that appear as technical or economic problems in AI governance discussions. SOC 435’s sociological work on the geography of AI infrastructure and the communities that bear its environmental costs provides the empirical grounding for the environmental justice arguments developed here. PHIL 451’s analysis of regulatory responses to AI’s social costs, including its environmental costs, situates the governance proposals discussed in this chapter within the broader landscape of AI law and policy. FINE 430’s examination of generative AI and creative practice, from the perspective of artists whose work is used to train systems without consent or compensation, adds a cultural dimension to the extraction analysis that Crawford and Loewen develop in the environmental register. Taken together, these courses constitute an argument that the question “is AI good for the planet?” cannot be answered without simultaneously answering “good for which communities, on whose terms, through what governance arrangements, and at what pace?”

Carbon-aware computing refers to the practice of scheduling computational workloads to minimise their carbon footprint by shifting computation in time (to periods when grid carbon intensity is lower) or in space (to data centre regions with lower-carbon electricity supplies). The approach is technically straightforward for workloads that are not time-sensitive — including most AI model training runs, which can typically tolerate delays of hours or days without affecting their results. The WattTime and Electricity Maps APIs, which provide real-time and forecast grid carbon intensity data for hundreds of grid regions worldwide, make carbon-aware scheduling practically accessible. Major cloud providers have begun offering carbon-aware scheduling features, and several large AI labs have reported incorporating temporal carbon awareness into their training infrastructure.
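
A minimal sketch of the temporal version of this idea follows. The function names, the region code, and the hard-coded forecast are all hypothetical: a real implementation would wrap the WattTime or Electricity Maps API, whose actual endpoints and response schemas are not reproduced here.

    def fetch_intensity_forecast(region: str) -> list[tuple[int, float]]:
        """Return (hours_from_now, gCO2/kWh) pairs for the coming day.
        Hard-coded illustrative values; a real implementation would wrap
        a provider such as WattTime or Electricity Maps instead."""
        return [(0, 420.0), (3, 380.0), (8, 140.0), (14, 90.0), (20, 310.0)]

    def pick_start_hour(forecast: list[tuple[int, float]], threshold: float) -> int | None:
        """Earliest forecast hour whose intensity is below the threshold,
        or None if no acceptable window appears in the forecast."""
        for hours_ahead, intensity in forecast:
            if intensity < threshold:
                return hours_ahead
        return None

    # Hypothetical region code; threshold in gCO2/kWh.
    start = pick_start_hour(fetch_intensity_forecast("ca-on"), threshold=100.0)
    if start is None:
        print("no clean window forecast; run now or keep polling")
    else:
        print(f"defer job by {start} h")  # a scheduler would sleep, then launch

Spatial shifting applies the same comparison across candidate regions rather than across hours; in both cases the achievable savings are bounded by the kind of intensity spread shown in the previous sketch.
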
The question of whether the AI industry can voluntarily redirect itself toward sustainability, or whether external regulation is necessary, is partly empirical and partly normative. The empirical question is whether competitive dynamics in the AI industry create incentives that systematically override sustainability commitments in the absence of mandatory requirements — the evidence from the period 2019–2025 suggests that voluntary commitments have not prevented substantial growth in AI energy consumption, though they have shaped the form of that growth (toward renewable energy procurement rather than absolute reductions in consumption). The normative question is what conception of corporate responsibility is appropriate for an industry whose externalities are as geographically dispersed and temporally extended as those of AI. Brevini's answer — that AI's environmental costs require the same kind of regulatory response that industrial pollution received in the late twentieth century — places the environmental governance of AI within a well-established political tradition, while acknowledging that the global distribution of AI infrastructure makes international coordination essential and difficult.