EARTH281: Geological Impacts on Human Health
Estimated study time: 1 hr
Sources and References
Primary textbook:
Olle Selinus, Brian Alloway, Jose Centeno, Robert Finkelman, Ron Fuge, Unnur Lindh & Pauline Smedley (eds.), Essentials of Medical Geology (Springer, revised ed., 2013)
Supplementary texts:
Brian J. Alloway (ed.), Heavy Metals in Soils: Trace Metals and Metalloids in Soils and their Bioavailability (Springer, 3rd ed., 2013)
Jochen Bundschuh & Miguel Armienta (eds.), Natural Arsenic in Groundwaters of Latin America (CRC Press/Balkema, 2009)
World Health Organization, Guidelines for Drinking-water Quality (4th ed. with addenda, WHO Press, 2017)
Online resources:
USGS Medical Geology project pages (usgs.gov/centers/gsmnp/science/medical-geology)
International Medical Geology Association (IMGA) resources (medicalgeology.org)
IARC Monographs on the Identification of Carcinogenic Hazards to Humans, volumes on arsenic (100C), asbestos (100C), cadmium (100C), lead (87), and mercury compounds
ATSDR Toxicological Profiles for arsenic, cadmium, lead, mercury, and fluoride (atsdr.cdc.gov)
Stanford EARTHSYS/ESS 117 Environmental Geochemistry public course materials
University of Iceland Medical Geology course (Vala Hjörleifsdóttir) public syllabus
Chapter 1: Introduction to Medical Geology
The Emergence of a Discipline
In the summer of 1990, researchers investigating an unusually high incidence of oesophageal cancer in Cixian County of Hebei Province, China, found themselves tracing the disease not to viral infection, dietary habit, or lifestyle factor, but instead to the geochemical composition of the local soil. The soils of Cixian were deficient in molybdenum and zinc while containing elevated concentrations of nitrosamines from nitrogen-rich groundwater interacting with specific mineralogical substrates. This convergence of geology, geochemistry, and oncology illustrated something that practitioners had long suspected but had only recently begun to formalise: the Earth’s surface chemistry is one of the most powerful, and most under-recognised, determinants of human health. Medical geology is the scientific discipline that investigates those connections, examining how geological materials and processes — bedrock mineralogy, soil geochemistry, groundwater composition, volcanic emissions, and airborne mineral dust — affect the health of humans, livestock, and wildlife. The International Medical Geology Association, founded in 2006, formally defined the field and created the institutional scaffold for interdisciplinary work linking geoscientists, epidemiologists, toxicologists, and public health professionals.
Historical Roots
The intellectual lineage of medical geology extends at least to Hippocrates of Cos, whose treatise “On Airs, Waters, and Places,” written around 400 BCE, argued systematically that the physical environment — water quality, prevailing winds, and the character of local soils — shaped both the constitutions of inhabitants and the diseases that afflicted them. While Hippocratic reasoning was pre-chemical, the underlying framework — that geography and substrate determine health — has proven remarkably durable. The next landmark came with Georgius Agricola, the Saxon physician and mineralogist whose De Re Metallica (1556) documented with meticulous care the lung diseases suffered by miners in the Erzgebirge, the ore-rich mountain range straddling modern Germany and the Czech Republic. Agricola described what we now recognise as silicosis and probably asbestosis, noting that some women in the mining villages had buried as many as seven successive husbands who had all perished from lung complaints. His work represented the first systematic attempt to link occupational exposure to geological materials with specific patterns of morbidity and mortality.
By the nineteenth century, chemists and physicians had begun mapping patterns of disease across landscapes and connecting them to soil and water chemistry. The phenomenon of endemic goitre — the dramatic enlargement of the thyroid gland visible as a swelling at the base of the throat — was observed to cluster in inland mountain regions far from the sea: the Swiss and Austrian Alps, the Himalayan foothills, the Great Lakes basin of North America, and the highland regions of Central Africa. By the 1910s and 1920s, investigators including David Marine in Ohio had demonstrated that goitre was caused by iodine deficiency and that iodine supplementation reversed it. The geological logic was clear: coastal and marine-influenced soils receive iodine from sea spray and organic marine sediment deposition, while deeply inland and glaciated terrains are stripped of iodine by ice sheets and leaching rainfall, leaving both soils and groundwater with concentrations insufficient to sustain normal thyroid function.
The Geochemical Environment of Health
The connection between geology and human physiology operates through several major pathways. Bedrock mineralogy determines the initial geochemical character of a region: granites weather to produce acidic, quartz-rich soils low in calcium and magnesium, whereas limestones and basalts weather to produce calcareous, nutrient-rich soils. Soil, as the weathering product that mediates between rock and biosphere, controls both the composition of food crops and the trace-element content of groundwater that recharges through it. Dust and aerosols carry geological materials across vast distances; Saharan dust deposits iron and phosphorus across the Amazon basin and the Atlantic Ocean, while volcanic ash modifies soil fertility across entire continents. Humans are therefore embedded in a geochemical landscape from which they draw essential nutrients, and to which they are also exposed, sometimes at harmful concentrations.
The essential elements required for human biochemistry include iron, iodine, selenium, zinc, fluoride, calcium, magnesium, and copper, among others. All of these have geological sources, and their availability varies enormously with local geology and pedology. Toxic elements — arsenic, lead, mercury, cadmium, chromium(VI) — occur naturally in specific geological settings and are also released by mining and industrial activity. The critical point is that for many elements, the relationship between exposure and health outcome is not linear: it is U-shaped. At very low intakes, deficiency disease results; at very high intakes, toxicity results; and somewhere in between lies the range of optimal intake. For essential elements with relatively narrow optima, such as selenium and fluoride, the window between deficiency and toxicity is surprisingly narrow, and geological variability easily pushes populations into one or the other end of the curve.
Dose-Response Principles
Fluoride illustrates the U-shaped relationship with particular clarity. At concentrations below approximately 0.5 mg/L in drinking water, dental caries (tooth decay) is more prevalent in children. At concentrations between roughly 0.5 and 1.5 mg/L, the upper bound being the WHO guideline value, there is net benefit in reducing dental caries with minimal fluorosis risk. Above 1.5 mg/L, dental fluorosis (mottled, pitted enamel) begins to appear; above about 4 mg/L, skeletal fluorosis produces pain and stiffness in joints; and above 10 mg/L sustained over years, crippling skeletal fluorosis can produce severe bone deformity and paralysis. The geological factor here is the concentration of fluorite (CaF₂) and fluorapatite in the bedrock from which groundwater is drawn, a distribution that creates globally coherent belts of fluoride risk. Lead, by contrast, has no beneficial lower portion of the curve. Every measurable increase in blood lead concentration in children is associated with measurable cognitive impairment, and the CDC’s current childhood blood lead reference value of 3.5 μg/dL is defined not as a threshold of safety but as the 97.5th percentile of the distribution in children aged one to five years in the United States — a population-level benchmark rather than a toxicological no-effect level.
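The banded thresholds above lend themselves to a simple screening calculation. The sketch below, in Python, maps a measured drinking-water fluoride concentration onto the outcome bands just described; the cut-offs are the rounded values quoted in this section, and a real assessment would also weigh water intake, climate, and diet.

```python
def fluoride_risk_band(conc_mg_per_l: float) -> str:
    """Map drinking-water fluoride (mg/L) to the approximate outcome
    band described in the text (WHO guideline value: 1.5 mg/L)."""
    if conc_mg_per_l < 0.5:
        return "deficiency range: elevated dental caries risk in children"
    if conc_mg_per_l <= 1.5:
        return "optimal range: caries protection with minimal fluorosis risk"
    if conc_mg_per_l <= 4.0:
        return "dental fluorosis risk (mottled, pitted enamel)"
    if conc_mg_per_l <= 10.0:
        return "skeletal fluorosis risk (joint pain and stiffness)"
    return "crippling skeletal fluorosis risk with sustained exposure"

# Example: a borehole at 2.3 mg/L falls in the dental fluorosis band.
print(fluoride_risk_band(2.3))
```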
Spatial Patterns and Natural versus Anthropogenic Sources
The geography of geochemically driven disease is coherent and predictable once the underlying geology is understood. Endemic goitre follows iodine-poor Precambrian crystalline shields and glacially stripped highlands. Dental fluorosis clusters in East Africa along the Rift Valley, in India’s crystalline basement regions (Rajasthan, Andhra Pradesh), and in parts of northern China where Precambrian granites and alkaline volcanic rocks weather fluoride into groundwaters. Arsenicosis — the systemic disease caused by chronic arsenic ingestion — appears preferentially in young deltaic alluvial plains such as the Ganges-Brahmaputra-Meghna delta of Bangladesh and West Bengal and the Mekong delta of Vietnam, which share the key geochemical mechanism of reductive dissolution of arsenic-bearing iron oxyhydroxides; in the Chaco-Pampean plain of Argentina, by contrast, arsenic is released from volcanic ash and metal oxides by desorption under oxidising, high-pH conditions.
Distinguishing natural from anthropogenic sources is a central challenge in medical geology because the appropriate regulatory and remediation response depends entirely on which is operative. Natural geological variability creates background concentrations that may be elevated in certain regions without any human intervention; smelters, mining operations, leaded petrol, and industrial discharge create point-source or diffuse contamination superimposed on that background. A sophisticated analytical tool for resolving this ambiguity is stable isotope fingerprinting: the four stable isotopes of lead (²⁰⁴Pb, ²⁰⁶Pb, ²⁰⁷Pb, and ²⁰⁸Pb) occur in ratios that are characteristic of specific ore deposits and industrial processes, and the ²⁰⁶Pb/²⁰⁷Pb ratio preserved in Arctic ice cores shifts measurably away from the natural background value as leaded petrol use expanded through the mid-twentieth century, returning toward background after the phase-outs that began in the 1970s — an unambiguous stratigraphic record of the anthropogenic signal. Medical geology as a discipline is committed to making these distinctions rigorously, because treating natural geological exposure as if it were industrial contamination — or vice versa — leads to misdirected and ineffective public health interventions.
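Source apportionment from isotope ratios is, at its simplest, a two-endmember mixing problem. The Python sketch below solves for the industrial fraction of lead in a sample given assumed endmember ²⁰⁶Pb/²⁰⁷Pb ratios; the endmember values used are illustrative placeholders rather than measured signatures, and the linear form assumes the two sources contribute comparable ²⁰⁷Pb.

```python
def industrial_fraction(r_sample: float, r_natural: float, r_industrial: float) -> float:
    """Solve r_sample = f * r_industrial + (1 - f) * r_natural for f,
    the fraction of lead attributable to the industrial endmember.
    A first-order simplification: rigorous isotope mixing weights each
    ratio by the 207Pb contribution of its source."""
    return (r_sample - r_natural) / (r_industrial - r_natural)

# Hypothetical endmembers for illustration only: a radiogenic natural
# background of 1.20 and a low-ratio ore lead of 1.04.
f = industrial_fraction(r_sample=1.12, r_natural=1.20, r_industrial=1.04)
print(f"industrial fraction: {f:.2f}")  # 0.50
```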
Chapter 2: Volcanic Hazards to Human Health
Volcanoes as Geochemical Sources
When Kīlauea volcano on the Island of Hawaiʻi entered its Lower East Rift Zone eruption phase in May 2018, it did something that volcanoes always do but that this particular eruption made unusually visible to the international media: it poisoned the air of a densely settled tropical landscape over many months. The communities of Puna on the island’s lower eastern flank experienced sustained exposure to what local authorities termed “vog” — volcanic smog — a complex aerosol mixture produced when sulphur dioxide emitted from the erupting fissures and lava flows reacted with atmospheric oxygen and water vapour to form sulphuric acid droplets and fine particulate matter. Health systems on the island recorded significant increases in emergency room visits for respiratory complaints, asthma exacerbations, and conjunctival irritation, particularly among children, elderly residents, and people with pre-existing cardiovascular disease. The 2018 eruption serves as a modern case study for a process that has been reshaping human health environments since humans first lived near active volcanoes.
Volcanic Gases: Sources and Toxicological Thresholds
Active volcanoes emit a suite of gases from magmatic degassing, fumarolic activity, and the interaction of lava with surface water and organic matter. The principal gas by mass is water vapour, followed by carbon dioxide, sulphur dioxide, and then smaller quantities of hydrogen sulphide, hydrogen fluoride, hydrogen chloride, and carbon monoxide. Each has a characteristic toxicological profile and an immediately dangerous to life and health (IDLH) concentration established by the US National Institute for Occupational Safety and Health (NIOSH). Sulphur dioxide (SO₂) has an IDLH of 100 parts per million (ppm) and a 24-hour WHO air quality guideline of 20 μg/m³; Kīlauea emits approximately 1,000–2,000 tonnes of SO₂ per day during active effusion. Hydrogen sulphide (H₂S), with its characteristic odour of rotten eggs, has an IDLH of 100 ppm; at concentrations above about 100–150 ppm it rapidly paralyses the olfactory nerve, removing the warning signal and allowing exposure to continue to lethal concentrations. Hydrogen fluoride (HF) is acutely corrosive to respiratory tissue, with an IDLH of 30 ppm; its chronic exposure pathway is skeletal fluorosis via uptake into bone apatite, as was catastrophically demonstrated in Iceland in 1783. Carbon dioxide, while not acutely toxic at atmospheric concentrations, pools in low-lying areas and closed valleys due to its density relative to air; the Lake Nyos disaster in Cameroon in 1986, while triggered by limnic eruption rather than surface volcanism, killed approximately 1,700 people through CO₂ asphyxiation, illustrating the lethal potential of volcanic CO₂.
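A monitoring workflow can compare field gas measurements against these IDLH values. The Python sketch below flags any gas at or above a chosen fraction of its IDLH; the 10% screening fraction is a conservative convention of our choosing, not a regulatory limit, and the CO and CO₂ entries are standard NIOSH table values not quoted in the text above.

```python
# NIOSH IDLH values in ppm (CO and CO2 from published NIOSH tables;
# the others are quoted in the section above).
IDLH_PPM = {"SO2": 100, "H2S": 100, "HF": 30, "CO": 1200, "CO2": 40000}

def screen_gases(measured_ppm: dict, fraction: float = 0.1) -> list:
    """Return a warning string for each gas whose measured concentration
    reaches the given fraction of its IDLH."""
    warnings = []
    for gas, ppm in measured_ppm.items():
        idlh = IDLH_PPM.get(gas)
        if idlh is not None and ppm >= fraction * idlh:
            warnings.append(f"{gas}: {ppm} ppm is >= {fraction:.0%} of IDLH ({idlh} ppm)")
    return warnings

# Fumarole-margin spot readings (illustrative numbers):
print(screen_gases({"SO2": 15.0, "H2S": 2.0, "CO2": 9000.0}))
```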
Vog: Sulphuric Acid Aerosol and Respiratory Effects
The formation of vog begins when SO₂ emitted from volcanic sources is oxidised in the atmosphere, primarily by hydroxyl radicals, to form sulphur trioxide (SO₃), which combines instantly with water vapour to form sulphuric acid (H₂SO₄) droplets in the sub-micrometre size range. These droplets, along with unreacted SO₂ and secondary particulate matter formed from other volcanic emissions, constitute the aerosol mixture measured as PM₂.₅ — particulate matter with aerodynamic diameter less than 2.5 μm. In the Hawaiʻi vog study conducted by the University of Hawaiʻi and the Hawaii Department of Health using fixed monitoring stations across the island, PM₂.₅ concentrations in Puna during 2008 (a year of elevated Kīlauea activity from the Halemaʻumaʻu summit crater vent) regularly exceeded the US EPA 24-hour standard of 35 μg/m³, reaching peaks above 100 μg/m³ during periods of trade wind reversal that pushed the vog plume back over populated areas. Time-series analysis showed statistically significant associations between daily vog PM₂.₅ concentrations and hospital admissions for asthma, especially in children under 18 years.
Volcanic Ash: Composition and Pulmonary Penetration
Volcanic ash is not ash in the combustion sense but rather solidified fragments of magma and surrounding rock shattered by explosive volcanic activity. Its composition reflects the chemistry of the erupted magma: silica (SiO₂) content ranges from about 48% in basaltic eruptions to over 75% in rhyolitic eruptions; aluminium oxides, iron oxides, calcium and sodium feldspars, and glass shards constitute the remainder. The health significance of ash depends critically on particle size and crystalline silica content. Particles larger than 10 μm are trapped in the nasal passage and upper airways and cleared by mucociliary action. Particles between 2.5 and 10 μm (the PM₁₀ fraction) penetrate to the conducting airways — the bronchi and bronchioles. Particles smaller than 2.5 μm penetrate to the alveolar region of the lung, where they are deposited and either cleared by alveolar macrophages or, if biopersistent and mineralogically reactive, initiate inflammatory cascades. Because crystalline silica — particularly the quartz and cristobalite polymorphs — is the specific mineralogical agent of silicosis, eruptions that generate cristobalite-rich ash pose a qualitatively different hazard than basaltic eruptions.
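The size cut-offs described above translate directly into a deposition classifier. This small Python function, a simplification that ignores breathing rate, hygroscopic growth, and particle shape, assigns an inhaled particle to the approximate respiratory region named in the text.

```python
def deposition_region(aerodynamic_diameter_um: float) -> str:
    """Approximate deposition region for an inhaled particle, using the
    10 um and 2.5 um cut-offs discussed in the text."""
    if aerodynamic_diameter_um > 10.0:
        return "nasal passages and upper airways (mucociliary clearance)"
    if aerodynamic_diameter_um > 2.5:
        return "conducting airways: bronchi and bronchioles (PM10 fraction)"
    return "alveolar region: macrophage clearance or retention (PM2.5 fraction)"

for d in (15.0, 5.0, 1.0):
    print(f"{d} um -> {deposition_region(d)}")
```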
Silicosis Mechanism
Silicosis results from the inhalation of respirable crystalline silica particles, which are engulfed by alveolar macrophages in an attempt at phagocytic clearance. The crystalline silica surface is chemically reactive: it generates reactive oxygen species (ROS) including hydroxyl radicals via Fenton-like surface chemistry, and silicic acid groups on the particle surface interact with the phospholipid membrane of the phagolysosome. The result is lysosomal rupture, which releases proteolytic enzymes into the macrophage cytoplasm and activates the NALP3 (NLRP3) inflammasome, triggering caspase-1 activation and release of interleukin-1β. The macrophage dies, releasing both the silica particle — which is not degraded — and a burst of pro-inflammatory cytokines including TNF-α and TGF-β. The silica particle is re-engulfed by a new macrophage, and the cycle repeats. The net effect over years to decades is progressive pulmonary fibrosis: replacement of normal alveolar tissue with collagenous nodules that reduce lung compliance and gas exchange capacity. The latency period for silicosis is typically 10 to 30 years at occupational exposures, though it can be as short as two to five years at very high concentrations. The International Agency for Research on Cancer (IARC) classified inhaled crystalline silica as a Group 1 human carcinogen (lung cancer) in 1997.
Case Studies: Laki, Montserrat, and Galeras
The eruption of the Laki fissure system in southern Iceland between June 1783 and February 1784 stands as one of the most consequential volcanic events in recorded human history for its health impacts. Over eight months, Laki emitted approximately 122 megatonnes of SO₂ — a quantity exceeding the combined annual industrial SO₂ emissions of modern Europe — along with enormous quantities of HF and volcanic ash. The fluorine poisoning of Iceland’s livestock was catastrophic: fluoride deposited on pasture grass was ingested by sheep and cattle, producing fluorosis in their bones and teeth and ultimately killing approximately 80% of Iceland’s sheep and 50% of its cattle. The eruption is known in Iceland as the Skaftáreldar (“Skaftá Fires”), and the ensuing period of famine as the Móðuharðindin (the “Haze Hardship”); the famine killed roughly 25% of Iceland’s human population — approximately 10,000 people out of 40,000. In Montserrat, the long-running eruption of the Soufrière Hills volcano (1995–ongoing) produced pyroclastic flows and ash fall containing unusually high concentrations of cristobalite, a crystalline silica polymorph formed by high-temperature devitrification of volcanic glass. Studies by Baxter and colleagues (1999, published in Science) measured cristobalite concentrations in respirable ash fractions at up to 20% by mass, raising significant concern about long-term silicosis risk for the approximately 5,000 remaining island residents. In Colombia, the Galeras volcano’s repeated eruptions since the 1990s have produced ash fall across the city of Pasto (population ~450,000) and HF-rich gas emissions that have caused dental and skeletal fluorosis in cattle at farms on the volcano’s flanks, with urinary fluoride levels in rural children near Galeras consistently exceeding WHO-recommended maxima.
Chapter 3: Mineral Dusts and Fibrous Minerals
The Spectrum of Pneumoconiosis
The coalfields of northeastern Pennsylvania in the early twentieth century supported a population of approximately 150,000 miners working in conditions of pervasive dust. By the 1920s, rates of progressive massive fibrosis — the most severe form of coal workers’ pneumoconiosis — were high enough to be medically conspicuous, yet the disease was not formally recognised in US compensation law until 1969, when the Federal Coal Mine Health and Safety Act mandated dust control and medical surveillance. The word “pneumoconiosis” derives from the Greek for lung and dust, and the diseases it encompasses form a spectrum according to the mineralogy of the responsible dust. Coal dust produces coal workers’ pneumoconiosis (CWP) through a mechanism involving macrophage activation but without the extreme fibrogenicity of crystalline silica. Silicosis, caused by crystalline silica (quartz, cristobalite, tridymite), has been described in Chapter 2 in the context of volcanic ash; it also affects stonemasons, quarry workers, foundry workers, and sandblasters. Asbestosis, caused by inhaled asbestos fibres, is a diffuse interstitial fibrosis of the lung parenchyma, often accompanied by pleural plaques and thickening, with ground-glass opacification on high-resolution CT scanning. Each of these dust diseases shares the characteristic of lengthy latency, making the epidemiological connection to occupational exposure difficult to establish and easy to deny.
Asbestos: Fibre Geometry and Pathogenicity
Asbestos is a commercial and regulatory term covering six silicate minerals with fibrous crystal habit: chrysotile (a serpentine-group mineral), and five amphibole minerals — amosite (grunerite), crocidolite (riebeckite), tremolite, anthophyllite, and actinolite. The distinction between chrysotile and the amphiboles is geochemically important and toxicologically contested. Chrysotile fibres are curly and chemically soluble in the acid environment of the phagolysosome; they have a relatively short biopersistence time in lung tissue, estimated at weeks to months. Amphibole fibres are straight, rigid, and chemically resistant; their biopersistence in lung tissue is measured in decades. The fibre geometry hypothesis of pathogenicity, developed primarily by Stanton, Layard, and colleagues in the 1970s and subsequently elaborated by others, proposes that long fibres (greater than 8 μm in length) that are too long to be fully engulfed by a single macrophage but thin enough (less than approximately 0.25 μm in diameter) to reach the distal lung are the most biologically active configuration, producing what is termed “frustrated phagocytosis” — a sustained inflammatory state in which the macrophage cannot complete engulfment and cannot disengage, continuously releasing inflammatory mediators. The IARC classifies all six forms of asbestos as Group 1 human carcinogens.
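The Stanton geometry can be expressed as a simple screen over measured fibre dimensions. The Python sketch below flags fibres in the high-pathogenicity configuration using the approximate thresholds quoted above; real fibre toxicology also depends on biopersistence and surface chemistry, which a purely geometric screen ignores.

```python
def stanton_geometry(length_um: float, diameter_um: float) -> bool:
    """True if a fibre falls in the geometry proposed as most pathogenic:
    longer than ~8 um (too long for one macrophage to engulf) and thinner
    than ~0.25 um (able to reach the distal lung)."""
    return length_um > 8.0 and diameter_um < 0.25

# Illustrative fibre measurements (length, diameter) in micrometres:
fibres = [(12.0, 0.10), (5.0, 0.10), (15.0, 1.00)]
print([f for f in fibres if stanton_geometry(*f)])  # [(12.0, 0.1)]
```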
Mesothelioma and Wittenoom
Mesothelioma is a cancer of the mesothelial lining cells covering the pleura (chest cavity), peritoneum (abdominal cavity), and pericardium. It is almost invariably fatal, with median survival after diagnosis of approximately 12 months, and it has a latency period of 30 to 50 years from first asbestos exposure to clinical presentation. Wittenoom in Western Australia was the site of Australia’s only large-scale crocidolite (blue asbestos) mining and milling operation, which operated from 1943 to 1966. Approximately 6,500 people worked at or lived in Wittenoom, and a further 11,000 are estimated to have passed through the area or been environmentally exposed to tailings dust that spread across the surrounding landscape. A cohort study by de Klerk, Musk, and colleagues tracking Wittenoom workers and residents over several decades found that by the early 2000s, over 100 mesothelioma deaths had been recorded among former workers, and the mortality rate from mesothelioma in the cohort was approximately 9% — a staggering elevation above the background rate of roughly 0.002%. The town of Wittenoom was officially closed by the Western Australian government in 2006, but asbestos-contaminated tailings continue to weather into dust in the region, creating ongoing environmental exposure. The global mesothelioma epidemic — driven by the explosion of asbestos use in shipbuilding, insulation, and construction during the 1940s through 1980s — is expected to produce peak mortality in many European and Asian countries through the 2020s, as the 30–50 year latency plays out from peak industrial exposures.
Erionite and the Villages of Cappadocia
Perhaps the most striking demonstration that fibre geometry rather than asbestos chemistry per se drives mesothelioma comes from the Cappadocian villages of Tuzköy and Karain in central Turkey. These villages, with populations numbering in the hundreds, are built on and from tuff — consolidated volcanic ash — that contains abundant erionite, a fibrous zeolite mineral with fibre morphology closely resembling amphibole asbestos. Erionite is not an asbestos mineral (it is a zeolite, a hydrated aluminosilicate) but shares the critical geometric properties: fibres are long, thin, and remarkably biopersistent. Epidemiological investigation by Baris, Saracci, and colleagues beginning in the late 1970s documented mesothelioma rates approximately 1,000 times above background, with some villages recording mesothelioma as the cause of over 50% of all adult deaths over multi-decade observation periods. Critically, villagers who emigrated from the affected settlements carried their elevated risk with them; erionite also outcrops along road cuttings in the Williston Basin of North Dakota in the United States, where gravel quarried from erionite-bearing units has created exposure concerns for local residents. The erionite case study carries implications for geologic mapping of zeolite-rich formations globally and for the exposure assessment of rural populations in erionite-bearing terrains.
Naturally Occurring Asbestos and the El Dorado Hills Case
Serpentinite and related ultramafic rocks, which form when oceanic mantle material is hydrated during tectonic subduction or obduction, often contain tremolite and chrysotile asbestos as natural mineralogical constituents. The California Department of Health Services and, subsequently, the US EPA undertook systematic investigations in El Dorado Hills, a rapidly growing suburban community east of Sacramento situated on serpentinite-bearing terrain, following concerns raised by residents in the early 2000s. Tremolite-actinolite asbestos was documented in soils at concentrations up to 17% by mass in some surface samples. Air monitoring during typical neighbourhood activities — gardening, mowing grass, children playing on unpaved areas — detected asbestos fibre concentrations exceeding Californian ambient standards. A case-control study by Pan and colleagues (2005, published in the American Journal of Respiratory and Critical Care Medicine) found that mesothelioma risk among California cases fell with increasing residential distance from ultramafic rock bodies, an exposure-response pattern consistent with naturally occurring asbestos as a cause, though the modest number of highly exposed cases limited statistical precision. The El Dorado Hills case illustrates that naturally occurring asbestos (NOA) in residential and recreational settings can pose quantifiable health risks independent of any industrial source.
The Resurgence of Coal Workers’ Pneumoconiosis
Coal workers’ pneumoconiosis was widely believed to be declining in the United States following the 1969 Mine Act’s dust control requirements, and national prevalence surveys through the 1990s supported this view. However, surveillance data from the National Institute for Occupational Safety and Health published in 2018 documented a resurgence of progressive massive fibrosis in Appalachian coalfield workers, with prevalence in certain counties of Virginia, West Virginia, and Kentucky reaching levels not seen since the pre-regulation era. The mechanistic explanation proposed by Blackley and colleagues involves the progressive exhaustion of thick coal seams, which are now being replaced by thinner seams that require miners to cut through adjacent rock using continuous mining machines. These adjacent rocks in the Appalachian basin are frequently silica-rich sandstones and shales, so miners cutting mixed coal-rock strata are now exposed to crystalline silica in addition to coal dust. The interaction between coal dust and silica appears synergistic in producing fibrosis, and the shift from pure coal pneumoconiosis to mixed dust fibrosis represents a qualitative change in disease severity that the older regulatory framework, designed primarily around coal dust limits, did not anticipate.
Chapter 4: Geochemistry of Drinking Water — Fluoride and Arsenic
Fluoride in Groundwater Systems
The fluoride concentration in natural groundwater is controlled by the weathering dissolution of fluoride-bearing minerals, chiefly fluorite (CaF₂) and fluorapatite (Ca₅(PO₄)₃F), tempered by secondary controls including the concentration of calcium (high calcium favours precipitation of CaF₂ and suppresses dissolved fluoride), pH (alkaline conditions promote fluoride dissolution), the residence time of groundwater in the aquifer, and temperature. In regions where Precambrian crystalline basement or Tertiary alkaline volcanic rocks provide abundant fluorapatite and where groundwater is warm, alkaline, and calcium-poor — conditions common in the East African Rift and in parts of India and China — fluoride concentrations in tube well water routinely exceed the WHO guideline of 1.5 mg/L and may reach 10–30 mg/L in extreme cases. The global estimate of people exposed to drinking water fluoride above the WHO guideline is approximately 200 million, with the heaviest burden in Ethiopia, Tanzania, Kenya, India (particularly Rajasthan, Andhra Pradesh, and Telangana), and northern China.
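The calcium control on dissolved fluoride can be made quantitative with a fluorite saturation index. The Python sketch below uses molar concentrations in place of activities and a commonly quoted solubility product near 10⁻¹⁰·⁶ for CaF₂ at 25 °C; a proper speciation code such as PHREEQC would apply activity and temperature corrections.

```python
import math

KSP_FLUORITE = 10.0 ** -10.6  # CaF2 solubility product at ~25 C (quoted values vary)

def fluorite_saturation_index(ca_mg_per_l: float, f_mg_per_l: float) -> float:
    """SI = log10(IAP / Ksp) with IAP = [Ca2+][F-]^2 in mol/L.
    SI < 0: undersaturated, fluorite can keep dissolving;
    SI > 0: supersaturated, CaF2 precipitation caps dissolved fluoride."""
    ca_mol = ca_mg_per_l / 40.08 / 1000.0
    f_mol = f_mg_per_l / 19.00 / 1000.0
    return math.log10(ca_mol * f_mol ** 2 / KSP_FLUORITE)

# A calcium-poor, high-fluoride Rift Valley-type water sits near equilibrium
# even at 8 mg/L fluoride, illustrating why low calcium permits high fluoride:
print(round(fluorite_saturation_index(ca_mg_per_l=5.0, f_mg_per_l=8.0), 2))  # ~ -0.06
```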
The public health consequences of high fluoride exposure unfold along a concentration gradient. Dental fluorosis, characterised at its mildest by faint white streaking of enamel and at its most severe by pitting, brown staining, and structural weakness of tooth crowns, is caused by disruption of ameloblast function during tooth development at fluoride concentrations above approximately 1.5 mg/L in children during the tooth-forming years (roughly birth to age eight for permanent teeth). It is estimated to affect over 200 million people globally in its mildest form. Skeletal fluorosis develops after years of exposure to concentrations generally exceeding 4 mg/L; early skeletal fluorosis presents as stiffness and pain in the spine and large joints, progressing to calcification of ligaments and interosseous membranes, and in its crippling form (at sustained exposures above approximately 10 mg/L) produces osteosclerosis of the axial skeleton and severe joint deformity. Case studies from the Rift Valley region of Tanzania and Ethiopia have documented crippling skeletal fluorosis affecting adults in their 30s and 40s in villages where borehole water fluoride concentrations exceeded 15 mg/L, with radiographs showing dense “chalky” vertebrae and calcified interspinous ligaments reminiscent of diffuse idiopathic skeletal hyperostosis.
Arsenic in Groundwater: Geochemical Sources and Mechanisms
Arsenic is the twentieth most abundant element in the Earth’s crust, present at average concentrations of approximately 5 mg/kg. It occurs most commonly in sulphide ore minerals, particularly arsenopyrite (FeAsS) and orpiment (As₂S₃), but in most sedimentary environments its geochemically dominant form is as arsenate (As(V)) adsorbed onto the surfaces of iron oxyhydroxide minerals — ferrihydrite, goethite, and lepidocrocite — which coat sediment grains and have very high surface-area sorption capacity for arsenic under oxidising conditions. The critical process controlling arsenic release into groundwater in the majority of high-arsenic aquifers worldwide is reductive dissolution: when young organic-rich sediments are deposited and buried in deltaic and alluvial fan environments, microbial decomposition of sedimentary organic matter consumes dissolved oxygen and then nitrate, eventually driving conditions sufficiently reducing that iron-reducing bacteria such as Geobacter and Shewanella species begin using Fe(III) in iron oxyhydroxide minerals as a terminal electron acceptor. This dissolves the iron oxyhydroxide coatings on sediment grains, and in doing so releases the adsorbed arsenic into porewater, from which it migrates into the groundwater flowing through the aquifer. The process is most effective in young (Holocene, less than about 10,000 years old) fine-grained sediments where organic carbon content is high and hydraulic conductivity is low enough to prevent flushing of the released arsenic.
The Bangladesh Arsenic Crisis
The scale of the Bangladesh arsenic crisis — routinely described as the largest mass poisoning in history — is almost incomprehensible in its demographic scope. Beginning in the 1970s and accelerating through the 1980s, international development organisations and the Bangladesh government promoted the installation of shallow tube wells (hand-pumped boreholes 30–60 metres deep) across the country as a response to the chronic burden of waterborne diarrhoeal disease, including cholera, that killed tens of thousands of Bangladeshis annually when surface water was consumed without treatment. By 1990, approximately four million tube wells had been installed, providing apparently safe, clear, microbially clean drinking water to the majority of Bangladesh’s rural population of roughly 100 million. What the well-drilling programme did not include, because arsenic had not been identified as a problem in Bangladesh’s groundwater and testing was not routine, was chemical analysis for arsenic. The first systematic surveys of tube well water chemistry in the early 1990s, conducted by the British Geological Survey and the Dhaka Community Hospital among others, revealed that a large fraction of shallow tube wells in the southern and central alluvial plains — precisely the areas of most intensive well installation — had arsenic concentrations above the WHO guideline of 10 μg/L, and many had concentrations above the Bangladeshi national standard of 50 μg/L, which was itself much higher than the international guideline. Surveys published by the British Geological Survey in 1996 and subsequently confirmed by the World Health Organization and other bodies estimated that approximately 35–77 million people were drinking water with arsenic above 50 μg/L, and that up to 70 million people were exposed above the WHO guideline.
The health consequences of chronic arsenic exposure through drinking water are systemic and severe. The characteristic skin manifestations, collectively called arsenicosis, include rain-drop hyperpigmentation (patchy increases in skin melanin), palmar and plantar keratosis (hard, rough thickening of the palms and soles), and Bowen’s disease (intraepidermal carcinoma). Internal malignancies — particularly of the bladder, lung, and skin — are elevated substantially, with relative risks for bladder cancer of approximately two to three times background at water arsenic concentrations around 100 μg/L based on epidemiological studies in Taiwan’s Blackfoot Disease endemic area (where arsenic-related peripheral vascular disease also caused gangrene of the limbs) and in Bangladesh itself. Cardiovascular disease mortality, including ischaemic heart disease, was elevated in Bangladeshi cohort studies at arsenic exposures above 100 μg/L. Perhaps most concerning from a long-term public health perspective are the developmental neurotoxicity effects: longitudinal studies by Wasserman and colleagues in Bangladesh documented inverse associations between infant and childhood arsenic exposure and scores on tests of intellectual function, with the association present even at relatively low exposures, suggesting a dose-response relationship without a clear safe threshold.
Global Distribution and Mitigation
Beyond Bangladesh, high-arsenic groundwater occurs in predictable geochemical settings across Asia, Latin America, and parts of Europe. In Inner Mongolia, China, high-arsenic artesian well water in the Hetao Plain has produced arsenicosis in an estimated several hundred thousand people. In the Red River Delta of Vietnam, reducing Holocene alluvial sediments host the same reductive dissolution mechanism documented in Bangladesh. In Chile’s Atacama Desert, geothermally influenced waters from volcanic arc settings introduce arsenic via a different mechanism — hydrothermal leaching — that has produced elevated arsenic exposure in Antofagasta’s municipal water supply (historically reaching 800 μg/L before treatment); a retrospective cohort study by Smith and colleagues published in the American Journal of Epidemiology in 1998 found a five-fold excess of bladder cancer mortality in Antofagasta relative to the low-arsenic comparison region. In Argentina’s Chaco-Pampean plain, high-arsenic groundwater affects an estimated two million rural residents across the provinces of Tucumán, Córdoba, Santa Fe, and La Pampa. Mitigation strategies include drilling deeper boreholes that tap pre-Holocene aquifers where reducing sediments are absent; installing arsenic removal filters based on iron-oxide-coated sand columns, which exploit the same surface chemistry that sequesters arsenic in natural oxic aquifer materials; and point-of-use treatment systems. Social barriers to adoption — including cultural preferences for tube well water over surface sources, reluctance to bear installation and maintenance costs, and the absence of colour, taste, or odour that would signal contamination — have substantially limited the effectiveness of mitigation programmes in Bangladesh despite decades of effort.
Chapter 5: Toxic Trace Metals in the Geosphere — Lead, Mercury, Cadmium
Lead: Geochemistry and Neurological Effects
The primary ore minerals of lead are galena (PbS), cerussite (PbCO₃), and anglesite (PbSO₄). In natural, uncontaminated surface environments, lead concentrations in soils typically range from 10 to 40 mg/kg, and background blood lead concentrations in pre-industrial humans — estimated from measurements in archaeological bone — were approximately 0.016 μg/dL, roughly 100 times lower than the median blood lead concentration in contemporary Americans even after the phase-out of leaded petrol. The CDC’s current childhood blood lead reference value of 3.5 μg/dL (revised downward from 5 μg/dL in 2021) is defined as the 97.5th percentile of the US distribution in children aged one to five, not as a health-based threshold. The neurotoxicological evidence for effects below even 5 μg/dL is substantial: a pooled analysis by Lanphear and colleagues published in Environmental Health Perspectives in 2005, drawing on seven prospective cohort studies involving over 1,300 children, estimated an IQ decrement of 3.9 points as blood lead rose from 2.4 to 10 μg/dL, a further 1.9 points from 10 to 20 μg/dL, and 1.1 points from 20 to 30 μg/dL, a dose-response curve that is steepest at the lowest exposures, indicating a supralinear relationship rather than a threshold.
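Those pooled estimates can be turned into a piecewise-linear sketch of cumulative IQ loss. The Python function below interpolates between the published interval decrements; the underlying Lanphear model is log-linear, so treating each interval as linear is a simplification for illustration.

```python
# Cumulative IQ-point decrements relative to 2.4 ug/dL, from the pooled
# interval estimates quoted above: 3.9 (2.4-10), +1.9 (10-20), +1.1 (20-30).
BREAKPOINTS = [(2.4, 0.0), (10.0, 3.9), (20.0, 5.8), (30.0, 6.9)]

def iq_decrement(blood_pb_ug_dl: float) -> float:
    """Linearly interpolated IQ loss; clamps outside the studied range
    rather than extrapolating."""
    if blood_pb_ug_dl <= BREAKPOINTS[0][0]:
        return 0.0
    for (x0, y0), (x1, y1) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
        if blood_pb_ug_dl <= x1:
            return y0 + (y1 - y0) * (blood_pb_ug_dl - x0) / (x1 - x0)
    return BREAKPOINTS[-1][1]

# At the current CDC reference value of 3.5 ug/dL:
print(round(iq_decrement(3.5), 1))  # ~0.6 IQ points
```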
The biochemical mechanisms by which lead disrupts neurodevelopment operate primarily through ionic mimicry. Lead(II) (Pb²⁺) has an ionic radius close to that of calcium(II) (Ca²⁺) and competes with calcium in calcium-dependent biological processes including NMDA receptor gating, protein kinase C activation, and calmodulin-dependent signalling. Disruption of NMDA receptor function impairs long-term potentiation, the synaptic mechanism most closely associated with memory consolidation and learning. Lead also strongly inhibits delta-aminolaevulinic acid dehydratase (ALAD), an enzyme in the haem biosynthetic pathway, producing elevations of urinary aminolaevulinic acid and erythrocyte protoporphyrin that were historically used as biomarkers of lead exposure before blood lead measurement became routine. In natural geological settings, the Broken Hill ore body in New South Wales, Australia — one of the world’s richest lead-zinc-silver deposits, exposed by erosion over millions of years — has produced a large aureole of naturally elevated soil lead concentrations extending for kilometres around the ore body, further intensified by over a century of mining and smelting activity. Studies of children living in Broken Hill have documented blood lead concentrations well above current reference values even in areas distant from the active mine site.
Mercury: Cinnabar, Methylation, and Minamata
Mercury’s geochemical cycle is uniquely complex because it undergoes microbially mediated chemical transformation between its inorganic and organic forms, and its organic methylmercury form has profoundly different bioavailability and toxicological properties than its inorganic counterpart. The principal mercury ore mineral is cinnabar (HgS), mined at Almadén in Spain, Idrija in Slovenia, and Monte Amiata in Tuscany, Italy, among other sites. Natural volcanic degassing releases on the order of a few hundred tonnes per year of gaseous elemental mercury (Hg⁰) into the atmosphere globally, where it has an atmospheric lifetime of 0.5 to 2 years before deposition to land and water surfaces. Once deposited in aquatic sediments, inorganic mercury is methylated by anaerobic sulphate-reducing bacteria of the genus Desulfovibrio and iron-reducing bacteria, forming monomethylmercury (CH₃Hg⁺, abbreviated MeHg). This process is most active in sulphate-rich, anoxic, organic-rich sediment environments such as wetlands, estuarine muds, and reservoir sediments. Methylmercury is lipophilic and accumulates in biological tissue far more efficiently than inorganic mercury, forming the basis of bioaccumulation and biomagnification through food chains.
The most thoroughly documented example of catastrophic anthropogenic methylmercury poisoning is the Minamata disease epidemic in Kumamoto Prefecture, Japan. The Chisso chemical company’s acetaldehyde plant discharged inorganic mercury and methylmercury directly into Minamata Bay from 1932 through 1968, with peak discharges in the 1950s. Mercury accumulated in sediments and was methylated; methylmercury bioaccumulated in fish and shellfish, which were the primary protein source for fishing families in the villages surrounding the bay. Neurological symptoms — sensory disturbance in the extremities, constriction of visual fields, hearing impairment, cerebellar ataxia, and in severe cases, paralysis and death — began appearing in 1956. By 2001, 2,265 patients had been officially certified as Minamata disease victims, though many thousands more have sought but been denied certification under Japanese government criteria that epidemiologists consider overly restrictive. Among the most devastating manifestations was congenital Minamata disease, in which mothers who were exposed but showed minimal symptoms themselves bore children with severe cerebral palsy-like neurological damage. Methylmercury crosses both the blood-brain barrier and the placenta more readily than inorganic mercury; it binds covalently to cysteine residues on proteins and uses the large neutral amino acid transporter (LAT1) to enter the brain. In the fetal and neonatal brain, MeHg inhibits tubulin polymerisation, disrupting neuronal migration during cortical development and damaging the cerebellar granule cells, producing the characteristic neurological picture of congenital Minamata disease.
Cadmium: Renal Toxicity and Itai-Itai Disease
Cadmium is geochemically coupled to zinc because it substitutes isomorphically for Zn²⁺ in sphalerite (ZnS), the primary zinc ore mineral, occurring typically at concentrations of 0.1 to 0.3% cadmium by mass in zinc ores. Natural background concentrations of cadmium in unpolluted soils are typically less than 1 mg/kg. Cadmium is unusual among heavy metals in being efficiently transported into plants by zinc and iron uptake pathways, making food chain transfer a more significant exposure route than direct ingestion of soil or water for most populations. The renal tubular epithelium is the primary target organ for chronic cadmium toxicity. Cadmium accumulates in the kidney with a biological half-life of approximately 10 to 30 years, because once incorporated into metallothionein in hepatocytes (where it is initially sequestered following gastrointestinal absorption), the cadmium-metallothionein complex is eventually filtered by the glomerulus, reabsorbed by proximal tubule cells, and concentrated there. Exceeding a critical renal cortical cadmium concentration of approximately 200 mg/kg causes proximal tubular dysfunction: impaired reabsorption of low-molecular-weight proteins (tubular proteinuria, measured as elevated urinary β₂-microglobulin or retinol-binding protein), amino acids, glucose, and phosphate. Phosphate wasting leads to hypophosphataemia and secondary hyperparathyroidism; combined with the direct toxic effect of cadmium on osteoblasts, this produces severe osteomalacia and osteoporosis.
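The long biological half-life makes kidney cadmium well suited to a one-compartment accumulation model. The sketch below computes the burden B(t) = (a/k)(1 − e^(−kt)) for a constant daily flux a into the kidney and first-order elimination k = ln 2 / t½; the daily flux and the kidney mass used in the final lines are hypothetical round numbers for illustration.

```python
import math

def kidney_cd_burden_mg(daily_flux_mg: float, years: float,
                        half_life_years: float = 20.0) -> float:
    """One-compartment accumulation with first-order elimination:
    B(t) = (a / k) * (1 - exp(-k * t)), with k = ln2 / half-life.
    The 20-year default sits mid-range of the 10-30 years quoted above."""
    k = math.log(2.0) / half_life_years          # per year
    a = daily_flux_mg * 365.25                   # mg reaching the kidney per year
    return (a / k) * (1.0 - math.exp(-k * years))

# Hypothetical flux of 0.002 mg/day reaching the kidney over 40 years,
# divided by ~0.3 kg of kidney tissue for a crude concentration estimate:
burden = kidney_cd_burden_mg(0.002, 40.0)
print(f"~{burden / 0.3:.0f} mg/kg kidney vs ~200 mg/kg critical cortical level")
```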
Itai-itai disease (“ouch-ouch” disease, named for the agonised cries of affected patients) was the manifestation of cadmium poisoning in the Jinzū River basin of Toyama Prefecture, Japan, where mining and ore processing at the Kamioka mine upstream contaminated river water used for rice irrigation from approximately the 1910s onwards, with heavy contamination through the 1950s and 1960s. Over 200 confirmed Itai-itai deaths occurred, with hundreds more individuals diagnosed with the full syndrome of severe renal failure combined with multiple spontaneous fractures and agonising bone pain arising from the demineralised skeleton, occurring primarily in post-menopausal women whose physiological calcium stress (from repeated pregnancies and lactation compounded by dietary calcium insufficiency) made them most vulnerable to the bone consequences of cadmium nephropathy. Japan established a dietary cadmium standard of 0.4 mg/kg for polished rice as a direct response to the Itai-itai experience. The geological substrate — the Kamioka mine Pb-Zn ore body — was the natural source, but anthropogenic mining concentration and hydrological transport were required to produce exposure levels sufficient to cause clinical disease.
Chapter 6: Essential Elements and Deficiency Diseases with Geological Roots
Iodine: Marine Origins and Glacial Depletion
The uneven global distribution of iodine in soils and water is one of the most consequential geochemical facts in public health, responsible historically for more preventable neurological disability than perhaps any other single element. Iodine is a volatile halogen element that is concentrated in marine sediments and in soils derived from them, because ocean spray deposits iodine-rich aerosol on coastal land surfaces and marine sedimentary rocks contain iodine in organic matter at concentrations up to 40 mg/kg. Continental interiors, high mountain ranges, and regions that were covered by Pleistocene ice sheets are systematically iodine-poor for complementary reasons: ice sheets physically strip the accumulated iodine from soils, meltwater leaching removes soluble iodide, and mountainous terrain elevates the watershed well above marine iodine deposition gradients. The consequence of iodine-poor soil is iodine-poor food: crops grown on iodine-deficient soils and livestock raised on them both contain insufficient iodine to sustain normal human thyroid metabolism. The Himalayan foothills from Pakistan through Nepal to Myanmar, the Great Lakes basin of North America, the highland regions of Central Africa around the Democratic Republic of Congo, and the Andean altiplano were all historically severe iodine deficiency zones before the introduction of iodised salt programmes.
The thyroid physiology of iodine deficiency is straightforward: the thyroid gland requires iodine to synthesise the thyroid hormones thyroxine (T₄) and triiodothyronine (T₃). When dietary iodine intake falls substantially below the adult requirement of approximately 150 μg/day, plasma T₄ and T₃ concentrations fall, removing inhibitory negative feedback from the hypothalamic-pituitary-thyroid axis, and pituitary secretion of thyroid-stimulating hormone (TSH) rises. Chronically elevated TSH drives thyroid cell proliferation and hypertrophy, producing the goitre (enlarged thyroid) visible as a swelling at the front of the neck. Endemic goitre — defined as goitre prevalence above 5% in a population — was historically so common in the Alps that it was depicted in medieval art and described as a regional norm. The WHO estimated in the 1990s that approximately 655 million people globally had goitre and 1.5 billion were at risk from inadequate iodine intake. Severe iodine deficiency during fetal development, when thyroid hormones are essential for neurological maturation, produces cretinism — characterised by intellectual disability, deaf-mutism, spastic diplegia, and delayed growth — which was endemic at high prevalence in the severely iodine-deficient mountain valleys of the Himalayas, Andes, and central Africa before iodised salt became widely available.
Selenium: A Narrow Window Between Deficiency and Toxicity
Selenium provides perhaps the most instructive example of the U-shaped dose-response curve operating at a landscape scale. The geochemistry of selenium closely parallels that of sulphur — selenium substitutes for sulphur in sulphide minerals and follows sulphur through weathering and oxidation cycles — but the global distribution of soil selenium is highly heterogeneous. Oxidising, semi-arid soils derived from Cretaceous marine shales in the western United States, particularly in Wyoming, South Dakota, and Nebraska, contain selenium at concentrations of 2 to 10 mg/kg, producing plants that accumulate enough selenium to cause livestock selenosis (alkali disease and blind staggers), characterised by hoof loss, mane and tail hair loss, liver cirrhosis, and neurological signs, at dietary selenium intakes exceeding approximately 5 mg/day. Conversely, acid, highly leached soils in humid temperate and tropical regions are severely selenium-deficient. Keshan County in Heilongjiang Province, northeastern China, sits on Se-deficient volcanic parent material, and the resulting chronic selenium deficiency in the local population — dietary intakes as low as 3–11 μg/day against a recommended intake of 55 μg/day for adults — was associated with an endemic cardiomyopathy, Keshan disease, affecting primarily children and women of reproductive age, producing heart failure and high mortality.
Kashin-Beck disease is a chronic endemic osteoarthropathy affecting the epiphyseal growth plates and articular cartilage of growing children, historically endemic in a band extending from Siberia through northeastern China to Korea and Tibet, regions that share severe selenium and iodine deficiency and elevated mycotoxin exposure from grain stored under damp conditions. The relationship between selenium deficiency and Kashin-Beck disease is not as biochemically direct as for Keshan disease, and multiple cofactors including mycotoxin exposure and water fulvic acid have been proposed; however, supplementation trials and geographic correlations support selenium deficiency as a necessary if not sufficient contributory factor.
The optimal adult daily selenium intake is estimated by the WHO at 55 to 200 μg/day, with selenosis (hair and nail loss, nausea, neurological signs) appearing at chronic intakes above approximately 400 μg/day. Brazil nuts (Bertholletia excelsa) are a dramatic illustration of geological selenium uptake: grown on Se-rich Amazonian soils, individual Brazil nuts contain 10 to 95 μg of selenium, meaning that two to three nuts per day can provide the entire adult requirement, but consuming handfuls daily could approach toxicity. The soil selenium concentration in the Brazil nut grove is the direct geological determinant of this variability.
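The Brazil nut arithmetic is worth making explicit. The short Python sketch below spans the per-nut selenium range quoted above against the 55 μg/day requirement and the ~400 μg/day selenosis threshold, showing how quickly a handful of nuts from a high-selenium grove can overshoot the safe window.

```python
RDA_UG = 55        # adult daily requirement quoted above (ug)
SELENOSIS_UG = 400 # approximate chronic toxicity threshold (ug/day)

def se_intake_range_ug(n_nuts: int, lo_per_nut: float = 10.0,
                       hi_per_nut: float = 95.0) -> tuple:
    """Daily selenium intake range from n Brazil nuts, using the
    per-nut content range quoted in the text."""
    return n_nuts * lo_per_nut, n_nuts * hi_per_nut

for nuts in (2, 6, 12):
    lo, hi = se_intake_range_ug(nuts)
    print(f"{nuts:>2} nuts/day: {lo:>4.0f}-{hi:>4.0f} ug Se "
          f"(requirement {RDA_UG}, selenosis ~{SELENOSIS_UG})")
```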
Iron, Zinc, and the Geological Basis of Micronutrient Malnutrition
Iron deficiency anaemia is the most prevalent nutritional disorder globally, affecting an estimated two billion people and causing a disproportionate burden of morbidity in women of reproductive age and young children. The geological link is indirect but real: the bioavailability of dietary iron depends heavily on the chemical form of iron in the soil and hence in food crops. In highly weathered lateritic soils of the humid tropics — soils derived from intense weathering of ferromagnesian rocks under conditions of high rainfall and temperature — iron is overwhelmingly present as crystalline Fe³⁺ oxides (haematite, goethite) that are essentially insoluble at the pH of the plant rhizosphere and therefore largely unavailable for plant uptake. The staple crops grown on lateritic soils in Sub-Saharan Africa and South Asia tend to have low iron concentrations despite being grown in iron-rich soils, because soil iron and food iron bioavailability are controlled not by total iron but by its mineralogical and chemical speciation.
Zinc deficiency was first characterised as a clinical entity by Ananda Prasad and colleagues in the 1960s through studies of young men in Iran and Egypt exhibiting severe growth retardation (dwarfism) and hypogonadism. The geological context was significant: the affected populations subsisted largely on unleavened flatbread (bread prepared without fermentation) made from wheat grown on calcareous, zinc-poor soils. The high phytate content of unfermented wheat bran strongly chelates zinc and prevents its intestinal absorption, compounding the already low zinc content of the grain grown on these soils. When fermentation is used to make leavened bread, phytase enzymes (both endogenous to grain and contributed by yeast) hydrolyse phytate, releasing zinc for absorption. The global zinc deficiency burden — estimated by the WHO at over 17% of the world’s population at risk — is therefore a product of both soil zinc geochemistry (calcareous soils derived from limestone and chalk are systematically low in plant-available zinc) and dietary processing practices that modulate the bioavailability of whatever zinc the food contains.
Chapter 7: Soil Geochemistry, Bioaccumulation, and Food-Chain Transfer
Soil as the Critical Interface
Two of Japan’s great twentieth-century pollution diseases, Minamata disease and Itai-itai disease, ultimately involved soil or aquatic sediment as the medium through which toxic metals entered the food chain; the Yokkaichi asthma episode, by contrast, was driven by airborne sulphur oxides. Soil is not simply a mechanical substrate for plant growth but a dynamic biogeochemical reactor, hosting on the order of 10⁸ to 10¹⁰ microbial cells per gram of dry weight and mediating the chemical transformations of virtually every element that passes through the terrestrial component of biogeochemical cycles. The critical role of soil pH in controlling metal mobility illustrates this well: the majority of heavy metals including lead, cadmium, zinc, copper, and nickel exist in soil solution as divalent cations whose activity is governed by pH-dependent adsorption equilibria with soil organic matter, iron and manganese oxyhydroxides, and clay mineral surfaces. At soil pH values below approximately 5.5, these adsorption sites become protonated and release adsorbed metals into soil solution, increasing their bioavailability to plant roots and soil organisms. Agricultural liming — the addition of calcium carbonate or calcium hydroxide to raise soil pH — is therefore both an agronomic practice for optimising crop yields and an effective environmental management strategy for reducing metal bioavailability in contaminated soils.
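The pH control on metal mobility can be caricatured as a sorption edge. In the Python sketch below, the fraction of a divalent metal held on soil surfaces rises sigmoidally with pH; the midpoint of 5.5 echoes the threshold discussed above, but both the midpoint and the slope are illustrative parameters, not fitted values for any particular metal or soil.

```python
def fraction_sorbed(ph: float, ph50: float = 5.5, slope: float = 1.0) -> float:
    """Idealised sorption edge for a divalent metal cation: the sorbed
    fraction rises sigmoidally with pH as surface sites deprotonate."""
    return 1.0 / (1.0 + 10.0 ** (slope * (ph50 - ph)))

# Liming an acid soil from pH 4.5 to 6.5 sharply cuts the dissolved fraction:
for ph in (4.5, 5.5, 6.5, 7.5):
    sorbed = fraction_sorbed(ph)
    print(f"pH {ph}: ~{sorbed:.0%} sorbed, ~{1 - sorbed:.0%} in soil solution")
```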
Plant Uptake Mechanisms
Plants acquire essential metal micronutrients through specific membrane transport proteins. Iron uptake in Strategy I plants (non-graminaceous species including most vegetables and fruit trees) involves acidification of the rhizosphere by plasma membrane H⁺-ATPases, reduction of Fe³⁺ to Fe²⁺ by the ferric reductase FRO2, and uptake of Fe²⁺ through the transporter IRT1 (Iron-Regulated Transporter 1). Zinc is taken up through the ZIP family of transporters, particularly ZIP4. These specific transport systems have a critical vulnerability from a food chain toxicology perspective: cadmium, whose ionic radius and coordination chemistry are similar to those of zinc, is transported into roots by the same ZIP transporters, with little discrimination between Zn²⁺ and Cd²⁺ at the molecular level. Lead uptake by plants is more passive, occurring partly through calcium channels and partly through mass flow in the transpiration stream, and most plant species are effective barriers to lead translocation from root to shoot — the majority of lead taken up by roots is sequestered in root vacuoles and does not reach the edible above-ground portions of food plants. Cadmium, however, is translocated efficiently to shoots and seeds, making grain crops a significant dietary cadmium source.
Hyperaccumulator plants offer a biologically fascinating extreme of metal tolerance and accumulation. Thlaspi caerulescens (alpine pennycress), growing on zinc- and cadmium-rich calamine soils in the Ardennes of Belgium and Luxembourg and in the Peak District of England, accumulates zinc to concentrations exceeding 3% of shoot dry weight — roughly 1,000 times the concentration in non-accumulator species growing on the same soils — using hyperactivated ZIP transporter expression and enhanced vacuolar sequestration in the shoot via CDF (Cation Diffusion Facilitator) family transporters. Hyperaccumulators have attracted considerable interest as tools for phytoremediation, the use of plants to extract metals from contaminated soils, though the economics of phytoremediation are challenging given the typically low biomass production of hyperaccumulator species.
Mercury Biomagnification in Aquatic Food Webs
The food chain transfer of methylmercury provides the most quantitatively dramatic example of biomagnification in environmental toxicology. Dissolved inorganic mercury in seawater is typically present at concentrations of 0.5 to 1.5 ng/L. Phytoplankton accumulate methylmercury from these dissolved concentrations, achieving bioconcentration factors (the ratio of organism concentration to water concentration) on the order of 10⁵ to 10⁶. Zooplankton grazing on phytoplankton achieve methylmercury concentrations roughly 3 to 5 times higher than their prey — the trophic magnification factor (TMF) — because MeHg is not metabolised or excreted efficiently. Small forage fish (herring, anchovies) feeding on zooplankton achieve another 3 to 5-fold concentration increase; large predatory fish (tuna, swordfish, shark) feeding on forage fish achieve a further multiplication. The net result is that large tuna captured in the open Pacific contain methylmercury at concentrations of 0.3 to 1.0 μg/g wet weight — concentrations approximately one million times higher than the water from which the mercury originally entered the food web. Polar bears and marine mammals at the top of Arctic and Subarctic food webs, where mercury deposition from atmospheric transport is elevated and food chains are long, have liver methylmercury concentrations that routinely exceed one μg/g, concentrations that in experimental studies are associated with neurological and reproductive toxicity.
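Chaining the quoted factors reproduces the million-fold magnification. The Python sketch below propagates a dissolved MeHg concentration up four trophic levels using a bioconcentration factor of 10⁵ and a TMF of 3, both from the low ends of the ranges above; treating 1 L of water as roughly 1 kg keeps the unit conversion simple.

```python
def mehg_food_web(water_ng_per_l: float = 0.5, bcf: float = 1e5,
                  tmf: float = 3.0) -> dict:
    """MeHg concentrations (ug/g wet weight) up the food web: water ->
    phytoplankton via the bioconcentration factor, then one trophic
    magnification step per feeding level."""
    conc_ug_per_g = water_ng_per_l * bcf / 1e6  # ng/kg -> ug/g (1 L ~ 1 kg)
    levels = {}
    for level in ("phytoplankton", "zooplankton", "forage fish", "predatory fish"):
        levels[level] = conc_ug_per_g
        conc_ug_per_g *= tmf
    return levels

for level, c in mehg_food_web().items():
    print(f"{level:>15}: {c:.2f} ug/g")
# Predatory fish come out near 1.4 ug/g, the same order of magnitude as
# the 0.3-1.0 ug/g quoted above for large tuna.
```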
The joint FDA and EPA fish consumption advice places the highest-mercury species (shark, swordfish, king mackerel, tilefish, and bigeye tuna) on its list of fish for pregnant women to avoid entirely, while encouraging two to three weekly servings of lower-mercury options such as salmon, sardines, and pollock. The geological underpinning of this advisory is the natural mercury flux from volcanic sources combined with anthropogenic mercury from coal combustion and artisanal gold mining, both of which introduce inorganic mercury that is ultimately converted to methylmercury in aquatic sediments.
Cadmium in Rice and the Japanese Dietary Standard
Rice (Oryza sativa) is unusual among cereal crops in its propensity to accumulate cadmium, for two reasons: the water management of paddy cultivation and the efficiency with which rice loads cadmium into the grain. Under flooded conditions the soil becomes anoxic and reducing, and cadmium is largely immobilised as insoluble sulphide; but when paddies are drained in mid-season and before harvest, as is standard practice, the soil re-oxidises, sulphides dissolve, and cadmium is released into soil solution at precisely the time the grain is filling. The efficient metal uptake system of rice — particularly the OsIRT1 and OsNramp5 transporters — takes up cadmium alongside iron, zinc, and manganese, and the OsHMA3 vacuolar sequestration system in rice is less effective at retaining cadmium in root cells than equivalent systems in most other cereals, allowing cadmium to be loaded into the phloem and transported to the grain. Japan's regulatory limit for cadmium in rice, first set at 1.0 mg/kg in the wake of the Itai-itai epidemic and tightened in 2010 to the Codex-aligned 0.4 mg/kg for polished rice, traces directly to the Jinzū River basin contamination episode, which drove the first comprehensive national survey of rice cadmium concentrations across Japanese prefectures. Rice from areas of Toyama Prefecture downstream of the Kamioka mine consistently exceeded 0.4 mg/kg, while most of Japan showed concentrations of 0.05 to 0.15 mg/kg. For populations in Asia where polished rice provides 50 to 80% of total calorie intake, rice is the single dominant dietary cadmium source, and geological or anthropogenic enrichment of paddy soils with cadmium translates directly into elevated kidney cadmium burden and risk of tubular dysfunction.
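To see how grain concentration translates into body burden, a hedged back-of-envelope intake calculation helps; the consumption and body-weight figures below are illustrative assumptions, and the tolerable intake used is JECFA's provisional tolerable monthly intake (PTMI) for cadmium of 25 μg per kg body weight.

```python
# Back-of-envelope dietary cadmium intake from rice.
# Consumption and body-weight figures are illustrative assumptions.

rice_cd_mg_per_kg = 0.10       # grain Cd within the typical 0.05-0.15 mg/kg range
rice_intake_kg_per_day = 0.3   # ~300 g polished rice/day, a rice-dominated diet
body_weight_kg = 60.0

daily_intake_ug = rice_cd_mg_per_kg * rice_intake_kg_per_day * 1000  # mg -> ug
monthly_per_kg_bw = daily_intake_ug * 30 / body_weight_kg

PTMI_ug_per_kg_bw = 25.0       # JECFA provisional tolerable monthly intake for Cd

print(f"Daily Cd intake from rice: {daily_intake_ug:.0f} ug/day")          # 30 ug/day
print(f"Monthly intake: {monthly_per_kg_bw:.0f} ug/kg bw vs PTMI {PTMI_ug_per_kg_bw:.0f}")
```

At a typical 0.10 mg/kg grain concentration, rice alone consumes roughly 60% of the tolerable monthly intake under these assumptions; at the 0.4 mg/kg regulatory limit the same diet would deliver about 60 μg/kg bw per month, well above the PTMI, which illustrates how little margin rice-dominated diets leave.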
Geophagy: Soil Consumption and Dual Exposure
Geophagy — the deliberate ingestion of soil, clay, or earth — is a widespread human practice documented across dozens of cultures, with particularly high prevalence in specific demographic groups: pregnant women in Sub-Saharan Africa, young children in both rural and urban settings globally, and some communities in Appalachian North America. In the Kisii highlands of Kenya, a systematic survey by Geissler and colleagues found that over half of pregnant women consumed clay regularly during pregnancy, obtaining specific clay from particular hillside outcrops that was traded commercially at local markets. Proposed functional benefits of geophagy include supplementation of calcium, iron, and other minerals from the clay itself, and the protective binding of dietary toxins (phytates, mycotoxins) by clay mineral surfaces in the gastrointestinal tract. The risks are equally real: soils in mining- and smelter-affected areas may contain lead, arsenic, and cadmium at concentrations that represent a direct ingestion hazard, particularly for children, who also ingest soil adventitiously through hand-to-mouth behaviour during play. The geological substrate of the consumed clay therefore determines whether the practice is net beneficial or net harmful: illitic and smectitic clays from uncontaminated Precambrian basement terrain may deliver bioavailable iron and calcium, while soils from mineralised terrains near ore bodies may deliver toxic metal doses exceeding tolerable daily intakes. The clay minerals themselves (kaolinite, smectite, attapulgite) differ in iron content and toxin-binding capacity, so the mineralogical identity of the ingested material governs both its potential benefit and its potential hazard.
Chapter 8: Natural vs Anthropogenic Geochemical Signals — Distinguishing and Mapping
The Baseline Problem
In 1989, when environmental investigators first began characterising contamination in the sediments of the Sudbury Basin in Ontario, they faced a challenge that pervades environmental geochemistry: how to distinguish what was there before human activity from what has been added since. Sudbury sits atop one of the world’s largest nickel-copper ore bodies, and a century of smelting from the 1880s onwards had deposited enormous quantities of nickel, copper, cobalt, sulphur, and arsenic across a landscape that was already geochemically enriched in these elements by the natural presence of the ore body. Defining “background” in this context requires determining what the natural geochemical variability would look like without the mining and smelting overlay, which in turn requires either pre-industrial reference materials (lake sediment cores predating industrialisation, peat bog profiles, ice cores) or spatial comparison with similar geological terrains lacking the anthropogenic source. The baseline problem has regulatory consequences: cleanup targets in contaminated-land legislation are often benchmarked against “background”, and if background is set from typical crustal values that ignore naturally anomalous terrains, cleanup can be demanded in areas that were never contaminated; conversely, if background is drawn so broadly that it absorbs the anthropogenic signal, genuinely contaminated areas overlying natural anomalies may escape remediation.
Palaeoenvironmental archives offer the most rigorous solution to the baseline problem. Lake sediment cores from lakes in otherwise-pristine catchments accumulate a continuous stratigraphic record of atmospheric deposition that can be dated by radiometric methods (²¹⁰Pb for the past century, ¹⁴C for longer timescales). Metal concentrations in dated sediment sections provide a proxy for historical atmospheric deposition rates; spikes in lead, cadmium, and zinc concentrations in twentieth-century layers relative to pre-industrial levels in the same core quantify the anthropogenic enrichment factor. In Greenland ice cores, Boutron and colleagues demonstrated that lead concentrations in ice layers from the Roman period (roughly 200 BCE to 200 CE) were approximately four times higher than pre-Roman background, reflecting Roman lead smelting in Spain and Britain; twentieth-century concentrations were 200 times pre-Roman background, representing the leaded petrol era.
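The enrichment factor itself is conventionally computed by normalising the metal of interest to a conservative lithogenic element such as aluminium or titanium, which corrects for variations in detrital mineral input between layers; a standard formulation (generic community practice, not tied to any single study cited here) is:

\[ EF = \frac{\left( C_{\mathrm{M}} / C_{\mathrm{Al}} \right)_{\mathrm{sample}}}{\left( C_{\mathrm{M}} / C_{\mathrm{Al}} \right)_{\mathrm{background}}} \]

where \(C_{\mathrm{M}}\) is the concentration of the metal of interest and the background ratio is taken from pre-industrial sections of the same core. An EF near 1 indicates purely lithogenic supply, while values well above 1 quantify the anthropogenic overlay.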
Stable Isotope Fingerprinting
Lead isotope geochemistry provides one of the most powerful tools available for tracing the geological and industrial origins of environmental lead contamination. Lead has four stable isotopes: ²⁰⁴Pb (non-radiogenic), ²⁰⁶Pb (radiogenic decay product of ²³⁸U), ²⁰⁷Pb (radiogenic decay product of ²³⁵U), and ²⁰⁸Pb (radiogenic decay product of ²³²Th). Because the proportions of the radiogenic isotopes in any ore deposit depend on the uranium, thorium, and lead contents of the source rock and the age of the mineralisation, different ore deposits carry distinctive and measurable isotopic “fingerprints”, expressed as ratios such as ²⁰⁶Pb/²⁰⁷Pb or ²⁰⁸Pb/²⁰⁶Pb. Natural crustal lead typically has a ²⁰⁶Pb/²⁰⁷Pb ratio around 1.20–1.22; Roman lead from Spanish mines had a ratio near 1.15–1.17; and European petrol-additive lead, derived largely from unradiogenic Australian (Broken Hill-type) ore, was markedly lower, roughly 1.06–1.12. This fingerprinting capacity allowed Rosman and colleagues, in a series of Greenland ice and snow studies during the 1990s, to demonstrate that the ²⁰⁶Pb/²⁰⁷Pb ratio of deposited lead began declining with the onset of Bronze Age metallurgy, continued falling through the Roman period, recovered partially during the early medieval lull in smelting, and collapsed to its lowest values in the mid-twentieth century at the peak of leaded petrol use, before rising again after the phase-out.
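A common first-order use of these ratios is a binary mixing estimate: given endmember ratios for natural and petrol-derived lead, the measured ratio of a sample yields the approximate fraction of petrol lead. The sketch below uses illustrative endmember values consistent with the paragraph above and treats ²⁰⁶Pb/²⁰⁷Pb mixing as linear, an approximation that is adequate when the endmembers have similar ²⁰⁷Pb abundances.

```python
# First-order binary mixing of lead sources using 206Pb/207Pb ratios.
# Endmember values are illustrative; linear mixing of the ratio is an
# approximation, not an exact isotope mass balance.

R_NATURAL = 1.20   # typical natural crustal 206Pb/207Pb
R_PETROL = 1.09    # typical European petrol-additive lead (Broken Hill-type ore)

def petrol_lead_fraction(r_sample: float) -> float:
    """Approximate fraction of petrol-derived lead in a sample."""
    f = (R_NATURAL - r_sample) / (R_NATURAL - R_PETROL)
    return min(max(f, 0.0), 1.0)   # clamp to the physically meaningful range

# A hypothetical mid-century urban aerosol measurement:
print(f"Petrol fraction: {petrol_lead_fraction(1.12):.0%}")   # ~73%
```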
Mercury isotope analysis has added another dimension to source attribution. Mercury has seven stable isotopes (¹⁹⁶Hg, ¹⁹⁸Hg, ¹⁹⁹Hg, ²⁰⁰Hg, ²⁰¹Hg, ²⁰²Hg, ²⁰⁴Hg), and in addition to conventional mass-dependent fractionation (MDF), mercury undergoes mass-independent fractionation (MIF) — expressed as Δ¹⁹⁹Hg and Δ²⁰¹Hg — during photochemical reactions in the atmosphere and surface waters. Because volcanic mercury, anthropogenic mercury, and methylmercury each carry distinctive MIF signatures, mercury isotope ratios in archived fish tissue, sediment cores, and biological samples can distinguish contributions from these different source types. Studies of Minamata Bay sediments using mercury MIF confirmed the dominance of the Chisso plant discharge in bay contamination, while Arctic studies have used mercury MIF to disentangle the contributions of Pacific Ocean methylmercury, atmospheric deposition, and local geothermal sources.
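For orientation, mercury isotope compositions are reported in delta notation relative to the NIST SRM 3133 standard, and the mass-independent anomalies are defined as deviations from the mass-dependent scaling; the conventional definitions (standard community practice rather than anything specific to the studies above) are:

\[ \delta^{202}\mathrm{Hg} = \left[ \frac{(^{202}\mathrm{Hg}/^{198}\mathrm{Hg})_{\mathrm{sample}}}{(^{202}\mathrm{Hg}/^{198}\mathrm{Hg})_{\mathrm{NIST\,3133}}} - 1 \right] \times 1000 \]

\[ \Delta^{199}\mathrm{Hg} \approx \delta^{199}\mathrm{Hg} - 0.252 \times \delta^{202}\mathrm{Hg}, \qquad \Delta^{201}\mathrm{Hg} \approx \delta^{201}\mathrm{Hg} - 0.752 \times \delta^{202}\mathrm{Hg} \]

Photochemical reduction in sunlit surface waters drives the odd-mass isotopes of the residual methylmercury pool towards positive Δ¹⁹⁹Hg, which is why fish-tissue methylmercury typically carries a positive anomaly while most geological and industrial source materials cluster near zero.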
Geochemical Mapping and Health Correlation
National geochemical mapping programmes provide the spatial data needed to link geological substrate with population health statistics. The British Geological Survey’s G-BASE (Geochemical Baseline Survey of the Environment) project, which built on regional geochemical surveys begun in the late 1960s and extended coverage across Great Britain, collected stream sediment, surface water, and soil samples at a density of approximately one site per 2 km², creating one of the most comprehensive national geochemical databases in the world. Analysis of G-BASE data has allowed correlations between soil element concentrations and disease incidence rates from national cancer registries, controlling for socioeconomic and lifestyle confounders. In Italy, Senesi and colleagues (2005) correlated the geochemical atlas of southern Italy (based on stream sediment geochemistry from over 6,000 sampling points) with provincial cancer mortality statistics, finding significant associations between high soil arsenic and skin cancer mortality, and between high soil lead and bladder cancer mortality, in the Campania region. The USGS National Geochemical Survey, covering the conterminous United States with stream sediment, soil, and rock geochemical data, similarly allows spatial analysis of metal distributions at the continental scale and provides the baseline for assessing anomalies associated with ore bodies, mining districts, and volcanic terrains.
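In practice such health correlations are ecological analyses: point geochemical samples are aggregated to the administrative units used by disease registries and then compared with registry rates. The sketch below shows the skeleton of such an analysis with pandas and scipy; the file names and column names are hypothetical placeholders, and a real study would adjust for confounders (for example by regressing rates on deprivation indices alongside the geochemical term) rather than relying on a bare rank correlation.

```python
# Skeleton of an ecological correlation between mapped soil geochemistry
# and registry disease rates. File and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

soils = pd.read_csv("soil_samples.csv")        # columns: district, as_mg_kg
health = pd.read_csv("cancer_registry.csv")    # columns: district, skin_ca_rate

# Aggregate point samples to the administrative units used by the registry.
district_as = soils.groupby("district")["as_mg_kg"].median()

merged = health.set_index("district").join(district_as).dropna()

rho, p = spearmanr(merged["as_mg_kg"], merged["skin_ca_rate"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A real analysis would add confounder adjustment, e.g. regression on
# socioeconomic deprivation indices alongside the geochemical predictor.
```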
Acid Mine Drainage
Acid mine drainage (AMD) represents one of the most widespread and geochemically consequential forms of contamination arising from the interaction between mining activity and natural geological processes. The generating mechanism begins with the oxidation of pyrite (FeS₂) and other iron sulphide minerals exposed to oxygen and water in mine tailings, waste rock piles, and underground workings. The initial abiotic oxidation produces sulphuric acid and dissolved Fe²⁺; oxidation of Fe²⁺ to Fe³⁺ by the iron-oxidising bacterium Acidithiobacillus ferrooxidans proceeds far faster than the abiotic equivalent and generates additional acidity while supplying dissolved Fe³⁺, itself a powerful oxidant for further pyrite dissolution. Once the pH falls below approximately 3.5, Fe³⁺ remains in solution rather than precipitating and Acidithiobacillus growth accelerates, creating a positive feedback that can drive mine drainage to pH values below 1 and dissolved metal concentrations of hundreds to thousands of mg/L. The Wheal Jane mine in Cornwall, UK, released an acid plume that discharged via the Carnon River into the Fal estuary in January 1992, producing a vivid orange-red iron hydroxide plume extending over several kilometres of water and causing acute toxicity to invertebrate and fish communities; long-term passive remediation using limestone channels and constructed wetlands has since been implemented.
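The reaction sequence, as conventionally written in the AMD literature, makes this feedback explicit:

\[ \mathrm{FeS_2 + \tfrac{7}{2}\,O_2 + H_2O \rightarrow Fe^{2+} + 2\,SO_4^{2-} + 2\,H^+} \]

\[ \mathrm{Fe^{2+} + \tfrac{1}{4}\,O_2 + H^+ \xrightarrow{\text{bacteria}} Fe^{3+} + \tfrac{1}{2}\,H_2O} \]

\[ \mathrm{FeS_2 + 14\,Fe^{3+} + 8\,H_2O \rightarrow 15\,Fe^{2+} + 2\,SO_4^{2-} + 16\,H^+} \]

\[ \mathrm{Fe^{3+} + 3\,H_2O \rightarrow Fe(OH)_3\!\downarrow + 3\,H^+} \]

The third reaction regenerates Fe²⁺ for the bacterially catalysed second step, while the fourth, the hydrolysis of ferric iron wherever the pH rises above roughly 3.5, precipitates the familiar orange ochre and releases still more acidity.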
The Río Tinto river system in Andalusia, Spain, offers the remarkable counterpoint of a naturally acidic AMD-like system that predates any human mining by millions of years, generated by natural pyrite weathering in the Iberian Pyrite Belt. The Río Tinto runs bright orange-red due to dissolved iron and has pH values of 1.7 to 2.5 throughout its length; it nevertheless supports a specialised community of iron- and sulphur-oxidising archaea and bacteria, and has attracted astrobiological interest as an analogue for potentially habitable extreme environments on Mars or Europa.
Risk Assessment from Source to Receptor
The formal risk assessment framework for geologically derived contaminants links source, pathway, and receptor in a quantitative chain that allows calculation of expected exposure and comparison with toxicological reference values.
The exposure calculation for soil ingestion combines several parameters into a daily intake estimate:
\[ \text{Daily Intake} \left(\frac{\text{mg}}{\text{kg}\cdot\text{day}}\right) = \frac{C_{\text{soil}} \times IR \times ABS}{BW} \]

where \(C_{\text{soil}}\) is the contaminant concentration in soil (mg/kg), \(IR\) is the soil ingestion rate (kg/day; typically 0.0001 kg/day, i.e. 100 mg/day, for adults and up to 0.0002 kg/day for children), \(BW\) is body weight (kg), and \(ABS\) is the oral bioavailability fraction (dimensionless, ranging from near zero for some lead mineral forms to near unity for soluble arsenic species). Expressing intake per kilogram of body weight allows direct comparison with the toxicological reference dose (RfD) — the dose, in mg/kg·day, below which no adverse effects are expected over a lifetime of exposure — to produce a hazard quotient; a hazard quotient greater than one triggers further investigation or remediation action.
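A worked numerical sketch of this chain follows, using hypothetical but representative parameter values; the soil concentration and bioavailability below are illustrative assumptions rather than regulatory values for any real site, and the RfD shown is the widely used oral reference dose for inorganic arsenic.

```python
# Hazard quotient for incidental soil ingestion by a child.
# Soil concentration and bioavailability are illustrative assumptions.

c_soil = 40.0       # arsenic in soil, mg/kg (a mineralised-terrain scenario)
ir = 0.0002         # child soil ingestion rate, kg/day (200 mg/day)
bw = 15.0           # child body weight, kg
abs_oral = 0.6      # assumed oral bioavailability fraction

daily_intake = c_soil * ir * abs_oral / bw      # mg per kg body weight per day

rfd = 0.0003        # oral reference dose for inorganic arsenic, mg/(kg*day)
hq = daily_intake / rfd

print(f"Daily intake: {daily_intake:.2e} mg/(kg*day)")   # ~3.2e-04
print(f"Hazard quotient: {hq:.1f}")                       # ~1.1, just above 1
```

With these assumptions the quotient lands just above one, illustrating how a mineralised but not grossly contaminated soil can still trip the screening threshold for the most exposed receptor group.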
The distinction between residential and agricultural land use scenarios is critical in practice. Residential scenarios assume that people spend extended periods on the property, including children playing in soil, and apply higher ingestion rates and longer daily contact durations than agricultural scenarios where workers visit periodically. Allotment gardens and smallholdings on urban periphery soils in former industrial areas represent an intermediate scenario of particular concern, because they combine residential-level contact duration with the food-chain pathway of growing vegetables in potentially contaminated soil. The geological and anthropogenic history of a site — expressed through its soil geochemical profile — is therefore the indispensable first input to any exposure assessment, and the convergence of geochemical mapping with population health surveillance represents the practical synthesis of everything that medical geology as a discipline aims to achieve.
| Element | Primary Geological Source | Key Health Effect (Excess) | Key Health Effect (Deficiency) | WHO Guideline (Drinking Water) |
|---|---|---|---|---|
| Fluoride | Fluorite, fluorapatite dissolution | Dental/skeletal fluorosis | Dental caries | 1.5 mg/L |
| Arsenic | Arsenopyrite, Fe-oxyhydroxide desorption | Bladder/skin/lung cancer; keratosis | None established | 10 μg/L |
| Lead | Galena weathering | Cognitive impairment (children); hypertension | None | 10 μg/L |
| Mercury (MeHg) | Cinnabar; volcanic degassing + methylation | Cerebellar ataxia; congenital neurotoxicity | None | 6 μg/L (inorganic Hg) |
| Cadmium | Sphalerite, phosphate fertilisers | Renal tubular dysfunction; osteoporosis | None | 3 μg/L |
| Selenium | Marine shale oxidation | Hair/nail loss; selenosis | Keshan cardiomyopathy; Kashin-Beck disease | 40 μg/L (provisional) |
| Iodine | Marine sediments; sea spray | Thyroid suppression (very high doses) | Goitre; cretinism | — |