NE 110: Introduction to Nanomaterials Health Risks
Howard Siu
Estimated study time: 18 minutes
Sources and References
- Oberdörster G., Oberdörster E., Oberdörster J. Principles of Nanotoxicology and foundational reviews on ultrafine and engineered nanoparticle toxicology (Environmental Health Perspectives).
- Monteiro-Riviere N. A., Tran C. L. (eds.) Nanotoxicology: Progress Toward Nanomedicine, 2nd ed., CRC Press.
- Fadeel B., Pietroiusti A., Shvedova A. A. (eds.) Adverse Effects of Engineered Nanomaterials: Exposure, Toxicology, and Impact on Human Health, Academic Press.
- Donaldson K., Poland C. A. and collaborators, reviews on high-aspect-ratio nanomaterials and the fibre pathogenicity paradigm.
- NIOSH Current Intelligence Bulletins on occupational exposure to carbon nanotubes, nanofibres, and titanium dioxide; NIOSH nanotechnology research centre guidance documents.
- OECD Working Party on Manufactured Nanomaterials, test guideline dossiers, and Guidance Manual for the Testing of Manufactured Nanomaterials.
- Rothman K. J., Greenland S., Lash T. L. Modern Epidemiology for study design, confounding, and causal inference.
- IARC Monographs on the Evaluation of Carcinogenic Risks to Humans (crystalline silica, asbestos, arsenic, MWCNT-7, glyphosate).
- WHO/UNEP Stockholm Convention materials on persistent organic pollutants (“Dirty Dozen”).
- U.S. EPA Risk Assessment Guidance for Superfund and ECHA REACH guidance on exposure and dose-response assessment.
- MIT OpenCourseWare toxicology and environmental health offerings; Harvard T. H. Chan School of Public Health nanotoxicology and occupational hygiene course materials.
Chapter 1: Hazard Identification and the Foundations of Toxicology
Nanomaterial health-risk thinking begins by distinguishing a hazard from a risk. A hazard is an intrinsic capacity to cause harm; risk is the probability that the hazard will produce injury under a realistic exposure scenario. Hazard identification therefore asks whether a substance can, in principle, damage living tissue, while risk assessment asks how likely that damage becomes once exposure, dose, and susceptibility are accounted for. Toxicology supplies the biological evidence that underlies hazard identification, drawing on in vitro assays, whole-animal studies, human data, and mechanistic reasoning about how a molecule or particle interferes with cellular machinery.
Paracelsus’ dictum that “the dose makes the poison” still structures modern toxicology, but nanoscale substances force a refinement: the metric used to express dose can change conclusions. Mass-based dose is the historical default, yet ultrafine and engineered nanoparticles exert effects that correlate more closely with surface area or particle number than with mass. A nominal microgram of 20 nm titanium dioxide contains orders of magnitude more particles and far more surface area than the same microgram of micrometre-scale powder, and the biological response differs accordingly. Expressing dose-response in alternative metrics is therefore a routine step in nanotoxicology.
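The mass-versus-surface-area argument can be made concrete with a quick calculation. The sketch below assumes monodisperse spheres and a rutile-like density of 4.2 g/cm³; these are illustrative values, not a dosimetry model:

```python
import math

def particles_and_area(mass_kg, diameter_m, density_kg_m3):
    """Particle count and total surface area for monodisperse spheres."""
    v = math.pi * diameter_m**3 / 6      # volume of one sphere (m^3)
    n = mass_kg / (density_kg_m3 * v)    # number of particles in the mass
    area = n * math.pi * diameter_m**2   # total surface area (m^2)
    return n, area

MASS = 1e-9        # 1 microgram, in kg
RHO_TIO2 = 4.2e3   # assumed rutile-like density, kg/m^3

n_nano, a_nano = particles_and_area(MASS, 20e-9, RHO_TIO2)
n_micro, a_micro = particles_and_area(MASS, 2e-6, RHO_TIO2)

# Particle number scales as 1/d^3 and surface area per mass as 1/d, so a
# 100-fold size reduction gives 10^6 more particles and 100x more area.
print(f"{n_nano / n_micro:.0e} times more particles")     # 1e+06
print(f"{a_nano / a_micro:.0f} times more surface area")  # 100
```

The same microgram, expressed in particle-number or surface-area metrics, is a very different dose.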
A typical dose-response relationship is summarized using a sigmoidal (Hill) model,
\[ E(d) = E_{\max} \cdot \frac{d^{n}}{\mathrm{ED}_{50}^{n} + d^{n}}, \]
which relates effect magnitude \(E\) to dose \(d\) through a Hill coefficient \(n\) and the median effective dose \(\mathrm{ED}_{50}\), the dose producing the half-maximal response. Thresholded effects are described by a no-observed-adverse-effect level (NOAEL) and a lowest-observed-adverse-effect level (LOAEL); benchmark dose modelling fits a statistical curve to experimental counts and identifies a benchmark dose lower confidence limit \(\mathrm{BMDL}_{x}\) producing an acceptably small excess response \(x\). Toxicity endpoints range from acute lethality through organ-specific injury, reproductive and developmental effects, immunotoxicity, genotoxicity, and carcinogenicity.
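A minimal implementation of the Hill model, with entirely hypothetical parameters, shows how the \(\mathrm{ED}_{50}\) and the Hill coefficient shape the curve:

```python
def hill_response(dose, e_max, ed50, n):
    """Sigmoidal (Hill) dose-response: E(d) = Emax * d^n / (ED50^n + d^n)."""
    return e_max * dose**n / (ed50**n + dose**n)

# At d = ED50 the response is half-maximal by construction; the Hill
# coefficient n controls how steep the transition is around ED50.
assert hill_response(10.0, e_max=100.0, ed50=10.0, n=2.0) == 50.0

# Hypothetical curve: Emax = 100, ED50 = 10, n = 2 (illustrative only).
for d in (1, 5, 10, 50, 100):
    print(d, round(hill_response(d, 100.0, 10.0, 2.0), 1))
```

Benchmark dose software fits such curves (plus a statistical error model) to experimental counts rather than evaluating them forward as here.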
Chapter 2: Nanotoxicology, Fibre Pathogenicity, and Risk Assessment
Engineered nanomaterials (ENMs) are intentionally produced particles or structures with at least one external dimension between roughly 1 and 100 nm. Ultrafines are the unintentional counterparts, generated by combustion, welding, and other high-temperature processes. Both classes can evade mucociliary clearance, deposit deep in the respiratory tract, translocate across biological barriers, and generate reactive oxygen species at defect-rich surfaces. Nanotoxicology addresses the shared physicochemical drivers of these responses: particle size, aggregation state, crystal phase, surface chemistry, dissolution rate, shape, stiffness, and the evolving biomolecular corona that forms once a particle enters biological fluids.
The comparison between multi-walled carbon nanotubes (MWCNTs) and asbestos is a defining case. The fibre pathogenicity paradigm, developed from decades of asbestos research and extended to high-aspect-ratio nanomaterials, proposes that long, thin, biopersistent fibres frustrate macrophage clearance in the pleural space, sustaining chronic inflammation that can culminate in mesothelioma. Landmark animal studies, including the intraperitoneal injection work associated with Donaldson, Poland, and colleagues, demonstrated that long, rigid MWCNTs reproduced asbestos-like granulomas and inflammatory signatures, while short or tangled preparations did not. IARC subsequently classified MWCNT-7 as possibly carcinogenic to humans (Group 2B), underscoring that not all carbon nanotubes share equivalent hazard profiles.
Risk assessment formalizes the path from hazard to public-health decision. The classical four-step framework used by regulatory agencies comprises hazard identification, dose-response assessment, exposure assessment, and risk characterization. For threshold effects, a reference dose is derived by dividing a point of departure such as a NOAEL or \(\mathrm{BMDL}\) by uncertainty factors addressing interspecies extrapolation, intraspecies variability, subchronic-to-chronic conversion, and database deficiencies:
\[ \mathrm{RfD} = \frac{\mathrm{POD}}{\mathrm{UF}_{A} \cdot \mathrm{UF}_{H} \cdot \mathrm{UF}_{S} \cdot \mathrm{UF}_{D}} \]
For non-threshold carcinogens, a cancer slope factor converts lifetime average daily dose to excess lifetime cancer risk. NIOSH has developed nanomaterial-specific recommended exposure limits, for example for ultrafine and fine titanium dioxide and for carbon nanotubes and nanofibres, using benchmark dose modelling on rodent pulmonary inflammation and fibrosis endpoints.
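The RfD arithmetic is simple enough to sketch directly. The NOAEL and the default 10x factors below are hypothetical illustrations, not values for any real substance:

```python
def reference_dose(pod, uf_a=10.0, uf_h=10.0, uf_s=1.0, uf_d=1.0):
    """RfD = POD / (UF_A * UF_H * UF_S * UF_D), in the POD's dose units.

    uf_a: interspecies extrapolation; uf_h: intraspecies variability;
    uf_s: subchronic-to-chronic conversion; uf_d: database deficiencies.
    """
    return pod / (uf_a * uf_h * uf_s * uf_d)

# Hypothetical example: a NOAEL of 5 mg/kg-day divided by the default
# 10x factors for interspecies and intraspecies extrapolation.
rfd = reference_dose(pod=5.0)   # -> 0.05 mg/kg-day
print(f"RfD = {rfd} mg/kg-day")

# A hazard quotient then compares an estimated intake to the RfD;
# HQ > 1 flags potential concern for threshold effects.
print(f"HQ at 0.01 mg/kg-day intake = {0.01 / rfd:.1f}")
```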
Chapter 3: Inhalation, Ingestion, and Dermal Exposure Pathways
Inhalation is the dominant occupational route for airborne ENMs. Deposition in the respiratory tract depends on particle aerodynamic diameter for larger particles and on diffusional mobility for ultrafines. The International Commission on Radiological Protection (ICRP) lung deposition model partitions inhaled aerosol into head airways, tracheobronchial, and alveolar regions. A useful simplification expresses the deposited dose rate in a region as
\[ \dot{D}_{\mathrm{dep}} = C_{\mathrm{air}} \cdot \dot{V}_{E} \cdot f_{\mathrm{dep}}(d_{p}) \]
where \(C_{\mathrm{air}}\) is airborne concentration, \(\dot{V}_{E}\) is minute ventilation, and \(f_{\mathrm{dep}}(d_{p})\) is the size-dependent deposition fraction. Diffusional deposition of ultrafines peaks in the alveolar region for particles near 20 nm, while impaction dominates for micrometre fibres in the upper airways. Once deposited, clearance proceeds through mucociliary escalator transport, alveolar macrophage phagocytosis, and, for poorly soluble persistent particles, long-term retention that can trigger interstitialisation and lung overload.
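The deposited-dose expression translates directly into code. All numbers below (concentration, minute ventilation, deposition fraction) are hypothetical placeholders for a worked shift estimate, not values from the ICRP model:

```python
def deposited_dose_rate(c_air_mg_m3, minute_vent_m3_min, dep_fraction):
    """Regional deposited dose rate: D_dep = C_air * V_E * f_dep (mg/min)."""
    return c_air_mg_m3 * minute_vent_m3_min * dep_fraction

# Hypothetical scenario: 0.1 mg/m^3 airborne concentration, 20 L/min
# (0.02 m^3/min) minute ventilation, alveolar deposition fraction 0.3.
rate = deposited_dose_rate(0.1, 0.02, 0.3)   # mg deposited per minute
shift_dose = rate * 60 * 8                   # total over an 8-hour shift
print(f"{shift_dose:.3f} mg deposited over an 8 h shift")
```

A full assessment would integrate \(f_{\mathrm{dep}}\) over the measured size distribution rather than use a single fraction.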
Ingestion exposures arise from nanomaterials in food additives, food packaging migration, and hand-to-mouth transfer. Synthetic amorphous silica and titanium dioxide food colourants have attracted regulatory scrutiny, with the European Food Safety Authority ultimately judging titanium dioxide (E171) no longer safe as a food additive owing to unresolved genotoxicity concerns. Gastrointestinal fate depends on dissolution in gastric and intestinal fluids, transformation of the biomolecular corona, and uptake via Peyer’s patches or enterocytes. Dermal exposure is generally limited by the stratum corneum, but damaged skin, flexed skin, and certain surface chemistries can permit measurable penetration, and sunscreen-grade zinc oxide and titanium dioxide nanoparticles have been studied extensively to bound the dermal risk envelope.
Chapter 4: Carcinogenicity, Mutagenicity, and the Arsenic Case
Carcinogenicity assessment integrates genotoxicity data, animal bioassay evidence, mechanistic studies, and human epidemiology. IARC assigns substances to groups from 1 (carcinogenic to humans) through 2A, 2B, and 3 on the basis of evidence strength rather than potency. A Group 1 classification requires sufficient evidence in humans or, under the 2019 revision of the Monographs Preamble, strong mechanistic evidence in exposed humans coupled with sufficient animal evidence. The distinction between hazard identification (what IARC does) and risk quantification (what agencies such as the US EPA or Health Canada add) is frequently conflated in public discourse and is essential to interpret controversies such as the glyphosate (Roundup) debate.
Mutagenicity, the capacity to alter DNA sequence, is probed by the Ames reverse-mutation assay, mammalian cell gene mutation assays, chromosomal aberration and micronucleus tests, and the comet assay for strand breaks. Mutagens that act without metabolic activation are considered direct-acting; many procarcinogens require cytochrome P450 bioactivation. Arsenic illustrates how an element can be carcinogenic without a strong classical mutagenic signal: inorganic arsenic is a confirmed human carcinogen for skin, lung, and bladder, but its mechanisms involve oxidative stress, epigenetic dysregulation, inhibition of DNA repair, and co-carcinogenesis rather than direct mutation. The global burden of arsenic in drinking water, especially in the Ganges delta, remains a reference example of how geochemistry, hydrogeology, and exposure assessment intersect with public-health response.
Chapter 5: Environmental Chemistry, Bioaccumulation, and Ecotoxicology
Predicting where a chemical ends up after release relies on partitioning theory. The octanol-water partition coefficient \(K_{ow}\), expressed logarithmically as \(\log K_{ow}\), summarizes a neutral compound’s lipophilicity and correlates with sorption to organic matter, membrane permeability, and bioaccumulation potential. A multimedia fugacity model distributes mass among air, water, soil, sediment, and biota compartments based on fugacity capacities derived from vapour pressure, water solubility, and \(K_{ow}\). For ionisable species, pH-dependent speciation must be added. Persistence is assessed through hydrolysis, photolysis, and biodegradation half-lives; persistent, bioaccumulative, and toxic (PBT) substances rank highest for regulatory concern.
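The fugacity idea can be sketched as a Level I calculation (equilibrium, no reaction or advection): at a common fugacity, mass distributes in proportion to each compartment's fugacity capacity times its volume. The Z values and volumes below are hypothetical:

```python
def fugacity_partition(z_values, volumes):
    """Level I fugacity model: at a common fugacity f, the mass fraction in
    compartment i is Z_i*V_i / sum(Z_j*V_j), where Z is fugacity capacity
    (mol/(m^3*Pa)) and V is compartment volume (m^3)."""
    zv = [z * v for z, v in zip(z_values, volumes)]
    total = sum(zv)
    return [x / total for x in zv]

# Hypothetical fugacity capacities and compartment volumes for a
# semi-volatile, sorptive compound (illustrative numbers only).
compartments = ["air", "water", "soil"]
fractions = fugacity_partition([4e-4, 0.1, 50.0], [1e10, 1e7, 1e4])
for name, frac in zip(compartments, fractions):
    print(f"{name}: {100 * frac:.1f}% of mass at equilibrium")
```

Real multimedia models add degradation, advection, and intermedia transport on top of this equilibrium core.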
Bioaccumulation metrics include the bioconcentration factor (water-to-organism uptake at steady state) and the biomagnification factor (predator-to-prey concentration ratio). Mercury in aquatic food webs remains the textbook illustration: inorganic mercury deposited to water bodies is methylated by sulphate-reducing bacteria to monomethylmercury, which crosses biological membranes efficiently, binds cysteine residues, and biomagnifies through plankton, forage fish, and predatory fish, producing the Minamata tragedies and modern dietary advisories. The Stockholm Convention’s “Dirty Dozen” of persistent organic pollutants, including aldrin, dieldrin, DDT, PCBs, and dioxins, was selected on combined persistence, bioaccumulation, toxicity, and long-range transport criteria.
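The mercury example can be caricatured as a chain of multiplications: a bioconcentration factor into the base of the food web, then a biomagnification factor per trophic step. All numbers below are hypothetical round figures, not measured values:

```python
def food_web_concentrations(water_conc, bcf, bmfs):
    """Propagate a water concentration up a food chain: the base level via
    bioconcentration (BCF, L/kg treated as dimensionless here), each higher
    trophic level via a biomagnification factor (predator/prey ratio)."""
    conc = water_conc * bcf
    chain = [conc]
    for bmf in bmfs:
        conc *= bmf
        chain.append(conc)
    return chain

# Hypothetical scenario: 0.2 ng/L (0.2e-9 g/L) methylmercury in water,
# BCF of 1e5 into plankton, then BMF of 5 per trophic step.
levels = food_web_concentrations(0.2e-9, 1e5, [5.0, 5.0])
for name, c in zip(["plankton", "forage fish", "predatory fish"], levels):
    print(f"{name}: {c * 1e3:.2f} mg/kg")
```

Even with modest per-step factors, the product across steps drives top-predator concentrations orders of magnitude above water levels.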
Ecotoxicology extends toxicity testing to environmental species across trophic levels. Standardized OECD test guidelines specify acute and chronic endpoints for algae, Daphnia, fish, earthworms, and sediment organisms. Species sensitivity distributions aggregate single-species data into a hazardous concentration for five percent of species (\(\mathrm{HC}_{5}\)), which anchors environmental quality criteria. Chemical-disaster case studies—Bhopal, Seveso, Love Canal, Flint—illustrate the systemic failures (engineering, regulatory, and communicational) that allow known hazards to translate into population harm.
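In its simplest log-normal form, a species sensitivity distribution reduces to fitting a normal distribution to log-transformed toxicity values and reading off the fifth percentile. The NOEC values below are invented for illustration:

```python
import math
from statistics import NormalDist, mean, stdev

def hc5_from_toxicity(values_mg_l, p=0.05):
    """Fit a log-normal species sensitivity distribution to single-species
    toxicity values and return the HC5: the concentration below which
    only 5% of species are expected to be affected."""
    logs = [math.log10(v) for v in values_mg_l]
    dist = NormalDist(mu=mean(logs), sigma=stdev(logs))
    return 10 ** dist.inv_cdf(p)

# Hypothetical chronic NOEC values (mg/L) for eight test species.
noecs = [0.5, 1.2, 2.0, 3.5, 5.0, 8.0, 12.0, 20.0]
print(f"HC5 ~ {hc5_from_toxicity(noecs):.2f} mg/L")
```

Regulatory practice adds an assessment factor to the HC5 and checks goodness of fit before deriving a criterion.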
Chapter 6: Life Cycle Assessment, Sustainability, and Occupational Control
Life cycle assessment (LCA) evaluates environmental loads across the entire trajectory of a product: raw material acquisition, manufacture, distribution, use, and end-of-life. The ISO 14040/14044 framework defines goal and scope, inventory analysis, impact assessment, and interpretation. Impact categories include global warming potential, acidification, eutrophication, photochemical ozone formation, ecotoxicity, and human toxicity, each translated from emission inventories through characterization factors. For nanomaterials, LCA is complicated by missing inventory data, unclear characterization factors, and the need to specify particle form, not only mass. Nanotechnology can reduce environmental footprint—through lighter composites, efficient catalysts, and improved photovoltaic or battery chemistries—yet these gains must be weighed against energy-intensive synthesis and end-of-life uncertainties. The United Nations Sustainable Development Goals provide a higher-order accounting frame, linking good health (SDG 3), clean water (SDG 6), sustainable industry (SDG 9), and responsible consumption (SDG 12).
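Impact characterization is, mechanically, a weighted sum of inventory emissions. The sketch below uses AR5-era GWP100 factors and an invented cradle-to-gate inventory; treat both as illustrative:

```python
# GWP100 characterization factors, kg CO2-eq per kg emitted
# (AR5-era values; factor sets vary by assessment method and vintage).
GWP = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def global_warming_impact(inventory_kg):
    """Translate an emission inventory (kg per functional unit) into a
    single global-warming score in kg CO2-equivalent."""
    return sum(GWP[gas] * kg for gas, kg in inventory_kg.items())

# Hypothetical cradle-to-gate inventory for 1 kg of a nanocomposite.
inventory = {"CO2": 12.0, "CH4": 0.05, "N2O": 0.002}
print(f"{global_warming_impact(inventory):.2f} kg CO2-eq per kg product")
```

Every other impact category works the same way, with its own characterization factors; the nanomaterial-specific gap is that factors for particle-form emissions are largely missing.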
Occupational health and safety applies the hierarchy of controls to limit worker exposure: elimination, substitution, engineering controls, administrative controls, and personal protective equipment, in descending order of effectiveness. For nanomaterial handling, engineering controls dominate—local exhaust ventilation with HEPA filtration, enclosed glove boxes, wet processing, and good work practices. NIOSH’s prevention-through-design guidance and its nanotechnology field studies demonstrate that ventilation failures and task-specific operations (weighing, transferring, sonicating) account for most elevated exposures. Respirators are a last resort and require fit testing, cartridge selection for relevant particle sizes, and a written respiratory protection programme.
Chapter 7: Epidemiology — Study Designs, Bias, and Confounding
Epidemiology studies disease occurrence and its determinants in human populations. Descriptive epidemiology characterizes who, where, and when; analytic epidemiology tests whether an exposure is associated with an outcome. Measures of disease frequency include prevalence, cumulative incidence, and incidence rate. Measures of effect compare occurrence between exposed and unexposed groups, most often as the risk ratio or rate ratio
\[ \mathrm{RR} = \frac{I_{\mathrm{exposed}}}{I_{\mathrm{unexposed}}} \]
and, in case-control designs, the odds ratio. A cohort study follows exposed and unexposed people forward in time; it yields incidence directly and supports temporality but is costly for rare outcomes. A case-control study recruits people with and without disease and reconstructs exposure histories, efficient for rare outcomes but vulnerable to recall and selection bias. Cross-sectional surveys capture prevalence at a single time point. Randomized controlled trials sit atop the evidence hierarchy because randomization balances known and unknown confounders in expectation.
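The 2x2-table measures can be computed directly; the cohort counts below are invented for illustration:

```python
def risk_ratio(a, b, c, d):
    """RR from a 2x2 table: a, b = cases/non-cases among the exposed;
    c, d = cases/non-cases among the unexposed."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """OR = (a*d)/(b*c), the estimable effect measure in case-control designs."""
    return (a * d) / (b * c)

# Hypothetical occupational cohort: 30/200 exposed and 10/200 unexposed cases.
a, b, c, d = 30, 170, 10, 190
print(f"RR = {risk_ratio(a, b, c, d):.1f}")   # 3.0
print(f"OR = {odds_ratio(a, b, c, d):.2f}")   # 3.35
```

Note that the OR exceeds the RR here; the two converge only when the outcome is rare.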
Bias is a systematic deviation from truth. Selection bias arises when enrolment or follow-up depends on exposure and outcome simultaneously; information bias arises from misclassification of exposure or outcome, which may be non-differential (attenuating effect estimates toward the null for dichotomous exposures) or differential (biasing in unpredictable directions). Recall bias is a well-known differential information bias in case-control studies. Confounding occurs when a third variable is associated with the exposure and is an independent cause of the outcome without being on the causal pathway. The putative MMR-vaccine and autism association, traced to fraudulent data and subsequently retracted, has become a canonical teaching example of how small, biased studies can seed durable misinformation when replication and methodological scrutiny are ignored. Confounding is addressed at design stage (restriction, matching, randomization) or analysis stage (stratification, standardization, regression adjustment, propensity scores, inverse-probability weighting).
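Stratification-based adjustment can be sketched with the Mantel-Haenszel summary risk ratio, which pools stratum-specific 2x2 tables of a confounder; the smoking-stratified counts below are hypothetical:

```python
def mh_risk_ratio(strata):
    """Mantel-Haenszel summary risk ratio across confounder strata.
    Each stratum is (a, n1, c, n0): exposed cases and exposed total,
    unexposed cases and unexposed total."""
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return num / den

# Hypothetical cohort stratified by smoking status, a classic confounder:
strata = [
    (8, 100, 4, 100),    # non-smokers: stratum RR = 2.0
    (30, 100, 15, 100),  # smokers: stratum RR = 2.0
]
print(f"Smoking-adjusted RR = {mh_risk_ratio(strata):.2f}")
```

When the stratum-specific ratios agree, as here, the pooled estimate recovers the common effect free of confounding by the stratification variable.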
Chapter 8: Causal Inference, Exposure Limits, and Clinical Trials
Statistical association is not causation. The Bradford Hill considerations—strength, consistency, specificity, temporality, biological gradient, plausibility, coherence, experimental evidence, and analogy—are heuristics for moving from an observed association toward a causal claim. Temporality (cause precedes effect) is the single indispensable criterion; the others supply weight of evidence. Modern causal inference formalizes these ideas through counterfactual reasoning: the causal effect on an individual is the contrast between the outcome under exposure and the outcome that would have occurred under no exposure, quantities that can never be jointly observed. Directed acyclic graphs make assumptions about confounders, mediators, and colliders explicit, which in turn dictates which variables to adjust for.
Occupational exposure limits translate toxicology and epidemiology into numerical workplace ceilings. Threshold Limit Values from the ACGIH, Permissible Exposure Limits from OSHA, and Recommended Exposure Limits from NIOSH are expressed as time-weighted averages, short-term exposure limits, or ceiling values. For nanomaterials, limits are often set as airborne mass concentration for a defined size fraction; NIOSH has recommended 0.3 mg/m³ for ultrafine titanium dioxide (as a time-weighted average) and 1 µg/m³ for carbon nanotubes and nanofibres, both derived from benchmark dose modelling of rodent lung endpoints and applied with interspecies dosimetric adjustments. 1-Bromopropane, used as a metal cleaner and spray adhesive, illustrates how delayed recognition of neurotoxic and reproductive hazards can lag behind product substitution decisions.
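Comparing task-based measurements against a limit reduces to a time-weighted average. The task concentrations and durations below are hypothetical; the 1 µg/m³ value is the NIOSH carbon nanotube REL discussed in this chapter:

```python
def eight_hour_twa(samples):
    """8-hour time-weighted average from (concentration, duration_h) pairs."""
    return sum(c * t for c, t in samples) / 8.0

# Hypothetical task-based sampling of CNT mass concentration (ug/m^3):
# weighing (high, brief), transfer, and background for the rest of the shift.
tasks = [(4.0, 0.5), (0.5, 2.0), (0.2, 5.5)]
twa = eight_hour_twa(tasks)

REL_CNT = 1.0   # NIOSH REL for CNT/CNF, ug/m^3 as an 8-h TWA
print(f"TWA = {twa:.2f} ug/m^3; exceeds REL: {twa > REL_CNT}")
```

The brief weighing peak dominates the dose even though it lasts half an hour, which is why task-level controls matter.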
Clinical trials are the principal tool for establishing that a treatment works in humans. Phase I trials test safety and pharmacokinetics in small numbers; Phase II refines dose and examines preliminary efficacy; Phase III evaluates efficacy and safety in larger randomized comparisons; Phase IV monitors post-marketing outcomes. Blinding, allocation concealment, pre-registration, intention-to-treat analysis, and adverse event reporting are core safeguards.
Chapter 9: Diagnostic Accuracy, Nanocharacterisation, and Nanosensors
Any test—clinical diagnostic, environmental sensor, or occupational monitor—can be wrong. Sensitivity is the probability of a positive test given that the condition is present; specificity is the probability of a negative test given absence. In a 2x2 table of test result against true status, these are column conditionals. Predictive values are row conditionals and depend on underlying prevalence. The positive predictive value is
\[ \mathrm{PPV} = \frac{\mathrm{Se} \cdot P}{\mathrm{Se} \cdot P + (1 - \mathrm{Sp}) \cdot (1 - P)} \]
where \(P\) is prevalence, \(\mathrm{Se}\) sensitivity, and \(\mathrm{Sp}\) specificity. The negative predictive value follows symmetrically. When prevalence is low, even a highly specific test produces many false positives in absolute terms, a key point for mass screening and for low-concentration environmental monitoring. Likelihood ratios and receiver operating characteristic curves provide prevalence-independent summaries of test performance.
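The PPV formula and its NPV counterpart amount to a two-line Bayes calculation; the test characteristics and prevalence below are hypothetical:

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV from sensitivity, specificity, and prevalence (Bayes)."""
    ppv = sens * prevalence / (
        sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (
        spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# A 99%-sensitive, 95%-specific test applied at 1% prevalence:
ppv, npv = predictive_values(0.99, 0.95, 0.01)
print(f"PPV = {ppv:.1%}, NPV = {npv:.4%}")
# Most positives are false positives despite excellent test characteristics.
```

Here the PPV is only about one in six, illustrating the low-prevalence screening problem numerically.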
Nanomaterial characterization is the foundation for exposure assessment, toxicology, and regulatory reporting. Core techniques include electron microscopy (transmission and scanning) for primary particle size and morphology, dynamic light scattering and nanoparticle tracking analysis for hydrodynamic size distribution in suspension, Brunauer-Emmett-Teller nitrogen adsorption for specific surface area, X-ray diffraction for crystal phase, X-ray photoelectron spectroscopy for surface composition, and inductively coupled plasma mass spectrometry for elemental content and single-particle detection at environmental concentrations. OECD test guidance specifies minimum characterization data accompanying any toxicology dossier, to ensure that results remain interpretable across laboratories and regulatory contexts.
Nanosensors exploit the large surface-to-volume ratio of nanoscale elements to achieve high sensitivity toward chemical or biological targets. Examples include gold nanoparticle colorimetric assays, quantum dot fluorescence tags, carbon nanotube field-effect transistors for gas and biomolecule detection, and graphene-based electrochemical sensors. Their analytical performance is characterized by limit of detection, limit of quantification, linear dynamic range, selectivity, and response time, and must ultimately be translated into diagnostic accuracy metrics, closing the loop between nanomaterial engineering and public-health interpretation.
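Analytical figures of merit such as LOD and LOQ are conventionally estimated from blank noise and calibration slope, using the common 3.3σ/S and 10σ/S conventions; the sensor numbers below are hypothetical:

```python
def detection_limits(blank_sd, slope):
    """LOD and LOQ from the standard deviation of the blank signal and the
    calibration slope, via the ICH-style conventions LOD ~ 3.3*sigma/S
    and LOQ ~ 10*sigma/S."""
    return 3.3 * blank_sd / slope, 10.0 * blank_sd / slope

# Hypothetical calibration of an electrochemical nanosensor:
# blank signal SD of 0.02 uA and a slope of 0.5 uA per nM of analyte.
lod, loq = detection_limits(0.02, 0.5)
print(f"LOD = {lod:.3f} nM, LOQ = {loq:.2f} nM")
```

Converting these analytical limits into field sensitivity and specificity against a true exposure status is the step that connects sensor engineering to the predictive-value framework above.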
Bringing these chapters together yields the working logic of nanomaterial risk science: define the hazard through mechanistic and bioassay evidence; characterize exposure by pathway, metric, and population; quantify dose-response with attention to the appropriate dosimetry; assemble human evidence through well-designed epidemiology; translate findings into exposure limits and engineering controls via the hierarchy of prevention; and evaluate the entire product system through life cycle and sustainability lenses so that the promise of nanotechnology is realized without shifting risk to workers, consumers, or ecosystems.