COMMST 193: Communication in the Sciences

Greg Campbell

Estimated study time: 39 minutes

Sources and References

Primary textbook: Joshua Schimel, Writing Science: How to Write Papers That Get Cited and Proposals That Get Funded (Oxford University Press).

Supplementary texts: Scott L. Montgomery, The Chicago Guide to Communicating Science; Robert A. Day & Barbara Gastel, How to Write and Publish a Scientific Paper; Joseph Williams, Style; Edward Tufte, The Visual Display of Quantitative Information.

Online resources: Purdue OWL; MIT OCW 21W.732 Science Writing; Council of Science Editors, Scientific Style and Format.

Chapter 1 — Why Scientists Must Communicate: Audiences and Purposes

A scientific result that is not communicated effectively is, for practical purposes, not a result at all. The cleanest experiment, the most elegant derivation, the most careful field survey — none of it matters until it is transmitted, understood, and acted upon. Scientists are often trained as if their primary product were data; in fact, their primary product is a claim, supported by data, and delivered through language to a community of readers who will decide whether to accept, cite, extend, or ignore it.

The first move any science writer must make is to stop imagining a generic reader and start imagining a specific one. Montgomery, in The Chicago Guide to Communicating Science, puts the matter bluntly: good scientific prose is always directed. It is shaped by who will read it, what they already know, why they are reading, and what they need to take away. The same finding about a zinc-finger transcription factor or a quantum-dot fluorescence lifetime can be written up as a Nature letter, a specialist methods paper, a grant proposal, a conference abstract, a press release, a popular magazine feature, or a tweet — each with its own vocabulary, pacing, and tolerance for jargon.

It helps to think of science communication as occupying a spectrum. At one end is writing for peers who share your specialization: they tolerate heavy jargon, care about methodological detail, and check figures against their own mental model. In the middle sit scientists in neighbouring fields, undergraduates, and program officers, who need more signposting and background. At the far end sit journalists, policymakers, teachers, and the general public, who want narrative, stakes, and human scale rather than error bars. Moving along this spectrum is not “dumbing down”; it is translating, which requires understanding the material better, not less well.

Purpose matters as much as audience. Scientific prose can inform (a methods paper), persuade (a grant proposal), instruct (a protocol), evaluate (a peer review), or advocate (a policy brief). Each calls for different tools: precision and completeness for methods, momentum and stakes for proposals, numbered imperatives for protocols, fairness and evidence for reviews, and a clear recommendation for briefs.

A useful habit, borrowed from MIT’s 21W.732, is to write a one-sentence “audience and purpose statement” at the top of every draft: “I am writing for X, who needs to Y, and I want them to Z.” Revisiting that sentence as you revise prevents the common drift in which a document starts as a methods paper, slides into a literature review, and ends as an op-ed. Scientific prose is not about saying everything you know; it is about saying exactly what this reader needs, here, for this purpose.

Chapter 2 — Rhetoric of Science: Ethos, Pathos, Logos in the Lab

Many scientists are suspicious of rhetoric, associating it with advertising and political trickery. But rhetoric, properly understood, is just the study of how language makes an audience willing to believe something. Aristotle’s three appeals — ethos (credibility), pathos (emotion), and logos (reasoning) — are alive in every paragraph of every scientific paper, even when the writer insists they are not.

Logos, the appeal to logic and evidence, is the easiest for scientists to recognize. It is what we believe our papers are “made of”: data, statistics, controlled experiments, citations, and reasoned arguments. But logos in practice is never just raw data dumped on the page. It is data arranged into an argument — and arrangement is a rhetorical act. The decision to report the knockout experiment before the dose-response curve, or the decision to compare your method to Smith (2019) rather than Jones (2021), shapes the reader’s sense of what the evidence means. Schimel’s Writing Science hammers this point: the structure of a paper is the structure of an argument, and the reader who cannot follow your argument cannot be convinced by your evidence no matter how solid it is.

Ethos is the appeal of the author’s character and competence. It is built invisibly — through correct terminology, accurate citation, acknowledged limitations, clean figures, and appropriate tone — and destroyed equally invisibly, through overclaiming, ignored alternatives, sloppy references, and airing grievances against other researchers. Readers, including reviewers and grant panels, detect ethos before they detect logos. A paper that seems written by someone who knows the literature and reports limits honestly is given the benefit of the doubt; a paper that seems oversold is read adversarially.

Pathos is the appeal to emotion. Scientists are trained to distrust it, but it is unavoidable, and suppressing it is not the same as not using it. Pathos in science is not tears or outrage; it is significance. It is the reason a reader cares that your enzyme is more efficient, your detector more sensitive, your model more predictive. Every good introduction contains a pathos move: a statement of why this problem matters — to human health, to fundamental understanding, to a technology, to a policy. The question “so what?” is a pathos question. You can answer it with understatement or with hype, but you cannot avoid answering it without losing the reader.

A well-balanced piece of science writing uses all three appeals without over-relying on any one of them. Pure logos is a raw data table; pure ethos is a CV; pure pathos is a press release. Scientific communication braids them together: I am a careful worker (ethos), this is important (pathos), here is the evidence (logos) — in that order, because readers need to trust you and care about the question before they will work through your data.

Chapter 3 — Reading the Primary Literature

You cannot write like a scientist until you can read like one. Reading the primary literature is not the same as reading a textbook; undergraduates trained only on textbooks almost always read papers too slowly, too linearly, and with the wrong expectations. A paper is designed to be scanned by experts answering three questions: What is the claim? How convincing is the evidence? Does this matter to my work?

The standard tactical reading pattern, taught in most graduate science programs and echoed in Day & Gastel, is a multi-pass approach. On the first pass, read only the title, the abstract, and the figures and their captions. On the second pass, if the paper still seems relevant, read the introduction to understand the question and the conclusion to understand the answer. Only on a third pass, if the paper is central to your work, do you read the methods and results in detail, pausing to check whether the controls are appropriate and whether the statistics support the claims. This layered approach saves time and, more importantly, teaches a critical habit: the figures carry the scientific argument, and the text is there to interpret them.

Critical reading means asking a series of questions without mercy. Is the question stated clearly? Is it actually answered, or has a different question crept in by the end? Are the methods described in enough detail to be reproduced, or are key decisions hidden in references or supplementary files? Are the controls in place? Are the sample sizes adequate? Are the statistics the right kind for the data, or were parametric tests used on non-parametric data? Do the figures show individual data points, or only means and standard errors that could hide wild distributions? Does the discussion distinguish what is shown from what is speculated? Are the limitations acknowledged, or is the paper pretending to have answered more than it did?

Reading also means building a mental map of the field. One strategy, recommended by Schimel, is to keep a running annotated bibliography in which each entry captures, in two sentences, what the paper claims and why it matters. After ten or twenty such entries you will notice patterns: certain questions keep recurring, certain methods are dominant, certain authors disagree. That map is the background against which your own writing will be legible. Introducing a new paper without that map is like pointing at a place on a blank map and calling it “there.”

Finally, good reading shapes good writing. Pay attention not just to what a successful paper says but to how it says it: where the first paragraph starts, how the author moves from general to specific, how they signal transitions, how long the average sentence runs, and how they cite rivals. The best writing textbooks only describe what experienced readers already absorb from the literature itself.

Chapter 4 — IMRaD: The Anatomy of a Scientific Paper

IMRaD stands for Introduction, Methods, Results, and Discussion. It is the skeleton of nearly every empirical paper in the life and physical sciences, and understanding it is the first and most important piece of vocabulary in scientific writing. IMRaD is not a formula imposed by editors; it evolved in the late nineteenth and twentieth centuries because it corresponds to the logical structure of an experimental argument. As Day & Gastel note, IMRaD answers four questions, in order: What problem are you addressing? How did you approach it? What did you find? What does it mean?

The Introduction gives the reader the background they need to understand the question, defines the problem, reviews the relevant literature briefly, and states the specific aim or hypothesis. A well-written introduction moves from general to specific: from the broad importance of the topic, to the state of knowledge, to the gap in that knowledge, to the aim of this study. This is often described as a “funnel” or, in Schimel’s vocabulary, as an O-C-A-R structure: Opening, Challenge, Action, Resolution. The Introduction sets up the O and C and promises the A.

The Methods section describes what was done, in enough detail that a competent reader in the same field could reproduce the work. It is conventionally written in past tense and, increasingly, in active voice with first-person pronouns (“we measured” rather than “the sample was measured”). Methods are often the least loved section — they are unglamorous — but they are the section most carefully scrutinized by reviewers and by other researchers who want to replicate, adapt, or criticize the work.

The Results section reports what was observed. It should contain no interpretation; that belongs in the Discussion. A good Results section uses the text to point at the figures and tables: “Figure 2 shows a roughly linear increase in enzyme activity between 20°C and 40°C, above which activity falls sharply.” The prose does not repeat the figure; it directs attention to the features that matter and notes any that would be easy to miss.

The Discussion answers the “so what?” question. It re-states the main finding in plain language, relates it to previous work, explains its significance, acknowledges its limitations, and suggests what should happen next. It is the most rhetorical part of the paper and the part where the author’s voice is most audible. Schimel’s advice is essential: a Discussion should not be a mechanical list of every citation and every caveat; it should tell the reader why your result changes the story, and it should finish with a punchline that a reader can repeat to a colleague in one sentence.

Variations are common: merged “results and discussion” sections, “model” sections in theoretical papers, rigid substructures like CONSORT for clinical trials, and review articles that abandon IMRaD entirely. But the underlying logic — problem, approach, evidence, meaning — is nearly universal, and any deviation should be justified by content, not preference.

Chapter 5 — Writing the Introduction and Framing the Problem

More papers are lost at the Introduction than at any other section. A weak introduction can bury a beautiful result; a strong one can carry a modest result to a wide audience. Framing, therefore, is not cosmetic. It is the load-bearing work of scientific communication.

Schimel’s central concept for introductions is “the story spine”: every paper must have a clear story, and the shape of that story is Opening–Challenge–Action–Resolution. The Opening states what is generally accepted or what situation the reader should hold in mind. The Challenge describes the gap, the contradiction, the mystery, or the unmet need that makes your study necessary. The Action is what you did about it. The Resolution is what you found. A reader who finishes the Introduction without knowing all four elements will finish the paper confused. A reader who finishes the Introduction with all four clear in their head will read the rest of the paper looking for confirmation rather than hunting for meaning, which is a dramatically easier reading experience.

A common undergraduate mistake is to treat the Introduction as a mini-literature-review: “Many studies have looked at X. Smith (2012) did A. Jones (2014) did B. Patel (2018) did C. In this study, we do D.” This kind of writing has no Challenge: it never says what is wrong with the state of the field, only what has been done. Reviewers call such openings “catalogues.” Fix them by asking, for each cited work, why is it mentioned? Only cite papers whose presence earns its keep — because they set up your problem, because they disagree with each other, because they used a method you will now improve on, or because they made a claim you will now test.

Another common mistake is starting too broadly. An Introduction that opens with “Cancer is a major cause of death worldwide” or “Climate change is one of the defining challenges of our time” tells specialist readers nothing they do not know and signals to them that the writer is not confident about where their own work fits. The cure is to start at the level of specificity that will be needed throughout the paper and climb out only as far as is genuinely useful. A strong opening for a mass-spec paper might begin not with the importance of proteomics but with a specific unresolved question about peptide fragmentation.

The final sentence or two of an Introduction almost always contains a version of “Here we show…”. This is a convention worth honouring. Readers expect, at the bottom of the funnel, an explicit statement of the study’s aim, approach, and result, roughly in that order. “Here we measured X in Y using Z and found W.” Learning to write this sentence is learning to write scientific papers — if you cannot write it for your own study, you do not yet know what you are reporting.

Chapter 6 — Writing Methods and Results

Methods and Results are often lumped together in teaching because they share a concern with precision, but they are governed by very different rhetorical rules.

The Methods section is the section that proves the paper is reproducible science and not a personal memoir. Its cardinal virtue is completeness. A competent reader in your field, given your Methods, should be able to do the same experiment and expect similar results. Completeness does not mean listing every twist of a micropipette; it means including every decision that could plausibly affect the outcome — the organism, strain, or material; the instruments and their critical settings; the reagents and their suppliers or lot numbers where this matters; the sample sizes; the randomization procedure; the statistical tests and the software used; the ethics approvals. The rule of thumb is: if omitting a detail could make another lab’s replication fail, the detail belongs here.

The most useful move in Methods writing is to impose a logical order. Methods should read as a story of the experiment’s flow: subjects or samples first, then preparation, then the manipulation, then the measurements, then the statistics. Readers should not have to hunt for the sample size or the statistical test; they should know where to find it on the page. Many journals now allow or require Methods as numbered subsections, which makes the structure explicit.

Passive voice once dominated Methods sections — “Samples were collected and analyzed” — on the theory that passive construction conveyed impersonal objectivity. That theory is now widely recognized as a myth. The modern consensus, endorsed by Day & Gastel, Schimel, and most style manuals, is that active voice with first-person plural (“We collected samples and analyzed them”) is clearer, shorter, and makes responsibility for each choice explicit. Journals still vary, and some prefer a mix, but the days in which passive voice was mandatory are over.

The Results section has different virtues. Its cardinal virtue is selection. Results are not a data dump. They are a walk through the figures and tables, pausing to point out what matters. A good Results section answers three questions: What did you measure? What did you find? How reliable is what you found? In that order. The writing must distinguish sharply between observation and interpretation. “The treated group grew 30% faster than controls (p = 0.003)” is a Result. “This suggests that the hormone accelerates cell division” is a Discussion sentence that has sneaked in. Holding the line between observation and interpretation is what makes the paper’s argument testable.

Statistics in Results should be reported with enough information to be meaningful: effect sizes, not just p-values; confidence intervals; and where relevant, the number of replicates at each level. A finding described only as “significant” is not a finding; a finding described as “a 12% increase (95% CI: 8–16%, n = 24)” is.
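The reporting style above can be sketched in code. The sketch below uses invented measurements and a normal-approximation confidence interval (for samples this small a t-quantile would be slightly wider); the point is the output format — effect size, interval, and sample size together, never a bare verdict of “significant.”

```python
from statistics import mean, stdev, NormalDist

# Hypothetical measurements for a control and a treated group.
control = [9.8, 10.1, 10.4, 9.6, 10.0, 10.2, 9.9, 10.3, 9.7, 10.1, 10.0, 9.9]
treated = [11.0, 11.4, 10.9, 11.3, 11.1, 11.5, 10.8, 11.2, 11.0, 11.4, 11.1, 11.3]

def report(control, treated, alpha=0.05):
    """Percent increase over control with a normal-approximation 95% CI."""
    n1, n2 = len(control), len(treated)
    m1, m2 = mean(control), mean(treated)
    # Standard error of the difference in means (unpooled)
    se = (stdev(control) ** 2 / n1 + stdev(treated) ** 2 / n2) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    diff = m2 - m1
    lo, hi = diff - z * se, diff + z * se
    pct = lambda x: 100 * x / m1  # express relative to the control mean
    return (f"a {pct(diff):.0f}% increase "
            f"(95% CI: {pct(lo):.0f}-{pct(hi):.0f}%, n = {n1 + n2})")

print(report(control, treated))  # e.g. "a 12% increase (95% CI: 10-13%, n = 24)"
```

The sentence the function returns can be dropped directly into a Results paragraph; the reader gets the magnitude, the precision, and the evidence base in one clause.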

Chapter 7 — Writing Discussion and Conclusions

If the Introduction sets the stakes and the Results deliver the data, the Discussion is where the paper earns its place in the literature. It is also the section in which young scientists most often drift into vagueness, hedging, and padding — because it is the section in which they are most tempted to hide.

Schimel advises treating the Discussion as the mirror image of the Introduction. The Introduction moves from general to specific, opening a question; the Discussion moves from specific to general, answering it. The first paragraph should restate the main finding in plain language — not merely repeat the Results, but frame them as an answer to the question posed in the Introduction. “We set out to test whether X causes Y, and we found that it does, under conditions Z.” This paragraph orients the reader and prevents the rest of the Discussion from wandering.

Subsequent paragraphs typically compare the finding to previous work, either by harmonizing with it (“our results are consistent with Smith 2019, who…”) or by explaining apparent disagreements (“in contrast to Jones 2021, we observed…”). The point is not to be polite to the literature but to situate the new result within the ongoing conversation of the field. A good Discussion acknowledges that the conversation existed before the author joined it and will continue after.

Limitations must appear somewhere in the Discussion, and how they appear matters. Burying limitations in a reluctant footnote is a mistake; so is flagellating the reader with a long list of every possible weakness. The right approach is to identify the two or three limitations that a thoughtful reader would care about and to address each one directly: what it is, how much it likely affects the interpretation, and why the finding survives it. Acknowledging limitations well builds ethos; pretending they do not exist destroys it.

A Discussion should end with a payoff: what the reader should now believe, what this changes, and — sparingly — what should come next. Avoid formulaic “further studies are needed” sentences unless you can say what kind of further studies, why they are needed, and what they would show. The best closing lines tell the reader, in one sentence, why the paper was worth reading.

Chapter 8 — Abstracts and Titles

The abstract and title are the most-read parts of a paper. Most readers will see them without ever opening the full text: in search results, in alerts, in reading lists. Writing them well is therefore disproportionately valuable.

A scientific title should be specific, accurate, and searchable. Searchable means it should contain the key terms that someone looking for work on your topic would type into a database. Accurate means it should not overstate the finding: “A novel treatment for Alzheimer’s disease” is almost certainly overstated; “Reduced amyloid plaque in a mouse model of Alzheimer’s following treatment with compound X” is not. Titles in the sciences increasingly come in two flavours: the neutral descriptive title (“Effects of temperature on enzyme X activity”) and the assertive results-forward title (“Temperature accelerates enzyme X activity by promoting substrate binding”). Both are acceptable; the latter is more common in high-profile journals and is more likely to attract citations, but it must be fully supported by the paper.

An abstract is a miniature IMRaD: context, question, approach, result, meaning, typically in 150–300 words. Day & Gastel recommend drafting the abstract last, even though it appears first, because you cannot summarize a paper you have not written. Structured abstracts — with explicit Background, Methods, Results, and Conclusions headings — are required by many journals in biomedicine. Unstructured abstracts in physics and chemistry nevertheless follow the same logic: they just hide the headings.

Two common mistakes in abstracts are worth naming. The first is starting too broadly: an abstract that spends its first two sentences on the importance of the field wastes precious words. Start with the specific question. The second is vague results: “We investigated the effects of X on Y and discuss the implications.” This is not an abstract; it is a promise to have written one. Concrete, quantitative results belong in the abstract. “X increased Y by 27% (p < 0.001)” is an abstract sentence; “X affected Y” is not.

Chapter 9 — Figures, Tables, and Data Visualization

Scientific figures carry the argument of the paper. Expert readers often look at the figures before reading a single word, and many decisions about whether to read a paper are made on the figures alone. For this reason, figures must be designed with the same care as prose.

Edward Tufte’s The Visual Display of Quantitative Information is the foundational text here. Tufte’s key idea is the data–ink ratio: the proportion of ink (or pixels) on a graph that represents actual data, as opposed to decoration, gridlines, borders, and 3-D effects. Maximize data-ink. Every line that does not carry information should be questioned. A good bar chart has no shadows, no chart-junk, no gradient fills. It has a title, axis labels with units, and — this is important — the data.

A related principle is that the chart type should fit the data. Bar charts are for counts or comparisons of categories. Line charts are for changes over a continuous variable like time, dose, or wavelength. Scatter plots are for relationships between two continuous variables and for showing the individual data points that a bar chart hides. Box plots and violin plots are for distributions. Pie charts, which make comparisons between slices very hard for the eye, should almost never be used in scientific papers. Log scales are appropriate whenever data span multiple orders of magnitude, and should be clearly labelled as log.

A specifically scientific virtue, emphasized in modern publication guidelines, is showing the individual data points whenever the sample size permits. A bar chart with an error bar hides the distribution and invites fraud; a scatter of individual points with a summary overlay tells the reader much more. Guidelines such as those from CSE and many high-profile journals now explicitly recommend this.

Captions do as much work as the figures themselves. A figure caption should be long enough that a reader skimming figures-only could understand the point. It should state what is shown, what the variables are, how many samples contributed, what the error bars represent, and what statistical test was used. A cryptic caption (“Effect of temperature”) is almost always a mistake; a caption that reads like a miniature paragraph is almost always better.

Tables obey parallel rules. A good table has clear column headers with units, appropriate precision (not more decimal places than the measurement supports), horizontal lines only where needed for separation, and a caption that explains what is tabulated. Tufte’s core warning about tables is the same as for graphs: do not decorate; strip everything that does not carry information.

One final warning: colour. About 8% of men of European descent have red-green colour blindness. A figure whose message depends on distinguishing red from green is unreadable for a sizeable minority of readers. Use colour-blind-safe palettes (viridis, cividis, or ColorBrewer qualitative schemes), use redundant cues like line type or marker shape, and remember that many readers will still print your paper in black and white.

Chapter 10 — Communicating Uncertainty and Statistics

Uncertainty is the most honest thing a scientist has to offer and also the most commonly miscommunicated. Almost no scientific result is certain, and almost no sentence in a scientific paper should be written as if it were.

Hedging — the use of qualifying words like suggests, appears, may, is consistent with — is a legitimate and necessary feature of scientific prose. It allows writers to match the strength of their claims to the strength of their evidence. The problem is not that scientists hedge; it is that they often hedge inconsistently, hedging strong claims and leaving weak claims unhedged, or using stock hedges that float free of any real uncertainty (“it has been suggested that…”). Good hedging is specific: “under the conditions tested, the drug reduced tumour growth; whether this generalizes to clinical doses remains unknown.”

Quantitative uncertainty should be reported explicitly: with confidence intervals, standard errors, or credible intervals; with effect sizes; and with information about sample size. A p-value alone is almost useless and has been the subject of repeated warnings from statistical societies. The American Statistical Association’s 2016 statement on p-values is essential reading for any scientist who uses them: p-values measure compatibility between data and a specific null hypothesis, not the probability that a hypothesis is true, nor the size or importance of an effect. Reporting a result as “significant (p < 0.05)” and nothing more is a rhetorical sleight of hand that turns a continuous measure into a binary verdict.

Beyond the statistics themselves, there is the rhetorical problem of communicating uncertainty to non-specialists. Readers consistently misinterpret probabilistic language: “likely” is read as anything from 40% to 90%; “rare” is read as anything from one in a hundred to one in a million. The IPCC’s practice of assigning specific numerical ranges to verbal terms (“very likely” = 90–100%) is a model that other fields can borrow. Where possible, give numbers; where numbers would overwhelm, give numbers and a plain-language translation. Avoid the twin failure modes of false certainty (“this proves X”) and false balance (“some scientists say X, some say Y”). The former overstates; the latter can make a 95%-vs-5% disagreement sound like a tie.

An ethical corollary: never convert “we did not find an effect” into “there is no effect.” Absence of evidence is not evidence of absence unless the study was powered to detect the effect you care about. Reporting a null result responsibly means stating the smallest effect your study could have detected at its power, not just noting that the p-value failed to reach 0.05.
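The minimum detectable effect can be approximated on the back of an envelope. This sketch uses the standard normal-approximation formula for a two-sample comparison of means; the numbers plugged in are hypothetical.

```python
from statistics import NormalDist

def min_detectable_diff(sd, n_per_group, alpha=0.05, power=0.80):
    """Smallest true difference in means a two-sided, two-sample test
    would detect with the given power (normal approximation)."""
    z = NormalDist().inv_cdf
    se = sd * (2 / n_per_group) ** 0.5  # SE of the difference in means
    return (z(1 - alpha / 2) + z(power)) * se

# With sd = 1.0 and 12 subjects per group, a null result only rules out
# true differences larger than roughly this:
print(round(min_detectable_diff(sd=1.0, n_per_group=12), 2))
```

A sentence like “our study had 80% power to detect a difference of 1.1 units or larger; smaller effects cannot be excluded” is the honest way to report the null.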

Chapter 11 — Research Ethics, Authorship, and Integrity

Research integrity is not a side topic bolted on to a writing course; it is part of what makes scientific writing scientific. A paper whose data are falsified, whose authorship is misassigned, or whose prior work is plagiarized is not merely poor writing — it is not science at all.

The most visible integrity violations are fabrication (inventing data), falsification (manipulating data), and plagiarism (passing off someone else’s work or words as one’s own). These are the “big three” of research misconduct as defined by most national bodies. They are career-ending when detected, and increasingly are detected: image-forensics tools, text-similarity software, and post-publication review by sleuths like those on PubPeer have made fraud harder to hide.

Plagiarism is the most common integrity problem in student writing and is often committed inadvertently. There are three kinds. Word-for-word plagiarism copies text without quotation marks and without citation. Mosaic plagiarism patches together phrases and sentences from one or more sources with cosmetic changes. Idea plagiarism adopts another’s concept, method, or finding without crediting the source. All three are forbidden in scientific writing, and all three can be avoided by three habits: always note the source when taking notes; never paste text into your draft even as a placeholder; and cite whenever a sentence rests on someone else’s specific argument or result, even if the words are your own.

Authorship raises subtler questions. Who gets to be an author on a scientific paper? The International Committee of Medical Journal Editors (ICMJE) criteria are the most widely used: an author must have (1) contributed substantially to conception, design, or data; (2) drafted or critically revised the manuscript; (3) approved the final version; and (4) agreed to be accountable for the work. Ghost authors (who wrote but are not listed) and gift authors (who did not contribute but are listed for reasons of status or reciprocity) both violate these rules. Author order has field-specific meaning: in biomedicine, first author usually did most of the work, last author is usually the principal investigator, and middle authors contributed in varying ways; in physics and some other fields, authorship is alphabetical. When in doubt, discuss authorship before the work begins, not after the paper is drafted — this is the single best way to prevent conflict.

Other integrity obligations include proper data management, reporting conflicts of interest, avoiding selective reporting (“only publishing the experiments that worked”), preregistering hypotheses where appropriate, and sharing data and code where possible. Open-science practices — preregistration, open data, open materials, and preprints — are increasingly the norm, not the exception, in most scientific fields. None of these are merely bureaucratic; they are the modern tools for keeping scientific claims honest.

Chapter 12 — Peer Review: Giving and Receiving Feedback

Peer review is the mechanism through which the scientific community decides what counts as the literature. It is also, from the inside, a slow, uncomfortable, sometimes infuriating process — and learning how to participate in it well, both as author and as reviewer, is one of the most important professional skills a scientist develops.

Receiving peer review is the more common starting experience. Reviewers will misunderstand your paper. They will ask for experiments you already did. They will object to things you did not know were controversial. They will, occasionally, be rude. The temptation, on receiving a critical review, is to write an immediate angry reply. The correct move is to do nothing for a day, then read the reviews again with the assumption that the reviewers are acting in good faith and asking reasonable questions. Most of the time they are.

The standard structure of a response to reviewers is the “point-by-point response letter.” Quote each comment in full. Then, beneath each comment, respond. Thank the reviewer if the comment is helpful (briefly — do not grovel). Explain what you changed, quoting the new or altered text so the editor can see it without flipping back and forth. If you disagree, say so — politely — and explain why. Editors read these letters carefully, and a respectful, organized response is a strong signal that the revised paper deserves acceptance.

Giving peer review — which you will be asked to do starting in graduate school — carries its own etiquette. A good review is focused, specific, and kind. Focused means it addresses the main claim of the paper first and the minor details last. Specific means every criticism is backed by a reason and, where possible, a page reference or a concrete suggestion. Kind means it remembers that the author is a human being who has spent months or years on the work. The best reviewers write the review they would want to receive: they flag genuine problems clearly, they propose solutions where they can, and they avoid snark, even when they strongly disagree.

Editors are looking for two things from reviewers: a recommendation (accept, revise, reject) and a justification (why). The recommendation matters less than the justification. A thoughtful reject with concrete criticisms is useful; a glowing accept with no reasons is not. Reviews are confidential and should be treated as such; they should not circulate on social media, and the data in the paper should not be used for the reviewer’s own work until the paper is public.

Chapter 13 — Oral Presentations and Posters

Scientists talk almost as much as they write. Conference talks, lab meetings, seminars, and thesis defences are the oral side of the same communication skill, and many of the same rules apply — but the constraints are different, and so are the opportunities.

A conference talk is usually 10–20 minutes long. Most of the detail that belongs in a paper must be left out of a talk, because the audience cannot rewind. Strong talks are built around a single take-home message, one that a listener could recite to a colleague in the hallway afterward. Everything in the talk should support that message; everything that does not should be cut. A classic structure is to state the question, motivate it, state the answer, show the one or two pieces of evidence that most convincingly support the answer, and then return to the question to say what the answer changes. This is a compressed IMRaD with the Discussion pushed to the front.

Slides are a visual aid, not a document. The single most common mistake in scientific talks is the “wall of text” slide: bullet points the speaker reads aloud. Reading slides aloud is disastrous because the audience can read faster than the speaker can talk, and they stop listening. The fix is to put on each slide only what cannot be said in words: a figure, a photograph, an equation, a single number. Let your voice carry the argument and let the slide carry the evidence.

Rehearsal is non-negotiable. The difference between a rehearsed talk and an unrehearsed one is obvious to every audience member in the first two minutes. Rehearse out loud, on your feet, with a timer. Rehearse the transitions (“So that was the first experiment; the second experiment tested whether…”) because those are the moments the audience uses to orient themselves. Rehearse the answer to the question you are most afraid of being asked, because it will be asked.

Posters are a different medium. A poster must work on its own, without you standing next to it, because many viewers will read it when you are on a coffee break. Good posters have a clear title, a summary box or “key finding” at the top, a small number of figures arranged in a readable left-to-right, top-to-bottom flow, and minimal text. The currently fashionable “better poster” design — popularized by Mike Morrison — puts a single plain-language sentence in enormous type in the middle of the poster and moves the details to the margins. Whether or not you adopt that style, the underlying insight is correct: a conference-goer walking past should be able to absorb your main finding in five seconds. Bring a short handout or a QR code so that interested viewers can take the details away with them.

Chapter 14 — Science for the Public and for Policy

Writing science for non-scientists is not a second-class activity; it is increasingly a core professional responsibility. Research is funded by taxpayers, affects lives, and operates in a media environment where misinformation travels faster than peer-reviewed papers. Scientists who can communicate their work clearly to the public serve both their field and their communities.

The first rule of public science writing is that the audience is not you. They do not share your jargon, your background, or your built-in sense of why the topic matters. Assume they are intelligent, curious, and completely unfamiliar. Montgomery’s guidance, echoed by MIT’s 21W.732, is to replace every specialized term with either a plain-language equivalent or a short explanation — never both, and never neither. “The mitochondria, which are the cell’s power plants” is a useful kind of translation. “The mitochondria (the organelles responsible for oxidative phosphorylation)” is not.

The second rule is that narrative matters. Public readers follow stories more easily than arguments. A feature article about a drug discovery might open with a patient, a researcher, or a moment in the lab, not with a statement of methodology. This is not dumbing down; it is how humans understand information they do not yet have a framework for. Journalists use this move not because they are bad at science but because they are good at audiences.

The third rule is about uncertainty, and it is the hardest. Public writing must convey both what is known and what is not, without collapsing into either overclaiming or nihilism. Good science journalism says “this study, in mice, found that…” rather than “scientists say…”. It distinguishes a single striking result from a well-established consensus. It explains the difference between a preprint, a peer-reviewed paper, a review article, and a meta-analysis. It names limitations as concretely for a general audience as a good Discussion does for a specialist one.

Writing for policy is a related but distinct activity. Policymakers are time-pressed and decision-driven. They do not want the full story; they want to know what they should do. The standard policy genre is the “policy brief”: one to two pages, with a title that states the finding, a one-paragraph summary of the issue, two or three bullet points of evidence, and one or two explicit recommendations. The voice is plain, the citations are minimal but rigorous, and the structure is inverted pyramid — the most important information first. A scientist who can write such a brief has an outsized influence compared with one who can only write a paper.

Across all public-facing science communication, a final principle applies: honesty about one’s own position. Scientists have values, but they should be explicit about when they are reporting evidence and when they are recommending action. Mixing the two without flagging the shift erodes trust; distinguishing them preserves it.

Chapter 15 — Grant Writing for Beginners

Grant writing is the most consequential form of writing most working scientists do, and it is the form that receives the least explicit training. A grant proposal is not a paper written in the future tense; it is a different genre, with different rhetorical priorities, and it must be approached as such.

A grant reviewer is not a reader. They are an evaluator, under time pressure, reading a stack of proposals in one sitting, trying to decide which ones deserve funding. Schimel’s Writing Science has an entire chapter on this audience, and his advice is simple: make it easy on them. Use clear headings. Open with a compelling statement of the problem. State the aim early. Use white space. Do not make the reviewer work to find the key sentences. The reviewer who has to hunt will hunt for reasons to reject.

A standard grant proposal has three main components: the specific aims page (or equivalent summary), the background and significance, and the research strategy (approach, preliminary data, and plan). The specific aims page is the most important page in the document. Many reviewers read it first and form their opinion there; some read nothing else. It should contain, in one page or less, the problem, the gap, the overall hypothesis, the specific aims (usually two or three), and the expected impact. Every word counts.

Significance sections are where young grant writers often stumble, because they default to generic importance (“understanding cancer is important”) instead of specific importance (“no existing therapy addresses this subset of triple-negative breast cancer, which has a five-year survival of…”). Specific significance motivates the gap. It makes the reviewer believe that funding this work will matter in a way that funding the next proposal on the pile will not.

The research strategy must balance ambition and feasibility. Reviewers fund projects that seem worth doing and seem likely to work. Preliminary data are the main tool for demonstrating feasibility: they show that the investigator can actually do the techniques proposed and has already begun to see encouraging results. The plan should anticipate what can go wrong and explain the backups. Reviewers penalize proposals that pretend everything will go smoothly; they trust ones that admit risks and describe how they will be managed.

Finally, grant writing shares with all other scientific writing the virtues emphasized throughout this course: clarity, structure, narrative, and honest handling of uncertainty. A grant that tells a clear story in plain and confident prose will beat a grant of equal scientific merit that buries its message in jargon. The grant reviewer, like every other reader in this course, rewards writers who have thought about them.

Afterword — Communication as a Lifelong Practice

Scientific communication is not a skill you finish learning. Every new paper, talk, and grant is a new rhetorical situation, and the habits this course teaches — analyzing the audience, framing the problem, building a story, respecting evidence, acknowledging uncertainty, listening to feedback, revising — are habits scientists practice for the rest of their careers. The shortest summary of everything here is: respect the reader. Every technique in these chapters is a way of making the reader’s job easier without compromising the truth, and that is how scientific knowledge actually enters the world.
