SMF 220: Research Methods

Carl Rodrigue

Estimated study time: 52 minutes

Table of contents

Chapter 1: Introduction to Research Methods in Social Science

Why Study Research Methods?

Research methods are the systematic procedures that scholars use to investigate questions about the social world. For students in Social Development Studies, Sexuality, Marriage, and Family Studies, and related fields, a working knowledge of research methods is not merely academic — it is a practical necessity. Whether reading peer-reviewed journal articles, evaluating policy proposals, or conducting original inquiry, the ability to understand how knowledge is produced is as important as the knowledge itself.

This course approaches research methods from an explicitly interdisciplinary perspective, drawing on sociology, psychology, and the broader social sciences. The central aim is twofold: to equip students with the foundational skills needed to conduct social science research, and to cultivate the critical sensibility required to evaluate research produced by others.

Research as a Process

Research is not a single act but an iterative process. At its most general, the research process involves:

  1. Identifying a topic or problem
  2. Reviewing existing literature
  3. Formulating a research question or hypothesis
  4. Selecting an appropriate methodology
  5. Collecting data
  6. Analyzing data
  7. Interpreting and reporting findings

Each of these steps is shaped by the researcher’s theoretical commitments, disciplinary training, ethical obligations, and practical constraints. A hallmark of good research is transparency about these influences.

Consumers and Producers of Research

Not every student who takes a research methods course will go on to conduct original research. However, every student will encounter research findings in their professional and civic lives. Learning to ask critical questions — Who funded this study? How were participants selected? What are the limitations of this design? — is an essential competency for informed citizenship.

Key Distinction: Being a critical consumer of research means evaluating the quality and applicability of others' findings. Being a producer of research means generating new knowledge through systematic inquiry.

The Landscape of Social Science Research

Social science research encompasses a broad array of disciplines — sociology, psychology, political science, economics, social work, and interdisciplinary fields such as Sexuality, Marriage, and Family Studies. Despite their diversity, these disciplines share a commitment to empirical inquiry: the idea that claims about the social world should be grounded in systematically gathered evidence.

Basic vs. Applied Research

Basic research (also called pure research) seeks to expand fundamental knowledge about social phenomena without an immediate practical application. A sociologist studying the formation of social norms, for example, is conducting basic research.

Applied research, by contrast, is oriented toward solving a specific practical problem. A program evaluation measuring whether an intervention reduces intimate partner violence is applied research.

Many research projects contain elements of both. A study of how families negotiate gender roles might simultaneously advance theoretical understanding (basic) and inform family counselling practice (applied).

The Role of Theory

Theory in social science refers to a systematic explanation of observed phenomena. Theories do not simply describe what happens; they propose why and how things happen. Good research is theory-informed: it situates its questions within an existing body of knowledge and uses its findings to refine, extend, or challenge theoretical claims.

Common theoretical frameworks encountered in SMF-related research include feminist theory, queer theory, family systems theory, symbolic interactionism, structural functionalism, and critical race theory. Each framework foregrounds certain questions and methods while backgrounding others.


Chapter 2: The Foundations of Scientific Inquiry

From Epistemology to Methods

Before selecting a survey instrument or an interview protocol, researchers must grapple with more fundamental questions: What counts as knowledge? How can we know the social world? What is the relationship between the knower and the known? These questions belong to the domain of epistemology — the branch of philosophy concerned with the nature, sources, and limits of knowledge.

Closely related is ontology, the branch of philosophy concerned with the nature of reality and what exists. Together, ontology and epistemology form the philosophical foundation upon which all research methodology rests.

Ontological Positions

Two broad ontological positions structure debates in social science:

Realism holds that a social reality exists independently of our perceptions and interpretations of it. Social structures, institutions, and patterns of behaviour have an objective existence that can be studied.

Constructionism (or social constructionism) holds that social reality is not pre-given but is actively created through human interaction, language, and meaning-making. What we take to be “real” — gender, race, family — is the product of social processes.

Most researchers do not occupy the extreme poles of this spectrum. Instead, they adopt positions that acknowledge both the constraining force of social structures and the creative agency of social actors.

Epistemological Positions

Epistemological positions specify how researchers believe knowledge about the social world can be obtained:

Positivism maintains that social science should emulate the natural sciences. Knowledge is best obtained through objective observation, measurement, and the testing of hypotheses. The researcher strives for value-neutrality and seeks to identify causal laws.

Interpretivism (or anti-positivism) argues that the social world is fundamentally different from the natural world because it is constituted by meaning. The task of social science is not to discover causal laws but to understand (German: Verstehen) the meanings that social actors attach to their actions.

Critical theory adds a normative dimension: the purpose of research is not only to understand the social world but to change it, particularly by exposing and challenging structures of power and inequality.

Crotty's Research Process Framework (1998): Michael Crotty proposed four interconnected elements that shape any research project: (1) epistemology, (2) theoretical perspective, (3) methodology, and (4) methods. Each level informs the next. Epistemological commitments shape theoretical perspectives, which in turn guide the choice of methodology and specific methods.

Research Paradigms

A research paradigm is a worldview or framework of beliefs and assumptions that guides research. Thomas Kuhn introduced the concept of paradigms in The Structure of Scientific Revolutions (1962), arguing that scientific progress occurs not through steady accumulation but through revolutionary shifts in the frameworks scientists use to understand the world.

In contemporary social science, several paradigms coexist:

Paradigm | Ontology | Epistemology | Methodology
Positivism | Objective reality exists | Knowledge through observation and measurement | Quantitative, experimental
Post-positivism | Reality exists but is imperfectly knowable | Knowledge is probabilistic, shaped by theory | Quantitative, quasi-experimental
Constructivism | Multiple realities constructed by individuals | Knowledge is co-created by researcher and participant | Qualitative, interpretive
Critical theory | Reality shaped by power structures | Knowledge is political; research should emancipate | Qualitative, participatory
Pragmatism | Reality is what works in practice | Knowledge judged by practical consequences | Mixed methods

A Brief History of Scientific Research

The Emergence of the Scientific Method

The scientific method — the systematic process of observation, hypothesis formation, experimentation, and theory development — emerged from the Enlightenment tradition of the 17th and 18th centuries. Figures such as Francis Bacon, René Descartes, and Isaac Newton championed the idea that knowledge should be derived from empirical evidence rather than tradition, authority, or revelation.

Social Science as Science

The application of scientific methods to social phenomena dates to the 19th century. Auguste Comte coined the term sociology and argued that the study of society could and should be as rigorous as the study of nature. Émile Durkheim’s Suicide (1897) demonstrated that even the most seemingly individual act could be explained by social forces, using statistical analysis of suicide rates across European countries.

At the same time, Max Weber argued for a distinctive social science methodology centred on Verstehen (interpretive understanding). Weber maintained that while natural scientists explain events by identifying causes, social scientists must also understand the subjective meanings that actors attach to their behaviour.

The Quantitative-Qualitative Divide

Throughout the 20th century, social science was marked by recurring debates between advocates of quantitative and qualitative approaches. By the late 20th century, many scholars came to see these approaches as complementary rather than competing, giving rise to the mixed methods movement.

Ways of Knowing

Before turning to scientific methods specifically, it is useful to consider the broader range of ways that humans acquire knowledge:

Authority: Accepting claims because they come from a trusted source (a parent, a teacher, a religious leader). This is efficient but risks perpetuating error.

Tradition: Accepting claims because “this is how things have always been done.” Tradition provides social stability but can resist necessary change.

Common sense: Relying on everyday experience and intuition. Common sense is practical but often contradictory (e.g., “birds of a feather flock together” vs. “opposites attract”).

Personal experience: Drawing conclusions from one’s own observations. Personal experience is vivid but susceptible to bias and limited generalizability.

Scientific inquiry: Systematically gathering and analyzing evidence according to established rules and procedures. Scientific inquiry is self-correcting, transparent, and subject to peer scrutiny.

Why Scientific Inquiry? The advantage of scientific inquiry over other ways of knowing is not that it is infallible (it is not) but that it is self-correcting. Built into the scientific process are mechanisms for identifying and correcting errors: replication, peer review, and the public sharing of methods and data.

Chapter 3: Ethics in Research

Why Research Ethics Matter

The history of social and biomedical research is marked by episodes of serious ethical violation. The Tuskegee syphilis study (1932–1972), in which African American men with syphilis were left untreated so that researchers could observe the disease’s progression, is perhaps the most infamous example. In social science, Stanley Milgram’s obedience experiments (1961) and Philip Zimbardo’s Stanford Prison Experiment (1971) raised troubling questions about the psychological harm that research participation can inflict.

These and other cases led to the development of formal ethical guidelines and oversight mechanisms. In Canada, the governing framework is the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2).

The Tri-Council Policy Statement (TCPS 2)

The TCPS 2 is a joint policy of Canada’s three federal research agencies:

  • CIHR (Canadian Institutes of Health Research)
  • NSERC (Natural Sciences and Engineering Research Council of Canada)
  • SSHRC (Social Sciences and Humanities Research Council of Canada)

As a condition of receiving federal funding, all Canadian research institutions must comply with the TCPS 2. The most recent edition, TCPS 2 (2022), replaced the 2018 version.

The Three Core Principles

The TCPS 2 is organized around three core ethical principles:

Respect for Persons: This principle recognizes the intrinsic value and autonomy of every individual. It requires that participation in research be voluntary and based on informed consent. It also demands special protections for persons whose autonomy may be diminished (e.g., children, persons with cognitive impairments, prisoners).

Concern for Welfare: Researchers must consider the impact of their research on participants’ physical, mental, emotional, economic, and social well-being. This includes protecting privacy and confidentiality, minimizing risks, and ensuring that potential benefits justify any risks.

Justice: The benefits and burdens of research should be distributed fairly. No group should bear a disproportionate share of research risks, and no group should be systematically excluded from the benefits of research participation.

Key Requirements

Informed consent is the cornerstone of ethical research. Participants must be told:

  • The purpose and procedures of the study
  • Any foreseeable risks and benefits
  • That participation is voluntary and can be withdrawn at any time
  • How their data will be used, stored, and protected

Research Ethics Boards (REBs): All research involving human participants at Canadian universities must be reviewed and approved by an institutional Research Ethics Board before data collection begins. The REB evaluates whether the research design adequately protects participants’ rights and welfare.

Privacy and confidentiality: Researchers must take appropriate measures to protect participants’ personal information. This includes using pseudonyms, securing data storage, and limiting access to identifiable information.

TCPS 2 Tutorial (CORE-2022): All researchers at Canadian institutions are required to complete the TCPS 2 online tutorial (Course on Research Ethics, or CORE) before submitting a research ethics application. The tutorial covers the three core principles, the scope and governance of the TCPS 2, and the processes for ethical review.

Ethics and Vulnerable Populations

Research with Gender and Sexually Diverse Persons

Research involving 2SLGBTQ+ communities raises particular ethical considerations. Henrickson et al. (2020) identify several principles for ethical research with gender and sexually diverse persons:

Do no harm: Researchers must be attentive to the potential for research to reinforce stigma, “out” participants involuntarily, or pathologize non-normative identities.

Inclusive language and categories: Survey instruments and interview protocols should use language that is affirming and inclusive. Binary gender categories (male/female) may exclude or misrepresent non-binary and gender-diverse individuals.

Community engagement: Research with marginalized communities should involve community members in the design, conduct, and dissemination of research. This is consistent with the principles of community-based participatory research (CBPR).

Intersectionality: Researchers should attend to the ways that sexual and gender identity intersect with race, class, disability, Indigeneity, and other axes of difference.

Research with Indigenous Peoples

Chapter 9 of the TCPS 2 addresses research involving First Nations, Inuit, and Métis peoples in Canada. It emphasizes:

  • Community engagement and partnership
  • Respect for Indigenous knowledge systems
  • Community ownership of data (OCAP principles: Ownership, Control, Access, Possession)
  • Free, prior, and informed consent

Ethical Decision-Making in Practice

Ethical dilemmas in research rarely have simple solutions. Consider the following scenarios:

Deception: Some research designs require that participants not know the true purpose of the study (as in Milgram’s obedience experiments). The TCPS 2 permits deception only when the research question cannot be addressed otherwise, the risks are minimal, and participants are debriefed afterward.

Dual relationships: A researcher who is also a service provider (e.g., a social worker studying their own clients) faces conflicts of interest that can compromise both the research and the therapeutic relationship.

Secondary use of data: Using data collected for one purpose in a new study raises questions about the scope of the original consent.


Chapter 4: Quantitative Research Design

What Is Quantitative Research?

Quantitative research is a systematic approach to investigating phenomena through the collection and analysis of numerical data. Rooted primarily in positivist and post-positivist epistemologies, quantitative research seeks to identify patterns, test hypotheses, and establish generalizable relationships among variables.

Key characteristics of quantitative research include:

  • Emphasis on objectivity and value-neutrality
  • Use of structured instruments (surveys, scales, standardized tests)
  • Statistical analysis of numerical data
  • Concern with reliability, validity, and generalizability
  • Deductive reasoning (moving from theory to data)

Variables and Hypotheses

Types of Variables

A variable is any characteristic that can take on different values. Quantitative research is fundamentally concerned with the relationships among variables.

Independent variable (IV): The variable that the researcher manipulates or that is presumed to cause or influence an outcome. In an experiment studying the effect of a parenting program on child behaviour, the parenting program is the independent variable.

Dependent variable (DV): The variable that is measured or observed as the outcome. In the example above, child behaviour is the dependent variable.

Control variable: A variable that the researcher holds constant or accounts for statistically to isolate the relationship between the IV and DV.

Confounding variable (extraneous variable): A variable that is not controlled and that may provide an alternative explanation for the observed relationship.

Mediating variable: A variable that explains the mechanism through which the IV affects the DV. For example, parental self-efficacy might mediate the relationship between a parenting program and child behaviour.

Moderating variable: A variable that influences the strength or direction of the relationship between the IV and DV. For example, child age might moderate the effect of a parenting program.

Hypotheses

A hypothesis is a testable prediction about the relationship between variables. Hypotheses are derived from theory and are stated before data collection begins.

Null hypothesis (H0): There is no relationship between the variables (or no difference between groups).

Alternative hypothesis (H1 or Ha): There is a relationship (or a difference).

Statistical testing evaluates whether the observed data provide sufficient evidence to reject the null hypothesis.
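The logic of testing a null hypothesis can be sketched with a simple permutation test. This is an illustrative sketch only: the scores and the permutation_test helper are hypothetical, and real analyses would typically use an established statistics package.

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate a p-value for a difference in group means by repeatedly
    shuffling group labels, as the null hypothesis says labels don't matter."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical satisfaction scores for two groups
treatment = [7, 8, 6, 9, 7, 8]
control = [5, 6, 5, 7, 6, 5]
p = permutation_test(treatment, control)
# If p falls below the conventional .05 level, we reject H0
```

A small p-value means that a difference as large as the observed one rarely arises when group labels are assigned at random, which is evidence against the null hypothesis.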

Experimental Research

True Experiments

A true experiment is the gold standard for establishing causal relationships. It has three essential features:

  1. Manipulation: The researcher actively manipulates the independent variable by assigning participants to different conditions (e.g., treatment vs. control).
  2. Random assignment: Participants are assigned to conditions through a random process, ensuring that any pre-existing differences between groups are distributed equally.
  3. Control group: At least one group does not receive the treatment, providing a baseline for comparison.

Example: A researcher randomly assigns 100 couples to either a communication skills workshop (treatment) or a waitlist (control), then measures relationship satisfaction six months later. Because of random assignment, any observed difference in satisfaction can be attributed to the workshop rather than to pre-existing differences between the groups.
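Random assignment like that in the example above can be sketched in a few lines of Python (the couple IDs and the helper name are hypothetical):

```python
import random

def randomly_assign(participants, seed=42):
    """Randomly assign participants to two conditions by shuffling
    the list and splitting it in half."""
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the original is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical couple IDs 1..100
couples = list(range(1, 101))
treatment, control = randomly_assign(couples)
# Each group gets 50 couples; assignment is determined by chance alone
```

Because every couple has the same chance of landing in either condition, pre-existing differences are expected to be distributed evenly across the two groups.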

Common Experimental Designs

Pretest-posttest control group design: Participants in both the treatment and control groups are measured before and after the intervention. This design allows the researcher to assess change over time while controlling for external influences.

Posttest-only control group design: Participants are measured only after the intervention. This design avoids problems with pretesting (e.g., sensitization effects) but cannot assess pre-existing group differences.

Solomon four-group design: Combines the two designs above by including four groups: two with pretesting and two without. This design isolates the effects of pretesting itself.

Factorial design: Simultaneously manipulates two or more independent variables, allowing the researcher to examine main effects and interaction effects.

Internal Validity

Internal validity refers to the degree to which a study establishes a causal relationship between the independent and dependent variables. Threats to internal validity include:

  • History: External events occurring during the study
  • Maturation: Natural developmental changes in participants
  • Testing: The effect of taking a pretest on posttest performance
  • Instrumentation: Changes in measurement instruments over time
  • Regression to the mean: Extreme scores naturally moving toward the average
  • Selection bias: Systematic differences between groups at baseline
  • Attrition (mortality): Differential dropout from groups

Non-Experimental Research

Not all research questions can or should be investigated through experiments. Non-experimental research (also called observational or correlational research) examines relationships among variables without manipulating an independent variable.

Correlational Research

Correlational research measures two or more variables and examines the statistical relationship between them. The key limitation is that correlation does not establish causation. A positive correlation between income and life satisfaction, for example, might reflect a causal effect of income on satisfaction, a causal effect of satisfaction on income, or the influence of a third variable (e.g., education) on both.
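Pearson's correlation coefficient can be computed directly from its definition. The income and satisfaction values below are hypothetical, and the point stands regardless of the numbers: the coefficient quantifies association, not causal direction.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical income (in thousands) and life-satisfaction scores
income = [30, 45, 50, 60, 80]
satisfaction = [5, 6, 6, 7, 8]
r = pearson_r(income, satisfaction)
# r near +1 signals a strong positive association,
# but says nothing about which variable causes which
```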

Cross-Sectional Research

Cross-sectional research collects data at a single point in time from a sample representing different segments of a population. It provides a snapshot but cannot establish temporal ordering or causal direction.

Survey Research

Survey research uses questionnaires or structured interviews to collect data from a sample of a population. Surveys are widely used in social science because they are efficient, can reach large samples, and can measure a wide range of variables.

Quasi-Experimental Research

Quasi-experiments resemble true experiments but lack one or more of their defining features — most commonly, random assignment. Quasi-experiments are used when random assignment is impractical or unethical.

Common Quasi-Experimental Designs

Non-equivalent control group design: Similar to a pretest-posttest control group design, but without random assignment. Researchers must be cautious about attributing differences to the treatment, as pre-existing group differences may be responsible.

Interrupted time-series design: Repeated measurements are taken before and after an intervention. This design is useful for evaluating policies or programs implemented at a specific point in time (e.g., the effect of a new domestic violence law on reported incidents).

Regression discontinuity design: Participants are assigned to treatment or control based on a cutoff score on a pre-treatment measure. This design can provide strong causal evidence when properly implemented.

When to Use Each Design: True experiments offer the strongest evidence for causation but are often impractical in social research. Quasi-experiments provide a useful middle ground. Non-experimental designs are appropriate when the goal is to describe or explore relationships rather than to establish causation.

Chapter 5: Sampling Techniques

The Logic of Sampling

Researchers rarely have the resources to study every member of a population (the entire group to which the researcher wishes to generalize). Instead, they study a sample — a subset of the population — and use statistical techniques to draw inferences about the population based on the sample.

The quality of these inferences depends on how the sample is selected. A well-chosen sample enables the researcher to generalize findings with known levels of confidence; a poorly chosen sample can produce misleading results regardless of how carefully the data are analyzed.

Probability Sampling

Probability sampling methods give every member of the population a known, non-zero chance of being selected. These methods allow researchers to estimate sampling error — the degree to which a sample statistic is likely to differ from the population parameter.

Simple Random Sampling

In simple random sampling (SRS), every member of the population has an equal probability of being selected. This is achieved by assigning each member a number and using a random number generator to select the sample.

Advantage: Eliminates systematic bias in selection. Limitation: Requires a complete list of the population (sampling frame), which is often unavailable.
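Given a complete sampling frame, simple random sampling is a one-line operation in most languages. A minimal Python sketch with a hypothetical frame of 1,000 member IDs:

```python
import random

# Sampling frame: a complete list of the population (hypothetical IDs 1..1000)
population = list(range(1, 1001))

rng = random.Random(7)
sample = rng.sample(population, k=100)  # each member is equally likely to be drawn
```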

Systematic Sampling

In systematic sampling, the researcher selects every kth member from the sampling frame (e.g., every 10th name on a list). The starting point is chosen randomly.

Advantage: Simpler to implement than SRS. Limitation: Can produce biased results if the list has a periodic pattern that coincides with the sampling interval.
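Systematic selection is simply slicing the frame at a fixed interval after a random start. A sketch with a hypothetical 200-person frame and k = 10:

```python
import random

def systematic_sample(frame, k, seed=3):
    """Select every kth member of the frame, starting at a random offset."""
    start = random.Random(seed).randrange(k)
    return frame[start::k]

# Hypothetical frame of 200 people
frame = [f"person_{i}" for i in range(1, 201)]
sample = systematic_sample(frame, k=10)  # roughly 200 / 10 = 20 members
```

Note that if the frame were sorted with a repeating pattern of length 10 (say, households listed in a fixed room order), this procedure would hit the same position in every cycle, producing the periodicity bias described above.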

Stratified Sampling

In stratified sampling, the population is divided into strata (subgroups) based on a characteristic of interest (e.g., gender, age group, income level), and a random sample is drawn from each stratum.

Advantage: Ensures representation of key subgroups and can increase precision. Limitation: Requires knowledge of the population’s composition on the stratifying variable.
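Stratified sampling amounts to grouping the frame by the stratifying variable and drawing randomly within each group. A sketch with a hypothetical population of 100 people stratified by gender (the helper name and data are illustrative only):

```python
import random
from collections import defaultdict

def stratified_sample(members, stratum_of, n_per_stratum, seed=11):
    """Draw a random sample of fixed size from each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for m in members:
        strata[stratum_of(m)].append(m)   # group members by stratum
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, n_per_stratum))
    return sample

# Hypothetical population: (id, gender) tuples, 50 of each gender
people = [(i, "woman" if i % 2 else "man") for i in range(1, 101)]
sample = stratified_sample(people, stratum_of=lambda p: p[1], n_per_stratum=10)
# 10 women and 10 men, guaranteeing representation of both strata
```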

Cluster Sampling

In cluster sampling, the population is divided into naturally occurring groups (clusters), such as schools or neighbourhoods. A random sample of clusters is selected, and all members within the selected clusters are studied (or a random sample within each cluster is drawn — multistage cluster sampling).

Advantage: Practical when no complete sampling frame exists and the population is geographically dispersed. Limitation: Higher sampling error than SRS for a given sample size, because members within clusters tend to be similar.
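The cluster logic (randomly select whole clusters, then study everyone inside them) can be sketched as follows, using a hypothetical frame of 10 schools with 20 students each:

```python
import random

def cluster_sample(clusters, n_clusters, seed=5):
    """Randomly select whole clusters, then include every member within them."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)   # sample cluster names
    return [member for c in chosen for member in clusters[c]]

# Hypothetical frame: 10 schools, 20 students per school
schools = {f"school_{s}": [f"s{s}_student_{i}" for i in range(20)]
           for s in range(10)}
sample = cluster_sample(schools, n_clusters=3)  # 3 schools x 20 students = 60
```

A multistage version would add a second draw, sampling students within each chosen school rather than taking them all.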

Non-Probability Sampling

Non-probability sampling methods do not give every member of the population a known chance of being selected. These methods are common in qualitative research and in quantitative studies where probability sampling is impractical.

Convenience Sampling

Convenience sampling selects participants who are readily available. University students are frequently used as convenience samples in psychological research.

Advantage: Quick, easy, and inexpensive. Limitation: High risk of bias; findings may not generalize beyond the sample.

Purposive (Purposeful) Sampling

Purposive sampling selects participants based on specific criteria relevant to the research question. The researcher deliberately chooses individuals who can provide rich, relevant information.

Advantage: Efficient for studying specific phenomena or populations. Limitation: Dependent on the researcher’s judgement; not generalizable in the statistical sense.

Snowball Sampling

Snowball sampling asks initial participants to refer others who meet the study criteria. This method is particularly useful for accessing hard-to-reach populations (e.g., undocumented immigrants, people who use illicit drugs, members of stigmatized groups).

Advantage: Can access populations for which no sampling frame exists. Limitation: Sample is shaped by participants’ social networks and may overrepresent certain segments.

Quota Sampling

Quota sampling sets targets for the number of participants in specific categories (e.g., 50 men and 50 women) but uses non-random methods to fill those quotas.

Advantage: Ensures diversity on key characteristics. Limitation: Selection within categories is non-random, introducing potential bias.

Sample Size: There is no single “correct” sample size. In quantitative research, sample size is determined by the desired level of statistical power, the expected effect size, and the complexity of the analysis. In qualitative research, sample size is guided by the concept of data saturation: the point at which additional interviews or observations yield no new themes.

Chapter 6: Quantitative Data Collection

Surveys and Questionnaires

Surveys are among the most widely used data collection methods in social science. They can be administered in person, by telephone, by mail, or online.

Question Types

Closed-ended questions provide a fixed set of response options. Examples include yes/no questions, multiple-choice items, and rating scales. Closed-ended questions are easy to code and analyze statistically but may force participants into categories that do not capture their true views.

Open-ended questions allow participants to respond in their own words. They yield richer data but are more difficult and time-consuming to code and analyze.

Constructing Good Survey Questions

Effective survey questions share several qualities:

  • Clarity: Questions should be phrased in simple, unambiguous language.
  • Neutrality: Questions should not lead participants toward a particular response (avoid leading questions).
  • Single-barrelled: Each question should address only one issue (avoid double-barrelled questions, e.g., “Do you think the government should increase funding for education and healthcare?”).
  • Appropriate response options: Response options should be mutually exclusive and collectively exhaustive.
  • Avoid jargon and acronyms unless the sample is known to understand them.

Common Response Scales

Likert scale: A series of statements to which participants indicate their level of agreement (e.g., Strongly Disagree to Strongly Agree, typically on a 5- or 7-point scale). Named after psychologist Rensis Likert.
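In analysis, Likert responses are typically mapped to numeric codes and combined into a scale score. A minimal sketch (the response labels and data are hypothetical):

```python
# Map 5-point Likert responses to numeric codes
LIKERT_5 = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neither Agree nor Disagree": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

# One hypothetical respondent's answers to a 4-item scale
responses = ["Agree", "Strongly Agree", "Neither Agree nor Disagree", "Agree"]
codes = [LIKERT_5[r] for r in responses]
scale_score = sum(codes) / len(codes)  # mean across the scale's items
```

Strictly speaking, such codes are ordinal; treating the scale mean as interval-level data is a common but debated convention.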

Semantic differential scale: Participants rate a concept on a series of bipolar adjective pairs (e.g., good–bad, strong–weak).

Visual analogue scale (VAS): Participants mark a point on a continuous line anchored by two extremes.

Measurement

Levels of Measurement

Understanding the level of measurement of a variable is essential for choosing appropriate statistical tests:

Nominal: Categories with no inherent order (e.g., religion, ethnicity, marital status). The only meaningful operation is counting frequencies.

Ordinal: Categories with a meaningful order but unequal intervals (e.g., education level: high school, bachelor’s, master’s, doctorate). We know the order but not the precise distance between categories.

Interval: Ordered categories with equal intervals but no true zero point (e.g., temperature in Celsius, IQ scores). Addition and subtraction are meaningful, but ratios are not.

Ratio: Like interval but with a true zero point (e.g., income, age, number of children). All mathematical operations are meaningful.

Reliability

Reliability refers to the consistency of a measurement instrument. A reliable instrument produces similar results under consistent conditions.

Types of reliability include:

  • Test-retest reliability: Consistency of scores across two administrations of the same instrument.
  • Internal consistency: The degree to which items within a scale measure the same construct (commonly assessed using Cronbach’s alpha).
  • Inter-rater reliability: Agreement among different raters or coders.
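Cronbach's alpha follows directly from the variances of the items and of the total scores: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals). A pure-Python sketch with hypothetical scale data:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale; item_scores is a list of items,
    each a list of scores from the same respondents in the same order."""
    k = len(item_scores)
    item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents
items = [
    [4, 5, 3, 4, 2],   # item 1
    [4, 4, 3, 5, 2],   # item 2
    [5, 5, 2, 4, 1],   # item 3
]
alpha = cronbach_alpha(items)
# Values above roughly .70 are conventionally taken as acceptable
```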

Validity

Validity refers to the degree to which an instrument measures what it claims to measure. Key types include:

  • Face validity: The instrument appears, on the surface, to measure the intended construct.
  • Content validity: The instrument covers all relevant aspects of the construct.
  • Criterion validity: Scores on the instrument correlate with an external criterion (concurrent or predictive).
  • Construct validity: The instrument measures the theoretical construct it is intended to measure, as demonstrated by convergent and discriminant validity.
Reliability vs. Validity: An instrument can be reliable without being valid (it consistently measures something, but not the right thing). However, an instrument cannot be valid without being reliable (if measurements are inconsistent, they cannot accurately capture the intended construct).


Quantitative Data Analysis: An Overview

Descriptive Statistics

Descriptive statistics summarize and organize data:

  • Measures of central tendency: Mean (arithmetic average), median (middle value), mode (most frequent value).
  • Measures of dispersion: Range (difference between highest and lowest values), standard deviation (average distance of scores from the mean), variance (the square of the standard deviation).
  • Frequency distributions: Tables or graphs showing how often each value or category occurs.
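
Each of these summaries is easy to compute by hand or in code. A short Python sketch, using an invented variable, shows the measures side by side:

```python
from collections import Counter
from statistics import mean, median, mode, pstdev, pvariance

# Hypothetical variable: number of children reported by 8 survey respondents.
children = [0, 1, 1, 2, 2, 2, 3, 5]

print("mean:", mean(children))                    # central tendency -> 2.0
print("median:", median(children))                # -> 2.0
print("mode:", mode(children))                    # -> 2
print("range:", max(children) - min(children))    # dispersion -> 5
print("std dev:", round(pstdev(children), 2))     # -> 1.41
print("variance:", round(pvariance(children), 2)) # -> 2.0
print("frequencies:", Counter(children))          # frequency distribution
```

(The `p`-prefixed functions treat the data as a complete population; `stdev` and `variance` would give the sample versions, which divide by n - 1.)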

Inferential Statistics

Inferential statistics allow researchers to draw conclusions about a population based on sample data. Key concepts include:

Statistical significance: A judgement that an observed result would be unlikely if chance alone were operating. The conventional threshold (alpha level) is .05: a result is declared statistically significant when it would occur by chance less than 5% of the time under the null hypothesis.

p-value: The exact probability of obtaining the observed result (or a more extreme result) if the null hypothesis were true. A p-value less than .05 is typically considered statistically significant.
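
The logic of the p-value can be made concrete by simulation: if group membership carries no information (the null hypothesis), then randomly reshuffling the group labels should only rarely produce a difference as large as the one observed. A Python sketch with invented scores approximates a two-tailed p-value by permutation:

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical satisfaction scores (1-10) for two small groups.
group_a = [7, 8, 6, 9, 8]
group_b = [4, 5, 5, 6, 5]

observed = mean(group_a) - mean(group_b)  # observed difference in means: 2.6
pooled = group_a + group_b

n_iter = 10_000
extreme = 0
for _ in range(n_iter):
    random.shuffle(pooled)                # re-label scores at random (the null world)
    diff = mean(pooled[:5]) - mean(pooled[5:])
    if abs(diff) >= abs(observed):        # "as extreme or more extreme", two-tailed
        extreme += 1

p_value = extreme / n_iter
print(f"observed difference: {observed:.1f}, approximate p-value: {p_value:.4f}")
```

For these invented data the approximate p-value lands well below .05, so the difference would conventionally be called statistically significant.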

Confidence intervals: A range of values within which the true population parameter is expected to fall with a specified level of confidence (commonly 95%).

Effect size: A standardized measure of the magnitude of an observed effect, independent of sample size. Common measures include Cohen’s d (for group differences) and Pearson’s r (for correlations).
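
Both quantities can be computed by hand. A Python sketch with invented group scores calculates Cohen's d from the pooled sample standard deviation, then an approximate 95% confidence interval for a mean; the 1.96 multiplier assumes a large sample, and a t critical value would be more accurate for small n:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d for two groups, using the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical scores for two groups.
a = [7, 8, 6, 9, 8]
b = [4, 5, 5, 6, 5]
print("Cohen's d:", round(cohens_d(a, b), 2))  # -> 2.74 (a very large effect)

# Approximate 95% CI for a mean: mean +/- 1.96 * (sd / sqrt(n)).
scores = a + b
se = stdev(scores) / sqrt(len(scores))
lo, hi = mean(scores) - 1.96 * se, mean(scores) + 1.96 * se
print(f"95% CI for the mean: [{lo:.2f}, {hi:.2f}]")
```

By Cohen's own benchmarks, d values of roughly 0.2, 0.5, and 0.8 mark small, medium, and large effects; the toy data above were chosen to separate the groups dramatically.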

Common Statistical Tests

Test | Purpose | Data Requirements
Chi-square | Association between categorical variables | Nominal/ordinal
t-test | Difference between two group means | Interval/ratio DV, categorical IV
ANOVA | Difference among three or more group means | Interval/ratio DV, categorical IV
Correlation (Pearson's r) | Linear relationship between two variables | Interval/ratio
Regression | Prediction of one variable from one or more others | Interval/ratio DV

Rigour in Quantitative Research

Guetterman (2019) emphasizes that rigour in quantitative research depends on:

  • Appropriate research design for the question
  • Valid and reliable instruments
  • Adequate sample size
  • Correct application of statistical tests
  • Transparent reporting of results, including effect sizes and confidence intervals
  • Acknowledgement of limitations

Chapter 7: Qualitative Research Design

What Is Qualitative Research?

Qualitative research is a broad approach to studying social phenomena that emphasizes understanding meanings, experiences, and perspectives from the standpoint of the people involved. Rather than reducing social life to numerical measurements, qualitative researchers work with textual, visual, and auditory data to build rich, contextual accounts of the social world.

Key characteristics of qualitative research include:

  • Emphasis on understanding and interpretation rather than measurement
  • Use of flexible, emergent designs
  • Data collected in natural settings
  • The researcher as the primary instrument of data collection
  • Inductive reasoning (moving from data to theory)
  • Attention to context, process, and meaning
  • Small, purposively selected samples
  • Thick description of findings

Epistemological Roots

Qualitative research is most commonly associated with constructivist and interpretivist epistemologies, although critical and post-structural perspectives also inform qualitative inquiry. The shared assumption is that social reality is multiple, constructed, and best understood from the perspectives of those who live it.

Major Qualitative Research Designs

Creswell et al. (2007) identify five major traditions of qualitative inquiry, each with its own assumptions, methods, and analytic procedures:

Phenomenology

Phenomenology seeks to describe the essence of a lived experience. The researcher interviews individuals who have experienced a particular phenomenon (e.g., the experience of coming out as transgender) and identifies common themes that capture the essential structure of that experience.

Key features:

  • Philosophical roots in Husserl and Heidegger
  • Focus on lived experience
  • Bracketing (epoché): the researcher sets aside their own assumptions
  • Analysis identifies essential themes or structures

Ethnography

Ethnography is the study of a cultural group in its natural setting over a prolonged period. The ethnographer immerses themselves in the community, observing daily life, participating in activities, and conducting informal conversations and formal interviews.

Key features:

  • Extended fieldwork
  • Participant observation
  • Focus on shared patterns of behaviour, beliefs, and language
  • Thick description (Clifford Geertz)

Grounded Theory

Grounded theory aims to develop a theory that is “grounded” in systematically collected and analyzed data. Rather than testing an existing theory, the grounded theory researcher builds theory inductively from the data.

Key features:

  • Simultaneous data collection and analysis
  • Theoretical sampling: sampling decisions guided by emerging theory
  • Constant comparative method: systematically comparing new data with existing codes and categories
  • Theoretical saturation: data collection continues until no new categories emerge
  • Roots in Glaser and Strauss (1967)

Narrative Research

Narrative research focuses on the stories people tell about their lives. The researcher collects narratives (through interviews, diaries, letters, or other sources) and analyzes how individuals construct meaning through storytelling.

Key features:

  • Focus on individual stories and life histories
  • Attention to temporal ordering and plot
  • Analysis considers both content and form of narratives

Case Study

A case study is an in-depth investigation of a single case — an individual, a family, an organization, a community, or an event. The case study draws on multiple sources of data (interviews, documents, observations) to provide a comprehensive understanding.

Key features:

  • Bounded system (the “case”)
  • Multiple data sources
  • Rich, contextualized description
  • Can be exploratory, descriptive, or explanatory
Choosing a Design: The choice of qualitative design should be driven by the research question. If the question asks about the essence of an experience, phenomenology is appropriate. If it asks about cultural patterns, choose ethnography. If the goal is to build theory, use grounded theory.

Sampling in Qualitative Research

Robinson (2014) outlines a systematic four-point approach to sampling for qualitative interview-based research:

  1. Define the target population: Specify the group of interest and the inclusion/exclusion criteria.
  2. Determine sample size: Guided by the research tradition (e.g., phenomenology typically uses 5–25 participants; grounded theory may require 20–60), the depth of data needed, and practical constraints.
  3. Select a sampling strategy: Common strategies include purposive, snowball, maximum variation, typical case, and critical case sampling.
  4. Source the sample: Identify recruitment channels (community organizations, social media, institutional gatekeepers).

Data Saturation

Data saturation occurs when additional data collection no longer yields new information, themes, or categories. Saturation is the primary criterion for determining when to stop collecting data in qualitative research, although it is difficult to determine in advance exactly when saturation will occur.


Chapter 8: Qualitative Data Collection

Interviews

The qualitative interview is the most widely used data collection method in qualitative research. Unlike structured survey interviews, qualitative interviews are conversational, flexible, and oriented toward eliciting participants’ own accounts and interpretations.

Types of Interviews

Unstructured interviews: The researcher has a general topic in mind but no predetermined questions. The interview unfolds as a natural conversation, allowing the participant to direct the discussion.

Semi-structured interviews: The researcher prepares an interview guide — a list of topics and open-ended questions — but is free to probe, follow up, and deviate from the guide as the conversation develops. Semi-structured interviews balance flexibility with consistency.

Structured interviews: The researcher asks a fixed set of questions in a fixed order. These are more common in quantitative research but can be used in qualitative studies that require standardization across a large number of participants.

Conducting Effective Interviews

Key principles for qualitative interviewing include:

  • Build rapport: Establish a trusting, comfortable relationship with the participant before asking sensitive questions.
  • Ask open-ended questions: Begin questions with “how,” “what,” or “tell me about” rather than “do you” or “is it.”
  • Listen actively: Pay attention not only to what is said but how it is said. Silence, hesitation, and emotion are all data.
  • Probe: Use follow-up questions to explore interesting or ambiguous responses (“Can you say more about that?” “What did you mean by…?”).
  • Avoid leading questions: Do not suggest answers or express judgement.
  • Record and transcribe: Audio-record interviews (with permission) and produce verbatim transcripts for analysis.

Focus Groups

A focus group is a facilitated group discussion, typically involving 6–10 participants who share a relevant characteristic. The researcher serves as a moderator, guiding the discussion and encouraging interaction among participants.

Focus groups are particularly useful for:

  • Exploring how people discuss and negotiate meaning collectively
  • Generating a range of perspectives on a topic
  • Studying topics where group interaction may yield richer data than individual interviews

Limitations include:

  • Groupthink: Participants may conform to dominant views
  • Power dynamics: Some voices may dominate while others are silenced
  • Confidentiality: Difficult to ensure when multiple participants are present

Observation

Observation involves systematically watching and recording behaviour in natural settings. It is a central method in ethnographic research.

Levels of Participation

Observations can be classified by the degree of researcher participation:

  • Complete observer: The researcher observes without participating or being known to those observed.
  • Observer-as-participant: The researcher is primarily an observer but may interact with participants.
  • Participant-as-observer: The researcher participates in the setting while openly conducting research.
  • Complete participant: The researcher is fully immersed in the setting and may conceal their researcher identity (raising ethical concerns).

Field Notes

Field notes are the primary record of observational data. They should be written as soon as possible after each observation period and should include:

  • Descriptive notes (what happened, who was present, what was said)
  • Reflective notes (the researcher’s impressions, feelings, questions, and emerging interpretations)
  • Contextual details (time, place, physical setting)

Document Analysis

Document analysis involves the systematic review and interpretation of existing documents — letters, diaries, policy documents, media reports, social media posts, organizational records, and other textual or visual materials.

Documents can serve as primary data sources or as supplements to interviews and observations. Researchers must consider the original purpose, audience, and context of production of any document they analyze.

Reflexivity: Qualitative researchers must engage in reflexivity — ongoing critical self-examination of how their own backgrounds, assumptions, and relationships with participants shape the research process and findings. Reflexivity is documented through researcher memos, reflexive journals, and methodological notes.

Chapter 9: Qualitative Data Analysis

From Data to Findings

Qualitative data analysis is the process of systematically organizing, reducing, and interpreting textual, visual, or auditory data. Unlike quantitative analysis, which follows standardized statistical procedures, qualitative analysis is iterative, interpretive, and deeply engaged with the data.

Coding

Coding is the foundational analytic process in most qualitative traditions. It involves assigning labels (codes) to segments of data that represent meaningful concepts, patterns, or themes.

Stages of Coding

Merriam and Tisdell (2016, Chapter 8) describe a multi-stage coding process:

Open coding (initial coding): The researcher reads through the data line by line, assigning preliminary codes to meaningful segments. At this stage, codes are descriptive and close to the data.

Axial coding: The researcher begins to identify relationships among open codes, grouping related codes into broader categories. Axial coding involves asking: How are these codes related? What is the larger concept that connects them?

Selective coding (theoretical coding): The researcher identifies the core category or central theme that integrates all other categories into a coherent narrative or theory. This stage is particularly associated with grounded theory.

In Vivo Codes

In vivo codes use participants’ own words as code labels. For example, if a participant describes their experience as “walking on eggshells,” the researcher might use that phrase as an in vivo code. In vivo codes preserve participants’ voices and can be particularly powerful in conveying experiential meaning.

Thematic Analysis

Thematic analysis is a widely used method for identifying, analyzing, and reporting patterns (themes) within data. It is flexible and not tied to a particular theoretical framework.

Braun and Clarke (2006) outline six phases of thematic analysis:

  1. Familiarization: Immersing yourself in the data through repeated reading
  2. Generating initial codes: Systematically coding features of the data
  3. Searching for themes: Collating codes into potential themes
  4. Reviewing themes: Checking that themes work in relation to coded extracts and the full data set
  5. Defining and naming themes: Refining the specifics of each theme and the overall story
  6. Producing the report: Selecting compelling extract examples and writing up the analysis

Content Analysis

Content analysis can be conducted qualitatively or quantitatively. Qualitative content analysis focuses on the latent (underlying, interpretive) meaning of texts, while quantitative content analysis counts the frequency of specific words, phrases, or themes.
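
The word-counting variant of quantitative content analysis is easy to sketch in code. The example below tallies word frequencies in an invented transcript excerpt; a real analysis would typically also handle stemming (so that "support" and "supported" merge into one count) and remove common stop words:

```python
import re
from collections import Counter

# Hypothetical excerpt from an interview transcript.
text = """I felt supported by my family, but support from friends mattered
more. Family support came with conditions; friends supported me freely."""

# Lowercase and extract word tokens, then count occurrences.
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)

for word in ("family", "support", "supported", "friends"):
    print(word, counts[word])   # each of these appears twice in the excerpt
```

Frequencies like these address the manifest content of a text; interpreting what the repetition of "support" means to this speaker is the qualitative, latent-content side of the analysis.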

Rigour in Qualitative Research

Because the conventional quantitative criteria (reliability, validity, and generalizability in the statistical sense) do not translate directly to qualitative inquiry, alternative criteria have been proposed.

Trustworthiness

Lincoln and Guba (1985) proposed four criteria for establishing trustworthiness in qualitative research:

Credibility (analogous to internal validity): Do the findings accurately represent participants’ experiences? Strategies include:

  • Prolonged engagement in the research setting
  • Triangulation (using multiple data sources, methods, or researchers)
  • Member checking (sharing findings with participants for verification)
  • Peer debriefing (discussing findings with colleagues)

Transferability (analogous to external validity): Can the findings be applied to other contexts? The researcher provides thick description so that readers can judge whether the findings are relevant to their own settings.

Dependability (analogous to reliability): Would the findings be consistent if the study were repeated? The researcher maintains an audit trail — a transparent record of data collection and analytic decisions.

Confirmability (analogous to objectivity): Are the findings shaped by the data rather than by the researcher’s biases? Reflexivity and audit trails contribute to confirmability.

Triangulation: Triangulation involves using multiple sources of data (data triangulation), multiple researchers (investigator triangulation), multiple theoretical perspectives (theory triangulation), or multiple methods (methodological triangulation) to corroborate findings and enhance credibility.

Chapter 10: Cross-Sectional and Longitudinal Research

Time in Research Design

One of the most consequential decisions a researcher makes is whether to collect data at a single point in time or over multiple points. This choice determines the kinds of questions the study can address and the strength of the conclusions it can support.

Cross-Sectional Research

A cross-sectional study collects data from a sample at one point in time. It provides a snapshot of the variables of interest at a particular moment.

Strengths

  • Relatively quick and inexpensive
  • Can study a wide range of variables simultaneously
  • Useful for estimating prevalence (e.g., what percentage of Canadians support same-sex marriage?)
  • Can identify correlations between variables

Limitations

  • Cannot establish temporal ordering (which variable came first)
  • Cannot establish causation
  • Susceptible to cohort effects — differences between age groups may reflect generational differences rather than developmental changes

Longitudinal Research

Longitudinal research collects data from the same (or comparable) participants at multiple points in time. It is essential for studying change, development, and causation.

Types of Longitudinal Designs

Panel study: The same individuals are studied at multiple time points. Panel studies can track individual change over time but are vulnerable to attrition (participants dropping out) and panel conditioning (participants changing their behaviour because they are being studied repeatedly).

Cohort study: A group of individuals who share a common experience or characteristic (e.g., born in the same year, married in the same decade) is followed over time. Cohort studies are common in epidemiology and demography.

Trend study: Data are collected from different samples of the same population at multiple time points. Trend studies track changes in the population over time (e.g., annual surveys of attitudes toward immigration) but cannot track individual change.

Strengths of Longitudinal Research

  • Can establish temporal ordering
  • Can track individual and group change over time
  • Stronger basis for causal inference than cross-sectional designs

Limitations

  • Expensive and time-consuming
  • Vulnerable to attrition
  • Changes in measurement instruments over time can compromise comparability
Example from Family Research: A cross-sectional study might compare relationship satisfaction in couples married for different lengths of time and conclude that satisfaction declines with duration. A longitudinal study that follows the same couples over time might reveal a more nuanced pattern: satisfaction dips after the birth of the first child but recovers in later years.

Chapter 11: Mixed Methods Research

Beyond the Quantitative-Qualitative Divide

For much of the 20th century, quantitative and qualitative approaches were treated as fundamentally incompatible, rooted in irreconcilable philosophical assumptions. This dispute, sometimes called the paradigm wars, rested on the view that researchers must choose one side or the other.

The mixed methods movement emerged in the late 20th century as a pragmatic response to this impasse. Mixed methods researchers argue that the choice of method should be driven by the research question, not by philosophical allegiance, and that many research questions are best addressed by combining quantitative and qualitative approaches.

Defining Mixed Methods Research

Mixed methods research involves the intentional collection, analysis, and integration of both quantitative and qualitative data within a single study or program of inquiry. The key word is integration: simply including both types of data is not sufficient. The two strands must be connected, combined, or synthesized to produce insights that neither approach could yield alone.

Rationales for Mixed Methods

Researchers choose mixed methods for several reasons:

  • Complementarity: Using one method to elaborate, enhance, or clarify findings from the other
  • Triangulation: Seeking convergence of findings across methods to strengthen conclusions
  • Development: Using results from one method to inform the design of the other
  • Expansion: Extending the breadth and depth of inquiry
  • Initiation: Discovering paradoxes and contradictions that lead to new research questions

Major Mixed Methods Designs

Creswell and Plano Clark (2018) identify three core mixed methods designs:

Convergent Design

In a convergent design (also called parallel or concurrent), quantitative and qualitative data are collected simultaneously, analyzed separately, and then merged for interpretation. The purpose is to compare and contrast findings from the two data sets to develop a more complete understanding of the phenomenon.

When to use: When the researcher wants to validate quantitative findings with qualitative data, or to explore a topic from multiple angles simultaneously.

Example: A study of caregiving experiences might administer a standardized burden scale (quantitative) while conducting in-depth interviews about caregivers’ daily lives (qualitative), then compare the two sets of findings.

Explanatory Sequential Design

In an explanatory sequential design, the researcher first collects and analyzes quantitative data, then uses the findings to inform a qualitative follow-up phase. The purpose is to explain or elaborate on quantitative results.

When to use: When unexpected quantitative results need explanation, or when the researcher wants to explore statistical findings in greater depth.

Example: A survey reveals that divorced fathers report higher parenting satisfaction than expected. The researcher then interviews a subset of divorced fathers to understand the factors underlying this finding.

Exploratory Sequential Design

In an exploratory sequential design, the researcher first collects and analyzes qualitative data, then uses the findings to develop and test a quantitative instrument or intervention.

When to use: When existing instruments are inadequate for the population or phenomenon of interest, or when the researcher wants to generalize qualitative findings to a larger sample.

Example: Focus groups with immigrant families reveal culturally specific conceptions of “family involvement in education.” The researcher uses these findings to develop a culturally sensitive survey instrument, which is then administered to a large sample.

Embedded Design

In an embedded design, one type of data plays a supporting role within a primarily quantitative or qualitative study. For example, a randomized controlled trial (primarily quantitative) might include qualitative interviews to understand participants’ experiences of the intervention.

Challenges of Mixed Methods Research

Giddings and Grant (2006) caution that mixed methods research is not a neutral, apolitical solution to the paradigm wars. They argue that:

  • Mixed methods can privilege quantitative approaches by treating qualitative data as merely supplementary
  • The philosophical assumptions underlying each approach may genuinely conflict
  • Researchers must be competent in both quantitative and qualitative methods, which requires extensive training
  • Integration is often claimed but not achieved in practice
Quality in Mixed Methods: Rigorous mixed methods research requires: (1) a clear rationale for using mixed methods, (2) appropriate application of both quantitative and qualitative procedures, (3) genuine integration of findings, and (4) transparent reporting of how integration was achieved.

Chapter 12: Interdisciplinary Research

What Is Interdisciplinary Research?

Interdisciplinary research (IDR) integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines to advance understanding or solve problems whose solutions lie beyond the scope of a single discipline.

Borrego and Newswander (2010) distinguish among several levels of cross-disciplinary engagement:

Multidisciplinary Research

Multidisciplinary research involves researchers from different disciplines working on a shared problem but approaching it from their respective disciplinary perspectives. The disciplines remain distinct; there is juxtaposition but not integration.

Example: A team studying homelessness might include a sociologist studying structural factors, a psychologist studying individual risk factors, and a social worker studying service access — each using their own theories and methods.

Interdisciplinary Research

Interdisciplinary research goes further by integrating disciplinary perspectives to create a more holistic understanding. The researcher (or research team) synthesizes concepts, theories, and methods across disciplinary boundaries.

Example: A researcher studying intimate partner violence might integrate feminist theory (from women’s studies), attachment theory (from psychology), and ecological systems theory (from social work) to develop a more comprehensive framework.

Transdisciplinary Research

Transdisciplinary research transcends disciplinary boundaries altogether, creating new conceptual frameworks that cannot be reduced to any single discipline. It often involves collaboration with non-academic stakeholders (community members, policymakers, practitioners).

Why Interdisciplinary Research Matters for SMF

The field of Sexuality, Marriage, and Family Studies is inherently interdisciplinary. The phenomena it studies — sexuality, intimate relationships, family formation and dissolution, gender, care work — cannot be adequately understood from any single disciplinary perspective. Understanding family violence, for example, requires insights from psychology (individual risk and protective factors), sociology (structural inequality and social norms), law (legal protections and remedies), social work (intervention and support), and gender studies (patriarchal power structures).

Challenges of Interdisciplinary Research

  • Epistemological tensions: Different disciplines may have fundamentally different assumptions about what counts as knowledge and evidence.
  • Methodological pluralism: Integrating methods from different traditions requires flexibility and competence across approaches.
  • Communication barriers: Disciplinary jargon can impede mutual understanding.
  • Institutional barriers: Academic reward structures (hiring, tenure, publication) often privilege disciplinary specialization.

Integrating Methods Across Disciplines

Interdisciplinary research in the social sciences often employs mixed methods designs, community-based participatory approaches, or other integrative frameworks. The key is not merely to combine methods but to articulate how different disciplinary perspectives interact and what the integration adds to understanding.

The Value of Methodological Pluralism: Sexuality, Marriage, and Family Studies benefits from a wide methodological toolkit. Surveys can reveal the prevalence of phenomena; interviews can illuminate lived experiences; experiments can test causal mechanisms; document analysis can trace policy trajectories. The strongest research programs use multiple methods in complementary ways.

Chapter 13: Becoming a Critical Consumer of Research

Reading Research Critically

A central goal of this course is to develop the capacity to read research not passively but critically — to evaluate the quality of evidence, the soundness of reasoning, and the limitations of conclusions.

Questions to Ask of Any Study

About the research question:

  • Is the question clearly stated?
  • Is it significant — does it address a meaningful gap in knowledge?
  • Is it appropriately scoped?

About the literature review:

  • Does the study situate itself within relevant existing research?
  • Does it identify gaps or contradictions in the literature?

About the methodology:

  • Is the research design appropriate for the question?
  • Are the methods described in sufficient detail for replication?
  • Are the sampling procedures clearly explained?
  • Are the instruments valid and reliable?

About ethics:

  • Was ethical approval obtained?
  • Were participants’ rights and welfare protected?

About the results:

  • Are the findings presented clearly?
  • Are statistical tests appropriate and correctly interpreted?
  • Are qualitative findings supported by data (quotations, thick description)?

About the discussion:

  • Do the conclusions follow logically from the results?
  • Are limitations acknowledged?
  • Are claims of causation justified by the design?
  • Are findings contextualized within the broader literature?

Common Pitfalls in Research Interpretation

Confusing correlation with causation: A statistically significant correlation between two variables does not mean that one causes the other.

Overgeneralization: Findings from a convenience sample of university students may not apply to the general population.

Publication bias: Studies with statistically significant results are more likely to be published, which can distort the overall picture of evidence on a topic.

Ecological fallacy: Drawing conclusions about individuals based on group-level data.

Reductionism: Reducing complex social phenomena to a single variable or explanation.

Research Literacy as Social Responsibility

In an era of misinformation, the ability to distinguish well-conducted research from poorly designed studies, to identify ideological bias masquerading as scientific objectivity, and to understand the inherent uncertainty of all empirical findings is a form of civic literacy. This course provides the intellectual tools for that task.

Final Reflection: Research methods are not merely technical procedures. They embody assumptions about the nature of reality, the possibilities of knowledge, and the ethical obligations of inquiry. To study research methods is to engage with fundamental questions about how we can know the social world and how that knowledge can serve human well-being.

Glossary of Key Terms

Alpha level: The threshold probability (typically .05) used to determine statistical significance
Axial coding: A stage of qualitative coding in which open codes are grouped into broader categories
Bracketing: The practice of setting aside the researcher’s assumptions in phenomenological research
Case study: An in-depth investigation of a single bounded case
Coding: Assigning labels to segments of qualitative data to identify patterns and themes
Confidence interval: A range of values within which the true population parameter is expected to fall
Confounding variable: An uncontrolled variable that may explain an observed relationship
Constructionism: The ontological position that social reality is created through human interaction
Convergent design: A mixed methods design in which quantitative and qualitative data are collected simultaneously and merged
Correlational research: Research that examines relationships among variables without manipulation
Cronbach’s alpha: A measure of internal consistency reliability
Cross-sectional research: Data collection at a single point in time
Data saturation: The point at which additional data collection yields no new themes
Deductive reasoning: Moving from general theory to specific predictions
Dependent variable: The outcome variable measured in a study
Effect size: A standardized measure of the magnitude of an observed effect
Epistemology: The branch of philosophy concerned with the nature and limits of knowledge
Ethnography: The study of cultural groups through prolonged immersion and participant observation
Explanatory sequential design: A mixed methods design in which quantitative data collection is followed by qualitative follow-up
Exploratory sequential design: A mixed methods design in which qualitative data collection informs subsequent quantitative inquiry
External validity: The degree to which findings can be generalized to other settings and populations
Focus group: A facilitated group discussion used as a data collection method
Grounded theory: A qualitative approach that develops theory inductively from data
Hypothesis: A testable prediction about the relationship between variables
Independent variable: The variable that is manipulated or presumed to cause an outcome
Inductive reasoning: Moving from specific observations to general theory
Informed consent: Participants’ voluntary agreement to participate based on full disclosure of relevant information
Internal validity: The degree to which a study establishes a causal relationship
Interpretivism: The epistemological position that social science should focus on understanding meaning
Likert scale: A response scale measuring level of agreement with a statement
Longitudinal research: Data collection from the same or comparable participants at multiple time points
Member checking: Sharing qualitative findings with participants for verification
Mixed methods: Research combining quantitative and qualitative approaches within a single study
Narrative research: Qualitative research focused on the stories people tell about their lives
Nominal: A level of measurement involving categories with no inherent order
Null hypothesis: The hypothesis that there is no relationship or difference between variables
Ontology: The branch of philosophy concerned with the nature of reality
Open coding: The initial stage of qualitative coding in which preliminary labels are assigned
Ordinal: A level of measurement involving ordered categories with unequal intervals
p-value: The probability of obtaining the observed result if the null hypothesis were true
Paradigm: A worldview or framework of beliefs guiding research
Participant observation: A method in which the researcher participates in the setting being studied
Phenomenology: A qualitative approach seeking the essence of a lived experience
Positivism: The epistemological position that knowledge comes from objective observation and measurement
Probability sampling: Sampling methods in which every population member has a known chance of selection
Purposive sampling: Selecting participants based on specific criteria relevant to the research question
Quasi-experiment: A study resembling an experiment but lacking random assignment
Random assignment: Assigning participants to conditions through a random process
Reflexivity: Critical self-examination of the researcher’s influence on the research process
Reliability: The consistency of a measurement instrument
Research Ethics Board (REB): An institutional body that reviews and approves research involving human participants
Sampling frame: A list of all members of the population from which a sample is drawn
Snowball sampling: Asking participants to refer others who meet study criteria
Statistical significance: A determination that an observed result would be unlikely if the null hypothesis were true, made when the p-value falls below the alpha level
Stratified sampling: Dividing the population into subgroups and sampling from each
TCPS 2: Tri-Council Policy Statement: Canada’s federal policy on research ethics
Thematic analysis: A method for identifying and analyzing patterns in qualitative data
Thick description: Detailed, contextual account of qualitative findings
Triangulation: Using multiple data sources, methods, or researchers to corroborate findings
Trustworthiness: The quality criteria for qualitative research (credibility, transferability, dependability, confirmability)
Validity: The degree to which an instrument measures what it claims to measure
Variable: Any characteristic that can take on different values
Verstehen: Interpretive understanding; a key concept in Weberian sociology
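Several of the statistical terms above (reliability, Cronbach’s alpha, variance) can be made concrete with a small computation. The sketch below, which is illustrative rather than part of the course material and uses entirely made-up response data, computes Cronbach’s alpha for a short Likert scale using the standard formula: alpha = k/(k−1) × (1 − sum of item variances / variance of total scores), where k is the number of items.

```python
# Illustrative sketch only: Cronbach's alpha for hypothetical scale data,
# using just the Python standard library.
from statistics import pvariance

def cronbach_alpha(responses):
    """responses: one list of item scores per respondent."""
    k = len(responses[0])                 # number of items in the scale
    items = list(zip(*responses))         # regroup scores by item
    item_vars = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 4 respondents answering a 3-item Likert scale (1-5)
data = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(data), 3))   # prints 0.939
```

Values above roughly .70 are conventionally read as acceptable internal consistency, so these (fabricated) items would be considered highly consistent; the same variance-based logic underlies the reliability coefficients reported in published scale studies.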