PSYCH 253: Social Psychology

Siobhan Sutherland

Estimated study time: 58 minutes

Sources and References

Primary Textbook: Aronson, E., Wilson, T. D., Akert, R. M., & Sommers, S. R. (2020). Social Psychology (10th ed.). Pearson.

Supplementary Texts:

  • Gilbert, D. T., Fiske, S. T., & Lindzey, G. (Eds.). (2010). The Handbook of Social Psychology (5th ed.). McGraw-Hill.
  • Gilovich, T., Keltner, D., Chen, S., & Nisbett, R. E. (2019). Social Psychology (5th ed.). W. W. Norton.

Online and Additional Resources:

  • Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67(4), 371–378.
  • Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. Journal of Abnormal and Social Psychology, 58(2), 203–210.
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies. Journal of Personality and Social Psychology, 8(4), 377–383.
  • Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69(5), 797–811.
  • Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123–205.

Chapter 1: Situational Influence

The Power of the Situation

Social psychology rests on a single, deceptively simple premise: the social situation matters enormously in shaping human behavior. People are not simply autonomous agents acting on the basis of stable internal traits; they are responsive, often profoundly so, to the context in which they find themselves. This is perhaps the discipline’s most fundamental and counterintuitive lesson, one that distinguishes social psychology from both personality psychology and common folk wisdom.

The field defines social psychology as the scientific study of how people’s thoughts, feelings, and behaviors are influenced by the actual, imagined, or implied presence of others. What this definition captures is the dual nature of social influence: it operates both when others are physically present and when their presence is merely anticipated or implied. A student preparing an essay knows she will be evaluated, and that knowledge shapes her writing even in solitude.

Fundamental Attribution Error

Perhaps no concept in social psychology has been replicated, extended, and debated more than the fundamental attribution error (FAE), first named by Lee Ross in 1977. The FAE refers to the systematic tendency to overestimate the role of dispositional factors — stable personality traits, intentions, character — and to underestimate the role of situational factors when explaining other people’s behavior.

Fundamental Attribution Error (FAE): The tendency to overattribute others' behavior to their internal dispositions while neglecting the power of situational constraints. Also called correspondence bias in the attribution literature because observers tend to infer that behavior "corresponds" to a stable underlying trait even when situational explanations are equally or more plausible.

The classic demonstration was provided by Jones and Harris (1967), who asked participants to read essays either supporting or opposing Fidel Castro’s regime. When participants were told the essay writer had freely chosen their position, they unsurprisingly attributed the essay’s content to the writer’s genuine beliefs. More strikingly, even when participants were explicitly told the essay position had been assigned — that the writer had no choice — they still inferred that the essay reflected the writer’s true attitudes. The situational constraint (assignment) was cognitively available, yet participants discounted it.

Why does the FAE occur? Several accounts have been proposed. Gilbert and Malone (1995) offer a two-stage model: perceivers first automatically categorize the behavior as consistent with a corresponding trait (Stage 1, automatic), and only subsequently attempt to correct for situational factors (Stage 2, effortful). Because Stage 2 requires cognitive resources and motivation, it is frequently incomplete or abandoned. When people are cognitively busy, distracted, or unmotivated, they remain anchored to the dispositional characterization produced in Stage 1.

A second account emphasizes perceptual salience. When we observe another person act, that person is the most salient element in our perceptual field — the figure against the situational ground. The situation, by contrast, fades into background. Because attention directs causal attribution, the salient agent receives the causal credit (or blame).

Attribution Theory: From Heider to Kelley

The systematic study of how people explain behavior — attribution theory — was launched by Fritz Heider’s 1958 book The Psychology of Interpersonal Relations. Heider drew a distinction between personal causation (behavior caused by something about the person, their ability or effort) and impersonal causation (behavior caused by external forces or luck). He noted that observers have a “perceptual bias” toward personal causation that mirrors what Ross later called the FAE.

Edward Jones and Keith Davis (1965) elaborated Heider’s framework into the correspondent inference theory. When observing an intentional act, they argued, perceivers ask whether the behavior corresponds to a stable underlying disposition. Correspondence is inferred most confidently when: (a) the act has few non-common effects (it does something few alternatives would have done); (b) the behavior is socially undesirable (a freely chosen objectionable act tells us more about a person than a socially acceptable one); and (c) the behavior was freely chosen rather than role-prescribed.

Harold Kelley (1967, 1972) extended attribution theory by asking not just what disposition is inferred from a single act but how people reason across multiple observations. His covariation model proposes that people act as naive scientists, attending to three dimensions of information:

  • Consensus: Do other people behave the same way in this situation? (High consensus: most people do.)
  • Distinctiveness: Does this person behave this way only in this situation, or in many others? (High distinctiveness: only in this situation.)
  • Consistency: Does this person behave this way reliably across time? (High consistency: always.)
Kelley's Covariation in Practice: Imagine learning that your friend Mariam laughed uproariously at a comedian. If consensus is high (everyone laughed), distinctiveness is high (Mariam doesn't usually laugh at comedians), and consistency is high (she always laughs at this comedian), Kelley's model predicts a situational attribution: there must be something genuinely funny about this comedian. Conversely, if consensus is low, distinctiveness is low, and consistency is high, the model predicts a personal attribution: Mariam simply finds everything funny.
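The covariation patterns above can be expressed as a simple decision rule. The sketch below is illustrative only (the function name and the restriction to high/low values are mine, not Kelley's formal statement); it captures the three canonical patterns discussed in this section.

```python
# Hypothetical sketch of Kelley's covariation rules, simplified to the
# canonical high/low patterns. Real attribution is not this clean-cut.
def covariation_attribution(consensus, distinctiveness, consistency):
    """Return the attribution the covariation model predicts, given
    'high'/'low' values on the three information dimensions."""
    if consistency == "low":
        return "circumstance"   # unstable, one-off causes
    if consensus == "high" and distinctiveness == "high":
        return "stimulus"       # something about the entity/situation
    if consensus == "low" and distinctiveness == "low":
        return "person"         # something about the actor
    return "mixed"              # ambiguous patterns need more information

# Mariam laughs at the comedian: everyone laughs (high consensus), she
# rarely laughs at comedians (high distinctiveness), and she always
# laughs at this one (high consistency).
print(covariation_attribution("high", "high", "high"))  # prints "stimulus"
```

The "mixed" branch acknowledges that Kelley's model makes its clearest predictions only for these canonical patterns; intermediate patterns require additional information.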

Actor-Observer Asymmetry

Jones and Nisbett (1972) identified an extension of the FAE they called the actor-observer asymmetry: while observers attribute actors’ behavior to dispositions, actors tend to attribute their own behavior to situational factors. When you fail an exam, you explain it by reference to the unfair questions, your illness, the ambient noise — situational factors. When you see a classmate fail, you infer she did not study enough.

Two mechanisms underlie this asymmetry. First, there is a difference in perceptual perspective: the actor literally cannot see herself acting (she looks outward at the situation), whereas the observer looks at the actor. Second, actors have privileged access to their own situational history — they know about the extraordinary pressures they were under — whereas observers lack this information and must infer from observed behavior alone.

Qualification: The actor-observer asymmetry is real but culturally modulated. Research by Joan Miller (1984) showed that while American children and adults show the asymmetry strongly, Hindu Indian respondents show much less of it, preferring situational explanations for others' behavior. This suggests the asymmetry partly reflects cultural norms about persons and causation rather than universal cognitive architecture.

Self-Serving Bias

A further distortion in attribution concerns the self. People systematically take credit for successes (attributing them to ability or effort) while deflecting responsibility for failures (attributing them to bad luck or situational constraints). This is the self-serving attributional bias. The asymmetry benefits self-esteem and is virtually ubiquitous in Western samples. However, cross-cultural research reveals that the bias is attenuated or reversed in collectivist cultures where modesty about personal contribution is normatively valued.

The Milgram Obedience Studies

Stanley Milgram’s obedience experiments, conducted at Yale University in the early 1960s, stand as the most famous and morally disturbing demonstrations of situational power in the entire social-scientific literature. Milgram’s question was stark: could ordinary Americans be induced to inflict apparently lethal electric shocks on an innocent person, simply because an authority figure instructed them to do so?

The basic procedure involved three roles: the experimenter (dressed in a grey lab coat, representing authority), the participant (designated “Teacher”), and a confederate (designated “Learner” and secretly an actor). The Teacher administered what appeared to be progressively intensifying electric shocks (labeled from 15 volts to 450 volts, with the final two switches ominously labeled “XXX”) each time the Learner gave a wrong answer on a memory task. The Learner’s distress — grunts, protests, pleas to stop, and finally ominous silence — was pre-recorded and played at designated shock levels.

Milgram found that approximately 65% of participants in the baseline condition administered the maximum 450-volt shock, continuing despite apparent anguish. Before running the studies, Milgram had polled psychiatrists who predicted that fewer than 1% would do so. The discrepancy between prediction and reality encapsulates the fundamental attribution error at a collective level: people systematically underestimated situational influence.

Scientifically, the most instructive findings are the situational manipulations Milgram conducted across his variants:

  • Proximity of authority: When the experimenter gave commands by telephone rather than in person, obedience dropped to about 20%. Physical proximity of the authority increased its power dramatically.
  • Proximity of victim: In the “voice feedback” condition (Learner in adjacent room, screams audible), 62.5% administered maximum shock. When the Learner was in the same room (touch proximity), obedience dropped to 40%. When the Teacher was required to hold the Learner’s hand against a shock plate, obedience fell to 30%. As the victim became more psychologically real, resistance increased.
  • Institutional legitimacy: When the experiment was moved from Yale’s prestigious campus to a run-down commercial building in Bridgeport, CT, obedience dropped from 65% to 47.5%, demonstrating that institutional prestige confers authority.
  • Peer support: When two confederate “co-teachers” defied the experimenter at the 150-volt mark, only 10% of participants continued to maximum shock. Social models of resistance were extremely liberating.
  • Conflicting authorities: When two experimenters disagreed about whether to continue, no participant administered maximum shock. Conflicting authority undermines compliance.
Ethical and Empirical Reanalysis: Milgram's studies generated enormous ethical controversy about deception and psychological harm to participants. More recently, Gina Perry (2012) and historian Ian Nicholson have questioned whether participants were as fully deceived as Milgram claimed, and archival analysis suggests some experimenters deviated from the scripted prods, potentially inflating obedience rates. Thomas Blass (2004) provides the most comprehensive scholarly treatment. Subsequent partial replications (e.g., Burger, 2009, stopping at 150 volts for ethical reasons) suggest the phenomenon is real but may be somewhat smaller in magnitude than Milgram reported.

The Stanford Prison Experiment and Its Reanalysis

Philip Zimbardo’s 1971 Stanford Prison Experiment (SPE) was designed to explore whether the brutality observed in real prisons reflected the dispositions of guards and inmates or the situational pressures of the prison role itself. Twenty-four college men were randomly assigned to roles as “guards” or “prisoners” in a simulated prison environment in the basement of Stanford’s psychology building. The study was terminated after six days when “guards” began subjecting “prisoners” to psychological abuse, and several “prisoners” showed signs of emotional breakdown.

The SPE was widely interpreted as a demonstration of the power of roles and situational forces to corrupt ordinary people. However, subsequent reanalysis — most critically by psychologist Thibault Le Texier (2019), who obtained Zimbardo’s archive — revealed serious methodological problems. Guards were explicitly coached by Zimbardo to be “tough”; Zimbardo himself played the role of Prison Superintendent, blurring the boundary between experimenter and participant; and prisoners who showed resilience were actively pressured to behave in distressed ways. What was presented as emergent situational behavior was, in part, experimentally engineered.

These critiques do not fully negate the SPE’s core message — situational roles do shape behavior — but they caution against treating it as a pure demonstration of unguided situational power. The study is best read as a complex interaction of institutional framing, experimenter demand, and genuine role-based conformity pressure.


Chapter 2: Social Psychology Research Methods

The Scientific Foundation

Social psychology is an empirical science: its claims are evaluated against data collected through systematic observation and experimentation. Understanding the discipline’s methodology is not merely a technical concern but an epistemological one — the methods used determine what conclusions can legitimately be drawn. Dr. Sutherland emphasizes that every substantive claim encountered in this course rests on a methodological foundation that students should be able to evaluate critically.

Experimental Design: The Logic of Random Assignment

The experiment is social psychology’s most powerful tool for establishing causation. In a true experiment, the researcher manipulates one or more independent variables (IVs) — the presumed causes — and measures one or more dependent variables (DVs) — the presumed effects — while holding other factors constant.

Random Assignment: The procedure of assigning participants to experimental conditions entirely by chance (e.g., coin flip, random number generator). When properly implemented, random assignment ensures that participant characteristics (intelligence, personality, prior experience) are distributed equivalently across conditions on average, making the groups comparable at baseline. This is the feature that allows causal inference.

Random assignment must be distinguished from random sampling (selecting participants randomly from a population to ensure representativeness). Experiments frequently use convenience samples (e.g., undergraduate participants) and thus sacrifice external validity (generalizability) in exchange for internal validity (causal clarity). Social psychologists have generally prioritized internal validity, accepting that exact results may not generalize but hoping that underlying processes do.
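The logic of random assignment can be made concrete with a few lines of code. This is a minimal sketch (the function name and participant labels are hypothetical): shuffling the participant list and dealing it out to conditions gives every participant an equal chance of landing in each group, so participant characteristics are equated across conditions on average.

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle participants, then deal them to conditions round-robin."""
    rng = random.Random(seed)          # seed only for reproducibility
    shuffled = participants[:]         # copy; leave the original intact
    rng.shuffle(shuffled)
    return {c: shuffled[i::len(conditions)] for i, c in enumerate(conditions)}

groups = randomly_assign(list(range(40)), ["treatment", "control"], seed=1)
# Each condition receives 20 participants, and assignment is independent
# of any participant characteristic -- the groups are comparable at baseline.
```

Note what this does not do: it says nothing about how the 40 participants were recruited. That is the separate question of random sampling and external validity.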

Threats to Validity: Demand Characteristics and Cover Stories

Because human participants are intelligent, motivated social agents, they often attempt to figure out what an experiment is “about” and behave in ways they believe the researcher wants (or, occasionally, in contrary ways). These participant-generated hypotheses about experimental purpose constitute demand characteristics (Orne, 1962).

To reduce demand characteristics, social psychologists routinely employ cover stories — plausible alternative explanations of the study’s purpose presented to participants before and during the experiment. Milgram’s participants, for instance, believed they were in a study of the effects of punishment on learning, not a study of obedience. The cover story must be believable enough that participants do not see through it, yet the study must ultimately be debriefed — the true purpose explained — at the end.

A related threat is experimenter expectancy effects: experimenters who know which condition a participant is in may inadvertently signal expected behavior through subtle cues (tone, timing, facial expression). Scripted interaction protocols mitigate this threat, as do double-blind procedures, in which neither the experimenter administering the treatment nor the participant knows the condition assignment (a safeguard more common in drug trials than in typical social experiments).

Mundane Realism versus Experimental Realism

Elliot Aronson and Merrill Carlsmith (1968) drew an important distinction between two types of realism in social experiments:

  • Mundane realism refers to the degree to which the experimental situation resembles real-world settings. A laboratory discussion group differs from a naturally occurring workplace meeting in many respects; it has low mundane realism.
  • Experimental realism refers to the degree to which the experimental manipulation has genuine psychological impact on participants — whether it makes them think, feel, or behave in ways that are real and meaningful to them, not merely performed.
A study can have low mundane realism but high experimental realism. Milgram's shock generator was not something participants encountered in everyday life (low mundane realism), yet the distress they experienced was entirely genuine (high experimental realism). Conversely, a study that places participants in a perfectly realistic office setting but asks them to rate hypothetical scenarios may have high mundane realism but low experimental realism if participants do not engage authentically with the scenario.

Ethical Principles: Deception, Harm, and Debriefing

The use of deception in social psychology raises genuine ethical tensions. Deception is arguably necessary to obtain uncontaminated behavioral data, yet it violates principles of informed consent central to research ethics codes. The American Psychological Association’s ethical guidelines permit deception only when: the research question cannot be adequately addressed without it, the potential benefits outweigh potential harms, no substantial harm is expected, and participants are fully debriefed afterward.

Debriefing serves multiple functions: it restores participants’ accurate understanding of what occurred, ensures they leave the lab without lingering negative effects, and ideally provides an educational experience in which participants understand the scientific rationale for the deception. Good debriefing is nuanced — it must detect whether deception was successful (was the cover story believed?) and address any distress induced by the procedure.

The Replication Crisis

Beginning around 2011, social psychology — along with psychology more broadly — confronted a replication crisis when systematic attempts to reproduce landmark findings yielded disappointing rates of replication. The Open Science Collaboration’s (2015) large-scale replication project found that only about 36% of the replication attempts yielded statistically significant effects, with replication effect sizes averaging roughly half those of the originals; social psychology findings replicated at a lower rate than cognitive ones. High-profile failures included attempted replications of ego depletion, social priming effects, and several classic social psychology demonstrations.

The crisis prompted methodological reforms: pre-registration of hypotheses and analysis plans before data collection (to prevent HARKing — Hypothesizing After Results are Known), increased emphasis on statistical power, greater reliance on meta-analyses, open data and materials sharing, and multi-site replications. Dr. Sutherland frames the replication crisis not as a reason to dismiss social psychology but as evidence that the scientific process is self-correcting and that methodological rigor matters profoundly.


Chapter 3: Conformity, Social Norms, and Culture

The Pressure to Be Like Others

Humans are intensely social animals, and the pressure to align one’s behavior, beliefs, and attitudes with those of others — conformity — is among the most powerful and pervasive forces in social life. Social psychology has mapped conformity with considerable precision, distinguishing between the processes that drive it, the conditions that amplify or reduce it, and the cultural context in which it occurs.

Asch’s Line Studies

Solomon Asch’s conformity experiments (1951, 1956) are among the most replicated and pedagogically important studies in social psychology. Asch assembled groups of seven to nine participants, all but one of whom were confederates. The task appeared simple: judge which of three comparison lines matched a standard line in length. The correct answer was always unambiguous.

However, on twelve of eighteen trials, the confederates unanimously gave a clearly incorrect answer. Real participants (seated second-to-last) were thus faced with a conflict between their own veridical perception and a unanimous group consensus. Across the studies, approximately 75% of participants conformed to the incorrect majority on at least one trial, and overall conformity occurred on about 37% of critical trials. Only about 25% of participants never conformed.

Asch's Manipulations: Asch varied several factors systematically. When the unanimous majority was reduced to two persons, conformity dropped sharply. With three or more, conformity reached its plateau. Crucially, providing the participant with a single ally — even an ally who gave a different wrong answer — dramatically reduced conformity, demonstrating that unanimity rather than majority size per se is the critical factor. Asch also found that public conformity did not always reflect private belief: many participants who went along with the group privately maintained their own correct judgment.

Informational versus Normative Social Influence

Morton Deutsch and Harold Gerard (1955) provided a conceptual framework distinguishing two fundamentally different reasons why people conform:

Informational Social Influence: Conformity that occurs because people genuinely accept others' behavior or judgments as evidence about reality. When uncertain about the correct answer, other people's views serve as data. This type of conformity tends to produce private acceptance — the person genuinely changes their belief.
Normative Social Influence: Conformity that occurs because people desire to be liked, accepted, and to avoid rejection. People go along with the group not because they believe the group is right but because deviation is socially costly. This type of conformity typically produces public compliance without private acceptance.

Asch’s ambiguous-answer variants showed that when the line judgments were made genuinely difficult (lines of similar length), conformity increased and was more likely informational. When the correct answer was unambiguous, conformity was predominantly normative — participants knew the group was wrong but did not want to be the outlier.

Minority Influence: Moscovici and Consistent Dissent

The dominant model of conformity treats influence as flowing from majority to minority. Serge Moscovici challenged this unidirectional view, arguing that minorities can exert influence on majorities under certain conditions. In his classic studies (Moscovici, Lage, & Naffrechoux, 1969), a consistent minority of two confederates labeled blue slides “green.” When the minority was perfectly consistent across all trials, approximately 8% of naive majority participants’ responses were influenced. Though seemingly small, this demonstrated genuine minority-to-majority influence.

Moscovici’s conversion theory proposes that minority influence operates differently from majority influence. Majority influence tends to produce surface-level public compliance (normative influence). Minority influence, by virtue of its consistency and apparent certainty, causes the majority to engage in deeper cognitive processing of the minority’s position, sometimes producing genuine private attitude change — conversion — even without immediate public agreement. This indirect, latent influence sometimes appears only on indirect or delayed measures.

Social Norms: Descriptive and Injunctive

Robert Cialdini and colleagues made a foundational distinction between two types of social norms:

Descriptive Norms: Perceptions of what most people do in a given situation — what is common or typical. (Example: "Most hotel guests reuse their towels.")
Injunctive Norms: Perceptions of what most people approve or disapprove of — what is considered right or wrong. (Example: "Recycling is the right thing to do.")

Cialdini’s research on littering (Cialdini, Reno, & Kallgren, 1990) demonstrated that descriptive and injunctive norms can work in the same direction or at cross-purposes, and the resulting norm-focus theory predicts that whichever norm is salient at the moment of decision will guide behavior. Later field work at Petrified Forest National Park (Cialdini et al., 2006) illustrated the danger of misaligned norms: a sign pointing out that “many previous visitors have removed petrified wood” (descriptive norm: theft is common) paradoxically increased theft by normalizing the behavior, whereas a message conveying that most visitors do not take wood (descriptive norm: compliance is typical) was more effective.

Culture and Conformity

Conformity is not culturally uniform. Rod Bond and Peter Smith’s (1996) meta-analysis of Asch-type studies conducted in seventeen countries found significantly higher conformity rates in collectivist cultures (such as Fiji, Brazil, and Japan) compared to individualist cultures (such as the United States and the UK). Collectivist cultural frameworks emphasize harmony, group cohesion, and interdependence, making deviation from group consensus particularly costly. Individualist frameworks celebrate independence and personal authenticity, making conformity less normatively compelling.

It would be an error to interpret collectivist conformity as "mere" compliance without understanding. Within collectivist frameworks, sensitivity to others' views and adjusting one's behavior accordingly is a mark of social intelligence and maturity, not weakness. The meaning of conformity — and its relationship to identity — differs across cultural contexts.

Chapter 4: Social Cognition

How We Think About the Social World

Social cognition refers to the processes by which people perceive, remember, and make sense of social information. Its central finding — established across decades of research — is that human social thinking is often rapid, effort-sparing, and systematically biased. We do not process social information like an impartial computer; we use cognitive shortcuts that are usually adequate but sometimes produce striking errors.

Schemas and Their Influence

Schemas: Organized, pre-existing knowledge structures that represent categories of people, objects, events, or situations. Schemas guide attention (we notice schema-consistent information more readily), encoding (schema-consistent details are better remembered), and retrieval (schemas fill in gaps in memory). They are cognitively efficient but can perpetuate stereotypes and cause distortions.

Schemas operate at multiple levels: we have person schemas (traits associated with particular individuals), role schemas (expectations about how certain social roles are enacted), event schemas (scripts — Schank & Abelson, 1977 — representing the expected sequence of events in familiar situations), and content-free schemas (abstract structural knowledge about cause-effect relationships). When a schema is activated, it functions as a lens that filters and interprets incoming information.

Heuristics: Cognitive Shortcuts

Amos Tversky and Daniel Kahneman’s program of research identified systematic heuristics — mental shortcuts — that people use when making judgments under uncertainty. Three are especially relevant to social cognition:

Availability heuristic: People judge the frequency or probability of events by the ease with which examples come to mind. Events that are vivid, recent, or personally experienced are more cognitively available and thus seem more probable. This can lead to systematic over- or underestimation: people overestimate death rates from dramatic causes (plane crashes, shark attacks) and underestimate rates from mundane causes (heart disease) partly because the former are more available in memory.

Representativeness heuristic: People judge the probability that a target belongs to a category by the degree to which the target resembles the prototype of that category. Ignoring base rates — the actual frequency of categories in the population — leads to the conjunction fallacy (judging a conjunction of attributes as more probable than one of its constituent attributes alone) and other systematic errors.
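Why base-rate neglect is an error can be shown with a quick Bayes' rule computation. The numbers below are hypothetical, chosen only for illustration: a description strongly "resembles" a member of a rare category, yet the posterior probability stays low because the base rate dominates.

```python
# Illustrative (hypothetical) base-rate example. Suppose a description
# fits 90% of members of a rare category (hit rate) but also 30% of
# everyone else (false-alarm rate), and the category's base rate is 2%.
def posterior(prior, hit_rate, false_alarm_rate):
    """P(category | description) via Bayes' rule."""
    evidence = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / evidence

p = posterior(prior=0.02, hit_rate=0.9, false_alarm_rate=0.3)
print(round(p, 3))  # prints 0.058
```

Despite the strong resemblance (a 0.9 hit rate), the correct judgment is that the target is very unlikely to belong to the category. The representativeness heuristic tracks the hit rate and ignores the 2% prior.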

Anchoring and adjustment heuristic: When making quantitative judgments, people start from an initial value (anchor) and adjust upward or downward. Adjustment is typically insufficient, leaving final judgments biased toward the anchor even when the anchor was arbitrary.

Automatic versus Controlled Processing

John Bargh and others drew a fundamental distinction between two modes of cognitive processing that has organized much of modern social cognition:

Automatic Processing: Processing that is fast, effortless, unconscious, and difficult to control. Automatic processes can run in parallel with other tasks and are triggered by familiar stimuli. Stereotypic activation upon perceiving a social category member is often automatic.
Controlled Processing: Processing that is slow, effortful, conscious, and intentional. Controlled processes operate sequentially, require working memory capacity, and can override automatic responses when motivation and resources are sufficient.

Bargh’s (1990) “auto-motive” model proposed that even complex goals and behaviors can become automatized with sufficient practice. Priming studies from his lab (e.g., Bargh, Chen, & Burrows, 1996) suggested that exposure to concept-relevant stimuli could activate associated behaviors without awareness — though many of these specific priming effects (e.g., the elderly-priming-slow-walking study) have faced replication difficulties.

Confirmation Bias and Belief Perseverance

Once a belief is formed, people preferentially seek, interpret, and remember information consistent with it — a tendency called confirmation bias. In social contexts, this means we look for evidence that confirms our initial impressions of others, ask questions that presuppose the traits we expect to find (Snyder & Swann, 1978), and recall information consistent with our prior views more easily than inconsistent information.

Belief perseverance refers to the related phenomenon that initial beliefs often persist even after the evidence supporting them has been explicitly discredited. Anderson, Lepper, and Ross (1980) showed participants case studies supposedly demonstrating a relationship between firefighter risk-preference and job performance, then informed them the cases were fabricated. Participants continued to believe in the relationship more strongly than control participants who had never seen the false evidence. The mechanism appears to be that forming causal explanations to make sense of the initial evidence leaves those explanations intact even when the evidence is removed.

Correspondence Bias and Social Perception

The correspondence bias — the tendency to infer stable, correspondent traits from observed behavior even when situational causes are plausible — is the inferential engine underlying the FAE. It extends beyond attributional judgments about causation to broader social perception. When we observe someone behave aggressively once, we rapidly update our trait impressions toward “aggressive person” rather than holding the inference provisionally in light of situational uncertainty. Gilbert and Malone (1995) identify four conditions that amplify correspondence bias: behavior that appears voluntary, behavior whose situational causes are non-salient, cognitive busyness (reducing Stage 2 correction), and no prior expectation about situational pressure.


Chapter 5: The Self

The Social Nature of the Self

The self is not a pre-social given but an achievement constructed in and through social interaction. William James (1890) distinguished between the self-as-knower (the “I,” the experiencing subject) and the self-as-known (the “Me,” the object of reflection). Modern social psychology focuses predominantly on the “Me” — what people believe, feel, and know about themselves — and on how those self-representations are formed, maintained, and used.

Self-Concept and Self-Schemas

Self-Concept: The totality of one's beliefs about oneself — including one's traits, roles, values, and relationships. The self-concept is not unitary but multi-faceted: people have different self-descriptions for different contexts (the professional self, the familial self, the intimate self), though these are organized into a coherent whole.

Hazel Markus (1977) proposed the concept of self-schemas — cognitive generalizations about the self derived from past experience that organize and guide the processing of self-relevant information. People with a strong self-schema for “independence” will process information about independence-related behaviors faster, remember more self-relevant instances, and resist attempts to characterize them as dependent. People who lack a self-schema on a dimension (aschematic) do not show these processing advantages.

Self-Esteem: Sources and Consequences

Self-esteem refers to the evaluative dimension of the self-concept — how positively or negatively one regards oneself overall. It can be usefully decomposed into global self-esteem (overall positive or negative evaluation), domain-specific self-esteem (competence evaluations in particular areas), and state versus trait self-esteem (moment-to-moment fluctuations versus stable dispositional level).

Several theories address the sources of self-esteem. The sociometer theory (Leary & Baumeister, 2000) proposes that self-esteem functions as a monitor of social acceptance — it rises when we feel valued by others and falls when we feel rejected. On this account, self-esteem is not intrinsically valuable but is an indicator of the social landscape. High self-esteem signals social inclusion; low self-esteem signals exclusion risk and motivates social reconnection behavior.

The consequences of self-esteem are substantial. People with high self-esteem are more resilient in the face of failure, more willing to take social risks, and less reactive to negative social feedback. However, the relationship between self-esteem and academic performance is more modest than early research suggested, and high self-esteem among individuals with narcissistic features can paradoxically increase aggression when self-esteem is threatened.

Self-Serving Biases

People engage in a variety of cognitive maneuvers to protect and enhance their self-views. Beyond the self-serving attributional bias (discussed in Chapter 1), these include:

  • Downward social comparison: Comparing oneself to others who are worse off on a dimension, thereby elevating self-evaluation.
  • Self-handicapping: Creating obstacles to one’s own performance in advance, so that subsequent failure can be attributed to the obstacle rather than to insufficient ability (Berglas & Jones, 1978).
  • Unrealistic optimism: Believing that positive events are more likely and negative events are less likely to happen to oneself than to others (Weinstein, 1980).
  • The better-than-average effect: Rating oneself above average on most positive traits — a statistical impossibility at the group level, but psychologically pervasive (Alicke, 1985).

Self-Discrepancy Theory

Edward Tory Higgins (1987) proposed self-discrepancy theory, distinguishing three self-representations: the actual self (how one currently is), the ideal self (how one would ideally like to be), and the ought self (how one believes one should be — in terms of duty, obligation, and responsibility). Discrepancies between these representations generate specific emotional states:

  • Actual–ideal discrepancies produce dejection-related emotions (sadness, disappointment) because they represent the absence of positive outcomes.
  • Actual–ought discrepancies produce agitation-related emotions (anxiety, guilt) because they represent the presence of negative outcomes or failures of obligation.

Social Comparison Theory

Leon Festinger (1954) proposed social comparison theory, arguing that people have a drive to evaluate their opinions and abilities and, in the absence of objective physical standards, do so by comparing themselves to others. Festinger proposed that people prefer to compare with others who are similar to themselves on relevant dimensions (similar comparison targets provide the most informative benchmarks).

Subsequent research identified motives for comparison beyond accuracy: self-enhancement (comparing to inferior others to feel better — downward comparison) and self-improvement (comparing to superior others to guide improvement — upward comparison). The choice of comparison target is therefore strategic, driven by the dominant motive in a given context.

Self-Presentation and Impression Management

Erving Goffman’s (1959) dramaturgical model of social life conceived of everyday interaction as theatrical performance: people present themselves to audiences, managing impressions to achieve desired identities. Impression management refers to the strategic regulation of self-presentation to create particular impressions in others’ minds.

Social psychological research has documented several impression management strategies: self-promotion (presenting one’s accomplishments and competencies); ingratiation (making oneself likable through flattery, agreement, and favors); intimidation (projecting toughness to influence others through fear); and supplication (presenting oneself as needy to elicit assistance). The use of these strategies is modulated by audience awareness, relationship type, and cultural norms about appropriate self-presentation.


Chapter 6: Attitudes and Persuasion

The Structure of Attitudes

An attitude is a psychological tendency expressed by evaluating a particular entity with some degree of favor or disfavor (Eagly & Chaiken, 1993). The tripartite (ABC) model distinguishes three components:

  • Affective component: Feelings and emotions toward the attitude object.
  • Behavioral component: Past behaviors toward the attitude object and behavioral intentions.
  • Cognitive component: Beliefs and knowledge about the attitude object.

Attitudes vary in strength (importance, certainty, accessibility), in their internal consistency (whether affect, cognition, and behavior are aligned), and in their degree of implicitness. Strong, accessible, internally consistent attitudes are better predictors of behavior. Implicit attitudes — measured by response latency methods like the Implicit Association Test — often diverge from explicitly reported attitudes, particularly in socially sensitive domains.

Cognitive Dissonance

Leon Festinger’s (1957) cognitive dissonance theory is one of the most generative and empirically supported theories in social psychology. Festinger proposed that when a person holds two cognitions (thoughts, attitudes, beliefs) that are psychologically inconsistent — dissonant — she experiences an uncomfortable state of tension that motivates attitude or behavioral change to restore consistency.

Cognitive Dissonance: The discomfort experienced when holding two or more cognitions that are psychologically inconsistent or when one's behavior conflicts with one's self-image. The theory predicts that people will change attitudes, add new cognitions, or trivialize the importance of conflicting cognitions to reduce this discomfort.

The landmark empirical demonstration was Festinger and Carlsmith’s (1959) induced compliance study. Participants performed an extremely boring task (turning pegs on a board for an hour). Afterward, half were offered $20 and the other half $1 to tell the next “participant” (a confederate) that the task had been interesting. Later, all rated their actual enjoyment of the task.

Counterintuitively, $1 participants rated the task as significantly more enjoyable than $20 participants. The logic is that $20 provides sufficient external justification for lying — the large payment explains the behavior — so no dissonance arises and attitudes need not change. One dollar provides insufficient justification, creating dissonance (“I said it was fun, but I have no good reason for lying”), which is resolved by genuinely persuading oneself that the task was not so bad.

Beyond induced compliance, dissonance arises in other contexts:

  • Effort justification: When people expend significant effort to achieve an outcome, they value that outcome more highly, presumably to justify the effort invested. Aronson and Mills (1959) showed that women who underwent a severe initiation to join a discussion group rated the group more positively than those with mild or no initiation.
  • Free choice: After choosing between two attractive alternatives, people enhance their evaluation of the chosen option and diminish their evaluation of the unchosen one (post-decisional dissonance reduction).
  • Hypocrisy paradigm: Stone et al. (1994) found that making people publicly advocate for a behavior they had themselves failed to perform (using condoms) — activating awareness of their own hypocrisy — produced greater subsequent behavior change than either advocacy or personal reflection alone.

Elaboration Likelihood Model

Petty and Cacioppo’s (1986) Elaboration Likelihood Model (ELM) provides the most influential account of persuasion processes. The model proposes two routes to attitude change:

Central Route: Attitude change resulting from careful, effortful consideration of the quality of arguments in a persuasive message. When motivation and ability to process the message are high, people elaborate on the arguments — scrutinizing their logic and evaluating the evidence. Attitude changes via the central route are generally stronger, more durable, and more predictive of behavior.
Peripheral Route: Attitude change resulting from superficial cues associated with the message rather than its argumentative content — for example, the communicator's attractiveness, expertise, the number of arguments regardless of their quality, or the emotional tone of the message. Peripheral route attitude changes tend to be weaker and less durable.

Motivation and ability to process the message determine which route dominates. High personal relevance (the issue directly affects the recipient), high need for cognition (intrinsic motivation to think carefully), and absence of distraction increase elaboration. Source expertise and message length serve as peripheral cues when elaboration is low.

Fear Appeals and Persuasion Resistance

Fear appeals — messages that emphasize the threatening consequences of failure to comply with a recommendation — are widely used in health communication but produce paradoxical effects when misapplied. Witte’s (1992) extended parallel process model predicts that when perceived threat is high and perceived efficacy (belief that one can effectively perform the recommended behavior) is also high, people engage in danger control (adaptive behavior change). When perceived threat is high but efficacy is low, people instead engage in fear control (denial, avoidance, reactance) — resulting in no behavior change or even counterproductive responses.

Psychological reactance (Brehm, 1966) is another obstacle to persuasion: when people perceive that their freedom to hold a particular attitude or engage in a particular behavior is being threatened, they respond by asserting that freedom — often moving in the opposite direction from the message. Heavy-handed persuasive attempts can thus backfire. Pre-emptive inoculation — exposing people to weakened forms of counterarguments and having them refute those arguments — has been shown to make attitudes more resistant to subsequent persuasion attempts.


Chapter 7: Attraction and Close Relationships

The Determinants of Interpersonal Attraction

Who do we like, and why? Social psychology has identified several robust determinants of interpersonal attraction, operating from the initial formation of acquaintance through the deepening of close relationships.

Proximity and mere exposure: One of the strongest predictors of friendship and romantic partnership is simple physical proximity. Festinger, Schachter, and Back’s (1950) classic study of housing-unit friendships at MIT found that residents were most likely to befriend others on the same floor, and even on the same floor, those whose apartments were nearer (or whose paths naturally converged near stairwells and mailboxes) became friends more often. Proximity increases attraction partly through mere exposure (Zajonc, 1968): repeated exposure to any stimulus — including people — tends to increase liking, even without awareness of the prior exposure.

Similarity: People are attracted to others who are similar to themselves in attitudes, values, personality, and background. The similarity-attraction effect (Byrne, 1971) is one of the most replicated findings in the attraction literature. The mechanism is partly cognitive (similar others confirm our worldview) and partly anticipatory (we expect similar others to like us back). Similarity in attitudes is a stronger predictor of attraction than similarity in personality, and attitude similarity at early stages of relationship formation is particularly potent.

Physical Attractiveness

Physical attractiveness exerts a powerful effect on social judgments and outcomes, operating through what has been called the physical attractiveness stereotype — the implicit belief that “what is beautiful is good.” Dion, Berscheid, and Walster (1972) showed that attractive individuals were attributed more socially desirable traits, better life outcomes, and higher competence than unattractive individuals on the basis of photographs alone.

Evolutionary accounts (Buss, 1989) propose that physical attractiveness serves as a cue to underlying genetic quality and reproductive fitness. Features associated with attractiveness — bilateral symmetry, clear skin, secondary sexual characteristics reflecting hormonal health — may have been reliable cues to health and fertility in ancestral environments. Gender differences in mate preferences partially support evolutionary predictions: men across cultures prioritize physical attractiveness (cue to fertility) somewhat more than women do, while women somewhat more strongly prioritize resource acquisition and status (Buss, 1989). However, critics note that these differences are modest in absolute terms, culturally variable, and shaped substantially by contemporary socioeconomic conditions.

Social accounts emphasize that attractiveness norms are culturally constructed, historically variable, and racialized in ways that evolutionary accounts insufficiently address. What counts as beautiful is not universal; it is produced through media, economic power, and social hierarchy.

Attachment Theory in Romantic Relationships

John Bowlby’s attachment theory, originally developed to explain infant-caregiver bonds, has been extended to adult romantic relationships by Hazan and Shaver (1987) and elaborated by Brennan, Clark, and Shaver (1998). The adult attachment framework identifies two dimensions — attachment anxiety (fear of abandonment, preoccupation with the relationship) and attachment avoidance (discomfort with closeness, preference for self-reliance) — yielding four attachment styles:

Style                  Anxiety  Avoidance  Description
Secure                 Low      Low        Comfortable with intimacy and independence
Anxious-preoccupied    High     Low        Craves intimacy, fears abandonment
Dismissing-avoidant    Low      High       Values independence, minimizes closeness
Fearful-avoidant       High     High       Desires closeness but fears rejection

Securely attached adults tend to have more satisfying, stable relationships, communicate more openly about relationship concerns, and recover more adaptively from relationship stress. Insecure attachment styles are associated with lower relationship quality, greater conflict, and higher dissolution rates, though attachment style is not destiny — it is modulated by relationship-specific experiences.

Sternberg’s Triangular Theory

Robert Sternberg (1986) proposed that love comprises three components arranged in a triangle:

  • Intimacy: Feelings of closeness, connectedness, and bondedness.
  • Passion: The drives that lead to romance, physical attraction, and sexual consummation.
  • Commitment: The decision that one loves another, and the long-term maintenance of that love.

Different combinations of these components produce qualitatively different love experiences. Consummate love — the fullest form — involves all three. Romantic love involves intimacy and passion without commitment. Companionate love involves intimacy and commitment without passion. Empty love involves commitment without intimacy or passion. Sternberg’s model is descriptively useful though empirically difficult to test, as the components are correlated and the theory makes few strong directional predictions.

Relationship Maintenance and Dissolution

Caryl Rusbult’s investment model (1983) proposes that relationship commitment — and thus relationship stability — is a function of three factors: satisfaction with the relationship (rewards minus costs), quality of alternatives (how attractive other relationships or being alone appear), and investment size (resources — time, self-disclosure, shared friends, memories — that would be lost if the relationship ended). High commitment predicts accommodation (responding constructively to a partner’s negative behavior), willingness to sacrifice, and relationship maintenance behaviors.
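The investment model's three factors can be rendered as a toy linear function. This is an illustrative sketch only: Rusbult's theory specifies the direction of each factor's influence, not numerical weights, and the equal unit weights and function name below are assumptions for illustration.

```python
def commitment(satisfaction, alternatives, investment):
    """Toy rendering of Rusbult's investment model: commitment rises
    with satisfaction and investment and falls with the quality of
    alternatives. Equal unit weights are an illustrative assumption;
    the theory itself specifies only the direction of each effect."""
    return satisfaction - alternatives + investment

# A satisfied, heavily invested partner with poor alternatives is far
# more committed than a dissatisfied partner with good alternatives.
assert commitment(satisfaction=8, alternatives=2, investment=7) > \
       commitment(satisfaction=3, alternatives=8, investment=1)
```

Even on this crude rendering, the model's signature prediction falls out: a dissatisfied partner can remain committed when alternatives are poor and investments are large, which is how the model accounts for persistence in unhappy relationships.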

Dissolution follows predictable trajectories. Gottman’s (1994) longitudinal research identified four interaction patterns that predict divorce with considerable accuracy: criticism (attacking the partner’s character), contempt (treating the partner as inferior — the single strongest predictor), defensiveness (self-protection in response to criticism), and stonewalling (emotional withdrawal). These “Four Horsemen” create negative sentiment override — a state in which even neutral partner behaviors are interpreted negatively — eroding the positive affect that sustains relationships.


Chapter 8: Stereotyping, Prejudice, and Stigma

Social Categorization and Illusory Correlation

The cognitive foundation of stereotyping is social categorization — the largely automatic tendency to classify people into social groups (race, gender, age, occupation) upon perception. Categorization is cognitively efficient: it allows us to use group-level knowledge to fill in what we do not know about individuals. However, it also activates stereotypic associations that may be inaccurate, outdated, or malicious.

Illusory correlation (Hamilton & Gifford, 1976) describes the tendency to perceive a relationship between two variables — typically a minority group and a negative behavior — even when no such relationship exists in the presented data. The mechanism involves the co-occurrence of two distinctive stimuli (minority group membership + undesirable behavior) being disproportionately encoded because both are statistically infrequent. This distortion provides a cognitive explanation for how stereotypes linking minority groups to negative traits can form and persist even without actual group differences in behavior frequency.

Implicit Bias and the IAT

The Implicit Association Test (IAT), developed by Greenwald, McGhee, and Schwartz (1998), measures the strength of automatic associations between concepts. The race IAT measures the ease with which participants associate Black or White faces with positive or negative words. Most participants (including many Black participants, though to a smaller degree) show faster associations between White faces and positive words — evidence, proponents argue, of implicit racial bias.

The predictive validity of IAT scores has been extensively debated. A meta-analysis by Oswald et al. (2013) found modest correlations between IAT scores and discriminatory behavior. A subsequent meta-analysis by Kurdi et al. (2019) found that IAT scores predicted behavior incrementally over explicit measures but that the effect sizes were modest. The IAT is better characterized as a measure of cultural knowledge about associations — what associations one has absorbed from the social environment — than as a direct measure of prejudiced motivation or intentional discriminatory behavior.
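To make the response-latency logic concrete, here is a simplified sketch of an IAT-style D score: the mean latency difference between the two pairing conditions, scaled by the pooled variability of all responses. The function name and the toy data are hypothetical, and the full scoring algorithm (Greenwald, Nosek, & Banaji, 2003) additionally filters error trials and extreme latencies.

```python
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT D score: the latency difference between blocks,
    scaled by the pooled standard deviation of all trials. Illustrative
    only; the published algorithm also handles errors and outliers."""
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# Hypothetical response times in milliseconds: slower responding in the
# "incompatible" pairing yields a positive D, the usual bias signature.
compatible = [620, 650, 700, 640, 660]
incompatible = [780, 820, 760, 800, 840]
assert iat_d_score(compatible, incompatible) > 0
```

Scaling by variability (rather than reporting the raw millisecond difference) is what makes D scores roughly comparable across participants who differ in overall response speed.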

Stereotype Threat

Stereotype Threat: The situational predicament experienced by members of negatively stereotyped groups who face the risk of confirming a negative stereotype about their group through their performance. Awareness of the stereotype — and concern about confirming it — creates cognitive and motivational interference that can depress performance.

Steele and Aronson’s (1995) foundational studies gave Black and White Stanford undergraduates a difficult verbal test. In the stereotype-threat condition, the test was described as diagnostic of intellectual ability (activating the stereotype of Black intellectual inferiority). In the no-threat condition, the same test was described as a laboratory problem-solving task. Black participants performed significantly worse than White participants in the diagnostic condition but showed no performance gap in the non-diagnostic condition. Importantly, scores were statistically adjusted for participants’ prior SAT performance, so the diagnostic-condition gap cannot be attributed to pre-existing differences in preparation.

Stereotype threat has since been documented across many groups and domains: women in mathematics (Spencer, Steele, & Quinn, 1999), older adults on memory tasks, White men when told Asian men perform better at mathematics, and first-generation college students in academic settings. The mechanisms are multiple: working memory depletion from ruminative monitoring, arousal that disrupts performance, and cognitive load from self-regulatory efforts.

A recent wave of replication studies and meta-analyses has suggested that earlier estimates of stereotype threat effect sizes may be inflated due to publication bias, and that effects in ecologically realistic high-stakes settings may be smaller than laboratory demonstrations suggest. Flore and Wicherts (2015) and subsequent analyses indicate that while stereotype threat is a real phenomenon, caution is warranted about strong causal claims linking it to group achievement gaps at a population level.

Aversive Racism

John Dovidio and Samuel Gaertner (1986) proposed aversive racism to describe the prejudice of people who consciously endorse egalitarian values and would never knowingly discriminate, yet harbor unconscious negative affect toward racial minorities. Aversive racists’ prejudice surfaces not in explicit, unambiguous situations (where egalitarian norms clearly prescribe non-discrimination) but in ambiguous situations where racial motivation is not apparent, or where a non-racial justification is available.

Empirically, aversive racism manifests in hiring decisions: when a candidate’s qualifications are unambiguously strong or weak, Black and White candidates are evaluated similarly, but when qualifications are mixed, White candidates are favored. A parallel pattern appears in helping behavior, where Black victims receive less help than White victims when other bystanders are present (supplying a non-racial justification for inaction) but not when the potential helper is alone. This formulation has policy implications: anti-discrimination training that focuses only on explicit prejudice may be insufficient.

Contact Hypothesis: Allport’s Four Conditions

Gordon Allport (1954) proposed the contact hypothesis: that intergroup contact reduces prejudice. However, Allport recognized that not all contact is equally beneficial; in fact, negative contact can increase prejudice. He specified four key conditions under which contact reduces intergroup hostility:

  1. Equal status between the groups in the contact situation (not the broader society, but the specific setting).
  2. Cooperative interdependence — working toward common goals that require both groups’ contributions.
  3. Institutional support — the contact must be sanctioned and encouraged by authorities, law, or social norms.
  4. Acquaintance potential — the contact must allow personalized interaction that enables participants to know each other as individuals rather than as group representatives.

A meta-analysis by Pettigrew and Tropp (2006), encompassing 515 studies and 250,000 participants, confirmed that intergroup contact reduces prejudice on average, and that the effect is significantly stronger when Allport’s conditions are met. Importantly, even contact not meeting all four conditions reduces prejudice on average, though to a lesser degree.


Chapter 9: Groups, Social Identities, and Intergroup Conflict

Social Facilitation and Social Loafing

The earliest experimental work in social psychology addressed what happens to individual performance in the presence of others. Norman Triplett (1898) observed that cyclists rode faster when pacing against others than alone, and subsequently demonstrated that children wound fishing reels faster in the presence of coworkers. This social facilitation effect was puzzling because later research showed that presence sometimes impairs rather than enhances performance.

Robert Zajonc (1965) resolved the contradiction with drive theory: the presence of others increases physiological arousal, which enhances the emission of dominant responses — the most practiced or likely behaviors in a given situation. For well-learned tasks, the dominant response is the correct one; arousal enhances performance. For novel or complex tasks, the dominant response may be incorrect; arousal impairs performance.

Social loafing (Latané, Williams, & Harkins, 1979) is the tendency for individuals to exert less effort when working collectively than when working individually. As group size increases and individual contributions become submerged in the group product, identifiability decreases — individuals can free-ride on others’ efforts. Social loafing is reduced when individual outputs are identifiable, when the task is meaningful or involving, and when group members have a strong group identity.

Deindividuation

Deindividuation refers to the loss of individual self-awareness and identity that occurs in group contexts — particularly in crowds, mobs, and online anonymity. Philip Zimbardo (1969) proposed that deindividuation reduces self-monitoring and inhibition, freeing behavior from internalized norms. In laboratory studies, anonymous participants (wearing hoods) delivered longer shocks to a confederate than identifiable participants did. However, Diener (1980) and later Postmes and Spears (1998) argued that deindividuated behavior does not simply release anti-social impulses but reflects conformity to whatever norms are made salient in the situation — which can be either prosocial or antisocial.

Group Polarization and Groupthink

Group polarization refers to the phenomenon whereby group discussion tends to shift members’ views toward a more extreme position in the direction of the initially dominant opinion in the group. If group members lean toward risk before discussion, discussion makes them even more risk-accepting; if they lean toward caution, discussion makes them more cautious. Two mechanisms drive polarization: informational influence (discussion generates arguments favoring the dominant side, exposing members to more supporting evidence) and normative influence (members compete to be seen as more extreme advocates of the group’s valued position).

Irving Janis (1972) proposed groupthink — a mode of thinking in which the desire for unanimity and cohesion in a highly cohesive group overrides realistic appraisal of alternatives. Janis identified groupthink as the process underlying catastrophic foreign policy decisions such as the Bay of Pigs invasion and the escalation of the Vietnam War; later analysts applied the framework to the Challenger disaster. Symptoms include illusions of invulnerability, collective rationalization of warnings, stereotyped views of out-groups, pressure on dissenters, and the emergence of self-appointed “mindguards” who filter out disconfirming information.

Subsequent empirical testing of Janis's groupthink model has yielded mixed results. The model as originally formulated is difficult to test because many of its variables are defined post-hoc. More rigorous experimental and archival research suggests that cohesion alone does not produce groupthink; rather, directive leadership and a lack of established procedures for critical evaluation are more consistently implicated in poor group decision quality.

Social Identity Theory

Henri Tajfel and John Turner’s (1979, 1986) social identity theory (SIT) proposed that people’s self-concept includes not only personal identity (individual attributes) but also social identity — the part of self derived from membership in valued social groups. People are motivated to maintain positive social identity, which they do by favorably comparing the in-group to relevant out-groups on valued dimensions (in-group favoritism).

The minimal group paradigm (Tajfel et al., 1971) demonstrated that in-group favoritism arises even when groups are created on the basis of trivial, arbitrary criteria (preference for Klee vs. Kandinsky paintings). Participants allocated resources between in-group and out-group members, preferring in-group members even at the cost of maximizing absolute group outcomes — choosing relative advantage over absolute gain.

Turner’s self-categorization theory (1987) extended SIT by analyzing the conditions under which personal versus social identity is salient, and the consequences of social categorization for perception of self and others. When social identity is salient, people perceive themselves and others in terms of group membership, accentuating in-group similarities and intergroup differences.

Realistic Group Conflict and Intergroup Contact

Muzafer Sherif’s Robbers Cave Experiment (1954/1961) demonstrated that intergroup conflict can emerge simply from competition over scarce resources — realistic group conflict theory. Boys at a summer camp, randomly divided into two groups, developed strong in-group loyalty and intense out-group hostility during a competitive tournament. Crucially, hostility was reduced not by mere contact (which initially increased tension) but by introducing superordinate goals — goals that required both groups’ cooperation to achieve and that neither group could accomplish alone. This finding converges with Allport’s cooperative interdependence condition and provides experimental support for the contact hypothesis under structured conditions.


Chapter 10: Altruism and Prosocial Behavior

The Bystander Effect

In the early morning hours of March 13, 1964, Kitty Genovese was stabbed to death outside her apartment building in Kew Gardens, New York. Initial press reports claimed that 38 neighbors witnessed the attack without calling the police, though subsequent investigation has substantially revised this account: the number of actual witnesses was smaller, and the attack occurred in fragmented, non-simultaneous episodes. Nevertheless, the case catalyzed John Darley and Bibb Latané’s (1968) systematic investigation of why bystanders fail to help.

Darley and Latané’s experiments demonstrated the bystander effect: the greater the number of bystanders present during an emergency, the less likely any individual bystander is to help, and the longer helping is delayed. In their classic experiment, participants overheard what appeared to be another participant having a seizure over an intercom and either believed they were alone with the victim or believed that other participants were also listening. When alone, 85% reported the emergency within six minutes; when four other bystanders were believed present, only 31% did.

Diffusion of Responsibility: When multiple bystanders are present, each individual feels less personally responsible for taking action. Responsibility is diffused across the group, reducing each member's subjective obligation. This is distinct from pluralistic ignorance — the misperception that because others appear calm, there is no emergency.

The Decision Model of Helping

Latané and Darley (1970) proposed a five-step decision model specifying the cognitive sequence a bystander must complete before intervening:

  1. Notice the event: Distraction, self-focus, or environmental noise can prevent noticing.
  2. Interpret the event as an emergency: Ambiguous cues are often normalized, especially when others appear unconcerned (pluralistic ignorance).
  3. Assume responsibility: Diffusion of responsibility reduces felt obligation when others are present.
  4. Know how to help: Competence and knowledge of appropriate response.
  5. Decide to help: Overcoming audience inhibition — fear of embarrassment if the help turns out to be unnecessary.

Failure at any step aborts helping. This model predicts when bystander presence will most strongly inhibit helping (steps 2 and 3) and suggests intervention points for designing pro-helping environments.
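The model's all-or-nothing logic can be sketched in a few lines; the function and argument names below are hypothetical labels for the five steps, not terminology from the source.

```python
def will_help(notices, interprets_as_emergency, assumes_responsibility,
              knows_how_to_help, decides_to_act):
    """Latané & Darley (1970) decision model: helping occurs only if
    every step succeeds; failure at any single step aborts the sequence."""
    return all([notices, interprets_as_emergency, assumes_responsibility,
                knows_how_to_help, decides_to_act])

# A bystander who notices the event and reads it as an emergency but
# diffuses responsibility (step 3 fails) does not help:
will_help(True, True, False, True, True)  # → False
```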

Empathy-Altruism Hypothesis

A longstanding philosophical and psychological debate concerns whether humans are capable of genuine altruism — behavior motivated by concern for another’s welfare rather than by self-interest. C. Daniel Batson’s (1991) empathy-altruism hypothesis holds that empathic concern — feeling compassionate sympathy for a suffering other — produces genuinely altruistic motivation. When empathy is high, the goal of helping becomes reducing the other’s suffering, not reducing one’s own distress.

Batson’s program of research involved elegant experimental separations of empathy-based altruism from various egoistic alternatives. Participants who felt high empathy for a suffering target helped even when helping was difficult and escape was easy — a result inconsistent with the hypothesis that helping is motivated by reducing one’s own discomfort (which would predict that when escape is easy, high-empathy people would escape rather than help). Multiple replications supported the empathy-altruism sequence, though debate about whether completely selfless motivation exists at the evolutionary or mechanistic level continues.

Evolutionary Accounts: Kin Selection and Reciprocal Altruism

Evolutionary biology offers two mechanisms for the emergence of prosocial behavior:

Kin selection (Hamilton, 1964): Genes that predispose organisms to help close genetic relatives can increase in frequency because relatives share copies of those genes. The coefficient of relatedness (r) is the probability that two individuals share a given allele identical by descent. Hamilton’s rule states that altruistic behavior will be favored when the cost to the helper (c) is less than the benefit to the recipient (b) multiplied by r: \( b \cdot r > c \). People do preferentially help close kin, and the degree of preference scales with genetic relatedness.
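Hamilton's rule is easy to check with simple arithmetic. The sketch below uses illustrative cost and benefit values (not from the source) to compare helping a full sibling (r = 0.5) with helping a first cousin (r = 0.125):

```python
def altruism_favored(benefit_b, cost_c, relatedness_r):
    """Hamilton's rule: altruism is selected for when b * r > c."""
    return benefit_b * relatedness_r > cost_c

# An act costing the helper 1 unit of fitness that delivers 3 units
# of benefit to the recipient:
altruism_favored(3, 1, 0.5)    # full sibling: 3 * 0.5 = 1.5 > 1 → True
altruism_favored(3, 1, 0.125)  # first cousin: 3 * 0.125 = 0.375 > 1 → False
```

The same act can thus be favored or disfavored by selection depending solely on how closely related the recipient is, which is the core prediction behind kin-biased helping.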

Reciprocal altruism (Trivers, 1971): Helping non-kin can evolve if the favor is reciprocated at a later time. The conditions for reciprocal altruism — repeated interactions, recognition of individuals, ability to detect cheaters — are met in human social life, which may explain why humans help unrelated others and track reciprocity carefully. Norms of reciprocity (Gouldner, 1960) institutionalize this evolutionary logic as a cultural rule.


Chapter 11: Aggression

Defining and Measuring Aggression

Social psychology defines aggression as behavior intended to harm another person who is motivated to avoid that harm. This definition excludes accidental harm (lacking intent) and self-harm. It distinguishes between hostile aggression (motivated by anger, the goal being the harm itself) and instrumental aggression (harm is a means to another end — money, status, territory). It also distinguishes direct aggression (physically or verbally harming the target) from relational aggression (damaging the target’s social relationships through gossip, exclusion, or rumor).

Frustration-Aggression Hypothesis

Dollard, Doob, Miller, Mowrer, and Sears (1939) proposed the original frustration-aggression hypothesis: frustration — the blocking of goal-directed behavior — always leads to aggression, and aggression is always preceded by frustration. The strong version of this hypothesis was quickly challenged empirically: frustration sometimes leads to withdrawal or depression rather than aggression, and aggression frequently occurs without prior frustration.

Leonard Berkowitz (1989) revised the hypothesis into the cognitive neoassociation model: frustration and other aversive stimuli generate negative affect, which primes aggressive thoughts, memories, and behavioral tendencies through associative networks. Whether aggression occurs depends on higher-order cognitive appraisal. Crucially, almost any aversive stimulus (pain, heat, foul odors, unpleasant noise) can prime aggressive cognitions, not only frustration. This explains the well-replicated finding that heat is associated with increased aggression and violent crime (Anderson, 1989).

Social Learning Theory: Bandura’s Bobo Doll Studies

Albert Bandura’s (1961, 1963) observational learning studies provided the most influential theoretical alternative to drive-based models of aggression. In the classic Bobo doll experiments, children watched an adult model interact with an inflatable Bobo doll. In the aggressive condition, the model punched, kicked, and verbally abused the doll. Children subsequently placed in the room with the Bobo doll imitated the model’s specific aggressive acts with high fidelity, including novel aggressive behaviors they had not performed before.

Bandura showed that learning was observational and required neither reinforcement of the observer nor reinforcement of the model during observation. However, whether learned aggression was actually performed depended on anticipated consequences: when the model was punished for aggression (vicarious punishment), children performed fewer aggressive acts — but matched children in the rewarded and no-consequence conditions when subsequently offered incentives, demonstrating that acquisition and performance are distinct.

Key Findings from the Bobo Doll Paradigm: Boys imitated physical aggression at higher rates than girls in most conditions, but girls showed relatively less suppression of verbal aggression. Same-sex model effects were found (children imitated same-sex models more strongly), and the model's prestige and power affected imitation rates. These findings shaped Bandura's broader social learning theory, which later became social cognitive theory emphasizing the role of self-efficacy, observational learning, and reciprocal determinism.

Media Violence Research

The relationship between media violence exposure and aggression is among the most intensely studied questions in applied social psychology. Meta-analyses by Anderson et al. (2010) covering decades of experimental, cross-sectional, and longitudinal research report consistent positive associations between violent media consumption and aggressive thoughts, feelings, and behavior. The effect sizes are modest by social science standards (r ≈ .15–.20) but are comparable to other accepted public health relationships.

Critics including Ferguson (2015) argue that the literature suffers from publication bias, inconsistent operationalization of “aggression” (measuring aggressive noise blasts rather than real-world violence), and failure to control for confounding variables such as family environment and trait aggression. The relationship between video game violence and real-world violent crime is particularly contested: historical data show violent crime rates declining over the period of greatest video game proliferation. The current scientific consensus accepts a link between media violence and subclinical aggression outcomes but maintains greater uncertainty about effects on serious violent crime.

Biological Factors: Testosterone

Testosterone has long been implicated in aggression in both animal and human research. The relationship in humans is more complex and bidirectional than popular accounts suggest. Testosterone levels are associated with dominant and competitive behavior, and meta-analyses find modest positive correlations with aggression. Crucially, Mazur and Booth (1998) documented biosocial interactions: testosterone rises in anticipation of competition and rises further following victory; it declines after defeat. Social context thus modulates testosterone, not just the reverse. Furthermore, testosterone may motivate dominance-seeking rather than aggression per se — aggression is one path to dominance but not the only one.

Culture of Honor

Dov Cohen and Richard Nisbett (1994) documented regional and subcultural differences in aggression within the United States that they attributed to a culture of honor — a normative framework in which personal reputation is seen as requiring active defense, and insults or threats to honor demand aggressive response. White male homicide rates in the American South and West exceed those in the North, particularly for argument-related homicides (not felony homicides). In subsequent laboratory experiments, Cohen, Nisbett, Bowdle, and Schwarz (1996) showed that Southern White men who were insulted exhibited greater physiological arousal, behavioral aggression, and cortisol responses than Northern White men — and more strongly endorsed retaliatory aggression as appropriate.

The culture of honor thesis situates aggression firmly in historical-ecological context: herding economies (which characterized the South and West) placed a premium on deterring theft through reputation for toughness, whereas farming economies (the North) could rely on community monitoring and legal institutions. This analysis illustrates the interactive, contextual nature of aggression causation.

Reducing Aggression

Social psychological research points to several strategies for reducing aggression:

  • Catharsis is a myth: The popular belief that venting aggression reduces it is not empirically supported. Engaging in aggressive behavior (punching pillows, violent video games framed as cathartic) typically maintains or increases aggressive affect rather than reducing it.
  • Reducing frustration and aversive stimulation: Removing or mitigating sources of frustration, heat, and crowding can reduce aggressive priming.
  • Social learning interventions: If aggression is learned observationally, it can also be countered by providing prosocial models and modifying norms about aggressive behavior.
  • Empathy training and perspective-taking: Increasing empathic concern for potential victims reduces willingness to aggress and is a core component of many evidence-based aggression intervention programs.
  • Cognitive reappraisal: Interventions that help high-aggression individuals reinterpret ambiguous social cues as benign (addressing the hostile attribution bias — the tendency to perceive hostile intent in ambiguous situations) have shown efficacy in school-based programs (Crick & Dodge, 1994).

Notes compiled for PSYCH 253: Social Psychology, Fall 2025. Based on lectures by Dr. Siobhan Sutherland and readings from Aronson et al. (2020), Gilbert et al. (2010), and Gilovich et al. (2019).
