Kahneman’s Influence
Princeton Alumni Weekly Newspaper
February 2003
Princeton is indeed fortunate to have Daniel Kahneman on its faculty.
The fact that as a psychologist he received the Nobel Prize in economics
is indicative of how he, together with his collaborator Amos Tversky,
has made fundamental contributions across disciplinary boundaries. Even
many years ago, when I met Amos Tversky and his colleague Maya Bar-Hillel
when they each visited Harvard, it was clear that their contribution
to understanding clinical judgment raised central questions about the
nature of human rationality under conditions of uncertainty.
Thus, when in 1981, together with Robert Hamm ’72, I coauthored a book, Medical
Choices, Medical Chances: How Patients, Families and Physicians Can
Cope with Uncertainty, we found it essential to cite their early
work. Today, now illuminated by Kahneman and Tversky’s work, exploring
the relationship between human judgment and motivation under conditions
of uncertainty continues to be a great adventure.
Harold J. Bursztajn ’72, M.D.
Cambridge, Mass.
New brain research helps explain how we do and don't reason
By Billy Goodman '80
Jonathan Cohen, professor of psychology, and Joshua Greene *02, a former
graduate student in philosophy, used a pair of ethical dilemmas, the "trolley
dilemma" and the "footbridge dilemma," to examine
how people make moral judgments.
Two months after psychology professor Jonathan Cohen came to Princeton,
he received a visit from Joshua Greene, then a graduate student in the
Department of Philosophy. Greene had an idea for some experiments to
better understand what was happening in the brain during moral reasoning.
Cohen, director of the university's Center for the Study of Brain, Mind,
and Behavior, was working to bring a powerful brain-imaging tool, rare
outside medical centers, to Princeton. He was somewhat taken aback
that a student in philosophy would be proposing an imaging experiment.
"It sounded crazy that a philosophy student would have anything
concrete to do," says Cohen, "but, as it turned out, it was
an incredibly exciting and important project."
The research, ultimately, was published about a year ago in Science,
the most important broad-based science journal in the country, if not
the world. The work addressed the role played by emotions in the moral
judgments that people make. For a long time the dominant view among moral
psychologists and philosophers, influenced largely by Immanuel Kant,
has been that moral judgment is primarily a matter of reasoning.
Recently, a different perspective has gained prominence, led by researchers
such as University of Virginia psychologist Jonathan Haidt. Haidt argues
that moral thinking is highly intuitive and emotional, and only appears
to be a product of careful reasoning because of people's after-the-fact
rationalizations of their thinking. Greene's experiments support the
view that at least some kinds of moral judgments strongly engage the
emotions.
The work was inspired by a family of ethical dilemmas that have tantalized
philosophers. Here is a pair:
The trolley dilemma features a runaway trolley that will kill five people
on the track ahead if it continues on its course. The only way to save
the five people is for you to hit a switch that will turn the trolley
onto a side track where it will kill one person. Do you hit the switch,
thereby saving five people but leading to one death?
The footbridge dilemma is similar, with a trolley threatening to kill
five people. You are standing on a footbridge over the tracks, next to
a large stranger. If you push the stranger onto the tracks, killing him,
his body will prevent the train from reaching the others, saving them.
Do you push?
Most people answer yes to the first question, no to the second. Philosophers
have puzzled over why people believe it is morally acceptable to sacrifice
one life for five in one case, but unacceptable in the other. Greene,
now a postdoctoral researcher in Cohen's lab, notes that solving this
puzzle has proven to be a formidable philosophical challenge. And this,
he explains, gives rise to a distinct psychological puzzle: "The
fact that it's tough to justify in rational terms raises the question:
How do people arrive at these conclusions?"
In search of an answer, Greene, working with Cohen and others in the
Center and the psychology department, used the new magnetic resonance
imaging scanner, which Cohen succeeded in bringing to Princeton, to measure
the neural activity of people as they pondered these dilemmas. (The scanner
is related to diagnostic MRI machines, with the added ability to measure
brain activity as volunteers perform mental tasks; hence it is known
as functional magnetic resonance imaging or fMRI.)
Research projects in psychology and neuroscience these days frequently
include fMRI scanning because it provides a chance to test hypotheses
about the involvement of brain regions in various cognitive activities.
For example, a group of structures in the center of the brain makes up
the limbic system, one seat of the emotions. The outer folds of the brain
are the cerebral cortex, which houses regions devoted to vision, speech,
and memory. The frontal lobes of the cerebral cortex are where the most
complicated forms of reasoning and problem-solving take place. All complex
tasks, such as thinking or forming and responding to emotions, require
the involvement of many brain regions. But finding that a region of the
brain is, or is not, involved in a cognitive task opens up possibilities
for understanding brain function and dysfunction.
The Princeton researchers found that problems of the footbridge type,
which they called "personal" moral dilemmas because they require active,
personal involvement, activated areas of the brain associated with emotion
much more consistently than problems like the "impersonal" trolley dilemma,
which activated brain areas associated with memory.
Moreover, says Greene, the few people who said it's OK to push a person
off a footbridge to save five others tended to take more than twice as
long to make the decision.
Greene says the study is another piece of evidence that intuitive emotional
reactions affect moral judgments. "In the footbridge case, most
people say 'no,' but people who say 'yes' take a long time,"
he says. "According to our model, you've got an emotional response
saying, 'no, no, no,' so anyone who's going to say 'yes' will
have to fight that response. Whatever is going on in the brain that is
different in these two cases is really affecting people's judgments,
and you can see it in how it slows people down when they go against emotion."
Photo: Professor Daniel Kahneman, winner of
the Nobel Prize in economics. His research found that heuristics frequently
fail people, leading to errors in reasoning. (Frank Wojciechowski)
The "Linda problem"
This notion that reasoning, judgment, and decision-making often can be
less than rational has a long history in psychology, and it is permeating
other fields. Greene's work in philosophy is one example. Another is
the work of psychology professor Eldar Shafir, whose research in behavioral
economics looks at how people make choices and decisions under conditions
of uncertainty or conflicting information.
Shafir's colleague, Daniel Kahneman, the Eugene Higgins Professor of
Psychology and Public Affairs, won the 2002 Nobel Prize in Economics
for his trail-blazing work in behavioral economics. Kahneman and his
longtime Stanford University colleague Amos Tversky, who died in 1996,
wrote papers over 25 years examining reasoning and decision-making under
conditions of uncertainty. They found that most people use heuristics,
quick rules of thumb, when thinking about many sorts of problems. And
while these heuristics often are good enough for the decisions people
make every day, they also frequently lead to repeatable errors.
Kahneman and Tversky have had great influence in economics, where standard
models of economic behavior assume that people will act rationally in
a specific way, that is, to maximize "expected utility," or
satisfaction. Instead, the pair showed that when forced to reason about
problems where relevant information is uncertain or perplexing, people
fall back on heuristics that frequently fail them.
A common example that trips up the statistically naïve, as well
as professionals who should know better, is the gambler's fallacy. Let's
say a fair coin turns up heads five times in a row. Many a gambler would
bet on the next toss to be a tail, despite the fact that each toss is
independent and has an equal chance of being heads or tails. Kahneman
and Tversky showed that scientists also fall into this sort of faulty
reasoning when they have undue confidence in the results of a small
experiment, such as a clinical trial with few patients, that may be due
to chance alone. Such reasoning is like flipping five heads in a row and concluding
that the coin was biased.
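The independence that defeats the gambler is easy to check with a short simulation. This sketch (an illustration, not part of Kahneman and Tversky's work) records what a fair coin does on the toss immediately after a five-heads streak:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Simulate many short runs of coin tosses; whenever the first five tosses
# come up heads, record the outcome of the very next toss.
next_after_streak = []
for _ in range(20000):
    tosses = [random.random() < 0.5 for _ in range(6)]  # True = heads
    if all(tosses[:5]):                # five heads in a row
        next_after_streak.append(tosses[5])

streak_runs = len(next_after_streak)
heads_fraction = sum(next_after_streak) / streak_runs

# Five heads in a row is rare (probability 1/32), but when it happens,
# the next toss still comes up heads about half the time.
print(streak_runs, round(heads_fraction, 2))
```

The streak itself occurs in roughly 1 run in 32, yet the follow-up toss shows no memory of it.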
Although humans invented the concept of probability, it seems many people
do not apply it properly, at least when thinking intuitively and making
snap judgments. A famous demonstration of this is what Kahneman and Tversky
called "the Linda problem":
Linda is 31 years old, single, outspoken, and very bright. She
majored in philosophy. As a student, she was deeply concerned with issues
of discrimination and social justice and also participated in antinuclear
demonstrations.
Kahneman and Tversky wrote this description and listed eight possible
occupations or activities for this fictional Linda, including three of
real interest to the researchers (that she is active in the feminist
movement, that she is a bank teller, or that she is a bank teller active
in the feminist movement) and five distracting fillers. Test subjects
were asked to rank the eight possibilities from most likely to least
likely. Subjects consistently ranked the possibility that Linda is a bank
teller active in the feminist movement as more likely than that Linda
is a bank teller.
That, of course, is impossible, since the category "bank teller" includes
the category "bank teller active in the feminist movement." Kahneman
discussed the problem recently in his Wallace Hall office, while leaning
back in a leather easy chair, the only luxury in the spartan space. He
noted that the problem had been widely debated in the 20 years since
he and Tversky published it, and that much nonsense had been written
about it. He and Tversky were perplexed that the overwhelming majority
of people failed to reason proficiently on this problem, including people
who should have known better. "We never predicted that people would
be blind," he says.
As a result of the Linda problem and many others designed to elicit certain
heuristic or intuitive judgments, Kahneman believes that "people
will think intuitively unless they have very strong cues pointing them
to apply logic."
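The impossibility Kahneman points to is just the conjunction rule of probability: for any events A and B, P(A and B) can never exceed P(A). A toy, entirely made-up population makes that concrete:

```python
# Hypothetical population; the attribute values are invented for illustration.
population = [
    {"teller": True,  "feminist": True},
    {"teller": True,  "feminist": False},
    {"teller": False, "feminist": True},
    {"teller": False, "feminist": True},
    {"teller": False, "feminist": False},
]

n = len(population)
p_teller = sum(p["teller"] for p in population) / n
p_both = sum(p["teller"] and p["feminist"] for p in population) / n

# Every feminist bank teller is also a bank teller, so the conjunction
# can never be the more probable description.
print(p_teller, p_both)
```

However the attributes are distributed, the count for "teller and feminist" is a subset of the count for "teller."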
What's for dinner?
Say you entered a restaurant, took your seat, and as you perused the menu
the waiter sidled up and said, "You can have the fish, or else not
both the soup and the salad." You exchange befuddled looks with
your companions and wonder whether waiters these days are out-of-work
logicians rather than unemployed actors.
But, it turns out, the waiter is Stuart Professor of Psychology Philip
Johnson-Laird, and he simply wants to know what you want to eat. What
is possible? Because of the force of "or else," it is not obvious
that one possibility, according to the waiter's instructions, is that
you can have all three dishes at once. (See answers, #1.)
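Reading "or else" as an exclusive or, the waiter's sentence can be checked mechanically. This Python sketch (not part of Johnson-Laird's materials) enumerates every combination of dishes and keeps the ones his assertion allows:

```python
from itertools import product

# "You can have the fish, or else not both the soup and the salad":
# read "or else" as exclusive, so exactly one of the two clauses holds.
allowed = [
    (fish, soup, salad)
    for fish, soup, salad in product([False, True], repeat=3)
    if fish != (not (soup and salad))
]

# All three dishes at once satisfies the waiter's condition.
print((True, True, True) in allowed)
```

The surprising model is the one listeners tend to miss: with fish on the plate, "not both the soup and the salad" must be false, which means soup and salad as well.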
Johnson-Laird uses these sorts of intricate logical puzzles to help provide
clues to the ways people think. A longstanding view in philosophy and
psychology is that humans are intrinsically rational and follow logical
laws "tacitly represented in the brain," he says. In that view,
reasoning is akin to applying formal rules of inference to propositions.
(One famous rule, for example, is called "modus ponens": given
the two propositions "if a, then b" and "a," one can conclude "b.")
Johnson-Laird himself once held that view, that people
followed such rules of thinking, and it was taken for granted 30
years ago.
No longer. "People discovered that the content of what you're reasoning
about can affect the conclusions that you draw," he says. In other
words, people apply general and background knowledge to reasoning tasks.
This finding was, he says, "mildly embarrassing" to those who
said we reason using formal rules of inference, because such rules should
be blind to content.
An alternative view, promoted by Johnson-Laird and colleagues, comes
close to common sense, he admits. When we reason, he says, "we think
about possibilities that are compatible with the premises that we're
reasoning from." In practice, that means that people focus on what
is true or possible from a given set of premises; they often fail to
consider what they believe must be false. For example, ponder this problem:
In a hand of cards, only one of the following three assertions is true:
There is a king or an ace or both in the hand. There is a queen or an
ace or both in the hand. There is a jack or a ten or both in the hand.
Is it possible that the hand contains an ace?
Johnson-Laird and his colleagues have given this problem to many smart
people and most answer "yes." They are wrong: If the first
assertion is true, the second and third must be false, so there
is no ace. The same reasoning applies if the second assertion is true.
Johnson-Laird calls this theory of reasoning, in which people focus on
what's possible, a "mental models" approach. It lends itself
to various predictions, such as: the more possibilities we have to think
about in solving a problem, the harder the task should be. "We can't
hold in mind many possibilities," he says, which gets in the way
of promptly understanding assertions of the sort the waiter made about
the fish, the soup, and the salad.
He recently teamed up with two Princeton colleagues on an experiment
using fMRI to look for regions of the brain that perform logical reasoning
and to look for evidence that might favor one of the competing theories,
rules of inference or mental models, of how logical reasoning
is carried out in the brain.
If logical reasoning primarily involves applying rules of inference to
sentences, then language areas of the brain in the left hemisphere ought
to be active. If instead people reason mainly by imagining possibilities
in mental models, then regions of the right hemisphere thought to be
involved in spatial representation should be more active.
To study the issue, Johnson-Laird, postdoctoral researcher James Kroger
(now at the University of Michigan), and Center director Cohen devised
problems for volunteers to solve as their brains were scanned. The problems
included both easy and hard logic problems. The researchers surmised
that language areas of the brain initially would be active for both kinds
of logic problems, as solvers tried to understand them. But, says Johnson-Laird,
the hard problems prompted volunteers to search for counterexamples to
solve them; would that be evident in a brain scan?
Here are the premises for one series of problems they used:
There are five students in a room. Three or more of these students are
joggers. Three or more of these students are writers. Three or more of
these students are dancers.
An easy logic problem asks: Does it follow that at least one of the writers
in the room is a student? (See answers, #2.) A hard logic problem asks:
Does it follow that at least one of the students is all three? (#3)
The easy problem can be answered just by understanding the premises.
The hard problem prompts a search for a counterexample: any single
arrangement of the five students and their activities in which none is
simultaneously a jogger, writer, and dancer. When people looked for counterexamples,
says Kroger, they were making mental models and checking them until a
counterexample was found. By noting regions active only during counterexample
problems, "we were able to see where in the brain mental models
were made."
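The counterexample search the volunteers performed can itself be sketched as a brute-force enumeration (Python, illustrative only): generate every assignment of the three activities to the five students that satisfies the premises, then look for one in which no student does all three.

```python
from itertools import product

# Each student either does or doesn't do each of (jog, write, dance).
premise_models = []
for assignment in product(product([False, True], repeat=3), repeat=5):
    joggers = sum(s[0] for s in assignment)
    writers = sum(s[1] for s in assignment)
    dancers = sum(s[2] for s in assignment)
    # Premises: three or more joggers, writers, and dancers.
    if joggers >= 3 and writers >= 3 and dancers >= 3:
        premise_models.append(assignment)

# Easy problem: every writer counted above is one of the five students,
# so "at least one writer is a student" holds in every model.
# Hard problem: search for a model where no student does all three.
counterexamples = [m for m in premise_models
                   if not any(all(s) for s in m)]

print(len(counterexamples) > 0)  # True: the hard conclusion does not follow
```

One such counterexample: students 1 and 2 jog and write, student 3 jogs and dances, student 4 writes and dances, student 5 dances.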
As expected, they confirmed the finding, from physicians who treat patients
with frontal lobe damage, that the frontal lobes are strongly involved
in logical and mathematical thinking. More noteworthy, there were some
areas in the right frontal lobe active only during logical reasoning,
and areas of the left frontal lobe active only during mathematical calculation.
And, it turns out, when subjects were thinking of counterexamples while
solving the hard logic problems, they appeared to use an area roughly
the size of a quarter, about an inch above and behind the right eye.
This region, called the frontal pole, was active only during the counterexample
problems.
For Kroger, the results are exciting because they suggest the most forward
parts of the brain are crucial to complex mental processes, a subject
he continues to work on. And the results lend support to Johnson-Laird's
mental models theory as psychologists continue to debate how people think.
Quick thinking
Try this problem: A bat and a ball together cost $1.10. The bat costs
$1 more than the ball. How much does the ball cost? (#4)
If you answer 10 cents, as many Princeton undergraduates do, you've given
an answer that seems reasonable but that you will discover, upon checking,
is wrong.
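The checking Kahneman recommends takes one line of algebra: if the ball costs x, then x + (x + 1.00) = 1.10, so 2x = 0.10 and x = 0.05. A trivial sketch, working in cents to avoid floating-point noise:

```python
# Bat-and-ball problem: together they cost $1.10; the bat costs $1 more.
total = 110        # cents
difference = 100   # cents

# From ball + (ball + difference) = total:
ball = (total - difference) // 2
bat = ball + difference

print(ball, bat)  # 5 105: the ball costs five cents, the bat $1.05
```

The intuitive answer of 10 cents fails the check: a 10-cent ball and a $1.10 bat would total $1.20.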
"We do a lot of thinking without checking," says Kahneman. "My
guess is, you'll find brain differences between when people check and
when they don't. Localizing those differences will be interesting."
Following increasingly common usage, Kahneman called quick, intuitive
thinking system 1 and more controlled, reflective thinking system 2,
a distinction that some trace to William James in the 19th century. Moreover,
he expects to see evidence of these systems in the brain. The system-1
/ system-2 distinction, he says, "is going to be tied to the brain,
or it will vanish." In other words, if scientists don't find evidence
of these two distinct processes in the brain, they will have to abandon
or modify the notion of two systems of thinking.
That distinction, between rapid, intuitive thought and deliberate
reasoning with feedback, crops up widely in psychology. For instance,
social psychologist and Princeton professor Susan Fiske, who studies
prejudice and stereotyping, says that intergroup biases are "surprisingly
automatic." People, she notes, categorize other people based on
age, race, and gender in "milliseconds" and then have visceral
responses to those categories in milliseconds, too.
According to Fiske, researchers have found that when people are shown
faces of other racial groups, even when presented subliminally, too quickly
to be consciously perceived, they have a reaction that includes a response
in a part of the brain called the amygdala, centrally located in the
brain's limbic system, where emotions are generated. This stereotyping
does not mean that everyone is a racist, Fiske says, but is the result
of learned, socially constructed categories and living in a culture where
racial stereotyping is common.
Fiske and Elizabeth Wheeler '02, then a graduate student, found that
the amygdala response could be eliminated by changing the context or
inviting a more careful, system 2-like deliberation. They interfered
with the automatic response by flashing a vegetable name immediately
before showing a face and asking subjects to say whether the person pictured
would like the vegetable or not.
Fiske also has shown that automatic stereotyping reactions can be reduced
or eliminated if people are sufficiently motivated. For example, a member
of an athletic team that includes someone from a different racial or
social group is much less likely to stereotype that person. And a person
motivated by values of fairness or harmony may employ a more "individuating
kind of process" to override the automatic response, says Fiske.
Her work tests her theory that two distinct processes exist for making
sense of people: one relatively automatic, quick, and categorical;
the other slower, more controlled, and susceptible to motivation and
context. Before the fMRI scanner came to Princeton, Fiske's research
relied on questionnaire responses, measures of attention, and nonverbal
cues to discomfort. Increasingly, she uses fMRI to look for evidence
of those two systems in the brain, extending her own and others' earlier
work that found an amygdala response to unfamiliar faces of another race.
Imaging, Fiske says, "is one more tool in our toolbox" for
understanding how the brain constructs, and is influenced by, social
relationships. Indeed, scientists at the Center, who come from many Princeton
departments as well as other universities and even pharmaceutical companies,
are using a variety of research techniques to better understand the relationship
of brain and mind, and how both lead to behavior.
Billy Goodman '80 is a writer who lives in Montclair, New Jersey.
* When subjects were asked to categorize pictures of faces by age (over
or under 21), they generally had increased amygdala activation, a
typical alarm response, when those pictures were of unfamiliar
people of a different race (left). But if the subjects first had to
concentrate on a task that distinguished one person from another, for
example, predicting whether the person pictured would like a particular
vegetable, the cross-race amygdala response disappeared.
Answers
1. The assertion, "You can have the fish, or else not
both the soup and the salad," is difficult to understand because "or
else,"
"not," and "both" create multiple possibilities
that are difficult to keep in mind at once. To understand what is
possible, substitute for some of the terms, a technique that
is widely used in problem-solving. If you were told you could have "the
fish or else the meat," you would know you could have one or
the other, not both. If you could have "the fish or else not
the meat," you could still have one or the other, in this case "fish" or "not
the meat." You could, however, have "fish" and
"meat" together. If you substitute "both the soup
and the salad" for
"meat," you'll see that you can have fish, soup, and salad
together.
2. Of course. We told you it was an easy problem.
3. No. The easiest way to see this is to represent the five
students by the numbers 1 through 5, and list their activities
beneath them. Make three joggers, three writers, and three dancers,
without any one being all three. That's a counterexample.
4. Five cents. Even if you got the right answer, you must
admit that your brain immediately screamed "10 cents." Remember
what your elementary school math teachers told you: Check your
work!
Having my head examined
In the quest for knowledge, a PAW writer has his brain scanned
Do not be alarmed by the lack of activity in writer Billy Goodman's brain.
He probably was just dozing. (Princeton Psychology Department)
Phil got $10 and offered to split it with me. The "split,"
however, was $9 for him and $1 for me. In my head I said a phrase that
translates roughly as "over my dead body," and I pressed a
button with my left middle finger, which meant that I was rejecting
his offer. Which meant that neither of us got a dime.
I was participating, off the books, in a study run by two postdoctoral
researchers in the psychology department, Jim Rilling and Alan Sanfey.
They are investigating the neural correlates of the perception of fairness,
or lack of it, in social interactions. In other words, what's going on
in your brain when someone stiffs you, as Phil did me?
To visualize what is happening in the brain, Sanfey and Rilling place
volunteers inside a brain scanner in the basement of Green Hall. That
is how I came to be flat on my back, my head inside a big donut of a
superconducting magnet, as I contemplated the insulting proposition from
Phil, a Princeton undergraduate who made his offer not in person, but
via a computer screen. The magnet is part of a machine that does functional
magnetic resonance imaging (fMRI), which is like a diagnostic MRI scanner
with the addition of being able to measure brain activity.
Lying as still as possible in the narrow confines of the magnet, I was
able to view, with a mirror mounted over my head, a computer screen where
I could read each offer as it was made and see a picture of who made
it. I accepted or rejected offers by pressing the proper buttons on a
special glove I wore. The offers are part of the Ultimatum game, well
known to psychologists and economists. One person receives
money and proposes to split it any way he wants with a responder. The
catch: If the responder rejects the split, neither person gets any money.
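The game's payoff rule fits in a few lines; this sketch (hypothetical helper names, not the researchers' code) captures why rejection is costly for both players:

```python
def ultimatum(pot, offer_to_responder, accept):
    """Ultimatum game payoffs: if the responder rejects the proposed
    split, both players get nothing; otherwise the split stands.
    Returns (proposer_payoff, responder_payoff)."""
    if not accept:
        return (0, 0)
    return (pot - offer_to_responder, offer_to_responder)

# Phil's $9/$1 proposal, rejected: neither of us got a dime.
print(ultimatum(10, 1, False))  # (0, 0)
# Had the offer been accepted, the responder would still have gained $1.
print(ultimatum(10, 1, True))   # (9, 1)
```

A purely rational responder would accept any positive offer, since the alternative is zero; the rejections the researchers observe are the point of the experiment.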
Economists know that, rationally, the responder should take any sum; it's
free money. Psychologists know, however, that grossly unfair splits are
often rejected. Sanfey and Rilling want to know what's going on in people's
heads when they receive an unfair offer. Are they indignant, and does
that show up in brain activity?
The two-year-old fMRI scanner, one of the first of its kind outside
a medical center, is able to measure brain activity as subjects
perform tasks, such as accepting or rejecting monetary offers. Specifically,
as a part of the brain becomes more active, there is an increase in blood
flow to that area. The machine, which was loudly scanning my brain every
three seconds, detects that blood-flow response, and computer software
later turns it into a color-coded map of the brain showing regions with
more activity during a particular task.
Rilling and Sanfey are finding that people who reject grossly unfair
offers seem to have more activity in a part of the brain called the anterior
insula than people who do not reject such offers. The anterior insula
is implicated in various negative emotional states and may be activated
when we feel indignant. Because the undergraduate volunteers who play
the game keep any money they earn, they must be truly annoyed to reject
unfair offers and forgo real money.
By Billy Goodman '80