Memories are Made of This
A low-tech slug and high-tech biology are helping Yale neuroscientists probe the ways in which we recall everything from trivia to trauma.
December 1992
by Bruce Fellman
Near the end of the
last century, the psychologist William James addressed the bedeviling issue of
memory. “The stream of thought flows on; but most of its segments fall into the
bottomless abyss of oblivion,” he wrote in his eloquent book, Principles of
Psychology. “Of some, no memory survives the instant of their passage. Of
others, it is confined to a few moments, hours, or days. Others, again, leave
vestiges which are indestructible, and by means of which they may be recalled
as long as life endures.”
Precisely how we
remember anything was an utter mystery to James, but more than a hundred years
later, neuroscientists at Yale are uncovering the short- and long-term cellular
and molecular processes that allow us to learn, retain, and recall information.
This research may in time explain how the human brain works, as well as what
goes wrong when it fails, but for now, many of the investigations concentrate
on a slightly less lofty goal, using a most humble subject.
On a brightly lit dish
in a darkened room in Dunham Laboratory rests a sea slug, a shell-less marine
snail native to the rocky shores of California. Tom Fischer, a postdoctoral
researcher in psychology, watches the anesthetized creature while he gently
brushes the animal’s hind end.
In nature, any touch
might signal the attack of some predator. The slug, known to scientists as Aplysia, would quickly retract
its tail and, like a squeezed accordion, attempt to make its supple,
six-inch-long body as small as possible. It reacts the same way in the lab, and
as Fischer triggers the reflex, he peers through a microscope at a group of
orange nerve cells that control the disappearing act. Deftly, the researcher
skewers one of these neurons with a glass-and-silver electrode that enables him
to eavesdrop on the electric chatter of the cell while it talks to its
comrades.
As Fischer watches an
oscilloscope that monitors cellular activity, the neuron generates electrical
outbursts each time the tail is touched. But after a few brush strokes, there’s
a change. The cell “fires” with much less enthusiasm. It behaves like someone
who has heard the word “Boo!” too often.
“Look,” Fischer notes, “we have learning in a dish.”
Tracking down the
mechanisms underlying this change occupies a growing number of Yale
researchers, and although Aplysia takes center stage in much of this research,
University scientists are after a far larger quarry than the “mind” of a snail.
“Collectively, we’re
interested in the game plan for memory and learning in the entire biological
domain,” says Thomas J. Carew, who heads Fischer’s research team and serves as
the John M. Musser Professor of Psychology as well as chairman of the psychology
department. “We go from simple invertebrate animals, to rats, to rabbits, to
monkeys, to humans.”
Carew and his colleagues
are tackling problems as diverse as chronicling the molecules and genes that
underlie memory, and building computer networks of artificial neurons that can
actually learn. While a number of Yale researchers tease apart events at the
cellular level, other investigators take the opposite approach, working out how
nerve cells link up to accomplish tasks like adding numbers, finding direction,
or connecting a face with a name. And although scientists are quick to point
out that most of their efforts do not yet have clinical applications, they are
not about to deny the tantalizing possibilities that may eventually offer help
to people who suffer from anxiety disorders, brain damage and diseases,
learning disabilities, even simple absentmindedness.
Carew readily admits
that an “intellectually impoverished” creature like Aplysia seems an odd stand-in
for the human brain, but such studies are useful, he explains, because “nature
is profoundly conservative—she just doesn’t throw anything away. The mechanism
of communication between nerve cells is the same from sea slugs to Einstein.
The degree of complexity certainly changes, but the actual operating rules
don’t.”
Putting together those
rules has taken Carew and many of his colleagues on what has been called a “reductionistic
odyssey.” Their strategy is to train an animal to learn something and remember
it—withdraw a tail, navigate a maze, associate a benign sound with an annoying
puff of air, and so on—then go into the brain and, circuit by circuit, cell by
cell, molecule by molecule, determine what happened as a result of the
experience.
“Memory describes a
change in behavior,” says Carew. “You don’t see memory; you see performance.
The only way I know you remember me is when you say, ‘Hi Tom.’ I never know what’s
in your head.”
The triumph of recent
research has been to figure out some of the changes that take place inside and
among the cells that enable memory and learning to occur. “It’s staggeringly
complicated—you can stare at the process and wonder, or you can break it into
pieces,” says Carew, who has done both.
So has Leonard K.
Kaczmarek, professor of cellular and molecular physiology, and chairman of the
pharmacology department at the School of Medicine. He and his coworkers focus
on the Aplysia nerve cells in charge of reproductive behavior to chronicle how
experience alters neurons.
Kaczmarek explains that
the mating process starts when certain of the animal’s nerve cells start firing
for about half an hour. (The slugs are hermaphrodites; each of a mating pair
fertilizes the other.) Their electrical activity then subsides abruptly. The
snails stop feeding, move to the side of a rock (or the aquarium), sway their
heads from side to side, and lay their eggs. The entire process takes about six
hours, and when it’s over, the animals have to wait at least a day until their
nerves are well rested before they can mate again.
But the next time
around, the controlling cells are different. They get bigger and better at
their job, and, most important, they remember.
“When you lay down a
memory trace, something physical changes,” says Kaczmarek, as he shows a
micrograph of a swollen “after” cell.
There are other lasting
alterations in cellular structure as well. The researcher explains that neurons
are surrounded by a protective membrane, which, it turns out, is perforated
with numerous breaks called channels. As the ends of the stimulated Aplysia neurons grow, the
number of channels also increases. These openings enable electrically charged
molecules to enter the cell, and with more breaches in the wall, there is an
increase in molecular traffic. This leads to another major change.
The affected neuron
boosts its production of substances known as neurotransmitters. Their job is to
convert an electrical signal into a chemical one, a critical process that
enables the message to traverse the tiny gaps, or synapses, that separate the
nerve cells and all their many branches (the adult human brain has about 10
billion neurons, and these are connected to each other via a network of roughly
100 trillion synapses).
Thomas H. Brown studies
the short- and long-term implications of these alterations. A physiologist,
psychologist, and the director of the University’s Center for Theoretical and
Applied Neuroscience, Brown investigates memory in the brains of rats as well
as in artificial “brains” crafted from computer-generated neurons. “You can
think of the brain as a connection machine—it’s the most massively parallel
supercomputer in the known universe,” Brown explains. “The connections—the
synapses—change their strengths as a function of the history of their use.
Memories are stored through changes in the strengths of synaptic connections.”
This idea originated in
1949 with Donald Hebb, a Canadian neuropsychologist. “Hebb suggested that when
a presynaptic cell—the one sending information—and a postsynaptic cell—the one
receiving it—are co-active, the synaptic connection between them strengthens,”
Brown explains.
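In computational terms, Hebb’s rule amounts to a simple update: nudge the strength of a connection upward in proportion to the joint activity of the cells on either side of it. The short Python sketch below is a minimal illustration of that idea, not Brown’s actual model; the learning rate and the activity values are assumed for the example.

```python
# A minimal sketch of Hebbian learning (illustrative values, not from
# the research described here): a synaptic weight grows only when the
# presynaptic and postsynaptic units are active at the same time.

learning_rate = 0.1   # assumed step size for strengthening
weight = 0.0          # initial synaptic strength

# Paired activity over four trials: 1.0 = the cell fires, 0.0 = silent.
trials = [(1.0, 1.0), (1.0, 1.0), (1.0, 0.0), (1.0, 1.0)]

for pre, post in trials:
    # Hebb's rule: the change is proportional to pre * post, so
    # one-sided activity (trial 3) leaves the connection unchanged.
    weight += learning_rate * pre * post
    print(f"pre={pre}, post={post} -> weight={weight:.2f}")
```

Run over these trials, the weight climbs only on the co-active pairings, a toy version of a synapse that changes its strength as a function of the history of its use.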
In 1986, 37 years after
they were first proposed, Brown found these Hebbian synapses in a part of the
brain called the hippocampus, an area that plays a critical role in the kinds
of memories that help animals find their way in the world (damage to the
hippocampus produces an amnesia in which sufferers can’t remember where they’ve
been, and hence, can no longer navigate in new terrain).
Hebb’s vindication
provided researchers with a process by which an event could be engraved in
synaptic granite, but, as William James pointed out, there are different kinds
of memories. Each has its own mechanism, and each has its own time course.
The connection between
the short term and the long term seems obvious, but in the last year, Tom Carew
made a discovery that “surprised the heck out of us.” Working with Aplysia, the scientist had
learned how to produce the short-term and long-term synaptic changes that
underlie each type of memory, and like everyone else, he figured that the
former gave rise to the latter.
Carew found a way to
block the short-term process. If the brain, as many see it, works like a
desktop computer, then the blockage would be like turning off the power in the
middle of an unsaved story or spreadsheet. The words and figures exist only in
a kind of electronic limbo; they have no permanence until they are stored on a
hard or floppy disk.
Without a short-term
process in operation, there should be nothing to transfer to the long term,
Carew reasoned. But 24 hours later, the long-term mechanism was completely
unchanged. “For several hours, you’d look at the synapse and say that nothing
was happening,” he recalls. “Well, the long-term process was turned on, but you
just didn’t see it. It looks like the two are in parallel rather than in
series. Short-term memory may not be necessary to get long-term.”
Which is not to say
that short-term memory is unimportant. In Patricia S. Goldman-Rakic’s
laboratory in the Sterling Hall of Medicine, a rhesus monkey stares at a
television screen. An image appears on a section of the monitor for a half
second, flicks off, and after a wait of five seconds, the trained animal moves
its eyes to look at where the image had been. During the waiting period, groups
of nerve cells in a region of the brain called the prefrontal cortex start to
fire as the monkey performs a kind of mental arithmetic. “The job of this area
is to access information that is stored elsewhere,” says Goldman-Rakic.
Researchers like Tom
Brown believe that long-term memories exist as a dynamic network of changed
synapses that are distributed throughout a variety of locations in the brain.
The process Goldman-Rakic calls working memory “opens the trunk and brings out
the right information at the right time so we can make the right response.”
The scientist calls the
prefrontal cortex the mind’s “scratch pad,” a mental slate where sights, sounds,
words, facts, figures, and the like can be held briefly, worked on, and then
promptly erased as soon as the mind’s business is done.
Goldman-Rakic explains
that in the brain’s geography, there are places that hold specific information
about such things as color, shape, size, sound, and so forth. “For every area
that represents a feature of the outside world, there must be a prefrontal area
that reads out that information,” she notes. “It’s our highest-level operating
system, the one that makes us human.”
So when working memory
fails, as it does for a variety of reasons, the consequences can range from
annoying to catastrophic.
You get up from the
dinner table, walk over to the refrigerator, open the door, and although not
more than ten seconds may have passed, you find yourself staring blankly
inside, without a clue as to what you wanted. “The prefrontal cells are not
holding information on line as effectively as they once did,” explains
Goldman-Rakic.
Strokes; devastating
illnesses like schizophrenia, Alzheimer’s disease, AIDS, and cancer; accidents;
and other forms of brain damage can all wreak havoc on working memory, either
by attacking long-term storage or by somehow preventing the prefrontal cortex from
getting at stored information. But it is also possible to remember too much.
Normally, after
discarding what is deemed unimportant—dreams may play a key role in separating
the mental wheat from the chaff—the brain stores what’s worth saving in a fairly
orderly fashion. Memories serve the mind, but if servant becomes master, the
consequences can be tragic.
Michael Davis, a
professor of psychiatry and psychology, recalls the case of a Vietnam veteran
who had post-traumatic stress syndrome, an inability to forget the horrors of
war. “The vet was doing quite well, and during the summer, he got married,”
Davis relates. “The wedding party was briefly outdoors when a car suddenly
backfired. Instantly, the panic-stricken former soldier dove for cover in the mud.
Everything about the situation should have told him not to be afraid. But his
fear was so strong and reflexive that it overcame every inhibitor.”
The ability to be
afraid is one of the first skills to develop in an infant, and it is something
we remember—touch a hot stove as a child, and you’ll never forget the
lesson—for a lifetime. “Fear conditioning is a fundamental form of learning.
It’s very easy to establish and very hard to get rid of,” notes Davis.
The neural site of
human fear is an almond-sized section of the brain called the amygdala. “If you
stimulate it during brain surgery, people sometimes report feeling anxious, as
if someone is standing behind them,” says the scientist.
Animals capable of
being afraid also possess an amygdala, and in the rat, Davis has laboriously
pieced together fear’s circuitry as it works through what’s called the
fear-potentiated startle effect. “If you’re scared, you tend to startle more,”
he notes. “Say you’re walking down a dark alley, and you’re a little apprehensive.
If a cat were to knock the top off a metal ashcan, you’d startle more there
than you would someplace else.”
The researcher
conditioned rats to be afraid of both sound and light by pairing the two
stimuli with an electric shock. With the fear response firmly established,
Davis used electrodes and chemical tracers to probe the cells he thought might
be involved in the pathway. All roads led to the amygdala.
But there are also
roads that inhibit its activity, and in patients with anxiety disorders like
post-traumatic stress syndrome, those are roads not taken.
If a shock stops
accompanying the light flash or the sound, the rat eventually stops being
afraid. If the animal is given a drug that blocks a critical part of the fear
circuit’s chemistry, it never learns that light or sound predicts a shock—it
never learns fear.
For that Vietnam vet,
however, or for victims of sexual abuse, violent crime, or countless other
nightmares, the amygdala becomes a direct line into the darkness where terrible
memories dwell. In a part of the soldier’s brain that stubbornly defies reason and
logic, a backfiring car becomes a rifle shot or an explosion from thousands of miles
away and more than twenty years in the past, when a link was forged between a sound
and survival.
Davis hopes that,
eventually, understanding the biology of fear may lead to more effective
therapies, including drugs that can blunt these runaway anxieties. Yale
neuroscientists explain that a better knowledge of the cerebral architecture
and mechanisms involved in memory and learning might help overcome, and
possibly prevent, the devastation that accompanies brain disease and damage.
Such work could conceivably make us smarter, and perhaps researchers will
discover a way around the chronic forgetfulness that, alas, increases with age.
In the meantime,
probing the mystery of the mind, particularly our own, fills investigators with
appreciation, awe, and humility. “We’ve tended to think of the brain as a messy
computer,” says Tom Brown, “but the more we learn, the more it looks like
nature came up with an optimal design. One hundred trillion synapses in a
three-pound organ that operates with little power—it’s amazing in terms of
performance. Nature wasn’t just stumbling along.”