Noam Chomsky. A Review of B. F. Skinner's Verbal Behavior.
Language, 35, No. 1 (1959), 26-58. "The
other fundamental notion borrowed from the description of bar-pressing experiments
is reinforcement. It raises problems which are similar, and even more serious.
In Behavior of Organisms, "the operation of reinforcement is defined
as the presentation of a certain kind of stimulus in a temporal relation with
either a stimulus or response. A reinforcing stimulus is defined as such by its
power to produce the resulting change [in strength]. There is no circularity about
this: some stimuli are found to produce the change, others not, and they are classified
as reinforcing and nonreinforcing accordingly" (62). This is a perfectly
appropriate definition for the study of schedules of reinforcement. It is perfectly
useless, however, in the discussion of real-life behavior, unless we can somehow
characterize the stimuli which are reinforcing (and the situations and conditions
under which they are reinforcing). Consider first of all the status of the basic
principle that Skinner calls the "law of conditioning" (law of effect).
It reads: "if the occurrence of an operant is followed by presence of a reinforcing
stimulus, the strength is increased" (Behavior of Organisms, 21).
As reinforcement was defined, this law becomes a tautology. For Skinner,
learning is just change in response strength. Although the statement that presence
of reinforcement is a sufficient condition for learning and maintenance of behavior
is vacuous, the claim that it is a necessary condition may have some content,
depending on how the class of reinforcers (and appropriate situations) is characterized.
Skinner does make it very clear that in his view reinforcement is a necessary
condition for language learning and for the continued availability of linguistic
responses in the adult. However, the looseness of the term reinforcement
as Skinner uses it in the book under review makes it entirely pointless to inquire
into the truth or falsity of this claim. Examining the instances of what Skinner
calls reinforcement, we find that not even the requirement that a reinforcer be
an identifiable stimulus is taken seriously. In fact, the term is used in such
a way that the assertion that reinforcement is necessary for learning and continued
availability of behavior is likewise empty. To show
this, we consider some examples of reinforcement. First of all, we find
a heavy appeal to automatic self-reinforcement. Thus, "a man talks to himself...
because of the reinforcement he receives" (163); "the child is reinforced
automatically when he duplicates the sounds of airplanes, streetcars ..."
(164); "the young child alone in the nursery may automatically reinforce
his own exploratory verbal behavior when he produces sounds which he has heard
in the speech of others" (58); "the speaker who is also an accomplished
listener 'knows when he has correctly echoed a response' and is reinforced thereby"
(68); thinking is "behaving which automatically affects the behaver and is
reinforcing because it does so" (438; cutting one's finger should thus be
reinforcing, and an example of thinking); "the verbal fantasy, whether overt
or covert, is automatically reinforcing to the speaker as listener. Just as the
musician plays or composes what he is reinforced by hearing, or as the artist
paints what reinforces him visually, so the speaker engaged in verbal fantasy
says what he is reinforced by hearing or writes what he is reinforced by reading"
(439); similarly, care in problem solving, and rationalization, are automatically
self-reinforcing (442-43). We can also reinforce someone by emitting verbal behavior
as such (since this rules out a class of aversive stimulations, 167), by not emitting
verbal behavior (keeping silent and paying attention, 199), or by acting appropriately
on some future occasion (152: "the strength of [the speaker's] behavior is
determined mainly by the behavior which the listener will exhibit with respect
to a given state of affairs"; this Skinner considers the general case of
"communication" or "letting the listener know"). In most such
cases, of course, the speaker is not present at the time when the reinforcement
takes place, as when "the artist...is reinforced by the effects his works
have upon... others" (224), or when the writer is reinforced by the fact
that his "verbal behavior may reach over centuries or to thousands of listeners
or readers at the same time. The writer may not be reinforced often or immediately,
but his net reinforcement may be great" (206; this accounts for the great
"strength" of his behavior). An individual may also find it reinforcing
to injure someone by criticism or by bringing bad news, or to publish an experimental
result which upsets the theory of a rival (154), to describe circumstances which
would be reinforcing if they were to occur (165), to avoid repetition (222), to
"hear" his own name though in fact it was not mentioned or to hear nonexistent
words in his child's babbling (259), to clarify or otherwise intensify the effect
of a stimulus which serves an important discriminative function (416), and so
on.
From this sample, it can be seen that
the notion of reinforcement has totally lost whatever objective meaning it may
ever have had. Running through these examples, we see that a person can be reinforced
though he emits no response at all, and that the reinforcing stimulus need
not impinge on the reinforced person or need not even exist (it is sufficient
that it be imagined or hoped for). When we read that a person plays what music
he likes (165), says what he likes (165), thinks what he likes (438-39), reads
what books he likes (163), etc., BECAUSE he finds it reinforcing to do so, or
that we write books or inform others of facts BECAUSE we are reinforced by what
we hope will be the ultimate behavior of reader or listener, we can only conclude
that the term reinforcement has a purely ritual function. The phrase "X
is reinforced by Y (stimulus, state of affairs, event, etc.)" is being used
as a cover term for "X wants Y," "X likes Y," "X wishes
that Y were the case," etc. Invoking the term reinforcement has no
explanatory force, and any idea that this paraphrase introduces any new clarity
or objectivity into the description of wishing, liking, etc., is a serious delusion.
The only effect is to obscure the important differences among the notions being
paraphrased. Once we recognize the latitude with which the term reinforcement
is being used, many rather startling comments lose their initial effect -- for
instance, that the behavior of the creative artist is "controlled entirely
by the contingencies of reinforcement" (150). What has been hoped for from
the psychologist is some indication how the casual and informal description of
everyday behavior in the popular vocabulary can be explained or clarified in terms
of the notions developed in careful experiment and observation, or perhaps replaced
in terms of a better scheme. A mere terminological revision, in which a term borrowed
from the laboratory is used with the full vagueness of the ordinary vocabulary,
is of no conceivable interest." [Full Text]
MacKenzie BD. Behaviorism and the Limits of Scientific Method. London: RKP, 1977.
Dennett DC. Brainstorms: Philosophical Essays in Mind and Psychology. Hassocks, Sussex: Harvester Press, 1978.
Popper KR. Objective Knowledge: An Evolutionary Approach. Oxford University Press, 1972.
Stich S. Is Behaviorism Vacuous? Behavioral and Brain Sciences, 1984; 7: 647-649.
Efron R. The conditioned reflex: a meaningless concept. Perspectives in Biology and Medicine, 1966; 9: 488-514.
N. McLaren. Chapter 3: BEHAVIORISM FROM THE PSYCHIATRIC PERSPECTIVE. THE FUTURE OF PSYCHIATRY: A CRITICAL ANALYSIS OF THE THEORETICAL BASIS OF PSYCHIATRY. "As
mentioned, Dennett is of the view that, at the beginning of his career, Skinner
made a profound and far-reaching mistake by equating mentalism with the supernatural.
The psychologist was determined to eradicate from his "science of general
psychology" all mentalist concepts, and therefore never looked seriously
at the age-old questions of whether a non-mentalist psychology is possible or,
in the alternative, whether mentalism is genuinely beyond analysis. Thus, there
was a great deal of circularity, even question-begging, in Skinner's psychology,
i.e. he frequently assumed the truth of that which required proof. For example,
it is typical of humans that we 'plan ahead,' which means just what everybody
thinks it means. But Skinner did not allow any mentalist concepts, and planning
ahead is entirely mentalist. Behavior is under the control of the environment,
he believed, but since future events have not yet happened, they cannot control
behavior. What appears to be a case of somebody planning ahead is, he insisted,
actually a matter of their past history of reinforcing contingencies. Given a
detailed account of everything that has happened to that person, we would be able
to say just what compels them to act in a particular way just now such that, lo
and behold, a few days or weeks down the track, they get whatever it is they said
they wanted in the first place. Unfortunately,
in discarding mentalism as non-science, Skinner adopted another bit of non-science.
As every psychologist knows, keeping track of the history of reinforcing contingencies
of even a laboratory animal is difficult; working out what has happened to a human
years before, when no records were kept, is just impossible, and all talk of doing
so is fanciful. All talk of a "proper behavioral analysis" is just deus
ex machina. Skinner was led to this error by his major assumptions:
1. Mentalism is necessarily supernatural;
2. Therefore, behavior must be under environmental control;
3. But future events cannot control behavior because they have not yet happened;
4. Therefore, the controlling element must lie in the past history of environmental contingencies." ... "In
the title to his paper, Efron asserted that the concept of the conditioned reflex
was meaningless. He pointed out that, within psychology, different authors use
the term 'conditioned reflex' in a variety of totally different ways. He cited
the dispute between two authors, one of whom argued that worms can be 'conditioned'
while the other insisted that the first did not know the difference between 'true
conditioning' and 'pseudo-conditioning.' Efron made a number of points:
a) that these types of disputes were due to 'epistemological chaos' rather than
to disagreements over genuine scientific facts; b) the chaos derives from
the assumption that all human behavior can be explained by reductionist materialism,
i.e. that all "concepts of consciousness, volition and the causal efficacy
of mental processes" can and should be excluded from the field of science;
c) that in attempting to eliminate all mention of conscious mental processes,
reductionist biologists (essentially psychologists) have degraded the narrow concept
of the reflex by progressively broadening it, to the extent that it has long since
become entirely meaningless; d) all attempts to salvage a meaning for conditioning,
such as operationalism, are doomed to failure because they necessarily enter an
infinite regress. Efron showed that, 150 years ago,
the term 'reflex' had a very restricted meaning, essentially that of the automatic
response of an intact, functioning animal to an external stimulus: "The definition
of 'reflex' action contains, therefore, by implication, reference to a class or
classes of action which are non-reflexive....Behavior which is automatic, innate,
involuntary, and independent of consciousness needs to be isolated conceptually
(i.e. defined) only because other behaviour exists which is voluntary, learned
and dependent on conscious activity...To attempt to use the concept 'reflex' while
at the same time denying the validity of the concepts of 'consciousness' and of
'volition' is not logically permissible" (p491). But this is exactly what
reductionist psychology and biology intended to do: deny consciousness. Not explain
it, nor show it was necessarily irrelevant or artefactual, but to deny the mentalism
of their own minds. The term 'reflex' was seized
by late nineteenth century physiologists as part of a broad drive against the
notions exemplified by Bergson's 'elan vital.' Researchers wished to dispense
with the 'mysticism' inherent in such concepts as consciousness, intention, mentality,
etc. They therefore declared these notions to be non-scientific and wrote a new
'science' which did not depend on them. But in so doing, they simply replaced
one form of mysticism by another, one all the more pernicious by being implicit.
The neurophysiologist Karl Lashley explicated the "reductionist's credo":
"Our common meeting ground is the faith to which we all subscribe...I believe,
that the phenomena of behavior and mind are ultimately describable in the concepts
of the mathematical and physical sciences" (quoted in Efron, p500). This
particular form of mysticism is known as promissory materialism, which has been
around a long time now without delivering on any of its major promises (which
is, of course, just why it is called 'promissory'). In
order to eradicate mentalism, psychologists had to broaden the concept of the
'unthinking reflex,' eventually to the point where it was used to explain thought
itself. Their campaign had to be managed this way. It was not possible to eliminate
consciousness by reducing it to matters of brain chemistry, so they had to pursue
the alternative approach, which was to squeeze consciousness out of existence
by expanding the unconscious, automatic basis of behavior until it included everything
that the concept of consciousness had previously encompassed. In their mechanistic
world, the basic element of behavior was the reflex, but in order for it to subsume
all that minds once did, it had to be redefined "...in such a fashion that
it no longer rested upon the concept(s) of consciousness and volition" (p501).
"In sum," Efron continued, "the mechanistic biologist (read: psychologist)
retained the word 'reflex' because it enabled him to make implicit use of the
old concept of the reflex (i.e. involuntary behavior independent of consciousness)
without admitting that his 'new science of behavior' still logically rested upon
the concepts of consciousness and volition. This epistemological procedure is
known, in some scientific circles, as 'having your cake and eating it too'"
(p501).
In Efron's view, and supported by lengthy quotes, Pavlov was one
of those responsible for expanding the definition of 'reflex' to the point where
it became facile. Using it, Pavlov could explain "every activity of man and
beast" which, unfortunately, led directly to an infinite regress and even
to self-contradiction: "By virtue of (Pavlov's) definition, it is a reflex
if a hungry dog salivates in response to a bell which has in the past signaled
the appearance of food; it is a reflex if I purchase a painting today which I
saw and enjoyed last year; and it is a reflex if a man tries to escape from his
tormentors in a concentration camp" (p506). Of course, the artist and the
torturers would also be acting reflexly. Efron's
case against the conditioned reflex is unimpeachable, yet it has had remarkably
little effect on behaviorist psychology. But it can be extended further. If we
look at the term 'conditioned reflex' as it stands, it has no meaning. Reflex
we understand (or think we do); but conditioned? What does this mean? We think
it has a meaning, but only because we have heard the term so often that we accept
it as real, rather in the way American psychiatrists of the fifties and sixties
thought they knew what they meant by the term 'schizophrenia.' The original meaning
of Pavlov's term emerges through his writings, but certainly not with any great
clarity. It would appear that the salivating response his dogs showed to food
was originally termed 'unconditional,' meaning one which appeared without further
conditions. The food was a 'stimulus to an unconditional response' (shortened
to 'unconditional stimulus') in that, without any conditions attached, it worked
every time (strictly speaking, this isn't true, as Efron has argued). So an 'unconditional
stimulus' leads to an 'unconditional reflex response.' The bell, however, has
to be associated with the food before it can elicit any sort of response; its
efficacy as a stimulus is 'conditional' upon its contiguity with the (unconditional)
food stimulus. This is associationism; there is no 'process of conditioning' to
be found, but the translators always used the term "conditioned reflex,"
implying something quite different from 'conditional reflex'. 'Conditional' is
an adjective, and its meaning is quite clear, but using the word 'conditioned,'
which has the form of a past participle, implies there is a verb "to condition."
Today, there is such a verb in English, but its meaning is quite the same as "to
associate." Skinner himself admitted this as long ago as 1931: "If we
remain at the level of our observations, we must recognise a reflex as a correlation"
(quoted in Efron, p498)."
Keller Breland and Marian Breland. THE MISBEHAVIOR OF ORGANISMS. American
Psychologist, 16, 681-684, 1961. "And if, as Hebb suggests, it is advisable
to reconsider those things that behaviorism explicitly threw out, perhaps it might
likewise be advisable to examine what they tacitly brought in - the hidden assumptions
which led most disastrously to these breakdowns in the theory. Three
of the most important of these tacit assumptions seem to us to be: that the animal
comes to the laboratory as a virtual tabula rasa, that species differences are
insignificant, and that all responses are about equally conditionable to all stimuli."
[Full Text]
Gary Cziko
Without Miracles: Universal Selection Theory and the Second Darwinian Revolution
"Considering now the operant conditioning of Thorndike and Skinner, we have seen how stimulus-response conceptualizations of learning cannot account for the purposeful nature of behavior as noted by William James in 1890--the ability to achieve fixed ends by varied means. Construing learning as the acquisition of fixed patterns of behavior cannot explain how organisms can be successful in achieving important goals, such as finding food, mates, and shelter, in the face of unpredictable disturbances.
Another problem with operant conditioning theory is that it provides no explanation for why certain events reinforce the organism's behavior and others do not.[19] Why is it that a hungry but well-watered rat will work a lever to obtain food but not water, while a thirsty but well-fed one will do the opposite? Perceptual control theory answers this question by seeing the reward as a controlled variable, that is, a variable that is controlled by the organism by varying its behavior. If a hungry rat pushes a lever to obtain food, it is only to bring his perceived rate of food intake close to its reference level, which has been chosen through natural selection during the evolution of the rat as a species.
Finally, perceptual control theory explains an intriguing pattern of behavior observed by Skinner that contradicts the basic notion that reinforcement increases the probability of the response preceding the reinforcing stimulus. Skinner found that he could obtain very high rates of operant conditioned behavior (such as a hungry pigeon pecking at a key to obtain food) by gradually decreasing the rate of reinforcement. Very high rates of behavior could be shaped by starting out with an easy reinforcement schedule that provided a speck of food for each key peck, and gradually moving toward more and more demanding schedules requiring more and more pecks (2, 5, 10, 50, 100) for each reward. Skinner was thereby "able to get the animals to peck thousands of times for each food pellet, over long enough periods to wear their beaks down to stubs. They would do this even though they were getting only a small fraction of the reinforcements initially obtained."[20]
But if, according to Thorndike's law of effect and Skinner's theory of operant conditioning, more reinforcement is supposed to cause more of the type of behavior that resulted in the reinforcement,[21] how could it also be that less reinforcement could also cause more of the behavior? This problem is effectively solved when we see reinforcement not as an environmental event that increases the probability of the specific behavior that preceded it, but rather as a means by which the organism can achieve a goal by controlling a perception. If the circumstances are arranged so that the hungry rat must perform more bar presses to be fed, and it has no other way to obtain food, the rat will adapt by increasing its rate of pressing to obtain its desired amount of food. And if the rate of reinforcement is increased to the point at which the rat can maintain its normal body weight, a control system model of behavior would predict that further increases in reinforcement should lead to decreases in the rate of behavior. Indeed, this is exactly what happens.[22]
It should now be obvious that a perceptual control theory interpretation of adapted behavior is radically different from a behaviorist view of operant conditioning. Whereas behaviorism sees the environment in control of the behavior of the organism, perceptual control theory sees the organism in control of its environment by means of varying its behavior. In other words, to behaviorists, behavior is controlled by the environment; to perceptual control theorists, behavior controls the environment. This is not to say that the environment has no influence on behavior. Rather, behavior can be adapted only if it is part of a larger control process that varies behavior to produce the perceptions specified by internal reference signals leading to the accomplishment of goals important for survival and reproduction." [Book Link]
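The control-loop account in the passage above can be made concrete with a short simulation. What follows is only a toy sketch of a negative-feedback loop in the spirit of perceptual control theory, not Cziko's (or Powers's) actual model: a simulated animal keeps adjusting its press rate so that its perceived food intake stays near an internal reference, and the reference value, the gain, and the fixed-ratio schedules are illustrative assumptions.

    # Toy perceptual-control loop (illustrative assumptions throughout):
    # the organism varies its behavior (press rate) to keep a perceived
    # variable (food intake) near an internal reference value.

    def simulate(ratio, reference=10.0, gain=0.5, steps=5000):
        """Approximate steady state under a fixed-ratio schedule.

        ratio     -- presses required per food pellet (the environment's feedback function)
        reference -- desired intake rate, assumed to be set by natural selection
        gain      -- how strongly the perceived error drives changes in behavior
        """
        press_rate = 0.0
        intake_rate = 0.0
        for _ in range(steps):
            intake_rate = press_rate / ratio                  # perception: food obtained per unit time
            error = reference - intake_rate                   # comparator: reference minus perception
            press_rate = max(press_rate + gain * error, 0.0)  # output: behavior varies to reduce error
        return press_rate, intake_rate

    if __name__ == "__main__":
        for ratio in (1, 2, 5, 10, 50, 100):
            presses, intake = simulate(ratio)
            print("FR-%-3d  press rate ~ %8.1f   perceived intake ~ %5.2f" % (ratio, presses, intake))

In this toy loop the press rate settles near reference times ratio while perceived intake stays near the reference, so leaner schedules produce more behavior and richer schedules produce less, which is the pattern the passage attributes to control of perception rather than to reinforcement strengthening a response.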
Curtis Brown
Behaviorism: Skinner
and Dennett "So we can read Skinner as making the important
point that when we invoke theoretical entities or phenomena we need to do so in
such a way that the theory making use of them makes predictions about observable
phenomena which can be falsified.
But Skinner seems
to take himself to have shown something much stronger than this, namely that a
scientific theory should not make use of inferred entities or phenomena at all.
And this seems much too strong a claim. If we restricted physics, or even archeology
or paleontology, to making use only of things that can be directly observed, we
would deprive ourselves of most of their most interesting results--and also of
a good deal of their predictive power. It often happens that the best theory which
accounts for observed phenomena and makes predictions about unobserved but observable
phenomena makes use of a good deal of theoretical apparatus for which our only
evidence is inferential. An analogy may be helpful in seeing this point. Imagine
typing things into the keyboard of a computer, observing the computer's responses,
and trying to formulate hypotheses about how the machine will respond to various
future stimuli. Conceivably we could do this without appealing to any hypotheses
about how the machine is programmed, so that our theory simply took the form of
correlations between inputs and outputs. But it seems quite clear that it will
be far more useful to hypothesize about the machine's (internal, not directly
observable) program, using hypotheses about the program together with information
about inputs to formulate predictions about the machine's output. Now we may not
be quite like computers, but presumably the principles which govern our behavior
are at least as complex as those that govern a computer, so we may reasonably
expect that formulating hypotheses about our own internal states and processes
will turn out to be the most effective way of explaining and predicting our behavior.
At the very least, it seems clear that it would be a mistake to rule out a priori
any theory which made use of such hypotheses." [Full Text]
George Graham. Behaviorism. Stanford
Encyclopedia of Philosophy "One defining feature of traditional behaviorism
is that it tried to free psychology from having to theorize about how animals
and persons represent their environment. This was important, historically, because
it seemed that behavior/environment connections are a lot clearer and more manageable
experimentally than internal representations. Unfortunately, for behaviorism,
it's hard to imagine a more restrictive rule for psychology than a rule which
prohibits hypotheses about representational storage and processing. Stich, for
example, complains against Skinner that "we now have an enormous collection
of experimental data which, it would seem, simply cannot be made sense of unless
we postulate something like" information processing mechanisms in the heads
of organisms (1998, p. 649)."
Jan
De Houwer, Stefaan Vandorpe and Tom Beckers
On the role of controlled
cognitive processes in human associative learning
(To appear
in: A. Wills (Ed.). New directions in human associative learning. Mahwah, NJ:
Lawrence Erlbaum)
"Why have associative models fared so well?
In hindsight,
it seems obvious that people can learn about associations by using controlled
processes such as reasoning and hypothesis testing. Why, then, are associative
models still dominant in modern research? One reason is that the associationistic
view has a long tradition in psychology (and philosophy). It is thus difficult
for many people to leave behind the associationistic view that has guided their
thinking and research for many years. Another important reason is that associative
models do quite well in accounting for the available empirical data. The well
known Rescorla-Wagner model (Rescorla & Wagner, 1972), for instance, is compatible
with a huge number of findings while being relatively simple. If our argument
is correct that associative models do not provide an accurate account of the processes
that underlie associative learning, how is it possible that they are able to account
for so much of the data? We agree with Lovibond (2003, p. 105) that "the
success of these models is due to them capturing, at least in part, the operating
characteristics of the inferential learning system". What this means is that
associative models (as well as probabilistic models for that matter) can be seen
as (mathematical) formalisations of certain deductive reasoning processes. A system
that operates on the basis of associative models does not reason, but acts very
much as if it is reasoning. The associative models will thus often predict the
same result as a model that is based on the assumption that humans actually generate
and test hypotheses or reason in a controlled, conscious manner. The two types
of models can be differentiated, however, by manipulating variables that influence
the likelihood that people will reason in a certain manner but that should have
no impact on the operation of the associative model. We have seen that such variables
(e.g., instructions, secondary tasks, ceiling effects, nature of the cues and
outcomes) do indeed have a huge effect. Given these results, it is justified to
entertain the belief that participants are using controlled processes such as
reasoning and to look for new ways to model and understand these processes."
[.DOC file]
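Since the Rescorla-Wagner model is cited above as a simple associative model that nonetheless fits a great deal of data, a worked example may help. The sketch below applies the textbook form of its error-correction rule, dV = alpha * beta * (lambda - total V of the cues present), to a standard blocking design; the parameter values and trial counts are assumptions chosen only for illustration, and this is not De Houwer et al.'s own formalisation.

    # Textbook Rescorla-Wagner updating (Rescorla & Wagner, 1972), applied to
    # a blocking design. Parameters and trial structure are illustrative.

    def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam_present=1.0, lam_absent=0.0):
        """Update associative strengths V over a list of (cues_present, outcome_present) trials."""
        V = {}
        for cues, outcome in trials:
            lam = lam_present if outcome else lam_absent
            total = sum(V.get(c, 0.0) for c in cues)   # summed prediction from all cues present
            error = lam - total                        # prediction error
            for c in cues:
                V[c] = V.get(c, 0.0) + alpha * beta * error
        return V

    if __name__ == "__main__":
        # Phase 1: cue A alone is paired with the outcome.
        # Phase 2: the compound AB is paired with the same outcome.
        trials = [({"A"}, True)] * 20 + [({"A", "B"}, True)] * 20
        V = rescorla_wagner(trials)
        print({cue: round(v, 3) for cue, v in sorted(V.items())})
        # A ends near 1.0; B is "blocked" and stays near 0, because A already
        # predicts the outcome and the error term is close to zero in phase 2.

Because the added cue is redundant, this single error-correction equation reproduces blocking with no appeal to reasoning, which is the sense in which such models can be "relatively simple" yet "compatible with a huge number of findings"; on the inferential account defended in the chapter, the same result would instead reflect the participant's judgment that the blocked cue carries no additional information.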
Andreas
K. Engel, Pascal Fries & Wolf Singer
DYNAMIC PREDICTIONS: OSCILLATIONS
AND SYNCHRONY IN TOP-DOWN PROCESSING
Nature Reviews
Neuroscience 2, 704-716 (2001); doi:10.1038/35094565
"Classical theories
of sensory processing view the brain as a passive, stimulus-driven device. By
contrast, more recent approaches emphasize the constructive nature of perception,
viewing it as an active and highly selective process. Indeed, there is ample evidence
that the processing of stimuli is controlled by top-down influences that
strongly shape the intrinsic dynamics of thalamocortical networks and constantly
create predictions about forthcoming sensory events. We discuss recent experiments
indicating that such predictions might be embodied in the temporal structure of
both stimulus-evoked and ongoing activity, and that synchronous oscillations are
particularly important in this process. Coherence among subthreshold membrane
potential fluctuations could be exploited to express selective functional relationships
during states of expectancy or attention, and these dynamic patterns could allow
the grouping and selection of distributed neuronal responses for further processing."
[Abstract/Summary]
[PDF]
Aarts
H, Dijksterhuis A, De Vries P. On the psychology of drinking: being
thirsty and perceptually ready. Br J Psychol 2001 Nov;92(Pt
4):631-42 "The present research is concerned with cognitive effects of
habitually regulated primary motives. Specifically, two experiments tested the
idea that feelings of thirst enhance the cognitive accessibility of, or readiness
to perceive, action-relevant stimuli. In a task allegedly designed to assess mouth-detection
skills, some participants were made to feel thirsty, whereas others were not.
Results showed that participants who were made thirsty responded faster to drinking-related
items in a lexical decision task, and performed better on an incidental recall
task of drinking-related items, relative to no-thirst control participants. These
results suggest that basic needs and motives, such as thirst, cause a heightened
perceptual readiness to environmental cues that are instrumental in satisfying
these needs." [Abstract]
Francisco Varela, Jean-Philippe Lachaux, Eugenio Rodriguez
& Jacques Martinerie THE BRAINWEB: PHASE SYNCHRONIZATION AND
LARGE-SCALE INTEGRATION Nature Reviews Neuroscience 2,
229-239 (2001); doi:10.1038/35067550 "The emergence of a unified cognitive
moment relies on the coordination of scattered mosaics of functionally specialized
brain regions. Here we review the mechanisms of large-scale integration that counterbalance
the distributed anatomical and functional organization of brain activity to enable
the emergence of coherent behaviour and cognition. Although the mechanisms involved
in large-scale integration are still largely unknown, we argue that the most plausible
candidate is the formation of dynamic links mediated by synchrony over multiple
frequency bands." [Abstract/Summary]
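Both of the preceding abstracts turn on the idea of phase synchronization between neuronal signals. As a purely illustrative aside (this is not the authors' analysis pipeline), the phase-locking value is one standard way to quantify it; the sketch below computes it for synthetic signals, and the frequency, noise level, and phase drift used are assumptions made only for the demonstration.

    # Phase-locking value (PLV) between two signals, from their instantaneous
    # phases via the Hilbert transform. Synthetic toy data only.

    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(x, y):
        """PLV of two 1-D signals: 1 = perfectly phase-locked, near 0 = unrelated phases."""
        dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
        return float(np.abs(np.mean(np.exp(1j * dphi))))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(0.0, 2.0, 1e-3)                 # 2 s sampled at 1 kHz
        carrier = 2 * np.pi * 40 * t                  # a 40 Hz ("gamma-band") oscillation
        a = np.sin(carrier) + 0.1 * rng.standard_normal(t.size)
        b = np.sin(carrier + 0.8) + 0.1 * rng.standard_normal(t.size)       # fixed phase lag
        c = np.sin(carrier + np.cumsum(0.1 * rng.standard_normal(t.size)))  # drifting phase
        print("locked pair  :", round(phase_locking_value(a, b), 2))   # high: constant lag
        print("drifting pair:", round(phase_locking_value(a, c), 2))   # lower: phases drift apart

A constant phase lag between two oscillations yields a value near 1, while a drifting phase relation pulls the value down, which is roughly the sense in which "dynamic links mediated by synchrony" can be read out of recorded activity.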