That the brain is a digital computer and the mind the software running on it are theses that seem to many to be confirmed by our best science, or at least by our best science fiction. But we recently looked at some arguments from Karl Popper, John Searle, and others that expose serious (indeed, I would say fatal) difficulties with the computer model of the mind. Saul Kripke presents another such argument. It is not well known: it was hinted at in a footnote in his famous book Wittgenstein on Rules and Private Language (WRPL) and developed in some unpublished lectures. But Jeff Buechner’s recent article “Not Even Computing Machines Can Follow Rules: Kripke’s Critique of Functionalism” offers a very useful exposition of Kripke’s argument. (You can find Buechner’s article in Alan Berger’s anthology Saul Kripke.)
Though it is, I think, not essential to Kripke’s argument, the “quus” paradox developed in WRPL provides a helpful way of stating it (and, naturally, is made use of by Kripke himself in stating it in WRPL). So let’s briefly take a look at that. Imagine you have never computed any numbers as high as 57, but are asked to compute “68 + 57.” Naturally, you answer “125,” confident that this is the arithmetically correct answer, but confident also that it accords with the way you have always used “plus” in the past, i.e. to denote the addition function, which, when applied to the numbers you call “68” and “57,” yields 125. But now, Kripke says, suppose an odd skeptic asks how you can be so sure that this is really what you meant in the past, and thus how you can be certain that “125” is really the correct answer. Maybe, he suggests, the function you really meant in the past by “plus” and “+” was not addition, but rather what Kripke calls the “quus” function, which he defines as follows:
x quus y = x + y, if x, y < 57;
= 5 otherwise.
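To make the definition concrete, here is a minimal sketch in Python (my own illustration, not Kripke’s or Buechner’s; the function names are just labels for the example):

    def plus(x, y):
        return x + y

    def quus(x, y):
        # Kripke's quus: agrees with addition whenever both arguments
        # are below 57, and yields 5 otherwise.
        return x + y if x < 57 and y < 57 else 5

    # The two functions coincide on every pair of arguments below 57,
    # so no history of computations confined to such numbers can
    # distinguish them.
    assert all(plus(x, y) == quus(x, y)
               for x in range(57) for y in range(57))

    print(plus(68, 57), quus(68, 57))   # 125 and 5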
So, maybe you have always been carrying out “quaddition” rather than addition, since quadding and adding will always yield the same result when the numbers are smaller than 57. That means that now that you are computing “68 + 57,” the correct answer should be “5” rather than “125.” And maybe you think otherwise only because you are now misinterpreting all your previous uses of “plus.” Of course, this seems preposterous. But how do you know the skeptic is wrong?
Kripke’s skeptic holds that any evidence you have that what you always meant was addition is evidence that is consistent with your really having meant quaddition. For example, it is no good to note that you have always said “Two plus two equals four” and never “Two quus two equals four,” because what is in question is what you meant by “plus.” Perhaps, the skeptic says, every time you said “plus” you meant “quus,” and every time you said “addition” you meant “quaddition.” Neither will it help to appeal to memories of what was consciously going through your mind when you said things like “Two plus two equals four.” Even if the words “I mean plus by ‘plus,’ and not ‘quus’!” had passed through your mind, that would only raise the question of what you meant by that.
Note that it is irrelevant that most of us have in fact computed numbers higher than 57. For any given person there is always some number, perhaps an extremely large one, at or above which he has never calculated, and the skeptic can always run the argument using that number instead. Notice also that the point can be made about what you mean now by “plus.” Given all of your current linguistic behavior and the words you are now consciously running through your mind, the skeptic can still ask whether by “plus” you mean addition or quaddition.
Now, Kripke’s “quus” puzzle famously raises all sorts of questions in the philosophy of language and philosophy of mind. This is not the place to get into all that, and Kripke’s argument against functionalism does not, I think, stand or fall with any particular view about what his “quus” paradox ultimately tells us about human thought and language. The point for our purposes is that the “quus” example provides a useful illustration of how material processes can be indeterminate between different functions. (An Aristotelian-Thomistic philosopher like myself, by the way, is happy to allow that mental imagery -- such as the entertaining of visual or auditory mental images of words like “plus” or sentences like “I mean plus, not quus!” -- is as material as bodily behavior is. From an A-T point of view, among the various activities often classified by contemporary philosophers as “mental,” it is only intellectual activity in the strict sense -- activity that involves the grasp of abstract concepts, and is irreducible to the entertaining of mental images -- that is immaterial. And that is crucial to understanding how an A-T philosopher would approach Kripke’s argument. But again, that is a topic for another time.)
Kripke’s “quus” example can be used to state his argument about computationalism as follows. Whatever we say about what we mean when we use “plus,” there are no physical features of a computer that can determine whether it is carrying out addition or quaddition, no matter how far we extend its outputs. No matter what the past behavior of a machine has been, we can always suppose that its next output -- “5,” say, when calculating numbers larger than any it has calculated before -- might show that it is carrying out something like quaddition rather than addition. Of course, it might be said in response that if this happens, that would just show that the machine was malfunctioning rather than performing quaddition. But Kripke points out that whether some output counts as a malfunction itself depends on what program the machine is running, and whether the machine is running the program for addition rather than quaddition is precisely what is in question.
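The point about past behavior can be put in a toy sketch (my own, under the simplifying assumption that all we have is a finite record of the machine’s inputs and outputs): for any such record there is a quaddition-like function, with its threshold set above every number observed so far, that fits the record exactly as well as addition does.

    def make_quadd(threshold):
        # A quus-style function with an adjustable threshold.
        def quadd(x, y):
            return x + y if x < threshold and y < threshold else 5
        return quadd

    # A finite observation record: ((inputs), output) pairs.
    record = [((2, 3), 5), ((10, 20), 30), ((40, 16), 56)]

    largest_seen = max(max(x, y) for (x, y), _ in record)
    quadd = make_quadd(largest_seen + 1)

    # Both hypotheses fit every observation perfectly.
    assert all(x + y == out for (x, y), out in record)
    assert all(quadd(x, y) == out for (x, y), out in record)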
If we restrict ourselves merely to observing inputs and outputs, then it does seem reasonable to say that we cannot tell whether the computer is adding or quadding, just as we can't with Kripke's human calculator. But are there really 'no physical features of the computer' that can determine this? Surely we can inspect its structure and use our knowledge of the laws of physics, together with knowledge of its initial conditions, to predict its behaviour. The computer can be seen as a dynamical system governed by a (no doubt large) set of differential equations, and the program loaded into it can be seen as the system's initial conditions. Those initial conditions are quite observable, being the distribution of charge or electric potential in the memory cells that arises from loading the program. Such a system of equations and initial conditions can, in principle, be solved; electronic engineers have been using circuit emulation software to do this, albeit on a limited scale, for a number of years. Furthermore, we can surely build models of the computer and its program at several levels of abstraction, eventually arriving at a mathematical description from which we can make arguments about its behaviour. It is in principle possible to prove, for non-negative inputs, that the computer's output is always at least as large as either input, and hence that its program could not be an implementation of quaddition, which requires the output 5 whenever an input reaches 57. Such techniques are presently in use to prove the correctness of programs against their specifications, and to verify the correctness of circuit designs.
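As a gesture at what such a proof might look like, here is a minimal sketch using the Python bindings of the Z3 SMT solver (the z3-solver package; my example, and an idealized model rather than a real circuit netlist). It models the program as addition over unbounded non-negative integers and asks whether any inputs in quaddition's divergence region could yield the output 5; 'unsat' means none can, so the modeled program cannot be an implementation of quaddition.

    from z3 import Ints, Or, Solver, unsat

    x, y = Ints("x y")
    s = Solver()
    s.add(x >= 0, y >= 0)          # non-negative inputs
    s.add(Or(x >= 57, y >= 57))    # the region where quus departs from plus
    s.add(x + y == 5)              # the output quaddition would require there
    print(s.check())               # unsat: no such inputs exist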
Another way to put the point is that the question of what program a machine is running always involves idealization. In any actual machine, gears get stuck, components melt, and in myriad other ways the machine fails perfectly to instantiate the program we say it is running. But there is nothing in the physical features or operations of the machine themselves that tells us that it has failed perfectly to instantiate its idealized program. For relative to an eccentric program, even a machine with a stuck gear or melted component could be doing exactly what it is supposed to be doing, and a gear that doesn’t stick or a component that doesn’t melt could count as malfunctioning. Hence there is nothing in the behavior of a computer, considered by itself, that can tell us whether its giving “125” in response to “What is 68 + 57?” counts as an instance of its following an idealized program for addition, or instead as a malfunction in a machine that is supposed to be carrying out an idealized program for quaddition. And there is nothing in the behavior of a computer, considered by itself, that could tell us whether giving “5” in response to “What is 68 + 57?” counts as a malfunction in a machine that is supposed to be carrying out an idealized program for addition, or instead as an instance of properly following an idealized program for quaddition.
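The relativity of 'malfunction' to a presupposed program can itself be put in code. In this toy sketch (my own illustration), one and the same output record counts as correct under one specification and as a malfunction under the other:

    def spec_add(x, y):
        return x + y

    def spec_quadd(x, y):
        return x + y if x < 57 and y < 57 else 5

    def malfunctions(spec, record):
        # An output counts as a "malfunction" only relative to a chosen spec.
        return [(args, out) for args, out in record if spec(*args) != out]

    record = [((68, 57), 125)]
    print(malfunctions(spec_add, record))    # [] -- working correctly
    print(malfunctions(spec_quadd, record))  # [((68, 57), 125)] -- "broken"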
Again, there is indeed 'nothing in the behaviour of the computer' -- interpreting this to mean observations of its inputs and outputs alone -- that can tell us that a malfunction has occurred. But it is not true that there is 'nothing in the physical features of the machine' that can alert us to a fault. Knowledge of the structure and initial conditions of the system gives rise to expectations about its behaviour. This is the position of the engineer: from anomalies in behaviour, he hypothesises changes in structure.
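A toy version of that engineering stance (again my own sketch): derive predicted outputs from a structural model of the machine, and treat departures from prediction as evidence of a fault, with no appeal to anyone's intentions.

    def model_output(x, y):
        # Prediction derived from the modeled structure of the circuit
        # (here, simply an adder).
        return x + y

    def diagnose(x, y, observed):
        expected = model_output(x, y)
        if observed != expected:
            return f"anomaly: expected {expected}, got {observed}; hypothesize a fault"
        return "consistent with the structural model"

    print(diagnose(68, 57, 125))   # consistent
    print(diagnose(68, 57, 5))     # anomaly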
As Buechner points out, it is no good to appeal to counterfactuals to try to get around the problem -- to claim, for example, that what the machine would have done had it not malfunctioned is to answer “125” rather than “5.” For such a counterfactual presupposes that the idealized program the machine is instantiating is addition rather than quaddition, which is precisely what is in question.
Naturally, we could always ask the programmer of the machine what he had in mind. But that simply reinforces the point that there is nothing in the physical properties of the machine itself that can tell us. And if there is nothing intrinsic to computers in general that determines what programs they are running, neither is there anything intrinsic to the human brain specifically, considered as a kind of computer, that determines what program it is running (if it is running one in the first place). Hence there can be no question of explaining the human mind in terms of programs running in the brain.
Certainly 'there is nothing in the physical properties of the machine itself' that can tell us the intentions of its designer. But we don't need to know this. Knowledge of its structure and program, all physically observable, is enough to tell us what it will do. Indeed, the designer may have made a mistake, so that his machine does not fulfil his intentions. By our analysis we can come to know the machine better than its designer. It is not true that 'there is nothing intrinsic to computers in general that determines what programs they are running'. A computer with a loaded program is a physical system that we can measure and model.
Might we appeal to God as the programmer of the brain who determines which program it is running? Obviously most defenders of the computer model of the mind would not want to do this, since they tend to be materialists and materialists tend to be atheists. But it is not a good idea in any case. For that would make of human thought something as extrinsic to human beings as the program a computer is running is extrinsic to a computer, indeed as extrinsic as the meaning of a sentence is to the sentence. Just as the meaning of “The cat is on the mat” is not really in the sounds, ink marks, or pixels in which the sentence is realized, but rather in the mind of the user or hearer of the sentence, so too the idea of God as a kind of programmer or user of the brain qua computer would entail that the meanings of our thought processes are not really in us at all but only in Him. The result would be a new riff on occasionalism that is even more bizarre than the usual kind -- a version on which it is really God who is, strictly speaking, doing all our thinking for us!
Neither, as Buechner points out, will it do to suggest that natural selection has determined that we are following one program rather than another. For any program we conjecture natural selection has put into us, there is going to be an alternative program with equal survival value, and the biological facts will be indeterminate between them. There will be no reason in principle to hold that it is the one program that natural selection put into us rather than the other.
Suppose we say instead that there is what Buechner calls a “telos in Nature” that determines that the brain really is following this program rather than that one -- the program for addition, say, rather than quaddition. In that case we would have some end or purpose intrinsic to the natural world that determines which program the brain instantiates, which would eliminate the occasionalist problem the appeal to God as programmer raised. (Of course, you could give a Fifth Way style argument for God as the ultimate explanation of this intrinsic telos, but that would not be to make of God a “programmer” in the relevant sense, any more than Aquinas’s Fifth Way makes of God a Paley-style tinkerer.)
Buechner himself is not sympathetic to this “telos in Nature” suggestion, but it is, naturally, one that an Aristotelian is bound to take seriously. But it does not help the advocate of the computer model of the mind, at least not if he is a materialist. For to affirm that there is teleology intrinsic in nature is just to abandon the materialist’s conception of matter and return to something like the Aristotelian-Scholastic conception that materialists, like other modern philosophers, thought they had buried forever back in the days of Hobbes and Descartes.
Still, if the computer model of the mind leads people to reconsider Aristotelianism, it can’t be all bad. (Cf. James Ross’s “The Fate of the Analysts: Aristotle’s Revenge: Software Everywhere.”)
As a final remark, I make this observation. As I understand it, the Feser/Ross argument has as premises that the physical is indeterminate as to function but that the human is determinate. My argument above is that the physical can be quite determinate; and, more strangely, if we apply Kripke's skeptical argument to humans, as Feser does at the beginning of the piece, it would seem that the human is indeterminate. Perhaps the argument against computationalism should be that computers are too determinate to be people. I'm left in confusion. Comments welcome.