Algorithm board (digest area)
From: ssos (Being and Nothingness), Board: Algorithm
Title: Can machines think? ------- by Alan Turing (5)
Site: HIT Lilac BBS (Thursday, 14 June 2001, 15:57:05), local posting
(5) Arguments from Various Disabilities. These arguments take the form, "I grant
you that you can make machines do all the things you have mentioned but you will
never be able to make one to do X". Numerous features X are suggested in this
connection. I offer a selection:
Be kind, resourceful, beautiful, friendly (p.448), have initiative, have a sense
of humour, tell right from wrong, make mistakes (p.448), fall in love, enjoy
strawberries and cream (p.448), make some one fall in love with it, learn from
experience (pp.456 f.), use words properly, be the subject of its own thought
(p.449), have as much diversity of behaviour as a man, do something really new
(p.450). (Some of these disabilities are given special consideration as
indicated by the page numbers.)
No support is usually offered for these statements. I believe they are mostly
founded on the principle of scientific induction. A man has seen thousands of
machines in his lifetime. From what he sees of them he draws a number of general
conclusions. They are ugly, each is designed for a very limited purpose, when
required for a minutely different purpose they are useless, the variety of
behaviour of any one of them is very small, etc., etc. Naturally he concludes
that these are necessary properties of machines in general. Many of these
limitations are associated with the very small storage capacity of most
machines. (I am assuming that the idea of storage capacity is extended in some
way to cover machines other than discrete-state machines. {p.448} The exact
definition does not matter as no mathematical accuracy is claimed in the present
discussion.) A few years ago, when very little had been heard of digital
computers, it was possible to elicit much incredulity concerning them, if one
mentioned their properties without describing their construction. That was
presumably due to a similar application of the principle of scientific
induction. These applications of the principle are of course largely
unconscious. When a burnt child fears the fire and shows that he fears it by
avoiding it, I should say that he was applying scientific induction. (I could of
course also describe his behaviour in many other ways.) The works and customs of
mankind do not seem to be very suitable material to which to apply scientific
induction. A very large part of space-time must be investigated, if reliable
results are to be obtained. Otherwise we may (as most English children do)
decide that everybody speaks English, and that it is silly to learn French.
There are, however, special remarks to be made about many of the disabilities
that have been mentioned. The inability to enjoy strawberries and cream may have
struck the reader as frivolous. Possibly a machine might be made to enjoy this
delicious dish, but any attempt to make one do so would be idiotic. What is
important about this disability is that it contributes to some of the other
disabilities, e.g. to the difficulty of the same kind of friendliness occurring
between man and machine as between white man and white man, or between black man
and black man.
The claim that "machines cannot make mistakes" seems a curious one. One is
tempted to retort, "Are they any the worse for that?" But let us adopt a more
sympathetic attitude, and try to see what is really meant. I think this
criticism can be explained in terms of the imitation game. It is claimed that
the interrogator could distinguish the machine from the man simply by setting
them a number of problems in arithmetic. The machine would be unmasked because
of its deadly accuracy. The reply to this is simple. The machine (programmed for
playing the game) would not attempt to give the right answers to the arithmetic
problems. It would deliberately introduce mistakes in a manner calculated to
confuse the interrogator. A mechanical fault would probably show itself through
an unsuitable decision as to what sort of a mistake to make in the arithmetic.
Even this interpretation of the criticism is not sufficiently sympathetic. But
we cannot afford the space to go into it much further. It seems to me that this
criticism depends {p.449} on a confusion between two kinds of mistake. We may
call them 'errors of functioning' and 'errors of conclusion'. Errors of
functioning are due to some mechanical or electrical fault which causes the
machine to behave otherwise than it was designed to do. In philosophical
discussions one likes to ignore the possibility of such errors; one is therefore
discussing 'abstract machines'. These abstract machines are mathematical
fictions rather than physical objects. By definition they are incapable of
errors of functioning. In this sense we can truly say that 'machines can never
make mistakes'. Errors of conclusion can only arise when some meaning is
attached to the output signals from the machine. The machine might, for
instance, type out mathematical equations, or sentences in English. When a false
proposition is typed we say that the machine has committed an error of
conclusion. There is clearly no reason at all for saying that a machine cannot
make this kind of mistake. It might do nothing but type out repeatedly '0=1'. To
take a less perverse example, it might have some method for drawing conclusions
by scientific induction. We must expect such a method to lead occasionally to
erroneous results.
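The distinction can be made concrete with a small sketch. The swan example and all function names below are my own illustration, not Turing's: a program that generalises by naive induction functions exactly as designed, yet its output, read as a proposition, can be false.

```python
# Illustrating an 'error of conclusion': the program has no fault of
# functioning, but the proposition it emits may still be false.

def induce_rule(observations):
    """Naive scientific induction: if every observed swan is white,
    conclude that all swans are white."""
    if all(colour == "white" for _, colour in observations):
        return "All swans are white."
    return "Not all swans are white."

seen = [("swan-1", "white"), ("swan-2", "white"), ("swan-3", "white")]
print(induce_rule(seen))          # asserts a false proposition: a black
                                  # swan simply was not in the sample
reality = seen + [("swan-4", "black")]
print(induce_rule(reality))       # with fuller data the conclusion changes
```

An error of functioning, by contrast, would be the machine evaluating `all(...)` wrongly because of a hardware fault, something the abstract machine is by definition incapable of.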
The claim that a machine cannot be the subject of its own thought can of course
only be answered if it can be shown that the machine has some thought with some
subject matter. Nevertheless, 'the subject matter of a machine's operations'
does seem to mean something, at least to the people who deal with it. If, for
instance, the machine was trying to find a solution of the equation
x^2 - 40x - 11 = 0 one would be tempted to describe this equation as part of the machine's
subject matter at that moment. In this sort of sense a machine undoubtedly can
be its own subject matter. It may be used to help in making up its own
programmes, or to predict the effect of alterations in its own structure. By
observing the results of its own behaviour it can modify its own programmes so
as to achieve some purpose more effectively. These are possibilities of the near
future, rather than Utopian dreams.
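As a sketch of the kind of task in which an equation becomes the machine's "subject matter", here is a root-finder for the quadratic Turing cites. The use of Newton's method and the function names are my own choices, not anything specified in the text.

```python
import math

# The machine 'attending to' the equation x^2 - 40x - 11 = 0:
# searching for a root by Newton's method.

def f(x):
    return x * x - 40 * x - 11

def f_prime(x):
    return 2 * x - 40

def newton(x0, tol=1e-12, max_iter=100):
    """Iterate x -> x - f(x)/f'(x) until the step is negligible."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(50.0)   # converges to the larger root, 20 + sqrt(411)
print(root)
```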
The criticism that a machine cannot have much diversity of behaviour is just a
way of saying that it cannot have much storage capacity. Until fairly recently a
storage capacity of even a thousand digits was very rare.
The criticisms that we are considering here are often disguised forms of the
argument from consciousness. Usually if one maintains that a machine can do one
of these things, and describes the kind of method that the machine could use,
one will not make {p.450} much of an impression. It is thought that the method
(whatever it may be, for it must be mechanical) is really rather base. Compare
the parenthesis in Jefferson's statement quoted on p.21.
(6) Lady Lovelace's Objection. Our most detailed information of Babbage's
Analytical Engine comes from a memoir by Lady Lovelace. In it she states, "The
Analytical Engine has no pretensions to originate anything. It can do whatever
we know how to order it to perform" (her italics). This statement is quoted by
Hartree (p.70) who adds: "This does not imply that it may not be possible to
construct electronic equipment which will 'think for itself', or in which, in
biological terms, one could set up a conditioned reflex, which would serve as a
basis for 'learning'. Whether this is possible in principle or not is a
stimulating and exciting question, suggested by some of these recent
developments. But it did not seem that the machines constructed or projected at
the time had this property."
I am in thorough agreement with Hartree over this. It will be noticed that he
does not assert that the machines in question had not got the property, but
rather that the evidence available to Lady Lovelace did not encourage her to
believe that they had it. It is quite possible that the machines in question had
in a sense got this property. For suppose that some discrete-state machine has
the property. The Analytical Engine was a universal digital computer, so that,
if its storage capacity and speed were adequate, it could by suitable
programming be made to mimic the machine in question. Probably this argument did
not occur to the Countess or to Babbage. In any case there was no obligation on
them to claim all that could be claimed.
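The universality argument can be sketched in a few lines: a general-purpose program, handed any discrete-state machine as a transition table, simply becomes that machine. The three-state parity machine below is my own toy example, not one from the text.

```python
# Mimicking an arbitrary discrete-state machine with a universal program.
# (state, input) -> (next_state, output); this table defines a machine
# that reports whether it has seen an odd or even number of 1s so far.
TRANSITIONS = {
    ("q0", 0): ("q0", "even"),
    ("q0", 1): ("q1", "odd"),
    ("q1", 0): ("q1", "odd"),
    ("q1", 1): ("q0", "even"),
}

def run(machine, inputs, start="q0"):
    """The universal computer: feed it any table and it imitates
    the machine the table describes."""
    state, outputs = start, []
    for symbol in inputs:
        state, out = machine[(state, symbol)]
        outputs.append(out)
    return outputs

print(run(TRANSITIONS, [1, 1, 1]))  # -> ['odd', 'even', 'odd']
```

Swap in a different table and `run` imitates a different machine, which is the sense in which the Analytical Engine, given adequate storage and speed, could mimic any discrete-state machine.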
This whole question will be considered again under the heading of learning
machines.
A variant of Lady Lovelace's objection states that a machine can 'never do
anything really new'. This may be parried for a moment with the saw, 'There is
nothing new under the sun'. Who can be certain that 'original work' that he has
done was not simply the growth of the seed planted in him by teaching, or the
effect of following well-known general principles? A better variant of the
objection says that a machine can never 'take us by surprise'. This statement is
a more direct challenge and can be met directly. Machines take me by surprise
with great frequency. This is largely because I do not do sufficient calculation
to decide what to expect them to do, or rather because, although I do a
calculation, I do it in a hurried, slipshod fashion, taking risks. Perhaps I say
to myself, 'I suppose the voltage here ought to be the same as there: anyway
let's assume it is'. {p.451} Naturally I am often wrong, and the result is a
surprise for me, for by the time the experiment is done these assumptions have
been forgotten. These admissions lay me open to lectures on the subject of my
vicious ways, but do not throw any doubt on my credibility when I testify to the
surprises I experience.
I do not expect this reply to silence my critic. He will probably say that such
surprises are due to some creative mental act on my part, and reflect no credit
on the machine. This leads us back to the argument from consciousness, and far
from the idea of surprise. It is a line of argument we must consider closed, but
it is perhaps worth remarking that the appreciation of something as surprising
requires as much of a 'creative mental act' whether the surprising event
originates from a man, a book, a machine or anything else.
The view that machines cannot give rise to surprises is due, I believe, to a
fallacy to which philosophers and mathematicians are particularly subject. This
is the assumption that as soon as a fact is presented to a mind all consequences
of that fact spring into the mind simultaneously with it. It is a very useful
assumption under many circumstances, but one too easily forgets that it is
false. A natural consequence of doing so is that one then assumes that there is
no virtue in the mere working out of consequences from data and general
principles.
(7) Argument from Continuity in the Nervous System. The nervous system is
certainly not a discrete-state machine. A small error in the information about
the size of a nervous impulse impinging on a neuron, may make a large difference
to the size of the outgoing impulse. It may be argued that, this being so, one
cannot expect to be able to mimic the behaviour of the nervous system with a
discrete-state system.
It is true that a discrete-state machine must be different from a continuous
machine. But if we adhere to the conditions of the imitation game, the
interrogator will not be able to take any advantage of this difference. The
situation can be made clearer if we consider some other simpler continuous
machine. A differential analyser will do very well. (A differential analyser is
a certain kind of machine not of the discrete-state type used for some kinds of
calculation.) Some of these provide their answers in a typed form, and so are
suitable for taking part in the game. It would not be possible for a digital
computer to predict exactly what answers the differential analyser would give to
a problem, but it would be quite capable of giving the right sort of answer. For
instance, if asked to give the value of pi (actually about 3.1416) it would be
reasonable {p.452} to choose at random between the values 3.12, 3.13, 3.14,
3.15, 3.16 with the probabilities of 0.05, 0.15, 0.55, 0.19, 0.06 (say). Under
these circumstances it would be very difficult for the interrogator to
distinguish the differential analyser from the digital computer.
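The imitation strategy above is direct to program. The sketch below uses the values and probabilities of the published text (0.05, 0.15, 0.55, 0.19, 0.06 over 3.12 through 3.16); the function name is my own.

```python
import random

# A digital computer imitating a differential analyser's imprecision:
# instead of answering pi exactly, pick a nearby value at random with
# the weights Turing suggests.

VALUES  = [3.12, 3.13, 3.14, 3.15, 3.16]
WEIGHTS = [0.05, 0.15, 0.55, 0.19, 0.06]

def analyser_style_pi(rng=random):
    """Return a plausibly imprecise value of pi, as an analogue
    machine subject to small errors might."""
    return rng.choices(VALUES, weights=WEIGHTS, k=1)[0]

sample = [analyser_style_pi() for _ in range(10)]
print(sample)  # answers cluster around 3.14, but not uniformly
```

Because the interrogator sees only typed answers, this randomised imprecision is indistinguishable from the analogue machine's genuine imprecision, which is the point of the passage.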
(8) The Argument from Informality of Behaviour. It is not possible to produce a
set of rules purporting to describe what a man should do in every conceivable
set of circumstances. One might for instance have a rule that one is to stop
when one sees a red traffic light, and to go if one sees a green one, but what
if by some fault both appear together? One may perhaps decide that it is safest
to stop. But some further difficulty may well arise from this decision later. To
attempt to provide rules of conduct to cover every eventuality, even those
arising from traffic lights, appears to be impossible. With all this I agree.
From this it is argued that we cannot be machines. I shall try to reproduce the
argument, but I fear I shall hardly do it justice. It seems to run something
like this. 'If each man had a definite set of rules of conduct by which he
regulated his life he would be no better than a machine. But there are no such
rules, so men cannot be machines.' The undistributed middle is glaring. I do not
think the argument is ever put quite like this, but I believe this is the
argument used nevertheless. There may however be a certain confusion between
'rules of conduct' and 'laws of behaviour' to cloud the issue. By 'rules of
conduct' I mean precepts such as 'Stop if you see red lights', on which one can
act, and of which one can be conscious. By 'laws of behaviour' I mean laws of
nature as applied to a man's body such as 'if you pinch him he will squeak'. If
we substitute 'laws of behaviour which regulate his life' for 'laws of conduct
by which he regulates his life' in the argument quoted the undistributed middle
is no longer insuperable. For we believe that it is not only true that being
regulated by laws of behaviour implies being some sort of machine (though not
necessarily a discrete-state machine), but that conversely being such a machine
implies being regulated by such laws. However, we cannot so easily convince
ourselves of the absence of complete laws of behaviour as of complete rules of
conduct. The only way we know of for finding such laws is scientific
observation, and we certainly know of no circumstances under which we could say,
"We have searched enough. There are no such laws."
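The traffic-light predicament earlier in this section can be put in code. The rule table and its gap are my own illustration: a finite set of rules of conduct is silent on the faulty case where red and green show together.

```python
# Rules of conduct as a lookup table keyed by (red_on, green_on).
# The table covers the cases its author anticipated -- and no others.
RULES = {
    (True, False): "stop",   # red only
    (False, True): "go",     # green only
}

def decide(red_on, green_on):
    """Apply the rules of conduct; report when no rule covers the case."""
    try:
        return RULES[(red_on, green_on)]
    except KeyError:
        return "no rule covers this case"

print(decide(True, False))   # the anticipated case: stop
print(decide(True, True))    # both lights on by some fault: the rules are silent
```

One can of course add an entry for the faulty case, but each addition invites a further eventuality, which is the regress the argument turns on.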
We can demonstrate more forcibly that any such statement would be unjustified.
For suppose we could be sure of finding {p.453} such laws if they existed. Then
given a discrete-state machine it should certainly be possible to discover by
observation sufficient about it to predict its future behaviour, and this within
a reasonable time, say a thousand years. But this does not seem to be the case.
I have set up on the Manchester computer a small programme using only 1000 units
of storage, whereby the machine supplied with one sixteen figure number replies
with another within two seconds. I would defy anyone to learn from these replies
sufficient about the programme to be able to predict any replies to untried
sufficient about the programme to be able to predict any replies to untried
values.
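Turing does not record what his Manchester programme computed, so the sketch below is purely a hypothetical stand-in with the same flavour: a deterministic but thoroughly mixed mapping from one sixteen-figure number to another, whose rule is hard to infer from input/output pairs alone. The constants are arbitrary mixing parameters of my choosing.

```python
# A hypothetical stand-in for Turing's Manchester programme: a fixed,
# deterministic scramble of sixteen-figure numbers that is nonetheless
# hard to predict from sampled replies.

MOD = 10 ** 16  # keep replies to sixteen figures

def reply(n):
    """Map a sixteen-figure number to another by repeated
    multiply-add-xor mixing (an arbitrary choice of scramble)."""
    x = n % MOD
    for _ in range(5):
        x = (x * 6364136223846793005 + 1442695040888963407) % (2 ** 64)
        x ^= x >> 31
    return x % MOD

print(reply(1234567812345678))  # same input always yields the same reply
```

The point is not secrecy but opacity: every step is mechanical, yet observation of replies gives an interrogator little purchase on the underlying law.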
(9) The Argument from Extra-Sensory Perception. I assume that the reader is
familiar with the idea of extra-sensory perception, and the meaning of the four
items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis.
These disturbing phenomena seem to deny all our usual scientific ideas. How we
should like to discredit them! Unfortunately the statistical evidence, at least
for telepathy, is overwhelming. It is very difficult to rearrange one's ideas so
as to fit these new facts in. Once one has accepted them it does not seem a very
big step to believe in ghosts and bogies. The idea that our bodies move simply
according to the known laws of physics, together with some others not yet
discovered but somewhat similar, would be one of the first to go.
This argument is to my mind quite a strong one. One can say in reply that many
scientific theories seem to remain workable in practice, in spite of clashing
with E.S.P.; that in fact one can get along very nicely if one forgets about it.
This is rather cold comfort, and one fears that thinking is just the kind of
phenomenon where E.S.P. may be especially relevant.
A more specific argument based on E.S.P. might run as follows:
"Let us play the imitation game, using as witnesses a man who is good as a
telepathic receiver, and a digital computer. The interrogator can ask such
questions as 'What suit does the card in my right hand belong to?' The man by
telepathy or clairvoyance gives the right answer 130 times out of 400 cards. The
machine can only guess at random, and perhaps gets 104 right, so the
interrogator makes the right identification." There is an interesting
possibility which opens here. Suppose the digital computer contains a random
number generator. Then it will be natural to use this to decide what answer to
give. But then the random number generator will be subject to the psycho-kinetic
powers of the interrogator. Perhaps this psycho-kinesis might cause the machine
to guess right more often than would be expected on a probability calculation,
so that the interrogator {p.454} might still be unable to make the right
identification. On the other hand, he might be able to guess right without any
questioning, by clairvoyance. With E.S.P. anything may happen.
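The arithmetic behind the imagined experiment shows why the scores 130 and 104 play the roles they do. A random guesser on four suits expects about 100 correct answers out of 400; the calculation below (my own framing, using the standard binomial mean and standard deviation) shows that 130 lies several standard deviations above chance, while 104 is well within it.

```python
import math

# Binomial statistics for guessing the suit of 400 cards at random:
# success probability 1/4 per card.
n, p = 400, 0.25
mean = n * p                       # expected correct guesses: 100
sd = math.sqrt(n * p * (1 - p))    # standard deviation: sqrt(75), about 8.66

print(mean, round(sd, 2))
print(round((130 - mean) / sd, 2))  # the telepathist: ~3.46 sd above chance
print(round((104 - mean) / sd, 2))  # the machine's 104: ~0.46 sd, mere chance
```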
If telepathy is admitted it will be necessary to tighten our test up. The
situation could be regarded as analogous to that which would occur if the
interrogator were talking to himself and one of the competitors was listening
with his ear to the wall. To put the competitors into a 'telepathy-proof room'
would satisfy all requirements.