Flyingoverseas Board (Digest Section)
From: coolreal (真点点令), Board: Flyingoverseas
Subject: MIT-AI Advice 3
Posted from: 哈工大紫丁香 (Thursday, March 21, 2002, 09:22:29), local mail
What to learn for AI at MIT
Learning other fields
It used to be the case that you could do AI without knowing anything except AI, and some people still seem to do that. But increasingly, good research requires that you know a lot about several related fields. Computational feasibility by itself doesn't provide enough constraint on what intelligence is about. Other related fields give other forms of constraint, for example experimental data, which you can get from psychology. More importantly, other fields give you new tools for thinking and new ways of looking at what intelligence is about. Another reason for learning other fields is that AI does not have its own standards of research excellence, but has borrowed from other fields. Mathematics takes theorems as progress; engineering asks whether an object works reliably; psychology demands repeatable experiments; philosophy, rigorous arguments; and so forth. All these criteria are sometimes applied to work in AI, and adeptness with them is valuable in evaluating other people's work and in deepening and defending your own.
Over the course of the six or so years it takes to get a PhD at MIT, you can get a really solid grounding in one or two non-AI fields, read widely in several more, and have at least some understanding of the lot of them. Here are some ways to learn about a field you don't know much about:
- Take a graduate course. This is solidest, but is often not an efficient way to go about it.
- Read a textbook. Not a bad approach, but textbooks are usually out of date, and generally have a high ratio of words to content.
- Find out what the best journal in the field is, maybe by talking to someone who knows about it. Then skim the last few years' worth and follow the reference trees. This is usually the fastest way to get a feel for what is happening, but can give you a somewhat warped view.
- Find out who's most famous in the field and read their books.
- Hang out with grad students in the field.
- Go to talks. You can find announcements for them on departmental bulletin boards.
- Check out departments other than MIT's. MIT will give you a very skewed view of, for example, linguistics or psychology. Compare the Harvard course catalog. Drop by the graduate office over there, read the bulletin boards, and pick up any free literature.
Now for the subjects related to AI you should know about.
Computer science is the technology we work with. The introductory graduate courses you are required to take will almost certainly not give you an adequate understanding of it, so you'll have to learn a fair amount by reading beyond them. All the areas of computer science (theory, architectures, systems, languages, and so on) are relevant.
Mathematics is probably the next most important thing to know. It's critical for work in vision and robotics; for central-systems work it usually isn't directly relevant, but it teaches you useful ways of thinking. You need to be able to read theorems, and an ability to prove them will impress most people in the field. Very few people can learn math on their own; you need a gun at your head in the form of a course, and you need to do the problem sets, so being a listener is not enough. Take as much math as you can early, while you still can; other fields are more easily picked up later.
Computer science is grounded in discrete mathematics: algebra, graph theory, and the like. Logic is very important if you are going to work on reasoning. It's not used that much at MIT, but at Stanford and elsewhere it is the dominant way of thinking about the mind, so you should learn enough of it that you can make and defend an opinion for yourself. One or two graduate courses in the MIT math department are probably enough. For work in perception and robotics, you need continuous as well as discrete math. A solid background in analysis, differential geometry, and topology will provide often-needed skills. Some statistics and probability is just generally useful.
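As a minimal illustration of the kind of formal statement and proof meant by "reading theorems" and working with logic, modus ponens can be stated and machine-checked in the Lean proof assistant; this is a small sketch for flavor, not anything drawn from the MIT curriculum or the original essay:

  -- Modus ponens as a Lean theorem: from a proof of p and a proof of p → q,
  -- we obtain a proof of q by applying the implication to the premise.
  theorem modus_ponens (p q : Prop) (hp : p) (hpq : p → q) : q :=
    hpq hp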
Cognitive psychology mostly shares a worldview with AI, but practitioners have rather different goals and do experiments instead of writing programs. Everyone needs to know something about this stuff. Molly Potter teaches a good graduate intro course at MIT.
Developmental psychology is vital if you are going to do learning work. It's also more generally useful, in that it gives you some idea about which things should be hard and easy for a human-level intelligence to do. It also suggests models for cognitive architecture. For example, work on child language acquisition puts substantial constraints on linguistic processing theories. Susan Carey teaches a good graduate intro course at MIT.
"Softer" sorts of psychology like psychoanalysis and social psychology have affected AI less, but have significant potential. They give you very different ways of thinking about what people are. Social "sciences" like sociology and anthropology can serve a similar role; it's useful to have a lot of perspectives. You're on your own for learning this stuff. Unfortunately, it's hard to sort out what's good from bad in these fields without a connection to a competent insider. Check out Harvard: it's easy for MIT students to cross-register for Harvard classes.
Neuroscience tells us about human computational hardware. With the recent rise of computational neuroscience and connectionism, it's had a lot of influence on AI. MIT's Brain and Behavioral Sciences department offers good courses on vision (Hildreth, Poggio, Richards, Ullman), motor control (Hollerbach, Bizzi), and general neuroscience (9.015, taught by a team of experts).
Linguistics is vital if you are going to do natural language work. Besides that, it exposes a lot of constraint on cognition in general. Linguistics at MIT is dominated by the Chomsky school. You may or may not find this to your liking. Check out George Lakoff's recent book Women, Fire, and Dangerous Things as an example of an alternative research program.
Engineering, especially electrical engineering, has been taken as a domain by a lot of AI research, especially at MIT. No accident; our lab puts a lot of stock in building programs that clearly do something, like analyzing a circuit. Knowing EE is also useful when it comes time to build a custom chip or debug the power supply on your Lisp machine.
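For the flavor of what "a program that clearly does something, like analyzing a circuit" can mean in its very simplest form, here is a small Python sketch; it is an illustrative example with an assumed network representation, not code from the lab or the original essay:

  # Illustrative sketch: reduce a series/parallel resistor network to one value.
  # A network is either a resistance in ohms, or a pair (kind, subnetworks)
  # where kind is 'series' or 'parallel'.
  def equivalent_resistance(network):
      if isinstance(network, (int, float)):
          return float(network)
      kind, parts = network
      values = [equivalent_resistance(part) for part in parts]
      if kind == 'series':
          return sum(values)                         # resistances add in series
      if kind == 'parallel':
          return 1.0 / sum(1.0 / v for v in values)  # conductances add in parallel
      raise ValueError("unknown combinator: " + kind)

  # 100 ohms in series with two 200-ohm resistors in parallel gives 200 ohms.
  print(equivalent_resistance(('series', [100, ('parallel', [200, 200])])))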
Physics can be a powerful influence for people interested in perception and
robotics.
Philosophy is the hidden framework in which all AI is done. Most work in AI takes implicit philosophical positions without knowing it. It's better to know what your positions are. Learning philosophy also teaches you to make and follow certain sorts of arguments that are used in a lot of AI papers. Philosophy can be divided up along at least two orthogonal axes. Philosophy is usually philosophy of something; philosophy of mind and language are most relevant to AI. Then there are schools. Very broadly, there are two very different superschools: analytic and Continental philosophy. Analytic philosophy of mind for the most part shares a worldview with most people in AI. Continental philosophy has a very different way of seeing which takes some getting used to. It has been used by Dreyfus to argue that AI is impossible. More recently, a few researchers have seen it as compatible with AI and as providing an alternative approach to the problem. Philosophy at MIT is of the analytical sort, and of a school that has been heavily influenced by Chomsky's work in linguistics.
This all seems like a lot to know about, and it is. There's a trap here: thinking "if only I knew more X, this problem would be easy," for all X. There's always more to know that could be relevant. Eventually you have to sit down and solve the problem.
A whole lot of people at MIT
--
※ Source: ·哈工大紫丁香 bbs.hit.edu.cn·[FROM: 202.119.14.197]