Making a Computer Get Smart
appeared in Insight Magazine, March 24, 1986
Scientists are teaching a computer to reason out problems and learn from
experience. In other words, they are trying to give a computer common sense.
They have had limited success, but just to teach a computer how to converse
requires a breakthrough.
----------
Inside a gleaming office complex in Austin, Texas, some of the nation's
brightest scientists, linguists and psychologists are trying to tutor a dumb
student.
The student is a computer, and a 24-member artificial-intelligence team is
spoon-feeding it thousands of scraps of knowledge, as well as giving it
grammar and vocabulary lessons. Their goal is to cram the machine - a
mindless array of thumbnail-sized silicon chips - with enough facts, rules of
thumb and human language skills that it may begin to think and learn on its
own.
Here, at the Microelectronics and Computer Technology Corp. (MCC), a joint
research and development venture backed by America's corporate giants, the
future is being built.
MCC is "pushing back the frontiers of science," says its chairman, retired
Navy Adm. Bobby R. Inman, who previously served as deputy director of the
Central Intelligence Agency.
Article by article, the team of researchers is dissecting an encyclopedia,
then encoding its contents into the computer's memory bank. For example, all
the facts presented in an article on flight are encoded, plus the underlying
knowledge about the world needed to understand the article.
They are feeding the machine thousands of bits and pieces of common sense: If
you're out in the rain, you get wet. If you drop something, it falls to the
ground. An object can't be in two places at once. Each person lives for a
single interval of time.
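Common-sense assertions of the sort listed above - an object can't be in two
places at once, for instance - can be sketched as a tiny fact base with an
update rule. The representation below is an invented illustration, not MCC's
actual encoding, which was far richer.

```python
# Illustrative sketch: a common-sense constraint ("an object can't be
# in two places at once") enforced over a small fact base.
# Fact names and structure are invented for this example.

facts = set()

def assert_location(obj, place):
    """Record where an object is. Adding a new location retracts any
    previous one, so the fact base never holds two places at once."""
    global facts
    facts = {f for f in facts if not (f[0] == "at" and f[1] == obj)}
    facts.add(("at", obj, place))

def locations(obj):
    """Return every place the fact base believes the object to be."""
    return [f[2] for f in facts if f[0] == "at" and f[1] == obj]

assert_location("cup", "kitchen")
assert_location("cup", "office")   # supersedes the kitchen fact
```

The point of the constraint is that the second assertion quietly retracts the
first, so a query for the cup's location returns only one answer.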
They also are teaching the computer about itself. "It has to understand that
it is a program," says a scientist. "It needs to know that a human being is
watching it."
MCC, which began its high-stakes research in January 1984, is owned by 21
U.S. companies, including Rockwell International Corp., Honeywell Inc. and
Martin Marietta Corp. MCC's goal is to create a variety of new
computer technologies for the 1990s and beyond - passing along the fruits of
its research to its shareholder companies to give them a head start over
foreign competitors in designing new products and services.
The $65-million-a-year project has resulted in a remarkably high degree of
cooperation between otherwise archrivals. At the Austin headquarters, one-
third of MCC's 410 employees are on loan from the various shareholder firms.
MCC has quickly emerged as one of the new heavyweights of artificial
intelligence (AI), the discipline that has already taught computers to play
chess and to help perform medical diagnoses.
Researchers at MCC and a handful of laboratories are trying to build the
prototype for a fifth-generation computer capable of reasoning its way through
myriad tasks in the home, at the workplace and on the battlefield.
But when asked to explain what makes machines "intelligent," a computer
scientist is likely to talk in circles.
"'Artificial intelligence' is trying to do things we don't know how to do
yet," says Marvin L. Minsky, a pioneer in artificial intelligence at MIT.
"But that's a working definition. It changes every year.
"Twenty years ago, having a machine recognize a picture or play chess or
understand simple language would have been out of reach," he says. "It's sort
of a moving horizon."
Even before the first generation of huge machines powered by vacuum tubes, men
dreamed of building a computer that could mimic human thought. But efforts
over the past 30 years to make such a computer have fallen short.
Powerful, number-crunching computers can analyze vast amounts of data, spit
out amazing mathematical solutions and guide an unmanned probe to the outer
reaches of the solar system. Yet these machines have no inkling of human
goals and beliefs, no sense of the world or their place in it.
Jonathan Slocum, MCC's director of natural language processing, believes that
words are a key to machine intelligence.
His reasoning is simple: A child's ability to learn about the world is closely
tied to his use of words as symbols. Digital computers have no grasp of the
meaning of words or what lies beyond them. And these machines will forever
lack common sense until they are able to communicate with, and learn from,
people.
But what might seem like a straightforward task - teaching English to a
computer by cramming it with grammatical rules, words and definitions - has
proved a monumental endeavor.
"We would be very happy if these machines were as effective as a 4-year-old
child with respect to the grammar," says Slocum.
Home computers can mimic verbal skills by using sentences to display a
problem's solution. But faced with interpreting sentences, advanced computers
- which rely on limited vocabularies of narrowly defined words - break down.
Simple conversation, as it turns out, takes an enormous amount of information
processing at incredibly high speeds.
"We rarely perceive ambiguity in something someone says," says Slocum. But
"almost any sentence you hear a human being utter will be ambiguous."
Depending on the context, the word "ball" in a sentence could mean a dance, a
round object used in sports or a good time. Similarly, a simple sentence
might contain 10 words with three definitions each.
"We don't consciously review all the interpretations. Human beings select
one and go with it almost all the time," Slocum says. "If your confidence [in
your first interpretation] is high, you're not going to stop the speaker. If
your confidence is low, you may stop the speaker and ask whether he meant this
or that."
Slocum is writing a computer program in which his "linguist's intuition" is
encoded in plausibility scores: the mathematical probabilities for the
likelihood that a statement is true.
Dissecting a sentence, his computer program assigns plausibility scores for
the possible meaning of each word, and then applies rules for combining
plausibility factors as it examines each element. Future computers will
recognize, he says, when to accept at face value their first interpretation of a
sentence, when to ask for clarification and when to say, "I'm confused."
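The scheme Slocum describes can be sketched in miniature: give each word's
candidate senses a prior plausibility, combine the scores across a sentence,
and let thresholds decide whether to accept, ask for clarification or give up.
The senses, scores and thresholds below are illustrative inventions, not
MCC's actual program.

```python
# Toy word-sense disambiguation with plausibility scores, loosely
# modeled on the approach described above. All numbers are invented.

from itertools import product

# Each ambiguous word maps to candidate senses with a prior plausibility.
SENSES = {
    "ball": {"dance": 0.2, "sport_object": 0.7, "good_time": 0.1},
    "pitch": {"throw": 0.6, "tone": 0.3, "sales_talk": 0.1},
}

def interpretations(words):
    """Enumerate every combination of senses with a combined score."""
    choices = [
        [(w, s, p) for s, p in SENSES.get(w, {"literal": 1.0}).items()]
        for w in words
    ]
    for combo in product(*choices):
        score = 1.0
        for _, _, p in combo:
            score *= p
        yield [(w, s) for w, s, _ in combo], score

def interpret(words, accept=0.3, give_up=0.05):
    """Pick the most plausible reading, or ask for help - mirroring
    'accept at face value', 'ask for clarification', 'I'm confused'."""
    ranked = sorted(interpretations(words), key=lambda x: -x[1])
    best, score = ranked[0]
    if score >= accept:
        return ("accept", best)
    if score >= give_up:
        return ("clarify", best)
    return ("confused", best)
```

With these made-up priors, a sentence containing "ball" and "pitch" is read
confidently in its sporting sense, because the top combination's combined
score clears the acceptance threshold.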
"Four-year-olds are quite good. They know most of the grammar that an adult
does," he says. "They don't know all the grammatical structures that exist in
the language, but they know a great majority of them."
It will take a major scientific breakthrough, he says, for computers to use
metaphors, idioms and similes. After all, how does a literal-minded machine
catch the meaning of phrases such as "cry a river of tears," "kick the bucket"
or "she is like a rose"?
What Slocum's computer program lacks in grammar skills, he hopes to bolster
with a working vocabulary of 20,000 words. Future computer programs, using
complete dictionaries of words and multiple interpretations, will have "vast
proficiency, outstripping any human being," he says.
Meanwhile, MCC's artificial-intelligence team is bringing up its baby by
feeding the computer with more facts about humans, the world and itself.
The computer is a tabula rasa, a blank slate, says Douglas B. Lenat, an
artificial intelligence project director at MCC. "We're bootstrapping it up
to the point where it will be a reasonable student.
"The more you know, the more easily you can learn," he says. "If you start
out a [computer] program that knows next to nothing, it's hard for it to
assimilate new pieces of information.
"But children already know so much about the world that it's very likely that
they'll have something they can hook new experience onto and thereby relate,"
he says.
Future computers, he says, will examine a problem - for e