A report by Ralph Kenyon, EXTRAPOLATOR, Aug 25, 1987
This report provides an overview of artificial intelligence. It describes the state of the field as of July 1987 and explains what the term really means. It covers the sources of the field, its organization into sub-fields, its limitations and problems, successful applications, exciting new directions, and likely near-term applications. Included is a description of topics in the sub-fields of artificial intelligence.
© Copyright 1987 by Ralph E. Kenyon, Jr.
With degrees in Mathematics, Management, Computer Science, and Philosophy, and experience in Intelligence Analysis, Physical and Personnel Security, Nuclear Power, Marine Engineering, Contract Administration, and Systems Design and Programming, Ralph Kenyon is uniquely qualified to provide a broad synthesis across diverse fields.
The term 'artificial intelligence' doesn't mean, to those who use it most, what one might expect from combining the two separate words 'artificial' and 'intelligence'. As a field of academic and economic focus, artificial intelligence (AI) has several sub-fields, which also don't necessarily mean what one might infer from their names.
The earliest sources of artificial intelligence derive from four areas. Efforts to automate mathematical theorem proving flow from the philosophical Logicist tradition. Attempts to translate natural languages derive from military intelligence needs. Vision systems and robotics arise from industrial frugality. Learning systems arise in education.
The field has become diverse, but there remains a focus on six areas. The first five (automated theorem proving, natural language, vision, robotics, and learning systems) follow directly from these sources.
The sixth area, expert systems, arises more recently from efforts to devise more sophisticated computer programs for dealing with specialized areas of knowledge.
Most recently on the scene, and not yet forming its own niche, is the neural net genre. Many view this technique (in hardware or simulated in serial software) as a panacea which will solve previously intractable problems. Relaxing a network seems to overcome the local-maximum problem and allows solving the traveling salesman problem (NP-complete). However, this initial mystique is unjustified.
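As a rough illustration of what "relaxing" past local optima amounts to, here is a minimal sketch in Python, using simulated annealing on a small traveling salesman instance. It stands in for, and is not, an actual neural net; the city coordinates, cooling schedule, and constants are invented for the example.

```python
import math
import random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(20)]

def tour_length(tour):
    # Total length of the closed tour through all cities.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(cities)))
temp = 1.0
while temp > 1e-3:
    i, j = sorted(random.sample(range(len(cities)), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
    delta = tour_length(candidate) - tour_length(tour)
    # Occasionally accept a worse tour; this temperature-dependent
    # acceptance is what lets the search escape local optima.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        tour = candidate
    temp *= 0.999   # gradual cooling: the system "relaxes"

print(round(tour_length(tour), 3))
```

The temperature-dependent acceptance of worse tours is the "relaxation": early on the system wanders freely, and as it cools it settles into a good, though not guaranteed optimal, configuration.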
Some tension has arisen between the foci, as they have come into being, and the somewhat flashier interpretations of the 'science fiction genre'. Much research is driven by individuals who are sympathetic to these more fanciful wishes.
The reality of current (1987) progress is much less optimistic. People doing work in AI fall into a few specific categories and are working on rather limited areas. However, media hype has created a demand for artificial intelligence, most often without a clear idea of what it is that is desired. The two most salable products available are expert system shells, which allow capturing certain specific types of expertise, and industrial robots.
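For readers unfamiliar with what an expert system shell provides, the following Python sketch shows the forward-chaining core of one. The rules and facts are invented toy examples; a real shell adds a rule editor, an explanation facility, and a far more capable inference engine.

```python
# Each rule pairs a set of conditions with a conclusion to assert.
rules = [
    ({"engine cranks", "no spark"}, "suspect ignition module"),
    ({"suspect ignition module", "module old"}, "replace ignition module"),
]

facts = {"engine cranks", "no spark", "module old"}

changed = True
while changed:                       # fire rules until no new facts appear
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)    # forward chaining: assert the conclusion
            changed = True

print(sorted(facts))
```

The "expertise" lives entirely in the rule base; the shell is the generic machinery that applies it.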
Heavy research in vision systems will pay dividends in the form of improved versatility for industrial robots. In addition, these systems will make possible more general-purpose robots.
Combining the improved sensor capabilities with expert systems and general purpose arms will make possible a generation of robots for sorting and selecting. Applications will include fruit & vegetable sorting and grading, parts selection and sorting, trash separation for recycling, and other kinds of simple sorting and selecting which must currently be done by human beings.
We have already seen the flurry of activity in applying expert systems in a diagnostic capacity. Another product area will be in interfacing expert systems to large data bases of specialized as well as general information, including libraries. A natural adjunct to these expert system front ends on data bases is their use for training new users of the data bases as well as in preliminary training of new experts. How long will it be before someone puts salesmanship expertise into an expert system front end on a large data base of consumer products? -- The unwary inquirer would find himself maneuvered into a buying position.
Dreamed-of capabilities include common-sense conversations in natural language using voice input/output. However, this goal is a lot further away than most people would like to admit. Research at Bell Labs is showing that the spoken language generation problem has many more levels of structure than heretofore imagined. As more and more levels of structure emerge, the question arises of how many more will eventually emerge. And spoken language recognition is even more intractable than generation.
The main problems that need to be solved are an explication of just what "knowledge", "intelligence", "understanding", and "meaning" are. We can't create artificial intelligence unless we know or decide what natural intelligence is. Similarly, we cannot create machines or programs which "understand" unless we know what understanding entails.
The philosophers haven't been able to characterize "knowledge", "intelligence", or "understanding" in 2500 years. We need to "de-mystify" these terms. What is required is a paradigm shift which entails thinking of "knowledge", "intelligence", "understanding", etc., not as "things", but as classifications into which many "things" fit. We do not use these terms univocally. Until we can shift our focus to the level at which the many different things or processes covered by these terms are differentiated, we won't be able to say what they are.
Marvin Minsky's new book "The Society of Mind" presents a new model of intelligence. He proposes a bureaucracy model with agents and agencies all interconnected. This distributed form of intelligence allows many different processes to act from a central agency. If we allow intelligence as a term which encompasses many different processes, then such a model fits well. But we are on a wild homunculus chase when we search for one thing to explain intelligence.
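A loose sketch (my rendering, not Minsky's own formalism) of the agents-and-agencies idea in Python: each agent is a simple, mindless procedure, an agency is an agent built from other agents, and no single component is "the" intelligence.

```python
class Agent:
    def __init__(self, name, action):
        self.name = name
        self.action = action          # a simple, mindless procedure

    def run(self, world):
        return self.action(world)


class Agency(Agent):
    """An agency is itself an agent, composed of smaller agents."""
    def __init__(self, name, agents):
        self.name = name
        self.agents = agents

    def run(self, world):
        # No single sub-agent "is" the intelligence; the behavior
        # emerges from their combined, individually trivial actions.
        for agent in self.agents:
            world = agent.run(world)
        return world


grasp = Agency("grasp", [
    Agent("open-hand", lambda w: w + ["hand opened"]),
    Agent("move-arm", lambda w: w + ["arm moved over block"]),
    Agent("close-hand", lambda w: w + ["hand closed"]),
])
print(grasp.run([]))
```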
There is a need for an overall 'big picture' organizing scheme for activities focusing on intelligence and artificial intelligence. My six-stage model of information processing provides the basic paradigm. Adapting that model to the current state of artificial intelligence requires a characterization of intelligence. First, intelligence is not a single 'thing'; it is at least an information-handling process which has sub-processes, and as such it cannot be said to be a thing.
The level of structure in which intelligence functions can be characterized by referring to sub-processes. Intelligence inheres in the processes of sensing, abstracting, representing, planning, deciding, and acting, all in order to resolve motivations. The AAAI-87 conference provided a thirteen-way classification scheme for AI efforts. The thirteen AAAI classifications (in which there is considerable overlap) relate to the basic sub-processes in intelligence as follows.
Planning maps representations to potential actions in order to resolve motivations. If each of the sub-processes is thought of as a central region with nebulous extensions reaching into the other processes and influencing their structure and function, it can be seen that developments in any one area significantly impact the others. The amorphous nature of this classification suggests that it will be some time before intelligence is fully realized in artificial systems. A viable theory will be heralded by the arrival of a classification scheme which is less intertwined. It is partly a matter of complexity and of finding a lower-level classification scheme which provides greater structural definition. We need a "periodic table" of "elements" that combine in compound, but regular, ways to form the polymer "intelligence", and we need to stop asking "but which one is the rubber atom?" To change the metaphor, we are mapping out much of the geography of the country of intelligence, but we must not equate the country with one of its provinces. We need to discover the infrastructure of the country, and note that the country accomplishes things by using and modifying its infrastructure.
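To make the six sub-processes concrete, here is a schematic Python sketch of them composed into a loop that resolves a motivation. Every stage body is an empty placeholder of my own invention, since the report deliberately leaves the internals open.

```python
# Each stage is a stub; only the composition is the point.
def sense(world):        return {"raw": world}
def abstract(percepts):  return {"features": percepts["raw"]}
def represent(features): return {"model": features["features"]}
def plan(model, goal):   return ["act toward " + goal]   # representations -> potential actions
def decide(plans):       return plans[0]
def act(choice):         return choice

def resolve(world, motivation):
    # sensing -> abstracting -> representing -> planning -> deciding -> acting
    model = represent(abstract(sense(world)))
    return act(decide(plan(model, motivation)))

print(resolve("camera pixels", "pick up the part"))
```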
Where we go from here will depend upon what are seen as possibilities, and that depends upon the paradigm implicit in our metaphysics. We have long been driven by the search for "essences", which, when applied to intelligence, leads directly to the Cartesian homunculus and infinite regress. It is clear that we need to look for structure and function, and not equate intelligence with any particular sub-structure.
One of the latest emerging views in philosophy, evolutionary epistemology, would have it that there really is no intelligence per se. The mechanism of evolutionary epistemology is quite simple; it includes variation and selective retention. Loosely, this means that different methods are tried, the ones that work are retained, and those that do not work are discarded from the repertoire. Darwin's revolution applies not only to the evolution of species and to the growth of scientific ideas, but also to the very workings of our individual minds. We "brainstorm" to come up with different ideas (variation) and keep those that seem to work (selective retention). Of course, some of the methods which have been retained for long periods of time include classical logic, set theory, law, etc. We can think of intelligence as an eclectic combination of methods that have worked in the past. The combination is dynamic in that new methods are being added and old ones are occasionally dropped. Part of intelligence includes as-yet-unspecified methods for deciding which methods map to which problems in which contexts for what purposes.
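The variation-and-selective-retention mechanism can be sketched in a few lines of Python. The bit-string task below is an invented toy, chosen only to show the loop: generate variants, keep those that work, discard the rest.

```python
import random

random.seed(1)
target = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # stands in for "a method that works"
method = [random.randint(0, 1) for _ in target]

def fitness(candidate):
    # How well the current method "works" on the toy task.
    return sum(a == b for a, b in zip(candidate, target))

for _ in range(200):
    variant = method[:]
    variant[random.randrange(len(variant))] ^= 1   # variation: try something new
    if fitness(variant) >= fitness(method):        # selective retention
        method = variant

print(method == target, fitness(method))
```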
The really exciting items include work on a visual graphics editor, a robot navigation system that builds internal maps based upon seen landmarks, and an implementation of Piagetian-style learning.
The thirteen classifications of AAAI-87 and their topics:
AI Architectures:
Butterfly Lisp; Concurrent Common Lisp; parallel resolution
based on connection graph; generalized Blackboard; forward
chaining logic; goal directed reasoning in blackboards;
multiprocessor parallel production system matching;
conflict set support in production system matching;
syntactically uniform access to heterogeneous knowledge bases;
concurrent, controllable constraint systems; non-deterministic
LISP with dependency-directed backtracking.
AI & Education:
Intelligent tutoring with misconception models applied to
psychotherapy; intelligent tutoring applied to satellite
ground tracks; student modeling as plan inferencing;
need for community memory for multiple experts.
Automated Reasoning:
Heuristic evaluation function for two player games; syntactic
analogies between proofs with second order pattern matching;
comparing minimax and product backup rules; removing redundancy
in constraint networks; cost-benefit assessment in blackboard
environments; interpreting sensor data with probabilistic type
constraints; rule-based systems limited in uncertain reasoning;
inferring formal software specifications from episodic
descriptions; real-time heuristic search; reasoning with
inconsistencies; structural induction with mutual recursion;
synthesizing algorithms with performance constraints;
imperative lisp program synthesis from specifications;
path dissolution is a strongly complete rule of inference;
revised dependency directed backtracking for default reasoning;
efficiency analysis of multiple-context TMSs in scene
representation; a parallel implementation of iterative deepening
A*; clause management systems in the foundations of
assumption-based truth maintenance systems.
Planning:
Reasoning about exceptions during plan execution monitoring;
a polynomial time algorithm for incremental causal reasoning
in partially ordered events; reactive action planning in
complex domains; stratified autoepistemic theories;
possible worlds and quantification; simple causal minimizations
for temporal persistence and projection; using goal interaction
to guide planning; plan operators from qualitative process
theory; axioms for time intervals; localized representations
and planning methods for parallel domains; a model for
concurrent actions having temporal extent; consistent labeling
problem in temporal reasoning; the satisfiability of temporal
constraint networks; validating generalized plans with
incomplete information.
Cognitive Modeling:
Implementing a theory of activity; compare and contrast in
legal reasoning; autoassociative module learning in neural
networks; reducing indeterminism in modeling user librarian
interaction; a mechanism for early Piagetian learning; rules
for implicit acquisition of knowledge about users; neural net
approach to case based problem solving with a large knowledge
base; neural network connectivity and propagation applied to
materials handling; asking questions to understand answers;
information retrieval below document level; structure mapping
in analogical processing; goal based generation of motivational
expressions in a learning environment.
Default Reasoning:
Incremental inference in a mixed initiative environment;
augmenting first order logic to implement default reasoning;
annealing in connectionist nets to implement counterfactual
reasoning; inheritance hierarchies with exceptions; semantic
networks with multiple inheritance and exceptions; a formalism
describing circumscription policies by axioms included with
the knowledge base axioms; causality in formal reasoning;
representing dependencies by directed graphs; default
reasoning by belief revision; default reasoning by partially
ordered theories.
Knowledge Representation:
Goal/subgoal plan representation for real-time process
monitoring; partial compilation of declarative knowledge into
procedures; representing databases in frames; intention as
choice plus commitment; manipulating knowledge as taxonomic
representation schemes; complexity in classificatory reasoning;
a logic of belief, with semantics, for non-monotonic reasoning; algorithm synthesis through problem reformulation using
generic designs as parameterized theories; using truth
maintenance systems to cure anomalous extensions in non-monotonic
logics; semantically sound inheritance for a formally defined
frame language with defaults; assimilation as a strategy for
implementing self-reorganizing knowledge bases.
Machine Learning & Knowledge Acquisition:
Learning to control a dynamic physical system; improved
inference through conceptual clustering; learning conjunctive
concepts in structural domains; comparing knowledge engineering
to decision analysis; formulating concepts according to
purpose in explanation-based learning; defining operationality
for explanation-based learning; interactive expert system
generation using general knowledge about evaluation tasks;
using explanation-based generalization to implement a
prolog interpreter that learns; knowledge level learning by
acquiring general procedures from goal-based experience;
inductive concept learning by reasoning with declarative
formulation of biases; dynamic acquisition of appropriate
representations (concept generalization) to minimize initial
representational bias; generalizing to the Nth case in
explanation-based reasoning; optimizing prediction in
diagnostic decision rules.
Natural Language:
Interpreting clues in restricting arguments and discourse;
principle-based description of grammar in machine translation;
recovering from erroneous inferences; control strategies
for achieving pragmatic goals in language generation by
interpreting inputs; word-order variation in natural language
generation; porting an extensible natural language interface;
inference (interpretation?) in text understanding; acquisition
of conceptual structures for the lexicon; procrastination in
resolving ambiguity; memory-based reasoning applied to
pronunciation; nondestructive graph unification.
Engineering Problem Solving:
Reasoning about fluids via molecular collections; extending
the mathematics of qualitative process theory; using order
of magnitude reasoning for troubleshooting complex analog
circuits; explanation based failure recovery in systems
with partially compiled operators; generating function from
shape and motion constraints on parts in mechanical devices;
establishing critical hypersurfaces in the parameter product
space for physical processes; time-scale abstraction: a method
for structuring a complex system as a hierarchy of smaller,
interacting equilibrium mechanisms; formalizing reasoning with
orders of magnitude and approximate relations; making partial
choices in constraint reasoning problems; multi-level
resolution of constraint propagation failures by prototype
modification using physical knowledge in a graph of models;
reasoning about discontinuous change; a system for hierarchical
reasoning about inequalities; analyzing dynamic systems
describable by finite sets of ordinary differential equations;
probabilistic semantics for qualitative influences; extracting qualitative dynamics from numerical experiments using phase
space.
Robotics:
Intelligent task automation integrating task planning, path
planning, vision and robotics for performing autonomous
manufacturing tasks in dynamic, unstructured environments;
reactive reasoning and planning in autonomous mobile robots
including belief, desire, and intention; graphics editor using
visual grammars for visual languages; space representation and
use by landmark-based path planning and following; insertions
using geometric analysis and hybrid force-position control.
Vision:
Sensitivity of motion and structure computation; linking
high and low level image understanding techniques for shape
extraction using generic geometric models; developing general
techniques for automated mapping and photo interpretation tasks;
hypothesis testing in a computational theory of visual word
recognition; integrating multiple shape-from-texture
algorithms; similitude-invariant pattern recognition using
parallel distributed processing; range image interpretation of mail pieces with superquadrics; closed form solution to
the structure-from-motion problem from line correspondences;
bounds on translational and angular velocity components from
first order derivatives of image flow; regularization of
visual data using fractal priors in Bayesian modeling;
energy constraints on deformable models in recovering shape
and non-rigid motions; locating object boundaries using
shadows; color separation by perceptual significance
hierarchy; detecting, tracking, and locating 3-D line segments
in a mobile robot vision system.
Expert Systems:
Data validation during diagnosis using expectations;
incomplete knowledge (prospective) reasoning; learning by
deriving symptom-fault associations in diagnostic environments;
planning machining process for numerical control cutting
machines from drawings; multiple representation approach to
understanding the time behavior of digital circuits;
integrating multiple expert systems and databases in computer
aided engineering; automated reasoning for providing real
time advice in process operations; TEST, an application shell
that provides a domain-independent diagnostic problem solver
with a library of schematic prototypes; script-based reasoning
for situation monitoring of complex activity; coping with the
problems of a very large rule-base; design as top-down
refinement plus constraint propagation.