General Machine Intelligence (GMI)
My focus is on how a mind can be implemented in an artificial substrate – how we can build machines that think about, and understand, the world around them.
Natural intelligence is the result of multiple systems and subsystems implementing complex information flow and control, which together produce learning, reasoning, intuition, attention, insight, creativity, and understanding. How can we implement such a system in a machine? Our artificial general intelligence (AGI) work focuses on how the architecture of a mind works as a whole. By building working, running models implemented in software we aim both to understand the mind and to build a practical AGI system.
To this end my team and I developed the AERA system. AERA demonstrates numerous operational features necessary to achieve AGI: domain-independent learning, cumulative incremental learning, transfer learning, time-sensitive resource management, and life-long scalability. The system is currently being used by my collaborators and me to study machine understanding, teaching methodologies for artificial learners, and even the development of ethical values. You can read about AERA in my numerous publications — some of which have received best paper awards — and a 56-page technical report.
Con·struct·iv·ist AI: Self-constructive artificial intelligence systems with general knowledge acquisition and integration skills. Such systems are capable of architectural self-modification and self-directed growth; they develop from a seed specification, and are capable of learning to perceive, think and act in a wide range of novel situations and domains, and of learning to perform a number of different tasks.
Not to be confused with:
Con·struct·ion·ist AI: Artificially intelligent man-made systems built by hand; learning is restricted to combining predefined situations and tasks, based on detailed specifications provided by a human programmer. While such a system may automatically improve its performance in some limited domain, the domain itself is decided and defined by the programmer.
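To make the contrast concrete, here is a deliberately simple, hypothetical sketch in Python (it is not AERA and not any published architecture; all names in it are invented for illustration): a constructionist agent whose competence is an enumerated table of programmer-written handlers, next to a constructivist-style agent that starts from a small seed rule and acquires its own situation-action knowledge from experience.

    # Toy contrast only -- hypothetical, not AERA or any published system.

    # Constructionist style: every situation the system can handle is hand-coded.
    HANDLERS = {
        "greeting": lambda text: "hello",
        "farewell": lambda text: "goodbye",
    }

    def constructionist_agent(situation, text):
        # Competence is limited to situations the programmer anticipated.
        handler = HANDLERS.get(situation)
        return handler(text) if handler else "no handler defined"

    # Constructivist style, schematically: the programmer supplies only a small
    # seed principle; situation-action knowledge accumulates inside the system.
    class SeededAgent:
        def __init__(self):
            self.rules = {}  # learned situation -> action map, grows with experience

        def observe(self, situation, action, outcome_was_good):
            if outcome_was_good:          # seed principle: keep what worked
                self.rules[situation] = action

        def act(self, situation):
            return self.rules.get(situation, "explore")

The only point of the sketch is where the knowledge lives: in the first agent it is fixed in the programmer's table; in the second it accumulates inside the running system, which is the property the constructivist programme tries to push much further.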
Where Does Intelligence Come From?
The evidence gathered so far on the nature of intelligence makes it highly unlikely that a mind arises from a single or simple principle. Even a small set of key principles seems unlikely; after all, if it takes a myriad of closely coordinated mechanisms for an automobile engine to run, why should a mind be any different? At the level of the brain a mind results from interactions among a vast number of components, hooked up in complex, clever ways according to largely unknown principles. Although an artificial mind might be built on different principles than neurons, the key operating principles responsible for producing human-like thought are still likely to count in the dozens if not hundreds. This means that if we want to build very smart machines, rivaling the human mind, we need to build more integrated and complete systems than have been achieved to date, demonstrating a large number of operating principles. The mind is a system, and my research to date indicates that its operation needs to be captured holistically to achieve truly intelligent machines.
My approach has followed two main traditions in systems thinking. On the one hand is the familiar modular decomposition from cognitive science and software development. Modularization, of which object-orientation is one expression, is the most accepted method at present for constructing complex software systems by hand, including AI systems – what I call constructionist AI. Unfortunately for the field of AI, this method has severe limitations in the size of the systems that can be built, but until recently there really wasn't a viable alternative available. There is now; keep reading.
Have you ever seen a child take apart a favorite toy? Did you then see the little one cry after realizing he could not put all the pieces back together again? Well, here is a secret that never makes the headlines: We have taken apart the universe and have no idea how to put it back together. After spending trillions of research dollars to disassemble nature in the last century, we are just now acknowledging that we have no clue how to continue - except to take it apart further.
— Albert-László Barabási, Linked: The New Science of Networks
As the proponents of the holistic systems approach have pointed out (e.g. Varela, Maturana, Simon), many complex systems have the elusive property that local interactions between their parts are not sufficient to explain, understand or predict the operation of the whole system of which they are part. Software methodologies employing traditional modular decomposition will not be sufficient to allow us to construct such systems in the lab. If we are ever to see generally intelligent artificial systems we must look towards methodologies that more directly allow us to model and study complex phenomena, calling for an investigation of the principles of self-organization and meta-control. In short, we must employ methods that allow the system to develop on its own, through self-constructive principles. This is constructivist AI.
A new constructivist AI methodology was the subject of my keynote speech at the 2009 AAAI Fall Symposium on Biologically-Inspired Cognitive Architectures, and of much of my writing in the past years, summarized in my transparently titled paper A New Constructivist AI: From Manual Methods to Self-Constructive Systems (2012).
Constructivist AI
The manual construction process employed in standard software development will not be sufficient to construct the kinds of complex architectures that we require for general intelligence: architectures that can acquire their own knowledge and grow on their own, without the constant need for re-design. For this our focus must shift towards systems that can program themselves. Without self-programming principles in hand it is unlikely that we will see systems with architecture-wide integration of learning, attention, analogy-making and system growth – i.e. artificial general intelligence. Over the past decade we have managed to take significant steps in this direction.
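To give the idea of self-programming a little more shape, here is a minimal sketch under my own simplifying assumptions (it is not AERA's mechanism, nor that of any of the papers listed below): the system generates small candidate rules (tiny programs) from a fixed seed, tests them against its own experience, and keeps only those that survive.

    import random

    # Hypothetical generate-and-test sketch of self-programming: the system
    # proposes tiny executable hypotheses and keeps those consistent with
    # experience. Illustrative only -- not a description of AERA.

    def candidate_rules():
        # Seed: a space of small programs the system can propose on its own.
        return [lambda x, k=k: (x + k) % 5 for k in range(5)]

    def self_program(experience):
        # Keep only the candidate rules consistent with everything observed so far.
        return [rule for rule in candidate_rules()
                if all(rule(x) == y for x, y in experience)]

    if __name__ == "__main__":
        # Unknown environment the system must model: y = (x + 3) % 5.
        env = lambda x: (x + 3) % 5
        experience = [(x, env(x)) for x in random.sample(range(5), 3)]
        learned = self_program(experience)
        print(len(learned), "surviving rule(s); rule(1) ->", learned[0](1))

In a real architecture the candidate space would itself have to grow and be managed under time and resource constraints; the sketch only shows the locus of control shifting from the programmer's code to the system's own experience.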
Constructivist Papers
Seed-Programmed Autonomous General Learning
A New Constructivist AI: From Manual Construction to Self-Constructive Systems
About Understanding
Anytime Bounded Rationality
Bounded Seed-AGI
Autonomous Acquisition of Situated Natural Communication
Bounded Recursive Self-Improvement
Resource-Bounded Machines are Motivated to be Effective, Efficient & Curious
A New Constructivist AI: From Manual Methods to Self-Constructive Systems
Self-Programming: Operationalizing Autonomy
Achieving Artificial General Intelligence Through Peewee Granularity