What is an intelligent agent


CS 7850 Fall 2004
Gheorghe Tecuci
[email protected]
http://lac.gmu.edu/
Learning Agents Center
and Computer Science Department
George Mason University
 2004, G.Tecuci, Learning Agents Center
Overview
Class introduction and course’s objectives
Artificial Intelligence and intelligent agents
Domain for hands-on experience
Knowledge acquisition for agent development
Overview of the course
 2004, G.Tecuci, Learning Agents Center
Course Objectives
Provide an overview of Knowledge Acquisition and Problem
Solving.
Present principles and major methods of knowledge acquisition for
the development of knowledge-based agents that incorporate the
problem solving knowledge of a subject matter expert.
Major topics include: overview of knowledge engineering; analysis
and modeling of the reasoning process of a subject matter expert;
ontology design and development; rule learning; problem solving
and knowledge-base refinement.
The course will emphasize the most recent advances in this area,
such as: agent teaching and learning; mixed-initiative knowledge
base refinement; knowledge reuse; frontier research problems.
 2004, G.Tecuci, Learning Agents Center
Course Objectives (cont)
Link Knowledge Acquisition and Problem Solving
concepts to hands-on applications by building a
knowledge-based agent.
Learn about all the phases of building a knowledge-based agent
and experience them first-hand by using the Disciple agent
development environment to build an intelligent assistant that helps
the students to choose a Ph.D. Dissertation Advisor.
Disciple has been developed in the Learning Agents Center of
George Mason University and has been successfully used to build
knowledge-based agents for a variety of problem areas, including:
planning the repair of damaged bridges and roads; critiquing
military courses of action; determining strategic centers of gravity in
military conflicts; generating test questions for higher-order thinking
skills in history and statistics.
 2004, G.Tecuci, Learning Agents Center
Course organization and grading policy
Course organization
The classes will consist of:
- a theoretical recitation part where the instructor will present and
discuss the various methods and phases of building a knowledge-based agent;
- a practical laboratory part where the students will apply this knowledge
to specify, design and develop the Ph.D. selection advisor.
Regular assignments will consist of incremental developments of the
Ph.D. selection advisor which will be presented to the class.
Grading Policy
- Exam, covering the theoretical aspects presented – 50%
- Assignments, consisting of lab participation and the contribution to the
development of the Ph.D. selection advisor – 50%
 2004, G.Tecuci, Learning Agents Center
Readings
Lecture notes provided by the instructor (required).
Tecuci G., Building Intelligent Agents: An
Apprenticeship Multistrategy Learning Theory,
Methodology, Tool and Case Studies, Academic Press,
1998 (recommended).
Additional papers recommended by the instructor.
 2004, G.Tecuci, Learning Agents Center
Overview
Class introduction and course’s objectives
Artificial Intelligence and intelligent agents
Domain for hands-on experience
Knowledge acquisition for agent development
Overview of the course
 2004, G.Tecuci, Learning Agents Center
Artificial Intelligence and intelligent agents
What is Artificial Intelligence
What is an intelligent agent
Characteristic features of intelligent agents
Sample tasks for intelligent agents
Why are intelligent agents important
 2004, G.Tecuci, Learning Agents Center
What is Artificial Intelligence
 2004, G.Tecuci, Learning Agents Center
Central goals of Artificial Intelligence
Understand the principles that make intelligence possible
(in humans, animals, and artificial agents)
Developing intelligent machines or agents
(no matter whether they operate as humans or not)
Formalizing knowledge and mechanizing reasoning
in all areas of human endeavor
Making working with computers
as easy as working with people
Developing human-machine systems that exploit the
complementarity of human and automated reasoning
 2004, G.Tecuci, Learning Agents Center
Artificial Intelligence and intelligent agents
What is Artificial Intelligence
What is an intelligent agent
Characteristic features of intelligent agents
Sample tasks for intelligent agents
Why are intelligent agents important
 2004, G.Tecuci, Learning Agents Center
What is an intelligent agent
An intelligent agent is a system that:
• perceives its environment (which may be the physical
world, a user via a graphical user interface, a collection of
other agents, the Internet, or other complex environment);
• reasons to interpret perceptions, draw inferences, solve
problems, and determine actions; and
• acts upon that environment to realize a set of goals or
tasks for which it was designed.
[Figure: the Intelligent Agent receives input from the user/environment through its sensors and acts back on the user/environment through its effectors, producing output.]
What is an intelligent agent (cont.)
Humans, with multiple, conflicting drives, multiple
senses, multiple possible actions, and complex
sophisticated control structures, are at the highest end of
being an agent.
At the low end of being an agent is a thermostat.
It continuously senses the room temperature, starting or
stopping the heating system each time the current
temperature is out of a pre-defined range.
The intelligent agents we are concerned with are in
between. They are clearly not as capable as humans, but
they are significantly more capable than a thermostat.
 2004, G.Tecuci, Learning Agents Center
What is an intelligent agent (cont.)
An intelligent agent interacts with a human or some other
agents via some kind of agent-communication language
and may not blindly obey commands, but may have the
ability to modify requests, ask clarification questions, or
even refuse to satisfy certain requests.
It can accept high-level requests indicating what the user
wants and can decide how to satisfy each request with
some degree of independence or autonomy, exhibiting
goal-directed behavior and dynamically choosing which
actions to take, and in what sequence.
 2004, G.Tecuci, Learning Agents Center
What an intelligent agent can do
An intelligent agent can:
• collaborate with its user to improve the accomplishment of
his or her tasks;
• carry out tasks on the user's behalf and, in so doing, employ
some knowledge of the user's goals or desires;
• monitor events or procedures for the user;
• advise the user on how to perform a task;
• train or teach the user;
• help different users collaborate.
 2004, G.Tecuci, Learning Agents Center
Artificial Intelligence and intelligent agents
What is Artificial Intelligence
What is an intelligent agent
Characteristic features of intelligent agents
Sample tasks for intelligent agents
Why are intelligent agents important
 2004, G.Tecuci, Learning Agents Center
Knowledge representation and reasoning
An intelligent agent contains an internal representation of its external
application domain, where relevant elements of the application
domain (objects, relations, classes, laws, actions) are represented
as symbolic expressions.
[Figure: the Model of the Domain represents the Application Domain.]
Ontology fragment: BOOK, CUP and TABLE are subclasses of OBJECT; the instances CUP1, BOOK1 and TABLE1 are related by CUP1 ON BOOK1 and BOOK1 ON TABLE1.
Rule: If an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object.
∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z)
Inference: (cup1 on book1) & (book1 on table1) → (cup1 on table1)
This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model and transferring the conclusions back into the application domain.
 2004,
G.Tecuci, Learning Agents
Center
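The ontology fragment and the transitivity rule above can be sketched in code. This is an illustrative Python sketch, not part of the course materials; representing facts as (x, "ON", y) triples is an assumption:

```python
# Forward chaining of the slide's transitivity rule
#   (ON x y) & (ON y z) -> (ON x z)
# over a small fact base, repeated until no new facts appear (fixpoint).
facts = {("cup1", "ON", "book1"), ("book1", "ON", "table1")}

def apply_transitivity(facts):
    """Derive all ON facts implied by the transitivity rule."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        # Join every pair of facts (x ON y) and (y ON z) into (x ON z).
        new = {(x, "ON", z)
               for (x, _, y1) in facts for (y2, _, z) in facts
               if y1 == y2 and (x, "ON", z) not in facts}
        if new:
            facts |= new
            changed = True
    return facts

print(("cup1", "ON", "table1") in apply_transitivity(facts))  # True
```

The derived fact (cup1 ON table1) is exactly the inference shown on the slide: the reasoning happens in the domain model and its conclusion is transferred back to the application domain.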
Basic agent architecture
The problem solving engine implements a general method of interpreting the input problem based on the knowledge from the knowledge base.
[Figure: the user/environment provides input through sensors to the Intelligent Agent; its Problem Solving Engine consults the Knowledge Base (an ontology, e.g. BOOK, CUP and TABLE as subclasses of OBJECT, plus rules/cases such as ∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z)) and produces output through effectors.]
The knowledge base contains data structures that represent the objects from the application domain, the general laws governing them, the actions that can be performed with them, etc.
 2004, G.Tecuci, Learning Agents Center
Transparency and explanations
The knowledge possessed by the agent and its reasoning
processes should be understandable to humans.
The agent should have the ability to give explanations of
its behavior, what decisions it is making and why.
Without transparency it would be very difficult to accept,
for instance, a medical diagnosis performed by an
intelligent agent.
The need for transparency shows that the main goal of
artificial intelligence is to enhance human capabilities and
not to replace human activity.
 2004, G.Tecuci, Learning Agents Center
Ability to communicate
An agent should be able to communicate with its users
or other agents.
The communication language should be as natural to
the human users as possible. Ideally, it should be unrestricted
natural language.
The problem of natural language understanding and
generation is very difficult due to the ambiguity of words
and sentences, and to the paraphrases, ellipses, and
references used in human communication.
 2004, G.Tecuci, Learning Agents Center
Use of huge amounts of knowledge
In order to solve "real-world" problems, an intelligent agent
needs a huge amount of domain knowledge in its memory
(knowledge base).
Example of human-agent dialog:
User: The toolbox is locked.
Agent: The key is in the drawer.
In order to understand such sentences and to respond
adequately, the agent needs to have a lot of knowledge
about the user, including the goals the user might want to
achieve.
 2004, G.Tecuci, Learning Agents Center
Use of huge amounts of knowledge (example)
User:
The toolbox is locked.
Agent:
Why is he telling me this?
I already know that the box is locked.
I know he needs to get in.
Perhaps he is telling me because he believes I can help.
To get in requires a key.
He knows it and he knows I know it.
The key is in the drawer.
If he knew this, he would not tell me that the toolbox is locked.
So he must not realize it.
To make him know it, I can tell him.
I am supposed to help him.
The key is in the drawer.
 2004, G.Tecuci, Learning Agents Center
Exploration of huge search spaces
An intelligent agent usually needs to search huge spaces
in order to find solutions to problems.
Example: A search agent on the Internet.
 2004, G.Tecuci, Learning Agents Center
Use of heuristics
Intelligent agents generally attack problems for which
no algorithm is known or feasible, problems that require
heuristic methods.
A heuristic is a rule of thumb, strategy, trick, simplification,
or any other kind of device which drastically limits the
search for solutions in large problem spaces.
Heuristics do not guarantee optimal solutions. In fact they
do not guarantee any solution at all.
A useful heuristic is one that offers solutions which are good
enough most of the time.
 2004, G.Tecuci, Learning Agents Center
Reasoning with incomplete or conflicting data
The ability to provide some solution even if not all the
data relevant to the problem is available at the time a
solution is required.
Examples:
The reasoning of a physician in an intensive care unit.
Planning a military course of action.
The ability to take into account data items that are more
or less in contradiction with one another (conflicting
data or data corrupted by errors).
Example:
The reasoning of a military intelligence analyst that has
to cope with the deception actions of the enemy.
 2004, G.Tecuci, Learning Agents Center
Ability to learn
The ability to improve its competence and performance.
An agent is improving its competence if it learns to solve
a broader class of problems, and to make fewer
mistakes in problem solving.
An agent is improving its performance if it learns to solve
more efficiently (for instance, by using less time or space
resources) the problems from its area of competence.
 2004, G.Tecuci, Learning Agents Center
Extended agent architecture
The learning engine implements methods
for extending and refining the knowledge
in the knowledge base.
[Figure: the extended architecture adds a Learning Engine alongside the Problem Solving Engine; both operate on the Knowledge Base (ontology and rules/cases/methods) between the input sensors and the output effectors.]
 2004, G.Tecuci, Learning Agents Center
Artificial Intelligence and intelligent agents
What is Artificial Intelligence
What is an intelligent agent
Characteristic features of intelligent agents
Sample tasks for intelligent agents
Why are intelligent agents important
 2004, G.Tecuci, Learning Agents Center
Sample tasks for intelligent agents
Planning: Finding a set of actions that achieve a certain goal.
Example: Determine the actions that need to be performed in order to
repair a bridge.
Critiquing: Expressing judgments about something according to certain
standards.
Example: Critiquing a military course of action (or plan) based on the
principles of war and the tenets of army operations.
Interpretation: Inferring situation description from sensory data.
Example: Interpreting gauge readings in a chemical process plant to infer
the status of the process.
 2004, G.Tecuci, Learning Agents Center
Sample tasks for intelligent agents (cont.)
Prediction: Inferring likely consequences of given situations.
Examples:
Predicting the damage to crops from some type of insect.
Estimating global oil demand from the current geopolitical world situation.
Diagnosis: Inferring system malfunctions from observables.
Examples:
Determining the disease of a patient from the observed symptoms.
Locating faults in electrical circuits.
Finding defective components in the cooling system of nuclear reactors.
Design: Configuring objects under constraints.
Example: Designing integrated circuits layouts.
 2004, G.Tecuci, Learning Agents Center
Sample tasks for intelligent agents (cont.)
Monitoring: Comparing observations to expected outcomes.
Examples:
Monitoring instrument readings in a nuclear reactor to detect accident
conditions.
Assisting patients in an intensive care unit by analyzing data from the
monitoring equipment.
Debugging: Prescribing remedies for malfunctions.
Examples:
Suggesting how to tune a computer system to reduce a particular type of
performance problem.
Choosing a repair procedure to fix a known malfunction in a locomotive.
Repair: Executing plans to administer prescribed remedies.
Example: Tuning a mass spectrometer, i.e., setting the instrument's
operating controls to achieve optimum sensitivity consistent with correct
peak ratios and shapes.
 2004, G.Tecuci, Learning Agents Center
Sample tasks for intelligent agents (cont.)
Instruction: Diagnosing, debugging, and repairing student behavior.
Examples:
Teaching students a foreign language.
Teaching students to troubleshoot electrical circuits.
Teaching medical students in the area of antimicrobial therapy selection.
Control: Governing overall system behavior.
Example:
Managing the manufacturing and distribution of computer systems.
Any useful task:
Information fusion.
Information assurance.
Travel planning.
Email management.
Help in choosing a Ph.D. Dissertation Advisor.
 2004, G.Tecuci, Learning Agents Center
Artificial Intelligence and intelligent agents
What is Artificial Intelligence
What is an intelligent agent
Characteristic features of intelligent agents
Sample tasks for intelligent agents
Why are intelligent agents important
 2004, G.Tecuci, Learning Agents Center
Why are intelligent agents important
Humans have limitations that agents may alleviate
(e.g., memory for details that is not affected by
stress, fatigue, or time constraints).
Humans and agents could engage in mixed-initiative
problem solving that takes advantage of their
complementary strengths and reasoning styles.
 2004, G.Tecuci, Learning Agents Center
Why are intelligent agents important (cont)
The evolution of information technology makes
intelligent agents essential components of our future
systems and organizations.
Our future computers and most of the other systems
and tools will gradually become intelligent agents.
We have to be able to deal with intelligent agents either
as users, or as developers, or as both.
 2004, G.Tecuci, Learning Agents Center
Intelligent agents: Conclusion
Intelligent agents are systems which can perform
tasks requiring knowledge and heuristic methods.
Intelligent agents are helpful, enabling us to do our
tasks better.
Intelligent agents are necessary to cope with the
increasing complexity of the information society.
 2004, G.Tecuci, Learning Agents Center
Overview
Class introduction and course’s objectives
Artificial Intelligence and intelligent agents
Domain for hands-on experience
Knowledge acquisition for agent development
Overview of the course
 2004, G.Tecuci, Learning Agents Center
Problem: Choosing a Ph.D. Dissertation Advisor
Choosing a Ph.D. Dissertation Advisor is a crucial decision
for a successful dissertation and for one’s future career.
An informed decision requires a lot of knowledge about the
potential advisors.
In this course we will develop an agent that interacts with a
student to help select the best Ph.D. advisor for that
student.
See the project notes: “1. Problem”
 2004, G.Tecuci, Learning Agents Center
Overview
Class introduction and course’s objectives
Artificial Intelligence and intelligent agents
Domain for hands-on experience
Knowledge acquisition for agent development
Overview of the course
 2004, G.Tecuci, Learning Agents Center
Knowledge Acquisition for agent development
Approaches to knowledge acquisition
Disciple approach to agent development
Demo: Agent teaching and learning
Research vision on agent development
 2004, G.Tecuci, Learning Agents Center
How are agents built: Manual knowledge acquisition
[Figure: the Subject Matter Expert dialogs with the Knowledge Engineer, who programs the Knowledge Base of the Intelligent Agent; the agent's Problem Solving Engine generates results that the expert analyzes.]
A knowledge engineer attempts to understand how a subject
matter expert reasons and solves problems and then encodes
the acquired expertise into the agent's knowledge base.
The expert analyzes the solutions generated by the agent
(and often the knowledge base itself) to identify errors, and
the knowledge engineer corrects the knowledge base.
 2004, G.Tecuci, Learning Agents Center
Why it is hard
The knowledge engineer has to become a kind of subject
matter expert in order to properly understand the expert's
problem solving knowledge. This takes time and effort.
Experts express their knowledge informally, using natural
language, visual representations and common sense, often
omitting essential details that are considered obvious. This
form of knowledge is very different from the one in which
knowledge has to be represented in the knowledge base
(which is formal, precise, and complete).
This transfer and transformation of knowledge, from the
domain expert through the knowledge engineer to the agent, is
long, painful and inefficient (and is known as "the knowledge
acquisition bottleneck" of the AI systems development
process).
 2004, G.Tecuci, Learning Agents Center
Mixed-initiative knowledge acquisition
[Figure: the Subject Matter Expert dialogs directly with the Intelligent Learning Agent; the Learning Engine builds knowledge into the Knowledge Base, and the Problem Solving Engine generates results that the expert analyzes.]
The expert teaches the agent how to perform various tasks, in
a way that resembles how an expert would teach a human
apprentice when solving problems in cooperation.
This process is based on mixed-initiative reasoning that
integrates the complementary knowledge and reasoning styles
of the subject matter expert and the agent, and on a division of
responsibility for those elements of knowledge engineering for
which they have the most aptitude, such that together they
form a complete team for knowledge base development.
 2004, G.Tecuci, Learning Agents Center
Mixed-initiative knowledge acquisition (cont.)
This is the most promising approach to overcome the
knowledge acquisition bottleneck.
DARPA’s Rapid Knowledge Formation Program (2000-2004):
Emphasized the development of knowledge bases directly by the
subject matter experts.
Central objective: Enable distributed teams of experts to enter and modify
knowledge directly and easily, without the need for prior knowledge
engineering experience. The emphasis was on content and the means of
rapidly acquiring this content from individuals who possess it, with the
goal of gaining a scientific understanding of how ordinary people can work
with formal representations of knowledge.
Program’s primary requirement: Development of functionality enabling
experts to understand the contents of a knowledge base, enter new
theories, augment and edit existing knowledge, test the adequacy of the
knowledge base under development, receive explanations of theories
contained in the knowledge base, and detect and repair errors in content.
 2004, G.Tecuci, Learning Agents Center
Autonomous knowledge acquisition
[Figure: in the Autonomous Learning Agent, the Learning Engine builds knowledge into the Knowledge Base from the data in a Data Base; the Problem Solving Engine generates the results.]
The learning engine builds the knowledge base from a data
base of facts or examples.
In general, the learned knowledge consists of concepts,
classification rules, or decision trees. The problem solving
engine is a simple one-step inference engine that classifies a
new instance as being, or not being, an example of a learned concept.
Defining the Data Base of examples is a significant challenge.
Current practical applications are limited to classification tasks.
 2004, G.Tecuci, Learning Agents Center
Autonomous knowledge acquisition (cont.)
[Figure: in the Autonomous Language Understanding and Learning Agent, an Understanding Engine extracts data from text; the Learning Engine turns this data into knowledge for the Knowledge Base; the Problem Solving Engine generates the results.]
The knowledge base is built by the learning engine from data
provided by a text understanding engine that is able to
understand textbooks.
In general, the data consists of facts acquired from the books.
This is not yet a practical approach, even for simpler agents.
 2004, G.Tecuci, Learning Agents Center
Knowledge Acquisition for agent development
Approaches to knowledge acquisition
Disciple approach to agent development
Demo: Agent teaching and learning
Research vision on agent development
 2004, G.Tecuci, Learning Agents Center
Disciple approach to agent development
Research Problem: Elaborate a theory, methodology and family of
systems for the development of knowledge-based agents by subject
matter experts, with limited assistance from knowledge engineers.
Approach: Develop a learning agent that can be taught directly by a
subject matter expert while solving problems in cooperation.
[Figure: the Disciple approach combines (1) mixed-initiative problem solving, (2) teaching and learning, and (3) multistrategy learning. Through the interface, the expert teaches the agent how to perform various tasks in a way that resembles how the expert would teach a person, and the agent learns from the expert, building, verifying and improving its knowledge base (ontology + rules) with its problem solving and learning modules.]
Sample Disciple agents
Disciple-WA (1997-1998): Estimates the best plan of working around damage to a transportation infrastructure, such as a damaged bridge or road. Demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert.

Disciple-COA (1998-1999): Identifies strengths and weaknesses in a Course of Action, based on the principles of war and the tenets of Army operations. Demonstrated the generality of its learning methods, which used an object ontology created by another group (TFS/Cycorp), and that a knowledge engineer and a subject matter expert can jointly teach Disciple.

[Example Course of Action analyzed by Disciple-COA, with Mission, Close, Reserve, Security, Deep, Rear, Fires and End State statements: BLUE-BRIGADE2 attacks to penetrate RED-MECH-REGIMENT2 at 130600 Aug in order to enable the completion of seize OBJ-SLAM by BLUE-ARMOR-BRIGADE1; BLUE-TASK-FORCE1, a balanced task force (MAIN EFFORT), attacks to penetrate RED-MECH-COMPANY4; BLUE-TASK-FORCE2 (SUPPORTING EFFORT 1) attacks to fix RED-MECH-COMPANY1, RED-MECH-COMPANY2 and RED-MECH-COMPANY3; etc.]
A Disciple agent for Center of Gravity determination
The center of gravity of an entity (state, alliance,
coalition, or group) is the foundation of capability,
the hub of all power and movement, upon which
everything depends, the point against which all the
energies should be directed.
Carl Von Clausewitz, “On War,” 1832.
If a combatant eliminates or influences the enemy’s
strategic center of gravity, then the enemy will lose
control of its power and resources and will
eventually fall to defeat. If the combatant fails to
adequately protect his own strategic center of
gravity, he invites disaster.
(Giles and Galvin, USAWC 1996).
 2004, G.Tecuci, Learning Agents Center
Synergistic collaboration and transition to the USAWC
George Mason University - US Army War College
Disciple artificial intelligence research at GMU (knowledge bases and agent development by subject matter experts, using learning agent technology, with experiments in the USAWC courses) supported the use of Disciple in a sequence of two joint warfighting courses: 319jw Case Studies in Center of Gravity Analysis, where students developed scenarios, and 589jw Military Applications of Artificial Intelligence, where students developed agents, together with a formalization of the center of gravity determination process.

[Example reduction dialog:]
Identify the strategic COG candidates for the Sicily_1943 scenario.
Which is an opposing force in the Sicily_1943 scenario? Anglo_allies_1943.
Identify the strategic COG candidates for Anglo_allies_1943.
Is Anglo_allies_1943 a single member force or a multi-member force? Anglo_allies_1943 is a multi-member force.
Identify the strategic COG candidates for Anglo_allies_1943, which is a multi-member force.
What type of strategic COG candidates should I consider for a multi-member force? I consider the candidates corresponding to the multi-member nature of the force.
Identify the strategic COG candidates corresponding to the multi-member nature of Anglo_allies_1943.
What type of strategic COG candidates should I consider for the multi-member nature of the force? I consider the relationships between the members of the force, and the type of operations being conducted by the members of the force.
Identify the strategic COG candidates with respect to the type of operations being conducted by the members of Anglo_allies_1943.
Which is the primary force element that will conduct the campaign for Anglo_allies_1943? Allied_forces_operations_Husky.
Identify the strategic COG candidates with respect to the type of operations being conducted by Allied_forces_operations_Husky.
Is Allied_forces_operations_Husky made up of a true single group or are there subgroups? Allied_forces_operations_Husky is made up of several subgroups.
Approach to Center of Gravity (COG) determination
• Based on the concepts of critical capabilities, critical requirements and
critical vulnerabilities, which have recently been adopted into the joint
military doctrine of the USA (Strange, 1996).
• Applied to current war scenarios (e.g. War on terror 2003, Iraq 2003)
with state and non-state actors (e.g. Al Qaeda).

Identification of COG candidates: identify potential primary sources of
moral or physical strength, power and resistance from: government,
military, people, economy, alliances, etc.

Testing of COG candidates: test each identified COG candidate to
determine whether it has all the necessary critical capabilities:
- Which are the critical capabilities?
- Are the critical requirements of these capabilities satisfied?
  If not, eliminate the candidate.
- If yes, do these capabilities have any vulnerability?
Problem Solving Approach: Task Reduction
A complex problem solving task is performed by:
T1
Q1 S1
• successively reducing it to simpler tasks;
• finding the solutions of the simplest tasks;
A11 S11
…
T11a S11a T11bS11b
…
Q11b S11b
A1n S
1n
T1n
• successively composing these solutions until
the solution to the initial task is obtained.
…Test whether President Roosevelt is
President Roosevelt is a strategic COG
candidate that can be eliminated
a viable strategic COG candidate
Which are the critical capabilities that President Roosevelt should have to be a COG candidate?
A11b1 S11b1… A11bm S11bm
T11b1
T11bm
Knowledge Base
Object Ontology
Reduction Rules
Composition Rules
 2004, G.Tecuci, Learning Agents Center
Does President Roosevelt have all
the necessary critical capabilities?
The necessary critical capabilities are: be protected, stay informed, communicate,
be influential, be a driving force, have support and be irreplaceable
Test whether President
Roosevelt has the critical
capability to be protected
President Roosevelt has the critical capability to be protected. President Roosevelt is
protected by US Service 1943 which has no significant vulnerability
Test whether President
Roosevelt has the critical
capability to stay informed
President Roosevelt has the critical capability to stay informed. President Roosevelt
receives essential intelligence from intelligence agencies which have no significant
vulnerability
Test whether President
Roosevelt has the critical
capability to communicate
President Roosevelt has the critical capability to communicate through executive orders,
through military orders, and through the Mass Media of US 1943. These communication
means have no significant vulnerabilities
Test whether President
Roosevelt has the critical
capability to be influential
President Roosevelt has the critical capability to be influential because he is the head of the
government of US 1943, the commander in chief of the military of US 1943, and is a trusted
leader who can use the Mass Media of US 1943. These influence means have no
significant vulnerabilities.
Test whether President
Roosevelt has the critical
capability to be a driving force
President Roosevelt has the critical capability to be a driving force. The main reason for
President Roosevelt to pursue the goal of unconditional surrender of European Axis is
“preventing separate peace by the members of the Allied Forces”. Also, “the western
democratic values” provides President Roosevelt with determination to persevere in this
goal. There is no significant vulnerability in the reason and determination.
Test whether President
Roosevelt has the critical
capability to have support
President Roosevelt has the critical capability to have support because he is the head of a
democratic government with a history of good decisions, a trusted commander in chief of
the military, and the people are willing to make sacrifices for unconditional surrender of
European Axis. The means to secure continuous support have no significant vulnerability.
No. President Roosevelt does not have the critical capability to be irreplaceable. US 1943 would …
Problem Solving and Learning

EXAMPLE OF REASONING STEP
We need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Question: Which is a member of Allied_Forces_1943?
Answer: US_1943
Therefore we need to: Identify and test a strategic COG candidate for US_1943

[Ontology fragment: force, multi member force, single member force, multi group force, single group force, multi state force, multi state alliance, multi state coalition, dominant partner multi state alliance, equal partners multi state alliance, Allied Forces 1943, ...]
LEARNED RULE

INFORMAL STRUCTURE
IF
Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1 ?
Answer: ?O2
THEN
Identify and test a strategic COG candidate for ?O2
 2004, G.Tecuci, Learning Agents Center
[Ontology fragment: Allied Forces 1943 has as member US 1943, a single state force]
FORMAL STRUCTURE
IF
Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
Plausible Upper Bound Condition
  ?O1 is multi_member_force
      has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition
  ?O1 is equal_partners_multi_state_alliance
      has_as_member ?O2
  ?O2 is single_state_force
THEN
Identify and test a strategic COG candidate for a force
  The force is ?O2
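The two conditions of the formal rule above bound a version space: an instantiation covered by the plausible lower bound is considered correct, one outside the plausible upper bound is rejected, and one falling between the bounds is uncertain and calls for further interaction with the expert. A minimal sketch of such a check, over a toy subsumption hierarchy consistent with the ontology fragment (all names here are illustrative, not Disciple's actual representation):

```python
# Toy ontology: concept -> direct superconcepts (consistent with the fragment above)
ISA = {
    "equal_partners_multi_state_alliance": {"multi_state_alliance"},
    "dominant_partner_multi_state_alliance": {"multi_state_alliance"},
    "multi_state_alliance": {"multi_state_force"},
    "multi_state_force": {"multi_member_force"},
    "multi_member_force": {"force"},
    "single_state_force": {"single_member_force"},
    "single_member_force": {"force"},
}

def is_a(concept, ancestor):
    """True if `concept` equals or transitively specializes `ancestor`."""
    if concept == ancestor:
        return True
    return any(is_a(parent, ancestor) for parent in ISA.get(concept, ()))

def classify(o1_type, o2_type):
    """Classify a candidate rule application against the two bounds."""
    in_upper = is_a(o1_type, "multi_member_force") and is_a(o2_type, "force")
    in_lower = (is_a(o1_type, "equal_partners_multi_state_alliance")
                and is_a(o2_type, "single_state_force"))
    if in_lower:
        return "plausibly correct"    # covered by the lower bound
    if in_upper:
        return "uncertain"            # between the bounds: ask the expert
    return "plausibly incorrect"      # outside the upper bound

# Allied_Forces_1943 / US_1943 matches the lower bound:
print(classify("equal_partners_multi_state_alliance", "single_state_force"))
# -> plausibly correct
# A dominant-partner alliance with a multi-state member falls between the bounds:
print(classify("dominant_partner_multi_state_alliance", "multi_state_force"))
# -> uncertain
```

As the expert accepts or rejects examples, the lower bound is generalized and the upper bound specialized until the two converge on the exact applicability condition.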
Use of Disciple at the US Army War College
319jw Case Studies in Center of Gravity Analysis
Disciple was taught based on the expertise of Prof. Comello in center of gravity analysis. Disciple helps the students to perform a center of gravity analysis of an assigned war scenario.
[Diagram: Teaching and Learning build the Disciple Agent KB, which supports Problem solving]
Global evaluations of Disciple by officers from the Spring 03 course
[Bar charts of agreement ratings, Strongly Agree to Strongly Disagree, for three statements:
"Disciple should be used in future versions of this course"
"Disciple helped me to learn to perform a strategic COG analysis of a scenario"
"The use of Disciple is an assignment that is well suited to the course's learning objectives"]
Use of Disciple at the US Army War College
589jw Military Applications of Artificial Intelligence course
Students teach Disciple their COG analysis expertise, using sample scenarios (Iraq 2003, War on terror 2003, Arab-Israeli 1973). Students then test the trained Disciple agent on a new scenario (North Korea 2003).
Global evaluations of Disciple by officers during three experiments
Statement rated: "I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer"
[Bar charts of agreement ratings, Strongly Agree to Strongly Disagree, for each experiment:
Spring 2001: COG identification
Spring 2002: COG identification and testing
Spring 2003: COG testing based on critical capabilities]
Parallel development and merging of knowledge bases

Initial KB (domain analysis and ontology development, KE+SME): 432 concepts and features, 29 tasks, 18 rules, for COG identification for leaders.
Extended KB (Knowledge Engineer (KE) with all subject matter experts (SME)): 37 acquired concepts and features for COG testing.
Parallel KB development (SME assisted by KE), training scenarios: Iraq 2003, Arab-Israeli 1973, War on Terror 2003:
DISCIPLE-COG Team 1 (stay informed, be irreplaceable): 5 features, 10 tasks, 10 rules
DISCIPLE-COG Team 2 (communicate): 14 tasks, 14 rules
DISCIPLE-COG Team 3 (be influential): 2 features, 19 tasks, 19 rules
DISCIPLE-COG Team 4 (have support): 35 tasks, 33 rules
DISCIPLE-COG Team 5 (be protected, be driving force): 3 features, 24 tasks, 23 rules
5h 28min average training time per team; 3.53 average rule learning rate per team.
KB merging (KE): unified 2 features, deleted 4 rules, refined 12 rules.
Final KB (learned features, tasks, and rules integrated into DISCIPLE-COG):
+9 features → 478 concepts and features
+105 tasks → 134 tasks
+95 rules → 113 rules
Correctness = 98.15%
Testing (COG identification and testing for leaders), testing scenario: North Korea 2003.
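The KB merging step recorded above (features unified and rules deleted or refined when teams independently learned overlapping knowledge) can be sketched as a union with duplicate detection. The data and names below are purely illustrative; the actual merging was performed by the knowledge engineer.

```python
def merge_kbs(team_kbs):
    """Union team knowledge bases, flagging features defined by several teams.
    Each KB is a dict with 'features' and 'rules' (names only, for this sketch)."""
    merged = {"features": set(), "rules": set()}
    to_unify = set()
    for kb in team_kbs:
        # A feature already present was learned independently by another
        # team and must be unified by the knowledge engineer.
        to_unify |= merged["features"] & set(kb["features"])
        merged["features"] |= set(kb["features"])
        merged["rules"] |= set(kb["rules"])
    return merged, to_unify

# Hypothetical team KBs with one overlapping feature
team1 = {"features": ["has_as_protector"], "rules": ["R1", "R2"]}
team3 = {"features": ["has_as_protector", "controls_media"], "rules": ["R3"]}
merged, to_unify = merge_kbs([team1, team3])
print(sorted(to_unify))  # -> ['has_as_protector']
```

Rule deletion and refinement require deeper, semantic comparison of rule conditions, which is why the merge above reports duplicates for a human rather than resolving them automatically.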
Knowledge Acquisition for agent development
Approaches to knowledge acquisition
Disciple approach to agent development
Demo: Agent teaching and learning
Research vision on agents development
 2004, G.Tecuci, Learning Agents Center
Demonstration
Teaching Disciple how to determine whether a strategic
leader has the critical capability to be protected.
Disciple Demo
 2004, G.Tecuci, Learning Agents Center
Knowledge Acquisition for agent development
Approaches to knowledge acquisition
Disciple approach to agent development
Demo: Agent teaching and learning
Research vision on agents development
 2004, G.Tecuci, Learning Agents Center
Vision on the future of software development
Mainframe Computers: software systems developed and used by computer experts.
Personal Computers: software systems developed by computer experts and used by persons who are not computer experts.
Learning Agents: software systems developed and used by persons who are not computer experts.
 2004, G.Tecuci, Learning Agents Center
Overview
Class introduction and course’s objectives
Artificial Intelligence and intelligent agents
Domain for hands-on experience
Knowledge acquisition for agents development
Overview of the course
 2004, G.Tecuci, Learning Agents Center
Overview of the course
Mixed-initiative knowledge acquisition.
Overview of the Disciple approach.
Problem solving through task reduction.
Modeling the reasoning of subject matter experts.
Ontology design and development.
Agent teaching and multistrategy learning.
Mixed-initiative problem solving and knowledge base refinement.
Integration of knowledge bases.
Script development for scenario elicitation.
Discussion of frontier research problems.
 2004, G.Tecuci, Learning Agents Center
Development of an assistant for choosing a Ph.D. Dissertation Advisor.
Overview of knowledge engineering and of the manual knowledge acquisition methods.
Additional recommended reading
G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 1-12.
 2004, G.Tecuci, Learning Agents Center