Agents and Intelligent Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
An intelligent agent, in addition, acts to further its own interests.
Artificial Intelligence, Lecture #8
Examples of Agents
Human agent:
Sensors: eyes, ears, nose, …
Actuators: hands, legs, mouth, …
Robotic agent:
Sensors: cameras and infrared range finders
Actuators: various motors
Agents include humans, robots, thermostats, etc.
Perceptions: vision, speech recognition, etc.
Agent Function & Program
An agent is specified by an agent function f that maps sequences of percepts Y to actions A:
Y = {y0, y1, ..., yT}
A = {a0, a1, ..., aT}
f : Y → A
The agent program runs on the physical architecture to
produce f
agent = architecture + program
“Easy” solution: table that maps every possible sequence Y
to an action A
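As a rough illustration of this table-lookup idea, the following sketch stores the whole agent function as an explicit dictionary keyed by the percept sequence seen so far (Python and the sample percepts, borrowed from the vacuum-world example later in the lecture, are assumptions for illustration):

# Sketch of a table-driven agent: the agent function f : Y -> A is stored
# as an explicit lookup table indexed by the full percept sequence.
# The entries below are illustrative; a real table would need one entry
# for every possible percept sequence, which grows exponentially.
TABLE = {
    (("A", "dust"),): "SUCK",
    (("A", "clean"),): "RIGHT",
    (("A", "clean"), ("B", "dust")): "SUCK",
}

percept_history = []

def table_driven_agent(percept):
    """Append the new percept, then look up the action for the whole history."""
    percept_history.append(percept)
    return TABLE.get(tuple(percept_history), "NOP")

The exponential size of such a table is exactly why this is only an "easy" solution in principle.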
Agents and Environments
The agent function maps from percept histories
(sequences of percepts) to actions:
[f: P* → A]
Example: A Vacuum-Cleaner Agent
(Figure: a two-square world with locations A and B)
Percepts: location and contents, e.g., (A, dust)
(Idealization: locations are discrete)
Actions: move, clean, do nothing:
LEFT, RIGHT, SUCK, NOP
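A minimal sketch of such an agent, assuming Python and the percept/action vocabulary listed above (the function name is illustrative):

def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-square vacuum world.

    percept is a (location, contents) pair, e.g. ("A", "dust");
    the return value is one of LEFT, RIGHT, SUCK, NOP.
    """
    location, contents = percept
    if contents == "dust":
        return "SUCK"   # clean the current square first
    if location == "A":
        return "RIGHT"  # otherwise move to the other square
    if location == "B":
        return "LEFT"
    return "NOP"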
Properties of Agents
Mobility: the ability of an agent to move around in an environment.
Veracity: an agent will not knowingly communicate false information
Benevolence: agents do not have conflicting goals, and that every
agent will therefore always try to do what is asked of it
Rationality: agent will act in order to achieve its goals, and will not
act in such a way as to prevent its goals being achieved.
Learning/adaptation: agents improve their performance over time
Agents vs. Objects
Agents are autonomous
agents embody a stronger notion of autonomy than objects; in particular, they decide for themselves whether or not to perform an action on request from another agent
Agents are smart
capable of flexible (reactive, pro-active, social) behavior, and the standard object model has nothing to say about such types of behavior
Agents are active
a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control
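A toy sketch of that last point, assuming Python's threading module purely for illustration: each agent runs its own sense-decide-act loop in its own thread of control.

import threading
import time

def agent_loop(name, steps=3):
    """Each agent owns a thread running its own sense-decide-act cycle."""
    for step in range(steps):
        # sensing, deciding and acting would go here; we only log the cycle
        print(f"{name}: cycle {step}")
        time.sleep(0.1)

# A multi-agent system: every agent gets at least one thread of control.
agents = [threading.Thread(target=agent_loop, args=(f"agent-{i}",)) for i in range(2)]
for t in agents:
    t.start()
for t in agents:
    t.join()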
The Concept of Rationality
What is rational at any given time depends on four
things:
The performance measure that defines the criterion of
success.
The agent’s prior knowledge of the environment.
The actions the agent can perform.
The agent’s percept sequence to date.
Rational Agents
Rational Agent:
For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure.
Performance measure:
An objective criterion for success of an agent's behavior, given
the evidence provided by the percept sequence.
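To make "expected to maximize its performance measure" concrete, an agent can be scored by simulating it and summing an objective criterion over time. A minimal sketch for the two-square vacuum world, reusing the reflex agent sketched earlier (the one-point-per-clean-square-per-time-step measure is an illustrative assumption, not part of the slide):

def run_and_score(agent, world, steps=10):
    """Run an agent in the two-square vacuum world and return its score.

    world maps location -> contents; the performance measure awards one
    point per clean square per time step (an illustrative choice).
    """
    location, score = "A", 0
    for _ in range(steps):
        action = agent((location, world[location]))
        if action == "SUCK":
            world[location] = "clean"
        elif action == "RIGHT":
            location = "B"
        elif action == "LEFT":
            location = "A"
        score += sum(1 for contents in world.values() if contents == "clean")
    return score

# Example: score the reflex agent from the earlier sketch.
print(run_and_score(reflex_vacuum_agent, {"A": "dust", "B": "dust"}))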
Nature of the Task Environment
To design a rational agent we need to specify a task environment: a problem specification for which the agent is a solution.
PEAS: to specify a task environment
Performance measure
Environment
Actuators
Sensors
PEAS
Specifying an Automated Taxi Driver
Performance measure:
safe, fast, legal, comfortable, maximize profits
Environment:
roads, other traffic, pedestrians, customers
Actuators:
steering, accelerator, brake, signal, horn
Sensors:
cameras, sonar, speedometer, GPS
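A PEAS description like this can also be written down as a small data structure; a minimal sketch (the class and field names are assumptions for illustration):

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    """PEAS specification of a task environment."""
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

taxi_driver = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)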
PEAS: Another Example
Agent: Medical diagnosis system
Performance measure:
Healthy patient, minimize costs.
Environment:
Patient, hospital, staff
Actuators:
Screen display (questions, tests, diagnoses, treatments, referrals)
Sensors:
Keyboard (entry of symptoms, findings, patient's answers)
Recommended Textbooks
[Negnevitsky, 2002] M. Negnevitsky, “Artificial Intelligence: A Guide to
Intelligent Systems”, Pearson Education Limited, England, 2002.
[Russell, 2003] S. Russell and P. Norvig, “Artificial Intelligence: A Modern
Approach”, Prentice Hall, 2003, Second Edition.
[Patterson, 1990] D. W. Patterson, “Introduction to Artificial Intelligence
and Expert Systems”, Prentice-Hall Inc., Englewood Cliffs, N.J, USA, 1990.
[Minsky, 1974] M. Minsky “A Framework for Representing Knowledge”,
MIT-AI Laboratory Memo 306, 1974.
[Hubel, 1995] David H. Hubel, “Eye, Brain, and Vision”
[Ballard, 1982] D. H. Ballard and C. M. Brown, “Computer Vision”,
Prentice Hall, 1982.