Transcript: 07_collaborativeAIx - University of Southampton

COMP 2208
Collaborative AI
Dr. Long Tran-Thanh
[email protected]
University of Southampton
Classical AI
The ultimate goal: build an AI that behaves like a human
John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon (1955)
7 key requirements of AI:
1. Automatic computer
2. Language understanding
3. Usage of neuron nets
4. Computational efficiency
5. Self-improvement
6. Abstractions
7. Creativity
Classical AI
Singularity: the time when AI becomes superior to humans (Ray Kurzweil)
This raises many moral/legal issues
Sci-Fi: Isaac Asimov’s laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human
being to come to harm.
2. A robot must obey the orders given it by human beings except where such
orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does
not conflict with the First or Second Laws.
Classical AI
More realistic laws (Murphy & Woods, IEEE Intelligent Systems, 2009):
1. A human may not deploy a robot without the human-robot work
system meeting the highest legal and professional standards of safety
and ethics.
2. A robot must respond to humans as appropriate for their roles.
3. A robot must be endowed with sufficient situated autonomy to
protect its own existence as long as such protection provides smooth
transfer of control which does not conflict with the First and Second
Laws.
Classical AI
Fear & hatred towards AGI (AGI = artificial general intelligence)
Is AI the next cloning?
1996: first cloned sheep (Dolly)
Human cloning is banned in many countries
New challenges
Let’s approach the problem from a more practical angle:
Age of Data / Information:
• Too much data and information
• How do we process it all?
• How do we choose what is relevant?
It would be nice to have a support system that handles these issues for us
New challenges
Let’s approach the problem from a more practical angle:
Healthcare:
• Improvements in health technology extend our life span
• We need to provide support to those who are not in full health
• E.g., elderly people, people with disabilities, people recovering from accidents
It would be nice to have solutions that provide support in these cases
Collaborative AI
Idea: use AI to build ubiquitous systems around us that make our everyday lives easier
Collaborative AI
But what are the requirements for an effective collaborative AI?
ORCHID (2010-2015): led by Prof. Nick Jennings and a team of researchers from Southampton
• With Oxford, Nottingham, BAE Systems, Secure, Rescue Global, …
www.orchid.ac.uk
• A pioneering project that aims to lay the foundations of collaborative AI
Collaborative AI
1. Flexible autonomy:
• Sometimes the AI is the decision maker, at other times the human (see the sketch below)
2. Agile teaming:
• Quickly set up ad hoc groups
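To make flexible autonomy concrete, here is a minimal Python sketch (my own illustration, not ORCHID code) in which the AI acts on its own when its confidence is high and hands control to a human operator otherwise. All names here (flexible_autonomy, ai_policy, ask_human, threshold) are hypothetical.

# A minimal, hypothetical sketch of flexible autonomy: the AI decides when it
# is confident enough, and hands control to a human operator otherwise.
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Decision:
    action: str
    made_by: str  # "ai" or "human"

def flexible_autonomy(ai_policy: Callable[[dict], Tuple[str, float]],
                      ask_human: Callable[[dict], str],
                      state: dict,
                      threshold: float = 0.8) -> Decision:
    # The AI proposes an action together with its confidence in that action.
    action, confidence = ai_policy(state)
    if confidence >= threshold:
        return Decision(action=action, made_by="ai")
    # Low confidence: transfer control to the human operator.
    return Decision(action=ask_human(state), made_by="human")

# Toy usage: the AI is unsure here, so the human makes the call.
toy_ai = lambda state: ("reroute_ambulance", 0.6)
toy_human = lambda state: "hold_position"
print(flexible_autonomy(toy_ai, toy_human, {"incident": "flood"}))

A real system would of course base the handover on richer context than a single confidence number, but the point is that control can move in both directions.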
Collaborative AI
3. Incentive engineering:
• How to incentivise humans to collaborate
4. Accountable information:
• What if things go wrong?
• Where and when did it happen?
• Who made the mistake?
• Who should be in charge of fixing it? (see the sketch after this list)
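To make accountable information concrete, here is a minimal Python sketch (again my own illustration, assuming a simple append-only audit log rather than anything from ORCHID) that records who acted, where and when, and who should fix a mistake, so the questions above can be answered after the fact. The names (AuditLog, AuditRecord, trace, …) are illustrative assumptions.

# A minimal, hypothetical sketch of accountable information: every decision is
# appended to an audit log so that mistakes can be traced afterwards.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class AuditRecord:
    timestamp: datetime
    location: str
    actor: str        # which agent (AI component or human) made the decision
    action: str
    responsible: str  # who should be in charge of fixing it if it goes wrong

@dataclass
class AuditLog:
    records: List[AuditRecord] = field(default_factory=list)

    def record(self, location: str, actor: str, action: str, responsible: str) -> None:
        self.records.append(AuditRecord(datetime.now(timezone.utc),
                                        location, actor, action, responsible))

    def trace(self, action: str) -> List[AuditRecord]:
        # Answers: where and when did it happen, and who made the mistake?
        return [r for r in self.records if r.action == action]

# Toy usage.
log = AuditLog()
log.record("sector-7", "routing_agent", "reroute_ambulance", "ops_team")
for rec in log.trace("reroute_ambulance"):
    print(rec.timestamp, rec.location, rec.actor, rec.responsible)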