MACHINE LEARNING
Fatemeh Saremi
Mina Razaghpour
Overview
What is Learning?
What is Machine Learning?
Why Should Machines Learn?
How Do Machines Learn?
Specific Machine Learning Methods
Solving the Traveling Salesman Problem with Ant Colony
Summary
References
What is Learning?
Learning is any change in a system that allows
it to perform better the second time on
repetition of the same task or on another task
drawn from the same population.
• One part of learning is acquiring knowledge and new information;
• The other part is problem solving.
What is Machine Learning?
The goal of machine learning is to build computer
systems that can adapt and learn from their experience.
[Figure: a system with inputs x1, x2, …, xN, hidden variables h1, h2, …, hK, and outputs y1, y2, …, yM]
Machine Learning algorithms discover the relationships between the variables of a system (input, output, and hidden) from direct samples of the system.
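As a concrete illustration, here is a minimal sketch (not from the original slides; the system and data are hypothetical) of a learner that discovers the relationship between an input variable and an output variable purely from direct samples, using least squares:

```python
import numpy as np

# Direct samples of a system: inputs x and observed outputs y.
# The underlying relationship (unknown to the learner) is y = 3x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 1.0 + rng.normal(0.0, 0.5, size=50)

# Discover the relationship: fit y ~ w*x + b by least squares.
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"learned relationship: y = {w:.2f}*x + {b:.2f}")  # close to 3x + 1
```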
Why Should Machines Learn?
We expect machines to learn from their mistakes.
An intelligence that didn't learn would not be much of an intelligence.
Machine Learning is a prerequisite for any mature programme of Artificial Intelligence.
How Do Machines Learn?
Machine Learning typically follows three phases:
Training:
A training set of examples of correct behavior is analyzed, and some representation of the newly learnt knowledge is stored, often in the form of rules.
How Do Machines Learn? (cont.)
Validation:
The rules are checked and, if necessary, additional training is given. Sometimes additional test data are used; alternatively, a human expert or some other automatic knowledge-based component may validate the rules. The role of the tester is often called the critic.
Application:
The rules are used in responding to some new situation.
How Do Machines Learn? (cont.)
[Figure: the training set and existing knowledge feed Training, which produces new knowledge; test data feed Validation, performed by the critic; a new situation feeds Application, which produces a response]
Specific Machine Learning Methods
Learning by Memorizing
The simplest way of learning
Storing examples of correct behavior
An example:
• a checkers-playing program written by Samuel.
Checkers
Uses the minimax method.
When complete search is impossible, a static evaluation function is used.
At the end of each turn, the computed value of each state is recorded.
On reaching a state visited in previous games, the search stops and the stored value is used.
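A minimal sketch of this rote-learning idea (hypothetical code, not Samuel's program): a minimax search that records computed state values and reuses them when a state from a previous game is reached. The `moves_fn` and the static evaluation are assumed placeholders:

```python
stored_values = {}  # state -> value recorded in previous games

def static_eval(state):
    # Placeholder static evaluation function (domain-specific in practice).
    return sum(state)

def minimax(state, depth, maximizing, moves_fn):
    if state in stored_values:           # state visited in a previous game:
        return stored_values[state]      # stop the search, use the stored value
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return static_eval(state)        # complete search impossible: estimate
    pick = max if maximizing else min
    value = pick(minimax(m, depth - 1, not maximizing, moves_fn) for m in moves)
    stored_values[state] = value         # record the computed value for this state
    return value
```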
Learning by Memorizing (cont.)
It is too simple, and it is not sufficient for complicated problems.
We also need:
• organized information storage
• generalization
• direction
So in this method learning is similar to problem solving, but its success depends on a proper structure for the knowledge base.
Learning by Adjusting Parameters
Determining parameters
Assigning initial weight to each parameter
Modifying weights as the program goes on
In Checkers:
• 16 parameters for each state
• f = c1·t1 + c2·t2 + … + c16·t16
When should a coefficient be modified? And by how much?
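One simple answer, as a minimal sketch (a generic delta-rule update, not Samuel's exact coefficient-tuning scheme): modify a coefficient whenever the evaluation disagrees with a better estimate of the state's value, in proportion to that parameter's contribution:

```python
import random

NUM_FEATURES = 16
# Assign an initial weight to each parameter.
weights = [random.uniform(-1.0, 1.0) for _ in range(NUM_FEATURES)]

def evaluate(features):
    # f = c1*t1 + c2*t2 + ... + c16*t16
    return sum(c * t for c, t in zip(weights, features))

def adjust(features, target, lr=0.01):
    # Delta rule: each weight moves to reduce the error between the current
    # evaluation and a better estimate (e.g., from deeper search or game outcome).
    error = target - evaluate(features)
    for i, t in enumerate(features):
        weights[i] += lr * error * t
```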
Learning by Adjusting Parameters (cont.)
So in this method learning is similar to other problem-solving methods, and it depends on search algorithms.
Learning by Exploration
A program of this kind explores domains, looking for interesting patterns and generalizations.
A remarkable example is AM, developed by Doug Lenat.
AM works in the domain of elementary mathematics.
It maintains a large, growing database of concepts, such as set and function in the mathematics domain.
Learning by Exploration (cont.)
The program maintains an agenda of tasks, stored in decreasing order of interestingness. Its basic control cycle is to select the first task from the agenda, work on it (which may add new tasks), and repeat.
Working on a task is done by rules called heuristics.
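A minimal sketch of this control cycle (hypothetical, loosely modeled on AM's agenda; the tasks and interestingness scores are invented):

```python
import heapq

# Agenda of (negated interestingness, task) pairs; heapq pops the smallest
# value first, so negating keeps the most interesting task at the front.
agenda = [(-0.9, "examine the concept 'set'"), (-0.4, "examine 'function'")]

def work_on(task):
    # Placeholder for the heuristic rules; working on a task may add new tasks.
    print("working on:", task)
    return [(-0.2, f"generalize results of: {task}")] if task.startswith("examine") else []

for _ in range(6):                   # bounded run, for illustration only
    if not agenda:
        break
    _, task = heapq.heappop(agenda)  # select the first (most interesting) task
    for new_task in work_on(task):
        heapq.heappush(agenda, new_task)
```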
Learning by Exploration (cont.)
Another example is Eurisko, also developed by Doug Lenat.
Eurisko works in a variety of domains, including three-dimensional VLSI circuits and the design of battle fleets for a space warfare game.
Eurisko is more complex than AM and was designed to overcome some of AM's flaws, but both programs operate similarly.
Ant Colony System: A Learning Approach to TSP
Ant Colony Algorithms
Inspired by the natural behavior of ants
Ants can find the shortest path between two points.
However, they can't see! So how?
Finding the shortest path
Ants choose paths according to the amount of pheromone on them.
Pheromone accumulates faster on the shorter path.
After some time, all of the ants choose the shorter path.
[Figure: the natural behavior of ants converging on the shorter path]
ACS for the Traveling Salesman Problem
A set of simple agents, called ants
Each edge has a desirability measure, called pheromone
Ants search in parallel for good solutions to the TSP
Ants cooperate through pheromone-mediated communication
Algorithm
Initialize: randomly place the ants in cities.
Each ant constructs a tour iteratively.
It chooses the next city by combining:
• a greedy heuristic: the nearest city
• past experience: the edge with the highest pheromone level
Updating Pheromone Level
Global updating: at the end of each round, the best solution gets extra pheromone.
Local updating: applied to an edge each time an ant crosses it.
Algorithm
Loop
    randomly place m artificial ants on n cities
    For city = 1 to n
        For ant = 1 to m
            select the next city probabilistically,
            balancing exploration and exploitation
            apply the local updating rule
        End For
    End For
    apply the global updating rule using the best ant
Until End_condition
Transition Function
With probability q0 (exploitation):
$j = \arg\max_{u \in \text{allowed}_k} \{\, \tau_{iu}(t) \cdot [\eta_{iu}]^{\beta} \,\}$
With probability (1 − q0) (exploration):
$p_{ij}^{k}(t) = \dfrac{\tau_{ij}(t) \cdot [\eta_{ij}]^{\beta}}{\sum_{u \in \text{allowed}_k} \tau_{iu}(t) \cdot [\eta_{iu}]^{\beta}}$ if $j \in \text{allowed}_k$, and $p_{ij}^{k}(t) = 0$ otherwise
(τ: pheromone level of an edge; η: its heuristic desirability, e.g. the inverse of the distance)
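A minimal sketch of this transition rule (assuming `tau` and `eta` are matrices of pheromone levels and heuristic desirabilities such as inverse distances; `beta` and `q0` are tunable parameters):

```python
import random

beta = 2.0  # weight of the heuristic desirability
q0 = 0.9    # probability of exploitation rather than exploration

def choose_next_city(i, allowed, tau, eta):
    """ACS transition rule: exploit with probability q0, otherwise explore."""
    scores = {j: tau[i][j] * eta[i][j] ** beta for j in allowed}
    if random.random() < q0:
        # Exploitation: deterministically pick the best-scoring edge.
        return max(scores, key=scores.get)
    # Exploration: sample a city with probability proportional to its score.
    r = random.random() * sum(scores.values())
    for j, s in scores.items():
        r -= s
        if r <= 0:
            return j
    return j  # fallback for floating-point rounding
```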
A Simple TSP Example
[Figure: five cities A–E, with ants 1–5 placed one per city; every ant's memory list is initially empty]
dAB = 100; dBC = 60; …; dDE = 150
Iteration 1
[Figure: each ant records its starting city — ant 1: [A], ant 2: [B], ant 3: [C], ant 4: [D], ant 5: [E]]
How to Build the Next Sub-Solution?
Each ant extends its partial tour by choosing the next city among those not yet visited, according to the exploration rule:
$p_{ij}^{k}(t) = \dfrac{\tau_{ij}(t) \cdot [\eta_{ij}]^{\beta}}{\sum_{u \in \text{allowed}_k} \tau_{iu}(t) \cdot [\eta_{iu}]^{\beta}}$ if $j \in \text{allowed}_k$, and 0 otherwise
[Figure: ant 1, with memory [A], moves from A to D and its memory becomes [A,D]]
Iteration 2
[Figure: ant 1: [A,D], ant 2: [B,C], ant 3: [C,B], ant 4: [D,E], ant 5: [E,A]]
Iteration 3
[Figure: ant 1: [A,D,C], ant 2: [B,C,D], ant 3: [C,B,E], ant 4: [D,E,A], ant 5: [E,A,B]]
Iteration 4
[Figure: ant 1: [A,D,C,E], ant 2: [B,C,D,A], ant 3: [C,B,E,D], ant 4: [D,E,A,B], ant 5: [E,A,B,C]]
Iteration 5
[Figure: ant 1: [A,D,C,E,B], ant 2: [B,C,D,A,E], ant 3: [C,B,E,D,A], ant 4: [D,E,A,B,C], ant 5: [E,A,B,C,D]]
Path and Pheromone Evaluation
Ant 1: [A,D,C,E,B], L1 = 300
Ant 2: [B,C,D,A,E], L2 = 450
Ant 3: [C,B,E,D,A], L3 = 260
Ant 4: [D,E,A,B,C], L4 = 280
Ant 5: [E,A,B,C,D], L5 = 420
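A small sketch of how each tour length L is computed, closing the tour back to its starting city (the distance table is hypothetical; the slides only specify a few of the pairwise distances):

```python
# Symmetric distances between cities; only the values shown on the slides.
dist = {("A", "B"): 100, ("B", "C"): 60, ("D", "E"): 150}
# ... the remaining pairs would be filled in for a complete instance.

def d(u, v):
    return dist.get((u, v), dist.get((v, u)))

def tour_length(tour):
    """Sum of edge lengths along the tour, returning to the start city."""
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(d(u, v) for u, v in legs)
```

With the full distance table, `tour_length(["C", "B", "E", "D", "A"])` would return L3 = 260.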
Global Pheromone Updating
Only the ant that generated the best tour is allowed to globally update the amount of pheromone on its tour edges. In ACS (Dorigo & Gambardella, 1997) the rule is:
$\tau(r,s) \leftarrow (1 - \alpha)\,\tau(r,s) + \alpha\,(L_{gb})^{-1}$ for every edge (r, s) of the globally best tour, where $L_{gb}$ is that tour's length.
Local Pheromone Updating
If edge (r, s) is visited by an ant, its pheromone is decayed toward the initial level $\tau_0$:
$\tau(r,s) \leftarrow (1 - \rho)\,\tau(r,s) + \rho\,\tau_0$
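A minimal sketch of the two updating rules (assuming `tau` is the pheromone matrix and `alpha`, `rho`, `tau0` are the usual ACS parameters):

```python
alpha = 0.1   # global update rate
rho = 0.1     # local evaporation rate
tau0 = 0.01   # initial pheromone level

def local_update(tau, r, s):
    # Applied each time an ant crosses edge (r, s): decay toward tau0.
    tau[r][s] = (1 - rho) * tau[r][s] + rho * tau0

def global_update(tau, best_tour, best_length):
    # Applied once per round, only to the edges of the best tour found.
    deposit = 1.0 / best_length
    for r, s in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[r][s] = (1 - alpha) * tau[r][s] + alpha * deposit
```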
Effect of the Local Rule
The local update rule makes the pheromone level of an edge diminish each time it is crossed.
Visited edges therefore become less and less attractive as the various ants cross them.
This favors exploration of not-yet-visited edges.
It also helps shuffle the cities, so that cities visited early in one ant's tour tend to be visited later in another ant's tour.
Enhancements to ACS
The algorithm can be run in parallel, so its running time can be made independent of the number of ants.
For each problem size, a particular setting of the number of ants and the other parameters gives the best results.
Comparison with Some Well-Known Algorithms
Best tour length found by each method; bracketed values are the number of tours evaluated. GA: genetic algorithm, EP: evolutionary programming, SA: simulated annealing.

Problem              ACS             GA               EP              SA              Optimum
Oliver30 (30-city)   420 [830]       421 [3200]       420 [40000]     424 [24617]     420
Eil50 (50-city)      425 [1830]      428 [25000]      426 [100000]    443 [68512]     425
Eil75 (75-city)      535 [3480]      545 [80000]      542 [325000]    580 [173250]    535
KroA100 (100-city)   21,282 [4820]   21,761 [103000]  N/A             N/A             21,282
SUMMARY
The goal of machine learning is to build computer systems that can adapt and learn from their experience.
An intelligence that didn't learn would not be much of an intelligence.
Machine Learning typically follows three phases:
• Training
• Validation
• Application
Specific Machine Learning Methods:
• Learning by Memorizing
• Learning by Adjusting Parameters
• Learning by Exploration
Ant colony algorithms:
• an efficient, nature-inspired learning algorithm for the TSP
References
J. Finlay & A. Dix, "An Introduction to Artificial Intelligence", 1997.
R.S. Michalski, J.G. Carbonell & T.M. Mitchell, "Machine Learning", 1983.
E. Charniak & D. McDermott, "Introduction to Artificial Intelligence", 1985.
M. Fahimi, "Artificial Intelligence", 2002.
M. Dorigo & L. Gambardella, "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem", IEEE Transactions on Evolutionary Computation, 1997.
L. Gambardella & M. Dorigo, "Ant Colonies for the Traveling Salesman Problem", 1997.
V. Maniezzo, L. Gambardella & F. de Luigi, "Ant Colony Optimization", 2001.
QUESTIONS???
Thanks for your attention!