Ch 14. Active Vision for Goal-Oriented Humanoid Robot Walking (1/2)
Creating Brain-Like Intelligence, Sendhoff et al. (eds.), 2008
Robots Learning from Humans, Fall 2015
Summarized by Jin-Hwa Kim
Biointelligence Laboratory
Program in Cognitive Science
Seoul National University
http://bi.snu.ac.kr
Contents
14.1 Introduction
14.2 Robotic Setup and Neural Architecture
14.3 Evolution of Neural Controllers of Hoap-2 Humanoid Robot
14.4 Discussion
14.5 Conclusion
Overview of Chapter 14

- Complex visual tasks may be performed by a coevolutionary process of active vision and feature selection.
- To validate this hypothesis further, a goal-oriented bipedal humanoid is used:
  - A primitive vision system mounted on its head is evolved while the robot explores freely.
  - It must tolerate visual perturbations caused by the robot's own walking dynamics.
Ch 14. Active Vision for Goal-Oriented Humanoid Robot Walking
14.1 Introduction
Terminology

- Active Vision
  - A sequential and interactive process of selecting and analyzing parts of a visual scene.
  - It reduces computational cost through a two-step process: the first step performs a heuristic search over a partial area of the entire image (see the sketch after this list).
- Feature Selection
  - Sensitivity to relevant features to which the system selectively responds.
  - Task-aware selection of partial information.
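Not from the chapter: a minimal Python sketch of the first step of active vision under stated assumptions, namely a grey-level image as a 2-D NumPy array (values 0-255), pan/tilt commands in [-1, 1], a zoom factor in [0, 1], and a hypothetical 5x5 grid of receptive fields; the function name retina_sample and the window heuristic are illustrative only.

```python
import numpy as np

def retina_sample(image, pan, tilt, zoom, grid=(5, 5)):
    """First step of active vision (a sketch): pick a sub-window of the scene
    from pan/tilt/zoom, then average grey levels over a small grid of
    non-overlapping receptive fields; only this patch is analyzed further."""
    h, w = image.shape
    win = max(int(min(h, w) * (1.0 - 0.8 * zoom)), max(grid))   # window shrinks as zoom grows
    half = win // 2
    cy = int(np.clip((tilt + 1) / 2 * h, half, h - win + half))  # window centre from tilt
    cx = int(np.clip((pan + 1) / 2 * w, half, w - win + half))   # window centre from pan
    patch = image[cy - half: cy - half + win, cx - half: cx - half + win]
    gy, gx = grid
    fields = patch[: gy * (win // gy), : gx * (win // gx)]       # trim to a multiple of the grid
    fields = fields.reshape(gy, win // gy, gx, win // gx)
    return fields.mean(axis=(1, 3)) / 255.0                      # one activation per receptive field
```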
Neural Architecture
(M. Suzuki, T. Gritti, and D. Floreano)

[Figure: the neural architecture of the active vision system is composed of A) a grid of visual neurons with non-overlapping receptive fields (the retina) whose activation is given by the grey levels of the corresponding pixels of the visual scene B); C) a set of proprioceptive neurons that provide information about the movement of the vision system; D) a set of output neurons that determine the behavior of the system (e.g. pattern recognition, car navigation); E) a set of output neurons that determine the behavior of the vision system (active vision); F) a set of evolvable synaptic connections. The number of neurons in the system can vary according to the experimental settings.]
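Not from the chapter: a minimal sketch of how this architecture might compute one step, assuming a single evolvable weight matrix W standing in for the connections F, a sigmoid on the outputs, and a split of the output vector into the system-behavior group (D) and the vision-behavior group (E); the authors' actual network layout may differ.

```python
import numpy as np

def step(visual, proprio, W, n_system):
    """One update of the (sketched) active-vision network: visual-neuron
    activations (A) and proprioceptive inputs (C) pass through the evolvable
    synaptic connections (F) to produce system-behavior outputs (D) and
    vision-behavior outputs (E)."""
    x = np.concatenate([np.ravel(visual), proprio, [1.0]])  # inputs plus a bias term
    y = 1.0 / (1.0 + np.exp(-(W @ x)))                      # squash each output to (0, 1)
    return y[:n_system], y[n_system:]                       # D outputs, E outputs

# Hypothetical sizes: a 5x5 retina, 2 proprioceptive inputs, 2 + 2 outputs.
retina = np.random.rand(5, 5)
proprio = np.array([0.1, -0.2])               # e.g. pan and tilt of the vision system
W = 0.1 * np.random.randn(4, 5 * 5 + 2 + 1)   # evolvable weights (F)
system_out, vision_out = step(retina, proprio, W, n_system=2)
```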
Neural Architecture (cont.)

- Features in A are selected, via the evolvable connections F, to perform a given task (D) and, at the same time, to control the vision system (E).
- The synaptic strengths of the network (F) were encoded in a binary string and evolved with a genetic algorithm while freely exploring (a minimal sketch of this encoding follows this list).
- Size- and position-invariant shape discrimination.
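Not from the chapter: a minimal sketch of the binary-encoding idea behind the genetic algorithm, assuming 8 bits per synaptic strength and weights in [-5, 5]; the bit depth, weight range, and genetic operators actually used by the authors are not specified here.

```python
import numpy as np

BITS_PER_WEIGHT = 8    # assumed resolution of each encoded synaptic strength
WEIGHT_RANGE = 5.0     # assumed weights lie in [-5, 5]

def decode_genome(genome, n_weights):
    """Map a binary string (the genome) back to real-valued synaptic strengths."""
    bits = np.asarray(genome, dtype=int).reshape(n_weights, BITS_PER_WEIGHT)
    ints = bits @ (2 ** np.arange(BITS_PER_WEIGHT)[::-1])   # binary -> integer per weight
    frac = ints / (2 ** BITS_PER_WEIGHT - 1)                # scale to [0, 1]
    return 2 * WEIGHT_RANGE * frac - WEIGHT_RANGE           # shift to [-WEIGHT_RANGE, WEIGHT_RANGE]

def mutate(genome, p_flip=0.01, rng=None):
    """Bit-flip mutation, one of the standard operators of a simple GA."""
    if rng is None:
        rng = np.random.default_rng()
    genome = np.asarray(genome, dtype=int)
    flips = rng.random(genome.size) < p_flip
    return np.where(flips, 1 - genome, genome)
```

Fitness evaluation and selection, which couple this encoding to the robot's behavior, belong to the evolution experiments discussed later.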
Ch 14. Active Vision for Goal-Oriented Humanoid Robot Walking
14.2 Robotic Setup and Neural Architecture
Robotic Setup

- Humanoid robot Hoap-2
  - 25 cm (W) x 16 cm (L) x 50 cm (H)
  - Simulated in Webots™
- Goal
  - To reach a designated location by detecting the beacon (white window) while avoiding obstacles (black cylinders) and walls.
Extended Neural Architecture

[Figure: the neural architecture that controls the humanoid robot in the goal-oriented walking task, an extended version of the architecture shown above. A grid of visual neurons (activations given by the grey levels of the corresponding pixels), proprioceptive neurons, memory units, hidden neurons, and a bias unit feed six output neurons: Zoom, Filter, Pan, Tilt, Dir, Speed.]

- A set of proprioceptive neurons provides information about the movement of the head camera with respect to the upper torso of the robot (the pan & tilt angles).
- Memory units are copies of the previous outputs, fed back recurrently to give the network richer dynamics.
- The bias provides adaptive thresholds for the output neurons.
Extended Neural Architecture (cont.)

- Zoom: zooming factor of the camera
- Filter: filtering of the visual neurons
- Pan & Tilt: new velocities of the camera
- Dir & Speed: walking direction and speed of the robot
- The outputs use the sigmoid activation function f(x) = 1 / (1 + exp(-x)), where x is the weighted sum of all inputs (a minimal code sketch of one control step follows).
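Not from the chapter: a minimal sketch of one control step of the extended architecture under stated assumptions, namely one hidden layer, the previous output vector fed back as the memory units, a constant bias input, and the sigmoid given above; the actual numbers of neurons and the weight layout in the chapter may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def control_step(visual, proprio, memory, W_hidden, W_out):
    """One step of the (sketched) extended controller: visual neurons,
    proprioceptive neurons (camera pan/tilt angles), memory units (the
    previous outputs) and a bias feed hidden neurons, which feed the
    six outputs."""
    x = np.concatenate([np.ravel(visual), proprio, memory, [1.0]])  # bias unit appended
    hidden = sigmoid(W_hidden @ x)
    out = sigmoid(W_out @ hidden)                                   # six values in (0, 1)
    zoom, filt, pan, tilt, direction, speed = out
    command = {
        "zoom": zoom,               # zooming factor of the camera
        "filter": filt,             # how the visual neurons are filtered
        "pan": pan, "tilt": tilt,   # new velocities of the head camera
        "dir": direction,           # walking direction of the robot
        "speed": speed,             # walking speed of the robot
    }
    return command, out             # 'out' is fed back as the next memory units
```

Passing the returned `out` vector in as `memory` on the next call is what gives the controller its recurrent dynamics.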
Summary

- Macroscopic Control
  - The bipedal walking algorithm itself is beyond the scope of this research.
  - To control macroscopic behavior, visuo-motor coordination exploiting active vision and feature selection is used.
- In the next two chapters
  - We will discuss the evolution of neural controllers for the Hoap-2 humanoid robot.