PPT Presentation - Subhrajit Bhattacharya


REAL-TIME AUTOMATED TARGET TRACKING GUIDED BY TDM COMPRESSED IMAGE TRANSMISSION
PALLAB BARAI
 SUBHRAJIT BHATTACHARYA
 SAMUDRA DASGUPTA
 KAUSHIK SENGUPTA
 SIDDHARTH TALAPATRA

What does the jargon mean?
 TDM
 Compressed Image Transmission
 Real-Time Automated Target Tracking
What is it all about?
 Another image compression scheme?
 How is compression related to target tracking?
 Where exactly is the neural network being applied?
 Where do these systems find application?

Image compression – why do we need it at all?
 Time constraints
 Bandwidth constraints
 Information redundancy removal
 Memory saving
But we already have so many compression schemes – why another?
 JPEG 2000 (the world standard, which uses the DWT) – the maximum acceptable compression achievable is around 10:1
 Lower bandwidth implies a lower chance of detection in covert wireless operations
So what is the scheme?
The Transmitter:
 Predictor
 Vector Quantizer
 An Error Code-book!
The Receiver:
 Inverse Vector Quantizer
 The same Code-book
 The same Predictor
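As a sketch of how these pieces fit together (in Python, with names of our own choosing; only the code-book index crosses the channel, which is where the compression comes from):

```python
import numpy as np

def transmit(block, prediction, codebook):
    """Quantize the prediction error: only the code-book index is sent."""
    error = block - prediction                            # the error vector
    index = int(np.argmin(np.linalg.norm(codebook - error, axis=1)))
    return index                                          # a few bits per block

def receive(index, prediction, codebook):
    """Inverse VQ: the same code-book and the same predictor (run on
    previously reconstructed pixels) rebuild the block at the receiver."""
    return prediction + codebook[index]
```

Because both ends run the identical predictor on the reconstructed signal, they stay in sync without the prediction itself ever being transmitted.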
The Prediction scheme
The black pixel is predicted as a linear function of its neighbours 1, 2 and 4.
[Figure: the pixel neighbourhood, with the eight neighbours of the black pixel numbered 1–8.]
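A minimal sketch of the predictor, assuming the usual causal numbering (1 = left, 2 = above, 4 = above-left) and illustrative coefficients, since the slides specify neither:

```python
def predict_pixel(img, r, c, a1=0.5, a2=0.25, a4=0.25):
    """Predict pixel (r, c) as a linear combination of neighbours 1, 2 and 4.
    Neighbour positions and coefficients are assumed for illustration."""
    return a1 * img[r][c - 1] + a2 * img[r - 1][c] + a4 * img[r - 1][c - 1]
```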
Vector Quantization
We calculate the difference between the predicted and the actual input vector and call it the error vector. The error vectors are mapped into an n-dimensional sub-space by the vector quantization procedure.
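In code, this mapping is a nearest-neighbour search over the code-book; a sketch, assuming `codebook` holds k n-dimensional code vectors as a numpy array:

```python
import numpy as np

def quantize(error_vectors, codebook):
    """Map each n-dimensional error vector to the index of its nearest
    code vector in the Euclidean sense."""
    # pairwise distances: one row per error vector, one column per code vector
    d = np.linalg.norm(error_vectors[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```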
Code-book Generation
The code-book is generated by a self-organizing map. The clustering of the error vectors is done on the basis of minimizing the Euclidean distance.
After a large number of epochs the error vectors get clustered around the code-book vectors.
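A sketch of the training loop, simplified to winner-take-all competitive learning (a full self-organizing map would also pull the winner's neighbours toward the input; the hyper-parameters here are illustrative, not from the slides):

```python
import numpy as np

def train_codebook(error_vectors, k=64, epochs=50, eta0=0.5, seed=0):
    """Cluster error vectors around k code vectors by minimizing
    Euclidean distance: each vector pulls its nearest code vector."""
    rng = np.random.default_rng(seed)
    codebook = error_vectors[rng.choice(len(error_vectors), k, replace=False)].copy()
    for epoch in range(epochs):
        eta = eta0 * (1.0 - epoch / epochs)                  # decaying learning rate
        for v in error_vectors:
            w = np.argmin(np.linalg.norm(codebook - v, axis=1))  # Euclidean winner
            codebook[w] += eta * (v - codebook[w])               # move winner toward v
    return codebook
```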
TIME TO CHECK THE CLAIMS!
 How do the reconstructed images look?
 Can we have a look at the ‘error’ images?
 What about the variation of PSNR with the size of the code vectors?
 Variation of MSE with the number of nodes?
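For reference, PSNR follows directly from the MSE between the original and reconstructed images; a sketch for 8-bit images:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```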
Next:
Time to track down the targets!
ROBOT MOTION PLANNING
Robot motion planning, being a very important issue in various robotics problems like the present one, has been explored and studied for a long time.
Some of the well-known approaches are:
• Graph Theory
• Potential Field Theory
• Diffusion Theory
Our project explores a Neural Network approach to solving the problem.
The Model
An Unsupervised Neural Network Model
The model that we used is basically a sort of Competitive Learning Model. The following are the salient features of the model:
• The configuration of a robot with n degrees of freedom is described by a point in an n-dimensional Variable Space. The variable space is discretized into a finite number of cells.
• Corresponding to each such cell in variable space there is an input in the Input Layer of the model.
• The Competitive Layer contains Neurons arranged in a similar fashion as the cells in the discretized input space.
• There is a connection between each cell in the Input Layer and the corresponding neuron in the Competitive Layer.
• There is only one output, connected with a group of neurons in the Competitive Layer. The output decides the position of the Robot.
Schematic Representation of the Model for a Robot with two degrees of freedom
The main working principle of the Model is to update the Activity of the Neurons in the Competitive Layer with time, and to decide the position of the Robot depending on the activity values.
The Neural Network Model is governed by the following shunting equation:

$$\frac{dx_i}{dt} = -Ax_i + (B - x_i)\left([I_i]^+ + \sum_{j=1}^{k} w_{ij}[x_j]^+\right) - (D + x_i)[I_i]^-$$
$$\qquad\ = -Ax_i + (B - x_i)[I_i]^+ - (D + x_i)[I_i]^- + (B - x_i)\sum_{j=1}^{k} w_{ij}[x_j]^+ \qquad (1)$$

where,
$I_i$ is the external Input, depending on the existence of Obstacles & Targets in the Variable Space,
$x_i$ is the Neural Activity attached to each Neuron in the Competitive Layer,
$A$, $B$ and $D$ are parameters of the model,
$w_{ij} = f(d_{ij})$ is the Synaptic weight between the $i$-th & $j$-th Neurons in the Competitive Layer, with $d_{ij}$ being the distance between the Neurons, and
$f$ being a monotonically decreasing function.
The non-linear functions $[a]^+ = \max\{a, 0\}$ and $[a]^- = \max\{-a, 0\}$ are chosen, and we chose
$$f(a) = \begin{cases} 1/(1+a), & 0 \le a \le r_0 \\ 0, & \text{otherwise.} \end{cases}$$
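A forward-Euler step of equation (1) can be vectorized over the k neurons like this (a sketch; `W` is the k × k matrix of lateral weights w_ij = f(d_ij)):

```python
import numpy as np

def shunting_step(x, I, W, A, B, D, dt):
    """Advance the activities x of the k competitive-layer neurons
    by one time step dt under the shunting equation (1)."""
    Ip = np.maximum(I, 0.0)    # [I]^+ : excitatory input (targets)
    Im = np.maximum(-I, 0.0)   # [I]^- : inhibitory input (obstacles)
    xp = np.maximum(x, 0.0)    # [x]^+ : lateral excitation from neighbours
    dxdt = -A * x + (B - x) * (Ip + W @ xp) - (D + x) * Im
    return x + dt * dxdt
```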
Deciding the Winner
 At a particular instant of time the robot's position is determined by a cell in the discretized Variable Space. The Output is the position of the robot at that instant of time. Corresponding to that position in the Variable Space there is a Neuron in the Competitive Layer.
 At the next time instant the Neurons in the neighborhood of the last position are searched for maximum activity. The Output now becomes the position corresponding to the Neuron with the highest activity.
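A sketch of this winner search on a 2-D activity grid, taking the neighborhood to be the 8 adjacent cells (an assumption consistent with the r0 = √2 chosen later):

```python
def next_position(pos, activity):
    """Move to the highest-activity cell among the current cell
    and its 8 neighbours on the discretized Variable Space."""
    rows, cols = len(activity), len(activity[0])
    r, c = pos
    best, best_val = pos, activity[r][c]
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and activity[nr][nc] > best_val:
                best, best_val = (nr, nc), activity[nr][nc]
    return best
```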
PROBLEM DEFINITION
• In our case we have considered a robot with two degrees of freedom. The Variables are the X and Y coordinates of the robot's position in a 2D reference coordinate system.
• We have considered 2 types of problems:
 Finding a path through a maze to reach fixed targets.
 Finding a path through a maze to reach a moving target.
• The parameters we chose for our program are given below:
Δt = 0.001, A = 10, B = 5, D = 1, r0 = √2.
Ii = E, if the i-th Neuron corresponds to a target,
   = -E, if the i-th Neuron corresponds to an obstacle,
   = 0, otherwise,
where E is a large positive value; we took E = 100.
We started with the initial Neural Activity of all Neurons in the Competitive Layer set to zero.
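Putting the chosen parameters together (a sketch; the grid size, the obstacle layout, and the zeroed diagonal of W are our illustrative assumptions, not from the slides):

```python
import numpy as np

N = 20                                              # N x N discretized variable space
dt, A, B, D, r0, E = 0.001, 10.0, 5.0, 1.0, np.sqrt(2.0), 100.0

# Lateral weights: w_ij = f(d_ij), with f(a) = 1/(1+a) for 0 <= a <= r0, else 0
coords = np.array([(r, c) for r in range(N) for c in range(N)], dtype=float)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = np.where(d <= r0, 1.0 / (1.0 + d), 0.0)
np.fill_diagonal(W, 0.0)                            # no self-excitation (assumption)

I = np.zeros((N, N))
I[18, 18] = E                                       # a target neuron
I[5:15, 10] = -E                                    # a wall of obstacle neurons
x = np.zeros(N * N)                                 # initial activity: all zero

x = shunting_step(x, I.ravel(), W, A, B, D, dt)     # one step, as sketched above
```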
Other Applications:
• Sensor-Based Exploration - finding an unpredictable target in a Workspace while avoiding collisions with obstacles.
• Molecular Modeling - studying Protein folding pathways and Ligand binding.
• Medical surgical planning.
• Flexible Objects - planning paths for elastic and deformable objects.
• Manipulation Planning for Optimized Execution - Multi-Arm Manipulation Planning.
• Probabilistic Roadmaps - visibility-based probabilistic roadmaps for motion planning.
References
 Robert Cierniak, Leszek Rutkowski, On Image Compression by Competitive Neural Networks and Optimal Linear Predictors, Signal Processing: Image Communication 15 (2000), 559-565.
 Simon X. Yang, Max Meng, An Efficient Neural Network Approach to Dynamic Robot Motion Planning, Neural Networks 13 (2000), 143-148.