Application - IoT Vision


GROUP SD1501
IOT VISION
ADVISORS: BOB WEINMANN, SAMEE KHAN
CLIENT: APPAREO
Group Members
Dale Bromenshenkel
James Massey
Bradley Hoffmann
INTRODUCTION
• Problem
• How do you give a robot the ability to see?
• Once the robot is able to ‘see’, how does it make decisions based on that?
• How can you share that data with other people/devices?
• Why is it important for a robot to see?
• Many tasks performed by humans are possible only through visual input.
• Many monotonous tasks could be solved by a robot using computer vision.
• Why is it important to share that data?
• Sharing the data allows it to be used by other applications.
• Humans can use that data to improve existing technology and develop new solutions.
REQUIREMENTS
Client Requirements
• Build something we are passionate about.
• IoT device capabilities
Team Requirements
• Computer Vision
• Object Detection
• Image Color Segmentation
• Save coordinates of object to text file
• Robot
• Controlled by a Raspberry Pi
• Controlled wirelessly
• Integrate with other systems to be controlled by application
• Application
• Must be able to control outside scripts/applications
• Must have a physics system
• Must have the ability to make a GUI
TECHNICAL CONTENT
Computer Vision
SIMPLECV
• Overview
• SimpleCV is an open-source computer vision library that makes OpenCV's functionality accessible without a steep learning curve.
• Built-in commands support display augmentation, image/video processing, object tracking, color mapping, color manipulation, data storage and extraction, etc.
• We use this library in our Python scripts.
OBJECT TRACKING
• Combined object tracking techniques
• Blobs
• The blobs command is a prebuilt routine that allows for object detection.
• By using this command we do not have to reinvent the wheel.
• It can detect different geometric shapes, colors, edges, and the data each blob contains.
• Camera/Display
• Allows the user to open and manipulate video through a display
• Image Processing
• HSV and RGB color schemes
• Histograms
• Image segmentation
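SimpleCV's blobs command hides the details, but the core idea — grouping connected foreground pixels into labeled regions — can be illustrated in plain Python. This is a simplified sketch of the concept, not SimpleCV's actual implementation:

```python
from collections import deque

def find_blobs(mask):
    """Label 4-connected regions of 1s in a binary image (list of lists).
    Returns a list of blobs, each a list of (row, col) pixel coordinates."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill from this unvisited foreground pixel.
                blob, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

# Two separate bright regions in a tiny 4x5 thresholded mask:
mask = [[1, 1, 0, 0, 0],
        [1, 0, 0, 0, 1],
        [0, 0, 0, 1, 1],
        [0, 0, 0, 0, 0]]
blobs = find_blobs(mask)
```

Each returned blob's pixel list can then yield a centroid, bounding box, or area, which is the kind of per-blob data the library exposes.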
OBJECT TRACKING
(Flowchart) Start → Camera Activation → Object Detection → three checks:
• Is Ball Detected? → Save Ball (x,y) Coordinates
• Is Circle Detected? → Save Circle (x,y) Coordinates
• Is Square Detected? → Save Square (x,y) Coordinates
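One pass of that flow amounts to: run detection for each shape, and append any hit's coordinates to a text file (one of the team requirements). A hedged sketch — the detection results here are stand-ins for the real SimpleCV calls, and the file name is illustrative:

```python
def save_coordinates(detections, path):
    """Append one 'shape x y' line per detected object to a text file.
    `detections` maps a shape name to an (x, y) tuple, or None if not found."""
    with open(path, "a") as f:
        for shape in ("ball", "circle", "square"):
            coords = detections.get(shape)
            if coords is not None:  # the "Is <shape> Detected?" branch
                f.write("%s %d %d\n" % (shape, coords[0], coords[1]))

# One loop iteration: the camera saw a ball and a square, but no circle.
save_coordinates({"ball": (120, 80), "circle": None, "square": (40, 200)},
                 "coordinates.txt")
```

Appending rather than overwriting lets the file accumulate a track of the object's position over time, which downstream applications can replay.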
COLOR MAPPING/MANIPULATION
• RGB Color Scheme
• In theory, every image is built from the fundamental colors Red, Green, and Blue.
• Each pixel on screen is mapped to Red, Green, and Blue pixel values.
• RGB in SimpleCV
• By importing SimpleCV into our base code we can segment images into their RGB components.
• Filtering on color in combination with our object detection lets us segment an object and obtain its coordinates without the surrounding scene.
COLOR MAPPING/MANIPULATION
• HSV Color Scheme
• Hue, Saturation, Value
• Hue = the color itself
• Saturation = purity of the color (lower saturation means more white mixed in)
• Value = brightness of the color
• HSV, unlike RGB, allows for a wider range of color selection and manipulation.
• No need to segment using Red, Green, and Blue; we can use direct values of color.
• HSV can be used directly in a Python script by importing SimpleCV.
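The advantage is easy to see with Python's standard colorsys module (shown here only as an illustration; SimpleCV handles the conversion internally): a pure red and a washed-out red share the same hue, so thresholding on hue alone finds "red" even as lighting changes the other two channels.

```python
import colorsys

# colorsys works on RGB values scaled to the 0-1 range.
pure_red = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
pale_red = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)  # red mixed with white

print(pure_red)  # hue 0.0, full saturation, full value
print(pale_red)  # same hue 0.0, but lower saturation
```

In RGB all three channels shift between these two colors; in HSV only the saturation does, which is why a single hue threshold is enough to segment an object by color.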
DESIGN AND TESTING OPTIONS
• Controlled conditions
• Perfect light
• Background
• Color of object
• Testing RGB vs. HSV
• Histograms = a map of the image in pixel values
• Testing object tracking through the use of visual effects
• Converting the color scheme to grayscale to refine object detection
• Saving coordinates of objects to a text file
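A histogram in this sense is just a count of how many pixels fall into each intensity range. A minimal version for an already-grayscale image (a sketch of the idea, not SimpleCV's histogram routine):

```python
def histogram(gray, bins=4, max_val=255):
    """Count pixels per intensity bin for a grayscale image (list of rows)."""
    counts = [0] * bins
    for row in gray:
        for px in row:
            # Map 0..max_val into `bins` buckets; max_val lands in the last bin.
            counts[min(px * bins // (max_val + 1), bins - 1)] += 1
    return counts

gray = [[0, 10, 250],
        [128, 200, 64]]
print(histogram(gray))  # [2, 1, 1, 2]
```

Comparing such histograms under different lighting is a quick way to check whether a chosen threshold will still separate the object from the background.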
TECHNICAL CONTENT
Raspberry Pi Robot
OVERVIEW OF ROBOT
• The robot has two powered wheels and a third to balance it.
• The two wheels will each be driven by a DC motor.
• These DC motors will be connected to a pair of H-bridges that allow them to spin both forward and backward.
• The movements will be executed by the Pi, with commands sent from a PC.
• The robot will be completely wireless.
SPECIFICS- MOTORS
• These DC motors draw up to 200 mA at a 6 V input
• They produce 0.4 kg·cm of torque
• The wheels snap right onto the output shaft
• There is a 1:48 gear ratio
SPECIFICS- RASPBERRY PI
• The robot will be controlled by a Raspberry Pi 2
• The GPIO pins of the Raspberry Pi will be the way the software will interact with the
hardware
• The Pi will be controlled wirelessly through SSH with a USB Wi-Fi dongle
RASPBERRY PI GPIO
• The Raspberry Pi is equipped with 40 pins, 17 of which are GPIO
• The motors will be controlled from GPIO pins 17, 18, 22, and 23
• 4 pins are needed because pins 17 and 18 control one motor's forward and reverse, and pins 22 and 23 the other motor's
• The GPIO pins output 3.3 V
RYANTECK MOTOR CONTROL BOARD
• This is a pre-built board specifically for the Raspberry
Pi
• Its purpose is to control two motors and allow them to go forward and backward
• Allows for a separate source of power to drive the
motors, because the Pi cannot supply enough current.
• The board uses the TI chip SN754410NE, which is a
quadruple high-current half-H driver
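Each motor channel of a half-H driver like this can be thought of as a two-input truth table: drive one input high to spin one way, the other to reverse, and match them to stop. This is a conceptual model of one channel, not notation from the TI datasheet:

```python
def motor_state(in1, in2):
    """Direction of one motor given its two H-bridge control inputs."""
    if in1 and not in2:
        return "forward"
    if in2 and not in1:
        return "reverse"
    # Both low (coast) or both high (brake) keep the wheel still.
    return "stop"

print(motor_state(1, 0))  # forward
print(motor_state(0, 1))  # reverse
print(motor_state(0, 0))  # stop
```

This is exactly why the robot needs two GPIO pins per motor: one bit alone cannot express forward, reverse, and stop.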
BODY FOR THE ROBOT
• We will be designing the body of the robot to house the components.
• The body will be 3D printed.
• The top will have two distinguishable marks on it that the camera will be able to
detect for location and direction.
POWER FOR THE ROBOT
Raspberry Pi Power
• The Raspberry Pi requires 5 volts, supplied through a standard micro USB port
• For this we thought a portable USB charging pack would work best
• These charging packs are commonly used for cell phones and output 5 volts
• I had one on hand with a capacity of 2200 mAh
• In testing it lasted over 2 hours before needing a charge
Motor Power
• The motors operate at 6 volts, so the USB power pack probably could have worked, but we decided to go with batteries instead
• 4 AA batteries in series output 6 volts and are a perfect solution to power the motors
CODE FOR THE ROBOT
• The code that is on the Pi is written in python.
• The robot is coded to have 5 main actions: Forward, Reverse, Turn Left, Turn Right,
and Stop.
• We set it up to react to keyboard inputs in such a way that: W=forward, S= Reverse,
A=Left, D=Right, Q= Stop.
• GPIO pins 17 and 18 are for motor 1 and 22 and 23 are for motor 2.
Action       GPIO 17   GPIO 18   GPIO 22   GPIO 23
Forward         1         0         1         0
Reverse         0         1         0         1
Turn Left       0         1         1         0
Turn Right      1         0         0         1
Stop            0         0         0         0
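The table above can be encoded as a lookup from keystroke to pin states. On the Pi the four values would be written out with the RPi.GPIO module; the actual GPIO write is sketched in comments so this runs off the robot (pin numbers and key bindings are from the slides):

```python
PINS = (17, 18, 22, 23)

# Keyboard input -> (GPIO 17, GPIO 18, GPIO 22, GPIO 23), per the table above.
ACTIONS = {
    "w": (1, 0, 1, 0),  # Forward
    "s": (0, 1, 0, 1),  # Reverse
    "a": (0, 1, 1, 0),  # Turn Left
    "d": (1, 0, 0, 1),  # Turn Right
    "q": (0, 0, 0, 0),  # Stop
}

def drive(key):
    """Return the pin states for a keystroke; unknown keys stop the robot."""
    states = ACTIONS.get(key, ACTIONS["q"])
    # On the Pi this loop would actually set the hardware pins, e.g.:
    # for pin, state in zip(PINS, states):
    #     GPIO.output(pin, state)
    return dict(zip(PINS, states))

print(drive("w"))  # {17: 1, 18: 0, 22: 1, 23: 0}
```

Defaulting unknown keys to the Stop row is a small safety choice: if a garbled keystroke arrives over the SSH link, the robot halts instead of continuing its last motion.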
WIRELESS INTERACTION
• The Pi needs to be able to receive commands wirelessly.
• We considered using WebIOPi.
• After researching many approaches, we found a solution that works over the network that both the command-sending PC and the Pi are connected to.
• We do this through an SSH connection using PuTTY, together with the Python module SendKeys.
TECHNICAL CONTENT
Application
APPLICATION - OVERVIEW
• Base Requirements Recap
• Control outside scripts, customizable GUI, physics engine
• Solution
• Unity3D
• Python scripts
• PuTTY
APPLICATION – UNITY3D
• Supports a wide variety of
platforms
• Windows Standalone, Webplayer,
Android, iOS, Linux, etc.
• Ability to easily create three
dimensional scenes
• Can use built in objects or
import custom objects
• Comes with Monodevelop IDE
• Has a customizable UI system
• Buttons, sliders, text, images, etc.
• And much much more!
UNITY - ENVIRONMENT
UNITY - MONODEVELOP
• Supported Languages
• C#, Javascript (Unityscript), Boo
• F#, Visual Basic, .Net, C, C++,Vala
• Allows outside libraries to be imported.
• Loads of other tools available
• Debugging, Source Control, Unit Testing, etc.
MONODEVELOP - IDE
APPLICATION - PYTHON
• Python
• An interpreted, high-level, object-oriented scripting language
• Runs from the console
• Does not need to be compiled
• Can integrate modules (libraries)
APPLICATION - PUTTY
• Client program for the SSH, Telnet, and Rlogin network protocols
• Can access devices over Wi-Fi
• Also supports serial connections
• Used for sending commands to the Raspberry Pi
PROJECT STATUS - OVERVIEW
• 3 different systems
• CV, Robot, Application
• Work on bringing them together
• Testing Area
• Scaled down
• Github Repository
• Code
PROJECT STATUS - TASKS REMAINING
Computer Vision
• Implement hue color control
• Recognize different colors
• Recognize multiple objects
Robot
• Work on collecting more data
• Statistical data
• Make a body
• 3D print / Actobotics
• Add ball catcher arm
• Stop momentum of arm
• Explore Wi-Fi control
• Add the ability to program through Wi-Fi
Application
• Add GUI control methods
• Robot
• Settings/Modes
• Implement physics simulation
• Add ability to create more objects in scene
• Add additional platforms
• Web, Android
PROJECT STATUS - TIMELINE
BUDGET
Budget
Allotted Budget: ???

Purchases
Part                      Quantity   Retail Cost   Purchased Cost   Notes
CC3200 Eval Board         1          30            30
HDMI Pi 5" Display        1          85            65               800x480 IPS, touchscreen
CanaKit Raspberry Pi B+   1          80            0                Wi-Fi adapter, GPIO, 8GB SD Card, power cable, HDMI
Total                                              $95

Remaining Purchases
Part                      Quantity   Retail Cost   Expected Cost    Notes
Logitech HD Pro Webcam    1          68.6          68.6             USB, 1080p, widescreen
SUMMARY SLIDE
Accomplished
• We have created a system which can
take in camera input, process that data,
and then use that data in an application
and robot.
Future
• We hope to be able to simulate the
environment more, and have the robot
be even more autonomous in its
decision making.
QUESTIONS?