
Emotional Intelligence
Vivian Tseng, Matt Palmer, Jonathan Fouk
Group #41
Introduction
▪ Some people find it harder to communicate their emotions to others
▪ Provide an interface to help them identify their emotions and communicate them clearly to others
Overview
▪ Wearable Device on Wrist
  o Similar to a watch
▪ Software
  o Speech Processing and Classification
▪ Hardware
  o Sensors
  o Digital Signal Processor (DSP)
  o Display
High Level Block Diagram
Block Diagram: Data Collection and Processing
Block Diagram: User Interface
Block Diagram: Power System
Power
▪ Requirements
  o 3.3 V ± 0.2 V
  o 1.7 V ± 0.1 V
  o 0.1 Vpp Voltage Ripple at 100 mA
  o 500 mA Charge Current
  o 100 mA Maximum Current
  o Charge LiPo Battery
Microphone and Audio Amplifier
▪ Requirements
  o 30 dB Gain
  o Output Impedance of 400 Ohm
  o Filter Transition Band from 8 kHz to 10 kHz of -30 dB
  o Less than 1 ms of Time Delay for all Passband Frequencies
  o Output Offset of 1.65 V
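As a rough sanity check on the 30 dB gain requirement, the sketch below converts it to a linear voltage gain and to the feedback-resistor ratio a non-inverting op-amp stage would need. The non-inverting topology and the resistor values are assumptions for illustration only; the slides state just the 30 dB requirement.

```python
# Convert the 30 dB gain requirement to a linear gain and to a
# non-inverting op-amp resistor ratio (topology and values assumed).
gain_db = 30.0
gain_linear = 10 ** (gain_db / 20.0)   # ~31.6 V/V
rf_over_rg = gain_linear - 1.0         # A = 1 + Rf/Rg for a non-inverting stage

rg = 1_000.0                           # pick Rg = 1 kOhm (assumed)
rf = rf_over_rg * rg                   # Rf ~ 30.6 kOhm
print(f"linear gain {gain_linear:.1f} V/V, Rf ~ {rf / 1000:.1f} kOhm for Rg = 1 kOhm")
```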
Simulation Results Frequency Sweep
Simulation Results Outputs
Power, Microphone, and IO Implementation
Sensors
▪ Temperature Sensing
  o Thermistor
  o Input to Analog-to-Digital Converter (ADC) pin of microcontroller (MCU)
  o Calculate temperature on MCU (see the sketch after this list)
▪ Heart Rate Monitor
  o Analog front-end chip
  o Use green LED and photodiode to collect data
  o Communicate with the microcontroller through Serial Peripheral Interface (SPI)
  o Calculate beats per minute on MCU
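A minimal sketch of the temperature calculation referenced above, assuming a 10 kOhm NTC thermistor on the low side of a resistor divider, a 12-bit ADC, a 3.3 V reference, and the beta-parameter thermistor model. The part values are illustrative, and the actual firmware runs on the MCU; Python is used here only for clarity.

```python
import math

# Assumed values; the slides do not specify the thermistor or divider.
V_REF = 3.3          # ADC reference voltage (V)
ADC_MAX = 4095       # 12-bit ADC full scale
R_FIXED = 10_000.0   # fixed divider resistor (ohm)
R0 = 10_000.0        # thermistor resistance at 25 C (ohm)
BETA = 3950.0        # thermistor beta constant (K)
T0 = 298.15          # 25 C in kelvin

def adc_to_celsius(adc_code: int) -> float:
    """Convert a raw ADC reading from the thermistor divider to Celsius."""
    v = adc_code / ADC_MAX * V_REF
    # Thermistor on the low side: V = V_REF * R_th / (R_FIXED + R_th)
    r_th = R_FIXED * v / (V_REF - v)
    # Beta-parameter model of an NTC thermistor
    inv_t = 1.0 / T0 + math.log(r_th / R0) / BETA
    return 1.0 / inv_t - 273.15

print(adc_to_celsius(2048))  # roughly 25 C with the assumed part values
```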
User Interface
▪ Two display methods, decided by the user
  o LCD screen display: connected to the microcontroller and displays the determined emotion
  o LED light display: different LED colors show the user's emotion
▪ USB Charging
Feature Extraction
Mel-Frequency Cepstral Coefficients
▪ Take the Discrete Fourier Transform of the signal
▪ Create Mel filterbanks
▪ Take the log of the filterbank energies
▪ Take the Discrete Cosine Transform of the log filterbank energies
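A minimal NumPy sketch of the four steps above for a single windowed speech frame. The 16 kHz sample rate, 26 filters, and 13 kept coefficients are assumed values, not taken from the slides, and the deployed feature extraction runs on the DSP.

```python
import numpy as np
from scipy.fft import rfft, dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sample_rate=16000, n_filters=26, n_coeffs=13):
    """MFCCs for one windowed frame, following the four slide steps."""
    # 1. Discrete Fourier Transform -> power spectrum
    spectrum = np.abs(rfft(frame)) ** 2
    n_bins = len(spectrum)

    # 2. Mel filterbank: triangular filters spaced evenly on the mel scale
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bin_idx = np.floor(mel_to_hz(mel_points) / (sample_rate / 2) * (n_bins - 1)).astype(int)
    filterbank = np.zeros((n_filters, n_bins))
    for i in range(n_filters):
        lo, mid, hi = bin_idx[i], bin_idx[i + 1], bin_idx[i + 2]
        if mid > lo:
            filterbank[i, lo:mid] = np.linspace(0, 1, mid - lo, endpoint=False)
        if hi > mid:
            filterbank[i, mid:hi] = np.linspace(1, 0, hi - mid, endpoint=False)
    energies = filterbank @ spectrum

    # 3. Log of filterbank energies (floor avoids log(0))
    log_energies = np.log(np.maximum(energies, 1e-10))

    # 4. Discrete Cosine Transform; keep the first few coefficients
    return dct(log_energies, type=2, norm="ortho")[:n_coeffs]

# Example: one 25 ms frame of noise at 16 kHz
coeffs = mfcc(np.random.randn(400))
```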
Pitch
▪ Autocorrelation
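A minimal sketch of autocorrelation-based pitch estimation, assuming a 16 kHz sample rate and a typical 75 to 400 Hz speech pitch range; both values are assumptions, not taken from the slides.

```python
import numpy as np

def pitch_autocorrelation(frame, sample_rate=16000, f_min=75.0, f_max=400.0):
    """Estimate pitch (F0) of a voiced frame from the autocorrelation peak
    inside the expected speech pitch range."""
    frame = frame - np.mean(frame)
    autocorr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search lags corresponding to f_max..f_min (shorter lag = higher pitch)
    lag_min = int(sample_rate / f_max)
    lag_max = int(sample_rate / f_min)
    lag = lag_min + np.argmax(autocorr[lag_min:lag_max])
    return sample_rate / lag

# Example: a 200 Hz sine should come back near 200 Hz
t = np.arange(0, 0.03, 1 / 16000)
print(pitch_autocorrelation(np.sin(2 * np.pi * 200 * t)))
```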
Classification
Support Vector Machines (SVM)
▪ Finds the maximum-margin hyperplane between two classes
Multi-class Problem
▪ More than two classes of emotions
▪ One vs One strategy
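A minimal scikit-learn sketch of one-vs-one SVM classification on a hypothetical feature matrix; the feature dimension, emotion labels, and RBF kernel are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one row per utterance (e.g. MFCCs, pitch, biosensor
# statistics), with an emotion label per row.
X = np.random.randn(120, 14)
y = np.random.choice(["happy", "sad", "angry", "neutral"], size=120)

# SVC handles multi-class problems with a one-vs-one scheme: one binary SVM
# per pair of classes, with the pairwise classifiers voting on the emotion.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", decision_function_shape="ovo"))
clf.fit(X, y)
print(clf.predict(X[:3]))
```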
Classification
Cross-validation method
▪ Split the dataset into v subsets of equal size
▪ Test each subset against a model trained on the other subsets
▪ For each distinct emotion, accuracy ~41%
Classify by valence and arousal
▪ Valence = positivity of feelings
▪ Arousal = excitability of feelings
Problem: unequal training sets
▪ Classifier only voted for one class
After equalizing the training sets
▪ Valence accuracy ~63%
▪ Arousal accuracy ~75%
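A minimal sketch of v-fold cross-validation on hypothetical valence labels, with v = 5 assumed. Setting class_weight="balanced" is shown as one possible way to compensate for unequal training sets; the slides do not say how the sets were actually equalized, so treat it as a stand-in technique.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features and binary valence labels (0 = negative, 1 = positive)
X = np.random.randn(200, 14)
y = np.random.randint(0, 2, size=200)

# class_weight="balanced" reweights samples inversely to class frequency,
# one way of handling unequal training sets.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))

# v-fold cross-validation: each fold is held out once and scored against a
# model trained on the remaining folds.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(scores.mean())
```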
Digital Signal Processor
▪ Pulls in speech data from the microphone
  o Analog-to-Digital Converter (ADC)
  o Feature extraction
▪ Sends data through Serial Peripheral Interface (SPI) to the MCU
  o MCU performs classification on speech and biosensor data
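A minimal sketch of how an extracted feature vector might be quantized and framed for the SPI transfer to the MCU. The start byte and the Q7.8 fixed-point format are hypothetical choices, and the real firmware runs in C on the DSP and MCU; Python is used here only to illustrate the framing.

```python
import struct

START_BYTE = 0xA5   # hypothetical frame delimiter
Q_FRAC_BITS = 8     # assumed Q7.8 fixed-point format (signed 16-bit)

def pack_features(features):
    """Quantize a feature vector and pack it into an SPI payload."""
    fixed = [max(-32768, min(32767, int(round(f * (1 << Q_FRAC_BITS))))) for f in features]
    return bytes([START_BYTE]) + struct.pack("<%dh" % len(fixed), *fixed)

def unpack_features(payload):
    """Inverse operation on the MCU side (shown in Python for clarity)."""
    count = (len(payload) - 1) // 2
    fixed = struct.unpack("<%dh" % count, payload[1:])
    return [f / (1 << Q_FRAC_BITS) for f in fixed]

frame = pack_features([1.25, -0.5, 3.0])
print(unpack_features(frame))  # [1.25, -0.5, 3.0]
```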
Successes
▪ Determine correct emotion from speech with higher than 65% confidence rate
▪ Display module working
  o LEDs and LCD
▪ Temperature sensor (ADC to MCU)
▪ Amplifying circuit for microphone
▪ Power charger and converter
Future Steps
▪ Fix SPI communication between modules
▪ Replace microcontroller LaunchPad with a surface-mount microcontroller
▪ Add feature extraction to DSP
▪ Extensive testing with different users
▪ Re-design IO board and controller board
  o Design wearable package
▪ Implement with sensitive microphone
  o Adjust op-amp gain
Conclusion
▪ Learning
  o Speech signal processing
  o Circuit design
  o Power system design
▪ Reflections
  o Use fewer surface-mount parts for prototyping
  o Have access to speech databases
  o Expanded soldering and design skills
Thank you!
Questions?