Designing Sound for Interactive Dance Performance

Motion in Sound:
Designing Sound for Interactive Dance Performance
Dr. Dan Hosken
Associate Professor of Music
California State University, Northridge
Presented at:
ATMI 2006
San Antonio, TX
September 16, 2006
Purpose

• Present a somewhat simplified and useful approach to creating for the interactive dance medium
• Facilitate collaboration between students of dance and students of music
Objectives:

• Give an overview of the hardware and software components of a camera-based interactive dance/music system
• Present a loose taxonomy of motion parameters and mapping types
• Suggest some useful mappings between motion parameters and sound element parameters
• Illustrate these mappings using examples of my recent work with the Palindrome IMPG
General System Overview

• Camera trained on dancer(s) is connected to a computer
• Video analysis software abstracts motion data in realtime
• Motion data are passed to sound software
• Sound software maps incoming motion data to sound element parameters in realtime
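To make the pipeline concrete, here is a minimal sketch of the sound software's receiving end, written in Python with the python-osc library, assuming the video-analysis machine sends normalized floats over OSC. The addresses, ports, and parameter ranges are invented for illustration; in practice this role is played by Max/MSP, Pd, SuperCollider, or similar (see the lists below).

```python
# Minimal sketch of the motion-data -> sound-parameter bridge.
# Assumes the video-analysis machine sends OSC floats in 0..1;
# addresses and ranges here are illustrative, not EyeCon's actual ones.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 57120)  # e.g., a synthesis engine

def on_height(address, value):
    # Height (0..1) -> FM modulation frequency (20..2000 Hz), exponential
    mod_freq = 20.0 * (2000.0 / 20.0) ** value
    synth.send_message("/drone/modFreq", mod_freq)

def on_dynamic(address, value):
    # Dynamic (0..1) -> granular playback rate (0.25..4), linear in octaves
    rate = 2.0 ** (value * 4.0 - 2.0)
    synth.send_message("/grains/rate", rate)

dispatcher = Dispatcher()
dispatcher.map("/motion/height", on_height)
dispatcher.map("/motion/dynamic", on_dynamic)

BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```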
Overview w/ bad clipart
[Diagram: video computer connected to audio computer over Ethernet]
Sound Generation Software

• Max/MSP (Cycling ’74)
• Pd (Miller Puckette)—free!
• SuperCollider (J. McCartney)—free!
• Reaktor (Native Instruments)
• …and any software that can receive data and produce sound in realtime
Video Analysis Software
• EyeCon (Frieder Weiss)
• EyesWeb (eyesweb.org)—free!
• Jitter (Cycling ’74)
• softVNS (David Rokeby)
• Cyclops (Eric Singer/Cycling ’74)
• TapTools (Electrotap)
• cv.jit (Jean-Marc Pelletier)
• Eyes (Rob Lovel)—free!
Objectives (redux):

• Give an overview of the hardware and software components of a camera-based interactive dance/music system
• Present a loose taxonomy of motion parameters and mapping types
• Suggest some useful mappings between motion parameters and sound element parameters
• Illustrate these mappings using examples of my recent work with the Palindrome IMPG
Definitions (1)

• Motion Parameter: made up of specified data abstracted from part or all of the video, e.g.,
  - Height
  - Width
  - Dynamic
• Sound Element: a distinct, coherent sonic behavior created by one or more synthesis or processing techniques, e.g.,
  - A low drone created by FM synthesis
  - Time-stretched text created by granulation
  - Percussive patterns created by sample playback
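To make the first sound element concrete, here is a minimal offline sketch of a low drone from a simple FM pair, written in plain Python/NumPy and rendered to a WAV file. The carrier and modulator frequencies and the index are arbitrary choices for illustration, not values from the pieces discussed later.

```python
# A low FM drone: carrier phase-modulated by a single sine (a "simple FM pair").
import numpy as np
import wave

sr = 44100
t = np.arange(int(sr * 4.0)) / sr   # 4 seconds of samples
carrier = 55.0                      # low A
mod_freq = 110.0                    # modulation frequency
index = 3.0                         # modulation index (controls brightness)

drone = np.sin(2 * np.pi * carrier * t + index * np.sin(2 * np.pi * mod_freq * t))
pcm = (0.3 * drone * 32767).astype(np.int16)

with wave.open("fm_drone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(sr)
    f.writeframes(pcm.tobytes())
```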
Definitions (2)

• Sound Element Parameter: a parameter of a synthesis/processing technique, e.g.,
  - Modulation frequency of a simple FM pair
  - Grain size of a granulated sound file
  - Ir/regularity of tempo in a rhythmic pattern
• Mapping: the connection between a motion parameter and a sound element parameter, e.g.,
  - Height → modulation frequency of FM
  - Width → grain size of granulated sound file
  - Dynamic → irregularity of tempo
Definitions (3)

• Scene: a group of motion parameters, sound elements, and mappings between them
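One hypothetical way to represent a scene in code is as a bundle of named mappings that can be swapped in and out as a piece progresses. The structure below is my own sketch of the idea, not EyeCon's internal model; the element names and transfer functions are invented.

```python
# A scene as a data structure: motion parameter -> (sound element, transfer fn).
scene_3 = {
    "height":  ("fm_drone",   lambda v: 20.0 * (2000.0 / 20.0) ** v),
    "width":   ("granulator", lambda v: 0.01 + v * 0.24),
    "dynamic": ("pattern",    lambda v: 0.5 * (1.0 + 0.5 * v)),
}

def handle_motion(scene, param, value):
    """Route one incoming motion value through the current scene."""
    if param in scene:
        element, transfer = scene[param]
        print(element, transfer(value))  # stand-in for sending to the synth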
EyeCon Interface (1)

• Field: can measure height or width or dynamic or…
• Touchlines: detect crossing and position on line

EyeCon Interface (2)

• Fields and lines are mapped to MIDI data (or OSC)
• Sequencer steps through “scenes”
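On the MIDI side, the receiving software sees ordinary controller and program-change messages. The sketch below uses the Python mido library to suggest how such a stream might be handled; the CC numbers, their meanings, and the scene-stepping convention are assumptions for illustration, since in practice they are whatever the EyeCon scene assigns.

```python
# Sketch: receiving slide-described MIDI output with the mido library.
import mido

scenes = ["scene_1", "scene_2", "scene_3"]
current = 0

with mido.open_input() as port:            # default MIDI input
    for msg in port:
        if msg.type == "program_change":   # sequencer stepping scenes
            current = msg.program % len(scenes)
        elif msg.type == "control_change":
            value = msg.value / 127.0      # normalize to 0..1
            if msg.control == 1:           # e.g., a "height" field
                print(scenes[current], "height", value)
            elif msg.control == 2:         # e.g., a touchline position
                print(scenes[current], "line pos", value)
```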
Taxonomy of Motion Parameters

• Body Parameters: position independent, “attached” to body
  - Height
  - Width
  - Dynamic
• Stage Parameters: position dependent, “attached” to stage
  - Left-right position
  - Touchlines
  - Extremely narrow fields
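For intuition about where such numbers come from, below is a rough sketch of how body and stage parameters might be abstracted from a binary silhouette image (e.g., after background subtraction). The frame-difference definition of "dynamic" and the normalization choices are my assumptions about the general approach, not EyeCon's actual algorithms.

```python
import numpy as np

def motion_parameters(frame, prev_frame):
    """Abstract simple parameters from a binary silhouette (2-D bool array).

    Returns body parameters (height, width, dynamic) and one stage
    parameter (left-right position), all normalized to 0..1.
    """
    rows = np.any(frame, axis=1)
    cols = np.any(frame, axis=0)
    if not rows.any():                              # nobody in frame
        return {"height": 0.0, "width": 0.0, "dynamic": 0.0, "x_pos": 0.0}

    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    height = (bottom - top + 1) / frame.shape[0]    # body parameter
    width = (right - left + 1) / frame.shape[1]     # body parameter

    # "Dynamic": fraction of silhouette pixels changed since the last frame
    dynamic = np.mean(frame ^ prev_frame)

    # Stage parameter: horizontal centroid of the silhouette
    x_pos = np.mean(np.where(cols)[0]) / frame.shape[1]

    return {"height": height, "width": width,
            "dynamic": float(dynamic), "x_pos": float(x_pos)}
```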
Parameter Type Examples

• Stage Parameter (position): Scene 3 from Brother-Sister Solo
  - Julia Eisele, dancer/choreographer
  - Stuttgart, June 2005
• Body Parameter (Dynamic): Conversation
  - Robert Wechsler, dancer/choreographer
  - Julia Eisele, dancer
  - Stuttgart, June 2005
Primary/Secondary Mappings

• Primary Mapping: controls the dominant sonic feature
• Secondary Mapping: controls a subordinate sonic feature
• Example: Scene 3 from Brother-Sister Solo
  - Primary mapping: position → position in sound “landscape”
  - Secondary mapping: dynamic → disturbance of drone
  - Secondary mapping: width → loop size/speed of segment within sound file
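One way to read this layering in code: a single scene routes several motion parameters at once, with the primary mapping given the widest audible range and the secondary mappings scaled down. The sketch below is again hypothetical, with invented OSC addresses standing in for the sound elements of Scene 3.

```python
# Hypothetical routing for a scene with one primary and two secondary mappings.
from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 57120)

def scene_3(position, dynamic, width):
    """All inputs normalized 0..1; addresses are illustrative."""
    # Primary: stage position sweeps through the sound "landscape"
    synth.send_message("/landscape/position", position)
    # Secondary: dynamic perturbs the drone, scaled down to stay subordinate
    synth.send_message("/drone/disturbance", 0.3 * dynamic)
    # Secondary: width sets the loop size of a segment within the sound file
    synth.send_message("/loop/size", 0.05 + width * 0.5)
```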
Sound Element Mappings (1)

• A Human Conversation (in progress)
  - Scenes 7-8: Dynamic → Granulated Text (playback rate)
  - Scene 9:
    · Dynamic (left) → Granulated Text (playback rate)
    · Dynamic (right) → Granulated Text (playback rate)
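Since playback rate recurs as the controlled parameter here, the sketch below shows a naive offline granulator in NumPy in which rate, grain duration, file position, and density are all exposed in the way these mappings assume. It is a toy illustration, not the patch used in the piece. Mapping a dancer's dynamic onto rate (say, via the transfer functions sketched earlier) then turns faster motion directly into faster, higher playback of the text.

```python
import numpy as np

def granulate(source, sr, out_dur=2.0, grain_dur=0.08,
              position=0.5, rate=1.0, density=40.0):
    """Overlap-add grains read from near `position` (0..1) in `source`.

    rate: per-grain playback rate; density: grains per second.
    """
    out = np.zeros(int(sr * out_dur))
    grain_len = int(sr * grain_dur)
    env = np.hanning(grain_len)                     # smooth grain envelope
    center = position * (len(source) - grain_len * rate - 1)
    for _ in range(int(out_dur * density)):
        onset = np.random.randint(0, len(out) - grain_len)
        start = center + np.random.uniform(-1, 1) * sr * 0.02  # slight jitter
        start = np.clip(start, 0, len(source) - grain_len * rate - 1)
        idx = start + np.arange(grain_len) * rate   # resample at `rate`
        grain = np.interp(idx, np.arange(len(source)), source)
        out[onset:onset + grain_len] += grain * env
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out
```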
A Human Conversation

• Robert Wechsler (Palindrome), choreographer/dancer
• J’aime Morrison (CSUN), choreographer/dancer
• Dan Hosken, composer and sound programmer
• Work session, CSUN, June 23, 2006
Sound Element Mappings (2)

• Perceivable Bodies (Emily Fernandez)
  - Scene 3a:
    · Position → Granulated Text (position in file) [Primary]
    · Width → Granulated Text (grain duration)
    · Dynamic → Low FM Drone (mod frequency)
  - Scene 3b:
    · Position → Phase Voc File (position in file) [Primary]
    · Width → Phase Voc File (loop length/rate)
    · Dynamic → Low FM Drone (mod frequency)
  - Scene 4:
    · Dynamic → Granulated Noise (density) [Primary]
    · Dynamic → Granulated Noise (position in file)
Perceivable Bodies

• Emily Fernandez, choreographer/dancer
• Frieder Weiss, projections and interactive programming
• Dan Hosken, composer and sound programmer
• World premiere at Connecticut College, April 1, 2006
[email protected]

Examples shown can be found at:
http://www.csun.edu/~dwh50750/Papers-Presentations/
Full pieces can be found at:
http://www.csun.edu/~dwh50750/Music/
Other examples of Palindrome’s work:
http://www.palindrome.de/
Max/MSP Screenshot
Pd Screenshot
Reaktor Screenshots
EyeCon Screenshot
EyesWeb Screenshot
Jitter Screenshot
Cyclops Screenshot