Compositional Technology

Module Aims
• To provide the student with the tools to explore audio, music and
soundscape installations through the design and implementation of
computer-driven audio interfaces, and to implement the computer as a
creative compositional tool.
Expected Learning Outcomes
At the end of this module, students will be able to:
• 1. Analyse and evaluate the different types of interfaces required to explore
sound design.
• 2. Define a set of principles and tools to use in order to effectively adapt to the
changing audio environment.
• 3. Critically analyse current compositional trends within the music and
associated industries.
Transferable/Key Skills and other attributes:
• Effective group work
• The capability to analyse and adapt to changing technology trends
• Time management and multitasking
• Communication skills
Assessment
• Portfolio
• The student will develop a portfolio of compositional work based on
an interface, comprising a minimum of 8 minutes of audio (or equivalent work).
• The portfolio should explore the capabilities of audio interface software
such as Max/MSP/Jitter, Reaktor, Csound or Reason. Use and creation of
additional hardware is permitted.
• This can take the form of ‘fixed’ audio, notated composition, live
performance, installation or a combination of the above as agreed.
Bibliography
• Blum, F. (2007) Digital Interactive Installations. VDM Verlag.
• Emmerson, S. (2008) Living Electronic Music. Ashgate.
• Gibbs, T. (2007) The Fundamentals of Sonic Art & Sound Design. AVA Publishing.
• Holmes, T. (2008) Electronic and Experimental Music: Technology, Music, and Culture. Routledge.
• Licht, A. (2007) Sound Art: Beyond Music, Between Categories. Rizzoli International Publications.
• Nyman, M. (1999) Experimental Music: Cage and Beyond (Music in the Twentieth Century). Cambridge University Press.
• Roads, C. (1996) The Computer Music Tutorial. MIT Press.
• Truax, B. World Soundscape Project. http://www.sfu.ca/~truax/wsp.html
• Winkler, T. (1998) Composing Interactive Music: Techniques and Ideas Using Max. MIT Press.
• Wishart, T. (1997) On Sonic Art. Routledge.
• Computer Music Journal, MIT Press.
• Organised Sound Journal.
Philosophy
• “Our musical alphabet must be enriched. We also need new instruments very badly. . . . In my own
works I have always felt the need of new mediums of expression . . . which can lend themselves to
every expression of thought and can keep up with thought.”
Edgard Varèse: New York Morning Telegraph 1916
• “Perhaps the time is not far off when a composer will be able to represent through recording, music
specifically composed for the gramophone”
Andre Coeuroy: Panorama of Contemporary Music 1928
• “The rediscovery of the musicality of sound in noise and in language, and the reunification of music,
noise and language in order to obtain a unity of material: that is one of the chief artistic tasks of radio.”
Rudolf Arnheim: Radio 1936
• “When I proposed the term ‘musique concrète,’ I intended … to point out an opposition with the way
musical work usually goes. Instead of notating musical ideas on paper with the symbols of solfege and
entrusting their realization to well-known instruments, the question was to collect concrete sounds,
wherever they came from, and to abstract the musical values they were potentially containing.”
Pierre Schaeffer 1949
Technical
• Much of the development of modern music depends upon the
development of electronic sound recording; in some ways the use of
concrete sounds is what defines the genre. The development of radio
gave rise to great leaps in microphone and loudspeaker technology, as
well as simple sound processing such as volume control and reverb. The
development of magnetic tape made it far easier to edit and manipulate
sounds in a physically more tangible way. The rise of synthesis and the
standardization of the MIDI protocol made creating and controlling 'new'
sounds much easier. Since then the use of computers has revolutionized
the way that we can control and store large amounts of data in order to
create sounds and structures that would have been impossible a few
decades ago.
• To view electronic music purely as a by-product of a technical revolution
would be over-simplistic. There are many creative concerns that are
shared with acoustic music. The concept of orchestration to provide
sonic clarity and development has direct parallels with audio mixing. The
use of space or panning as a compositional tool appears throughout the
electronic body of work, but also runs through many notated works such
as Gabrieli's antiphonal choral pieces. The shift in focus from pitch to
timbre can also be seen paralleled in Varèse's orchestral works, e.g.
Amériques, and in many of the pieces by John Cage, such as Sonatas and
Interludes (for prepared piano).
Stylistic Preoccupations
• Sounds
Great emphasis is placed upon the collection and quality of sounds. A
composer will be expected, wherever possible, to record and collect the
sounds that are used in a composition. In many cases using other people's
audio is regarded as somewhat unethical.
• Compositional Continuum
• The use of pitch is of fundamental importance to most types of music.
Electroacoustic music is no exception, but there are some important
differences in the way that we understand and use pitch, largely due
to the unique set of tools available in the studio.
• Most western music is dominated by the idea of notes and rhythms. Pitch
and duration are easily quantifiable, either in conversation or in notation, by
the regular division and codification of these parameters. With
electroacoustic music we are free to experiment with a more holistic,
continuous understanding.
• At a given time a sound can be quantified and defined by its pitch, volume
and timbre. This is akin to taking a snapshot, so it disregards issues such as
duration, relative loudness and harmonicity. Pitch, volume and timbre can be
thought of as a three-dimensional space in which a sound could be plotted. In
the case of clearly defined 'notated' music there are a number of set
possibilities, e.g. the chromatic scale, pp-ff and orchestral instruments.
On the Rubik's Cube analogy, these sounds would occupy only the black
lines of the potential space. With electronic control, and with no need to
define the space for a performer, we are free to use the entire
pitch-volume-timbre space as a continuum. Thus acousmatic music rarely
uses 'notes' and 'harmony' in the traditional sense, as it is one of the few
musical forms that does not have to! (A rough sketch of this continuum in
code follows below.)
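Max is a graphical environment, so the continuum cannot be shown as a patch listing here; instead, below is a minimal Python/NumPy sketch (an illustrative assumption, not software required by the module) in which pitch, volume and a crude one-dimensional stand-in for timbre all vary as continuous curves rather than as choices from a discrete set.

```python
import numpy as np

SR = 44100                                # sample rate in Hz
dur = 4.0                                 # duration in seconds
t = np.linspace(0, dur, int(SR * dur), endpoint=False)

# Each parameter is a continuous curve, not a pick from a discrete set
# of notes, dynamic marks and instruments.
pitch = 110.0 * 2 ** (3 * t / dur)        # glissando: 110 Hz rising three octaves
volume = np.sin(np.pi * t / dur)          # one swell from silence and back
bright = t / dur                          # 'timbre': 0 = pure sine, 1 = buzzy

# Integrate the instantaneous frequency to get phase, since pitch varies.
phase = 2 * np.pi * np.cumsum(pitch) / SR

# Crossfade a sine into a soft-clipped, harmonically richer version of
# itself as the single 'timbre' dimension.
sig = (1 - bright) * np.sin(phase) + bright * np.tanh(5 * np.sin(phase))
sig = 0.5 * sig * volume                  # apply the volume curve
```

Real timbre is of course many-dimensional; collapsing it to one 'brightness' control is purely for the sake of a short example.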
• Space
• We exist in a spatial environment and our understanding of it is
important to our day to day survival. Space is an innate part of our
understanding of most music, even if we are not always aware of it.
Listening to a symphony orchestra or any ensemble is an innately
spatial experience; the composition is diffused based on a complex
set of rules. These are governed by a pool of possible pitch, duration,
volume and timbre choices. More simply, the music is placed
throughout the ensemble based on the instruments performing it
(consider the motion of left to right and treble to bass in a
conventionally arranged string section). In electroacoustic
composition we have 3 possible planes of spatialization for a given
sound:
• Horizontal plane (simple left/right balance based largely on amplitude)
• Virtual plane (depth, created by psycho-acoustic effects employing processes
such as reverb and EQ)
• Implied plane (height, largely created by frequency and the construct of ‘low
and high’ sounds)
• In addition to this we have 2 basic states of a sound object within that space:
• Static
• Dynamic (movement of a sound in one or more planes)
• At a higher level of classification we have concepts such as the predictable
motion of a sound within space, moving in a spiral for instance. It is also
possible to consider the interaction of sound objects with each other in
the stereo space, as well as the linking of spatial motion to implied musical
gesture; however, we will focus on the lower-level concepts initially.
No matter how complex they are to describe, every sound has pitch,
timbre and volume characteristics. It also has a place within the
stereo field, whether you have specifically defined one or not. After all, if
you can hear it, it must come from somewhere!
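Of the three planes, the horizontal one is simplest to demonstrate in code. Below is a minimal Python/NumPy sketch (an illustrative analogy, not module software) of equal-power amplitude panning, covering both a static and a dynamic sound object; depth and height cues would need reverb, EQ and spectral shaping, which are omitted here.

```python
import numpy as np

def pan_stereo(mono, pos):
    """Equal-power pan. pos runs from -1 (hard left) to +1 (hard right)
    and may be a scalar (static source) or an array (dynamic source)."""
    angle = (pos + 1) * np.pi / 4            # map -1..1 onto 0..pi/2
    left = mono * np.cos(angle)              # cos^2 + sin^2 = 1, so the
    right = mono * np.sin(angle)             # total power stays constant
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

SR = 44100
t = np.linspace(0, 2.0, SR * 2, endpoint=False)
mono = 0.3 * np.sin(2 * np.pi * 440 * t)

static = pan_stereo(mono, -0.5)                          # fixed, left of centre
dynamic = pan_stereo(mono, np.sin(2 * np.pi * 0.5 * t))  # sweeps across the field
```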
• Texture
• Texture is a fundamental part of the way we understand music. In
conventional music, texture is often thought of as a function of a number
of factors, mainly harmony/polyphony, rhythm and orchestration. In a
sense this holds true for acousmatic music, in that texture is the
comprehension of changes in the music over time. These changes can be in
pitch, volume or timbre and can be relative to other parts of the piece or
relative to our expectations of a work.
• Texture is not really a continuous parameter in the way that pitch is,
although the objects it describes can be seen as such. In everyday usage
'texture' or 'textured' usually refers to sound objects or structures with a
lot of changing internal information or perceptible internal detail. These
are what we would commonly call 'coarse' or 'rough' textures. Obviously
the term texture can also refer to smooth textures, although this link is
not as strong, in that when we describe something as textured we tend
to think of something 'at least slightly bumpy'.
[Diagram: a continuum running from rough, through textured, to smooth (no discernible texture?).]
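One simple reading of the rough/smooth distinction is the amount of changing internal detail a sound carries over time. The Python/NumPy sketch below (an analogy chosen for illustration, with arbitrary numbers) makes a steady sine 'coarse' by applying fast, irregular amplitude changes.

```python
import numpy as np

SR = 44100
t = np.linspace(0, 2.0, SR * 2, endpoint=False)
carrier = np.sin(2 * np.pi * 220 * t)

# 'Smooth': a steady tone with no changing internal detail.
smooth = 0.3 * carrier

# 'Rough'/'coarse': fast, irregular level changes add perceptible
# internal detail (roughly 44 random level steps per second here).
rng = np.random.default_rng(0)
steps = len(t) // 1000 + 1
jitter = rng.uniform(0.4, 1.0, size=steps)
rough = 0.3 * carrier * np.repeat(jitter, 1000)[:len(t)]
```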
Performance
• Given the fixed nature of acousmatic music it is hard to define a way of
'performing' it. As a result, in concerts the music is diffused over a large number
of loudspeakers by a performer at a mixing desk. If it is well done, the act of diffusion
will naturally exaggerate many of the spatial features of the music. A large set-up
may consist of over 60 speakers. In some cases these may be arranged with
speakers of differing frequency responses in different places, as at the GRM
Acousmonium (Paris). This will colour the sound and create an extra layer of
spatial motion as the audio moves to the specific frequencies of each given
speaker. This particular arrangement is rather unusual. More common, but no less
striking, is the multi-layered approach taken in spaces such as SARC (Belfast),
where speakers are arranged in a number of vertical layers. In the case of SARC,
the auditorium floor is a grille with speakers and subs in the basement below, and
then a set of speakers at floor level, around ear level, overhead and in the
roof (this can be changed to fit specific works). No matter which approach is
taken, it is worth being aware that stereo width, distance and height can all alter.
• “Acousmatic, what is it?”
“Acousmate
n. (from the Greek Akousma, what is heard). Imaginary sound, or of which the cause is not
seen.
In 1955, the writer and poet Jérôme Peignot, at the beginning of musique concrète, used
the adjective acousmatic, meaning ‘a sound that we can hear without knowing its cause’,
to designate “the distance that separates a sound from its origins” by obscuring behind the
impassivity of the loudspeaker any visual element that may be connected to it. In 1966,
Pierre Schaeffer mused about giving his Traité des objets musicaux (Treatise on Musical
Objects) the title “Traité d’acousmatique” (Treatise on Acousmatic). Finally, around 1974,
to mark the difference and to avoid any confusion with incidental or transformed musical
instruments (ondes Martenot, electric guitars, synthesizers, real-time digital audio
systems…), François Bayle introduced the expression acousmatic music as a specific kind of
music, as the art of projected sounds which is “shot and developed in the studio, projected
in halls, like cinema.”
It is true that over the past twenty years, under the term electroacoustics there has been a
proliferation of sound pieces which have little in relation to each other except a common
use of electricity. It was therefore important to affirm, with precise terminology, æsthetic
choices, a body of thought, and a language. It is also in this spirit that, since 1989, the
Rencontres acousmatiques (Acousmatic Meetings) of composers in the south of France
have been organized.”
Francis Dhomont, Saint-Rémy-de-Provence (France), July-September 1991
Introduction to Max/MSP programming
• What is Max/MSP?
• Max/MSP (often just called ‘Max’) is a ‘multimedia programming
environment’ which will allow you to create pretty much any kind of
music or audio software you can think of. It can also handle video
using a built-in extension called ‘Jitter’.
• To get more of an idea of what Max can do, visit the website
www.cycling74.com and click on the ‘projects made with Max’ link.
Making your first ‘Patch’
• A program in Max is called a 'Patch' (or 'Patcher'). This is because it is made
by connecting (or 'patching') graphical objects together on the screen. To create a
new patch, select File > New Patcher (⌘N). This creates the window in which you
will make your patch. The patcher has two modes: EDIT MODE for editing the
patch (creating objects, making connections), and LOCKED MODE for actually
using the patch. If you want to press buttons, move sliders and so on, you need to
lock the patch with ⌘E. Now double-click anywhere in the window. The object
palette appears. If you hover over each object, you will see its name. (There are
many more objects than these, but these are the most common, basic ones.)
When you've made an object, you can resize it, drag it around the screen, and cut,
copy and paste it. If you drag it while holding alt, you also get a copy. As well as
the objects shown in the palette, there are many more objects. To create these
you must use an object box, which is a 'blank' box into which you type an object
name. Objects are really small programs which you put together to make larger
programs.
Elements
• There are several different elements that make up a patch. The
message box is simply a container for any piece of text or a number,
which gets sent when clicked on.
• A Max object carries out some sort of function on data going through
it.
• An MSP object works similarly to a Max object, but at a far higher
processing rate, and is therefore suited to direct handling of audio
data. It looks different and has a ~ sign in its name to remind you
that it is MSP and not Max.
• Arguments can be added to an object to tell it how to behave. For
instance, a cycle~ object will generate a sine wave, but the addition of
the argument 440 will make the sine wave oscillate at 440 Hz.
• A comment box lets you comment on your patch. This is useful in that
it can tell you, or other people working on it, how the patch is put
together, which is very helpful for fault-finding and revision.
• Objects can be connected together with patch cords. Each object has
inlets at the top, and outlets at the bottom. You make a patch cord by
dragging from the outlet of an object. When you stretch the cord near
the inlet of another object, a comment appears telling you about that
inlet. When you let go of the mouse, the connection is made. You can
only connect outlets to inlets (i.e. bottom to top). Different objects
have different functions. They also have different numbers of inlets
and outlets, depending on their function.
Order of Execution
• By default Max works from right to left and bottom to top across the
patcher. In many cases this makes only fractions of a millisecond's
difference, but in some cases the order of execution is crucial.
Keeping things neat
It is important to try to keep your patch neat. Things can get very messy
and tangled if you're not careful, and then finding and fixing problems
can be a real nightmare. To align objects and patch cords nicely, select
them and type ⌘Y. Commenting your patch is also very important.
[Above: this week's patch unlocked, with examples at the bottom.
Below: the same patch locked and formatted in presentation view.]
Max and MSP
• Max/MSP really has two parts. Max is the part that handles numbers, messages, MIDI
information and other data. MSP handles audio signals. (There is also a third part, called
Jitter, which handles video signals and is not covered in this module. Note: Max/MSP is
often just referred to as 'Max' for short!)
• Max and MSP are used together seamlessly in 'Max/MSP', but it's often helpful to
understand the distinction. For example, the manuals for Max and MSP are separate.
Also, MSP objects use a lot more CPU (computing power) than Max objects, and knowing
that can help you write programs that don't make the computer work as hard.
• The most obvious difference is in making connections. Max connections carry numbers
and other data, whereas MSP connections carry audio signals. Max numbers and
messages go at a slower rate intended for MIDI notes (the 'scheduler rate'), whereas MSP
audio signal numbers go at the much faster audio sample rate. (A sketch of this
distinction in code follows at the end of this section.)
• You can easily tell the difference between Max and MSP connections when building your
patch. Max connections are simple black lines (which you can colour), but MSP
connections are thicker, stripy lines.
• You can also tell the difference
between Max and MSP objects.
MSP objects always have the
symbol ‘~’ at the end of their
name. Sometimes that
distinction is crucial to avoid
confusion. For example, the
Max object cycle is completely
different from and unrelated to
the MSP object cycle~.
However, Max/MSP helps you
get it right, because it only lets
you make the right kind of
connections. For example, you
can't connect a signal cable to
cycle, because it is not an audio
object.
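As a rough illustration of the scheduler-rate versus signal-rate distinction above, the Python/NumPy sketch below (an analogy; the 1000 Hz control rate is an assumed stand-in, since Max's real scheduler interval is configurable) fades a tone out twice: once with stepped control-rate gain values, and once with a per-sample ramp of the kind line~ produces in MSP.

```python
import numpy as np

SR = 44100      # audio (signal) rate, as used by MSP objects
CTRL = 1000     # nominal control rate standing in for Max's scheduler
dur = 1.0
t = np.linspace(0, dur, int(SR * dur), endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)

# Control-rate fade: one new gain value per 'scheduler' tick, held
# constant in between. The steps can be audible as 'zipper' noise.
ticks = np.linspace(1.0, 0.0, int(CTRL * dur))
hold = int(np.ceil(len(t) / len(ticks)))
ctrl_gain = np.repeat(ticks, hold)[:len(t)]

# Signal-rate fade (what line~ provides): a new value every sample.
sig_gain = np.linspace(1.0, 0.0, len(t))

stepped = tone * ctrl_gain    # stepped, control-rate envelope
smooth = tone * sig_gain      # smooth, per-sample envelope
```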
Some MSP audio objects
• cycle~
a sine-wave oscillator
• scope~
an oscilloscope for looking at signals
• ezdac~
a simple audio output object, with graphic on/off button
• gain~
a graphic-based signal level control
• *~
a multiplication object for audio signals (NB: plain * is for
numbers; see the sketch after this list)
• spectroscope~
a spectral signal scope
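To make the * versus *~ distinction concrete, the sketch below (Python/NumPy as an analogy for what the MSP objects compute) multiplies an oscillator first by a constant, as [*~ 0.5] would, and then sample-by-sample by a second, slower oscillator, as [*~] does with two signal inputs; the result of the second is a tremolo effect.

```python
import numpy as np

SR = 44100
t = np.linspace(0, 2.0, SR * 2, endpoint=False)

a = np.sin(2 * np.pi * 440 * t)    # like [cycle~ 440]
lfo = np.sin(2 * np.pi * 3 * t)    # like [cycle~ 3], a slow oscillator

scaled = 0.5 * a                   # [*~ 0.5]: multiply by a constant gain
tremolo = a * lfo                  # [*~] with two signals: the sample-by-
                                   # sample product, heard as tremolo
```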
Some Max objects
• message
a simple container for any kind of data
• int
an integer number
• float
a floating point number
• slider
a graphic fader control for numbers
• line
a ramp (or envelope) generator (also line~ for signals)
• Try building and playing with this; it can be produced from the list of
elements above.
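The patch image itself cannot be reproduced in this transcript, but the simplest chain you could build from the objects above is [cycle~ 440] into [gain~] into [ezdac~]. Below is a rough Python/NumPy analogy of that chain (the WAV file and its name are stand-ins, since plain Python has no sound-card object in its standard library).

```python
import wave
import numpy as np

SR = 44100
t = np.linspace(0, 2.0, SR * 2, endpoint=False)

sig = np.sin(2 * np.pi * 440 * t)   # [cycle~ 440]: a 440 Hz sine oscillator
sig = 0.5 * sig                     # [gain~]: scale the level down

# [ezdac~] stand-in: deliver the signal to the outside world; here a
# 16-bit mono WAV file rather than the sound card.
pcm = (sig * 32767).astype(np.int16)
with wave.open("sine440.wav", "wb") as f:
    f.setnchannels(1)               # mono
    f.setsampwidth(2)               # 16-bit samples
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```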