Plus ça change.....
Journeys, Instruments and Networks - 1966-2000
By Lawrence Casserley
A version of this paper was first published in Leonardo Music Journal, Vol 11, 2001. The current version corrects some errors and misprints that occurred in the original.
The author has been using electronic means in performance since the late 1960s. In this article he compares his work in the 1970s and 1990s from both a technical and a philosophical viewpoint. How do these two periods differ? How are they similar? He concludes that, partly due to recent technological developments, he has been able in recent years to explore more deeply and broadly the aims which were established in the earlier period.
Keywords: Live electronics, improvisation, electronic instruments, real time digital sound processing, Max/MSP.
The intention of this article is to examine, from a technical and from a philosophical point of view, key aspects of the work in which I was involved in the 1970s compared with that of the 1990s. How was it similar? How did it differ? Particular emphasis will be placed on the use of electronic means of sound production and processing in live performance, and relationships between electronic performance and improvisation.
In 1966 I returned to Britain after fourteen years in the USA. In 1967 I joined the new Electronic Music course taught by Tristram Cary (please see Appendix 1 for a list of references to persons mentioned in the text) at the Royal College of Music in London. From then onwards most of my work has involved electronic means.
One of the crucial concepts in my music at that time was the idea of journey. The concept of journey had already been present in my instrumental writing; the idea of taking a sound on a journey occurred to me quite early in my electronic explorations, and it remains a significant concept in my work. I developed a form of progressive transformation using ring modulation that formed the basis of my first electronic work "The Final Desolation of Solitude" (1968-9). The technique involves repeated modulation, so that the timbre is progressively altered while the morphology of the sounds remains recognisable. The most developed and clear example of this technique is another early tape work, "Transformations I" (1970), where the repeated comparison of each generation of transformation with its original source points up the distance that has been travelled.
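The repeated-modulation technique can be sketched in a few lines of Python. This is a minimal illustration, not a reconstruction of the piece: the sample rate, the sine-wave "source" and the modulator frequencies are all assumed for the example. Ring modulation is simply the multiplication of two signals, producing sum and difference frequencies; applying it generation after generation pushes the spectrum progressively further from the source while the amplitude envelope survives.

```python
import math

SR = 8000  # sample rate for this illustrative sketch

def ring_modulate(signal, mod_freq, sr=SR):
    """Ring modulation: multiply the signal by a sine carrier.
    Each pass shifts energy to sum and difference frequencies."""
    return [s * math.sin(2 * math.pi * mod_freq * n / sr)
            for n, s in enumerate(signal)]

# A 440 Hz tone standing in for the "source sound"
source = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 10)]

# Successive generations: the timbre drifts further from the source
# while the overall envelope (the morphology) is preserved
generations = [source]
for freq in (170, 230, 310):   # hypothetical modulator frequencies
    generations.append(ring_modulate(generations[-1], freq))
```

Comparing `generations[-1]` with `source`, as the tape piece does between sections, makes the accumulated distance audible.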
Equally crucial was the conviction that, for me, electronic music would have to be primarily a performance medium, and that I would need to develop my own tools and instruments in order for this to happen. My second electronic work "Solos, Commentaries and Integrations" (1969) for clarinet, percussion, live electronics and tape, developed both these ideas. The piece consists of three layers: electronically transformed clarinet, percussion and electronic tape, and each layer is in three parts, which begin and end independently.
Through Tristram Cary, the RCM Studio had a connection to Peter Zinoviev's pioneering computer music studio in Putney, and the RCM students made several visits there. During one of these (as far as I recall in 1968) I created some sound material which formed the basis of the tape part. The three tape sections are progressive transformations of this material. In 1969 I purchased a VCS1 (Voltage Controlled Studio), the first product of the fledgling Electronic Music Studios (London) (EMS) Ltd and forerunner of the famous VCS3 (also called "The Putney"); this VCS1 became my first performance instrument (1). I used the VCS1 to process the clarinet sound; in the first part the clarinet is simply amplified and reverberated; in the second it is ring modulated with a fixed frequency; and in the third it is ring modulated with a slowly changing frequency controlled by the electronics performer. Here is the germ of the idea of the electronic instrument and its performer each taking an equal role in the ensemble.
Later, I would evolve the idea of networks. These differ from my other electronic instruments in that they display a definite, usually quite complex behaviour, which requires little or no input from an electronics performer. Typically these would include multiple delays with multiple feedback paths and some form of processing built into these paths. These depend on an instrumental performer listening and responding to the delays; the "control" of the network comes from the acoustic performer's responses to its behaviour. A step in this direction is "Transformations III" (begun in 1971, but not completed in its final form until 1982 - during this period the network concept was developed, although I did not actually use this term until about 1992), for flute and live electronics. Here two delays, of four and seven beats at mm50, are used. The outputs are fed to three ring modulators, which perform modulations between live flute and delay one; between live flute and delay two; and between delay one and delay two. A different combination of these modulations is used in each section of the piece, but otherwise there is no input from the electronics performer.
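The "Transformations III" network lends itself to a compact sketch. The structure below follows the description in the text: two delay lines of four and seven beats at mm. 50, whose outputs feed three ring modulators (live x delay 1, live x delay 2, delay 1 x delay 2). The sample rate and the per-sample framing are assumptions of the sketch, not of the piece.

```python
from collections import deque

SR = 8000                      # illustrative sample rate
BEAT = 60.0 / 50.0             # one beat at mm. 50 = 1.2 s
N1 = round(4 * BEAT * SR)      # four-beat delay, in samples
N2 = round(7 * BEAT * SR)      # seven-beat delay, in samples
d1 = deque([0.0] * N1, maxlen=N1)
d2 = deque([0.0] * N2, maxlen=N2)

def process(live, mix=(1.0, 1.0, 1.0)):
    """One sample of the network: the live signal and the two delay
    outputs feed three ring modulators (simple multiplications);
    `mix` selects which modulations sound in a given section."""
    out1, out2 = d1[0], d2[0]  # oldest samples = delay outputs
    d1.append(live)            # appending discards the oldest sample
    d2.append(live)
    mods = (live * out1,       # live flute x delay 1
            live * out2,       # live flute x delay 2
            out1 * out2)       # delay 1 x delay 2
    return sum(g * m for g, m in zip(mix, mods))
```

Changing `mix` from section to section mirrors the score's choice of modulation combinations; everything else is driven by what the flautist plays into the delays.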
These three concepts - journey, performance and networks - have played a major role in much of my music ever since. I will examine their importance in some of my work in the 1970s and 1990s (I consider the 1980s a transition period between the two).
The most important outlet for my work during the 1970s was the multi media group "Hydra". In 1970 I was invited to create a studio and teach some classes at the Inner London Education Authority's new Cockpit Arts Centre, a purpose-built theatre and art centre, working both with school groups and adult education classes. One of the first students was the painter Eddie Franklin-White, who was teaching, and running a mixed media resource centre, at Hornsey College of Art. The Cockpit's theatre encouraged cross media experimentation, and Eddie and I collaborated on a piece for live electronics and light.
No attempt was made to form a direct equivalence between the sound and light, particularly as the use of space was central to our ideas. Indeed it was the concept of articulating space, physical and temporal, with both media which was the driving force. This eventually became the core intention of "Dodman Point" (1972). The original working title, used for performances at the Cockpit in 1971 and at The Round House in the 1972 ICES (International Carnival of Experimental Sound) Festival, was "Sound, Light and Space". The piece is constructed as a series of "tableaux", which were fixed points where we had decided upon a specific combination of sound and light. Between the tableaux, sound and light would each follow its own logic for the journey from one fixed position to another, through a process of continuous transformation [Fig 1]. The realities of the equipment available to us - three VCS3s for the sound, simple dimmer boards for the light - meant that a precise configuration required considerable fine adjustment. The tableaux became pauses, during which the performers gradually "tuned in" to the final setting. The settling down of the tableaux became an important element of the piece. Between the tableaux the transitions were worked out to a timed score with performers following stopwatches. The score was a list of instructions and timings for the performers [Fig 2].
"Dodman Point" represents one of the most detailed workings out of the "journey" principle in my work. I created what I called evolutionary patches on the VCS3, where each sound could gradually metamorphose itself into another. Each voice is in constant movement, following its own route, until the next point of rest, a tableau, is reached. This is a continuous process of transformation; in a piece lasting sixty minutes, only once does each player drop out to repatch the VCS3.
Another interesting aspect of "Dodman Point" was the spatial configuration used for the sound. One of several possible layouts for the Cockpit Theatre was in-the-round. We wanted the central 'arena' to be the space that we were articulating with both light and sound. I adopted a tetrahedral loudspeaker array, which was skewed so that one speaker was on the floor, two were in a gallery half way up the walls and the fourth was in another high gallery near the roof. I had built myself a mixer that made it possible to pan between any two combinations of speakers. This made possible a very flexible control of sound movements, and a sculptural approach to the sound in space, in contrast to the surround approach used by most multi-speaker systems; and my performance of these movements from the mixing desk was a vital element of the piece. Subsequently we adapted this arrangement to suit different performance spaces.
As a result of these experiences, we formed "Hydra", which was not a fixed ensemble, but rather a forum for collaboration between practitioners of different art forms. Over the years a number of visual artists, writers and musicians became involved to a greater or lesser extent. Eddie's work at Hornsey produced a rich stream of experimental artists (Rupert Morley's work with polarised light and lasers, and Gillian Olensky's smoke domes are prime examples)(2). Musicians included Per Hartmann and Simon Desorgher. Sound poet Bob Cobbing was also a key element in some of the performances.
I first met Bob in the early 1970s when I was teaching at Goldsmiths College, London, and he was Poet-in-Residence. The result was a collaborative tape version of Bob's "15 Shakespeare Kaku", and subsequently a fruitful relationship lasting a number of years. Later, while planning a performance at the National Poetry Centre in London, we decided to collaborate on a live work combining Bob with the resources of Hydra; this produced "Hydrangea" (1974)(3), for voices, instruments, live electronics and light. This work stretches the word 'Hydrangea', with a few embellishments, to a length of one hour. Bob wrote some visual poems for each section, and I created instrumental, electronic and light interpretations of the sonic material revealed; these interpretations are not meant to be "the same as" the original, but additional layers of response. For each section the performer has a list of letters, a duration, a number of events and a "density vector", an arrow indicating the general density shape of that section [Fig 3]. The player interprets these freely within the capabilities of the instrument.
The electronic part, for two VCS3s, develops the electronic instrument concept in that essentially only two patches are used, with certain simple variations, and they were designed so that both could be set up on a VCS3 without major changes. One type, based on frequency modulation with and without filtering, is associated with vowels; the other, based on filtered and gated white noise, is associated with consonants. The indications for these players are similar to those for the other instrumentalists.
The point of Hydra was to encourage collaboration and experimentation across media boundaries, and much of Hydra's work was improvised, or partially improvised. It is interesting to note that I had little contact with the free improvisation scene in London at that time. Certainly, I was aware of groups such as AMM and the Music Improvisation Company and attended some of their concerts, but there seemed between them and me a great gulf fixed. With hindsight, that gulf appears merely a shallow ditch, but then it seemed impassable. A number of people with whom I was in contact (Hugh Davies and Barry Guy, for example) moved in both worlds. Other contemporary groups, such as Gentle Fire and Intermodulation were using indeterminate scores and/or group composition (4), but the "Hydra" approach, particularly in the later work, was in many ways closer to the free improvisers, but working across several media. Unlike Gentle Fire or Intermodulation, who frequently interpreted scores by Cage or Stockhausen, for example (5), we only performed our own material.
In one important respect, though, we differed from groups like AMM; we were happy to use pre-composed or part pre-composed, as well as wholly improvised work as the context required. I think it is fair to say our work was much less purist, more hybridised, perhaps as a result of the cross-fertilisation of ideas from several media. It must be said, however, that such cross influences occurred in much of the work of the period. For example, Derek Bailey (6) describes how Tony Oxley's connection with contemporary jazz and Gavin Bryars's interest in contemporary composers fused with his own background in commercial music in the work of their trio Joseph Holbrooke. In attempting to fuse in performance the very different experiences and priorities of music, poetry and visual arts we were setting ourselves some very great challenges. Saying this, however, I am reminded of Bailey's caveat in the same passage (7): "The main distortion of this retrospective description is to greatly simplify the whole process and, most particularly, to give the development of the music a more deliberate, more calculated, intellectual character than it actually had".
From our millennial perspective, it is too easy to regret the separation noted above, but I prefer to think that these separate strands have grown in their own ways until, in the late nineties, it has become obvious that their ever overlapping branches are just part of the rich culture of our time. With hindsight, I admit that it was easier for me at that time to accept and work with ideas from different media, than it was to accommodate those from other forms of music. For example, I felt closer to much of the work of the sound poets than to many musicians of the time. Also, although I was doing a lot of performing, I still did not see myself primarily as a performer. Later I would realise that, for me, composition and performance are completely inter-related, but then I was a composer first and a performer only because it was necessary.
Towards the end of the 1970s I was becoming increasingly dissatisfied with the limitations of the available instruments; I was also becoming aware of the growing potential of microprocessors (I had been involved in a microprocessor based design project between 1975 and 1977) (8). I was convinced that real time digital signal processing was the only way forward. In 1980 I had begun to compose "Angel Music" for oboe, percussion and live processing, but found myself unable to complete it to my satisfaction; I simply did not have the technical resources to realise my ideas. Not until the 1990s would I have the means to develop the signal processing instruments needed to resolve this impasse.
One of the effects of my absorption during the 1980s with developing digital signal processing techniques was that I did less composing and more performing. The key events of the time were centred around my collaboration with Simon Desorgher, and the Nettlefold Festival, which we founded together. Out of that came the ensembles Tube Sculpture (later reformed as Electric Tubes) and the Electroacoustic Cabaret. Both of these were cross media concepts, and both were collaborative and improvisatory in nature.
Tube Sculpture (founded 1984) featured a new acoustic instrument, the giant panpipes, that Simon and I designed and built together. This instrument is played by two performers and consists of about 150 pipes, ranging in length from 15 feet to a few inches. Half the set are blown flute-fashion by Simon Desorgher; the other half are made into percussion instruments, the longer ones fitted with drum heads, while the shorter ones form a tubular glockenspiel played by myself. Not only was this instrument designed to be spectacular visually as well as aurally, it was intended as an instrument with electronic extensions integral to its nature. It was not until the 1990s that the full potential of this was realised in a new version of the electronic part of the instrument utilising real time digital signal processing (9).
The Electroacoustic Cabaret (founded 1985) evolved as a way of presenting contemporary music in a new way, an informal 'cabaret' atmosphere; the collaboration with mime artists Ian Cameron and Linda Coggin was a vital element, as well as trombonist Alan Tomlinson. Later Hugh Davies and Biff Harrison became involved, as well as Paul Houlton as compere (10).
Part of the point of this group was to mix zany events with serious matter, and particularly items with elements of both; for us the sublime and the ridiculous were but different sides of the same coin. At the height of the Cabaret's activities in the early 1990s a show might go something like this: Alan Tomlinson would play a fanfare on the Boghorn, an instrument created from a toilet bowl and sections of drainpipe, culminating in explosions of coloured smoke from the bowl; Paul Houlton would introduce the show, with "subtitles" (many of our performances were in non anglophone countries) manipulated hilariously by the mime artists; then a sudden change of pace with a serious, but intensely visual piece such as Hugh Davies's "Strata" for quadraphonic aeolian harp; then one of Alan Tomlinson's madcap solos, interacting with the mime artists, would lead to the appearance of the motorbike for Simon Desorgher's "The Incredible Clanking of the Chains and Cogs of Beelzebub", followed by an encore, when I would play Bach's "Toccata in D Minor" on the motorbike (used as a percussion instrument). After an interval we would perform Hugh Davies's "The Birth of Live Electronic Music", featuring an acoustic processing system made out of piping; a musical battle between Biff Harrison's musical saw and Ian Cameron's musical alien would lead to a staged performance of "The Monk's Prayer" (see below), and further twists culminated in a final improvisation by the whole ensemble.
For me the vital experience of both these groups was learning to perform in different contexts and to interface with audiences in different ways. It was here that I began to accept myself as a performer, to find a different way to create new things. No longer was I just a composer, but more importantly, I developed a fundamentally different attitude to the audience. Although we used a lot of electronic resources in Cabaret performances, my main instruments were invented or abstracted devices - monochords, motorcycles, and unusual percussion objects.
Two major events altered the landscape irrevocably for me: in 1989 Simon Desorgher and I began to work with Peter Jones's "Colourscape", a walk-in sculpture of pure colour; and in 1992 I was fortunate to be able to obtain one of the first IRCAM Signal Processing Workstations (11). The first exposed me to a whole new way of interfacing with a completely new public. Colourscape is the generic title for a series of inflatable structures created by Peter Jones and his associates since the 1970s. These consist of chambers of translucent plastic in primary colours coupled with opaque chambers, where the light from adjacent colour chambers is mixed to produce a large range of coloured vistas (12). Unlike a typical music venue, which attracts an audience expecting a specific genre, Colourscape attracts a wide variety of public, many of whom would not usually attend contemporary music concerts. Engaging with this new audience has been an important aspect of working in this exciting environment. Just as important is the fact that performers and audience are occupying the same space; there is no separation between "stage" and "auditorium".
The second event allowed me, at last, to begin realising concepts dating back twenty years (13). One of the first benefits of the ISPW was to enable my network concepts to reach fruition. In 1987 I had created a music theatre work, "The Unending Rose", for the Electroacoustic Cabaret. One of the characters in this work is a monk "who kneels at a prayer desk and intones a long prayer on a bass flute, which spreads like ripples over water". Subsequently I extracted the monk's music as a separate piece, which has been performed many times. The flute plays a melody, then repeats it a number of times with variations indicated in the score [Fig 4]. My idea was that there would be a multi-tapped delay lasting the length of the melody, about 97 seconds, but delays of that length were not affordable (the rapid expansion in the capacity of digital memory chips was only just beginning), so I always made a compromise using available resources. In 1993 we performed "The Monk's Prayer" for the first time in its intended form.
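Why was a 97-second delay out of reach for so long? A rough back-of-the-envelope calculation makes the point; the sample rate and word length below are assumptions chosen for illustration, but any plausible values give the same order of magnitude: several megabytes of dedicated audio memory, a prohibitive amount for a hardware delay in the 1980s.

```python
# Rough memory requirement for a 97-second delay buffer
# (mono, 16-bit, 44.1 kHz -- sample rate and word length assumed)
seconds = 97
sample_rate = 44100
bytes_per_sample = 2          # 16-bit samples
buffer_bytes = seconds * sample_rate * bytes_per_sample
print(f"{buffer_bytes / 2**20:.1f} MiB")   # roughly 8 MiB
```

By the early 1990s this much RAM in a workstation was unremarkable, which is why the intended multi-tap form of "The Monk's Prayer" finally became practical.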
In 1990 "Los Hijos del Sol" was premiered at the Gulbenkian Foundation in Lisbon. For this piece I created a network out of Yamaha SPX processors and delay lines. This is really a very simple network [Fig 5], four delay/pitch shifts and two longer delay lines. Feedback from the two delay lines is set so that a sound reverberates through the system for about two minutes. Despite its simplicity this network allows the performer to delineate the four sections clearly by altering the input material - voice and the drums of the giant panpipes. Nothing is written in "Los Hijos del Sol", except the titles of the four sections - the network is the piece. Inherent in this idea is the journey that each sound takes as it passes through the network and the performer’s response to the transformed echoes.
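The "about two minutes" of reverberation in the "Los Hijos del Sol" network is set by the feedback gain around the delay loops. A standard way to derive such a gain (borrowed from reverberation-time practice; the 60 dB floor and the 4-second loop below are assumptions for illustration, not values from the piece) is to ask how much each pass must attenuate the signal so that it falls below audibility after the desired time:

```python
def feedback_gain(loop_delay_s, decay_s, drop_db=60.0):
    """Feedback gain per pass so that a circulating sound falls by
    `drop_db` decibels after `decay_s` seconds in a loop whose
    round trip takes `loop_delay_s` seconds."""
    return 10 ** (-(drop_db / 20.0) * loop_delay_s / decay_s)

# A hypothetical 4-second loop decaying over about two minutes:
g = feedback_gain(4.0, 120.0)   # about 0.79 per pass
```

With this gain the sound makes thirty circuits in two minutes, each 2 dB quieter than the last; raising or lowering `g` slightly stretches or shortens the tail dramatically, which is what makes feedback setting so critical in networks of this kind.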
Another important work is "Labyrinth" (1989, revised 1998), a music theatre work for Colourscape. Once again this was originally created with the limited resources available at the time. In revising the work in 1998 I utilised the full potential of the techniques I had developed on the ISPW, now transferred to Max/MSP (14) on Macintosh computers. This is the most highly developed example of the network principle in my work to date. There are in fact two separate networks: one processes the sound of gongs, representing the labyrinth itself, and voice, the Minotaur; the second processes the flute, Theseus. The Theseus network is very similar to that used in "Los Hijos del Sol", except that the delay and pitch shift parameters change gradually through the piece. These changes, together with changes in the material on which the flute part is based, constitute the Theseus music.
The Minotaur network is more complex. I wanted something that was changing constantly, with a life of its own; every gesture of the Minotaur becomes multiplied and magnified to enormous proportions; in addition, I wanted the music to be dark and disturbing, as well as rather beautiful. There are three entry points to the network, one for the voice, one for the two lower gongs and one for the two higher gongs [Fig 6]. Each of these is independently panned across three outputs, which are sent to three separate sub networks; each of these consists of a delay line with four taps and four single sideband modulators, or frequency shifters, which are constantly ramping up and down. There is a feedback loop from the longest tap of each sub network to the input of the next one. All of the parameters, the movements of the three panners and the twelve shifters, are controlled by random generators with carefully chosen limits; the result is a constantly moving continuum, ever changing, ever different, but ever the same; the reverberating echoes simulate the echoing depths of the labyrinth. Only one control change occurs; at the climax of the piece the shifters change rapidly from downwards shifts to upwards shifts, so that the end has a completely different character.
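The "random generators with carefully chosen limits" driving the Minotaur network's panners and frequency shifters can be sketched as a parameter that glides between randomly chosen targets, never leaving its limits. The limits, ramp rate and shift values below are hypothetical; what matters is the behaviour: continuous motion that is ever changing but statistically stable.

```python
import random

class RampingParam:
    """A parameter that ramps between random targets chosen within
    fixed limits -- the scheme described for the twelve frequency
    shifters and three panners of the Minotaur network (all values
    here are illustrative)."""
    def __init__(self, lo, hi, max_step, seed=None):
        self.lo, self.hi, self.max_step = lo, hi, max_step
        self.rng = random.Random(seed)
        self.value = self.rng.uniform(lo, hi)
        self.target = self.rng.uniform(lo, hi)

    def tick(self):
        """Advance one control step toward the current target; when
        the target is reached, choose a new one within the limits."""
        if abs(self.target - self.value) <= self.max_step:
            self.value = self.target
            self.target = self.rng.uniform(self.lo, self.hi)
        else:
            step = self.max_step if self.target > self.value else -self.max_step
            self.value += step
        return self.value

# Twelve downward frequency shifts (in Hz), each wandering within
# its own limits, as before the work's climax:
shifters = [RampingParam(-200.0, -20.0, 0.5, seed=i) for i in range(12)]
```

The single scripted control change of the piece would then amount to flipping the sign of every shifter's limits at the climax, so that all twelve ramp upwards instead of downwards.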
One feature of this system is that a very small amount of material played by the performer results in a massive amount of sound - an interesting exercise in self restraint; as the piece progresses to its climax, more and more layers of sound are added. But this is really the issue - I see networks of this kind as an intrinsic part of the instrument. The flautist in "Labyrinth" is not just playing the flute, but playing an integrated flute-plus-processing network - just as the flute itself has an intrinsic behaviour, so does the network attached to it; and it is the combination of these two behaviours that constitutes the complete instrument, and demands a very different playing technique as a result.
The most significant development of my music in the 1990s has been the Signal Processing Instrument (SPI), which extends the ideas of electronic instruments, most particularly in the way the performer can control its behaviour. The instruments and networks discussed so far are all made for the specific requirements of particular pieces. The SPI was designed to be a general instrument for improvised music, capable of functioning in many different contexts. The original design and philosophy of this instrument have been discussed elsewhere (15, 16), so I will confine myself to discussing its relation to the ideas above. The instrument has evolved noticeably since the original paper, so I will begin with an update on these changes.
The fundamental idea behind the SPI, based on Simon Emmerson's "local/field" concept (17), remains unchanged, although the implementation has developed significantly. The linear design from "local to instrumentalist", through "local to computer musician" to "field" has been replaced by a more integrated structure with multiple signal paths between the processing blocks [Fig. 7]. The dedicated, FFT-based, "local to instrumentalist" block has disappeared, but the remaining two blocks, the "Pad Instrument" and the "Long Delay Instrument" now operate across all three areas, allowing much stronger integration between "local" and "field" processing. Both these instruments have been enhanced to provide more voices, greater range of delay times (from milliseconds to several minutes), more feedback paths and a greater range of spectral processing. Some of the transformations achievable now are remarkably similar to those created by the FFT instrument in the original version. An important new development creates improved control of the "Long Delay Instrument"; I can now manipulate the range of the 35 delay taps with great precision around a delay buffer of about 2.5 minutes.
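The idea of manipulating the range of the 35 taps around a 2.5-minute buffer can be sketched as follows. Linear spacing and the helper name are assumptions of the sketch; the point is that moving or narrowing the tapped region of the buffer reshapes the whole echo pattern with two control values.

```python
def tap_times(n_taps, range_start_s, range_end_s):
    """Spread `n_taps` delay taps evenly across a chosen region of
    the delay buffer; moving or narrowing the region reshapes the
    echo pattern (linear spacing is an assumption of this sketch)."""
    step = (range_end_s - range_start_s) / (n_taps - 1)
    return [range_start_s + i * step for i in range(n_taps)]

# 35 taps across the full 2.5-minute (150 s) buffer:
taps = tap_times(35, 0.0, 150.0)

# The same 35 taps squeezed into a dense 2-second cluster of echoes:
dense = tap_times(35, 0.0, 2.0)
```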
At the same time the physical arrangement of the controllers has developed to improve gestural control of the instrument [Fig. 8]. The two DrumKat 3.5 controllers provide the primary interface to the "Pad Instrument", which allows me to replay delayed sounds as I choose, thus imposing my own gestural impulses on them. Above to the left is a Wacom graphics tablet, which allows gestural control of the various spectral transformations, as well as manipulation of the long delay taps. Above to the right are the Apple G3 PowerBook, which now handles all the processing, and a Peavey PC-1600 fader controller, which is used mainly to control signal levels within the system. On the floor is a Yamaha MFC10 foot controller with four additional controller pedals. The five pedals (including the one on the MFC10) control, from left to right, pad instrument level, long delay to pad level, long delay level, pad instrument feedback level and pad to long delay level. The switches on the MFC10 set the transposition ranges of the pads and control various other functions.
First of all, the SPI is the most highly developed electronic instrument design I have achieved. Earlier instruments, utilising the VCS1 and VCS3 synthesisers and tape delay systems, or later, commercial effects units, were constrained by the physical controls required; performing meant turning knobs or sliding faders; the most gestural event possible was pressing a button. With computer systems it became possible to map a whole range of physical controllers to whatever aspects of the process were needed. For example, the graphics tablet is operated by a five button mouse, and each button selects a different set of parameters for the X and Y axes. In addition, many of the problems of routing signals between modules are greatly simplified when all the modules are implemented within one computer. Now it is possible for me to "play" the processed sound in a way that is directly analogous to the way acoustic instrumentalists play their sounds. While I can still make the gradual transformations typical of the earlier work, I can also make strong gestural events that make this feel like an independent instrument, although the sounds are all derived from the acoustic instrument(s) used as input.
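The controller-mapping idea (each button of the tablet's five-button mouse selecting a different parameter pair for the X and Y axes) can be sketched as a lookup table plus a dispatch function. The parameter names below are hypothetical placeholders, not the SPI's actual parameters; the structure is what the sketch illustrates.

```python
# Hypothetical mapping of a five-button tablet mouse: each button
# selects which two parameters the X and Y axes will drive.
BUTTON_MAPS = {
    1: ("pitch_shift", "shift_depth"),
    2: ("filter_centre", "filter_q"),
    3: ("tap_range_start", "tap_range_width"),
    4: ("spectral_stretch", "spectral_mix"),
    5: ("pan_position", "pan_spread"),
}

def tablet_event(button, x, y, params):
    """Route a tablet (x, y) gesture to the parameter pair chosen
    by the pressed button; x and y are normalised 0..1."""
    px, py = BUTTON_MAPS[button]
    params[px], params[py] = x, y
    return params
```

One physical gesture can thus address ten parameters in pairs, which is the kind of many-to-many mapping that knob-per-function analogue hardware could never offer.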
A less expected result is that it has also become, particularly with the recent enhancements, a wonderful system for generating networks; but now there is a new capability to control dynamically the evolution of network behaviour in many ways. This has proved an immensely valuable development; I will give some examples of different contexts for the use of this instrument.
1) Duet with an acoustic musician, e.g. Evan Parker or Agusti Fernandez: in this context I concentrate on processing the sounds created by the other musician, sometimes adding a small amount of vocal material; the emphasis will be on the more gestural "instrument" aspects of processing.
2) Larger groups, e.g. the Evan Parker Electroacoustic Ensemble or the various quartets with Parker, Joel Ryan and one other musician; here I only process the other musicians, with a strong emphasis on gestural performance, although some network-like processes happen as well.
3) Solo work, where I am creating the source sounds, generally voice, monochords and amplified metal instruments; inevitably if I am playing these other instruments I am not able to give the same amount of energy to controlling the processing, so the emphasis will shift toward the "network" concept - setting up system behaviours that I can play by means of the source instruments, but with the ability to shift network behaviour as I wish, sometimes with gestural content too.
4) Quartet with Melvyn Poore, Peter Cusack and Nicolas Collins, duo with Keith Rowe; in these cases, because the other musicians are already making a very electronic or processed kind of sound, I have adopted a middle ground where I could balance their sounds with my processed instruments, but could also process their sounds when appropriate.
5) The group HyperYak consists of Michael Ormiston, an expert in Mongolian overtone singing, Jeff Higley, who plays Tibetan singing bowls, Simon Desorgher, playing a variety of Western and ethnic flutes, and myself. As well as processing, I use a small set of my percussion instruments, monochords and voice, which assist me in fusing the electronic sounds with the rest of the ensemble.
It is significant that all these are improvising groups. The capabilities of the SPI have helped me to resolve my performer/composer dichotomy and to view their combination as the most effective area for my work.
Another aspect of the SPI is its relationship to the concept of "journey". Earlier I talked of "progressive transformation"; in my discussion of networks I talked of each sound taking a journey through the instrument. Both these things can happen in the SPI. The multi-voice design of the system, coupled with a variety of processes and feedback paths, enables many sound journeys to be set up; sometimes sounds can travel through these processes many times, becoming transformed progressively. But what is most interesting about this instrument is the way these journeys can be altered and shaped as they are happening, often leading to unexpected results. This is an instrument that is both realising old dreams and creating new possibilities.
There is one other aspect of my recent work that should be mentioned in this context. Having spent so much effort getting my hands into and controlling the internal workings of the process, I am now starting to make installations, where I must construct networks that will run without my intervention and will make meaningful responses to people who don't understand their behaviour. This is not only a technical, but also a conceptual challenge; it is another new development of the composer/performer balance. What is clear now is that all these activities are mutually complementary.
In investigating key elements in my work during the 1970s and 1990s I have shown that the most important themes are similar in both eras. The technical developments of the 1990s have certainly provided me with better tools with which to implement these ideas. The frustration I felt in 1980 at the inability to achieve my aims is reversed in 2000, when I see great possibilities for future development. This is due not only to technological developments, but to my changed perspective on the roles of performance and composition in my work.
Lawrence Casserley, January - May, 2001
Some of these require reformulation for web display - thank you for your patience!
Fig 1 - Excerpt from structure plan of "Dodman Point" (not available yet)
Fig 2 - Sample of synthesiser part in "Dodman Point" (not available yet)
Fig 3 - Excerpt from score of "Hydrangea"
Fig 4 - Excerpt from score of "The Monk's Prayer"
Fig 5 - Block diagram of network structure for "Los Hijos del Sol" (not available yet)
Fig 6 - Block diagram of Minotaur network structure for "Labyrinth (not available yet)"
Fig 7 - Block diagram of Signal Processing Instrument - Mark 2
Fig 8 - Control Layout of Signal Processing Instrument - Mark 2
Appendix 1 - References to people mentioned in the text.
As many of the names mentioned will not be familiar to all readers, I have attached brief notes and given web references where available.
Cameron, Ian - British actor, director and mime artist, who was a regular member of the Electroacoustic Cabaret from 1986 until 1993. He also worked on music theatre projects in Colourscape.
Cary, Tristram - British composer and electronic music pioneer - founder of the Electronic Music Studio at the Royal College of Music, London, and co-founder of Electronic Music Studios (London) Ltd. An interesting account of his early and subsequent development may be found in the Introduction to his "Illustrated Compendium of Musical Technology", Faber and Faber, London, 1992; isbn 0-571-15251-1.
Cobbing, Bob - British poet, particularly known as a pioneer of concrete and sound poetry. See: http://wings.buffalo.edu/epc/authors/cobbing/ and http://www.ubu.com/feature/sound/feature_cobbing.html
Coggin, Linda - British actress and mime artist, who was a regular member of the Electroacoustic Cabaret from 1986 until 1993.
Collins, Nicolas - American composer, instrument designer and improviser - see: http://www.lively.com//bios/collins.html
Cusack, Peter - British guitarist, composer and improviser. See: http://www.forcedexposure.com/artists/cusack.peter.com
Davies, Hugh - British composer, instrument designer, improviser and writer on electroacoustic instruments. He was a founder member of the ensemble Gentle Fire; he was also a member of the Electroacoustic Cabaret from 1988 onwards.
Desorgher, Simon - British flautist, composer and improviser; he has been a constant collaborator with the author since they met at the Royal College of Music in 1971. He was a member of Hydra, and together they have created the Nettlefold Festival, the Colourscape Music Festivals, Electric Tubes and the Electroacoustic Cabaret, as well as performing together in various other contexts.
Deutsch, Stephen - American composer resident in the UK since 1970. He was a post-graduate student at the Royal College of Music from 1971 to 1972 and a Director of Synthesizer Music Services. He is now a Professor at Bournemouth University, UK.
Fernandez, Agustí - Spanish pianist and improviser, resident in Barcelona.
Franklin-White, Edmund - British artist and educator; formerly Director of the Mixed Media Workshop at Hornsey College of Art; he is also known for his courses for serious non-professional artists; since 1981 he has been working freelance and now lives in France.
Guy, Barry - British bassist, composer and improviser - see: http://www.shef.ac.uk/misc/rec/ps/efi/mguy.html
Harrison, Biff - British mathematician and multi-instrumentalist; former member of the Bonzo Dog Doodah Band, member of Bill Posters Will Be Band and a member of the Electroacoustic Cabaret from 1988 onwards.
Hartmann, Per - Norwegian composer resident in the UK since 1971. He was a post-graduate student at the Royal College of Music from 1971 to 1972 and a Director of both Synthesizer Music Services and Integrated Music; he now runs a music publishing company, Edition HH Ltd - see: http://editionhh.co.uk
Higley, Jeff - British sculptor and performer; Co-Chair of the Landscape and Art Network.
Houlton, Paul - British graphic designer and spoons player; a member of the Electroacoustic Cabaret from 1988 to 1993.
Morley, Rupert and Olensky, Gillian - British artists; students at Hornsey College of Art who contributed to several Hydra performances. Morley also created the slides for "Hydrangea".
Ormiston, Michael - British musician specialising in Mongolian Khoomii singing; not only has he studied with Mongolian masters, he is the only Westerner authorised by them to teach overtone singing.
Parker, Evan - British saxophonist and improviser. His collaboration with the author at the time of the development of the Signal Processing Instrument is documented in the CD "Solar Wind" - see: http://www.shef.ac.uk/misc/rec/ps/efi/mparker.html
Poore, Melvyn - British tubist, composer and improviser; he has appeared with the Electroacoustic Cabaret and in various other collaborations with the author - see: http://www.chaconne.com/poore/
Rowe, Keith - British artist, guitarist and improviser; founder member of AMM - see: http://www.shef.ac.uk/misc/rec/ps/efi/mamm.html
Ryan, Joel - American computer musician resident at STEIM (Studio for Electro Instrumental Music), Amsterdam, Netherlands.
Tomlinson, Alan - British trombonist and improviser; he was a member of the Electroacoustic Cabaret from 1987 onwards.
Zinovieff, Peter - British writer and composer; co-founder of Electronic Music Studios (London) Ltd with Tristram Cary and David Cockerell. In 1967 he founded a computer music studio in London.
Much of the work described is undocumented and/or unpublished (which was one of the incentives for writing this article). I will endeavour to put as much material as possible on my website, or show links to where it may be obtained. I have set up a page (http://www.chiltern.demon.co.uk/LMJLinks.html), which will have links to relevant material on my own and other sites. This will be a growing resource, and all suggestions for additions will be welcomed.
There are a number of existing resources that provide much relevant background material, particularly:
Chadabe, Joel - Electric Sound: The Past and Promise of Electronic Music, Prentice Hall, New Jersey, 1997 - isbn 0-13-303231-0.
Derek Bailey's book "Improvisation - Its Nature and Practice in Music", cited above, contains valuable information on the British free improvising scene, as well as other relevant material.
The European Free Improvisers site: http://www.shef.ac.uk/misc/rec/ps/efi/ is another valuable resource in this area.
Volume 6 Part 1 of Contemporary Music Review, edited by Peter Nelson and Stephen Montague, is cited above (Emmerson, 1991). Apart from Emmerson's study of Live Electronic Music in Britain, there are many other relevant articles, some by people mentioned in this text.
The author will be happy to answer any queries about the work discussed; email: firstname.lastname@example.org.
(1) The VCS1, and another early EMS product, the Dynamic Filter, are now in the collection of electronic instruments at the Gemeente Museum, The Hague, Netherlands.
(2) As far as I know, there are no published references to these works, but their contribution to Hydra should be acknowledged.
(3) The score of "Hydrangea" was later published (1985) by Writers Forum - isbn 0-86162-361-4.
(4) Emmerson, S., "Live electronic music in Britain: three case studies", Contemporary Music Review, Vol 6, Part 1, pp 179-195 (1991).
(5) See the repertoire lists in Emmerson, 1991 (4), pp 190-193.
(6) Bailey, D., Improvisation - Its Nature and Practice in Music (London, The British Library National Sound Archive, 1992, isbn 0-7123-0506-8) p 86.
(7) Bailey, 1992 (6), pp 86-87.
(8) In 1971 composers Per Hartmann, Stephen Deutsch and I formed a small independent studio, Synthesizer Music Services, in West London. As well as providing studio facilities for our own work and for clients, we began building synthesizers to order. This led, in 1976, to a project to design and build a microprocessor-based polyphonic synthesizer. A new company, Integrated Music, formed as a co-operative, was set up to develop this project, but it was unable to achieve a viable size and was forced to disband in 1977. Only a prototype of the polyphonic synthesizer was ever built.
(9) Casserley, L., "Tube Sculpture Spanish Tour", Diffusion 1, Sonic Arts Network (1996).
(10) A full description of the Electroacoustic Cabaret is beyond the scope of this article; more information may be found at <http://www.chiltern.demon.co.uk/EAC.html>
(11) I was also fortunate to be able to visit IRCAM and received invaluable support from Eric Lindemann, Miller Puckette, Cort Lippe and Zack Settel, to all of whom I am extremely grateful.
(12) A detailed description of Colourscape is beyond the scope of this article. Some more information may be found at: http://www.chiltern.demon.co.uk/Colourscape.html
(13) Casserley, L., "The IRCAM Signal Processing Workstation: A Composer/Performer's View", Journal of Electroacoustic Music, Sonic Arts Network, Vol 7, pp 25-32 (1993).
(14) Puckette, M., "Combining Event and Signal Processing in the MAX Graphical Programming Environment", Computer Music Journal, Vol 15, No 3 (1991).
(15) See Casserley.
(16) I must acknowledge the help and support I received from STEIM (Studio for Electro Instrumental Music) during my residencies there, and also the invaluable aid of those musicians, particularly Simon Desorgher, Evan Parker and Melvyn Poore, who have been willing partners in my explorations.
(17) Emmerson, S., "'Local/field': towards a typology of live electroacoustic music", Proceedings of the International Computer Music Conference, Aarhus, September 1994 (San Francisco: ICMA, 1994), pp 31-34. Reprinted in Journal of Electroacoustic Music, Sonic Arts Network, Vol 9, pp 10-13 (1996).
"Solar Wind", Evan Parker, Soprano Saxophone; Lawrence Casserley, Signal Processing Instrument - Touch, TO:35 (1997) www.touch.demon.co.uk
"Drawn Inward", Evan Parker Electroacoustic Ensemble - ECM Records - ECM1693 (1999) www.ecmrecords.com
"Labyrinths" - four pieces by Lawrence Casserley (including "Labyrinth" and "The Monk's Prayer") - Sargasso - SCD28030 www.sargasso.com
Further information may be found at: http://www.chiltern.demon.co.uk/Disco.html