
Plays Well With Others

Regarding modular synthesizer in collaborative performance practice

First Thoughts

Why are electronic musicians so often solo performers and so seldom ensemble performers? Why are electronic musicians often the only people that play their works? Why do electronic musicians so seldom perform music written by others? Why are acoustic ensembles so slow to integrate dedicated electronic performers?

Electronic musical instruments doubtless hold a very different role in the world of music as a whole than do acoustic instruments, largely because they are still so new — we are still figuring out what they can do, what they do not do well and, most importantly, how we integrate our discoveries about their strengths and shortcomings into our practice as composers and performers. Of course, the sonic and conceptual options made possible by electronic instruments’ unique offerings are innumerable and warrant the creation of their own modalities of composition and performance, but these instruments should still be able to simultaneously find a home within our extant, time-tested schemes of performance. Perhaps they also have the potential to change the way that we approach these practices in acoustic instrumental groups. Perhaps they can change our understanding of the drummer’s role in a jazz combo, the pianist’s role in an orchestra, the electric guitarist’s role in a rock band.

I grew up as a pianist and trumpeter and had a musical upbringing not unlike other acoustic musicians. I took lessons, learned to read and write music, and played in bands and orchestras. As I started to work with improvisational rock and jazz groups as a trumpeter and keyboardist, collaborative performance became a large part of my interest as a musician. A certain pleasure comes from the practice of performing with others, be it for the purpose of realizing the vision of a composer or simply for the purpose of creating something new together in real time. As my personal performance interests shifted away from purely tonal music and toward the world of sounds made possible with electronic timbre-instruments such as computers, effects pedals, circuit-bent toys and appropriated electronics, I saw my love of playing the trumpet and piano diminish. I started to develop my skills as a computer musician and DIY electronic performer, playing with contact microphones, tin cans and altered effects pedals. I no doubt grew as a composer and performer at that time, but as these skills developed, it became more and more difficult to find a way to play with other musicians. Suddenly my instruments did not have a place in the music around me. Bandleaders and composers seldom knew how to integrate those sorts of sounds into their groups — the improvisers did not know how to respond to said sounds. Somehow, simply changing my instrument resulted in a barrier of communication that made it difficult (though certainly possible with appropriate effort) to work in environments with which I was already familiar.

The modular synthesizer, a fascinating and peculiar instrument, has changed the course of my musical practice. Since discovering it I have found that, at least for myself, it seems to hold a unique position among other electronic instruments: it seems to have the potential to both maintain its own presence and identity as a self-contained instrument, as well as integrate into the larger realm of collaborative musical performance practice. The following is a series of thoughts as to why it is that electronic timbre-instruments have so often failed to find homes in collaborative contexts, why the modular synthesizer has certain potential that other electronic timbre-instruments do not, and what responsibilities befall us as theorists, composers and performers, should we hope such integration to work.

Why Do Electronic Instruments Fail to Integrate?

Figure 1. Burnt Dot performing during Gypsy Collective’s Less Than Slash Three exhibit at Muzeo Museum and Cultural Center in Anaheim (California) on 13 February 2016. Ryan W. Gaston (electronics) and Sarah Belle Reid (trumpet).

I loosely define an “electronic timbre-instrument” as an electronic instrument, one of whose primary variable sonic parameters is its timbre. This term does not imply that timbre is its only variable or performable parameter; it simply implies that one of the instrument’s key points of interest is its ability to produce several different variable timbres. This is distinct from traditional acoustic instruments, which typically have a relatively static and predictable timbre, and from electronic instruments such as the theremin, ondes Martenot and commonplace keyboard synthesizers, whose timbres, although compelling, are nevertheless relatively static. 1[1. It is worth noting that static-timbre electronic instruments have successfully integrated into performance ensembles since their inception; consider such pieces as Bohuslav Martinů’s Fantasia for theremin, oboe, piano and string quartet, Olivier Messiaen’s Turangalîla-Symphonie for large orchestra with ondes Martenot soloist, and the Head Hunters LP by Herbie Hancock and the Headhunters, as evidence that these instruments have had no difficulty finding a place in both popular and art ensembles.]

Though there are no doubt several composers and ensembles that have made good use of live electronic timbre-instruments 2[2. Consider such works as David Rosenboom’s And Out Come the Night Ears (1978), Mark Trayle’s Laminaria (2012) and Toshimaru Nakamura’s no-input mixing board (NIMB) series.], this practice is still the exception rather than the norm. A number of differences between acoustic and electronic instruments have made this the case over time, mostly out of passive tendency rather than wilful intent. Both a dissimilarity in the historically evolving theoretical and pedagogical schemes of each instrument class and some fundamental differences in playing approach have made a marriage of the two quite difficult to achieve.

The Composer as Studio Artist

The history of electronics in music composition and performance is obviously too long and messy to outline here, but because of its lasting implications, at least one revolutionary concept is worth mention: the development of the composer as a studio artist. Before the use of electronics, dissemination of music was necessarily social and required a specific social process to reach the compositional endpoint, a performance. The compositional process in its simplest form required:

  1. Gathering of concept and materials;
  2. The generation of a score;
  3. Study by conductor/director and/or performer group;
  4. Interpretation by conductor/director and/or performer group;
  5. Performance for audience.

Early electronic timbre-music, however, was generally created through a different workflow than purely instrumental music. It was largely written for fixed media, as instruments did not yet exist that could realize the composer’s intentions in real time. In fact, the creation of most of this music depended on non-real-time sound generation and manipulation. Many composers did not seem to mind this change in paradigm, as in some ways it gave them more direct control over the outcome of the compositional process. The process in its simplest form then only required:

  1. The generation and arrangement of materials;
  2. Dissemination of materials in concert (or later, as a commercially distributed recording).

In this model, the involvement of other artists does not become a factor in the work’s realization, and the resultant product is therefore commonly considered to be somehow more pure and more controlled than a work whose realization relies on other interpreters and performers. Because of the state of technology at the time, mid-century (and later) electronic composers had to write in this way. It remains a prevailing approach in professional and hobbyist studios, however, despite the fact that our electronic instruments are now sophisticated enough to execute much of the same sort of work in real time. We can understand the appeal as outlined in Morton Subotnick’s model of the composer as studio artist: when the process involves as few people as possible, it seems much cleaner and less apt to depart from the composer’s intentions (Subotnick 1976). Other consequences, however, result from this lasting modification to the traditional compositional process outlined above. These consequences are rooted largely in the new model’s first departure from the traditional process: the change in the purpose of a score.

This is not to say that early electronic music did not have scores, but rather that these scores often served a different purpose than a score does in purely instrumental composition. Because electronic music (even electronic concert music) relied mostly on playback at that time, it was seldom necessary to generate performance scores. Written documentation of these works was instead produced for other reasons; a score might be: a sketch or documentation of the compositional process for the purpose of assisting later discussion or analysis, as in Stockhausen’s 1954 Elektronische Studie II; a technical means of instructing a third party how to construct the piece, as in John Cage’s 1953 Williams Mix; or a means of assisting an audience in understanding how to listen to the piece, as in Rainer Wehinger’s listening score for György Ligeti’s 1958 Artikulation. Performance scores for completely electronic works were uncommon, partly because sufficient performance instruments did not yet exist, but also because composers were suddenly tackling issues in sound that composers of prior generations had no need to tackle. How does one talk about fluctuations in timbre? How does one discuss crossing the boundary beyond which a pitch becomes a rhythm? And moreover, how does one notate for a performer how to execute these changes using traditional Western notation, which had so far evolved only to handle (and very roughly so) pitch, duration and dynamics on instruments that of their own accord have fairly static timbres?

When composers did document their pieces, they often resorted to describing the technical processes by which their sounds were created. They might describe what brand and model of oscillators they were using, how many tape machines of what brand and model they used, by what percentage of its original speed the tape had been slowed when rerecorded through a narrowing band-pass filter, and so on. Needless to say, this approach is far more technical than is common in scores for acoustic music and, especially at the time, exceeded the technical knowledge of the common musician.

Conclusions Regarding Effects on Pedagogical Practices

For the purposes of such composers, these means of documentation are perfectly fine, but they neglect the contribution that the production of scores makes toward the development of theoretical and pedagogical models for music making. That is to say, while these works no doubt strongly influence our aural understanding of what sorts of sounds electronic instruments may be used to make, they are not terribly helpful in developing a vocabulary for discussing such sounds so that they may be recreated by performers in our own works or our own performance groups, which so often are not made up entirely of people who themselves have a working understanding of the materials and methods of electronic music making.

Because of this lack of theoretical underpinning, electronic musicians are not afforded some of the basic niceties that acoustic performers have on their side while learning not just the technical craft of playing their instruments, but also the musicality and listening skills that assist them in playing with others. Young acoustic musicians often learn the technique and technical details of their own instruments in private lessons while learning the vocabulary used to discuss music as a whole during ensemble rehearsals, in which they interact very directly with players of other instruments. This old pedagogical scheme not only acts as a means to teach players the language of their craft, but it also works to help the musicians develop a collaborative attitude toward their work. Because there is so little electronic performance repertoire (especially repertoire that places electronic performers as equals in a larger ensemble) and because there is so far no standard means of notating techniques common to electronic music performance, developing the craft to that end becomes difficult. Not only do electronic musicians commonly lack the understanding of acoustic music that acoustic musicians have (and are therefore poorly equipped to interact in technical circles with acoustic musicians), but they are also often ill equipped to describe their own craft to the musicians with whom they might otherwise collaborate. Moreover, left without opportunity to develop their skills in that context, electronic musicians often develop by themselves in their homes and studios. And while they can certainly develop virtuosic skill and sensibility in their own craft, this form of learning does not grant them the interpersonal artistic interaction that makes it easy for a trumpet player, for example, to switch from group to group with agility and ease.

Differences in Playing Approach: Acoustic vs. Electronic Instruments

Beyond these historical dissimilarities in development, we must also consider that electronic timbre-instruments, by their very nature, generally require a different approach to performance than acoustic instruments do.

It is first important to consider that unlike acoustic instruments, the sound produced by electronic instruments does not necessarily originate from the instrument itself. In fact, electronic musicians often play over networks of physically displaced speakers for the purpose of exploiting the spatialization potential such a setup offers. 3[3. Performers of electronic instruments — both analogue and digital — are uniquely capable of controlling the displacement of their sound as a playing technique. And while many players approach this displacement as a process that occurs at an outboard mixer separate from the instrument, I personally consider the spatial positioning to be part of the instrument. Voltage-controllable panning is a huge part of my own creative practice and the practice of countless others.] The separation of sound from the instrument itself promotes a subtly different mindset in many situations: a trumpet player can feel the vibration of the instrument on his/her face; a violinist can feel the strings vibrating; a harpist can feel the instrument resonate. Electronic performers, however, are not necessarily afforded this level of haptic feedback unless it is intentionally planned by the player. This can be logistically complex to sort out when playing with other musicians and requires a certain amount of technical consideration with which acoustic performers are not burdened. 4[4. Obviously this changes when performing amplified or with interactive systems.] Combining necessarily amplified and non-amplified instruments is a common situation in which we become aware of the fundamental differences in approach between said instruments — the default disembodied relationship of electronic musicians and their instruments no doubt results in a different attitude and approach than the necessarily embodied relationship between acoustic musicians and their instruments.
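
To sketch what the footnote describes in concrete terms: voltage-controlled panning amounts to letting a control signal sweep a sound between speakers. Below is a minimal Python illustration (using numpy; the equal-power pan law and LFO rate are my own illustrative choices, not a description of any particular module):

```python
import numpy as np

sr = 48000
t = np.arange(2 * sr) / sr                       # two seconds of samples
signal = np.sin(2 * np.pi * 330.0 * t)           # any mono source

# A slow sine LFO standing in for the panning control voltage, scaled to 0..1:
pan = 0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t)

# Equal-power pan law: the image sweeps between speakers without a level dip.
left = np.cos(pan * np.pi / 2) * signal
right = np.sin(pan * np.pi / 2) * signal
stereo = np.stack([left, right], axis=1)         # one channel per speaker
```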

Figure 2. Ryan W. Gaston’s current personal Eurorack setup.

Additionally, the natural state of electronic instruments is default-active, versus the default-inactive behaviour of acoustic instruments, whether obviously or on some hidden level below the user interface. An acoustic instrument will not make sound until it has been acted upon — producing a sound requires physical effort and planning from the performer. Electronic instruments behave in the opposite way. An oscillator’s tendency is to constantly oscillate — quieting the instrument requires effort and planning from the performer. This difference of resting states can lead to some fundamental differences in musical approach between the two worlds, which can make bridging them difficult. Acoustic music requires breath and pause, whereas electronic music does not. As a result, acoustic musicians often come to think of silence as a canvas on which they apply their sonic imprint, whereas electronic musicians may learn to think of sound as a block out of which they sculpt, following the now decades-old practice of “electronic music as sound sculpture” (Subotnick 1976). And while the same aural results can be achieved from either direction, we must consider the difference in mindset between performers attempting to assimilate with one another.
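
A crude sketch may make the default-active idea concrete. In the hypothetical Python fragment below (my own illustration, not any particular instrument), the oscillator produces signal unconditionally; silence arrives only when a gain stage is deliberately closed:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr

# Default-active: left to itself, the oscillator simply runs at full level.
osc = np.sin(2 * np.pi * 220.0 * t)

# Silence is what takes effort: a VCA envelope must be deliberately closed.
vca = np.ones(sr)
vca[sr // 2:] = 0.0      # the performer's act: shutting the gate halfway through
out = osc * vca
```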

Because each individual sound does not have to be produced as the result of direct intent or contact with the instrument (as in the case of acoustic instruments), electronic musicians are perhaps more easily capable of performing at phrase level and form level than their acoustic counterparts. A single action from an electronic performer can initiate a complex string of sounds and interactions, and as such, performing on electronic instruments could be likened to conducting an ensemble, where a minimal number of gestures from a conductor allows for the synchronization and shaping of a number of note-level events from multiple voices. In fact, electronic performers are sometimes naturally inclined to treat their instrument like an ensemble of several distinct voices. In attempting to instil a sense of accountability for the produced sounds, I prefer to think of this approach in my own practice as shepherding: the performer produces a multitude of sounds and takes care to lead this group of sounds responsibly forward without letting anything get lost. And while this is a valid and often useful approach, it can be alienating to an acoustic musician who is accustomed to intentionally producing every sound one by one. Acoustic and electronic musicians can sometimes encounter difficulty reading one another’s intentions in an improvisation or ensemble context when their instruments require such different types of performer-instrument interaction.

Modular Synthesizer as Solution

One of the perhaps obvious, but necessary, steps in resolving the above complications that keep electronic timbre-instruments separate is the development of an instrument that is approachable and playable by performers. For more widespread implementation, the instrument must also be broadly available and financially accessible. Moreover, the instrument should not require extensive specialized knowledge outside the field of music in the beginning stages of learning. The modular synthesizer in many ways fulfills all of these criteria.

Accessibility and Approachability

To an outsider, the modular synthesizer must look like a haphazard mess of cables, buttons and knobs. In many ways, yes, it seems more like electronic test equipment than a musical instrument (a critique I know I have heard about my own instrument, in any case). To counter this critique, though, I would argue that the flexibility of the instrument’s inherent modularity outweighs this initial aversion or intimidation. What does modularity mean with regard to approachability? In short, it means that the performer can choose how he/she wants to interact with his/her instrument and what sort of haptic feedback he/she wants the instrument to provide.

In great contrast to acoustic musicians, synthesists can freely choose and alter the format of their instruments according to context, which has a huge impact on the means by which they interact with their instruments. One can choose the spacious and ergonomic layout of a 5U system, or instead may favour the compactness and portability of a Eurorack instrument. Perhaps the performer loves the form factor of 4U and the flexibility of patching with banana cables (he/she might be well-served with one of the many available Serge-format systems). Perhaps the performer prefers to have a distinct visual scheme by which to distinguish between the audio signal path and the control voltage signal path, in which case Buchla may be a preferable offering.

Beyond these issues of size and patching technique, one must also consider the actual playing interface. Perhaps the performer wants to interact with the instrument by means of a series of joysticks, or a piano-style keyboard, or force-sensitive resistors, or a series of tuneable touchplates. Or perhaps the performer prefers simply to turn knobs and press buttons. All of these are acceptable approaches and can be made to have little or no impact on the timbres produced. Electronic performers have the choice of a personalized playing interface, allowing them to interact with their instruments in a way that they personally find exciting and inspiring. Without the ability to choose a playing interface that “feels right” to us, we are perhaps less likely to develop passion and skill in what we do. In principle, this is not terribly different from an electric guitarist selecting their instrument’s scale, a neck with a particular radius, or a preferred body style; however, the range of choices is much greater for electronic instruments.

The performer can also choose whatever means of producing and manipulating sounds he/she finds personally appealing and inspiring, and this is perhaps one of the strongest appeals of working with such an instrument. Traditional subtractive synthesis setups are supported, as are traditional schemes of frequency and amplitude modulation. Part of the current interest of the field, though, is that there are so many newer and more novel approaches to sound production easily available for immediate exploration. For example, specialized forms of physical modelling, wavetable synthesis, granular synthesis, spectral shaping, chaotic systems, mechanical sound production and sound processing systems are easily integrated into the performer’s instrument. 5[5. To name but a few examples: the Make Noise Mysteron, perhaps the world’s first commercially available hardware implementation of waveguide synthesis (physical modelling); Synthesis Technology’s E350 Morphing Terrarium, a three-dimensional interpolating wavetable lookup module (wavetable synthesis); Mutable Instruments’ Clouds, a real-time granular processor (granular synthesis); Don Buchla’s latest revision of the classic Model 296 Spectral Processor (spectral shaping); Rob Hordijk’s Benjolin (chaotic systems), available in Eurorack format from Epoch Modular; the uncanny designs of Gijs Gieskes (mechanical sound production); and the guitarist-marketed preconfigured Pittsburgh Modular Patch Box (sound processing system).] If the performer wants to work with sample playback, or real-time synthesis, or feedback patching, or anything outside or in between, there are current offerings that make each approach valid and compelling.

The immediacy of these various sorts of sound-making techniques, together with the variety of available playing interfaces, means that every player’s synthesizer can be unique. This is perhaps less important in terms of what sorts of sounds each instrument can produce and more important in terms of fostering the same sort of relationship between player and instrument that acoustic instrumentalists so commonly develop.


At the time of writing, there are well over one hundred modular synthesizer manufacturers from whom instruments and components may be readily purchased, and more and more shops carry these boutique items. As of 2015, it is safe to say that larger electronic instrument manufacturers are also showing an interest in modular instrument production (with Roland and Moog both re-entering the scene as of late 6[6. See Roland’s new Aira Modular Series and their reintroduction of the classic 500 series in Eurorack format, produced in conjunction with Malekko Heavy Industry, and Moog’s reissues of classic 5U modular systems and new Eurorack designs.]), meaning that these instruments are more broadly available than ever and, because of modern schemes of manufacture, are less expensive to purchase new than ever before. The fact that these systems are commercially available carries with it some perhaps overlooked conveniences that are not inherent to all approaches to learning and playing instruments for electronic timbre music, including common DIY practices for building analogue and digital instruments. For one, these instruments are, for the most part, prepared, assembled and tested by expert designers. They do not require extensive troubleshooting, low-level prototyping or any training in computer science or associated topics. It truly is as simple as planning your instrument, purchasing the components, turning them on and starting to play. And while some headaches and complications might still emerge when a module does not work in exactly the way that one plans, it generally requires considerably less effort than reverse engineering a toy to circuit bend, or writing a computer program with which to compose and perform, or prototyping from scratch a circuit for making sound. Of course, I do not say this to belittle those approaches, as I practice all of them myself. Speaking from a place of experience in each approach, though, I can confidently say that working with the modular synthesizer has been the most immediate. The number of steps between having an idea for a sound and hearing the sound emerge from our speakers is fairly low.

What About Computer Music?

The astute reader will realize that nearly all of the above defenses of the modular synthesizer as a performance instrument can equally be applied to performance with computers, which today is a considerably more common approach to electronic music making. The musical instrument market is rife with manufacturers who produce sophisticated MIDI, OSC and Human Interface Device (HID) controllers, as well as interesting Digital Audio Workstations (DAWs), virtual instruments, virtual effects processors and accessible environments for visual audio programming (Max/MSP, Pure Data, Reaktor, AudioMulch, etc.). More to the point, most musicians interested in making electronic music already own an invaluable tool necessary for this approach: a computer. It is unarguable that, like the modern synthesist, the modern computer musician has an enormous amount of flexibility in configuring the playing interface and sound-generating scheme of his/her instrument — in fact, an even higher degree of flexibility. This is one of the reasons, though, that the computer loses some of its viability as an instrument in the conventional sense (while no doubt gaining immense and exciting viability as its own area of exploration as to what a musical instrument could be).

The multimodality of the laptop is in fact the reason that I am hesitant to adopt it as my sole performance instrument. Being constantly faced with an infinite range of sonic possibilities is often more intimidating than it is inspiring. Something about infinite flexibility makes it difficult to simply explore the instrument just to see what happens, to discover what happy results might occur, to find out what useful sounds you might uncover by simply toying around with the same interface you always use, as with an acoustic instrument, where the user interface does not change and has only one set of intended uses. This kind of effortless, aimless approach to music making is nearly lost with the laptop, with which useful results are often only found by making very intentional decisions about their outcome. Without the intentional decision to use only a limited number of control devices and a limited number of plugins, this kind of effortless familiarity and creativity through play is less possible with a laptop than with a dedicated instrument, as the computer (we must remind ourselves) is not a device manufactured for the purpose of making music. The computer is a multipurpose device that must be augmented by the use of external HIDs to achieve maximum efficiency for most tasks, let alone to assume a role similar to a dedicated musical instrument.

As concerns these HIDs, while there is a tremendous number of commercially available MIDI controllers, I would argue that their marketed form factor tends by its nature to lend itself to a particular use and may therefore act as a hindrance to the creative process as much as an inspiration. Most pads on MIDI controllers send note messages, most knobs send 7-bit control change (CC) messages and most “bend wheels” send 14-bit messages, all of which have their own optimal uses as defined by the general MIDI protocol. While repurposing this data is certainly possible, such tasks are more complex than the common electronic musician’s DAWs and virtual instruments are capable of handling. These controllers are not easily multimodal in that way. 7[7. In this way, there is an interesting disconnect between the multimodality of the computer and the somewhat less flexible nature of its commercially available user interfaces.] As a result, one musician may choose to acquire a plethora of controllers, each with its own intended purpose, deciding at any given time which controllers are most appropriate for use. For myself, having a tabletop full of essentially unrelated devices seems less satisfactory for the purpose of spontaneous music making than having a single, self-contained instrument — a central part of the modular synthesizer’s purpose and basic operating principles.
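
The resolution differences mentioned above are visible in the raw MIDI bytes themselves. A short Python sketch of the three message types (channel and values chosen arbitrarily):

```python
# Raw MIDI byte layout, channel 1 (lower nibble of the status byte = channel 0).

# A pad hit sends a Note On message: note number and velocity.
note_on = bytes([0x90, 60, 100])          # middle C at velocity 100

# A knob sends Control Change: one 7-bit data byte, so only 128 steps.
control_change = bytes([0xB0, 74, 127])   # CC 74 at its maximum value

# A bend wheel sends Pitch Bend: two 7-bit data bytes combined into a
# 14-bit value (0-16383, centre at 8192), i.e. 16384 steps.
bend = 12000
pitch_bend = bytes([0xE0, bend & 0x7F, (bend >> 7) & 0x7F])
```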

Another matter typically highlighted as a benefit of the computer over the modular synthesizer is the computer’s ability to instantaneously execute global changes in behaviour. This ability is partly attractive because of its potential for use as a system of behaviour recall, like a form of preset memory by which behavioural states can be stored and later recalled without requiring the reconfiguration of every parameter specific to the desired behavioural state. The ability to execute these sudden global changes in some ways challenges the sort of musicality inherent to playing an acoustic instrument: being able to suddenly alter large portions of the emotion, intent and playing style of an acoustic instrument is characteristic of only an advanced acoustic performer, but is a fundamental technique of laptop performance. This inherent ability to immediately and drastically change global behaviour can be alienating to acoustic musicians, whose instruments are inherently inclined to behave within a narrower range of pitch, timbre, rhythmic flexibility, etc. Moreover, the extensive multimodality of the laptop can complicate the laptop performer’s relationship with his/her preferred playing interface, which is commonly made to change its own behaviours at the push of a button. A single knob on a MIDI controller can have countless different purposes at different times, whereas a trombone’s slide always performs basically the same function. Similarly, the cut-off frequency knob on a modular synthesist’s preferred filter always performs basically the same function. In this way, many modular synthesizer playing interfaces are perhaps more similar to the single-function interfaces of acoustic instruments than to the nearly infinite multimodality of computer performance interfaces. These several shades of difference in performance practice and personal performer-instrument relationship are important to consider when establishing a comfortable and familiar environment for collaborative performance.
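
In software terms, this behaviour recall reduces to snapshotting and restoring an entire parameter state at once. A minimal Python sketch (the parameter names are hypothetical):

```python
import copy

# A synthesizer's behavioural state reduced to a parameter dictionary
# (the parameter names are purely illustrative):
state = {"cutoff": 0.42, "fm_index": 3.0, "lfo_rate": 0.8, "wave": "saw"}

presets = {}

def store(name, current_state):
    """Snapshot every parameter at once."""
    presets[name] = copy.deepcopy(current_state)

def recall(name):
    """Restore every parameter in one instantaneous gesture."""
    return copy.deepcopy(presets[name])

store("section_A", state)
state["cutoff"] = 0.9           # the performer tweaks things during section B...
state = recall("section_A")     # ...then jumps back globally, all at once
```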

Summary of Potential

In all of these ways, I would argue that the modular synthesizer stands up as at least a partial solution to the issues that have historically plagued the integration of electronic performance instruments in music for ensembles, while also maintaining its own identity as a self-contained, strongly charactered instrument with its own idiomatic quirks and playing styles. Its physical configuration and current market availability are such that it is flexible enough to offer nearly infinite potential for personalized sound production and playing techniques, but still inflexible enough in physical form to encourage the player to think creatively about how to most cleverly and efficiently utilize all of its constituent parts. While modular synthesizers are reconfigurable, their reconfiguration requires enough time and capital that the musician is encouraged to explore the instrument’s fine nuances more deeply than the laptop encourages, since on the laptop drastically different sound worlds can be conjured with minimal difficulty and in minimal time. In exploring these nuances and in carefully selecting how to control them in real time, the player develops the same sort of intimate relationship with his/her instrument that acoustic instrumentalists do with their primary instruments. The synthesizer’s relative inflexibility is actually what makes it function similarly enough to acoustic instruments to remain a relatable platform for collaboration with other instrumentalists.

Manifesto for the Integration of Modular Synthesizer in Collaborative Performance

There are, however, still significant issues to overcome before the synthesizer can assume a common role within our bands, combos and ensembles, so we should now consider the responsibilities that we as composers, educators, collaborators and synthesists must assume in order to make this integration possible.

As Composers

Theory follows practice. Standard theoretical and musical trends begin with composers who are insistent about using new techniques and who are committed to investigating, repeating, developing and proliferating their ideas. It is first the composer’s responsibility to incite broad changes in musical practice, not the educator’s, and not the player’s. Without available repeatable literature, the educator and theorist have few guided materials for study, and performers lack the structured development offered by traditional performance study. And while improvisation is certainly a useful means for developing playing technique, it can be difficult to use as a framework for developing varied and repeatable sounds and techniques outside the player’s typical comfort zone. So the initial roadblock of sorting out how to handle electronic timbre-instruments as a whole could be cleared far more quickly with the help of composers interested in resolving this decades-long issue. Luckily, they now have the advantage of access to sophisticated players of electronic instruments and a large body of timbre-music literature, and are therefore several steps ahead of the mid-century electronic composers and of the later instrument designers, composers and computer musicians whose focus drifted in many ways from standardization of practice in favour of more exploratory work.

Composers, then, must learn to handle the elusive parameter of interest for these instruments: they must learn to talk about timbre and to more intimately understand and notate the intricacies of the relationships between timbre, texture, rhythm, pitch, etc. Composers and bandleaders should learn how to discuss the properties of a sound, but not necessarily by extremely technical means nor by exceedingly simple vocal imitations (both of which are commonplace). This requires the development of a vocabulary by which we can more carefully describe sounds independently of the means by which they are generated.

I consider it key that this language be interface-neutral, a concept not unlike the instrument neutrality emphasized by Princeton’s PLOrk ensemble (Tormey 2011). It is unrealistic to expect two electronic performers to use precisely the same playing interface, or even to expect that two performers with the same interface would be compelled to use it in exactly the same way. Moreover, the multiplicity of available interfaces for making electronic music is one of the things that makes the synthesizer itself so attractive and gives it such staying power. Unlike with acoustic instruments, the qualities of the sounds are not necessarily related to the interface used to play them. Notating a piece of music for a specific playing interface, or discussing a sound that can be produced equally well through several means by addressing only one of said means, is a disservice to the longevity of the piece or sound in question.

Figure 3 / Audio 1 (0:17). Excerpt of an analytical listening / performance score for Morton Subotnick’s Until Spring (1975) prepared by Ryan W. Gaston based on Lasse Thoresen’s spectromorphological notation symbols.

All too many pieces of electronic music have been lost to time because they were written to be interface-specific and therefore only playable by one performer or a small group of performers who used the correct interface in the exact way the composer dictates. We must find a way to give our music more staying power. We can take this particular point from traditional Western notation: a clarinettist can read a trumpet player’s music, a tuba player can read a bassoonist’s music, a harpist can read a guitarist’s music, etc. Our first step should be not to write for Serge TKB+Animoo+BOG panel, nor for Buchla 216+208+246, nor for Make Noise Shared System, but instead for modular synthesizer, or simply for electronics. Moreover, the composer would not need an intimate technical understanding of musical electronics to write in this capacity, thereby being free to write what he/she hears rather than what he/she can technically conceptualize. Returning to our traditional model of music composition, it is the responsibility of the composer to write the music and the responsibility of the performer to interpret the music for their particular instrument. Most instrumental composers do not know how to play the harp, but that does not mean that they do not know how to write for one. The same should be the case for musical electronics.

Luckily, there are already several largely successful approaches to interface-neutral timbre notation surfacing and gaining popularity. One particular system of interest is the spectromorphological notation developed by Lasse Thoresen’s Aural Sonology Project in Norway (Fig. 3 / Audio 1). While this system was generated largely with the intent of developing a vocabulary for discussing æsthetics in electronic music, one of the fantastic side effects of the project is a symbolic notation suitable for handling timbre in a way that should be sufficient for most purposes when combined with certain terms and concepts borrowed from traditional Western notation (Thoresen 2007). It is the responsibility of the composer to become familiar with both the successful and unsuccessful attempts at handling these materials to better understand how best to assist our notation and documentation in evolving along with our music and our instruments.

It is worth noting here that I do not think composers should in every case be electronic musicians themselves. In fact, I believe that this could be a hindrance to the advancement of the craft for two primary reasons: first, that it will lead toward the potentially damning practice of overly technical discussion when creating new works; secondly, that their limited personal experience with the playing interfaces of a broad range of electronic instruments may lead them to be too tame in their approach to handling these instruments. I believe there is value in composers not too intimately understanding the playing techniques of the instruments for which they write. Some of the most impressive and challenging music earns its staying power by being slightly more difficult than a performer may think possible to execute — that is, when the composer is concerned with a larger creative potential of an instrument than is possible within the standard protocols for its performance. The element of challenge is one of the many factors that lead so many musicians to become truly great at their craft, and one that leads many works to be remembered. And while it is important for a composer to consider the practicality of performance, they should balance this practicality against simply notating the sounds they imagine rather than how those sounds are to be played. In conjunction, it should partly be the responsibility of performers to leverage familiarity with their personal instruments in order to realize music that at first may seem impractical, but compelling.

As Educators

There is much overlap between the responsibilities of educators and composers. Educators should also become familiar with prior attempts at timbre notation and should do their part as theorists (and often composers themselves) to assess means of improving extant approaches and to propose these improvements either in the form of music or in the form of writing. Beyond these responsibilities, educators must focus on the nurturing of young electronic musicians the same way they currently nurture the development of acoustic musicians.

Educators and theorists therefore should develop a pedagogical scheme for learning these instruments based around the aforementioned interface-neutral means of discussing timbre in music. In the same way that a young trumpeter works through the Arban method, that a harpist often learns the Salzedo and Grandjany methods or that a violinist learns the Suzuki method, a set of exercises should be developed that helps young players understand timbre and use their instruments to produce the sounds a composer might request.

In order for this system to work for variable-interface instruments, however, educators must become familiar themselves with the tools and techniques of electronic music performance. In this way, they will be equally equipped to explain to a Buchla Music Easel player, laptop player and Ekdahl Polygamist player how to produce, for example, a low-pitched complex timbre with brusque onset, fast dynamic gait, medium granularity and a swelled ending. Furthermore, an instructor of new performers should be aware of methods of instrument planning in order to help their students to choose how to design collections of modules that complement their own musical voices and will assist in tackling the style and repertory in which they are interested, from harsh noise to cool jazz to chamber music.
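
To make that interface-neutrality concrete, such a request could even be captured as a small data structure that each performer interprets on his/her own instrument. A hypothetical Python sketch (the field names loosely echo the description above and are illustrative, not a codification of any existing system):

```python
from dataclasses import dataclass

@dataclass
class SoundRequest:
    """An interface-neutral description of a requested sound. The field
    names loosely echo the description in the text; they are illustrative,
    not a codification of Thoresen's notation."""
    register: str       # e.g. "low"
    spectrum: str       # e.g. "complex"
    onset: str          # e.g. "brusque"
    dynamic_gait: str   # e.g. "fast"
    granularity: str    # e.g. "medium"
    ending: str         # e.g. "swelled"

request = SoundRequest("low", "complex", "brusque", "fast", "medium", "swelled")
# A Music Easel player, a laptop player and an Ekdahl Polygamist player would
# each map this same request onto their own instrument's controls.
```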

As Collaborators

Collaborators must, like all involved, learn the vocabulary for discussing the sounds produced by the electronic instruments around them. They must learn how to overcome the obstacle of communication about how synthesists play and, when necessary, must come to understand the fundamental differences in playing approach between acoustic and electronic instruments. Having an understanding of the mindset of the synthesists with whom they collaborate will help them to understand the idiomatic techniques of the synthesizer, which can lead to an enriched understanding of their own instruments as well as a better informed understanding of how to musically interact with the players around them. In time, these interactions will become second nature and there will be no strangeness encountered in approaching the synthesizer’s default-on tendency and the saxophone’s default-off tendency, or the other subtle differences in approach addressed above. And of course, acoustic collaborators must be willing to have a bidirectional exchange with synthesists; they must share information about their own thought processes and approaches so that synthesists may better know how to react to them as well.

As Synthesists

As synthesists interested in advancing this aspect of our craft, we have a large number of responsibilities to fulfill as well, some of them perhaps contrary to our own nature as players of instruments that can change in scale and sonic potential indefinitely and that produce music that resists analysis and critique. Like the composers and educators, we must learn to talk about our craft in less technical but still standardized terms so that we can have productive conversations about our instruments and our music with others. This communication barrier is one of the things that still hold our instrument back from its full potential for integration into collaborative performance environments, and we will be able to overcome it once an appropriate lexicon of terms is adopted by composers, theorists and ourselves.

First, we must carefully consider the relationship we have with our instruments and their inherent reconfigurability. We must do what we can to learn to be more flexible players, partly as a matter of financial practicality, but more interestingly as a matter of developing sophisticated technique on our instruments. For instance, rather than buying a new complex oscillator that internally facilitates voltage control over frequency modulation index, why can we not route an audio-rate modulation oscillator to a VCA that is then fed into our primary oscillator’s most sensible FM input? Why must we buy a new dedicated coloured noise generator when we can employ circular frequency modulation and circular reset between two low frequency oscillators to generate pseudo-red noise? We should teach ourselves that there is no such thing as a single-function module, and should internalize that by intimately understanding the potential uses of each constituent part of our instruments. We must learn to make patches that are not all on, all the time, instead relying on subtle changes of routing or subtle differences in the fluctuation of our control voltages to allow us to produce drastically different sorts of sounds or drastically different groupings of sounds on a dime.
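
The first of those examples is worth unpacking: routing the modulator through a VCA before the carrier’s FM input simply means scaling the modulator by a control voltage, which is precisely a voltage-controlled modulation index. A minimal Python sketch (frequencies and CV shape are arbitrary, and phase modulation stands in for FM, as is common in the digital domain):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
f_carrier, f_mod = 220.0, 110.0

# A rising ramp standing in for whatever control voltage we send to the VCA:
index_cv = np.linspace(0.0, 8.0, sr)

modulator = np.sin(2 * np.pi * f_mod * t)   # the audio-rate modulation oscillator
scaled = index_cv * modulator               # the VCA: CV scales the modulator's level

# The scaled modulator drives the carrier's FM input; the modulation index
# now evolves under CV control, no specialized oscillator required.
out = np.sin(2 * np.pi * f_carrier * t + scaled)
```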

That is how we can turn our instruments into sufficient tools for live interaction with other performers: we must learn to have the same degree of disciplined control over our instruments’ every sound as acoustic musicians have. And even when we are not playing in such a way that we control the generation of each individual sound in an accumulated texture, we should take the shepherding approach, in which we take careful account of each sound and exercise control over just how far from the herd it is allowed to stray. We must to a certain extent reprogramme our understanding of long-standing approaches to patch design and instrument planning so that we can be as flexible and as agile on our instruments as is possible. While an enormous instrument with a huge array of modules can produce an impressively intricate texture with ease, I would argue that small systems are often capable of just as impressive an array of sounds when cleverly and carefully patched, and are usually more easily and more quickly controllable.

Audio 2 (4:16). Studio performance of the improvisational piece Mast by Burnt Dot at California Institute of the Arts’ Dizzy Gillespie Recording Studios on 17 November 2015. Ryan W. Gaston (electronics) and Sarah Belle Reid (trumpet).

This brings me to the second, directly related point: we need to strongly consider and become acquainted with the actual playing interface of our instruments. As my friend and synthesist Todd Barton often states, “You are only as expressive as your control voltages.” In reprogramming our understanding of how to patch our sounds themselves, we should gradually learn the points at which injecting additional layers of control may be useful. We can sit in the studio and tweak knobs and sample all day if we like, and can make some truly wonderful sounds and recordings as a result, but even the best soundmakers are often not the best performers. We must learn how to judge when it is best to send a stepped control voltage through a lag processor before it reaches its destination, in the event that mid-performance we decide that some slope in this CV’s fluctuations would be more compelling than none. We need to become adequately aware of the sensitivity of each of our modules’ inputs, of whether our VCAs have linear or logarithmic response, and of the point at which inserting an additional attenuator or CV mixer before an input makes sense musically.
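
For reference, the simplest lag processor is just a one-pole smoother that chases its input. A minimal Python sketch of a stepped CV acquiring slope (the coefficient is an arbitrary choice):

```python
import numpy as np

def lag_processor(cv, coeff=0.005):
    """One-pole lag: the output exponentially chases each new step of the
    input CV. Smaller coeff means a slower, more audible slope."""
    out = np.empty_like(cv)
    y = cv[0]
    for i, x in enumerate(cv):
        y += coeff * (x - y)
        out[i] = y
    return out

# A stepped control voltage, as from a sequencer: four steps of 1000 samples.
stepped = np.repeat([0.0, 1.0, 0.25, 0.75], 1000)
smoothed = lag_processor(stepped)            # each jump now becomes a slope
```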

These are all aspects of our instrument’s playing technique, and we need to fully understand their implications in order to be compelling and nuanced musicians. Improvising musicians need to learn to plan for these decisions and should be prepared to make musical judgments during the course of performance, judgments which will no doubt often involve reconsidering one made during patch setup. Repatching during performance can be messy and can distract from the moment’s musical direction, so we must learn to cope with these decisions gracefully and efficiently in real time. We should be mindful that we do not let the default behaviours of our control voltage sources control our musicality; control voltage is a mutable part of our instrument, and shaping these voltages deliberately is how we play the instrument and how we make musical statements in our work. The true art of our craft is in the management of these systems for controlling sounds, and we must practice these skills accordingly.

Players of composed music should of course work with composers to assist them in understanding what sounds are possible from our instruments and how they may approach them in their work. This communication should be bidirectional, though. We must also learn to interpret the music composers provide us and to plan for difficult changes of timbre, texture, etc. through intensive score study and practice. A professional trumpet player does not show up to rehearsal without the required mutes; similarly, a professional synthesist should not show up to rehearsal without being properly patched to approach, for example, the change in overtone harmonicity at 6:37 in the piece. Because very little of this sort of music exists for our instruments, we are not often used to approaching our practice time in that way. In order for that type of music to exist, though, we must make corresponding changes of discipline in our own practice habits.

Most importantly, we should learn to make all of these changes playfully. We must recall the importance and the gratification of working with other musicians, both electronic and acoustic, remembering that for centuries before our instruments were invented, music was a necessarily social activity. It is something that we do together. Rehearsals are a time to bond with your collaborators, and performances are a temporary community formed around a common interest. In an ideal situation, we use our practice to reach people and to tell them something through music. I can think of no better or more effective way to do this than to help one another along as musicians, developing a scheme of collaboration and theoretical discourse that enables others to do the same for time to come. We should understand that our habits and our practices have an impact on the evolution of our instrument’s role in the musical world, and that it will take work and discipline for those of us who are interested to find a way to teach others about the enticing possibilities our instruments present and to solidify their place in the advancement of electronic performance practice as a whole.

Final Thoughts

In this document, I have demonstrated both the desire for and the potential of developing the modular synthesizer’s role as a musical instrument in common performance practice. By offering a brief overview of the history of common approaches to the theory and practice of electronic music making, I have assessed some of the reasons that electronic timbre-instruments have to this point resisted concise theoretical discourse and why so little live electronic timbre-instrument repertory currently exists, and have argued why the modular synthesizer seems to offer unique potential as a tool for helping to overturn this tendency. Furthermore, I have used this assessment to suggest a basic model by which a new method of electronic music theory, performance and composition may be instituted, from the level of pedagogy to that of professional practice.

Bibliography

Subotnick, Morton. “Electronic Music as Sound Sculpture.” Liner notes to the vinyl release of Until Spring (1975). MCA, Columbia/Odyssey, 1976.

Thoresen, Lasse. “Spectromorphological Analysis of Sound Objects: An adaptation of Pierre Schaeffer’s typomorphology.” Organised Sound 12/2 (August 2007), pp. 129–141.

Tormey, Alan. “Developing Music Notation for the Live Performance of Electronic Music.” CLIEC 2011. Proceedings of the Concordia Live and Interactive Electroacoustic Colloquium (Montréal, Canada: Concordia University, 26 March 2011). http://cliec2011.hexagram.ca
