Beyond Art? Digital Culture in the Twenty-first Century Colloquium
The Oxford Union, 21st April, 1999
Organisers: Oxford University's Humanities Computing Unit (http://www.hcu.ox.ac.uk/)
The Effect of Digital Technology on Musical Creativity
School of Music, Bretton Hall College, University of Leeds.
What has digital technology to offer the composer searching for the right note, the appropriate chord, the smooth transition from one idea to another? Are the current tools and systems, said to make us more creative and productive, delivering the goods? Or are there intrinsic problems that may already be having a serious effect on the development of composers whose formative relationship with music composition has depended on interactions with technology? To address these questions, the paper considers a study of undergraduate students working with computer sequencers and describes research and development of new tools for music composition and its teaching and learning. It concludes with recommendations that composers empower themselves to take control of digital technology, rather than being controlled by it, and move away from the emphasis on production and performance outcomes towards the development of personal environments responsive to creative and critical thinking.
Digital technology has made it possible for every man, woman and their respective dogs to be composers. Own a PlayStation, compose Music. Tim Wright, the inventor of 'Music' for the PlayStation, says "Once you've learned the key presses it's really quick and easy". So why bother with performers, concerts, even other people's recordings, when you can do it all yourself?
Despite such competition (!), the serious composition of music via the signs and symbols of a notated score continues to thrive, continues to be considered by some as a pinnacle of human intellectual achievement: this extraordinary business of conceiving and coding a time-based continuum of simultaneous strands of organised sound, capable of projecting to the listener intensities of expression, sensation and meaning.
In the last 25 years the gurus of music education have done much to convince us that music composition is not only accessible but a central learning activity: it embraces all the skills of music. And this centrality is now recognized and enshrined in our National Curriculum, GCSE and (soon) A Level.
In Higher Education music composition is a recognized undergraduate course component no longer for the few but, at some point anyway, for everyone. It is seen as a valuable and active way to encounter and experience music, particularly as the active relationship musicians once had with music through the keyboard has been overwhelmed by musical experience gained from a predominantly passive engagement with the CD recording.
This corner of music composition also reflects and serves a significant historical and cultural investment: in instrumental performance practice; in the international network of concerts and festivals. It feeds the insatiable appetite of a recording industry and continues to be a vigorous part of the soundscape of film and TV. All these things require the services of composers of the highest technical and creative calibre.
So how exactly does digital technology come alongside the creative process in this area of composing music? Desktop publishing of music certainly deals with the most complex of metrical and graphical requirements. And this is important because self-publishing, particularly on the internet, is now a reality and, as publishers continue to cut their lists, a necessity.
The software engine behind the processing of music symbols also enables us to hear a model or simulation of a musical score in sound. This is courtesy of a 15-year-old protocol for music known as MIDI. The Musical Instrument Digital Interface is a data protocol that deals not with sound but with the parametric ingredients of music - for example, pitch, duration, dynamics. Remember, you can't hear the pitch of middle C unless something or somebody plays or sings it. A MIDI keyboard makes no sound; it simply sends performance information - the keys I choose to press, the various durations my fingers decide to hold, and the individual velocity of the attack each finger brings to a key. Our MIDI keyboard sends all this in a serial stream of data through a computer to a sound card or synthesiser which interprets this data to trigger appropriate sounds. And my performance data (not the sounds that ensue, mind you) can be recorded and played back using software known as a sequencer. Most sequencers are able to capture such musical performance information and display it on screen in musical notation.
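To make the point concrete, here is a minimal sketch (not a real MIDI driver) of the performance data such a keyboard actually transmits. The function names are illustrative; the byte layout follows the standard MIDI channel-voice message format, in which a single struck key amounts to just three bytes of parametric data - which key, how hard - and never the sound itself.

```python
# Status bytes for MIDI channel-voice messages (channel 1 = channel value 0).
NOTE_ON, NOTE_OFF = 0x90, 0x80

def note_on(pitch, velocity, channel=0):
    """Three-byte Note On message: status byte, key number, attack velocity."""
    return bytes([NOTE_ON | channel, pitch, velocity])

def note_off(pitch, channel=0):
    """Note Off: the key is released; velocity 0 here for simplicity."""
    return bytes([NOTE_OFF | channel, pitch, 0])

# Middle C (key number 60) struck firmly (velocity 100):
msg = note_on(60, 100)
print(msg.hex())  # -> '903c64'
```

The duration of the note exists nowhere in either message; it is simply the time that elapses between the Note On and its matching Note Off - exactly the "various durations my fingers decide to hold" described above.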
From its inception the higher echelons of the digital music fraternity have been looking down their noses at MIDI as a technological dinosaur that has to disappear. Yet it hasn't disappeared. It continues to thrive because it can handle the parametric information described above in a powerful and efficient way. And although we have the ability to effect incredible transformations on a soundfile of acoustical data, we can hardly begin to separate out its basic parametric information.
So how does a composed score come into being before it is processed to be seen, interpreted and auditioned? Let us debunk the notion that musical ideas come to the composer entire, in the grandeur and detail of a finished score in the head; unless our names are Shostakovich or Mozart, they rarely do. To illustrate the reality of the composing act for the rest of us, a few words from Papa Haydn:
'I get up early, and as soon as I have dressed I go down on my knees and pray God and the Blessed Virgin that I may have another successful day. Then when I have had breakfast I sit down at the clavier and begin my search. If I hit on an idea quickly, it goes ahead easily and without much trouble. But if I can't get on, I know I must have forfeited God's grace by some fault and then I pray for more grace till I'm forgiven.'
This searching rarely results in the immediate completion of sections of a musical work. Most of us are lucky if we get a glimmer of what we're after. We engage in a detection process focused on what can be remembered of isolated imaginative moments. The composer may then create all kinds of abstractions of musical material: text, graphics, charts, tables. These help to build up a personal explanation of a body of knowledge from which relationships can be perceived, providing the wherewithal for continuation. This kind of music composition is definitely an off-line, non-real-time activity.
Regrettably, the MIDI sequencer / scorewriter, perceived as the de facto tool for the composer in this medium, doesn't begin to serve the composer at points of origination, conception and detection of musical ideas. It is a recorder and editor of performance data. It is on-line and real-time, unable to interpret or analyse abstractions from which potential structural relationships are made. It rarely has any kind of generative mechanism; it will not create 'what ifs'. The software has been designed for the three-minute songwriter, the jazz arranger and the media composer, not for the composer of art music.
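What would even the simplest 'what if' machinery look like? A hedged sketch, assuming nothing about any particular sequencer's feature set: treat a motif as parametric data (MIDI pitch numbers) and generate candidate variants for the composer to audition. The function names and transformation rules here are illustrative only.

```python
def transpose(motif, interval):
    """Shift every pitch by a fixed interval in semitones."""
    return [p + interval for p in motif]

def invert(motif):
    """Mirror the motif about its opening pitch."""
    axis = motif[0]
    return [axis - (p - axis) for p in motif]

def retrograde(motif):
    """Play the motif backwards."""
    return list(reversed(motif))

motif = [60, 62, 64, 67]  # C D E G, as MIDI key numbers

# Three 'what ifs' the composer might audition:
what_ifs = {
    "up a fifth": transpose(motif, 7),   # [67, 69, 71, 74]
    "inverted":   invert(motif),         # [60, 58, 56, 53]
    "reversed":   retrograde(motif),     # [67, 64, 62, 60]
}
for name, variant in what_ifs.items():
    print(name, variant)
```

The point is not the triviality of the arithmetic but that it is arithmetic at all: because MIDI is parametric, such transformations are a few lines of code, yet the typical sequencer of the day offered the composer no way to express them.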
And it is precisely this mismatch that I set out to investigate when, in the mid eighties, my university invited me to explore the potential of this particular technology to serve teaching and learning music composition. I not only studied and worked creatively with most of the early sequencer applications but made associations with their designers in order to forge a dialogue that might result in appropriate specifications being adopted. A survey of this early work can be found in my paper 'Access for All, or Toys for Enthusiasts?', published in the CTI journal Musicus in 1992.
In the early nineties I took a further step and initiated a piece of action research with students at Dartington College of the Arts, in the first instance observing undergraduate students of music, art and theatre disciplines working with MIDI technology in creative music situations.
Let's look over some of the findings:
I came out of that experience resolved to find and develop a better solution . . .
It is against this background that four years later I began research and development with information scientist John Cook to design a guided tutoring system that would:
This work has now come to fruition in the development of MetaMuse, now being used to help those teaching music composition to encourage more effective pre-composition planning [and thinking] and to help students develop reusable devices and strategies directly transferable into music-generating code.
MetaMuse is built on top of what might be regarded as a second-generation music composition environment known as Symbolic Composer. This is an application I have been developing with its originator Pekka Tolonen. Its context-free and extendible design gives the composer the potential to develop the tools and features that almost every new composing project invariably requires.
To conclude: I believe we have been too quick to sacrifice the vision of the likes of Seymour Papert and Marvin Minsky on the altar of glossy interactive multimedia. I see a generation of composers of art music turning their backs on technology; they are not creating active relationships with software that could enrich their thinking and their creativity. And this is possibly because they've had no opportunity to explore and create, at any time in their educational experience, what Papert has termed the microworld: something owned, personal, extendible. Programming as an adjunct to developing thinking skills is definitely not part of the current educational agenda. Without it we have little or no control over our creativity. Musicians today are victims of the Microsoft culture (or in musical terms the Steinberg culture) that binds our creativity to the design constraints and ill-informed perceptions of a team of programmers. Only when we're brave enough to face the uncomfortable facts about what all this might be doing to our creativity will we begin to see exciting new paradigms of musical thought appear, as composer-programmers like Tod Machover (MIT), Magnus Lindberg (IRCAM) and Danny Oppenheim (IBM) are already beginning to demonstrate.