(This article is taken from the forthcoming UCC DAH iBook “Digital Arts & Humanities: Scholarly Reflections”).
As we travel through the technological epochs of the twenty-first century, musicians are adapting the applied sciences to the needs of contemporary electronic music. Exposed to ever-greater computing power, twenty-first-century musicians keep the genre changing, bending technology towards applications quite unlike those it was designed for. Technological advances from a range of fields are being taken up by creative individuals to make music that is often unconventional, but progressive.
In the twentieth century we saw rapid technological advances in the applied sciences. It was at the dawn of that era that Count Ferdinand von Zeppelin first took to the sky, while in a laboratory somewhere in the world the first radio was receiving messages through the aether. Only a hundred years later we fly through the air at hundreds of kilometres per hour, and almost every home has a radio receiver of some kind. So how has technology developed over the first decade of the twenty-first century, and what does that mean for composers of contemporary music?
As technology has advanced, so have the ways in which we produce sound and music. Throughout the twentieth century, rapid scientific progress yielded new musical instruments: electronic music was born, and electronic instruments took form from the output of engineering laboratories. Engineers became composers and composers became sound scientists. Another fascinating change occurred as technology became musical: “noise” became art, and electro-acoustics became entwined with the musical genres of old. It has been convincingly argued in the literature that music technology has shifted human mental and physical orientations, encouraging the evolution of the industrious culture we live in today.
Changes in musical interaction follow technological advances closely, but these are not the only issues that require addressing; philosophical changes have occurred too. With such powerful technology in our homes, the musician is now as much a part of the conceptualisation as of the performance. Musicians are now composers and vice versa, whereas in the twentieth century a musician was expected to follow the instructions of a composer strictly, and personal interpretations of a score were seriously frowned upon. Improvisation and artistic expression have burst into modern musicality with both barrels blazing.
For many, sound is taken for granted. We are constantly bombarded with “noise” from the moment we are born. Be it the steady beat of a heart monitor or the smack of a doctor’s hand, in our first moments in this world we are submerged in sound. Even before birth, scientists believe, a developing child in the womb can recognise voices and music, and these can affect the child’s development from within. As sound surrounds us from the start, it is no wonder that we strive to control its production with the modern tools we are given as we grow. Think back to when you were a child: were you given a pseudo-musical instrument early on?
What about the younger generation’s toys today? Technology is very much a part of early life, and it is no wonder that today’s musicians are using every means available to expose the inner workings of electronic sound generation to the greater public, many of whom take these sounds as a given, as something that simply comes out of devices so easy a child can use them. Back in the twentieth century, these means of musical creation would have required expensive circuitry or months of computer programming; the technology available was simply not good enough for the realisation of an artistic concept.
Towards the end of the twentieth century we saw the introduction of MIDI into musical devices. Since then, this very basic and often limited musical communication language has taken over to become the most prolific, “all-encompassing” technology of electronic and acoustic art. In the twenty-first century this dated technique is still in use, and its limitations remain all too noticeable to string and wind players, whose continuous control of pitch and dynamics sits uneasily with MIDI’s discrete, note-centred messages. MIDI has dominated the market for keyboard instruments thanks to the commercial interests of its founding companies. The new musical technology being produced today is liberated by the inclusion of artists in the conceptualisation of musical devices.
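To make that limitation concrete, a MIDI note-on event is only three bytes, with pitch and loudness each squeezed into seven bits (values 0–127). The sketch below, plain Python assuming no MIDI hardware or library, builds such a message by hand:

```python
def note_on(channel, note, velocity):
    """Build a raw MIDI note-on message: one status byte, two data bytes.

    Note number and velocity are each limited to 7 bits (0-127), which is
    why continuous pitch and fine dynamic shading are hard to express.
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])  # 0x90 = note-on status

# Middle C (note 60) at moderate velocity on channel 0:
msg = note_on(0, 60, 64)
print(msg.hex())  # -> "903c40"
```

The whole expressive vocabulary of a performance must pass through messages of this shape, which is precisely the bottleneck string and wind players notice.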
A simple comparison of basic sound-generating devices across the generations highlights the technological advances of music technology in the twenty-first century. Look at the images on our televisions, in our magazines or on billboards: musicians are no longer wielding cumbersome guitars or holding up keyboards, but generating music from within tiny boxes with far less obvious means of sound production. This is the computer music generation, here to replace the old analogue electronics with digitised sounds; sounds for which no natural sound-generating device can be seen or even held.
The creation of sound has been the one dominating factor in the design of musical instruments. A sound generator agitated by bow, breath or strike has a natural sound quality that we can all recognise. This knowledge of natural sound creation helps us to understand an acoustic event and even how to control one. The physical characteristics of an instrument dictate both the sound produced and the control constraints we place upon it, and these devices have been refined over many millennia. The same cannot be said for digital sound creation.
However, digital technology can no longer be considered a new concept; it has been with us for a few decades now, and it presents an almost limitless world of sonic and musical possibilities. This technology was initially available only to the large academic institutions that could afford massive computers and synthesis devices. Digital computing may have been around for a while, but it has never been more affordable than it is today. The affordability of digital devices such as laptop computers means that the masses now have the means of manipulating the digital realm from their home or studio. Sound synthesis is only a quick click and download away, and once you have the means, the possibilities are endless, restricted only by the end user’s ability or imagination.
Digital music has presented us with a new blank slate for composition. Composing through computers is slowly replacing the traditional paper-and-pencil methods of old, and this contemporary means of composition is also finding its way into traditional musical genres. It is not unusual for, say, jazz musicians to have digital improvisations accompanying them on a CD or on stage. Computer music can no longer be confined to the electronic music taxon; it is now a tool to be wielded by all who possess a computer. Current processor speeds can quite easily handle real-time sound synthesis, and modern software is designed to be graphically pleasing and intuitive to operate.
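As an illustration of how little machinery real-time synthesis now demands, here is a minimal digital oscillator in Python; a modern processor computes a second of audio in a small fraction of a second. The 44,100 Hz sample rate and the pure sine tone are illustrative assumptions, not a prescription:

```python
import math

SAMPLE_RATE = 44100  # CD-quality samples per second (an assumed default)

def sine_wave(freq, duration, sample_rate=SAMPLE_RATE):
    """Generate a pure sine tone as a list of floats in [-1.0, 1.0]."""
    n_samples = int(duration * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n_samples)]

# One second of concert A (440 Hz), ready to send to a sound card:
tone = sine_wave(440.0, 1.0)
```

Every software synthesiser, however elaborate its interface, ultimately reduces to computing streams of numbers like this quickly enough to keep the loudspeaker fed.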
In the early stages of the twenty-first century, we are embracing the digital as an old friend and incorporating it into everything we can. As computers become faster, our means of expressing our musicality become more extravagant. Our input device is no longer a keyboard for writing long strings of complicated code to produce a colourful tone; a single hand gesture can be recognised as a control parameter that will change anything the musician desires. To realise these methods, sensor technology must also be a considerable influence on the computer music generation. The means by which musicians interact with their digital creations is also mutating, steered only by the direction of technological advances in this field.
Not only are infra-red and motion sensors readily available from electronics stores and internet distributors, but motion-tracking devices are being introduced into the home via games consoles. The Wii controller, released in 2006, revolutionised home computer entertainment, and it was not long before it was reverse-engineered and applied to musical ends. More recent releases include the Sony PlayStation Move controller, which tracks movement and location using technology similar to the Wii’s combined with a camera, and Microsoft’s Xbox 360 Kinect system, which relies on observation of motion alone to control game movements. How long before these are incorporated into twenty-first-century music technology? The answer is almost instantly: examples of these devices being used without a games console can be viewed online today. Hacking this technology is not readily accepted by the distributors, but it is impossible to stop the curious minds of creative individuals.
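At heart, these hacked-controller projects solve a mapping problem: a stream of sensor readings is scaled onto a musical parameter. The sketch below is a hypothetical illustration, assuming some controller library has already delivered a normalised tilt value between -1.0 and 1.0; the function, its name and the note range are all invented for the example:

```python
def tilt_to_pitch(tilt, low_note=48, high_note=84):
    """Map a normalised tilt reading (-1.0 .. 1.0) onto a MIDI note number.

    `tilt` is assumed to come from a controller library (hypothetical here);
    low_note/high_note default to a three-octave range around middle C.
    """
    t = max(-1.0, min(1.0, tilt))          # clamp out-of-range readings
    span = high_note - low_note
    return low_note + round((t + 1.0) / 2.0 * span)

# Tilting fully left plays the lowest note, fully right the highest:
print(tilt_to_pitch(-1.0), tilt_to_pitch(0.0), tilt_to_pitch(1.0))  # -> 48 66 84
```

The musical interest lies entirely in the choice of mapping; the code itself is trivial, which is exactly why amateurs could take it up so quickly once the sensors reached the living room.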
The concepts, ideas and even the technology of these newly released controllers are by no means “new” either. Academic research on capturing gestures for musical manipulation has been carried out for decades, but it was confined to the elite institutes specialising in these fields and was not available for the average amateur to reconstruct and manipulate for their own ends.
And yet, as time creeps forward in this century, there are those among us who yearn, even cry out, for a revival of the old technology. The clinical sound of a perfectly constructed waveform is almost painful to the ears of some devotees of the analogue world. Society’s need to relive the old, for a trend to come around again and again, requires the same technology to be reproduced and revisited. So we reach a peculiar point in time: one that grants us more freedom of expression than any generation before, and yet we still pine for the “old” sounds to which our counterparts of the last century were limited.
The production of music through natural means is inspiring, but also restrictive. The limitations of acoustic instruments become very apparent when a beginner picks one up for the first time. While the player produces the full waveform, they also bear full control over its creation; that control can feel unstable and non-linear to a beginner, yet it is precisely what attracts a competent player. Years of practice can be rewarding, but far too time-consuming in this fast-paced century. On the other hand, we have the engineering principles of human-computer interaction (HCI) yielding new musical devices built on “user-friendly” lines. These principles often overlook the rewards coveted by skilled performers of acoustic instruments: the freedom of expression attained via acoustically imperfect instruments is lost in the use of clean-cut digital interfaces. Nevertheless, the new music generation finds the immediate rewards very gratifying and inspiring.
The “cut and paste” computer generation now hungers to go out and gather its own means of sound generation, constructing analogue devices to reproduce the “warmth” that is somewhat missing from today’s home-produced music. Musicians are rummaging through attics, storage rooms and second-hand stores, trying to reclaim what was once disposed of to make room for the new.
So what we see today, having lived through the first decade, is an amalgamation of old and new ideas brought to the public and laid in their lap. The general populace are given creative freedom to manipulate old technologies, freedom to develop new and intriguing sounds digitally, and the opportunity to do so at a fraction of what it would have cost in the twentieth century. How lucky we are to be given such freedom; how lucky we are to live in the twenty-first century.