For the first time in the long history of music, we can learn how music works harmonically, including the myriad cultural conventions of notation and theory, without running scales on an acoustic instrument: we have an app for that.
Running scales can be charming and beneficial when artfully invented and performed: Bach, Mozart, Ravi Shankar, and so on. But many of us who learned by playing scales as children stopped playing, because we were either not gifted in this way or bored stiff with soulless instruments. Many were forced by parents, teachers, or peer competition to do something we didn't enjoy, on instruments that would challenge the sanity of a virtuoso. Some early scale-runners applied their limited knowledge and experience in garage bands; some became stars of rock or jazz. But so few have composed a single masterpiece that we must admit the conventional approach, though effective for a few, may actually foreclose possibilities for a much larger number of people who love music and could do well if they could learn some other way.
Today, using off-the-shelf music technology and their own musical sensibility, anyone can learn to distinguish nuances of musical relationship with finer resolution and better technical understanding than they could by running scales. With a little guidance from MIDI interpretations of scores by Mahler, Bartók, Beethoven, and others, they can understand music as well as former scale-runners who became music teachers. Unfortunately, MIDI performances using even the best samples cannot yet come close to an acoustic performance by a gifted musician. I doubt that will ever change; if it does, I suspect it will be only because our ability to hear music will have become less acute.
Shortly after I started working with digital composition and production, I wrote a scene for a sort of feel-good movie, to be called "The Schoenberg." I realized it wasn't a movie I had written but a future I had unknowingly predicted, and that particular future may now be emerging.
Playing with their ideas, I've learned to follow the musical narratives of the amazing Russian, Italian, French, Hungarian, Czech, German, Austrian, English, and Spanish composers of the last millennium, and of those who came to America: Copland, Ives, Berg, Schoenberg, Adorno, Korngold, and Goldsmith.
Composing, for me, is self-rewarding, while my satisfaction from writing comes when the work is read. Narratives of any kind satisfy our need for connection, and it's ironic that media serve as a surrogate for real connection. And since commercial media narratives aim to please an "average" human being, an abstract and imaginary notion, the value of media is limited.
Our actions and inactions are responses to our feelings, and as we've seen in cinema, music can alter perception without logical reason. Logical explanations follow our actions: they are interpretations of prior performance. Wisdom is emotional: we feel what we feel.
We can't question the economic value of music: the music industry rakes in hundreds of billions of dollars. In view of this, it is remarkable that public education in the United States abandoned music (to be fair, along with the rest of the arts) in the 1980s, and that our university classes are now led by musicians who are unable to support a lifestyle by plying their craft. Most of us teach for a living, not out of artistic commitment. We gave up artistic ambitions to become carpenters, real estate brokers, educational administrators, and so on, and now we pay begrudging respect to those who achieve commercial success, tending to focus on recording and production technologies rather than artful expression. Educators haven't yet seen the possibilities of emerging music technology.
For most of history, children who had excellent support and musical ambition might become virtuosic performers and composers by absorbing the knowledge of their traditions by rote. American jazz musicians had family and peer support within their cultural tradition. The musical development of many popular performers, such as Bob Dylan, Elvis Presley, and Janis Joplin, was informed only by rudiments of the canon, but they were typically supported by traditionally trained musicians and songwriters. Popular music composition doesn't rely heavily on the canon, but it does rely on common-practice conventions of 17th-century notation and harmony, and most of those hundreds of billions of dollars is spent on recordings of popular music.
In the 21st century, we can make complex, articulate, and emotionally powerful compositions without ever learning to play an instrument. Moreover, we may better understand, compose, and produce music of any tradition and complexity. As a young man, I learned to play classical guitar when I stumbled into buying a concert-quality instrument from my flamenco teacher: a 1962 concert model made by José Ramirez. In 1970, I traded it for a bass viol made in Czechoslovakia by another famous instrument maker, which I later sold, not knowing that great bass viols are even rarer than great guitars. At the time, though I spent countless hours practicing, I hadn't a clue about harmony. For a while, I lived in a small rural village in Canada, where I had five pianos in my home and played them all by ear. Today, I have a Yamaha Motif 8 professional electronic piano, which interfaces with my digital audio workstation, and it was with this technology that I was able to study and understand the finest distinctions of music theory, art, and practice.
A digital audio workstation (DAW) is a computer program that processes digital representations of sounds. I use an app called Logic, sold by Apple, installed on a MacBook. I've purchased a library of instrument voices from Vienna Symphonic Library, an Austrian company that makes high-quality samples by recording instrument sounds performed by competent artists in a nearly completely dry environment. (Dry means without reverberation, the sound reflected from the surfaces of the room.) I also use several software synthesizers and digital signal processors that can emulate analog, digital, and acoustic instruments and add reverberation and other effects.
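Since the samples are recorded dry, reverberation is added back by software. One common technique, convolution reverb, mixes delayed, scaled echoes of the dry signal according to a room's impulse response. A minimal sketch in plain Python, with tiny hypothetical signals standing in for real audio, just to show the idea:

```python
def convolve(dry, impulse_response):
    """Discrete convolution: each echo in the impulse response adds a
    delayed, scaled copy of the dry signal -- the essence of
    convolution reverb."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, s in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

dry = [1.0, 0.5, 0.25]   # a short "dry" signal
ir = [1.0, 0.0, 0.3]     # direct sound plus one quiet, delayed echo
wet = convolve(dry, ir)  # the "wet" signal, with the room mixed in
```

Real plug-ins do the same arithmetic on millions of samples (usually via fast Fourier transforms), with impulse responses recorded in actual concert halls.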
A sample is a compressed digital recording of a sound made by an instrument or group of instruments playing a single note. When performers play notes on acoustic (real) instruments, their technique shapes the sound of each note. The DAW can sound a sampled note, and the DAW musician can adjust its sound by programming nuances of timing, attack, sustain, pitch, volume, timbre, decay, release, and reverb for each note. Using the software's intuitive graphical user interface, we can program automated changes in the sound of each sampled note, following a pattern. Algorithms can also modify sounds to create a humanizing effect. Those who use sound and music technology work with the same fuzzy distinctions that characterize acoustic music, even though the CPU computes with rounded values and is highly precise.
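The humanizing algorithms mentioned above typically apply small random offsets to the programmed timing and velocity of each note, so that a strictly quantized passage sounds less mechanical. A minimal sketch, assuming a hypothetical NoteEvent record rather than any particular DAW's internal format:

```python
import random
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """One programmed note, roughly as a DAW might store it."""
    pitch: int     # MIDI note number, e.g. 60 = middle C
    onset: int     # start time in ticks
    velocity: int  # loudness/attack, 1-127

def humanize(notes, timing_jitter=5, velocity_jitter=8, seed=0):
    """Return a copy of the notes with small random offsets applied to
    timing and velocity, mimicking a human performer's variation."""
    rng = random.Random(seed)
    out = []
    for n in notes:
        out.append(NoteEvent(
            pitch=n.pitch,
            onset=max(0, n.onset + rng.randint(-timing_jitter, timing_jitter)),
            velocity=min(127, max(1, n.velocity +
                                  rng.randint(-velocity_jitter, velocity_jitter))),
        ))
    return out

# A strictly quantized C-major arpeggio, then its humanized version.
quantized = [NoteEvent(60, 0, 80), NoteEvent(64, 480, 80), NoteEvent(67, 960, 80)]
played = humanize(quantized)
```

The jitter amounts and the NoteEvent fields here are illustrative assumptions; commercial DAWs expose the same idea as a "humanize" or "randomize" function over MIDI regions.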
Music technology is probably far more important to the future of humanity than we typically recognize. We tend to think music is unnecessary, but if it is in fact necessary, the reason why is a defining characteristic of being human. You could organize your life to do without depth perception and give up an eye. You could live without color vision and the part of your brain that supplies it. You could live without intimate human contact. But when we evaluate the things we give our time to for the sake of quality of life, we see that music has a humanizing effect, and for the listener, this comes without effort. Would we still be human without the ability to enjoy music?
Now that music technology makes it possible for anyone to learn, regardless of previous experience and without practicing scales on mediocre instruments, engagement with music is, for the first time in history, available to everyone. There is a learning curve that requires commitment, but its difficulty is in proportion to the complexity of the music you would like to hear.
Last week, I met an educator whose company promotes the use of graphic digital processing as a way to help develop creativity in students who have had difficulty mastering verbal languages. She uses Adobe's Creative Suite apps. Comic books often include images to evoke anger, frustration, and desire, though not as effectively as music, which can contextualize any object within its emotional envelope. Music can also be triumphant, fearful, ecstatic, dangerous, remorseful, fraternal, and so on. I explained to her that sound, unlike graphic technologies, doesn't even require that viewers pay attention, much less make rational sense of what they are presented with, to get the feeling. Reading images or language, we discern and then assess the meaning of prominent features, processing our optical perceptions through a grid of what we recognize, and so we view the world evoked by the graphic as a recreation of what we already know. In contrast, music directly stimulates emotional assessments, and even digitally produced music allows us to express emotions directly, without using language or symbols.
Of what use are the emotional assessments evoked by music? In films, they tell us how to feel about what we're seeing. A song, piece, or sequence of pieces is also a narrative, imagined by listeners, often unrelated to any visual or verbal image, because we understand sound emotionally, independent of rational ideas about its source or subject. Music creates an emotional context in which we behold the emotional world, analogous to the physical space within which we perceive the physical world, and when composers describe their musical stories with titles like "Prelude to the Afternoon of a Faun," the narrative a listener imagines may be unrecognizable in the title.