Milan Gustar: that which is left after sound has died away.

Author: Studeny, Jaromir; Studeny, Petr
Article Type: Interview
Date: Jul 1, 2012
Publication: Czech Music
ISSN: 1211-0264


Milan Gustar, musician, programmer, electronic gadget designer, organologist, pedagogue and writer, has been creating his own music for over three decades. His original compositions and audio and multimedia installations, in which he has frequently applied mathematical principles and algorithmic techniques, are influenced by Minimalism. For his music performances and audio installations he has produced compelling video projections that sensitively add to the overall impression. In addition to creating his own works, Milan Gustar has also devoted himself to research straddling the boundaries of science, technology and art, above all in relation to mathematical principles in music, the theory of tonal systems, electronics, informatics and applied mathematics. He has frequently collaborated with leading Czech visual artists (Veronika Bromova, David Cerny, Floex, Kurt Gebauer, Kristof Kintera, Lukas Rittstein, Milos Vojtechovsky, etc.).

Humans can hear long before they are born. But there is hearing and hearing. When did you first hear, and when did you consciously register the existence of music?

I don't know how and when exactly it was. Some time towards the end of primary school, my friends and I began exchanging recordings, that is, copying songs on reel-to-reel tape recorders. This continued at the grammar school, when new classmates appeared and with them new music too. And that is when I most likely really "heard" for the first time. And at the time I also had my first profound musical experiences. I recall that first listening to the Rolling Stones album Black and Blue, which had just been released by Supraphon, made a really powerful impression. I still don't actually know why it was this disc in particular. And Pink Floyd's Dark Side of the Moon was released too at around the same time. That album spawned my interest in getting better-quality audio equipment and building amplifiers and hi-fi systems. From there it was just a small step to building synthesisers and other electronic musical instruments. Then, the Beatles, of course. From them I got into many other types of music, J. S. Bach, for instance. And also Indian and, through it, other ethnic music. Of significance too was my encounter with Frank Zappa's music, from which there was a direct path leading to Edgard Varese and 20th-century classical music. At the time, I also got hold of the first recordings by Czech underground and alternative bands, which revealed to me that music can be done in a totally different manner.

[ILLUSTRATION OMITTED]

We would like to know more about your musical development, both as a listener and a composer. Did you have any idols, or did you pursue your own, unexplored path?

The basic impulse for me was when, some time around the turn of the 1980s, I heard a fragment of some minimalist piece on Radio Free Europe. It struck me really deeply and as a matter of fact I still haven't fully recovered. Minimalism led me to ponder what in music is essential, how its individual components work, what can be perceived and what effect things have. But I was certainly influenced by all the other music I listened to. And not only music.

Yet I didn't have any idols as such; rather, I sought out techniques and principles that rarely appear in our music, either because they are considered inapplicable or because, for some reason, they have been pushed aside. One such area is microtonality, that is, making use of pitches and chords different from those offered by our twelve-note, equal-tempered chromatic scale. Perception of chords is related to timbre, which is another not overly explored area. Moreover, I am interested in rhythm and time in music, gradual changes of tempo, as well as the areas of stillness-movement transitions, extreme tempos, and the like. I have also researched the possibilities of serialism and dodecaphony, which in connection with minimalism and microtonality are nowhere near exhausted.
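As an illustration only, and not taken from Gustar's own tools, a minimal Python sketch of what leaving the twelve-note equal temperament can mean in practice: the same interval, a fifth, approximated in the familiar 12-fold and in a hypothetical 19-fold equal division of the octave. The choice of 19 divisions and the 440 Hz reference are assumptions made purely for the example.

    # Illustrative only: pitches in an equal division of the octave (EDO).
    # 12-EDO is the common tempered scale; 19-EDO stands in for one of the
    # many microtonal alternatives. Values are not from any Gustar piece.

    A4 = 440.0  # assumed reference pitch in Hz

    def edo_pitch(step, divisions, reference=A4):
        """Frequency of `step` equal steps above the reference when the
        octave is divided into `divisions` parts."""
        return reference * 2 ** (step / divisions)

    # A fifth above A4 (the just ratio 3:2 would give 660 Hz):
    print(edo_pitch(7, 12))    # ~659.3 Hz in 12-EDO
    print(edo_pitch(11, 19))   # ~657.3 Hz in 19-EDO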

[ILLUSTRATION OMITTED]

As regards sound and synthesis, I like timbres of which you can't tell whether they are natural or artificial. I like applying, for instance, synthetic piano sounds that don't imitate the tones of the traditional piano absolutely faithfully but retain their typical character. I have also devoted myself to synthesis of the human voice, again with the aim of reaching the point where it is no longer possible to recognise whether the voice is natural or not. Plenty of possibilities are also provided by combining parameters from several sounds.

The drawback to this approach is that the created compositions are difficult to implement. In most cases, microtonal pieces can't be played on standard instruments, musicians cannot master complicated and unstable tempo-rhythmic structures, and common music programs do not support them. What's more, since there are no off-the-shelf instruments to synthesise sound with an extremely uncommon timbre, creators must arrange everything on their own.

The profile drawn from sources available on the Internet reveals the scope and diversity of your activities and skills. Can you tell us which of them is primary, or most characteristic?

None of them is primary; I like alternating and combining them. Even though it may not appear so at first glance, all of them are related. I am mainly interested in their extreme positions and the connections between them in particular. One of the links between them is numbers and numeric relations. The world was already viewed through numbers by the Pythagoreans two and a half thousand years ago. And numbers have always played a significant role in our civilisation. The current all-embracing digitisation indicates that many a thing can be expressed by means of numbers. Digitisation directly relates to electronics, which, in comparison with the mechanics that preceded it, allows for working on a more precise level. Digital and other data are related to informatics, which deals with their processing. And art is connected with all this to a much greater extent than is evident at first glance. The medieval architect Jean Mignot, for instance, wrote: "Ars sine scientia nihil est" (Art without knowledge is nothing). This holds especially true of music, whose fundamental components are various symmetries, relations, regularities, repetitions, structures. Rhythm, forms, melodies, harmonic relations, tuning, microtonality, properties of sound, the construction of traditional and electronic musical instruments, all this is directly connected with numbers. Perhaps that is the reason why music is the closest of all the arts to me.

In the past, art glorified God and accompanied ecclesiastical ceremonies. These reasons have gradually withered away, and today it is humans and their individualities that occupy the top position in modern art. What do we actually need art and/or music for nowadays?

What we need music for today may be too broad a question. It is not even clear what art actually is and what music is. By no means do I agree with the statement attributed to Pierre Boulez that music is sound. I rather incline to the opinion voiced by Charles Ives, that music is that which remains after sound has died away. I consider music to be the most abstract of all the art forms. And since it is only itself, the creator/composer possesses absolute freedom to conceive his/her own musical universe. It is similar to mathematics: it too is independent of that which surrounds it, and within it new worlds can be created at will. And the yardstick of their quality in both music and mathematics is aesthetics. Hence, it comes as no surprise that mathematics and music have always been related. At least this is the case in our civilisation.

Music and sound still belong to church and other ceremonies, they can strongly affect the psyche and emotions, and unlike visual perception, we cannot easily avoid the impact of sound: we can close our eyes, yet we cannot close our ears. This all comes in handy at ceremonies. This is perhaps the case in all cultures, and it is interesting that this type of music is often not deemed art or music by our standards. It is simply part of the ritual.

Sacred music or music glorifying God has always been composed. Composers often conceive it in a goal-directed manner, but it also frequently originates somewhat incidentally, as a result of the principles implicit in the origination of music, which respect and reflect the world's order. Steve Reich wrote that musical processes can make it possible for a person to directly contact the superpersonal. And the spiritual aspects of music based on mathematical principles have also been observed by others. It can be said that the less the composer's personality is imprinted in the music and the more it is based on impersonal, more general principles, the more spiritual it is. Where Ludwig Wittgenstein says: "What we cannot talk about, we must pass over in silence," Aldous Huxley offers the solution: "After silence, that which comes nearest to expressing the inexpressible is music."

Composing music for self-made electronic instruments or audio software according to mathematical principles, on the one hand, and composing for, or playing, classical musical instruments in the band Flao YG, on the other. How do you combine these two different musical worlds in yourself?

[ILLUSTRATION OMITTED]

These two musical worlds are not overly different and distant from each other. My instruments in Flao YG in the 1980s were electronic too, and I designed and built most of them myself, similarly to today. The difference is that at the time it was analogue electronics and today it is programs.

The stuff we played in the "rock" band Flao YG wasn't traditional rock. Perhaps it could be labelled as alternative music; towards the end especially, our style was markedly influenced by Minimalism. And some mathematical principles were applied in the majority of my compositions for Flao YG. The band's name too was created by means of a mathematical principle--at the beginning of the 1980s, I programmed a simple algorithm for generating words in the machine code of the Zuse Z23 computer, which then generated a page of text that contained the band's name.
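The generator itself is not described in the interview beyond being a simple word-producing algorithm, so the following Python sketch is purely hypothetical: a guess at the general idea of alternating consonants and vowels, written in a modern language rather than Zuse Z23 machine code.

    # Hypothetical reconstruction of a simple word generator; the actual
    # routine written for the Zuse Z23 is not documented here.
    import random

    CONSONANTS = "bcdfghjklmnpqrstvwxyz"
    VOWELS = "aeiou"

    def random_word(length):
        """Build a pronounceable-looking word by alternating letter classes."""
        groups = [CONSONANTS, VOWELS]
        start = random.randrange(2)  # begin with either class
        return "".join(random.choice(groups[(start + i) % 2])
                       for i in range(length))

    # Print a "page" of candidate words to browse through.
    page = [random_word(random.randint(3, 6)) for _ in range(200)]
    print(" ".join(page))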

At some point in the mid-80s, I began experimenting with algorithmic processes when generating music and graphics, using an eight-bit ZX Spectrum computer. Yet owing to its low performance and the absence of a D/A converter, I didn't actually achieve much with it. Then, for a short time, I used an Amiga, which already had a sound generator and was able to do simple sampling. An Atari computer followed, which had excellent MIDI sequencers, on which I recorded and transcribed notated pieces. This is how most of the music for the Bulsitfilm movies came into being. By the end of the 1980s, I already had a PC available and in the middle of the 1990s finally a sound card of a reasonable quality. Since then, I have been using a computer more and more when composing and recording music. I have processed the majority of the recordings by means of a computer; many of them originated in a computer through sampling, some of them are entirely generated, from composition to sound synthesis.

Yet I am still using "traditional" electronic and acoustic devices. Many a time, I find that it is simpler to play something on an instrument than to enter it in the sequencer or write a program that would calculate it. And, what's more, I still enjoy playing something for pleasure occasionally.

Standing out in your discography is the Flex series, which you have frequently performed live too. Although it dates from approximately the past five years, it was conceived much earlier. Can you describe its genesis?

The initial idea, dating from circa 1999, was to compose several short pieces, formed by unanchored, constantly changing pitches. I then began systematically exploring individual notes, simple chords and chords made up of many thousands of notes of variable pitch, constantly changing rhythmic structures or harmonies, which coherently alter. Through these experiments I got from the originally intended short compositions to much longer formations, which may last up to several hours.

[ILLUSTRATION OMITTED]

Ultimately, a series of studies originated, mostly based on a single principle or a single structure, all of them having in common simplicity of sounds used, and sonic and formal purity. In addition, in most cases they can be represented well graphically and hence the works also contain graphic scores or, perhaps, algorithmically generated graphics.

Back in 1999 it would simply have been too time-consuming to generate the proposed compositions. Then, from time to time, I returned to Flex and added other parts. After 2005, I began "materialising" some of the pieces. To generate sound, I chose the Csound program, which directly follows on from Max Mathews's MUSIC programs of more than half a century ago. It is thus wonderfully archaic, yet allows for programming anything.

Flex should be made up of a total of 99 compositions. One third of them are complete, one third semi-finished and one third haven't been created yet. Perhaps, for the sake of symmetry, it would be good to leave it this way.

Let's stay for a while with this series. Flex Nr. 10 (1 + 2 + 3 + 4 = 10) has been played live a few times. What form does a live performance of a thousand sine waves actually take?

The sheet music of traditional compositions intended for live performance never contains everything. The interpreter always has to supplement that which is not there in the notation. And the situation is similar when it comes to algorithmic compositions. If not all of the piece's parameters are determined absolutely precisely and the score affords the possibility of choice, such a choice can sometimes be made in real time during the performance. The visual component, the possibility to observe the musicians playing, is always significant at concerts. If the musical instrument is a computer program that is controlled by a mouse or in a similarly subtle manner, there isn't much left to see, for the most part a person bent over a laptop. During my performances, I like to let the audience have a somewhat deeper insight. I mostly project the computer console so that they can observe everything that's going on. Sometimes, the booting up of the entire system is part of the projection. This could be compared to the musicians tuning their instruments and the final preparations prior to the beginning of a concert.

And this is also the case with Flex Nr. 10, a piece in which approximately one thousand tones are played. At the beginning, there is the start-up of the sound server and the console, which contains about a thousand keys for controlling individual generators. Then I only have to attend to when this or that note is played, as with any other type of instrument.
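A rough idea of what lies behind such a bank of generators can be given by a short additive-synthesis sketch in Python with NumPy and SciPy. It is not the Csound instrument used in Flex Nr. 10; the frequency range, duration and random phases are arbitrary choices for the example.

    # Illustrative additive synthesis: sum ~1000 sine oscillators into one
    # buffer and write it to a WAV file. Parameters are placeholders, not
    # those of Flex Nr. 10.
    import numpy as np
    from scipy.io import wavfile

    SR = 44100        # sample rate in Hz
    DUR = 10.0        # seconds
    N_TONES = 1000

    t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)
    rng = np.random.default_rng(0)
    freqs = rng.uniform(50.0, 5000.0, N_TONES)    # one frequency per generator
    phases = rng.uniform(0.0, 2 * np.pi, N_TONES)

    signal = np.zeros_like(t)
    for f, ph in zip(freqs, phases):
        signal += np.sin(2 * np.pi * f * t + ph)

    signal /= np.abs(signal).max()                # normalise to [-1, 1]
    wavfile.write("sines.wav", SR, (signal * 32767).astype(np.int16))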

Last August, you performed Flex Nr. 96 within the "Libechov Chapel" project. It was different there.

Playing some compositions live doesn't make sense. They have been created by means of a given algorithm, without any contribution from the performer. They could be presented by being calculated in the concert hall in real time, or played from a recording, as was done years ago in the case of music for tape. Often, though, the form of a sound or multimedia installation is better. I like it when the audience can come and go as they please and move freely around the space of the concert hall or gallery. Since the majority of my compositions of this type have a score according to which a "virtual orchestra" plays, this score is often displayed during their presentation too. Usually in some animated form, which also makes it possible to observe the piece's flow in time.

As regards the event in Libechov, I prepared this very type of audio-visual installation. For three quarters of an hour, the chapel resounded to eleven simple tones with slowly changing pitches, while the character of the resulting chord passed between the sound of the organ and the bells. Thanks to the reverb, the sound was also different in different parts of the chapel's interior. The bare oval windows functioned as speakers and the entire building thus became a large sound system for the audience standing outside. The score, made up of eleven lines that intersected in the chancel, was projected in the space of the chapel. The ongoing transformations were also underlined by nature itself, since during the performance day changed into night.

Are your compositions only intended for listening or do you create them with the intention of bringing about changes in human consciousness and perception?

Every listening experience changes human consciousness and perception. Yet there is a sound installation of mine, Prazdnota (Void) from 1995, whose aim was to change consciousness directly. An empty room was filled with an inaudible ultrasonic wave modulated by an infrasonic vibration whose frequency corresponded to brain waves in the state of relaxation or meditation. The objective was to tune the listeners into these waves.
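The principle can be sketched as simple amplitude modulation: an inaudible carrier whose loudness rises and falls at an infrasonic rate. The carrier and modulation frequencies below are assumptions made for the example; the interview does not give the actual values used in Prazdnota.

    # Sketch of an ultrasonic carrier amplitude-modulated at a brain-wave
    # rate. All frequency values are placeholders, not those of Prazdnota.
    import numpy as np

    SR = 96000              # sample rate high enough to represent the carrier
    DUR = 60.0              # seconds
    CARRIER_HZ = 23000.0    # assumed ultrasonic carrier
    MOD_HZ = 10.0           # assumed alpha-band modulation rate

    t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * MOD_HZ * t))   # slow 0..1 swell
    signal = envelope * np.sin(2 * np.pi * CARRIER_HZ * t)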

What is your priority when creating music pieces based on mathematical principles and algorithmic processes? Do you first have a notion of the resulting sound and only then seek a mathematical model to express it, or do you first select a mathematical model and the sound is rather a secondary product of it?

When it comes to algorithmic compositions, by and large there is initially a notion of a structure or process. This process can be entirely abstract or can from the beginning relate to the development of a certain musical element or parameter. But I also have compositions that came into being through the "traditional procedure", from the notion of a melody, harmonic process or rhythmic pattern, or on the basis of improvisation and instinctual playing, inspired by, for instance, a timbre. Then I mostly discover in them the buds of some structures or processes, which I subsequently develop.

I perceive two poles in composition. On the one hand, there are the components that are important, the components that form the composition's substance. I like them to respect the given rules and be as close as possible to the ideal state. Then there are the insubstantial components, which can be of virtually any type and whose shape can mostly be entrusted to chance. Many of my compositions aren't clear-cut in this manner, yet some of them are. Their substantive components then originate according to a strict rule, an algorithm, and everything else is provided by a generator of random numbers. Yet since I know that only God can make chance operations, I mostly let several versions of the composition be calculated and from them I choose the one to be presented. So, I am not able to avoid participating as a composer even in the case of totally algorithmic pieces.
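The division of labour described here, strict rules for the essential components and chance for the rest, with several versions generated and one chosen, could look something like the following Python sketch. The pitch rule, the parameter ranges and the number of versions are invented for the illustration.

    # Illustrative two-pole scheme: pitches come from a fixed rule, while
    # durations and dynamics are filled in by a seeded random generator;
    # several versions are produced and the composer picks one.
    import random

    def essential_pitches(n):
        """Deterministic component: pitch classes from a simple rule
        (a cycle of fifths folded into one octave)."""
        return [(i * 7) % 12 for i in range(n)]

    def realise(seed, n=24):
        """Inessential components left to chance, reproducible by seed."""
        rng = random.Random(seed)
        return [{"pitch_class": pc,
                 "duration": rng.choice([0.25, 0.5, 1.0]),
                 "velocity": rng.randint(40, 100)}
                for pc in essential_pitches(n)]

    # Calculate several versions; one of them is then chosen for presentation.
    versions = {seed: realise(seed) for seed in range(5)}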

What are your plans and dreams for the future--especially as regards composing? Do you have any specific objective you would like to achieve, or is the journey itself the goal?

If you have the Journey in mind, then it is a journey. I don't have any specific objective in the sense of a definite goal. Yet I do have some plans for the near future. I would like to have some of the prepared compositions performed. The majority of my pieces take a long time to come to fruition: years, decades even. The basis mostly comes into being very quickly, but then it takes me a long time to complete the composition. I repeatedly return to it; it is a sort of alchemy, a long-term treatment of the matter, that which is within and without. If the basis is all right, it gradually crystallises, new connections emerge, individual items interlock. And a technical solution appears too: how to write or calculate the individual parts. At that moment, I consider a composition finished. Then I need an impulse, so as to try to flesh it out--program, generate, play, record.

[ILLUSTRATION OMITTED]

Milan Gustar

studied electronics, informatics and applied mathematics. He has devoted himself to research straddling the boundaries of science, technology and art, above all in relation to mathematical principles in music, the theory of tonal systems, organology and electroacoustics. Since the 1980s he has pursued his own musical and intermedia creation, influenced by Minimalism, in which he has frequently applied algorithmic techniques, mathematical principles and microtonality. He has created electroacoustic music, as well as compositions for traditional instruments. In the 1980s and 1990s, he headed the band Flao YG and made music for film and theatre. He has recorded several CDs featuring his pieces, which he has occasionally performed live too. Milan Gustar is the author of the extensive, two-volume encyclopaedic book Elektrofony (Electrophones), in which he sums up the development and principles of the functioning of electromechanical and electronic musical instruments. At present, he works at the Music Faculty of the Academy of Performing Arts in Prague as a pedagogue and researcher in music acoustics, and teaches at the Centre of Audiovisual Studies at the Film Faculty. He also leads occasional seminars at the Academy of Fine Arts and other schools.

A selection of Milan Gustar's works can be found on the website UVNITR (www.uvnitr.cz/summasummarum/index.html).
COPYRIGHT 2012 Czech Music Information