Sound Remade

Yeseul Kim (Master's student, Graduate School of Science and Technology Policy, KAIST)
yskstp@gmail.com


Summary

This essay surveys the interaction between music and science: how musical instruments have evolved alongside technological development, and how music in turn contributes to science and technology. Instruments have no fixed form; they keep evolving through human imagination, as new instruments created since the 2000s, such as the Seaboard keyboard and the Alpha Sphere, show well. The latter half of the essay focuses on how composition has changed in the Anthropocene and on how music has evolved in the age of artificial intelligence, drawing on a brief interview with Professor Juhan Nam of the Graduate School of Culture Technology at KAIST.


 

Evolution of Musical Instruments

Science has significantly advanced how people live. The statement applies just as well to music, even though few people pay full attention to the science behind it. Making a musical instrument begins with mathematics: most harmonic instruments have been crafted around mathematical ratios to produce pleasing harmony. Writing music likewise begins with learning the mathematical relations between notes – rhythm, dynamics, melody and harmony.

From its inception, music has been a ramification of mathematics: it was taught in both the Aristotelian and the Platonic schools, which stressed the importance of harmony. Since then, music has been intertwined with human history and has evolved with the development of new instruments and new ways of composing. The piano we all know, for example, was an improvement on the harpsichord. Its original name, pianoforte ("soft-loud"), implies that it is an elevated form of an older instrument. Where the harpsichord could not vary its volume with the strength of the player's touch, the fortepiano, invented around 1700 by Bartolomeo Cristofori, finally allowed musicians to shape crescendo and decrescendo, bringing a far broader range of musical expression.

Recently, a British artist published an artwork composed of the signals of the 'Moonlight Sonata' reflected back from the moon. The artist sent electronically encoded signals of Beethoven's Moonlight Sonata to the moon and chronicled how the sound changed on its journey there and back to Earth. The result was an intermittently interrupted Moonlight Sonata, mixed with some of the sounds of space. Could this have been possible in a society without the technology to beam a signal at the moon, or the recorders to capture what came back? The technological impact on sound does not stop there: electronic sounds have become prevalent in contemporary music, from pop to hip-hop and even within contemporary classical music.

The interrelation between music and technological advancement has not stopped there, and the kinds of music and instruments that get made depend on everyone interested in this intersection of music and technology. The discovery and use of electricity, too, has hugely affected how musical sound is made. Just as the pianoforte opened new territory for music with its added dynamic range, the electric piano began to produce sounds that had been unknown – and impossible to make – before electricity entered the instrument. Starting in the 1950s, classical composers avidly adopted these new possibilities to realize ever finer sounds, dividing the given musical scales into intervals of only a few hertz. Polychromatic music would never have been explored so far without the assistance of the synthesizer and other electrically powered instruments.


A scene from the YouTube video "Introducing the Seaboard"

 

The evolution of musical instruments is still an ongoing story. One of the most distinctive contemporary instruments is the Seaboard, which adds a 'vibrato' function to the keyboard: players can produce a vibrato on each note simply by pressing and rocking the keys, as if playing a violin string. Whether this new form of the piano will be widely adopted is still uncertain, but it has flung open a new stage for keyboard instruments.

Entirely new instruments have also been developed, as in the case of the Alpha Sphere and the other 'pad' controllers widely used by EDM and hip-hop performers. At the dawn of the electric era, nobody would have imagined that the discovery of electricity would reach so far into the music industry, even changing how music is produced.

Forcing classical music into electronic arrangements soon loses its glimmer once the music loses its context: the harmonies and rhythms of traditional music were composed when electronic music did not exist, so a piece only acquires its full meaning when it is situated in the era and place in which it was written. By the same token, much contemporary electronic dance music and pop would lose its inherent sound if stripped of its technology, and could not be reproduced on traditional instruments such as the violin and piano. Technological advancement has brought new possibilities for making music. With the help of science and technology, the horizon of sound has broadened almost without limit, which means there are ever more ways for sounds to be made, composed, heard and recorded.

The electronic haegeum and electronic gayageum have already been developed, and music written exclusively for these new instruments appeared in the 1990s. Jin-hi Kim, a komungo artist, has released virtuoso komungo recordings around the world, collaborating with artists in various regions. At the forefront of this adaptation of music to computers stands Tacit Group, which not only computerizes its music but also strives to visualize it, as in its Tetris-like performances. Still, the convergence of music and computing attracts far less interest here than the efforts under way in the U.S. and Europe.

And the possibilities remain open to the imagination. What is needed is imagination, and the execution that brings it to life.

 

AI Technologies and Music

With growing interest in what computers can do, it is natural to ask whether computers can compose music. The answer is partly yes, partly no. In the U.K., a team of scientists launched a project called 'Darwin Tunes,' which explores a kind of 'survival of the fittest' inside music. True to its motto, the team set out to find the 'funkiest' sounds. As the name suggests, the project applies Darwin's theory of evolution to music, asking whether there are melodies and sounds that please the greatest number of ears.
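
To make the idea concrete, here is a minimal, illustrative Python sketch of the evolutionary loop behind a project like Darwin Tunes. It assumes a melody is simply a list of MIDI pitch numbers and replaces real listener ratings with a stand-in fitness function that rewards in-scale notes and small leaps; the constants and the heuristic are invented for the example and are not taken from the project itself.

import random

# A minimal sketch of an evolutionary loop over melodies.
# Assumption: a "melody" is a list of MIDI pitch numbers, and crowd ratings
# are replaced by a stand-in fitness heuristic.

SCALE_PITCH_CLASSES = {0, 2, 4, 5, 7, 9, 11}  # C major, as pitch classes
LENGTH = 16          # notes per melody
POP_SIZE = 30        # melodies per generation
MUTATION_RATE = 0.1  # chance that each note is replaced

def random_melody():
    return [random.randint(55, 79) for _ in range(LENGTH)]

def fitness(melody):
    """Stand-in for listener ratings: prefer in-scale notes and stepwise motion."""
    in_scale = sum(1 for n in melody if n % 12 in SCALE_PITCH_CLASSES)
    smooth = sum(1 for a, b in zip(melody, melody[1:]) if abs(a - b) <= 2)
    return in_scale + smooth

def crossover(a, b):
    """Splice two parent melodies at a random cut point."""
    cut = random.randint(1, LENGTH - 1)
    return a[:cut] + b[cut:]

def mutate(melody):
    """Randomly replace some notes: the source of new material."""
    return [random.randint(55, 79) if random.random() < MUTATION_RATE else n
            for n in melody]

def evolve(generations=200):
    population = [random_melody() for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]  # "survival of the funkiest"
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())  # the best-scoring list of 16 MIDI pitches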

Recent research in computer music focuses more on the use of algorithms in composition. David Cope[1], a pioneer of algorithmic composition, has predicted that about 80% of pop music will one day be composed by algorithms rather than by the 'creative mind' of human composers. Still, revolutionary pieces are unlikely to appear from an 'algorithm' alone, and thus from computers, which ultimately operate on binary numbers. The creation of truly original pieces will only be possible if inspiration itself can, in the end, be encoded by programmers or computer scientists. According to Professor Nam, what computers currently produce, whether visual or auditory, is in the end a reproduction of given patterns of paintings or musical pieces. Computers can certainly learn the patterns, yet they cannot overturn them to propose revolutionary ways of thinking, as Stravinsky or Picasso did in their works. In pop music, however, it takes only a few chord progressions, and adaptations of those few chords, to make new songs that the public will gladly hear. He adds that the future of music, or of human-computer music, will provoke new controversies over the 'authorship' of a piece. Yet he remained pessimistic about revolutionary composition being done by computers themselves.
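
As a rough illustration of that last point, the following Python toy shows how far a handful of diatonic chords can already go: it recombines four chords into a song skeleton and picks naive chord-tone melody notes over them. The chord set and the random-walk scheme are assumptions made for this example, not a description of any system mentioned in the text.

import random

# A toy song skeleton from four diatonic chords (an assumed, illustrative set).
CHORDS = {
    "I":  ["C", "E", "G"],
    "V":  ["G", "B", "D"],
    "vi": ["A", "C", "E"],
    "IV": ["F", "A", "C"],
}

def progression(bars=8):
    """Random walk over the four chords, starting and ending on the tonic."""
    middle = [random.choice(list(CHORDS)) for _ in range(bars - 2)]
    return ["I"] + middle + ["I"]

def melody_over(prog):
    """Pick one chord tone per bar as a naive melody note."""
    return [random.choice(CHORDS[degree]) for degree in prog]

if __name__ == "__main__":
    prog = progression()
    print("Chords:", " ".join(prog))
    print("Melody:", " ".join(melody_over(prog)))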

Other interesting projects at the music-computer interface are fairly recent, and competition among AI researchers to claim the title of 'first' is even fiercer. Computoser and Flow Machines have already produced AI-generated music, some of it quite 'original,' and Google has announced that it, too, has succeeded in generating music by machine. According to Professor Nam, these systems still lack originality, the capacity to overturn the existing 'grammar' of music and produce exceptionally 'original' works. Since the machines still depend on binary digits to function, and thus to create, making their work revolutionary comes down to the programmers' ability to digitize 'imagination and inspiration.' Even counting in this inherent limitation, however, their output sometimes amazes listeners with unexpected melodic progressions.

These machines may also be blind to melodies that have never been digitized: melodic idioms that sound exotic or alien to Western ears, such as Chinese or Arabic ones, may simply have been left out of the input. Still, since studies have already proposed how to encode such material and have computers produce novel music from it, any reader could attempt to digitize Korean traditional music so that a machine might generate unprecedented kinds of music from it.
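
As a first step in that direction, here is a tiny Python sketch of what 'digitizing' a melody can mean in practice: mapping the notes of a pentatonic phrase onto integer pitch numbers that a learning algorithm (for example, the Markov sketch later in this article) can consume. The scale spelling and note names are assumptions chosen for illustration, not an authoritative transcription of any Korean traditional mode.

# Map note names onto MIDI-style integers so a generator can be trained on them.
# The pentatonic set below is an illustrative assumption.
NOTE_TO_MIDI = {"Eb": 63, "F": 65, "Ab": 68, "Bb": 70, "C": 72}

def encode(notes):
    """Turn note names into the integer sequence most generators expect."""
    return [NOTE_TO_MIDI[n] for n in notes]

def decode(numbers):
    """Map integers back to note names for reading generated output."""
    reverse = {v: k for k, v in NOTE_TO_MIDI.items()}
    return [reverse[n] for n in numbers]

if __name__ == "__main__":
    phrase = ["Eb", "F", "Ab", "Bb", "Ab", "F", "Eb"]
    print(encode(phrase))          # [63, 65, 68, 70, 68, 65, 63]
    print(decode(encode(phrase)))  # back to the original note names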

 

Effect of Machines on Musicians

Then what is music to scientists? For some, it has served as a pastime: it is widely known that Einstein played the violin and Feynman the drums, and several studies indicate that scientists who play an instrument (or are engaged in some other artistic practice) perform better than those who are not. There are also composers who came from other fields entirely, as in the case of Iannis Xenakis, who was trained as an architect and went on to establish new theories of music. Sometimes new instruments open up a new soundscape that lets musicians build a whole new musical grammar. In his book 'Formalized Music,' Xenakis leaned heavily on mathematical calculations and models to explain his stochastic music; he drew on Markov chains, and this presaged the possibility of the computer as music composer. By that time, in fact, computers had already been taken up by several musicians, so we may assume that the existence of the computer and its capacity to produce new sounds encouraged the composer to consider new kinds of musical grammar.
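
To show what the Markov idea amounts to in its simplest form, here is an illustrative Python sketch: it learns first-order transitions from a short example phrase and then generates a new melody in which each note depends only on the one before it. The training phrase and note names are invented for the sketch; this is not a reconstruction of Xenakis's own stochastic models.

import random

# First-order Markov chain over notes: the next note depends only on the
# current one, with choices learned from an invented training phrase.
TRAINING_PHRASE = ["C", "D", "E", "G", "E", "D", "C", "D", "E", "E", "D", "C"]

def build_transitions(phrase):
    """Collect, for each note, the notes that followed it in the training phrase."""
    table = {}
    for current, nxt in zip(phrase, phrase[1:]):
        table.setdefault(current, []).append(nxt)
    return table

def generate(table, start="C", length=16):
    """Walk the chain; transitions seen more often are proportionally more likely."""
    note, result = start, [start]
    for _ in range(length - 1):
        note = random.choice(table.get(note, list(table)))
        result.append(note)
    return result

if __name__ == "__main__":
    transitions = build_transitions(TRAINING_PHRASE)
    print(" ".join(generate(transitions)))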

Beyond Xenakis, many modern composers have been influenced by the 'scientific artifacts' around them. John Luther Adams, for example, has been delving into the soundscape of the Anthropocene, trying to capture how the sound of the earth has changed under extensive human activity. Before him came George Gershwin, who imitated the sound of a ship for the opening of 'Rhapsody in Blue'. Steve Reich imitated the honking of New York City traffic in many of his works, and this helped initiate minimalist music in the 20th century; Philip Glass followed, and the list goes on and on, even including The Beatles. Just as musicians of old were influenced by birdsong and the sound of raindrops, 21st-century composers cannot escape loudspeakers, squealing steel and digital devices. And this sometimes lets them see new territories in which to realize the sounds reiterating inside their heads.

 

Mutual Beneficiaries

In this piece I have tried to cover briefly how music and science have evolved together and how the two still interact. One way music may inspire scientists is by showing them new dimensions. If we think of reality as layered, it is composed of multiple and quite complex layers. Although modern 'intellectual' activity is heavily centered on visual interpretation, the auditory hinges on a different sphere that cannot be grasped with the eyes alone. Music is, in itself, movement, or event; it belongs to a different dimension from the one we inhabit, and it lets people, scientists included, criss-cross into new dimensions.

The interweaving of 'culture' and science is thus virtually everywhere, and finding new possibilities to broaden the scope of both depends on the will and interest of musicians and scientists. Although scientists and artists may differ in their responsibilities and capacities – musicians lean toward 'expression' while scientists lean toward 'analysis', though even this is debatable – each is deeply indebted to the other in finding new horizons in its own field.

 

 


Things to Read

Alex Ross, The Rest Is Noise

Alex Ross brings readers up to the rooftop of music history, focusing on the 20th and 21st centuries while still offering a panoramic view of the whole. It is not a technical book requiring any prior musical knowledge. More often, it chronicles and weaves together the life stories of the outstanding composers of the 20th century so that each can be portrayed and understood in the context of a lifetime. Ross shows, for example, how Shostakovich's music and the political turmoil of Russia during his lifetime acted on one another: the sound of Leningrad and the chilling societal upheaval of 1940s Russia shaped Shostakovich's metallic sound, which nevertheless embraces the sounds and patterns of his predecessors, appropriated into his revolutionary style of composition. The book also shows how the lives of the modernist composers were forged against the backdrop of New York City, which gave them a grand repository of sounds to exploit and fold quickly into these new musical waves.

 

Jinho Kim, 매혹의 음색 (Maehokeueumsak, “Enticing Tones”)

In this book the author, himself a trained composer with a doctorate in the social sciences, explores how tone colour has become one of the main components and themes of 21st-century composition, alongside melody and rhythm. While Alex Ross's book leans on the anecdotes of composers so that changes in musical style surface naturally, Enticing Tones offers a more systematic account (close, at times, to a thesis on the history of music) of how tone colour became a key musical component in the 20th and 21st centuries. The author also touches on how scientific development has deeply influenced the writing of these new musical styles. In this scholarly view he tries to show how society as a whole changed over the previous two centuries, while still grounding the work in his formidable musical background.


 

[1] David Cope is an American author, composer, scientist, and former professor of music at the University of California, Santa Cruz.
