== History ==
===Late 19th century to early 20th century===
[[Image:Teleharmonium1897.jpg|thumb|200px|[[Telharmonium]], [[Thaddeus Cahill]], 1897.]]
The ability to record sounds is often connected to the production of electronic music, but is not absolutely necessary for it. The earliest known sound recording device was the [[phonautograph]], patented in 1857 by [[Édouard-Léon Scott de Martinville]]. It could record sounds visually, but was not meant to play them back (Rosen 2008).
In 1878, [[Thomas A. Edison]] patented the phonograph, which used cylinders similar to [[Édouard-Léon Scott de Martinville|Scott]]'s device. Although cylinders continued in use for some time, [[Emile Berliner]] developed the disc phonograph in 1887 (Russcol 1972, 67).
A significant invention, which was later to have a profound effect on electronic music, was [[Lee DeForest]]'s triode [[Audion tube|audion]]. This was the first thermionic valve, or [[vacuum tube]], invented in 1906, which led to the generation and amplification of electrical signals, radio broadcasting, and electronic computation, amongst other things.
Before electronic music, there was a growing desire among [[composer]]s to use emerging technologies for musical purposes. Several instruments were created that employed electromechanical designs, and they paved the way for the later emergence of electronic instruments. An electromechanical instrument called the [[Telharmonium]] (sometimes Teleharmonium or Dynamophone) was developed by [[Thaddeus Cahill]] in the years 1898–1912. However, the Telharmonium's immense size hindered its adoption. The first electronic instrument proper is often held to be the [[Theremin]], invented by Professor [[Léon Theremin|Leon Theremin]] circa 1919–1920 ([http://www.bbc.co.uk/dna/h2g2/A520831 ''Theremin''], BBC h2g2 encyclopaedia project, undated; accessed 20 May 2008). Another early electronic instrument was the [[Ondes Martenot]], which was most famously used in the ''[[Turangalîla-Symphonie]]'' by [[Olivier Messiaen]], as well as in other works of his. It was also used by other, primarily French, composers such as [[André Jolivet]].{{Fact|date=June 2007}}
===="New Aesthetic of Music"====
{{main|Ferruccio Busoni}}
Just a year later came another significant contribution to the advent of experimental music: the 1907 publication of [[Ferruccio Busoni]]'s ''Sketch of a New Esthetic of Music'', which discussed the use of electrical and other new sound sources in future music. He wrote of the future of microtonal scales in music, made possible by Cahill's Dynamophone: "Only a long and careful series of experiments, and a continued training of the ear, can render this unfamiliar material approachable and plastic for the coming generation, and for Art."
Also in the ''Sketch of a New Esthetic of Music'', Busoni states:
{{quote|Music as an art, our so-called occidental music, is hardly four hundred years old; its state is one of development, perhaps the very first stage of a development beyond present conception, and we—we talk of "classics" and "hallowed traditions"! And we have talked of them for a long time! We have formulated rules, stated principles, laid down laws;—we apply laws made for maturity to a child that knows nothing of responsibility! Young as it is, this child, we already recognize that it possesses one radiant attribute which signalizes it beyond all its elder sisters. And the lawgivers will not see this marvelous attribute, lest their laws should be thrown to the winds. This child—it floats on air! It touches not the earth with its feet. It knows no law of gravitation. It is wellnigh incorporeal. Its material is transparent. It is sonorous air. It is almost Nature herself. It is—free! But freedom is something that mankind have never wholly comprehended, never realized to the full. They can neither recognize or acknowledge it. They disavow the mission of this child; they hang weights upon it. This buoyant creature must walk decently, like anybody else. It may scarcely be allowed to leap—when it were its joy to follow the line of the rainbow, and to break sunbeams with the clouds.}}
Through this writing, as well as personal contact, Busoni had a profound effect on many musicians and composers, perhaps most notably on his pupil, Edgard Varèse, who said:
{{quote|Together we used to discuss what direction the music of the future would, or rather, should take and could not take as long as the straitjacket of the tempered system. He deplored that his own keyboard instrument had conditioned our ears to accept only an infinitesimal part of the infinite gradations of sounds in nature. He was very much interested in the electrical instruments we began to hear about, and I remember particularly one he had read of called the Dynamophone. All through his writings one finds over and over again predictions about the music of the future which have since come true. In fact, there is hardly a development that he did not foresee, as for instance in this extraordinary prophecy: 'I almost think that in the new great music, machines will also be necessary and will be assigned a share in it. Perhaps industry, too, will bring forth her share in the artistic ascent.'}}
====The 1920–1930s====
These decades brought a wealth of early electronic instruments. Along with the Theremin came the [[Ondes Martenot]] and the [[Trautonium]]. Maurice Martenot invented the Ondes Martenot in 1928 and soon demonstrated it in Paris; it was designed to reproduce the microtonal sounds found in Hindu music.{{Fact|date=August 2008}} Composers who ultimately used the instrument include [[Pierre Boulez|Boulez]], [[Arthur Honegger|Honegger]], [[André Jolivet|Jolivet]], [[Charles Koechlin|Koechlin]], [[Olivier Messiaen|Messiaen]], [[Darius Milhaud|Milhaud]], [[Gilles Tremblay|Tremblay]], and [[Edgard Varèse|Varèse]]. In 1937, Messiaen wrote ''Fête des belles eaux'' for six ondes Martenot, and later wrote solo parts for it in ''[[Trois petites Liturgies de la Présence Divine]]'' (1943–44) and the ''Turangalîla-Symphonie'' (1946–48/90).
Another development, which aroused the interest of many composers, occurred in 1919–1920. In Leningrad, Leon Theremin (actually [[Lev Termen]]) built and demonstrated his Etherophone, which was later renamed the Theremin. This led to the first compositions for electronic instruments, as opposed to noisemakers and re-purposed machines. In 1929, [[Joseph Schillinger]] composed ''First Airphonic Suite for Theremin and Orchestra'', premièred with the [[Cleveland Orchestra]] with [[Léon Theremin|Leon Theremin]] as soloist.
In 1924, [[Ottorino Respighi]] composed ''The Pines of Rome'', which calls for a [[phonograph]] recording of nightingales. At the time of composition, however, phonograph players were acoustical rather than electric, so this amounts to the use of a sound effect and cannot be considered an electroacoustic element in the composition.
The following year, [[George Antheil|Antheil]] first composed for mechanical devices, electrical noisemakers, motors, and amplifiers in his unfinished opera ''Mr. Bloom'', a response to the "art of noises" of [[Luigi Russolo]], [[Filippo Tommaso Marinetti|Marinetti]], and the other Futurists.{{Fact|date=May 2008}} Just one year later, in 1926, came the première of Antheil's ''[[Ballet Mécanique]]'', using car horns, airplane propellers, saws, and anvils (but no electronics).
Recording of sounds made a leap in 1927, when the American inventor J. A. O'Neill developed a recording device that used magnetically coated ribbon. It was, however, a commercial failure. Two years later, [[Laurens Hammond]] established his company for the manufacture of electronic instruments. He went on to produce the [[Hammond organ]], which was based on the principles of the [[Telharmonium]], along with other developments including early reverberation units (Russcol 1972, 70).
The method of photo-optic sound recording used in cinematography made it possible to obtain a visible image of a sound wave, and also to realize the opposite goal: synthesizing a sound from an artificially drawn sound wave. Research by the Russian optical engineer Evgeny Murzin,{{Fact|date=August 2008}} carried out from 1937 to 1957,{{Fact|date=August 2008}} made it possible to create a photoelectric synthesizer, a musical instrument that combined three processes: the creation, recording, and playback of music. Murzin named his invention "ANS" in honour of the composer Alexander Nikolayevich Scriabin.{{Fact|date=August 2008}}
===Developments from 1945 to 1960===
====Musique concrète====
{{main|Musique concrète}}
{{seealso|Acousmatic music}}
Low-fidelity magnetic [[wire recorder]]s had been in use since 1898, and in the late 1920s the movie industry adopted optical sound-on-film recording systems based on the [[photoelectric cell]],{{Fact|date=March 2009}} but it was not until the 1930s that the German electronics company [[AEG]] developed the first practical audio [[tape recorder]], the "[[Magnetophon]]".{{Fact|date=March 2009}} During [[World War II]], AEG technicians discovered the [[AC bias]]ing technique, which dramatically improved the fidelity of magnetic recording by adding an inaudible high-frequency tone, and by 1943 AEG had developed the first [[stereo]] tape recorders.{{Fact|date=March 2009}} However, these devices and techniques remained a secret outside Germany until the end of WWII, when captured Magnetophon recorders and reels of [[I.G. Farben|Farben]] recording tape were brought back to the United States by [[Jack Mullin]] and others.{{Fact|date=March 2009}} These captured recorders and tapes were the basis for the development of the first commercial tape recorder, the Model 200, manufactured by the American [[Ampex]] company (Angus 1984) with support from entertainer [[Bing Crosby]], who became the first performer to record radio broadcasts and studio master recordings on tape.{{Fact|date=March 2009}}
Magnetic audio tape opened up a vast new range of sonic possibilities to musicians, composers, producers and engineers. Audio tape was relatively cheap and very reliable, and its fidelity of reproduction was better than any audio medium to date. Most importantly, unlike discs, it offered the same plasticity of use as film. Tape can be slowed down, sped up, or even run backwards during recording or playback, with often startling effect. It can be physically edited in much the same way as film, allowing unwanted sections of a recording to be seamlessly removed or replaced; likewise, segments of tape from other sources can be edited in. Tape can also be joined to form endless loops that continually play repeated patterns of pre-recorded material. Audio amplification and mixing equipment further expanded tape's capabilities as a production medium, allowing multiple pre-taped recordings (and/or live sounds, speech or music) to be mixed together and simultaneously recorded onto another tape with relatively little loss of fidelity. Another unforeseen windfall was that tape recorders can be relatively easily modified to become [[echo machine]]s that produce complex, controllable, high-quality [[echo]] and [[reverberation]] effects (most of which would be practically impossible to achieve by mechanical means).
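In digital terms, the basic tape manipulations described above can be sketched in a few lines of Python, assuming audio is represented as a plain list of samples (a simplification, not production signal processing):

```python
# Toy digital analogues of classic tape techniques: reversal,
# half-speed playback, and looping.  Audio is a plain list of
# floating-point samples; no interpolation or anti-aliasing.

def reverse(tape):
    """Play the tape backwards."""
    return tape[::-1]

def half_speed(tape):
    """Half-speed playback doubles the length (and halves the pitch).
    Naive version: each sample is simply repeated once."""
    out = []
    for s in tape:
        out.extend([s, s])
    return out

def loop(tape, times):
    """Splice the tape into a repeating loop played `times` times."""
    return tape * times

clip = [0.0, 0.5, 1.0, 0.5]
print(reverse(clip))     # [0.5, 1.0, 0.5, 0.0]
print(half_speed(clip))  # [0.0, 0.0, 0.5, 0.5, 1.0, 1.0, 0.5, 0.5]
print(loop(clip, 2))     # [0.0, 0.5, 1.0, 0.5, 0.0, 0.5, 1.0, 0.5]
```

Physical tape editing with razor blade and splicing block corresponds here to ordinary list slicing and concatenation.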
Before long, composers began using the tape recorder to develop a new technique of composition called [[musique concrète]], which involved editing together recorded fragments of natural and industrial sounds: "Musique Concrete was created in Paris in 1948 from edited collages of everyday noise" (Lebrecht 1996, 107). The first pieces of ''musique concrète'' were assembled by [[Pierre Schaeffer]], who went on to collaborate with [[Pierre Henry]].
On 5 October 1948, [[Radiodiffusion-Télévision Française|Radiodiffusion Française]] (RDF) broadcast composer [[Pierre Schaeffer]]'s ''Etude aux chemins de fer''. This was the first "[[Movement (music)|movement]]" of ''Cinq études de bruits'', and marked the beginning of studio realizations and [[musique concrète]] (or [[acousmatic art|acousmatic music]]). Schaeffer employed a disk-cutting [[lathe]], four turntables, a four-channel mixer, filters, an echo chamber, and a mobile recording unit.
Not long after this, Henry began collaborating with Schaeffer, a partnership that would have a profound and lasting effect on the direction of electronic music. Another associate of Schaeffer, [[Edgard Varèse]], began work on ''Déserts'', a work for chamber orchestra and tape. The tape parts were created at Pierre Schaeffer's studio and later revised at Columbia University.
In 1950, Schaeffer gave the first public (non-broadcast) concert of musique concrète at the [[Ecole Normale de Musique de Paris]]. "Schaeffer used a PA system, several turntables, and mixers. The performance did not go well, as creating live montages with turntables had never been done before" (Snyder n.d., ''[http://csunix1.lvc.edu/~snyder/em/schaef.html Pierre Schaeffer: Inventor of Musique Concrète]''). Later that same year, Pierre Henry collaborated with Schaeffer on ''Symphonie pour un homme seul'' (1950), the first major work of musique concrète. In Paris in 1951, in what was to become an important worldwide trend, RTF established the first studio for the production of electronic music. Also in 1951, Schaeffer and Henry produced an opera, ''Orpheus'', for concrete sounds and voices.
====Elektronische Musik====
[[Karlheinz Stockhausen]] worked briefly in Schaeffer's studio in 1952, and afterward for many years at the [[Westdeutscher Rundfunk|WDR]] [[Cologne]]'s Studio for Electronic Music.
In Cologne, what would become the most famous electronic music studio in the world was officially opened at the radio studios of the [[NWDR]] in 1953, though it had been in the planning stages as early as 1950, and early compositions were made and broadcast in 1951 (Eimert 1972, 349). The brainchild of [[Werner Meyer-Eppler]], Robert Beyer, and [[Herbert Eimert]] (who became its first director), the studio was soon joined by [[Karlheinz Stockhausen]] and [[Gottfried Michael Koenig]]. In his 1949 thesis ''Elektronische Klangerzeugung: Elektronische Musik und Synthetische Sprache'', Meyer-Eppler had conceived the idea of synthesizing music entirely from electronically produced signals; in this way, ''elektronische Musik'' was sharply differentiated from French ''musique concrète'', which used sounds recorded from acoustical sources (Eimert 1958, 2; Ungeheuer 1992).
With Stockhausen and [[Mauricio Kagel]] in residence, the studio became "a year-round hive of charismatic avante-gardism" (Lebrecht 1996, 75); it was "at Northwest German Radio in Cologne (1953), where the term 'electronic music' was coined to distinguish their pure experiments from musique concrete" (Lebrecht 1996, 107). Stockhausen on two occasions combined electronically generated sounds with relatively conventional [[orchestra]]s: in ''Mixtur'' (1964) and ''[[Hymnen|Hymnen, dritte Region mit Orchester]]'' (1967) (Stockhausen 1978, 73–76, 78). Stockhausen stated that his listeners had told him his electronic music gave them an experience of "outer space," sensations of flying, or of being in a "fantastic dream world." In 1967, just following the world première of ''[[Hymnen]]'', he said this about the electronic music experience: "... Many listeners have projected that strange new music which they experienced—especially in the realm of electronic music—into extraterrestrial space. Even though they are not familiar with it through human experience, they identify it with the fantastic dream world. Several have commented that my electronic music sounds 'like on a different star,' or 'like in outer space.' Many have said that when hearing this music, they have sensations as if flying at an infinitely high speed, and then again, as if immobile in an immense space. Thus, extreme words are employed to describe such experience, which are not 'objectively' communicable in the sense of an object description, but rather which exist in the subjective fantasy and which are projected into the extraterrestrial space" (Holmes 2002).
More recently, Stockhausen turned to producing electronic music in his own studio in [[Kürten]], his last work in the genre being ''Cosmic Pulses'' (2007).
====American electronic music====
In the United States, sounds were being created electronically and used in composition, as exemplified in a piece by [[Morton Feldman]] called ''Marginal Intersection''. This piece is scored for winds, brass, percussion, strings, two oscillators, and sound effects of riveting; the score uses Feldman's graph notation.
The Music for Magnetic Tape Project was formed by members of the [[New York School]] ([[John Cage]], [[Earle Brown]], [[Christian Wolff (composer)|Christian Wolff]], [[David Tudor]], and [[Morton Feldman]]) (Johnson 2002, 2), and lasted three years, until 1954. Cage wrote of this collaboration:
{{quote|In this social darkness, therefore, the work of Earle Brown, Morton Feldman, and Christian Wolff continues to present a brilliant light, for the reason that at the several points of notation, performance, and audition, action is provocative.|Johnson 2002}}
Cage completed ''Williams Mix'' in 1953 while working with the Music for Magnetic Tape Project. "Carolyn Brown [Earle Brown's wife] was to dance in Cunningham's company, while Brown himself was to participate in Cage's 'Project for Music for Magnetic Tape.'... funded by Paul Williams (dedicatee of the 1953 ''Williams Mix''), who—like [[Robert Rauschenberg]]—was a former student of Black Mountain College, which Cage and Cunningham had first visited in the summer of 1948" (Johnson 2002, 20). The group had no permanent facility, and had to rely on borrowed time in commercial sound studios, including the studio of Louis and Bebe Barron.
====Columbia-Princeton====
{{main|Vladimir Ussachevsky}}
Also in the U.S., in the same year, significant developments were happening in New York City. [[Columbia University]] purchased its first [[tape recorder]]—a professional [[Ampex]] machine—for the purpose of recording concerts.
[[Vladimir Ussachevsky]], who was on the music faculty of Columbia University, was placed in charge of the device, and almost immediately began experimenting with it.
Herbert Russcol writes: "Soon he was intrigued with the new sonorities he could achieve by recording musical instruments and then superimposing them on one another" (Russcol 1972, 92).
Ussachevsky said later: "I suddenly realized that the tape recorder could be treated as an instrument of sound transformation." Both pieces were created at the home of Henry Cowell in Woodstock, NY. After several concerts caused a sensation in New York City, Ussachevsky and Luening were invited onto a live broadcast of NBC's Today Show for an interview demonstration, the first televised electroacoustic performance. Luening described the event: "I improvised some [flute] sequences for the tape recorder. Ussachevsky then and there put them through electronic transformations."
1954 saw the advent of what would now be considered authentic electric-plus-acoustic compositions: acoustic instrumentation augmented or accompanied by recordings of manipulated and/or electronically generated sound. Three major works were premièred that year: Varèse's ''Déserts'', for chamber ensemble and tape sounds, and two works by Luening and Ussachevsky, ''Rhapsodic Variations for the Louisville Symphony'' and ''A Poem in Cycles and Bells'', both for orchestra and tape. Because he had been working at Schaeffer's studio, the tape part for Varèse's work contains much more concrète sound than electronic: "A group made up of wind instruments, percussion and piano alternates with mutated sounds of factory noises and ship sirens and motors, coming from two loudspeakers."
''Déserts'' was premièred in Paris in the first [[stereo]] broadcast on French Radio. At the German première in [[Hamburg]], conducted by [[Bruno Maderna]], the tape controls were operated by [[Karlheinz Stockhausen]]. The title ''Déserts'' suggested to Varèse not only "all physical deserts (of sand, sea, snow, of outer space, of empty streets), but also the deserts in the mind of man; not only those stripped aspects of nature that suggest bareness, aloofness, timelessness, but also that remote inner space no telescope can reach, where man is alone, a world of mystery and essential loneliness" (Anonymous 1972).
====Stochastic music====
{{Unreferenced section|date=June 2007}}
{{main|Iannis Xenakis|Stochastic music#Music}}
An important new development was the advent of computers for the purpose of composing music, as opposed to manipulating or creating sounds. [[Iannis Xenakis]] began what is called "musique stochastique," or "[[stochastic music]]," which is a method of composing that employs mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used graph paper and a ruler to aid in calculating the velocity trajectories of [[glissando|glissandi]] for his orchestral composition ''Metastasis'' (1953–54), but later turned to the use of computers to compose pieces like ''ST/4'' for string quartet and ''ST/48'' for orchestra (both 1962).
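The idea of composing under probability distributions can be illustrated with a minimal Python sketch. The scale, weights, and durations below are illustrative assumptions for demonstration only, not taken from any Xenakis score or algorithm:

```python
import random

random.seed(0)  # fixed seed so the "composition" is reproducible

# A toy stochastic composition: each note's pitch and duration are
# drawn independently from weighted probability distributions.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches (C major, illustrative)
WEIGHTS = [4, 1, 2, 1, 3, 1, 1, 2]        # arbitrary pitch probabilities
DURATIONS = [0.25, 0.5, 1.0]              # note lengths in beats

def stochastic_phrase(n_notes):
    """Return a list of (midi_pitch, duration_in_beats) pairs."""
    pitches = random.choices(SCALE, weights=WEIGHTS, k=n_notes)
    durations = random.choices(DURATIONS, k=n_notes)
    return list(zip(pitches, durations))

phrase = stochastic_phrase(8)
print(phrase)
```

Each run with a different seed yields a different realization of the same underlying probability "parameters," which is the essential point of the stochastic approach.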
====Mid to late 1950s====
In 1954, Stockhausen composed his ''Elektronische Studie II''—the first electronic piece to be published as a score.
In 1955, more experimental and electronic studios began to appear. Notable were the creation of the Studio di Fonologia in Milan, a studio at the [[NHK]] in [[Tokyo]] founded by [[Toshiro Mayuzumi]], and the Philips studio at [[Eindhoven]], the [[Netherlands]], which moved to the [[University of Utrecht]] as the Institute of Sonology in 1960.
The score for ''[[Forbidden Planet]]'', by [[Louis and Bebe Barron]], was composed entirely with custom-built electronic circuits in 1956: "From at least Louis and Bebe Barron's soundtrack for ''The Forbidden Planet'' onwards, electronic music - in particular synthetic timbre - has impersonated alien worlds in film" (Norman 2004, 32).
The world's first computer to play music was [[CSIRAC]], designed and built by [[Trevor Pearcey]] and Maston Beard. Mathematician Geoff Hill programmed CSIRAC to play popular melodies from the very early 1950s. In 1951 it publicly played the [[Colonel Bogey March]],{{cite journal| last = Doornbusch | first = Paul| title = The Music of CSIRAC | journal = Melbourne School of Engineering, Department of Computer Science and Software Engineering| url = http://www.csse.unimelb.edu.au/dept/about/csirac/music/introduction.html}} of which no known recordings exist; the music it played was, however, accurately reconstructed. CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice, as is current computer music practice. The oldest known recordings of computer-generated music were played by the [[Ferranti Mark I]] computer, a commercial version of the [[Manchester Small-Scale Experimental Machine|Baby]] machine from the [[Victoria University of Manchester|University of Manchester]], in the autumn of 1951. The music program was written by [[Christopher Strachey]].
The impact of computers continued in 1956. [[Lejaren Hiller]] and Leonard Isaacson composed ''Illiac Suite'' for [[string quartet]], the first complete work of computer-assisted composition using [[algorithm]]ic composition: "... Hiller postulated that a computer could be taught the rules of a particular style and then called on to compose accordingly" (Schwartz 1975, 347). Later developments included the work of [[Max Mathews]] at Bell Laboratories, who developed the influential [[MUSIC-N|MUSIC I]] program. [[Vocoder]] technology was also a major development of this early era.
In 1956 Stockhausen composed ''[[Gesang der Jünglinge]]'', the first major work of the [[Cologne]] studio, based on a text from the ''[[Book of Daniel]]''. An important technological development of that year was the invention of the [[Clavivox]] [[synthesizer]] by [[Raymond Scott]] with subassembly by [[Robert Moog]].
The [[RCA Mark II Sound Synthesizer]] made its debut in 1957. Unlike the earlier Theremin and Ondes Martenot, it was difficult to use, required extensive programming, and could not be played in real time. Sometimes called the first electronic synthesizer, the [[RCA Mark II Sound Synthesizer]] used [[vacuum tube]] oscillators and incorporated the first electronic [[music sequencer]] driven by two punched-paper tapes. It was designed by RCA and installed at the Columbia-Princeton Electronic Music Center where it remains to this day.{{Fact|date=June 2007}}
In 1957, [[MUSIC-N|MUSIC]], one of the first computer programs to play electronic music, was created by [[Max Mathews]] at [[Bell Laboratories]].
Later, [[Milton Babbitt]], influenced in his student years by Schoenberg's "revolution in musical thought," began applying serial techniques to electronic music.{{Fact}} From 1950 to 1960 the vocabulary of tape music shifted from the fairly pure experimental works that characterized the classic Paris and Cologne schools to more complex and expressive works exploring a wide range of compositional styles. By the mid-1950s, more and more works appeared that combined taped sounds with live instruments and voices. There was also a tentative interest, and a few attempts, at incorporating taped electronic sounds into theatrical works (Dunn 1992, 17).
The public remained interested in the new sounds being created around the world, as can be deduced from the inclusion of Varèse's ''Poème électronique'', played over four hundred loudspeakers at the Philips Pavilion of the 1958 [[Expo '58|Brussels World Fair]]. That same year, [[Mauricio Kagel]], an [[Argentina|Argentine]] composer, composed ''Transición II''. The work was realized at the WDR studio in Cologne. Two musicians perform on a piano, one in the traditional manner, the other playing on the strings, frame, and case. Two other performers use tape to unite the live sounds with their future (prerecorded materials from later in the performance) and their past (recordings made earlier in the performance).
====Computer music====
{{main|Computer music}}
{{seealso|Music-N|Algorithmic composition}}
[[CSIRAC]], the first computer to play music, did so publicly in August 1951.{{cite journal| last = Doornbusch | first = Paul| title = The Music of CSIRAC | journal = Melbourne School of Engineering, Department of Computer Science and Software Engineering| url = http://www.csse.unimelb.edu.au/dept/about/csirac/music/introduction.html}} One of the first large-scale public demonstrations of [[computer music]] was a pre-recorded national radio broadcast on the [[NBC]] [[radio network]] program [[Monitor (NBC Radio)|Monitor]] on February 10, 1962. In 1961, [[LaFarr Stuart]] programmed [[Iowa State University]]'s [[CYCLONE]] computer (a derivative of the [[Illiac]]) to play simple, recognizable tunes through an amplified speaker that had been attached to the system originally for administrative and diagnostic purposes. An interview with Mr. Stuart accompanied his computer music.
The late 1950s, 1960s and 1970s also saw the development of large mainframe computer synthesis. Starting in 1957, Max Mathews of Bell Labs developed the MUSIC programs, culminating in [[MUSIC-N|MUSIC V]], a direct digital synthesis language (Mattis 2001).
====Live electronics====
{{main|live electronic music}} {{seealso|electroacoustic improvisation}}
In America, live electronics were pioneered in the early 1960s by members of Milton Cohen's Space Theater in [[Ann Arbor, Michigan]], including [[Gordon Mumma]] and [[Robert Ashley]], by individuals such as [[David Tudor]] around 1965, and The Sonic Arts Union, founded in 1966 by Gordon Mumma, Robert Ashley, [[Alvin Lucier]], and [[David Behrman]]. ONCE Festivals, featuring multimedia theater music, were organized by Robert Ashley and Gordon Mumma in Ann Arbor between 1958 and 1969. In 1960, [[John Cage]] composed ''Cartridge Music'', one of the earliest live-electronic works.
In Europe in 1964, Karlheinz Stockhausen composed ''[[Mikrophonie (Stockhausen)|Mikrophonie I]]'' for [[tam-tam]], hand-held microphones, filters, and potentiometers, and ''Mixtur'' for orchestra, four [[Sine wave|sine-wave]] generators, and four [[ring modulator]]s. In 1965 he composed ''[[Mikrophonie (Stockhausen)|Mikrophonie II]]'' for choir, Hammond organ, and ring modulators.
In 1966–67 [[Reed Ghazala]] discovered and began to teach "[[circuit bending]]"—the application of the creative short circuit, a process of chance short-circuiting, creating experimental electronic instruments, exploring sonic elements mainly of timbre and with less regard to pitch or rhythm, and influenced by [[John Cage]]’s [[aleatoric music]] concept.{{cite web
|url = http://www.gamemusic4all.com/backto8bit%204.html
|title = Back to the 8 bit: A Study of Electronic Music Counter-culture
|last = Yabsley
|first = Alex
|authorlink =
|coauthors =
|date = 2007-02-03
|publisher = Dot.AY
|quote = This element of embracing errors is at the centre of Circuit Bending, it is about creating sounds that are not supposed to happen and not supposed to be heard (Gard, 2004). In terms of musicality, as with electronic art music, it is primarily concerned with timbre and takes little regard of pitch and rhythm in a classical sense. ... . In a similar vein to Cage’s aleatoric music, the art of Bending is dependent on chance, when a person prepares to bend they have no idea of the final outcome.}}
====Digital synthesis====
{{seealso|Digital synthesizer|Sampling (music)}}
In 1979 the Australian company [[Fairlight]] released the [[Fairlight CMI]] (Computer Musical Instrument), the first practical polyphonic digital synthesizer/sampler system. In 1983, [[Yamaha Corporation|Yamaha]] introduced the first stand-alone digital synthesizer, the [[DX-7]]. It used frequency modulation synthesis (FM synthesis), first experimented with by [[John Chowning]] at Stanford during the late sixties (Chowning 1973).
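The core of FM synthesis can be sketched in a few lines: a carrier sine wave whose phase is modulated by a second sine wave, producing sidebands that enrich the timbre. All parameter values below are illustrative assumptions, not the DX-7's actual operator settings:

```python
import math

# Minimal FM synthesis sketch in the spirit of Chowning's technique.
SAMPLE_RATE = 8000    # samples per second (low, for brevity)
CARRIER_HZ = 440.0    # carrier frequency (pitch)
MOD_HZ = 110.0        # modulator frequency
MOD_INDEX = 2.0       # modulation index: depth of frequency deviation

def fm_tone(seconds):
    """Return a list of samples in [-1, 1] of an FM-synthesized tone:
    y(t) = sin(2*pi*fc*t + I * sin(2*pi*fm*t))."""
    samples = []
    for n in range(int(seconds * SAMPLE_RATE)):
        t = n / SAMPLE_RATE
        mod = MOD_INDEX * math.sin(2 * math.pi * MOD_HZ * t)
        samples.append(math.sin(2 * math.pi * CARRIER_HZ * t + mod))
    return samples

tone = fm_tone(0.1)
```

Raising the modulation index brightens the spectrum; varying it over time is what gives FM its characteristic evolving timbres.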
[[Barry Vercoe]] describes one of his experiences with early computer sounds:
{{quote|At IRCAM in Paris in 1982, flutist Larry Beauregard had connected his flute to DiGiugno's 4X audio processor, enabling real-time pitch-following. On a [[Guggenheim]] at the time, I extended this concept to real-time score-following with automatic synchronized accompaniment, and over the next two years Larry and I gave numerous demonstrations of the computer as a chamber musician, playing [[Handel]] flute sonatas, [[Boulez]]'s ''Sonatine'' for flute and piano and by 1984 my own ''Synapse II'' for flute and computer—the first piece ever composed expressly for such a setup. A major challenge was finding the right software constructs to support highly sensitive and responsive accompaniment. All of this was pre-MIDI, but the results were impressive even though heavy doses of tempo rubato would continually surprise my '''Synthetic Performer'''. In 1985 we solved the tempo rubato problem by incorporating ''learning from rehearsals'' (each time you played this way the machine would get better). We were also now tracking violin, since our brilliant, young flautist had contracted a fatal cancer. Moreover, this version used a new standard called MIDI, and here I was ably assisted by former student Miller Puckette, whose initial concepts for this task he later expanded into a program called '''MAX'''.|Vercoe 2000}}
====Advancements====
In the 1990s, interactive computer-assisted performance started to become possible, with one example described as follows:
Automated Harmonization of Melody in Real Time: An interactive computer system, developed in collaboration with flutist/composer [[Pedro Eustache]], for realtime melodic analysis and harmonic accompaniment. Based on a novel scheme of harmonization devised by Eustache, the software analyzes the tonal melodic function of incoming notes, and instantaneously performs an orchestrated harmonization of the melody. The software was originally designed for performance by Eustache on the Yamaha WX7 wind controller, and was used in his composition ''Tetelestai'', premièred in [[Irvine, California]] in March 1999 (Dobrian 2002, ''[http://music.arts.uci.edu/dobrian/research02.htm Automated Harmonization of Melody in Real Time]'').
Other recent developments included [[Tod Machover]]'s (MIT and IRCAM) composition ''Begin Again Again'' for "[[hypercello]]", an interactive system of sensors measuring physical movements of the cellist. Max Mathews developed the "Conductor" program for real-time tempo, dynamic, and timbre control of a pre-input electronic score, and Morton Subotnick released the multimedia CD-ROM ''All My Hummingbirds Have Alibis''.
===The 2000s===
In recent years, as computer technology has become more accessible and [[music software]] has advanced, it has become possible to interact with music production technology using means that bear no relationship to traditional [[performance|musical performance]] practices (Emmerson 2007, 111–13): for instance, [[laptop computer|laptop]] performance (''laptronica'') (Emmerson 2007, 80–81) and [[live coding]] (Emmerson 2007, 115; Collins 2003).
In the last decade a number of software-based virtual studio environments have emerged, with products such as Propellerhead's [[Reason (software)|Reason]] and [[Ableton Live]] finding popular appeal ([http://www.wintermusicconference.com/events/idmas/winners2008.php 23rd Annual International Dance Music Awards]: Best Audio Editing Software of the Year, 1st Ableton Live, 4th Reason; Best Audio DJ Software of the Year, Ableton Live).
Such tools provide viable and cost-effective alternatives to typical hardware-based production studios, and thanks to advances in [[microprocessor]] technology, it is now possible to create high-quality music using little more than a single laptop computer. Such advances have, for better or for worse, democratized music creation (Chadabe 2004), leading to a massive increase in the amount of home-produced electronic music available to the general public via the internet.
Artists can now also individuate their production practice by creating personalized software synthesizers, effects modules, and various composition environments. Devices that once existed exclusively in the hardware domain can easily have virtual counterparts. Some of the more popular software tools for achieving such ends are commercial releases such as [[Max/MSP]] and [[Reaktor]] and [[freeware]] packages such as [[Pure Data]], [[SuperCollider]], and [[ChucK]].
====Circuit bending====
[[Image:Bending.jpg|thumb|Probing for "bends" using a jeweler's screwdriver and alligator clips.]]
{{main|Circuit bending}}
[[Circuit bending]], a practice originally pioneered by [[Reed Ghazala]] in the 1960s, has recently found significant popular appeal. It is the creative [[short circuit|short-circuiting]] of low-[[voltage]], battery-powered [[Electronic musical instrument|electronic audio devices]] such as [[guitar effects]], children's [[toys]], and small [[synthesizers]] to create new musical instruments and sound generators.{{Fact|date=April 2008}} Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with [[noise music]], though many more conventional contemporary musicians and musical groups have been known to experiment with "bent" instruments.