The Evolution of AI Music

Written by:

Introduction: Visionaries

The story of AI music isn’t just about circuits, algorithms, or code – it’s about people. Over the past century, composers and inventors have imagined new ways to use technology, transforming the potential of music. Each generation of pioneers has challenged tradition: Luigi Russolo with his noise machines, George Antheil with his mechanical chaos. The immersive soundscapes of Edgard Varèse inspired composers such as Delia Derbyshire, who painstakingly spliced tape to create new timbres. Meanwhile, the trained architect Iannis Xenakis applied a mathematical approach to his compositions, and Brian Eno ultimately introduced us to the new realm of ambient environments.

These innovators didn’t merely adopt technology; they shaped it, bending machines to fit their artistic visions. Their experiments laid the groundwork for today’s AI composers, who carry forward the same spirit of rebellion and innovation into the digital age. This article traces their journey, illustrating how human imagination – working hand in hand with technology – has taken us from Futurist noise to neural networks.

A radio broadcast, produced for RTHK Radio 3 and now available on SoundCloud, features excerpts from four key composers – George Antheil, Edgard Varèse, Delia Derbyshire, and Brian Eno – culminating in AIVA’s orchestral AI composition Symphonic Fantasy in A Minor, Op. 31: Among the Stars (2018).

Thaddeus Cahill’s Telharmonium

Noise and Rebellion: 1910s–1920s

In 1906, Thaddeus Cahill’s Telharmonium – the first electromechanical synthesiser, weighing over 200 tons and using rotating tone wheels to generate sound – received its first public demonstrations at Telharmonic Hall in New York. It influenced later instruments, including the Hammond organ (1930s), the RCA Mark II Synthesiser (1957), and modern digital synthesisers. The following year, in 1907, Ferruccio Busoni published Sketch for a New Aesthetic of Music – an essay calling for a radical reimagining of music that embraced technology, microtones, and electronic sounds. Busoni’s ideas anticipated algorithmic composition and electronic sound-generation technologies decades before they existed.

Luigi Russolo in 1913 with his mechanical orchestra

An even more radical voice emerged in 1913 when Luigi Russolo declared the orchestra obsolete. In The Art of Noises, he argued that factories, trains, and crowds create the true music of the modern age. He built intonarumori machines that produced controlled chaos. When he premiered them in Milan around 1913–1914, the audience reacted with uproar and excitement.

George Antheil Ballet mécanique (ICTUS, Brussels, 2013)

George Antheil soon pushed mechanisation further. Between 1923 and 1924, he composed Ballet Mécanique for 16 synchronised player pianos, sirens, propellers, and typewriters. At the 1926 Paris premiere, the machines failed to synchronise properly, and the audience erupted in laughter and jeers. Yet, Antheil’s mechanical polyrhythms foreshadowed MIDI sequencing (introduced in 1983) and algorithmic beat generation, and his radical approach inspired Conlon Nancarrow and Frank Zappa.

Le Corbusier’s Philips Pavilion in Brussels

Organising Sound: 1930s–1950s

Between 1918 and 1921 (revised in 1927), Edgard Varèse composed Amériques, a landmark orchestral work blending modernist dissonance, rhythmic complexity, and ‘sound masses’. The work focused heavily on percussion (including 11 percussionists and sirens) and was among the first to elevate percussion as a dominant, independent force in the orchestra. Decades later, Varèse transformed sound into architecture with his groundbreaking Poème électronique (1958). Collaborating with Le Corbusier (who designed the Philips Pavilion) and Iannis Xenakis (who served as architectural assistant), Varèse installed approximately 325 loudspeakers in the Philips Pavilion at the 1958 Brussels World’s Fair, immersing listeners in a moving soundscape.

Turing (right) and colleagues working on the Ferranti Mark I Computer in 1951

The first computer-generated melody was programmed by Christopher Strachey on the Ferranti Mark I (derived from the Manchester Mark I) in 1951, using the programming manual Alan Turing had written for the machine. The computer played God Save the King, marking the earliest documented instance of a computer creating music and the beginning of composition with digital technology.

A page from the score of Karlheinz Stockhausen Gesang der Jünglinge (Song of the Youth)

Karlheinz Stockhausen composed Gesang der Jünglinge (Song of the Youth) in 1955–1956, an electronic/vocal fusion with tape, inspired by a Biblical story (Daniel 3:1–30, the Song of the Three Youths). It combines a boy soprano voice (12-year-old Josef Protschka) with electronically generated sounds, spliced, looped, and sped up to create otherworldly vocal textures.

Lejaren Hiller and Leonard Isaacson: Illiac Suite (String Quartet No. 4), the first computer-generated score

Computers formally entered the compositional story in 1957, when Lejaren Hiller and Leonard Isaacson wrote the Illiac Suite (String Quartet No. 4), the first computer-generated score, by applying Bach-style counterpoint rules through algorithmic decision-making. That same year, Max Mathews at Bell Labs created MUSIC I, the first digital sound synthesis program. Together, Varèse’s spatial vision and Hiller’s algorithms shifted music from notation to machine-sculpted sound.
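Hiller and Isaacson’s method was essentially generate-and-test: propose random notes, then screen each one against encoded counterpoint rules. The sketch below illustrates that idea in miniature – the scale, the two rules, and the function names are illustrative assumptions, not the Illiac Suite’s actual (far richer) rule set.

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, as MIDI note numbers

def acceptable(melody, candidate):
    """Counterpoint-style screening rules (a simplified stand-in for
    Hiller and Isaacson's rule tables)."""
    if not melody:
        return candidate in (60, 72)        # begin on the tonic
    if candidate == melody[-1]:             # no immediate repetition
        return False
    if abs(candidate - melody[-1]) > 5:     # no leap larger than a fourth
        return False
    return True

def generate_melody(length, seed=0):
    """Generate-and-test: draw random notes, keep only rule-passing ones."""
    rng = random.Random(seed)
    melody = []
    while len(melody) < length:
        candidate = rng.choice(SCALE)
        if acceptable(melody, candidate):
            melody.append(candidate)
    return melody

print(generate_melody(8))
```

The same generate-and-test loop scales up naturally: add rules, and the random source is progressively constrained into something that sounds composed.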

Delia Derbyshire at the BBC Radiophonic Workshop

Tape, Chance, and Sci‑Fi Sound: 1960s–1970s

Delia Derbyshire at the BBC Radiophonic Workshop realised Ron Grainer’s Doctor Who theme in 1963. She spliced tape loops, layered oscillators, and crafted each note by hand. The BBC refused to credit her on-screen (until the 50th anniversary in 2013), but her theme became one of the first fully electronic TV signatures and influenced generations of electronic composers.

Iannis Xenakis (1922-2001)

At the same time, Iannis Xenakis used mathematics in his compositions. In Achorripsis (1957), he applied probability theory (stochastic processes) to determine pitch, duration, density, and overall structure. Though the piece is a fixed score (not free-form or improvised), its statistical design gives each performance a uniquely structured sound. Later, in 1977–1978, Xenakis created the UPIC system, which let composers draw sound waves and shapes on a screen and turn them directly into electronic music.
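Xenakis’s stochastic approach can be caricatured in a few lines: rather than choosing each note, the composer chooses probability distributions and lets sampling fill in the events. The distributions below are illustrative assumptions only – a Gaussian standing in for the Poisson densities Xenakis actually used, uniform pitches, exponential durations – not a reconstruction of Achorripsis.

```python
import random

def stochastic_cell(mean_events, seed=None):
    """One 'cell' of a stochastic texture: density, pitch, and duration
    are all drawn from probability distributions rather than chosen by hand."""
    rng = random.Random(seed)
    # Density: how many sound events occur in this cell
    # (Gaussian approximation of a Poisson count; illustrative only).
    n_events = max(0, round(rng.gauss(mean_events, mean_events ** 0.5)))
    events = []
    for _ in range(n_events):
        pitch = rng.randint(36, 84)        # uniform over ~4 octaves (MIDI)
        duration = rng.expovariate(2.0)    # short durations more likely
        events.append((pitch, round(duration, 3)))
    return events

# Each call with a different seed yields a different, but statistically
# similar, cell -- the 'fixed score vs. statistical design' idea in code.
print(stochastic_cell(6, seed=1))
```

The score is fixed once sampled, yet its character comes from the distributions – exactly the sense in which Achorripsis is composed statistically.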

1980s–1990s: Brian Eno and the Rise of Generative Music

Brian Eno’s Ambient 1: Music for Airports was released in February 1979 (copyright sometimes listed as 1978), marking the birth of ambient music as a defined genre – music designed to be ‘as ignorable as it is interesting.’ Using tape loops, phasing, and minimalist piano and vocal textures, Eno crafted an atmospheric soundscape intended to calm airport passengers, transforming music into an environmental utility rather than a performance. The album’s slowly shifting loops created a sense of timelessness and space, influencing everything from AI-generated ambient tools to modern background playlists. Eno’s radical idea – that music could be functional, not just expressive – laid the groundwork for today’s algorithmic compositions, where machines now generate endless, evolving soundscapes in the same spirit.

Moving forward to 1995–1996, the first commercial generative music software arrived (Koan Pro was released in 1995; Eno’s Generative Music 1, made with SSEYO Koan software, appeared in 1996). The software allows users to set rules and generate endless, non-repeating variations. Koan Pro’s music philosophy – generative, rule-based, ever-changing – prefigures modern AI apps such as Boomy’s chill presets. Whilst Eno’s analogue loops became digital algorithms, the foundational concept remains: music that evolves organically through systems rather than fixed composition.
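The core generative trick behind both Eno’s tape pieces and Koan-style software is simple: loops of different lengths drift in and out of phase, so the combined texture takes far longer to repeat than any single part. A minimal sketch, with placeholder pitches rather than anything from the actual works:

```python
from math import lcm

# Eno-style phasing loops: three voices of different (mutually prime)
# lengths cycle independently, so the combined pattern repeats only
# after their least common multiple of ticks.
loops = {
    "voice_1": ["F", None, None, "Ab", None],               # length 5
    "voice_2": [None, "C", None, None, None, "Db", None],   # length 7
    "voice_3": ["Eb", None, None],                          # length 3
}

def notes_at(tick):
    """Which pitches sound at a given tick of the combined system."""
    return [loop[tick % len(loop)] for loop in loops.values()
            if loop[tick % len(loop)] is not None]

period = lcm(*(len(loop) for loop in loops.values()))
print(f"combined pattern repeats every {period} ticks")  # 105
```

Three short loops already yield a 105-tick super-pattern; with long loops of real audio, the repetition horizon stretches toward ‘endless’ – which is all Music for Airports and Koan needed.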

2000s–2020s: AI Enters the Orchestra

AIVA (Artificial Intelligence Virtual Artist), created in 2016, became the first AI to be officially recognised as a composer by SACEM – a milestone publicly reported in 2017. Its 2018 symphonic work, Among the Stars (Symphonic Fantasy in A Minor, Op. 31), premiered by the CMG Orchestra Hollywood under conductor John Beal, arguably shows that AI systems can produce orchestral music with convincing emotional expressiveness.

AIVA composes music using deep learning and reinforcement learning, trained on thousands of classical scores, including works by Bach, Mozart, Beethoven, and Tchaikovsky. Rule-based constraints help its compositions follow classical structures such as sonata form. Although AIVA’s orchestral works demonstrate that AI can attempt to emulate human emotion, they also raise ethical questions about whether AIVA is merely a tool or a composer.

Taryn Southern’s I AM AI, released in 2018, marked a groundbreaking moment as the first pop album by a solo artist co-created with artificial intelligence. The album blends Southern’s songwriting and vocals with compositions generated by tools including Amper Music, IBM Watson Beat, Google’s Magenta, and AIVA. Rather than hiding the AI’s role, Southern embraced it as a collaborative partner, using algorithms to craft melodies, harmonies, and even lyrical ideas.

The album’s futuristic pop sound proved that AI could thrive beyond classical or experimental music, entering the mainstream pop landscape. Southern’s work not only demonstrated the creative potential of human-AI collaboration but also sparked conversations about authorship, emotion, and the role of technology in art – questions that continue to define the evolving relationship between musicians and machines.

2020s: Democratisation, Lawsuits, and the Future of AI Music

The rise of AI music platforms like Suno, Udio, and Boomy has democratised music creation, allowing anyone to compose professional-quality tracks with a few clicks. But this accessibility has sparked legal battles, with major labels (UMG, Sony, Warner) suing AI platforms over unauthorised use of copyrighted training data.

As the industry grapples with authorship, ownership, and fairness, the future of AI music hinges on three key developments:

  • Licensed AI tools (where artists opt into training datasets for royalties)
  • New genres born from human-AI collaboration
  • A potential backlash—with some artists rejecting AI in favour of handmade music

The question is no longer whether AI can compose, but how artists share the creative – and financial – rewards.

Epilogue: The Human Need to Explore

The story of AI music isn’t really about machines replacing humans. It’s about something far older and deeper: our need to explore. From Russolo’s noise machines to AIVA’s symphonies, every leap forward in music technology has been driven by the same human impulse – to push boundaries, break rules, and discover what lies beyond the familiar.

Creativity has never been about the tools we use. It’s about the questions we ask. What if we turn noise into music? What if we let chance guide the melody? What if a machine could dream up sounds we’ve never heard? The piano didn’t replace the harpsichord; it expanded what was possible. The synthesiser didn’t kill the orchestra; it opened new worlds of sound. AI won’t replace composers – it will challenge us to redefine what creativity means.

As AI develops its own skills and ideas – regardless of human influence – will it supersede us? The answer lies not in technology, but in what it means to be human. Machines can imitate, generate, and even surprise us, but they don’t yearn. They don’t struggle. They don’t rebel against their own limitations. Creativity isn’t just about producing something new; it’s about the desire to explore, the courage to fail, and the joy of discovery. AI may one day compose music that moves us in ways we can’t yet imagine. But it will never need to create. It won’t wake up in the middle of the night with a melody in its head, or spend years searching for a sound that doesn’t yet exist. That’s the one thing machines can’t replicate: the human hunger to go beyond. AI will challenge us – to listen deeper, think bigger, and keep exploring. Because in the end, that’s what creativity has always been about: not the tools we use, but the boundaries we break.

Bibliography

1. AIVA Technologies. AIVA: Artificial Intelligence Virtual Artist. Official website and press materials, 2016–2025.  https://www.aiva.ai

2. Brian Eno, Ambient 1: Music for Airports, EG Records, 1979.

3. Brian Eno & SSEYO, Generative Music 1 with SSEYO Koan Software, SSEYO, 1996.

4. Hiller, Lejaren, and Leonard Isaacson, Illiac Suite (String Quartet No. 4), University of Illinois, 1957.

5. Mathews, Max V. The Digital Computer as a Musical Instrument. Science, vol. 142, no. 3592, 1963, pp. 553–556.

6. Stockhausen, Karlheinz, Gesang der Jünglinge, Wergo, 1956.

7. Southern, Taryn. I AM AI. Self-released, 2018.

8. Varèse, Edgard, Poème électronique, Philips Pavilion, Brussels World’s Fair, 1958.

9. Hiller, Lejaren. Experimental Music: Composition with an Electronic Computer.  McGraw-Hill, 1959.

10. Russolo, Luigi. L’arte dei rumori (The Art of Noises). 1913. Translated by Patrick Camiller. Arcana, 2004.

11. Busoni, Ferruccio. Sketch for a New Aesthetic of Music. 1907. Translated by Theodore Baker. G. Schirmer, 1911.

12. Xenakis, Iannis. Formalised Music: Thought and Mathematics in Composition. Harmonie Park Press, 1971.

13. Penny, Simon. Computational Culture and the Emergence of AI Art. In The Art of Artificial Intelligence, edited by T. Metzinger. MIT Press, 2020.

14. Morrison, Jeff, and David C. Stone. Algorithmic Composition: Computational Thinking in Music. Communications of the ACM, 

15. Strachey, Christopher. Love Song 1951: The First Computer-Generated Music.

16. Zinovieff, Peter. “MUSYS and Early Digital Synthesis.” Journal of the Audio Engineering Society, vol. 18, no. 4, 1970, pp. 314–321.

17. Tenney, James. Digital Synthesis and Algorithmic Composition at Bell Labs.

18. Isaacson, Walter. The First Computer Music: Christopher Strachey and Alan Turing.

19. Kapur, Ajay. The Evolution of Generative AI for Music. LinkedIn Pulse, 23 Apr. 2024. https://www.linkedin.com/pulse/evolution-generative-ai-music-kelli-richards-9wfic

20. McCartney, Paul. How AI Helped Complete the Last Beatles Song. The Guardian, 2 Nov. 2023.

21. Pasquini, Giuseppe. AIVA: The First AI Composer Recognised by SACEM. Music & AI Review, 15 Mar. 2021.

22. The Verge. How AI-Generated Music Is Changing the Way Hits Are Made, 31 Aug. 2018 https://www.theverge.com/2018/8/31/17777008/artificial-intelligence-taryn-southern-amper-music.  

23. Time Magazine. How AI Is Transforming Music, 3 Dec. 2023. https://time.com/6340294/ai-transform-music-2023/ 

24. Computer Music. 2 Nov. 2001. https://en.wikipedia.org/wiki/Computer_music

25. Music and Artificial Intelligence. 21 Feb. 2009. https://en.wikipedia.org/wiki/Music_and_artificial_intelligence  

26. Telharmonium. https://en.wikipedia.org/wiki/Telharmonium

27. Ballet Mécanique (Antheil). https://en.wikipedia.org/wiki/Ballet_m%C3%A9canique

28. Delia Derbyshire. https://en.wikipedia.org/wiki/Delia_Derbyshire

29. Ambient 1: Music for Airports. https://en.wikipedia.org/wiki/Ambient_1:_Music_for_Airports

30. Poème électronique. https://en.wikipedia.org/wiki/Po%C3%A8me_%C3%A9lectronique

31. AIVA (composer). https://en.wikipedia.org/wiki/AIVA

32. Koan (program). https://en.wikipedia.org/wiki/Koan_(program)

33. Generative music. https://en.wikipedia.org/wiki/Generative_music

34. Doctor Who theme music. https://en.wikipedia.org/wiki/Doctor_Who_theme_music

35. Manchester University. First Digital Music Made in Manchester, 4 Oct. 2016.  https://www.manchester.ac.uk/about/news/first-digital-music-made-in-manchester

36. SSEYO. SSEYO Koan Pro. Archive page. https://intermorphic.com/archive/sseyo/koan/pro/  

37. Vozart.ai. History of AI Music: From Early Experiments to Today, 4 Sept. 2025.  

38. MusicRadar. A Short History of AI in Music Production, 13 June 2022.  https://www.musicradar.com/news/the-history-of-ai-in-music-production

39. Perfect Circuit. Zeroes and Ones: Tracing The Evolution of Computer Music Part One, 23 June 2025. https://www.perfectcircuit.com/signal/computer-music-history-pt1

40. American Symphony Orchestra. Ballet mécanique, Buried Alive, Ulysses, 5 Oct. 2010. https://americansymphony.org/concert-notes/ballet-mecanique-buried-alive-ulysses/

41. BBC Radiophonic Workshop. Doctor Who Theme (Original 1963 Version), Delia Derbyshire’s realisation. YouTube, 7 Nov. 2019.

42. Edgard Varèse. Poème électronique (1958). YouTube, 13 Jan. 2017.

43. RTHK Radio 3. The Evolution of AI Music (special broadcast), 2025. SoundCloud.

44. AIVA. Symphonic Fantasy in A Minor, Op. 31: Among the Stars. Apple Music, 4 Mar. 2018.

45. Universal Music Group, Sony Music Entertainment, Warner Music Group. Lawsuit Against AI Music Platforms Over Copyright Infringement. Press release, 2024–2025.  https://www.bbc.com/news/articles/ckrrr8yelzvo

46. SACEM. AIVA Recognised as First AI Composer. https://aibusiness.com/verticals/aiva-is-the-first-ai-to-officially-be-recognised-as-a-composer


Discover more from Paul Archibald Sounding Out
