You deserve a better reply and I will write you one later, but…
Arctopus fucking rules but this take is hot as fuck my dude.
Bro, the song Skullgrid was generated by a Java program that Colin and a programmer friend of his, possibly Mike, were working on. In 200X. You’ve been rocking out to AI generated music before you even realised. Brian Eno also had some music that was meant to be generated by an automated system, and afaik, so did John Cage.
the sounds were created in Wire, a program built on JSyn, a Java-based synthesis engine written by Phil Burk. I created the scores in JMSL Score (a Java-based scoring program by Nick Didkovsky). This allowed me to access sounds I created from scratch, but organize them with a traditional musical staff.
“Fore” became the Warr guitar part for the song “Skullgrid” from BTA. The first 41 seconds:
beholdthearctopus.bandcamp.com/album/skullgrid
Now, not to be an asshole, but I remember 1) talking to Colin before a Gorguts show in 2014-ish and 2) hearing it confirmed in an interview.
Oh also, the documentation for JMSL (Java Music Specification Language) states its purpose as:
It is suited for algorithmic composition, live performance, and intelligent instrument design. At its heart is a polymorphic hierarchical scheduler, which means Java objects that are different from one another can be scheduled together and interact with each other in conceptually clean and powerful ways.
JMSL’s open-ended nature will reward your programming efforts and your creativity by offering you a rich toolkit for making music.
Just to beat on this idea a bit more, with JMSL you can make music based on experimental music theory, statistical processes, any algorithms you can implement… you can notate that music using JMSL Score, or leave it in the abstract. You can use Java’s networking tools to grab data off the Internet and sonify it. You can _________ (fill in the blank and start slingin’ code).
If you want to open a window with standard music staff notation and start entering notes, JMSL Score will let you do that as well. Straight out of the box. Later you can start writing your own custom note transformations, or generating musical material automatically, which JMSL Score will notate for you. Of course all music generated for and within JMSL Score can be mouse-edited, and transformed again!
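To give a toy illustration of the kind of algorithmic generation described above (this is plain Java with made-up names, not the actual JMSL API): a seeded random walk over a scale produces a "generated" melody that is nevertheless completely determined by the rules and seed the composer chose.

```java
import java.util.Random;

// Hypothetical sketch of algorithmic composition in plain Java
// (NOT the real JMSL API): a seeded random walk over a whole-tone
// scale. The composer fully determines the output via the rule
// and the seed, even though the process is "generative".
public class AlgoMelody {
    // MIDI note numbers of a whole-tone scale starting at middle C.
    static final int[] SCALE = {60, 62, 64, 66, 68, 70};

    // Walk up or down the scale one step at a time; the fixed seed
    // makes the "random" walk reproducible, i.e. the rules, not the
    // machine, decide the music.
    public static int[] generate(long seed, int length) {
        Random rng = new Random(seed);
        int[] pitches = new int[length];
        int index = 0;
        for (int i = 0; i < length; i++) {
            pitches[i] = SCALE[index];
            int step = rng.nextBoolean() ? 1 : -1;
            index = Math.max(0, Math.min(SCALE.length - 1, index + step));
        }
        return pitches;
    }

    public static void main(String[] args) {
        for (int p : generate(2007L, 16)) {
            System.out.print(p + " ");
        }
        System.out.println();
    }
}
```

Swap in any rule you like (statistical processes, experimental theory, sonified data) and the same point holds: the algorithm is an instrument the composer built, not a black box.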
I see where you’re coming from and will concede JMSL’s ability to algorithmically create music.
I still maintain that an artist using that or similar software (Guitar Pro, etc.) to translate their own ideas into a more manipulable form for composing/practicing is fundamentally different from prompting a genAI that has been trained on ideas stolen from actual artists.
That said, music written via formula to cater to the lowest common denominator and generate the greatest possible monetary return is certainly closer to how genAI is/will be used, but the human element involved in writing, recording, and performing that music still distinguishes it from the sort of slop showing up on Spotify. AI generated works are derivative beyond even the blandest pop. At best, the only human involved is the prompt writer; the lyrics, melody, and the recording itself are statistical approximations, entirely devoid of human creativity, and that is an utter tragedy.
I’d much rather the record companies be replaced with systems that don’t alienate artists from their labor and creativity. Embracing slop is playing into the execs’ hands and removes all artistic merit from the process.
Quick edit:
Generated slop training on generated slop is already a problem and will grow exponentially as more platforms are flooded with it. That will only alienate and divorce it even further from reality. It will only get worse.
That is by no means AI generated, and certainly not by today’s understanding of the term. If I write a score and design an instrument (or sound, etc), that is still a creative process. Brian Eno literally created ambient music with algorithms like that, but it is still his creative work.
My point is just that computer-generated ≠ AI-generated in general discourse.
What’s the difference? Why isn’t it seen as a collaboration between the person writing the prompt (using a scripting language) and the programmer/designer of the generation software and the curator of the dataset?
I’m not sure I entirely follow you (I’m only half awake, sorry), but programmed music is only “generated by computers” insofar as the computer is generating 44100 samples every second based on a set of mathematical rules the composer made. AI music is generated based on huge datasets and probability; the composer has little to no specific control.
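To make the 44100-samples-per-second point concrete, here's a minimal sketch (plain Java, hypothetical names): every single sample is a pure function of the formula the composer chose, so nothing about the output is probabilistic.

```java
// Toy synthesis sketch: one second of a 440 Hz sine wave at a
// 44100 Hz sample rate. Each of the 44100 samples follows
// directly from the composer's formula; the same rule always
// yields the exact same audio.
public class SineRule {
    static final int SAMPLE_RATE = 44100;

    public static double[] render(double freqHz, double seconds) {
        int n = (int) (SAMPLE_RATE * seconds);
        double[] samples = new double[n];
        for (int i = 0; i < n; i++) {
            samples[i] = Math.sin(2 * Math.PI * freqHz * i / SAMPLE_RATE);
        }
        return samples;
    }

    public static void main(String[] args) {
        double[] audio = render(440.0, 1.0);
        System.out.println(audio.length + " samples");
    }
}
```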
If I program an instrument/synth in SuperCollider or Pure Data or some hardware synth, and then sample the instrument/synth or create and sequence a melody for it on my MIDI (piano) keyboard or Schism Tracker, etc., I have complete and absolute control over everything, down to the very waveform. In that case I am truly and purely the creator of the piece.
If I type in a prompt, I am just playing a probability lottery. I have done jack shit more than describing a piece of music.
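That "complete control" claim can be sketched too (plain Java, made-up names; a toy stand-in for a tracker or MIDI sequencer, not any real one): the composer specifies every pitch and duration, and the rendered waveform follows deterministically from those choices.

```java
// Toy sequencer sketch: render a composer-specified melody as
// concatenated sine tones. Every pitch and duration is an explicit
// choice; the waveform is a pure function of those choices, down
// to the individual sample.
public class ToySequencer {
    static final int SAMPLE_RATE = 44100;

    // MIDI note number -> frequency in Hz (equal temperament, A4 = 440).
    static double midiToHz(int note) {
        return 440.0 * Math.pow(2.0, (note - 69) / 12.0);
    }

    public static double[] render(int[] notes, double secondsPerNote) {
        int perNote = (int) (SAMPLE_RATE * secondsPerNote);
        double[] out = new double[perNote * notes.length];
        for (int n = 0; n < notes.length; n++) {
            double hz = midiToHz(notes[n]);
            for (int i = 0; i < perNote; i++) {
                out[n * perNote + i] =
                    Math.sin(2 * Math.PI * hz * i / SAMPLE_RATE);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] melody = {60, 64, 67, 72}; // C major arpeggio, the composer's call
        double[] audio = render(melody, 0.25);
        System.out.println(audio.length + " samples");
    }
}
```

Compare that with a prompt: there is no line of this sketch a prompt writer controls; the model's sampling procedure fills in everything.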
I might have misunderstood you though. For now, I’m going to bed. Good night!
Edit: https://colinmarston.bandcamp.com/album/computer-music-2003-2004
He created the score. If you equate using scoring software, MIDI, and synths to creating slop with genAI, we’re done here, my dude.
Oh also, the JMSL (Java Music Specification Language) documentation quoted above is in the download at:
https://www.algomusic.com/jmsl/download.html
(JMSL_v2_20250209\JMSL_v2_20250209\html)