A composer I know is coming to terms with the inevitable appearance of online AIs that can compose music based on general parameters like mood, genre and length, somewhat similar to AI image generators that can create an image based on a prompt (someone I know tried "Willem Dafoe as Ronald McDonald" with ... intriguing ... results). I haven't looked at any of these in depth, but from what I have seen it looks like they can produce arbitrary amounts of passable music with very little effort, and it's natural to wonder about the implications of that.
A common reaction is that this will sooner or later put actual human composers out of business, and my natural reaction to that is "probably not." Then I started thinking about my reasons for saying that, and the picture got a bit more interesting. Let me start with the hot takes and then go on to the analysis.
- This type of AI is generally built by training a neural network against a large corpus of existing music. Neural nets are now pretty good at extracting general features and patterns and extrapolating from them, which is why the AI-generated music sounds a lot like stuff you've already heard. That's good because the results sound like "real music", but it's also a limitation.
- At least in its present form, using an AI still requires human intervention. In theory, you could just set some parameters and go with whatever comes out, but if you want to provide, say, the soundtrack for a movie or video game, you'll need to actually listen to what's produced and decide which music goes well with which parts, what sounds good after what, and so forth. In other words, you'll still need to do some curation.
- The first argument is basically that the automaton can only do what it was constructed or taught to do by its human creators, and therefore it cannot surpass them. But just as a human-built machine can lift more than a human, a human-built AI can do things that no human can. Chess players have known this for decades now (and I'm pretty sure chess wasn't the first such case).
- The second argument assumes that there's something about human curation that can't be emulated by computers (though I was careful to say "at least in its present form"). The oldest form of this argument is that a human has a soul, or a "human spark of creativity" or something similar, while a machine doesn't, so there will always be some need for humans in the system.
- Computer chess did not put chess masters out of business. The current human world champion would lose badly to the best computer chess player, which has been true for decades and will presumably remain true from here on out, but people still like to play chess and to watch the best human players play (watching computers play can also be fun). People will continue to like to make music and to hear music by good composers and players.
- Current human chess players spend a lot of time practicing with computers, working out variations and picking up new techniques. I expect similar things will happen with music: at least some composers will get ideas from computer-generated music, or train models with music of their choosing and do creative things with the results, or do all sorts of other experiments.
- Drum machines did not put drummers out of business. People can now produce drum beats without hiring a drummer, including beats that no human drummer could play and beats that sound like humans playing with "feel" on real instruments. The effect of that has been more to expand the universe of people who can make music with drum beats than to reduce the need for drummers (I'm not saying that drummers haven't lost gigs, but there is still a whole lot of live performance going on with a drummer keeping the beat).
- Algorithms have been a part of composition for quite a while now. Again, this goes back to before computers, including common-practice techniques like inversion, augmentation, and diminution, as well as 20th-century serialism. An aleatoric composition is arguably an algorithm, and electronic music has made use of sequencers since its early days. From this point of view, model-generated music is just one more tool in the toolbox (a small sketch of those common-practice transformations appears below).
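To make the "one more tool" point concrete, here is a minimal sketch of what inversion, augmentation, and diminution look like when written down as plain transformations on a list of notes. The representation (MIDI pitch plus duration in beats) and the function names are my own illustrative choices, not any particular library's API or any composer's actual workflow.

```python
# A melody as a list of (midi_pitch, duration_in_beats) tuples.
# This representation is an assumption made for illustration only.
Melody = list[tuple[int, float]]

def invert(melody: Melody, axis: int) -> Melody:
    """Melodic inversion: reflect each pitch around a fixed axis pitch."""
    return [(2 * axis - pitch, dur) for pitch, dur in melody]

def augment(melody: Melody, factor: float = 2.0) -> Melody:
    """Augmentation: stretch every duration by a factor."""
    return [(pitch, dur * factor) for pitch, dur in melody]

def diminish(melody: Melody, factor: float = 2.0) -> Melody:
    """Diminution: shrink every duration by a factor."""
    return [(pitch, dur / factor) for pitch, dur in melody]

if __name__ == "__main__":
    # Opening of "Frere Jacques" as MIDI pitches (C4 = 60), quarter-note beats.
    theme = [(60, 1.0), (62, 1.0), (64, 1.0), (60, 1.0)]
    print(invert(theme, axis=60))  # pitches reflected: 60, 58, 56, 60
    print(augment(theme))          # same pitches, durations doubled
    print(diminish(theme))         # same pitches, durations halved
```

The point of the sketch is only that these time-honored devices are mechanical enough to fit in a few lines of code; what a composer does with the transformed material is where the actual composing happens.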