Wednesday, December 28, 2022

Pushing back on AI (and pushing back on that)

A composer I know is coming to terms with the inevitable appearance of online AIs that can compose music based on general parameters like mood, genre and length, somewhat similar to AI image generators that can create an image based on a prompt (someone I know tried "Willem Dafoe as Ronald McDonald" with ... intriguing ... results).  I haven't looked at any of these in depth, but from what I have seen it looks like they can produce arbitrary amounts of passable music with very little effort, and it's natural to wonder about the implications of that.

A common reaction is that this will sooner or later put actual human composers out of business, and my natural reaction to that is "probably not."  Then I started thinking about my reasons for saying that, and the picture got a bit more interesting.  Let me start with the hot takes, and then go on to the analysis.

  • This type of AI is generally built by training a neural network against a large corpus of existing music.  Neural nets are now pretty good at extracting general features and patterns and extrapolating from them, which is why the AI-generated music sounds a lot like stuff you've already heard.  That's good because the results sound like "real music", but it's also a limitation.
  • At least in its present form, using an AI still requires human intervention.  In theory, you could just set some parameters and go with whatever comes out, but if you want to provide, say, the soundtrack for a movie or video game, you'll need to actually listen to what's produced and decide what music goes well with what parts, and what sounds good after what, and so forth.  In other words, you'll still need to do some curation.

Along with this, I have a general opinion about the progress of AI as a whole: A few years back there was a breakthrough, as hardware got fast enough (thanks in part to special-purpose tensor-smashing chips) and new modeling techniques were developed, allowing the overall approach of neural network-based machine learning (ML) to solve interesting problems that had so far resisted solution.  We're now in the phase of working out the possibilities, with new applications turning up left and right.

One way to look at it is that there were a bunch of problem spaces out there that computers weren't well suited for before but are a good match for the new techniques, and we're in the process of identifying those.  Because there has been so much progress in applying the new ML, and because these models are based on the way actual brains work, it's tempting to think that they can handle anything that the human brain can handle, and/or that we've created "general intelligence", but that's not necessarily the case.

My strong hunch is that before too long the limitations will become clear and the flood of new applications will slow.  There may or may not be a new round of "failed promise of AI" proclamations and amnesia about how much progress has been made.  Researchers will keep working away, as they always have, and at some point there will be another breakthrough and another burst of progress.  Lather, rinse, repeat.


That's all well and good, but honestly those bullet-pointed arguments above aren't that great, and the more general argument doesn't even try to say where the limits are.

The bullet points amount to two arguments that go back to the beginnings of AI, if not before, to the first time someone built an automaton that looked like it was doing something human, and they have a long history of looking compelling in the short run but failing in the long run.
  • The first argument is basically that the automaton can only do what it was constructed or taught to do by its human creators, and therefore it cannot surpass them.  But just as a human-built machine can lift more than a human, a human-built AI can do things that no human can.  Chess players have known this for decades now (and I'm pretty sure chess wasn't the first such case).
  • The second argument assumes that there's something about human curation that can't be emulated by computers (though I was careful to say "at least in its present form").  The oldest form of this argument is that a human has a soul, or a "human spark of creativity" or something similar, while a machine doesn't, so there will always be some need for humans in the system.
The problem with that one is that when you try to pin down that human spark, it basically amounts to "whatever we can do that the machines can't ... yet", and over and over again the machines have eventually turned out to be able to do things they supposedly couldn't.  Chess players used to believe that computers could only play "tactical chess" and couldn't play "positional chess", until Deep Blue demonstrated that if you can calculate deeply enough, there isn't any real difference between the two.

As much as I would like to say that computers will never be able to compose music as well as humans, it's pretty certain that they eventually will, including composing pieces of sublime emotional intensity and inventing new paradigms of composition.  I don't expect that to happen very soon -- more likely there will be an extended period of computers cranking out reasonable facsimiles of popular genres -- but I do expect it to happen.


Where does that leave the composer?  I think a couple of points from the chess world are worth considering:
  • Computer chess did not put chess masters out of business.  The current human world champion would lose badly to the best computer chess player, which has been the case for decades, and we can expect it to be the case from here on out, but people still like to play chess and to watch the best human players play (watching computers play can also be fun).  People will continue to like to make music and to hear music by good composers and players.
  • Current human chess players spend a lot of time practicing with computers, working out variations and picking up new techniques.  I expect similar things will happen with music: at least some composers will get ideas from computer-generated music, or train models with music of their choosing and do creative things with the results, or do all sorts of other experiments.

There is also some relevant history from the music world:
  • Drum machines did not put drummers out of business.  People can now produce drum beats without hiring a drummer, including beats that no human drummer could play, and beats that sound like humans playing with "feel" on real instruments, but the effect of that has been more to expand the universe of people who can make music with drum beats than to reduce the need for drummers (I'm not saying that drummers haven't lost gigs, but there is still a whole lot of live performance going on with a drummer keeping the beat).
  • Algorithms have been a part of composition for quite a while now.  Again, this goes back to before computers, including common-practice techniques like inversion, augmentation and diminution and 20th-century serialism.  An aleatoric composition arguably is an algorithm, and electronic music has made use of sequencers since early days.  From this point of view, model-generated music is just one more tool in the toolbox (see the sketch below).
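
To make that last point a bit more concrete, here's a minimal sketch of a few of those common-practice transformations.  It's in Python, with pitches as MIDI-style integers and notes as (pitch, duration) pairs; the function names and the little motif are my own illustration, not taken from any particular library or piece.

```python
# A minimal sketch of a few classic "algorithmic" transformations,
# using MIDI-style integer pitches and (pitch, duration) pairs.
# The motif below is purely illustrative.

def invert(melody, axis):
    """Mirror each pitch around a fixed axis pitch (melodic inversion)."""
    return [(2 * axis - pitch, dur) for pitch, dur in melody]

def augment(melody, factor=2):
    """Stretch every duration by a factor (augmentation);
    a factor below 1 gives diminution."""
    return [(pitch, dur * factor) for pitch, dur in melody]

def retrograde(melody):
    """Play the melody backwards."""
    return list(reversed(melody))

# A short motif: C, D, E, G as (MIDI pitch, duration in beats).
motif = [(60, 1), (62, 1), (64, 2), (67, 1)]

print(invert(motif, axis=60))    # mirrored around middle C
print(augment(motif, factor=2))  # same pitches, twice as slow
print(retrograde(motif))         # same notes in reverse order
```

The point is just that rules like these have long been handed off to machines (or worked out by hand); a model-based generator is an extension of the same idea.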

Humanity has had a complicated relationship with the machines it builds.  On the one hand, people generally build machines to enable them to do something they couldn't, or relieve them of burdensome tasks.  Computers are no different.  On the other hand, people have always been cautious about the potential for machines to disrupt their way of life, or their livelihood (John Henry comes to mind).  Both attitudes make sense.  Fixating on one at the expense of the other is generally a mistake.

Personally, having watched AI develop for decades now, I don't see any significant change in that dynamic.  We don't seem particularly closer to the Singularity than we ever were (and I argue in that post that's in part because the Singularity isn't really a well-defined concept).  But then, given the way these things are believed to work, we may not know different until it's too late.

If it does happen, maybe someone, or something, will compose an epic piece to mark the event.

2 comments:

  1. We don't have any difficulty knowing the difference between what a drum machine does and what Max Roach does (reacting in real time to what the rest of the group is doing).  But a drum machine is not AI.  The question is how close AI will have to get before we do have difficulty knowing the difference.

  2. If we're talking Max Roach or Elvin Jones, say, that'll be a while yet.
