Guest post by Stephen Carlisle of NOVA Southeastern University
“I have songwriting credits…even though I don’t know how to write a song.”
The speaker of this statement is not a musician and has no musical training. His involvement with “creating” the songs in question? Virtually none. He writes computer code. He helped create an app called Endel, which describes itself as “a cross-platform audio ecosystem.” Endel is part of the larger, currently heated debate over works of art being “created” by computers running programs that employ “artificially intelligent” modes of machine learning, or AI for short.
As reported by The Verge:
“Dmitry Evgrafov, Endel’s composer and head of sound design, says all 600 tracks were made ‘with a click of a button.’ There was minimal human involvement outside of chopping up the audio and mastering it for streaming. Endel even hired a third-party company to write the track titles.”
What makes this notable is that Endel has a record deal with Warner Bros. Music.
“Five Endel albums have already been released, and 15 more are coming this year — all of which will be generated by code. In the future, Endel will be able to make infinite ambient tracks.”
But what makes this problematic is that there is serious doubt as to whether the output of Endel is capable of copyright protection at all.
First, there is the rule that in order to be protected by copyright, the work must have a human author. The authority for this is the U.S. Copyright Office’s Compendium of US Copyright Office Practices, Section 305:
“U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being. The copyright law only protects ‘the fruits of intellectual labor’ that ‘are founded in the creative powers of the mind.’ (citation omitted) Because copyright law is limited to ‘original intellectual conceptions of the author,’ the Office will refuse to register a claim if it determines that a human being did not create the work.”
But didn’t the Endel engineers create the software in question? And isn’t software copyrightable?
The answers, of course, are yes and yes. But just because computer software exists, and might be used as a creative tool, it does not follow that the software’s authors are now authors or co-authors of its output. Put more succinctly: is Microsoft an author or co-author of this blog post because it is being written in MS Word?
Then we move to the question of what the AI program was told to do. Were the instructions to the program “creative choices” that might entitle the authors of the software to claim authorship of the output? And further, how did this information get into the AI database?
“AI is essentially a pattern-recognition system. Feed it enough data, and it will find patterns within that information that it can use to make decisions.”
According to this article by the BBC:
“In 2017, one of DeepMind’s AI programmes beat the world’s number one player of Go, an ancient and highly complex Chinese board game, after apparently mastering creative new moves and innovative strategies within days. [Cognitive neuroscientist Romy Lorenz says] ‘Google would say that was creativity – new ways of finding solutions that it was not taught.’”
The difference is that Go is a game which, like chess, has a fixed set of rules. These rules do not change and cannot be altered. A computer might have a distinct advantage over a human player. Like Dr. Strange in “Avengers: Infinity War,” the computer is capable of analyzing thousands, perhaps hundreds of thousands, of future possibilities arising from a single move. Unlike a human, the computer is never tired, stressed or lacking focus.
Music does not have such rigid rules. It has some preferences, as some notes sound better over certain bass notes and chord progressions than others. Melodies within an octave and small jumps between notes are preferred because they are easier to sing. But there are no rigid rules, as with a game like chess or Go.
Plus, interesting things happen when you break the “rules.”
Both 10cc’s “The Things We Do for Love” and Elton John’s “Goodbye Yellow Brick Road” have verses which are in different musical keys than the chorus.
Or take Queen’s “Bohemian Rhapsody.” Each musical sequence in the song is different from the one immediately preceding it, and no two sequences of the song are ever repeated.
In order for an AI-enabled machine to “create” art, it first has to be fed the necessary data. So the AI program is not going to be wise to the possibility of changing musical keys within one song unless it is fed an example of something like that happening.
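To see why a generator can only echo its input, consider a toy sketch. This is not Endel’s actual system (which is not public); it is a deliberately simple first-order Markov model, the kind often used to illustrate statistical music generation. The note names and “melodies” below are invented for illustration. The point it demonstrates: every note, and every note-to-note step, in the output was first copied in from the training data.

```python
import random

def train(melodies):
    """Build a first-order Markov model: for each note, record which
    notes followed it anywhere in the training corpus."""
    transitions = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Produce a new 'melody' by walking the learned transitions."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: the model knows nothing beyond this note
            break
        melody.append(rng.choice(options))
    return melody

# Train only on (made-up) melodies that stay in C major.
corpus = [
    ["C", "D", "E", "G", "E", "C"],
    ["E", "G", "A", "G", "E", "D", "C"],
]
model = train(corpus)
new_tune = generate(model, "C", 8)
# Every note in new_tune appeared in the corpus. A note the corpus
# never contained (say, F#) -- or a key change the corpus never
# demonstrated -- can never appear in the output.
```

However many songs you feed such a model, the same limitation holds: no example of a mid-song key change in, no mid-song key change out.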
And the necessary part of this is making copies of other people’s work.
In order to paint a portrait in the “style” of classical painters, it was necessary to feed the computer 15,000 portraits painted between the 14th and 20th Centuries. The BBC called the result “a bit rubbish.” The painting is a shade blurry and would never be confused with a Rembrandt. But follow the link in the previous endnote and judge for yourself.
Then, according to Entrepreneur:
“Just this past [April] Google’s Magenta and PAIR teams created a Google Doodle which celebrated Johann Sebastian Bach’s 333rd birthday. The Doodle, which lets users create their own music by using machine learning to harmonize melodies, analyzed 306 of Bach’s original chorale harmonizations to create a tune with the user’s notes.”
Then there’s this story from NBC News about LA composer Lucas Cantor’s attempt to “finish” Franz Schubert’s famous Symphony Number 8, known colloquially as the “Unfinished Symphony,” using AI.
“In the end, Cantor and engineers from Huawei fed as much of Schubert’s catalog as they could find — roughly 2,000 pieces of piano music — into the software inside the company’s new Mate 20 phone. The goal was to teach the AI to think like Schubert and to compose new passages, including what Cantor calls the ‘heart and soul of any piece of music:’ the melody.”
So what do we have at the outset? Copying. Lots and lots of copying. Some 2,000 pieces of piano music. Some 15,000 portraits.
None of this is important when the object is to come close to what Schubert or Rembrandt might have done in their day. All of their works entered the public domain many, many years ago. But what happens when the intent is not to sound like Schubert, but Ed Sheeran? Or Stevie Wonder? Or Paul McCartney? Then we have a different problem.
Recall that the very first requirement for a work to be protected by copyright is that the work be “original.” According to the Courts, “originality requires ‘a work independently created by its author, one not copied from pre-existing works…’”
So if the AI was instructed to compose songs like Paul McCartney, we face three problems:
- There is doubt whether a computer can qualify as an “author”
- The AI computer would have to be fed copies of Paul McCartney songs since it would not discover his “style” by accident
- The resulting output would be necessarily copied from the existing input
In essence, either there is no copyright, since a human being is not the author, or there is copyright infringement from violating the reproduction right by inputting complete copies into the computer and then using an AI blender to make a similar-sounding song. Take your pick.
Let’s take the hypothetical and expand it a bit, since it is doubtful that a programmer would limit the input to that of one composer, even if that composer is probably the most successful composer in the history of modern recorded music. Let’s say the programmers fed the computer 100,000 songs. And, not surprisingly, one of the songs popped out by the AI computer sounds an awful lot like an existing song.
As an attorney who might litigate this case, before I even get to an analysis of whether the existing song and the AI song are “substantially similar” to each other, I must prove “access.” Since a lot of songs sound similar to each other, I must prove that the AI computer actually had the opportunity to analyze the song in question.
In a deposition, I ask the programmer:
“Was the AI computer fed a copy of the song?”
“I don’t remember.”
“Is there a list of songs that the AI computer was fed?”
“No. We did not keep any records.”
So, now that it has come down to a question of copyright infringement, and paying potential damages, the incentives for hazy memories, sloppy record keeping, or even out-and-out cheating become more and more appealing. Might a programmer, under the duress of contemplating damages, remove a song from a master list and then delete it from the memory of the computer, so I cannot prove access? Possibly.
But the one thing I cannot do is cross-examine the computer. Because it is not a human being.
The other avenue of deflection is that the programmer does the usual tech liability avoidance dance and points to the computer: “It was the computer, not me, who did the infringing.”
But you are the one claiming copyright in the AI song. If you are the one claiming to be an “author” as a result of what the computer has done, shouldn’t you also be liable for when the computer infringes? After all, fair is fair. But tech companies are so used to avoiding liability for what they do through things like Section 512 of the Copyright Act and Section 230, that they continually act as if they are above the law.
Yet, in the distance I can hear the Electronic Frontier Foundation crowd braying that “all musicians copy from each other and this is no different.”
Sorry, but we do not just sit around and make copies of each other. This argument fails to distinguish between copying and inspiration, a distinction which is apparent to musicians, but not to lawyers, especially those lawyers with an anti-copyright agenda. This subject was thoroughly discussed in the blog post Copying Is Not Creativity.
I have written more than a few songs, and I honestly cannot tell you where they come from. They come from somewhere inside my head. In looking (hearing?) back over the years, I can hear inspiration, in that certain sections sound like what some musical groups “might have done,” but never what one actually did.
Usually a song was started through improvisation, either by myself or with a group in a jam session. Once I hit upon a melodic fragment that was pleasing or “catchy” I would toy with it, trying various possibilities until I felt the song was done.
But I also had the advantage of musical training. I knew what I was doing, and why I was doing it. For example, if I was trying to write a “pretty” song I would not start out with a melody note of “C” over a C# minor chord, because it would be dissonant and sound bad. However, if I was trying to write something that sounded ominous or eerie, I just might do that, for the exact same reason: I’m trying to express an emotion I’m feeling or I want the listener to feel.
Again, this is the difference between humans and computers. Music hits an emotional place that cannot be replicated by a computer extrapolating “if this, then that.” This is because the computer will only recognize the material that is fed to it. Without that input, would a computer EVER make the series of “rule-breaking” choices that lead to “Bohemian Rhapsody”?
With the Schubert project, this was the result:
“Once the AI suggested a series of new melodies, Cantor used his professional expertise to choose one. Then he elaborated on the software’s notes, adding instruments and harmonies to flesh out the AI’s contribution into a full movement.”
Now we have something that might qualify for copyright: a human being, making artistic choices of selection and adding new creative material. Not just “pushing a button.”
This is why my conclusion is that, without more, or some sort of human collaboration, AI-generated music should instantly enter the public domain. With AI alone, a human being is not involved in the creation, and the AI program simply copies what has gone before.
Now, for the final reason why AI music should be public domain:
Just how did the songs get into the computer?
Obviously they copied them in. But how? In the hypothetical of the 100,000 song database, did they actually pay for all those songs? Or did they just stream-rip them off of Spotify?
I think we all know the answer. Heck, Spotify didn’t even pay all the people whose songs they streamed. Is it right that an AI database be built without payment to the creators of the songs they are copying?
The standard response is that we are “standing in the way of innovation.” As a colleague of mine at a recent roundtable discussion on the topic succinctly put it: “it is not up to me to subsidize your business model by giving away my property for free.”
Which brings us to this unsettling question: Why do we need AI generated music? Is there now a world-wide shortage of music to be listened to?
Again, I think we know the answer.
As we watch Google, Spotify, Pandora et al press ahead with litigation designed to pay songwriters less and less, the motivating reason for AI music seems to be that computers will work cheap. At least cheaper than humans do. And it should be obvious that the intention of AI music is to directly compete with music created by humans, and the starting point is to make exact copies of what we have created.
This article from The Industry Observer lays it out fairly clearly:
“In contrast, there is a wide perception that streaming and digitization have not only driven down the value of music, but have also inextricably complicated the concept of a song’s ‘market value’ altogether…”
“For instance, Spotify could use creative AI to slash licensing costs and populate a hypothetically infinite landscape of mood and activity playlists (hello, ‘fake artists’)…”
“The worst-case scenario is that AI becomes sufficiently ‘independent’ creatively such that it can churn out hundreds of songs in a day, register the proper copyrights for those songs and then flood platforms like Spotify and SoundCloud, which would make the online music landscape even noisier and more unnavigable than it already is today. And then the answer to ‘how much would you pay for an AI-generated song?’ would probably be, well, nothing.” (emphasis original)
Which is why AI songs should be in the public domain.