Flavours of Creation
I am the weirdo in the room
You’re looking at a tonal deception. Your mind is reading a connection that isn’t there. The lines of the machine and the shape it makes in the frame are mimicked by the human model’s posture so that they appear to be mirroring one another. The attitude of the glasses suggests a toast, and the elegantly-graded colour of the image makes the anthracite of the mechanical arm harmonise with the model’s clothes. We see a relationship. What’s actually there is a woman holding a wine glass next to another wine glass in a mobile clamp.
It’s a very clever picture. It’s also horseshit, of a nicely educative kind.
Yes, we’re back with AI again, because a) it’s a fascinating conversation which takes us, as you’ll see, into all kinds of interesting aspects of humanity and b) this is the Coronation weekend AND the UK local election cycle, and if I get into either of those I will shout until my eyes bleed. So let us, instead, discuss eternal truths and hard questions.
First, a declaration of preconceptions and prejudgements: some time around 2010, when I was debating the Google Book Settlement as one of the few writers in the UK who was both aware of what it was and prepared to talk about it (I wasn’t very good at that, but I tried), I made what I thought was a fairly obvious statement:
The use of copyright books to train AI is either the creation of a derivative work, a new IP use for which we don’t yet have a name, or the deployment of a literary text as a database.
The point being that each of these three options means that using copyright work for this purpose without an explicit agreement is an infringement of that copyright, with all the usual consequences that entails, and in the opposite, happier direction: writers should get paid if their work is used to train AI.
At the time, saying this made me the only weirdo in the room. Lawyers frowned and explained patiently that the law didn’t say that, other writers asked if I thought that was really the point (which at least superficially it wasn’t) and engineers and coders shook their heads at my retrograde luddite perceptions. More than one person told me that information wants to be free.
The quote - Stewart Brand's, a banner of 90s and early 2000s digital activism - actually goes like this:
On the one hand you have—the point you’re making Woz—is that information sort of wants to be expensive because it is so valuable—the right information in the right place just changes your life. On the other hand, information almost wants to be free because the costs of getting it out is getting lower and lower all of the time.
So I have views on AI creativity.
Dissolve to the now, when the WGA have put AI on their list of problem areas.
I really want to know how they’re conceiving AI training and IP here. Did you know that different countries base intellectual property in different ideas, and those ideas tell you a great deal about what their legal system - and therefore arguably their society - values?
The basic US conception of IP (intellectual property) is utilitarian: allowing creators to be rewarded for their creativity stimulates them to do more creative work and this benefits society.
In Germany, as I understand it, IP is understood as proceeding from personal identity, from the inner self. There’s a natural right of ownership which is inherent.
In the UK the whole discussion revolves around the “sweat of the brow”. The system valorises hard work and awards rights based on performing it.
I’m fascinated to see how these different perspectives interact with claims around AI creation. And I’m also fascinated to know whether the WGA is maintaining that there’s an IP right being infringed in training an AI, or whether they’re simply seeking to exert pressure to carve out a safe harbour for WGA members. Certainly with “no AI rewriting of literary material” - adaptation being a core bread-and-butter screenwriting job - it seems to be the latter, but I would really, really like the overarching argument to be the former. That would be massive, and potentially ridiculously complicated in many areas over time, but it’s also really important. I mean, you’d need some huge sophisticated computer capable of sucking down data and making extraordinary calculations just to analyse the code and work out how to apportion credit on training. Where would we even begin to look for such a thing?
As to “no AI source material”, that feels to me like the hardest ask. Sure, it’s better to sell an idea based on your own pitch than to take a work-for-hire deal around IP they already own, but in real terms, how is AI source material different from an idea which occurs to an exec after seven cups of coffee and thirteen pitch meetings? Have you SEEN Murder by Death?
Seriously, I love this movie, but seriously. (Also I have the best story about why it is the way it is.)
I suppose it comes back - again - to training. If the AI is trained on a bunch of writers’ work - and it almost has to be - then that’s a problem.
A few years ago, a Microsoft AI created a “new Rembrandt”.
There’s so much going on in that statement… Okay, let’s begin. First of all, the AI analysed Rembrandt’s corpus and generated a new image. The machine printed - rather than painted - oil paint onto a canvas and produced a picture that looked like a Rembrandt. It is discussed as a Rembrandt, and was reviewed in that framing. The goal of the Microsoft team was to create a Rembrandt. It is undeniably in the fuzzy cloud of Rembrandt’s IP - though of course there’s no issue there because he is looong dead. But doing the same with a living artist would be, pretty transparently, parking your IP tanks on their lawn. If the law cannot presently see that, it absolutely needs updating. And it needs updating to reflect diluted uses too - Rembrandt crossed with one, two, ten million modern artists with no auction value. Or for that matter the use of recent novels to train natural language AI.
The painting is also not a Rembrandt; it’s not a lost Rembrandt that he somehow forgot to paint, or which was destroyed. It’s not even the picture he would have painted next. There was probably never a moment in his career when he could have or would have painted this particular painting, because it’s atemporal in its training base. It’s an abstract of Rembrandt, as if he were taken out of his own life and made to work across it.
And finally: the AI did not create anything, any more than an oven bakes bread. Rembrandt created a style and a form (in the swirl of his own life and artistic experience) and the AI (which is not really an artificial intelligence, for that matter, but that’s another conversation for another day) was patterned on it (not trained, because training is something you do to things that have a pre-existing vitality of their own, be it humans, dogs, rats or vines) and can now replicate it. That patterning process and the code that makes it possible are vastly sophisticated expressions of human cognition and, yes, creativity in their own right and should be recognised as such, but that respect goes both ways. Use a few lines of my book? Fine. Use a few carefully selected chunks of a page or more? I’m raising my eyebrows a bit but we can talk. Use the whole damn novel? Pay me. And so with the Microsoft AI: the system’s entire world was described by Rembrandt. It’s not a general intelligence which can paint. It’s a Rembrandt-ing Engine, itself a creation, arguably an artwork, using patterns created by Rembrandt to create further artworks.
Look again at the formulations of IP by nation:
If we’re German, the paintings proceed from Rembrandt’s identity, deployed through identity-mimicking code.
If we’re British, the paintings proceed from Rembrandt’s work, iterated by a machine produced by the work of the Microsoft team and others in the space.
If we’re under the US perception… it seems different. There’s no inherent IP right derived from the act of creation as far as I know, only the contingent one built around the good of the nation and society. And where lies the greater good? In the protection of artists, or in access for the many to cheaper artistic products in the marketplace? And by extension, access to/creation of scientific research AIs and so on. The tension between individual rights and collective needs, between respect for human persons and respect for corporations and institutions… I don’t pretend to understand where that’ll go, or even how to express the divisions.
But I’ll say it again. AIs, at this time and perhaps forever, do not create. Just as it remains an open question what consciousness is, so it remains an open question whether we can replicate it non-biologically. AIs are a medium through which creators work, and the product of creation. Maybe we should, on a global level, nationalise AIs and use the cash flow from their product for a UBI…
But it’s important not to accept the slips and inflections of language not designed for discussing mechanisms of this sophistication. It’s important not to accept the cues in the picture which suggest a shared moment of affection between the woman and the robot arm. Break it down and understand what’s happening. The Boston Dynamics robot dogsnakes are not playing with you. You are playing with an object no more capable of relating to you than a tennis ball, but because of the way it behaves your mind and endocrine system are telling you there’s a relationship between you. And so, too, with creation. Companies, governments and labs do things. They create things, for good or ill, and they can pay their bills. AI as we presently have it does not even execute code; it is the execution of code. Everything else is theatre.
Thinking on an industrial scale. Also: don’t mention the Coronation.