I finally sat down and asked ChatGPT some questions, and every reply just strengthens my sense of the surreality of the discourse around it. This isn’t Artificial Intelligence, it’s Zork for 2023. (Incidentally: it absolutely would and almost certainly will make a bunch of video games better.) “I do not see the Consciousness here.”
Yeah, the rest of you are younger and prettier than I am, knock yourselves out.
So how did the interaction - I’m not going to call it a conversation - go? Well, first I asked it to tell me who my dad was, and predictably it delivered a solid if rather boring answer. I noticed the response-pending line says “typing”. Cute bit of skeuomorphic mood music, but you could also say it’s a lie designed into the interface. Looking again, I realise that the same message is inherent in the term “chat” to describe interactions with the system, and indeed the use of a personal pronoun. Those little touches may be very important in how this whole thing is being received. There’s a mood here that’s intended to imply you’re touching the Matrix, that there really is Something on the other end of this call.
Then I asked it to describe my wife, about whom there is a pretty large body of professional information on the interwebs.
“Based on the Internet search I have just run,” the system responded, before giving some accurate generalities about her skillset and making a somewhat plausible but totally inaccurate assertion about her current employment.
I asked for references; the system explained that it can’t give them. I asked what search it had run to produce its answer; the system responded that it can’t run searches.
The more I interact with ChatGPT, the more conscious I am of its thin-ness, its unreality. The system can get where you’re pointing but doesn’t know if you mean the eagle or the sky. Then you tell it you’re talking about the bird but it’s no longer following the line of your finger. It doesn’t even seem aware of its own previous responses. Perhaps SF as a cultural shape is somewhat to blame for the wild romanticisation of this model that’s going on right now - the decades-long offer of benign digital angels to watch over us, combined with recent attempts to portray truly alien consciousness by depicting perceptions of reality so disparate from ours that only the most strenuous effort can allow any kind of bridging of the gap. I can’t shake the feeling that the whole furore about large language models as AI - rather than as an impressively-executed hack which allows better manipulation of the informational mountains we create in 2023 - is the equivalent of staring into a muddy puddle and seeing ourselves reflected, then claiming that there’s a whole parallel universe beyond the meniscus.
But there isn’t. Not yet, maybe not ever, on this model. That sense of disconnection doesn’t indicate a radically different intelligence, but the absence of anything behind the words to connect with. At best, maybe, these systems are the chemical precursors of life which may one day become the digital equivalent of Cyanobacteria 3.5 billion years ago. Maybe that evolution will be fast enough for us to see it happen. But right now it’s just water and light and our own faces staring back. Rather than fretting about the putative rights of large language models, we need to look at protecting the things we know are alive that are currently besieged: whales, forests, reefs. Or even the other humans subjected to intolerable cruelties by nation states, robber barons and balance sheets.
Is that because we’re conditioned to crisis? The nuclear eighties give way to the collapse of the USSR, which rolls on to 9/11, thence to wars in Afghanistan and Iraq, financial crisis, more wars, the pandemic, the war in Ukraine, all under the shadow of a climate apocalypse ever more obvious and ever more sternly ignored by governments and corporations locked in patterns that belong to 1972? We need a conceptual crisis to go with the physical ones?
Is it just that we fail to understand ourselves, on a personal and cultural level, so badly that we honestly cannot tell the difference between interaction with one another and the weird attempt to make meaningful discourse with this mirror?
When I first heard about Miguel Nicolelis’s research, I wondered whether we might one day change a fundamental aspect of human life: the fact that every interaction we have takes place at various removes, that we never know one another directly. Could we have a real meeting of minds? There’s a bunch of interesting assumptions in my question, not least once you factor in embodied cognition (would a mind-to-mind link be just another incomplete communication, this time without touch?) but now I wonder whether we need the technology even more than I realised, just so that we can distinguish the play of light on water from the people we love.
Honestly, though, I think our pundits just need to calm the hell down, stop reading press releases and feel something instead, and maybe go for a walk in the park.
It seemed only fair to give the system a right of reply…
I did not grow up in the US, and I imagine Ian McEwan would be a bit crusty about the assertion that I adapted The Children Act.
Well, then, let’s try something more direct. “Why do you use the pronoun ‘I’?”
And that’s where my interest ends. It’s an elegant mechanism. There will be consequences to its addition to services and society. Some will be good. Some will suck. Sooner or later we’ll have to acknowledge that adding new technologies to our array has consequences for the shape of our society, and start setting policy determined by the outcomes we want and can predict, not just throwing high-consequence engineering at the wall and seeing what slides down it.
But for now, I think we can all put our existential crisis away. This one, at least.
Here’s the response to “Who is the screenwriter and director Julian Simpson?”
Julian Simpson is a British screenwriter and director who has worked in both television and radio. He is known for creating and writing the BBC Radio 4 drama "Blondel", as well as for directing and writing several TV episodes of "Doctor Who", "The Outcast", "Case Sensitive", and "Wallander". He has also been a writer and producer for the TV series "The Sarah Jane Adventures", "The Tunnel", and "Cuffs".
Literally the only correct credit in that paragraph is Doctor Who (although I didn’t write any of it). I can’t even make a six-degrees-of-separation connection to any of the others. For fun, we asked it to give us a history of East Anglia the other night. It told us that the Battle of Hastings had happened there. I really don’t know what this is for. It’s artificial, certainly, but not intelligent.
I asked it who I am, and it gave me this:
Suw Charman-Anderson is a British writer, journalist, and social technologist. She is the founder of Ada Lovelace Day, an international celebration of the achievements of women in science, technology, engineering, and maths (STEM). She is also a co-founder of the Open Rights Group, a UK-based advocacy group for digital rights and freedoms, and has written extensively on technology and its social implications. Charman-Anderson has been recognized for her work as an advocate for women in STEM and has received numerous awards for her contributions to the field.
That’s pretty good, actually, except for the final sentence. I’ve not received any awards and have barely had any recognition, so yeah, thanks for rubbing salt into the wound, ChatGPT.