

In my ongoing quest to derive some sort of meaning from the current obsession with AI systems - which I’m increasingly sure stands for Artificial Inutility - I paid a couple of quid for some curated AI avatars from the photo editing software I use. They are gloriously awful. My favourite - for a given value of the term that includes a significant admixture of wtf - is this one, which to me is clearly derived from Kurt Russell movies from the 80s and 90s and borders on some sort of image theft. (Let’s ignore for now the fact that Snake Plisskaway here is undergoing some kind of alarming biological melding with his printed shirt). In fact all of them register to me as movie stills remastered, which immediately takes us into problematic territory.
Pretty sure that’s Colin Farrellaway, possibly in Watchmen.
Some kind of pop thing happening here. Robbie Williams? George Michael? I’m bad at music iconography. In case you’re thinking “oh, Nick’s kinda more conventionally attractive than I remember”: no, I’m not. I was never so square-jawed or symmetrical. The system has homogenised and prettified me. The weird thing is that now, if we actually know each other in real life, from time to time you’ll see these lines in my face and your pattern recognition will fold these images into your understanding of how I look. I’ll actually be incrementally better-looking because I’ve been imaged that way, because human cognition is as freaky as AI fake cognition, which is partly why we’re so lousy at making AI in the first place. We don’t actually know what ‘intelligence’ is.
A little while ago I mentioned that I was trying out the Motion app, which has an aggressively uplifting Instagram ad campaign and made me believe I too could be an organisational powerhouse. It turns out the learning curve is essentially vertical, I think because the people who made the app are basically organised in a way I will never be. I felt like I needed an app to manage the app. The first thing you notice is that there’s an assumption that all your tasks come sub-divided into clear units with predictable time requirements. Actually, no, that’s the first thing I realised thinking about the experience later. The first thing I noticed was that I was stressed and unhappy trying to get set up and everything was absurdly hard. As a practical matter, it’s not that “it works better with Chrome”, which you’ll read in some reviews. It’s that it does not work properly in Safari. But even having made the switch I was just floundering around trying to change things from menus that didn’t include the option to change things, wishing I could recategorise things that seemed to be set in stone… I think the most obnoxious thing was that tasks were either done or backlog. There was no “in progress”. As a writer, most of my tasks are “in progress” at any given time. I see the logic of the categories in other contexts, but the hard lines made no sense to me. My only experience of the AI aspect was the system determining that a low priority task should be done before a high priority one, which made me distrust it. I needed an “explain that decision” button.
So I junked it before the free trial ended - interestingly I’m content with paying for lousy avatars but I have no intention of spending a penny on Motion - and I’m now using my mind, which isn’t remotely similar but (and) I like it. The only real problem I’ve identified so far is that they chose to go all e e cummings with the typography so when you name the app it looks as if you’re just talking about your mind. I’m sure it seemed like a good idea at the time, but it’s going to get annoying for everyone pretty fast.
My mind (See? How does that work?) uses AI in a sensible way: it can read text and categorise things for you, so you just splurge images and websites and notes into it and you can search them and get sense out of the system. That’s it. It doesn’t sing or draw pictures and it doesn’t try to talk to you. It just does something useful, quietly, and when/if it doesn’t work perfectly you probably won’t notice or care.
Just taking a moment to dig a bit deeper on AI more generally (ho ho ho): there’s a confusion - fostered, one suspects, by marketing departments - around what’s going on here. AI in the movies is what’s called general AI: a non-human intelligence, self-aware and possessed of a coherent identity. HAL was an evil AI, C-3PO was a good one. That isn’t what we’re making right now, and indeed not very many people are even working on general AI because a) it’s really hard and b) when you’ve got one you cannot, by definition, sell it. You’ve won a Nobel Prize, created life, and generated a raft of legal and moral problems amounting to an (existential) crisis. We don’t know, as I said, what human consciousness is or where it resides, so the business of codifying, preserving and collaborating with a non-human one is potentially tricky.
And why do we assume that an AI would speak like us, think like us? Human life isn’t necessarily a useful model for it. What if AI evolves or emerges and takes the form of a sort of collaborative consciousness, like a thinking reef? (What if reefs think anyway? Gosh, that would be embarrassing.)
But what we have right now is a set of hugely sophisticated expert systems and machine learning algorithms. Our present wave of AI counterfeits human communication but does not attempt cognition. You could theoretically build something that would do the same job out of brass levers and phonemes recorded on wax cylinders, although it would be very, very big.
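The "counterfeit communication without cognition" point is easy to demonstrate in miniature. Here is a deliberately toy sketch - the corpus, names, and numbers are all invented for illustration, and real systems are vastly more sophisticated - of a "chatbot" that knows only which word tends to follow which, and strings words together by sampling from those probabilities, with no understanding anywhere in the machine:

```python
import random

# Invented toy corpus: the "training data" for our brass-lever chatbot.
CORPUS = "the cat sat on the mat the cat ate the fish".split()

def build_chain(words):
    """Map each word to the list of words observed to follow it.

    Repeats in the list act as probability weights: if 'cat' follows
    'the' twice and 'fish' once, 'cat' is twice as likely to be picked.
    """
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def babble(chain, start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a likely successor.

    There is no meaning here - just 'what word plausibly comes next?'.
    """
    rng = random.Random(seed)  # fixed seed so the babble is repeatable
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain(CORPUS)
print(babble(chain, "the", 6))
```

The output reads vaguely like English because English went in, not because anything understood it - which is the whole trick, scaled up a few billion times.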
But I wonder whether people would not simultaneously like the brass version more, while recognising that it was an artwork rather than a new friend. And occasionally, instead of insulting cinema-goers and proposing to run off with NYT journalists, it would just throw a spring and go SPOOOOIIING! in a humorous way, and we’d know it was broken.
Following on from the reef analogy: we’re sort of building artificial siphonophores at the moment, not artificial dolphins, and then we’re surprised when they produce non-human results. I wonder: if we were actually cyborging deep sea minicritters to produce search results, would that be more or less alarming? If you knew that your Amazon parcel was packed by a telepresent jellyfish being rewarded with food for performing a basic manual task of which it had no real understanding, would that be weird? What if the drone airships they use in some countries were piloted by fish via mind-machine chip interfaces?
What if, instead of a chat bot selecting responses on the basis of probability, you were talking to something truly alive but radically incomprehensible? Something which you knew would never understand you, but whose responses were mediated through a chat menu so that they felt real? And what would fit that brief? Algae? Bees? Octopuses? How close to human does it have to get before you no longer believe mutual understanding is impossible?
Well, that got weird. Here, have another avatar which is apparently a picture of me.
The AI goes to the Movies
One of my favourite 'facts that should have brought down an entire field of science but somehow didn't' is the proof that, wherever consciousness resides, it is not in the brain.
Because of this guy. https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(07)61127-1/fulltext
Here's a French civil servant with, effectively, no brain. A few twists of ganglia and a lot of CSF and nowt else. A low but workable IQ, functional, *conscious*.
This has been known since 2007, and yet neurology has not been revised to account for it.