

[ Photo: https://www.pexels.com/@didsss/ ]
A decade ago I wrote a non-fiction book about digitisation and humans. It was broadly optimistic, eclectic, and in retrospect suffered from a lot of missed opportunities. I talked about choice architecture - more commonly called “nudging” - but missed the possibility of its coordinated use on social media alongside epic levels of false information to sway elections. Instead I worried about the careful curation of choice as an almost existential threat to the ability to make finely balanced decisions: if you don’t practice good decision making in hard situations, you don’t develop the neural pathways. The endless practicing of stacked choices physically sculpts your brain in preparation for more of the same, so when you come upon a complex issue you really don’t know what to do with it.
I dunno. Either that’s horseshit or it’s even scarier now than it was then.
But it feels like one of many things I touched on which were real enough questions, but I didn’t imagine hard enough or systematically enough to track the real into 2023.
The most obvious example is the social media themselves. At the time that meant Twitter - fresh from Tahrir Square, still very much the naive version of itself - and Facebook. Research into their effects was hard to come by; one paper showed a mild uptick in positive experiences for people with depression using FB to interact with a community. That was about it.
With these services, I completely missed the simple fact that they would be redesigned and reiterated while seeming to stay the same. Twitter now is not Twitter then, though only the latest iterations make that completely obvious. You probably could not have a Tahrir Square event mediated or recorded by the algorithmic, atemporal version of Twitter that I signed into briefly this morning. The immediacy of the service - frankly for me its defining and most interesting characteristic, and certainly its most politically interesting one - has gone. It’s a cold soup of curated and attention-grabbing rows.
Other changes are more subtle, and other media have different effects. Instagram, in my life, has become a shopping mall where I see occasional posts from friends while I buy clothes and wish I was in the Mediterranean, but since my 50th birthday it has hammered me with ads for hair tonic, luxury cars, Viagra and fitness coaches. More generally, and across platforms, the jaunty promise of sharing snaps has become a gallery of lifestyle and body image, once again stripped of chronology to create a permanent now of consumerist dissatisfaction and FOMO.
I find myself thinking that this is that refining effect of late capitalism again: you start with a human-defined ambition to provide a thing that people want. Over time (a decreasing amount thereof, in these fast fail, iterative/analytical days) that position is refined in several important ways: the fuzzy initial ambition is streamlined, the act of provision becomes less important than profiting from that provision, and most crucially the identity of those “people who want” is revealed. In the digital social media, the customer is generally not the user base. As the service is refined to meet the desires of the actual client, the various somewhat incoherent aspects of it which drew in the original user base - often the most socially and societally useful functions, and therefore sometimes the most commercially problematic - are trimmed away, and the sharp edges become more brutal. If you’re selling consumer toys to ease the pain of 2023, you want people feeling that pain as much as possible.
In the 90s it was popular to ask whether the Internet was the pleasure machine, dispensing hits of meaningless joy at the touch of a button. In retrospect that’s almost quaint. To my eye, social media now dispense a sense of inadequacy, the better to sell palliatives. What we have is nothing so benign as a pleasure machine; frequently it becomes a hate vortex. Happy people, after all, might not need all the crap the commercial Internet’s real customer base is pushing. At the same time, you can’t have the users wandering off, so you need a painful experience that is also compelling. You have to hook people on their own unhappiness (in which the social media industry dovetails with monarchy, various religions and parts of the literary world: sorrow is profound, and contentment or inattention is the sign of an inferior mind). And so that’s what the refining process delivers: the stickiest of misery machines.
As with my concern about choice architecture, so too here: this is not just a conceptual issue. This has a physical, neurological side. It isn’t addiction in the same way that opiates are, but it prints itself in the grey matter. Detox takes about two weeks, if you’re thinking of it, and must be strictly observed during that time. The earliest experiments in neuroplasticity revealed a rough 48-hour period for fairly significant shifts in how the brain processes inputs; I found the habits properly faded after a fortnight. Thereafter, returning has a powerful flinch-reaction, like sniffing ammonia.
There’s so much more to talk about - patriarchy and colonialism; sleep and work; education, money flows and ecology. I could write another book - except that other people already have, and will, and will do it better and from more interesting perspectives. I’m a storyteller, not a researcher, and happy to be so.
But that’s what I was pondering this morning, before I went out to buy eggs.
The Social Media as Misery Machine
I’m a member of several Facebook groups for whippet owners because looking at pictures of puppies and goofy photos and funny posts about these doggies lightens my day.
People also post photos and stories about their sick elder dogs, or eulogies for their recently departed, because of course they do.
And although I react to the funny and cute posts, I probably replied just a little bit more to people who were grieving because I’ve been there and know just how hard it is? And those people are hurting?
So gradually my Facebook feed had fewer and fewer of the cute funny doggie posts, and more and more of the tragic ones… giving me more opportunities to commiserate and fewer opportunities to laugh or otherwise acknowledge the cuteness.
And by now just about all the whippet posts I seem to get in my feed are the ones about sick and dying doggies.
I’m sure that similar things happen with posts about politics and society in general. We engage with things which most strongly activate us emotionally. So this is a reinforcing feedback system.
I’ve known this intellectually for a very long time, but I’ve been watching it unfold in slow motion in this one area in a very unmistakable manner.
I suppose we could game the system if we were very strategic and consistent about what we responded to, and how… but maybe not. But the upshot is that as they are currently constructed, social media engagement algorithms are not good for our mental health.
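A back-of-the-envelope way to see why this loop runs away: below is a minimal sketch in Python, assuming made-up per-category engagement rates and a ranker that simply boosts whatever has drawn responses before. It is not any platform’s actual algorithm; it only shows how a small difference in how often you respond compounds into a feed dominated by one kind of post.

```python
# Toy illustration of an engagement-driven feed loop (not any real platform's code).
# Two post categories; the user is slightly more likely to respond to "grieving"
# posts, and the ranker boosts whatever has earned responses in the past.
import random

random.seed(0)

ENGAGE_PROB = {"cute": 0.30, "grieving": 0.45}  # hypothetical response rates
weights = {"cute": 1.0, "grieving": 1.0}        # ranker's learned preference, starts neutral
FEED_SIZE = 20
LEARNING_RATE = 0.1

for week in range(1, 11):
    # Compose a feed: each slot picks a category in proportion to current weights.
    total = sum(weights.values())
    feed = random.choices(
        population=list(weights),
        weights=[weights[c] / total for c in weights],
        k=FEED_SIZE,
    )
    # The user engages; every response nudges that category's weight upward.
    for category in feed:
        if random.random() < ENGAGE_PROB[category]:
            weights[category] += LEARNING_RATE
    share = feed.count("grieving") / FEED_SIZE
    print(f"week {week:2d}: grieving share of feed = {share:.0%}")
```

Because the grieving posts earn responses slightly more often, their weight grows a little faster each round, which earns them more slots, which earns them more responses; the feed drifts toward the sad posts without anyone deciding it should.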
It's funny, or odd, that my twitter experience is so different from most. Ever since I first joined (March 2009, per twitter) I have almost exclusively used one of the API-based desktop apps (Tweetbot, Tweetdeck, etc.), as they freed me from being dependent on my phone or an open browser window that didn't update dynamically. Consequently I've been able to see tweets only from those I follow, without ads or "for you" feeds. This option has shrunk lately, with twitter discontinuing API support for third-party apps and having bought Tweetdeck, which I still use and which still lets me see only those I follow. That list of who I follow grows, when it does, when someone like NH rt's someone I find interesting, or informative (hopefully both), and so I get to see new things without the intrusion of ads, etc. So: a recommendation for Tweetdeck, I suppose.