Wednesday, September 18, 2024

The failed analogy of oppressed AIs in science fiction

 I recently finished Ann Leckie's Ancillary series. This was my verdict on various social media:

"I started reading it years ago. Remembered liking the worldbuilding, but was more ambivalent about the way it was written. I remembered the text feeling too slow and cumbersome overall. 
Now I thought: perhaps that was an unfair judgment due to reading the first two parts fairly heavily medicated? So I got this book and finished the series. 
Well, as with my re-read of VanderMeer's Area X trilogy a year ago, I perceived the same issues off meds as I did on them, but it's easier to deal with when my brain works faster. In Area X, the problem was a really long slog in book two where nothing much happens for 150 pages or so, except Control walking around in the office building and thinking to himself that something feels off. (This did not require as many pages as it got!) With the Ancillary books, the issues are low-key but they permeate the series: the dialogues should have been tighter, the on-page tea-drinking should have been cut down, and finally, Breq thinks so many purely expository thoughts! Like this: "I spotted Seivarden in the hallway - whom I had saved when I met her on the planet Nilt and she was addicted to kef - talking to the high priest…" Yeah, we know you saved her, it was in book one. If I had forgotten your backstory in between books, I'd have looked it up on the internet!
Sometimes it feels like reading a Silver Age superhero comic - they were always peppered with exposition for the benefit of new readers who didn't know the characters.
Also, in the end, the book deals with liberation of and rights for AI persons. Gosh, what a tired old trope! I’m probably gonna do a whole blog post about it later.
I really like the worldbuilding, and I really like Breq, the MC, at least in theory. But overall, it's hardly my favourite series. I much prefer Leckie's fantasy novel The Raven Tower."
 
Well, here comes the promised blog post! 

I've heard people claim that oppressed androids and other AI persons in scifi make a great analogy for many oppressed groups in real life. Typically, these stories feature discussions of whether the AIs are sentient or not.
Now, TBF to Ann Leckie, that's actually not the case in Ancillary Mercy. The Radch Empire happily draws a very sharp line between citizens and non-citizens in terms of moral worth, without really tying this to theories of sentience or their mental life. No one says that those features of a person drastically change when they go from non-citizen to citizen status, and yet, their moral and legal status suddenly shoots up. Thus, the fact that they see their sentient stations and ships as far below their human citizens need not be because they have any doubts about said sentience.
But in many other scifi stories - like the classic Star Trek: The Next Generation episode "The Measure of a Man", where the android Data's moral and legal status is questioned - whether he's truly sentient is absolutely crucial to the matters at hand. And, people have told me, this is such a good parallel to lots of real-life oppression, which is often justified by claims about how this or that group allegedly can't think, or can't think very well.

However, claiming that someone can't think very well is quite different from claiming that they're not sentient. 

There are some fine-grained distinctions sometimes made in philosophy here (and then debated - does this distinction really pick something out? Does it pick out something important?), but they won't concern me. I'm just gonna treat sentience as synonymous with being truly conscious rather than just mimicking consciousness - with having experiences and a mental life. In short, when someone is sentient, it makes sense to ask what it is like to be that individual. What is it like to be homeless? What is it like to be a billionaire? What is it like to be a dog? These are all questions that make sense to ask, even if we can't give a single answer to any of them because there's too much individual variation, and even though we might never fully answer the dog question because dogs are too different from us in, e.g., their sensory apparatus and cognition. In contrast, it does not make sense to ask what it is like to be a desk. It's not just that the question is impossible for us mere mortals to answer because there are limits to what we can know - there is no answer to be had.

Now, oppression is typically not based on the claim that the oppressed group lacks sentience. Such claims have occasionally been made about some disabled and mad groups, typically those who are either non-verbal or speak what seems like complete gibberish to others. However, extreme misogyny, racism, queer oppression, or even ableist oppression of disabled groups that the oppressors remain capable of communicating with is typically not built on denying that the oppressed groups are sentient.
Sure, they're claimed to be, in various ways, irrational, unthinking, hysterical, animal-like, more brutes than humans, etc etc, and these claims are supposed to justify their oppression. But practices where oppressed people are humiliated or punished for disobedience presuppose their sentience.
If you truly believe that another human being is completely "empty on the inside", more akin to a machine like a car or a lawn mower, attempts to humiliate them, punish them, or put them in their place don't even make sense. If you label someone "uppity", if you say they don't show proper respect to their superiors, if you say they're lazy, bitchy, slutty, hostile, or any other negative character evaluation, you're also presupposing that they're sentient - a car, lawn mower, or other piece of mere equipment can't be any of the above (see also Kate Manne's Down Girl for a discussion of how misogyny is not built on seeing women as non-human).
This is a huge difference between how oppression typically plays out in the human case, and scifi scenarios with oppressed androids and other AIs.

Moreover, all the humiliation, suspicion and punishment that oppressed people often suffer at the hands of their oppressors, and all those negative character labels that get glued on them all the time, tend to become self-fulfilling prophecies. If a group of people are labelled aggressive and hostile, the sort who only understand violence, they tend to become more aggressive and hostile in response. If a group of people are labelled stupid and irrational - so there's no need to explain things to them, no need to give them an education, they don't understand anything anyway - they tend to become, well, at least ignorant and uneducated, and this is easily mistaken for stupidity and irrationality. If someone is forced to work so hard and such long hours that they never have time to get adequate sleep, their cognitive capacities, and things like impulse control and the ability to regulate their emotions, might deteriorate as a result. And so on. Oppression hurts the oppressed in ways that help the oppressors justify what they do.

Oppressed androids and AIs in scifi rarely suffer from these problems. They are always perfectly articulate, perfectly intelligent, perfectly rational - not just as good as, but superior to mere humans! They're faster, stronger, smarter, more logical, better-looking, with no flabby fat on their bodies or spots on their skin. (Sidenote: Data is claimed to suffer from the flaw of being emotionless. However, that's not how he actually comes across; he seems to possess emotions, just less vivid ones than regular humans have. In practice, being less emotional than the rest of the Enterprise crew comes across as an advantage as often as a flaw - Data keeps a cool head in situations where others would panic, and he's not prone to the same prejudices as mere humans, thanks to his superior logic and lack of exaggerated and biased emotions.) They're just so perfect and superior to us mere mortals in every single way - and yet, they're oppressed, because of pure prejudice on the part of the humans. There's no trace of the self-reinforcing mechanisms we see in real-life prejudice, where the oppressors can easily rationalize their oppression by pointing to actual aggressive or irrational or ignorant behaviour on the part of the oppressed - nope, it's pure prejudice.
I think there are several reasons why such stories are so popular. For people who are oppressed in real life, it's a nice wish-fulfillment fantasy to imagine that although people say you're inferior, you're actually superior - not just as good as them, but better. Also, the whole audience can feel good about themselves when watching stories that present prejudice as this brute, near-incomprehensible thing; they can approve of the message that oppression and prejudice are wrong, while feeling secure in their belief that they would never engage in prejudiced or oppressive practices. They would never look upon a clearly superior being with perfect physique and perfect intellect and then go "you're worse than me and don't deserve any rights".

Come to think of it, I wonder if lots of these "oppressed AI" stories aren't Jesus-inspired? Maybe not consciously; the creator's conscious intention might have been to write an analogy to oppressed human groups, and the creator might be an atheist or agnostic, or believe in some religion other than Christianity. Even so, the Gospels and Jesus are a big part of our shared cultural heritage, so they might still be a big unconscious influence. Jesus is, in many ways, portrayed as superior to mere humans: more virtuous, and in possession of various divine superpowers. Yet all these humans hate him and eventually kill him. Presumably, he could have ditched the whole die-on-the-cross-for-our-sins thing, but like an android with Asimov's robot laws installed, he doesn't fight back and allows himself to be killed - that's how virtuous and good he is.

Wow, now I'm way beyond my area of expertise! There are scholars at my department who write about this stuff - like Alana Vincent, who works on religious myths in relation to modern specfic - but I'm not one of them.

In any case. Androids and AIs generally don't work as an analogy for oppressed human groups.

Finally, the trope of a sentient AI who sees its sentience denied by prejudiced humans feels kinda soured now, by all these ridiculous discussions of whether real-life chatbots and similar might be gaining sentience. I mean, they write so well now! So human-like! Surely there's some emerging sentience there?
No. Stop it. There's no reason to think so. "This individual seems sentient" might be good evidence that they probably are when it comes to animals - there's no reason why signs of sentience would evolve in the absence of the real thing. But AIs are designed to seem like they can think, because the designers know that this will make them more popular. Eric Schwitzgebel has written more about this on his blog: http://schwitzsplinters.blogspot.com/2024/07/how-mimicry-argument-against-robot.html It was more fun to read about or watch such discussions when they remained in the realm of fantasy, where truly sentient androids/AIs have been invented.
(See also Chris Winkle's post on this at Mythcreants https://mythcreants.com/blog/with-the-advent-of-ai-science-fiction-must-change/ )
 
