Having a couple of raw drafts sitting in WordPress, I never really got around to finishing a proper post during the summer. But now that Michael Anissimov has ended his summer break with a post on the non-storyness of the future, and I just happened to have a nice post on Wall-E in my queue, I felt I had to waffle a bit about that topic, too. Damn peer pressure!
Last week I saw the new Pixar animated movie Wall-E, and I must say, it’s one of the best movies I’ve seen in a while. Although I have to admit that I’m a sucker for animated movies and movies loaded with special effects, critics agree with me here, and it’s already at #26 on the all-time best movie list on IMDb (Warning: minor spoilers ahead).
It’s an interesting story with lots of (heavily) anthropomorphised robots in it, which is already a gem for its slapstick and brilliant display of robotic “emotions” alone. However, it also picks up the classic idea of machines disobeying humans and robots acting on their own judgement, with an interesting twist: mankind has to be saved by robots gone rogue acting against other command-obeying robots. Whether this conveys the right ideas to the younger audience, I don’t know. It does, however, make the younger generation think about “what could happen if robots were ubiquitous and we relied entirely on this autopilot?”.
Wall-E draws an exaggerated but hilarious picture of our future selves, shaped by what I’d consider unlimited American consumerism. In an almost matrix-esque way, humans are reduced to stupid meatballs without any sense of reality. But I guess that’s necessary in order for the audience to sympathise even more with the robotic protagonists. And, by the way, from a “real AI” point of view, the programming of the robots in Wall-E isn’t very sound. They possess all kinds of human characteristics that make absolutely no sense to program into specialised robots (such as trembling with fear), while they lack others (a robot with a full-blown personality, but no proper sound output?). But of course, that’s not the point of the movie. The content was made interesting on a human level, as Michael phrased it.
This cunning bridge brings us right to his “the future is not a story” post. Michael argues that only stories that humans can relate to are interesting:
For a story to be interesting to humans, it has to feature interesting content occurring at the human level. [...] Conversely, humans cannot write meaningful stories about content above the human level, because we lack the cognitive complexity to imagine such things.
Now, ignoring that the “human level” doesn’t seem to me to be a fixed barrier that cannot be moved, this ironically reminds me a bit of religious belief in a superior being: God moves in mysterious ways. And, sticking with this limping analogy, many people find God quite interesting, although his motives for letting millions of children starve are indeed mysterious.
Of course, I could have just fallen into this very trap of being unable to imagine anything above the human level, but I just don’t think transhuman (AI) actions will be that much more incomprehensible than, say, a superpower declaring war on a small country to get at its natural resources under some false pretenses (WMDs, for example).
Total annihilation is also what Michael has in mind when thinking about unfriendly AIs:
More likely, when confronted by a recursively self-improving unFriendly AI with abstract mathematical goals unrelated to human concerns, the simple outcome is death.
Obviously, this is not content above the human level, because we just imagined it. You could even call it interesting, as it definitely relates to human concerns. Hey - maybe we should make a movie out of this!
I agree with Michael on the “interestingness bias” (authors make up showy stories to get attention), especially when fiction is sold as science and scientific authors get carried away by stories that start with “no, we will survive, because…” and then go on with some fancy explanation that reduces to “or not” once we apply logic or - God forbid! - Occam’s razor to it. However, I don’t really see a worrisome problem with all that. Of course, the “true” threat might be waved aside as fiction, but that may be true for every futuristic scenario. The more we talk about it, the better. And to be honest, most futuristic movies nowadays assume rogue robots anyway, so we’re well prepared!