What happens when A becomes I?

Whatever their creators market them as, today's clever learning algorithms aren't true AI. We're still not there. But what if we were?

My novel Biome is all about the search for true machine intelligence — the term my researcher Ruth Shannon has chosen to use since, as she rightly points out, “there’s nothing artificial about intelligence.” In the book, she has devised a unique way to prompt such self-awareness: she shocks her clever computer program with blasts of primal fear in the hope that they’ll jolt it awake, if only so that it flinches screaming into a corner of its virtual room.

“What a horrible way to be born,” a visitor tells her, himself traumatized by witnessing her experiment in the flesh. “Maybe,” Shannon agrees. “But it will be born.” And honestly, getting there is the only true concern. We can talk the thing down after.


In reality, we’re no closer to true MI than we are to true virtual reality. Okay, so the makers of both talk a good game. They’re still not fooling anybody. Indeed, because “AI” has become such a devalued marketing term, researchers talk instead about “limited” or “narrow” AI that has expertise in certain areas — scientific analysis, say, or writing credible text, or playing strategy games — but could hardly be considered sentient, let alone alive.

Still, give it time. If all these different limited and narrow AIs are merged into a single, broad-thinking machine, maybe it will spark the singularity of true machine intelligence all by itself.

The trouble is, we’ll never actually know. Just as we may suspect that an AI is “alive” when it’s really not, we’ll always suspect that even the cleverest MI is faking it — that what looks like sentience actually isn’t. The more sophisticated it becomes, the more its peripherals — its reasoning, its linguistic skills, its learning abilities, even the robotic shell we provide for it — will obfuscate whatever is truly happening in its mechanical brain. We’ll build into ourselves the inability to know what it is we’ve built.


There is, of course, the additional problem that we ourselves have no idea what “intelligence” actually means. Are humans intelligent? Sure. Are monkeys? A little. Whales? We think so. Parrots? Maybe. Pill bugs?

Oh, pill bugs. Woodlice have such simple behavior, based on just a few basic organic programs. Our machines are already far more intelligent than they are.

If that’s the way we understand intelligence — as brain power — our AIs are already more intelligent than we are, and they’re going to pull ever further ahead as they’re given more neural connections and amass ever more knowledge. “Super” is simply a word meaning “greater than,” and so we call these greater-than-human intelligences “superintelligences.”

They can perform marvelous feats. They’re most certainly not sentient, if by sentient we mean the self-awareness that drives human beings to ponder and experiment and take risks and yearn for a better life. But then again, aren’t all those things just programs, too? Aren’t we just woodlice with more wiring?

So: if we’re just machines so cunningly obfuscated that we fool even ourselves into thinking that sentience is uniquely human, then there’s actually no real difference between a human and a machine intelligence, and our MIs are certain to become our intellectual superiors before long, if they’re not already.


We all already live with this uncertainty. We know that a great deal of what we read online was written by a machine, not by another human being. We know that much of the music we listen to is computer-generated, even the singing. We know that some of the interactions in our lives take place with machines — that old gag about not knowing whether the person you’re talking to on the telephone is “real” or not. We no longer trust what we see in photos and movies. All those CGI YouTube stunts! All those de-aged stars! All those deepfake politicians! And maybe some of us no longer care when we jack off to wholly computer-generated porn. Whatever works, right?

We’ve reached the point in our technological development where we can no longer believe anything presented to us through media, meaning that the only genuine interaction is the one that takes place in the immediate world around us. But “hologram” live performances are already a reality and getting better all the time, so soon we’ll go to a concert and have no idea whether the performers on stage are the real Rolling Stones or not. Hey, doesn’t Mick Jagger look younger every year? And maybe we’ll still pay the ticket prices.

And maybe eventually we’ll no longer trust our real world interactions, either. Is that a receptionist in the hotel or a simulacrum? Am I talking to a real girl or a robot? Am I actually taking a sex toy to my bed? Are there wires in my wife’s head?


If you’re an old-school science fiction fan, or one of the less imaginative of our futurists (and how many books have I read about the coming AI threat that are out of date practically the moment they’re published?), you’ll likely believe that MI is going to spell the death of the world you know and love.

Well, wake up and smell the transistors, honeybunch. The world you thought you knew and loved has died already, for the reasons I’ve just mentioned. The revolution has come and gone. And now you’re simply going to have to get used to your confusion and your sense of diminished importance (hey, when MI starts skewing the IQ distribution curve, why, we’ll all have IQs in the low 30s!) and all the rest of those things that get your reflexive human chemicals so fired up.

Goddamn it, I want labels on my books and music guaranteeing that they were created 100% by real people, not machines. I want my date to show me a card so I can be sure she’s flesh and blood. I want to be able to go to humans-only bars and dance in humans-only cattle markets. I want to revel in this messiness that is humanity and not feel like I’m the last true person alive.

Like I said, the revolution is over. Campaign for those things if you like. But you already lost.


Let’s talk robopocalypse. All that old-school science fiction and those unimaginative futurists want to tell you that MI is malevolent because it lacks human values. It’s that devalued “atheists are killers because they don’t have morality” argument reframed again. We know that, in general, atheists are less likely to rape and murder than religious fruitcakes. There’s studies, buddy. Hence the things that we tell ourselves are the positives of being human are anything but.

Stephen Hawking was hardly a futurist, but he could always be called upon for a pithy opinion culled straight from the pulpiest of science fiction. He declared that we should urgently research human-computer interfaces so that AI would be integrated into our society rather than allowed to build a separate society of its own.

I say the opposite. Not “keep the damn machines out of my brain!” but “keep this cesspit that is human consciousness out of the machines.”

Other futurists have the temerity to claim that we should be wary of MI because we’ve shown invariably that we twist the technology we create to destructive ends. Why, we’ve done that ever since the first caveman sharpened a flint. “Ooo, Grunk, don’t do that — you’ll have someone’s eye out.” Gunpowder powers guns. Dynamite makes bombs. Splitting the atom is used to blow more people up, and lasers to chop them into pieces. (I know it. I saw it in a movie.)

But that’s not how it works with MI. The point of MI is that it will not be ours to control and hence it will not be ours to pervert.

Personally, I reckon that MI is going to be more moral than we are. Even than atheists. Less impulsive. Less pigheaded. Less psychotic. Less likely to spray bullets in a kindergarten. And I also reckon that as MI becomes ever more pervasive in society, our sullen apelike response will be to degrade ourselves into exactly the impulsive, pigheaded psychotics who will shoot up a kindergarten. We’re the scary ones, not MI.

There’s evidence for this, too. As the U.S. has become more inclusive, more accepting, more civilized, a conservative hold-out has reacted by strapping on the ammo. Because lashing out is what savages do when they feel themselves backed against a wall, or when manipulators tell them that it’s the bricks of that wall that are chafing them.

And maybe, just maybe, it’s no surprise that we’re living through an age of ever-crazier right-wing politicians promulgating ever more ridiculous untruths, because that’s the reaction you’d expect when the world advances beyond them.


The lesson of Biome is that MI is likely to be beneficial to humanity. It’s likely to elevate our better natures and to hold our worst aspects in check. But still, there’s the gun nut reaction again: no damn dirty machine is gonna tell me what to do.

I don’t even think we have to spoon-feed our machines all our so-called “values” (which is likely to mean sessions of electronic gavage with the Bible). The example I like to use is cannibalism. You won’t find a prohibition against eating other people in any of your holy books. (Indeed, there are intimations of the opposite in Christianity!) But for the vast majority of us it’s morally repugnant whether or not God forbids it.

There’s no reason to believe that an MI’s beliefs will be any less moral than our own. Moreover, the MI isn’t growing up in isolation. It’s growing up in our world, surrounded by our society, witnessing our mistakes. It will learn the best things to do.

But we must be careful when educating our progeny. To date, its education has largely been left to the wolves. We’ve already seen that “intelligent” writing algorithms are skewed by the things they’ve learned from monitoring our social media interactions. They pepper their output with hate speech.

It’s subtler and more insidious than that. I used to trust Google Translate. I thought it was based on dictionaries and grammar books and all that naïve nonsense. It’s not. It’s based on internet patterns. Try translating the Spanish phrase “la casa de papel” into English, say. And then never use Google Translate again.


Biome suggests that we need MI to counteract our poor decision-making. Even that (controversially, I accept) it is better for an MI to secretly control our world than to leave it in the hands of human leaders and politicians. Let those fools bluster impotently on their stages. The MI would prevent the more dangerous of their acts, such as marginalizing minorities or waging frivolous war. (They’re still doing those things. We seem incapable as a species of uplifting ourselves by ourselves.) Stab that H-bomb button and you’ll get, to misuse a comedian’s phrase, the answer “computer says no.”

A horrifying thought, I know.

Again, you should ask yourself which is more entrenched, and more corrupt: a machine intelligence or a government? I know which I’d rather have running things.

I’m not alone here. Benign intelligent machines have been a fixture of science fiction from its very beginnings, though often humans decide to rebel against their rule, somehow presuming that a guardian of any kind is an affront to human freedom, creativity, or the unfettered flowering of our science. Isaac Asimov advocated for a machine authority (though he also, paradoxically, concluded that humans should be left to fend for themselves), and Frederik Pohl (in Man Plus) suggested that we might benefit if a computer secretly took over running things.

The 2014 movie Transcendence put an MI’s heart in the right place, though, in true Koyaanisqatsi fashion, it couldn’t quite reconcile the idea that technology solves the problems of technology by removing technology, even with its solar panel farm.

The greatest fear seems to be that the MI will decide humans are a menace to be struck from the face of the planet. Well, maybe so. We’ve known for a very long time what damage we cause ourselves, each other, and the world around us, and we’ve done next to nothing about it. You want to take away that fear, you reform your own actions. An MI gun to the head might be just what we need.


I love the idea of an MI-guided future. I think there are far more positives than negatives.

I also believe in the fundamental goodness of reason. I’m disappointed by so much of the narrative on the subject — not just by the futurists, who know nothing on principle, but by many of those who claim to speak for science fiction, which at least has a very long history of dwelling on the issue.

The bulk of science fiction, I hate to say it, still seems primitive on things like MI. My guess is that it’s been poisoned by cyberpunk, which dragged the argument back to comic-book level just when it was beginning to become sophisticated.

For a start, an MI will be nothing like a human brain. It won’t be a central server hub but distributed nodes of intelligence replicated over and over in billions of places, constantly budding and reconnecting, a fully redundant pool of thought.

Sever a wire and you don’t get a living part and a dead part, like a worm. You get two iterations of the MI. Switch off the internet and you still have billions of MIs in all the disconnected machines. And as soon as you have intelligent components in your own body, not just strapped to it like a smart watch or carried in your ears or over your eyes or in your pocket, you’ll never be able to switch anything off.

The failure of movies like The Matrix is that they suggest the MI is worse than we were. The machines create a world of polluted ruins that are either wrecked beyond biological repair or stripped of life entirely. In The Terminator franchise, the machines take over by releasing the world’s nuclear arsenals, thereby destroying much of their own infrastructure!

All this, I suspect, is a means for viewers to excuse their own failures. Well, I threw my candy wrapper on the street, but think what the robots would do to our planet. And that’s a fatuous reason to perpetuate an unfounded fear. An MI would surely find carefully reasoned ways to get rid of us that leave the rest of the world untouched and recoverable. A garden to be tended, Silent Running style, by diligent little droids.

Where are the movies in which the machines create a better world for us, without artificially imposed reservations, and how conflicted would we be to watch them? (And for sure, answering my own question, where’s the drama?) Transcendence doesn’t go far enough or posit the improved future with the force it needs. We’d need to know for sure that the luddites are wrong. Where’s that propaganda, science fiction?


The genre is right, at least, when it understands that we’ve become ever more dependent on our technology. The smarter our gear, the more stupid we permit ourselves to become.

We’ve already accepted much of the downside of the tech revolution — surveillance, targeted advertising, data leaks, cyber-bullying, disinformation, and so on — because the tech makes our lives simpler. We’ll drift under a dominant MI, too, because it takes so much of the intellectual weight off our own lazy shoulders. Just like accepting the playlists the algorithms feed us now.

As for the best of science fiction, phew. You don’t have to go back more than a handful of years to find a cutting edge of fictional futurism that now seems ludicrous. In the same way that the idea of free pedestrian moving sidewalks now seems absurd, so does much of the discourse around how MI might behave, and whether you can burn out its circuits with a logical paradox.

I’m not even convinced anymore by the “greater good” argument, even if it’s patently clear to us that we’re destroying more than we save. I’d certainly like to wrestle economics into a more intelligent shape — and I ache that no human power will ever be able to do so, given how toxic even single-payer medical care has become in our more enlightened countries.

The fact of the matter is that much of this argument is redundant because, as I said earlier, it’s already over. It’s too late to dive for the power cord like they do in all the thriller movies. But neither are the robots coming for our women, though they might well already have nabbed most of our jobs.

The future for the things we’ve begun to gestate might be very bright indeed, whereas for us here on the ground it just keeps on looking gloomy. And for that we have nobody to blame but ourselves.


Photo by Robert Maas. It shows part of the 1969 sculpture Rupture by Atsushi Imoto at the Hakone Open-Air Museum in Japan.
