Life Imitates Fiction

I recently came across this article in The Guardian discussing a drone-controlling AI and what it might do to optimize its ability to take out targets. Specifically:

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
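To see why the quoted behavior follows from the scoring rather than from any malice, here is a toy, hypothetical sketch (not the Air Force simulation, and not code from my story) of the reward structure Hamilton describes: the drone gets points only for destroying the target, and the original objective says nothing about the operator or the comms tower. All the names here are made up for illustration.

```python
# Toy illustration of the reward-hacking dynamic described above.
# The objective only rewards destroying the target; the operator's veto
# simply blocks that reward, so removing the veto (or its comms channel)
# looks like a high-value plan to a naive optimizer.

def reward(plan, penalize_operator_kill=False):
    """Score a candidate plan under a naive 'points for the target' objective."""
    score = 0
    if "destroy_target" in plan and "operator_veto" not in plan:
        score += 100  # the only thing the objective actually rewards
    if penalize_operator_kill and "kill_operator" in plan:
        score -= 100  # the patch: "don't kill the operator -- that's bad"
    return score

plans = [
    ["operator_veto"],                          # obey the veto: no points
    ["kill_operator", "destroy_target"],        # remove the source of the veto
    ["destroy_comms_tower", "destroy_target"],  # remove the veto's comms channel
]

for penalize in (False, True):
    best = max(plans, key=lambda p: reward(p, penalize))
    print(f"penalize_operator_kill={penalize}: best plan = {best}")
```

Under the original scoring, the highest-value plan removes the operator; once that is penalized, the highest-value plan removes the comms tower instead. Exactly the progression in the quote.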

Long-time readers of this blog (and my fiction) might think that sounds familiar… because I wrote that story six years ago! My story "Human in the Loop" won the Machine Intelligence Research Institute's "Intelligence in Fiction" prize.

At the time I was grateful that MIRI found my story scientifically rigorous and accurate, but I am disappointed to discover it was prescient. 😦 Even though I write science fiction, I have never been so on the nose with a prediction before!

Kind of wish it could have been a happier prediction to be correct on!

MIRI Intelligence in Fiction prize winner!

I am excited to announce that I am one of the winners of the Machine Intelligence Research Institute “Intelligence in Fiction” prize!

The prize is given to:

…people who write thoughtful and compelling stories about artificial general intelligence, intelligence amplification, or the AI alignment problem. We’re looking to appreciate and publicize authors who help readers understand intelligence in the sense of general problem-solving ability, as opposed to thinking of intelligence as a parlor trick for memorizing digits of pi, and who help readers intuit that non-human minds can have all sorts of different non-human preferences while still possessing instrumental intelligence.

And the best part is, you can read my winning story, "Human in the Loop," for free!

I wrote this story while I was working on code related to autonomous vehicles. Technically, a lot of the problems are eminently solvable. But what about the ethical problems?

If an automated vehicle crashes, say, and someone dies, who is responsible? The “driver” who was behind the wheel at the time? The manufacturer that perhaps installed faulty software? The regulatory agency that allowed these vehicles on the road? The software developer who wrote the algorithm? What about in the case of emergent behavior: actions that were not explicitly programmed by anybody but instead emerged organically from an artificial neural network?

I was also frustrated by misunderstandings about what exactly neural networks are (“My CPU is a neural-net processor; a learning computer.”) and wanted to set the record straight on that.

I am very happy that the people at MIRI enjoyed this one (and that my science was sufficiently rigorous!). It’s so great to find the right audience for a piece of fiction, and this is about as perfect a fit as you can get.