New World: The Police want the cooperation of a robotic witness to a murder case, requesting Amazon’s help in recalling what the domestic robot “Echo” heard in the room. Robotic witnesses have been a theme in science fiction for a long time — and yet, we forgot to ask the most obvious and most important questions. Maybe we just haven’t realized that we’re in science fiction territory as far as robotic agents go, and haven’t explored the consequences of it: which robots have agency, and who can be coerced?
People were outraged that the Police would consider asking a robot – the Amazon Echo – what happened in the recent murder case, effectively activating retroactive surveillance. Even more so, people were outraged that the Police tried to coerce the robot’s manufacturer to provide the data, coercing a third party to command the robot it had manufactured, and denying agency to the people being searched.
In Isaac Asimov’s The Naked Sun, a human detective is sent off to faraway Solaria to investigate a murder, and has to interview a whole range of robot servants, each with their own perspective, to gradually piece together how the murder took place. A cooking robot knows about the last dinner of the victim, and can provide details only of that, and so on. Still, each and every robot has a perfect recollection of its particular perspective.
When reading this story in my teens, I didn’t reflect at all on the concept of a detective interviewing the victim’s robots. Once robots had data and could communicate, it felt like a perfectly normal view of things to come: they were witnesses to the scene, after all, with perfect memory and objective recall to boot. If anything, robots were more reliable and more desirable witnesses, because they wouldn’t lie for their own material benefit.
Today, we don’t see a toaster as having the agency required to be a legal witness. We surely don’t see an electronic water meter as being a conscious robot. We don’t consider our television set to have agency, or to be able to answer questions about what it saw us do and heard us say in our living room, the way a robot could be asked in a science fiction novel.
But is checking an electronic water meter’s log file really that different from asking a futuristic gardener robot what happened? And if so, what is the difference, apart from the specific way of asking (reality’s robots aren’t nearly as cool as the science fiction ones, at least not yet)?
Our world is full of sensors. That part can’t be expected to change. On Solaria, there were ten thousand robots per human being. I would not be surprised if there are already at least a hundred sensors per household in the Western world.
It comes down to ownership of – and agency of – these hundreds of sensors. Are they semi-independent? Can they be coerced by a government agency, against their owner’s consent? Can a government coerce a manufacturer to coerce their robots, negating property rights and consent rights?
These questions are fundamental. And their answers have enormous privacy implications. If society decides that today’s sensors-with-some-protointelligence are the equivalent of science fiction’s future Asimovian robots, then we’re already surrounded by hundreds of perfect witnesses to everything we do, all the time.
The science fiction authors wrote stories about how robots obeyed every human command (Asimov’s Second Law). The writers don’t seem to have given enough weight to the fact that humans have always used tools – and therefore also robots – to project force against each other in conflicts of interest. When one human orders another human’s robot to betray its owner, as in the Amazon Echo case, which human has priority, and why?
Privacy remains your own responsibility.
Syndicated article
This article has previously appeared on Private Internet Access.