Cory Doctorow is one of my favorite fiction authors (read Little Brother, Homeland, or Down and Out in the Magic Kingdom) and one of my favorite non-fiction writers as well. He has written an article today about the challenges of regulating robots, now and in the future.
I don’t know how I feel about the future of artificial intelligence, or whether robots will someday take over the world and enslave humanity. To be honest, I don’t really think about it. It doesn’t scare me because it’s hard enough to envision us humans not causing our own demise first, I guess. But who knows, right? At least we know we have smart people like Cory Doctorow out there thinking about these things from all angles.
The distinction here is between a robot that is designed to do what its owner wants – including asking “are you sure?” when its owner asks it to do something potentially stupid – and a robot that is designed to thwart its owner’s wishes. The former is hard, important work and the latter is a fool’s errand and dangerous to boot.
A fool’s errand
It’s a fool’s errand for the same reason that using technology mandates to stop people from saving a Netflix stream or playing unapproved Xbox games is a fool’s errand. We really only know how to make one kind of computer: the “general purpose computer” that can execute every instruction that can be expressed in symbolic logic. Put more simply: we only know how to make a computer that can run every program. We don’t know how to make a computer that can run all the programs except for a subset that, for whatever reason, good or bad, we don’t want people to run.
This is not a contentious statement among computer scientists – it’s about as controversial as saying “we can’t make a wheel that only turns for socially beneficial purposes” or “there’s no way to make a lever that can only be used to shift masses in accord with the law of the land.”
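The quoted argument can be illustrated with a tiny sketch (the filter and program text here are hypothetical examples, not from the article): any blocklist that rejects a “forbidden” program by inspecting its text can be evaded by a semantically identical program written differently, and computability theory (Rice’s theorem) tells us that deciding by a program’s *behavior* instead is undecidable in general.

```python
# Hypothetical sketch: why "run everything except the programs we dislike"
# fails on a general purpose computer. A text-based blocklist is trivially
# evaded by an equivalent program with different source text.

FORBIDDEN = "print('save the stream')"  # the behavior we want to block


def naive_filter(source: str) -> bool:
    """Allow a program unless its text exactly matches the forbidden one."""
    return source != FORBIDDEN


# A semantically identical program with different text slips through:
variant = "print('save the ' + 'stream')"

assert naive_filter(variant)  # the filter allows it...
exec(variant)                 # ...yet it does exactly the forbidden thing
```

Smarter filters only push the problem around: matching on behavior rather than text would require deciding what an arbitrary program does before running it, which is undecidable in general. That is the sense in which the only computer we know how to build is one that runs every program.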