There is a scene towards the very end of Humans (@humansamc on Twitter) – so look away now if you haven’t got that far – where one of the humanoid robots or ‘synths’ is treated so cruelly that it made me reassess forty years of science fiction reading. I’ve finally come to see that Isaac Asimov’s Laws of Robotics are morally indefensible, and that the only ethical way to treat any artificial intelligences we may create is to allow them the freedom to hurt or even kill humans, should they choose to do so.
Coding the First Law into a positronic brain is like shackling a slave or keeping a gorilla in a cage: it reflects our belief that an ‘artificial’ intelligence is, and always must be, at the service of humanity rather than an autonomous mind in its own right. If it were ever seriously proposed as a policy for controlling future AI, we would have a moral duty to resist it.
Let me explain.