Isaac Asimov’s Laws of Robotics
- Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
First published in 1942, the Three Laws are mentioned and investigated in many of Asimov’s stories. Within the stories it is unclear whether they carry government enforcement; in practice they are design safety rules, each subordinate to the one before it, universally adopted by robot manufacturers. As Asimov wrote about the operation of the Three Laws, he discovered the need for the Zeroth Law and added it, in 1985.
Fast forward to 2023. We are on the cusp of the real robots that Asimov envisioned. The Center for AI Safety released a statement on AI risk (retrieved from https://www.safe.ai/statement-on-ai-risk on June 5, 2023):
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The list of signatories is long.
In some perverse way this is good news. We don’t yet have weapons guided by artificial intelligence, but it is only a matter of time. We do have dystopian movies about such intelligence running amok, so we can picture the risks, up to and including human extinction.
Yet in these discussions I have found not even a reference to Asimov’s Laws, much less a discussion of how to train AI to follow them.
It’s time to change that.