An interesting blog post from Nicholas Carr referencing a presentation by John Canning, an engineer with the U.S. Naval Surface Warfare Center. As weapons become more and more automated, people are asking questions about ethics. The example quoted is interesting: it seems it is OK for a robot to kill a robot, but if a robot is to kill humans it must be guided by another human. Remember the Three Laws of Robotics?
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov used those three laws to show how simple rules could produce a myriad of complex, unforeseeable outcomes; a precursor of complexity approaches. However, he was focused on the prevention of harm, not on rules for creating it. Maybe we should look (this is for the sci-fi buffs of a certain age) to Theodore Sturgeon's 1944 short story Killdozer for rules of engagement?