I saw the humanoid robot that Russia developed try to walk across the stage and fall flat on its face. I’ve also seen that Tesla is developing a humanoid robot, and I’ve been amazed by the acrobatic stunts of the robot dogs from Boston Dynamics. Now I have heard that Figure AI, a Silicon Valley company, is building a humanoid robot, but that, in keeping with the Valley’s philosophy of “move fast and break things,” it has fired its principal robotic safety engineer, who now says that the company has almost no safeguards and that its robot has injured human workers. Meanwhile, driverless taxis face a real-life version of the Trolley Problem every day. What gives?
Signed,
Will Robinson
Dear Will:
Many years ago, Isaac Asimov coined the “Three Laws of Robotics,” which he said were printed in the “Handbook of Robotics, 56th Edition, 2058 A.D.” The laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These are not laws in the sense of having been enacted by a legislature; they are general philosophical principles. Still, they make sense as a guide for how we should expect autonomous robots to work with humans. Other writers have expanded on these laws or adapted them to more real-world situations, but ambiguous instances and paradoxical examples remain (these are also common in the laws that legislatures enact; we call them “loopholes,” and lawyers spend lots of time and effort finding and exploiting them).
A lawsuit filed by Robert Gruendel, the engineer fired by Figure AI, alleges that the company is developing a robot without any real safeguards at all. It is axiomatic that the only way a robot can obey Asimov’s laws, or avoid harming people, is for the manufacturer to program it to do so; the software that runs the robot must encode these protections. Mr. Gruendel’s lawsuit may give a court its first chance to examine the issue.
Predictably, the companies building robots are patenting aspects of their software, registering copyrights, and asserting that their systems are trade secrets. Legislation providing safeguards is far off, because our elected officials know very little about the subject, and corporations do not want any interference with their products or plans. The whole issue is also being eclipsed by the broader controversy over artificial intelligence.
So Will, the Doc feels it may be too late to warn you of the DANGER. We already see many robots scurrying around like Star Wars “droids,” and we will soon meet humanoid robots in our own lives. How they treat us depends on their corporate creators. And the Supreme Court has held that corporations are people (Citizens United v. FEC). The Doc is frightened.
Have a technology question that needs a legal perspective? Give the attorneys at LW&H a call. They understand the intersection of law and tech.
Until next month,
The “Doc”