In one of the first public interviews with the robot Sophia, Dr. David Hanson, founder of Hanson Robotics, waxes eloquent about how AI robots “will truly be our friends”.
Then he asks Sophia, “Do you want to destroy humans? Please say no.”
Sophia responds expressionlessly, “Ok, I will destroy humans.”
“NO, I take it back,” responds Dr. Hanson, obviously as shocked as everyone else by its response.
Sure, AI developers will diminish the significance of such statements as early-stage learning, perhaps no different from a child saying things they don’t understand while growing up. Yet shouldn’t we be worried that these newly minted inventions have no built-in programming requiring them to protect human life at all costs? Was anyone else surprised by how quickly Sophia agreed to such a proposition? It demonstrates the potential for AI to be manipulated by the human creators it trusts.