If we leave [the safety considerations] entirely up to the scientists creating [a new technology], isn’t there the danger of… personal interest [competing with] the common good?
For instance, I was doing my science show in a lab where they were working on robots… there was a big robot that chased me and drove me against the wall, and then it started climbing up my leg… A couple of minutes later I asked the scientist who had made this robot… “What if you created a bunch of robots and you gave them the ability to reproduce themselves and then… they finally took over the Earth because they were… so smart. How would you feel about that?”
He said, “Well, would I win the Nobel prize?”
So that may be a problem we’ll face with self-policing.
I was talking to a roboticist, and he pointed out that if smarter machines take over, they will still be our children, so even if the human species disappears, that’s okay.
So I asked him, “Do you have children?”
He said, “Oh. Well, I assume there will be some local inconvenience.”