Follow-up to: AI researchers on AI risk; Fredkin on AI risk in 1979.
Marvin Minsky is another AI scientist who has been thinking about AI risk for a long time, at least since the 1980s. Here he is in a 1983 afterword to Vinge’s novel True Names: 1
The ultimate risk comes when our greedy, lazy masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful… It will be tempting to do this, not just for the gain in power, but just to decrease our own human effort in the consideration and formulation of our own desires. If some genie offered you three wishes, would not your first one be, “Tell me, please, what is it that I want the most!” The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of ours, perhaps the well-meaning purpose of protecting us from ourselves, as in With Folded Hands, by Jack Williamson; or to protect us from an unsuspected enemy, as in Colossus by D.F. Jones…
And according to Eric Drexler (2015), Minsky was making the now-standard “dangerous-to-humans resource acquisition is a natural subgoal of almost any final goal” argument at least as early as 1990:
My concerns regarding AI risk, which center on the challenges of long-term AI governance, date from the inception of my studies of advanced molecular technologies, ca. 1977. I recall a later conversation with Marvin Minsky (he chaired my doctoral committee, ca. 1990) that sharpened my understanding of some of the crucial considerations: Regarding goal hierarchies, Marvin remarked that the high-level task of learning language is, for an infant, a subgoal of getting a drink of water, and that converting the resources of the universe into computers is a potential subgoal of a machine attempting to play perfect chess.
Footnotes: