The reason this book [Superintelligence]… has stuck with me is that I have found my mind changed on this topic, somewhat against my will.
…For almost all of my life… I would’ve placed myself very strongly in the camp of techno-optimists. More technology, faster… it’s nothing but sunshine and rainbows ahead… When people would talk about the “rise of the machines”… I was always very dismissive of this, in no small part because those movies are ridiculous… [and] I was never convinced there was any kind of problem here.
But [Superintelligence] changed my mind so that I am now much more in the camp of [thinking that the development of general-purpose AI] can seriously present an existential threat to humanity, in the same way that an asteroid collision… is what you’d classify as a serious existential threat to humanity — like, it’s just over for people.
…I keep thinking about this because I’m uncomfortable with having this opinion. Like, sometimes your mind changes and you don’t want it to change, and I feel like “Boy, I liked it much better when I just thought that the future was always going to be great and there’s not any kind of problem”…
…The thing about this book that I found really convincing is that it used no metaphors at all. It was one of these books which laid out its basic assumptions, and then just followed them through to a conclusion… The book is just very thorough at trying to go down every path and every combination of [assumptions], and what I realized was… “Oh, I just never did sit down and think through this position [that it will eventually be possible to build general-purpose AI] to its logical conclusion.”
Another interesting section begins at 1:46:35 and runs through about 1:52:00.