CGP Grey recommends Nick Bostrom’s Superintelligence:
The reason this book [Superintelligence]… has stuck with me is because I have found my mind changed on this topic, somewhat against my will.
…For almost all of my life… I would’ve placed myself very strongly in the camp of techno-optimists. More technology, faster… it’s nothing but sunshine and rainbows ahead… When people would talk about the “rise of the machines”… I was always very dismissive of this, in no small part because those movies are ridiculous… [and] I was never convinced there was any kind of problem here.
But [Superintelligence] changed my mind so that I am now much more in the camp of [thinking that the development of general-purpose AI] can seriously present an existential threat to humanity, in the same way that an asteroid collision… is what you’d classify as a serious existential threat to humanity — like, it’s just over for people.
…I keep thinking about this because I’m uncomfortable with having this opinion. Like, sometimes your mind changes and you don’t want it to change, and I feel like “Boy, I liked it much better when I just thought that the future was always going to be great and there’s not any kind of problem”…
…The thing about this book that I found really convincing is that it used no metaphors at all. It was one of these books which laid out its basic assumptions, and then just follows them through to a conclusion… The book is just very thorough at trying to go down every path and every combination of [assumptions], and what I realized was… “Oh, I just never did sit down and think through this position [that it will eventually be possible to build general-purpose AI] to its logical conclusion.”
Another interesting section begins at 1:46:35 and runs through about 1:52:00.
How I see this likely playing out…
1) We continue to advance AI-related algorithms to the point where we are unable to parse or understand the intricacy and complexity of our own constructs.
2) AI programs/devices will be so useful that they will become ubiquitous.
3) Market pressure will drive ever-increasing development of AI programs/devices.
4) At some point we will have AI programs/devices developed to “counteract or work against” other AIs, to the benefit of whoever holds the superior AI.
5) An arms race in AI development will commence, since the victor will have an overwhelming advantage over the loser.
At some point we will have inadvertently created a self-aware and autonomous AI that we have little understanding of and little control over.
This is probably the natural course of evolution, and I am not sure how to feel about it.
CGP, not CPG
Fixed, thanks.