Technology forecasts from The Year 2000

In The Age of Em, Robin Hanson is pretty optimistic about our ability to forecast the long-term future:

Some say that there is little point in trying to foresee the non-immediate future. But in fact there have been many successful forecasts of this sort.

In the rest of this section, Hanson cites eight examples of forecasting success. Two of his examples of “success” are forecasts of technologies that haven’t arrived yet: atomically precise manufacturing and advanced starships. Another of his examples is The Year 2000:

A particularly accurate book in predicting the future was The Year 2000, a 1967 book by Herman Kahn and Anthony Wiener (Kahn and Wiener 1967). It accurately predicted population, was 80% correct for computer and communication technology, and 50% correct for other technology (Albright 2002).

As it happens, when I first read this paragraph I had already begun to evaluate the technology forecasts from The Year 2000 for the Open Philanthropy Project, relying on the same source Hanson did for determining which forecasts came true and which did not (Albright 2002).

However, my assessment of Kahn & Wiener’s forecasting performance is much less rosy than Hanson’s. For details, see here.

Hanson on intelligence explosion, from Age of Em

Economist Robin Hanson is among the most informed critics of the plausibility of what he calls a “local” intelligence explosion. He’s written on the topic many times before (most of it collected here), but here’s one more take from him on it, from Age of Em:

…some people foresee a rapid local “intelligence explosion” happening soon after a smart AI system can usefully modify its own mental architecture…

In a prototypical local explosion scenario, a single AI system with a supporting small team starts with resources that are tiny on a global scale. This team finds and then applies a big innovation in AI software architecture to its AI system, which allows this team plus AI combination to quickly find several related innovations. Together this innovation set allows this AI to quickly become more effective than the entire rest of the world put together at key tasks of theft or innovation.

That is, even though an entire world economy outside of this team, including other AIs, works to innovate, steal, and protect itself from theft, this one small AI team becomes vastly better at some combination of (1) stealing resources from others, and (2) innovating to make this AI “smarter,” in the sense of being better able to do a wide range of mental tasks given fixed resources. As a result of being better at these things, this AI quickly grows the resources that it controls and becomes more powerful than the entire rest of the world economy put together, and so it takes over the world. And all this happens within a space of days to months.

Advocates of this explosion scenario believe that there exists an as-yet-undiscovered but very powerful architectural innovation set for AI system design, a set that one team could find first and then keep secret from others for long enough. In support of this belief, advocates point out that humans (1) can do many mental tasks, (2) beat out other primates, (3) have a common IQ factor explaining correlated abilities across tasks, and (4) display many reasoning biases. Advocates also often assume that innovation is vastly underfunded today, that most economic progress comes from basic research progress produced by a few key geniuses, and that the modest wage gains that smarter people earn today vastly underestimate their productivity in key tasks of theft and AI innovation. In support, advocates often point to familiar myths of geniuses revolutionizing research areas and weapons.

Honestly, to me this local intelligence explosion scenario looks suspiciously like a super-villain comic book plot. A flash of insight by a lone genius lets him create a genius AI. Hidden in its super-villain research lab lair, this genius villain AI works out unprecedented revolutions in AI design, turns itself into a super-genius, which then invents super-weapons and takes over the world. Bwa-ha-ha.

Many arguments suggest that this scenario is unlikely (Hanson and Yudkowsky 2013). Specifically, (1) in 60 years of AI research high-level architecture has only mattered modestly for system performance, (2) new AI architecture proposals are increasingly rare, (3) algorithm progress seems driven by hardware progress (Grace 2013), (4) brains seem like ecosystems, bacteria, cities, and economies in being very complex systems where architecture matters less than a mass of capable detail, (5) human and primate brains seem to differ only modestly, (6) the human primate difference initially only allowed faster innovation, not better performance directly, (7) humans seem to have beat other primates mainly via culture sharing, which has a plausible threshold effect and so doesn’t need much brain difference, (8) humans are bad at most mental tasks irrelevant for our ancestors, (9) many human “biases” are useful adaptations to social complexity, (10) human brain structure and task performance suggest that many distinct modules contribute on each task, explaining a common IQ factor (Hampshire et al. 2012), (11) we expect very smart AI to still display many biases, (12) research today may be underfunded, but not vastly so (Alston et al. 2011; Ulku 2004), (13) most economic progress does not come from basic research, (14) most research progress does not come from a few geniuses, and (15) intelligence is not vastly more productive for research than for other tasks.

(And yes, the entire book is roughly this succinct and dense with ideas.)

Books, music, etc. from August 2016

Books

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

  • Various, BoJack Horseman, Season 3 (2016)
  • Trey Parker, South Park, Season 19 (2015)
  • Steven Zaillian, The Night Of (2016)

Media I’m looking forward to, September 2016 edition

Books

* = added this round

Movies & Miniseries

(only including movies and miniseries which AFAIK have at least started principal photography)

  • C.K. & Adlon, Better Things (Sep 2016)
  • Villeneuve, Arrival (Sep 2016)
  • Dardenne brothers, The Unknown Girl (Oct 2016)
  • Malick, Voyage of Time (Oct 2016)
  • Lonergan, Manchester by the Sea (Nov 2016)
  • Nichols, Loving (Nov 2016)
  • Scorsese, Silence (Nov 2016)
  • Edwards, Rogue One (Dec 2016)
  • BBC Natural History Unit, Planet Earth II (TBD 2016)
  • Swanberg, Win It All (TBD 2016)
  • Farhadi, The Salesman (TBD 2016)
  • Reeves, War for the Planet of the Apes (Jul 2017)
  • Nolan, Dunkirk (Jul 2017)
  • Unkrich, Coco (Nov 2017)
  • Johnson, Star Wars: Episode VIII (Dec 2017)
  • Payne, Downsizing (Dec 2017)
  • Simon & Pelecanos, The Deuce (TBD)

Rockefeller’s chief philanthropy advisor

Frederick T. Gates was the chief philanthropic advisor to oil tycoon John D. Rockefeller, arguably the richest person in modern history and one of the era’s greatest philanthropists. Here’s a brief profile from the Rockefeller biography Titan (h/t @danicgross):

Like Rockefeller himself, Gates yoked together two separate selves—one shrewd and worldly, the other noble and high-flown…

After graduating from the seminary in 1880, Gates was assigned his first pastorate in Minnesota. When his young bride, Lucia Fowler Perkins, dropped dead from a massive internal hemorrhage after sixteen months of marriage, the novice pastor not only suffered an erosion of faith but began to question the competence of American doctors — a skepticism that later had far-reaching ramifications for Rockefeller’s philanthropies…

Eventually Gates became Rockefeller’s philanthropic advisor, and:

What Gates gave to his boss was no less vital. Rockefeller desperately needed intelligent assistance in donating his money at a time when he could not draw on a profession of philanthropic experts. Painstakingly thorough, Gates combined moral passion with great intellect. He spent his evenings bent over tomes of medicine, economics, history, and sociology, trying to improve himself and find clues on how best to govern philanthropy. Skeptical by nature, Gates saw a world crawling with quacks and frauds, and he enjoyed grilling people with trenchant questions to test their sincerity. Outspoken, uncompromising, he never hesitated to speak his piece to Rockefeller and was a peerless troubleshooter.

For some details on Rockefeller’s philanthropic successes, see here.

Philosophical habits of mind

In an interesting short paper from 1993, Bernard Baars and Katharine McGovern list several philosophical “habits of mind” and contrast them with typical scientific habits of mind. The philosophical habits of mind they list, somewhat paraphrased, are:

  1. A great preference for problems that have survived centuries of debate, largely intact.
  2. A tendency to set the most demanding criteria for success, rather than more achievable ones.
  3. Frequent appeal to thought experiments (rather than non-intuitional evidence) to carry the major burden of argument.
  4. More focus on rhetorical brilliance than testability.
  5. A delight in paradoxes and “impossibility proofs.”
  6. Shifting, slippery definitions.
  7. A tendency to legislate the empirical sciences.

I partially agree with this list, and would add several items of my own.

Obviously this list does not describe all of philosophy. Also, I think (English-language) philosophy as a whole has become more scientific since 1993.

Books, music, etc. from July 2016

Books

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

Media I’m looking forward to, August 2016 edition

Books

* = added this round

Movies & Miniseries

(only including movies and miniseries which AFAIK have at least started principal photography)

  • *C.K. & Adlon, Better Things (Sep 2016)
  • Cianfrance, The Light Between Oceans (Sep 2016)
  • Malick, Voyage of Time (Oct 2016)
  • Lonergan, Manchester by the Sea (Nov 2016)
  • Nichols, Loving (Nov 2016)
  • Scorsese, Silence (Nov 2016)
  • Edwards, Rogue One (Dec 2016)
  • *BBC Natural History Unit, Planet Earth II (TBD 2016)
  • Villeneuve, Story of Your Life (TBD 2016)
  • Dardenne brothers, The Unknown Girl (TBD 2016)
  • Swanberg, Win It All (TBD 2016)
  • Farhadi, The Salesman (TBD 2016)
  • Reeves, War for the Planet of the Apes (Jul 2017)
  • Nolan, Dunkirk (Jul 2017)
  • Unkrich, Coco (Nov 2017)
  • Johnson, Star Wars: Episode VIII (Dec 2017)
  • Payne, Downsizing (Dec 2017)
  • *Simon & Pelecanos, The Deuce (TBD)

Rapoport’s First Rule and Efficient Reading

Philosopher Daniel Dennett advocates following “Rapoport’s Rules” when writing critical commentary. He summarizes the first of Rapoport’s Rules this way:

You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

If you’ve read many scientific and philosophical debates, you’re aware that this rule is almost never followed. And in many cases it may be inappropriate, or not worth the cost, to follow it. But for someone like me, who spends a lot of time trying to quickly form initial impressions about the state of various scientific or philosophical debates, it can be incredibly valuable and time-saving to find a writer who follows Rapoport’s First Rule, even if I end up disagreeing with that writer’s conclusions.

One writer who, in my opinion, seems to follow Rapoport’s First Rule unusually well is Dennett’s “arch-nemesis” on the topic of consciousness, the philosopher David Chalmers. Amazingly, even Dennett seems to think that Chalmers embodies Rapoport’s First Rule. Dennett writes:

Chalmers manifestly understands the arguments [for and against type-A materialism, which is Dennett’s view]; he has put them as well and as carefully as anybody ever has… he has presented excellent versions of [the arguments for type-A materialism] himself, and failed to convince himself. I do not mind conceding that I could not have done as good a job, let alone a better job, of marshaling the grounds for type-A materialism. So why does he cling like a limpet to his property dualism?

As far as I can tell, Dennett is saying “Thanks, Chalmers, I wish I’d thought of putting the arguments for my view that way.”

And because of Chalmers’ clarity and fairness, I have found Chalmers’ writings on consciousness to be more efficiently informative than Dennett’s, even though my own current best-guesses about the nature of consciousness are much closer to Dennett’s than to Chalmers’.

Contrast this with what I find to be more typical in the consciousness literature (and in many other literatures), which is for an article’s author(s) to present as many arguments as they can think of for their own view, and to downplay, mischaracterize, or simply not mention the arguments against it.

I’ll describe one example, without naming names. Recently I read two papers, each of which had a section discussing the evidence for or against the “cortex-required view,” which is the view that a cortex is required for phenomenal consciousness. (I’ll abbreviate it as “CRV.”)

The pro-CRV paper is written as though it’s a closed case that a cortex is required for consciousness, and it doesn’t cite any of the literature suggesting the opposite. Meanwhile, the anti-CRV paper is written as though it’s a closed case that a cortex isn’t required for consciousness, and it doesn’t cite any literature suggesting that it is required. Their respective passages on CRV cite literally zero of the same sources. Each paper proceeds as though the entire body of literature cited by the other simply doesn’t exist.

If you happened to read only one of these papers, you’d come away with a very skewed view of the likelihood of the cortex-required view. You might eventually realize how skewed that view is, but if you’re reading only a few papers on the topic in order to form an impression quickly, you might not.

So here’s one tip for digging through some literature quickly: try to find out which expert(s) on that topic, if any, seem to follow Rapoport’s First Rule — even if you don’t find their conclusions compelling.

Seeking case studies in scientific reduction and conceptual evolution

Tim Minchin once said “Every mystery ever solved has turned out to be not magic.” One thing I want to understand better is: how, exactly, has that happened in history? In particular, how have our naive pre-scientific concepts evolved in response to, or been eliminated by, scientific progress?

Examples: What is the detailed story of how “water” came to be identified with H2O? How did our concept of “heat” evolve over time, including e.g. when we split it off from our concept of “temperature”? What is the detailed story of how “life” came to be identified with a large set of interacting processes, with unclear edge cases such as viruses decided only by convention? What is the detailed story of how “soul” was eliminated from our scientific ontology, rather than being remapped onto something “conceptually close” to our earlier conception of it but which actually exists?

I wish there were a handbook of detailed case studies in scientific reduction from a variety of scientific disciplines, but I haven’t found any such book yet. The documents I’ve found that are closest to what I want are perhaps:

Some semi-detailed case studies also show up in Kuhn, Feyerabend, etc., but they are typically buried in a mass of more theoretical discussion. I’d prefer histories that focus on the historical developments themselves.

Got any such case studies, or collections of case studies, to recommend?

Books, music, etc. from June 2016

Books

  • Mukherjee, The Gene [meh]
  • Balcombe, What a Fish Knows [meh]

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

  • Haigh, 45 Years (2015)
  • Strong & Lyn, Broadchurch Season 1 (2013)
  • Eggers, The Witch (2015)
  • Haynes, Carol (2015)
  • Nichols, Midnight Special (2016)
  • Various, Game of Thrones, Season 6 (2016)
  • Nemes, Son of Saul (2015)

Media I’m looking forward to, July 2016 edition

Books

* = added this round

Movies

(only including movies which AFAIK have at least started principal photography)

  • Liu, Batman: The Killing Joke (Jul 2016)
  • Cianfrance, The Light Between Oceans (Sep 2016)
  • *Malick, Voyage of Time (Oct 2016)
  • Lonergan, Manchester by the Sea (Nov 2016)
  • Nichols, Loving (Nov 2016)
  • Scorsese, Silence (Nov 2016)
  • Edwards, Rogue One (Dec 2016)
  • Villeneuve, Story of Your Life (TBD 2016)
  • Dardenne brothers, The Unknown Girl (TBD 2016)
  • Swanberg, Win It All (TBD 2016)
  • Farhadi, The Salesman (TBD 2016)
  • Reeves, War for the Planet of the Apes (Jul 2017)
  • Nolan, Dunkirk (Jul 2017)
  • Unkrich, Coco (Nov 2017)
  • Johnson, Star Wars: Episode VIII (Dec 2017)
  • Payne, Downsizing (Dec 2017)

Stich on conceptual analysis

Stich (1990), p. 3:

On the few occasions when I have taught the “analysis of knowledge” literature to undergraduates, it has been painfully clear that most of my students had a hard time taking the project seriously. The better students were clever enough to play fill-in-the-blank with ‘S knows that p if and only if _____’ … But they could not, for the life of them, see why anybody would want to do this. It was a source of ill-concealed amazement to these students that grown men and women would indulge in this exercise and think it important — and of still greater amazement that others would pay them to do it! This sort of discontent was all the more disquieting because deep down I agreed with my students. Surely something had gone very wrong somewhere when clever philosophers, the heirs to the tradition of Hume and Kant, devoted their time to constructing baroque counterexamples about the weird ways in which a man might fail to own a Ford… for about as long as I can remember I have had deep…misgivings about the project of analyzing epistemic notions.

Books, music, etc. from May 2016

Books

  • Christian & Griffiths, Algorithms to Live By
  • Dennett & LaScola, Caught in the Pulpit
  • Carroll, The Big Picture [my comments]
  • de Waal, Are We Smart Enough to Know How Smart Animals Are?
  • Dreger, Galileo’s Middle Finger [a quoted passage]

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

  • Lanthimos, The Lobster (2015)
  • Jones, The Homesman (2014)
  • Sciamma, Girlhood (2014)
  • Dumont, Li’l Quinquin (2014)
  • Aubier & Patar, Ernest & Celestine (2012)

Media I’m looking forward to, June 2016 edition

Books

* = added this round

Movies

(only including movies which AFAIK have at least started principal photography)

  • Stanton, Finding Dory (Jun 2016)
  • Liu, Batman: The Killing Joke (Jul 2016)
  • Cianfrance, The Light Between Oceans (Sep 2016)
  • Lonergan, Manchester by the Sea (Nov 2016)
  • Nichols, Loving (Nov 2016)
  • Scorsese, Silence (Nov 2016)
  • Edwards, Rogue One (Dec 2016)
  • Villeneuve, Story of Your Life (TBD 2016)
  • Dardenne brothers, The Unknown Girl (TBD 2016)
  • Swanberg, Win It All (TBD 2016)
  • Farhadi, The Salesman (TBD 2016)
  • Reeves, War for the Planet of the Apes (Jul 2017)
  • *Nolan, Dunkirk (Jul 2017)
  • Unkrich, Coco (Nov 2017)
  • Johnson, Star Wars: Episode VIII (Dec 2017)
  • Payne, Downsizing (Dec 2017)

Social justice and evidence

Galileo’s Middle Finger has some good coverage of several case studies in politicized science, and ends with a sermon on the importance of evidence to social justice:


When I joined the early intersex-rights movement, although identity activists inside and outside academia were a dime a dozen, it was pretty uncommon to run into evidence-based activists… Today, all over the place, one finds activist groups collecting and understanding data, whether they’re working on climate change or the rights of patients, voters, the poor, LGBT people, or the wrongly imprisoned…

The bad news is that today advocacy and scholarship both face serious threats. As for social activism, while the Internet has made it cheaper and easier than ever to organize and agitate, it also produces distraction and false senses of success. People tweet, blog, post messages on walls, and sign online petitions, thinking somehow that noise is change. Meanwhile, the people in power just wait it out, knowing that the attention deficit caused by Internet overload will mean the mob will move on to the next house tomorrow, sure as the sun comes up in the morning. And the economic collapse of the investigative press caused by that noisy Internet means no one on the outside will follow through to sort it out, to tell us what is real and what is illusory. The press is no longer around to investigate, spread stories beyond the already aware, and put pressure on those in power to do the right thing.

The threats to scholars, meanwhile, are enormous and growing. Today over half of American university faculty are in non-tenure-track jobs. (Most have not consciously chosen to live without tenure, as I have.) Not only are these people easy to get rid of if they make trouble, but they are also typically loaded with enough teaching and committee work to make original scholarship almost impossible… Add to this the often unfair Internet-based attacks on researchers who are perceived as promoting dangerous messages, and what you end up with is regression to the safe — a recipe for service of those already in power.

Perhaps most troubling is the tendency within some branches of the humanities to portray scholarly quests to understand reality as quaint or naive, even colonialist and dangerous. Sure, I know: Objectivity is easily desired and impossible to perfectly achieve, and some forms of scholarship will feed oppression, but to treat those who seek a more objective understanding of a problem as fools or de facto criminals is to betray the very idea of an academy of learners. When I run into such academics — people who will ignore and, if necessary, outright reject any fact that might challenge their ideology, who declare scientific methodologies “just another way of knowing” — I feel this crazy desire to institute a purge… Call me ideological for wanting us all to share a belief in the importance of seeking reliable, verifiable knowledge, but surely that is supposed to be the common value of the learned.

…I want to say to activists: If you want justice, support the search for truth. Engage in searches for truth. If you really want meaningful progress and not just temporary self-righteousness, carpe datum. You can begin with principles, yes, but to pursue a principle effectively, you have to know if your route will lead to your destination. If you must criticize scholars whose work challenges yours, do so on the evidence, not by poisoning the land on which we all live.

…Here’s the one thing I now know for sure after this very long trip: Evidence really is an ethical issue, the most important ethical issue in a modern democracy. If you want justice, you must work for truth.

Naturally, the sermon is more potent if you’ve read the case studies in the book.

The Big Picture

Sean Carroll’s The Big Picture is a pretty decent “worldview naturalism 101” book.

In case there’s a 2nd edition in the future, and in case Carroll cares about the opinions of a professional dilettante (aka a generalist research analyst without even a bachelor’s degree), here are my requests for the 2nd edition:

  • I think Carroll is too quick to say which physicalist approach to phenomenal consciousness is correct, and doesn’t present alternate approaches as compellingly as he could (before explaining why he rejects them). (See especially chs. 41-42.)
  • In the chapter on death, I wish Carroll had acknowledged that neither physics nor naturalism requires that we live lives as short as we now do, and that there are speculative future technological capabilities that might allow future humans (or perhaps some now living) to live very long lives (albeit not infinitely long lives).
  • I wish Carroll had mentioned Tegmark levels, maybe in chs. 25 or 36.

Check the original source

From Segerstrale (2000), p. 27:

In 1984 I was able to shock my class of well-intended liberal students at Smith College by giving them the assignment to compare [Stephan] Chorover’s [critical] representation of passages of [E.O. Wilson’s] Sociobiology with Wilson’s original text. The students, who were deeply suspicious of Wilson and spontaneous champions of his critics, embarked on this homework with gusto. Many students were quite dismayed at their own findings and angry with Chorover. This surely says something, too, about these educated laymen’s relative innocence regarding what can and cannot be done in academia.

I wish this kind of exercise were more common. Another exercise I would suggest is to compare critics’ representations of Dreyfus’ “Alchemy and Artificial Intelligence” with the original text (see here).

The first AI textbook, on the control problem

The earliest introductory AI textbook I know about — excluding mere “paper collections” like Computers and Thought (1963) — is Jackson’s Introduction to Artificial Intelligence (1974).

It discusses AGI and the control problem starting on page 394:

If [AI] research is unsuccessful at producing a general artificial intelligence, over a period of more than a hundred years, then its failure may raise some serious doubt among many scientists as to the finite describability of man and his universe. However, the evidence presented in this book makes it seem likely that artificial intelligence research will be successful, that a technology will be developed which is capable of producing machines that can demonstrate most, if not all, of the mental abilities of human beings. Let us therefore assume that this will happen, and imagine two worlds that might result.

[First,] …It is not difficult to envision actualities in which an artificial intelligence would exert control over human beings, yet be out of their control.

Given that intelligent machines are to be used, the question of their control and noncontrol must be answered. If a machine is programmed to seek certain ends, how are we to insure that the means it chooses to employ are agreeable to people? A preliminary solution to the problem is given by the fact that we can specify state-space problems to require that their solution paths shall not pass through certain states (see Chapter 3). However, the task of giving machines more sophisticated value systems, and especially of making them ‘ethical,’ has not yet been investigated by AI researchers…

The question of control should be coupled with the ‘lack of understanding’ question; that is, the possibility exists that intelligent machines might be too complicated for us to understand in situations that require real-time analyses (see the discussion of evolutionary programs in Chapter 8). We could conceivably always demand that a machine give a complete output of its reasoning on a problem; nevertheless that reasoning might not be effectively understandable to us if the problem itself were to determine a time limit for producing a solution. In such a case, if we were to act rationally, we might have to follow the machine’s advice without understanding its ‘motives’…

It has been suggested that an intelligent machine might arise accidentally, without our knowledge, through some fortuitous interconnection of smaller machines (see Heinlein, 1966). If the smaller machines each helped to control some aspect of our economy or defense, the accidental intelligence might well act as a dictator… It seems highly unlikely that this will happen, especially if we devote sufficient time to studying the non-accidental systems we implement.

A more significant danger is that artificial intelligence might be used to further the interests of human dictators. A limited supply of intelligent machines in the hands of a human dictator might greatly increase his power over other human beings, perhaps to the extent of giving him complete censorship and supervision of the public…

Let us now paint another, more positive picture of the world that might result from artificial intelligence research… It is a world in which man and his machines have reached a state of symbiosis…

The benefits humanity might gain from achieving such a symbiosis are enormous. As mentioned [earlier], it may be possible for artificial intelligence to greatly reduce the amount of human labor necessary to operate the economy of the world… Computers and AI research may play an important part in helping to overcome the food, population, housing, and other crises that currently grip the earth… Artificial intelligence may eventually be used to… partially automate the development of science itself… Perhaps artificial intelligence will someday be used in automatic teachers… and perhaps mechanical translators will someday be developed which will fluently translate human languages. And (very perhaps) the day may eventually come when the ‘household robot’ and the ‘robot chauffeur’ will be a reality…

In some ways it is reassuring that the progress in artificial intelligence research is proceeding at a relatively slow but regular pace. It should be at least a decade before any of these possibilities becomes an actuality, which will give us some time to consider in more detail the issues involved.