When you’re not sure what to think about something, or what to do in a certain situation, do you instinctively turn to a successful domain expert, or to someone you know who seems generally very smart?
I think most people don’t respect individual differences in intelligence and rationality enough. But some people in my local community tend to exhibit the opposite failure mode. They put too much weight on a person’s signals of explicit rationality (“Are they Bayesian?”), and place too little weight on domain expertise (and the domain-specific tacit rationality that often comes with it).
This comes up pretty often during my work for MIRI. We’re considering how to communicate effectively with academics, or how to win grants, or how to build a team of researchers, and some people (not necessarily MIRI staff) tend to lean heavily on the opinions of the most generally smart people they know, even though those smart people have no demonstrated expertise or success on the issue being considered. In contrast, I usually collect the opinions of some smart people I know, and then mostly just do what people with a long track record of success on the issue say to do. And that dumb heuristic seems to work pretty well.
Yes, there are nuanced judgment calls I have to make about who has expertise on what, exactly, and whether MIRI’s situation is sufficiently analogous for the expert’s advice to work at MIRI. And I must be careful to distinguish credentials-expertise from success-expertise (aka RSPRT-expertise: reliably superior performance on representative tasks). And this process doesn’t work for decisions on which there are no success-experts, like long-term AI forecasting. But I think it’s easier for smart people to overestimate their ability to model problems outside their domains of expertise, and to underestimate all the subtle things domain experts know, than the reverse.
I completely agree. As with all containers, one should very much distinguish capacity from load. The former is what the container can hold; the latter is what it does hold.
Sorry, I didn’t quite follow that. What’s the analogy you’re making?
As I understood it, he is making the point that:
“Intelligence determines the efficiency with which we can process evidence and learn from it, which one could call the ‘capacity’ to learn. But experts have actually encountered the evidence and incorporated it into their internal models (a higher ‘load’), which is what actually matters, even if they haven’t processed it in the most efficient way.”
A similar analogy appears in the following quote from Warren Buffett, though in his case it contrasts intelligence with rationality:
“The big thing is rationality. I always look at IQ and talent as representing the horsepower of the motor, but the output — the efficiency with which the motor works — depends on rationality. A lot of people start out with 400 horsepower motors but only get 100 horsepower of rationality. It is way better to have a 200 horsepower motor and get it all in output.” ~Warren Buffett
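To make the quote’s arithmetic explicit (a toy illustration of the analogy, not anything Buffett wrote), treat rationality as an efficiency multiplier on raw capacity:

```python
def effective_output(horsepower: float, rationality: float) -> float:
    """Toy model of Buffett's analogy: output = raw capacity * efficiency.

    `rationality` is a fraction in [0, 1]; the numbers are illustrative only.
    """
    return horsepower * rationality

# A 400 hp motor at 25% efficiency loses to a 200 hp motor used in full:
print(effective_output(400, 0.25))  # 100.0
print(effective_output(200, 1.00))  # 200.0
```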
My interpretation:
Capacity = rational people could develop better solutions if they acquired equivalent expertise
Load = at present, trained experts are generally better in their own subjects than rationalists without expertise
Tetlock’s research shows that elite university undergraduates << experts << algorithms << non-expert super-forecasters. Insofar as what we mean by rationality corresponds to what super-forecasters are doing, then at some attainable level it should make you better than experts at figuring out answers, regardless of relative expertise. OTOH, it’s not clear to what extent what we call rationality corresponds to super-forecasting, and it’s not clear what advantages experts might have in *asking* the right questions.
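For concreteness, the skill being compared here is typically measured with Brier scores (the Good Judgment Project’s metric): roughly, the mean squared error between probability forecasts and realized outcomes. A minimal sketch of the common binary form, with made-up numbers:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better: 0.0 is perfect, and always saying 50% scores 0.25.
    The numbers below are illustrative, not GJP data.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated, decisive forecaster beats a hedging one on the same questions:
print(brier_score([0.9, 0.2, 0.8], [1, 0, 1]))  # 0.03
print(brier_score([0.6, 0.4, 0.6], [1, 0, 1]))  # 0.16
```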
https://goodjudgmentproject.com/blog/
Thanks, Michael, this prompts me to clarify something about my post.
In this post by “expertise” I don’t mean credentials-expertise but instead what’s called RSPRT-expertise (“Reliably Superior Performance on Representative Tasks”). That’s why I emphasized “people with a long track record of success on the issue” rather than “someone with standard credentials in the domain,” and it’s why I gave “long-term AI forecasting” as an example domain where there *are* no experts — there are people with AI degrees and people who know a lot *about* forecasting but there aren’t people with measured RSPRT in long-term AI forecasting.
What Tetlock showed was that political experts tend not to be RSPRT-experts (at least on the task of geopolitical forecasting), just credentials-experts.
So I’m not necessarily recommending credentialed experts over people who seem generally smart (though that’s also often wise, since people who seem generally smart usually aren’t demonstrated superforecasters), but I *am* recommending RSPRT-experts over non-experts who seem generally smart.
I’ll edit the post to make this point clearer.
The experts in Tetlock’s earlier work didn’t put much effort in or get much feedback. If put through the same training regimen and feedback loops as the super-forecasters, I wouldn’t be surprised if they did better in their own domains.
The last point you make in the article is known as the Dunning-Kruger effect.
The problem we have is that expertise is not intelligence, but academia develops expertise. As a consequence, we have academic institutions full of experts (of varying ability) who are not particularly intelligent, i.e. who cannot adapt or reason flexibly. Their student output now populates ALL institutions. This is an impossible problem to solve, as academics will not admit that they are not intelligent, and even if they did, they could not rectify the situation.
It would be useful to determine what activities develop intelligence as opposed to the expertise training we see in education.