Robyn Dawes writes books about the rationality, or lack thereof, that we encounter in everyday life. The themes are much like those of “Predictably Irrational”, but Dawes's research has a more scientific, educational feel. If anything, this work is a reasonable bridge between “Predictably Irrational” and “The Signal and the Noise”.

- A proper linear model is one in which the weights given to the predictor variables are chosen in such a way as to optimize the relationship between the prediction and the criteria.
- An improper linear model is one in which the weights are chosen by some non-optimal method. They may be chosen to be equal, they may be chosen on the basis of intuition of the person making the prediction, or they may be chosen at random. Nevertheless, improper models may have great utility.
- Holt (1970) criticized details of several studies, and he even suggested that prediction as opposed to understanding may not be a very important part of clinical judgment. But a search of the literature fails to reveal any studies in which clinical judgment has been shown to be superior to statistical prediction when both are based on the same codeable input variables.
- People, especially the experts in a field, are much better at selecting and coding information than they are at integrating it.
- The linear model cannot replace the expert in deciding such things as “what to look for,” but it is precisely this knowledge of what to look for in reaching the decision that is the special expertise people have.
- The distinction between knowing what to look for and the ability to integrate information is perhaps best illustrated in a study by Einhorn (1972).
- In summary, proper linear models work for a very simple reason. People are good at picking out the right predictor variables and at coding them in such a way that they have a conditionally monotone relationship with the criterion. People are bad at integrating information from diverse and incomparable sources. Proper linear models are good at such integration when the predictions have a conditionally monotone relationship to the criterion.
- The bootstrapping models make use of the weights derived from the judges; because these weights are not derived from the relationship between the predictor and criterion variables themselves, the resulting linear models are improper. Yet these paramorphic representations consistently do better than the judges from which they were derived (at least when the evaluation of goodness is in terms of the correlation between predicted and actual values).
- Random linear models: models in which weights were randomly chosen except for sign and were then applied to standardized variables.
- On average, these random linear models perform about as well as the paramorphic models of the judges; equal-weighting models do even better.
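The contrast above (proper vs. improper linear models) is easy to see in simulation. Below is a minimal sketch, not from the book: the data, weights, and sample sizes are invented for illustration. It builds a criterion from standardized predictors, fits "proper" weights by least squares on half the data, and compares held-out prediction correlations against two improper models, equal weights and random positive weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5

# Standardized predictors and a criterion built from arbitrary
# positive "true" weights plus substantial noise (all assumptions).
X = rng.standard_normal((n, k))
true_w = rng.uniform(0.5, 2.0, k)
y = X @ true_w + rng.standard_normal(n) * 2.0

X_tr, X_te = X[:100], X[100:]
y_tr, y_te = y[:100], y[100:]

def corr(a, b):
    """Pearson correlation between predictions and criterion."""
    return np.corrcoef(a, b)[0, 1]

# Proper model: weights chosen to optimize fit on the training half.
w_ols, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Improper models: equal weights, and random weights correct in sign only.
w_equal = np.ones(k)
w_rand = rng.uniform(0.0, 1.0, k)

for name, w in [("proper (OLS)", w_ols), ("equal", w_equal), ("random", w_rand)]:
    print(f"{name}: r = {corr(X_te @ w, y_te):.3f}")
```

On held-out data the improper models typically land close to the proper one, which is Dawes's point: once the predictors are well chosen and signed correctly, the exact weights matter surprisingly little.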

- Essentially, the same results were obtained when the weights were selected from a rectangular distribution. Why? Because linear models are robust over deviations from optimal weighting. In other words, the bootstrapping finding, at least in these studies, has simply been a reaffirmation of the earlier finding that proper linear models are superior to human judgments.
- First, the distant future is in general less predictable than the immediate future, for the simple reason that more unforeseen, extraneous, or self-augmenting factors influence individual outcomes.
- Single instances often have greater impact on judgment than do much more valid statistical compilations based on many instances.
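The claim that "linear models are robust over deviations from optimal weighting" has a simple mechanism: real-world predictors tend to be positively intercorrelated, so any two same-sign weightings produce very similar composites. A small sketch, with an invented one-factor structure standing in for that intercorrelation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 5

# Positively intercorrelated predictors: a shared factor plus noise
# (the factor structure is an assumption for illustration).
factor = rng.standard_normal((n, 1))
X = 0.7 * factor + 0.5 * rng.standard_normal((n, k))

# Two arbitrary same-sign weight vectors yield nearly the same ranking.
w1 = rng.uniform(0.1, 1.0, k)
w2 = rng.uniform(0.1, 1.0, k)
r = np.corrcoef(X @ w1, X @ w2)[0, 1]
print(f"correlation between the two composites: r = {r:.3f}")
```

Because the shared factor dominates, the two composites correlate very highly, so "wrong" weights cost little predictive accuracy relative to optimal ones.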

Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail to Think Rationally

great summary, TY Jeff! – and btw, https://en.wikipedia.org/wiki/Robyn_Dawes says Dawes was male (1936-2010) … ^z