There is an interaction between science and a culture’s common-sense understanding of a subject. For example, although folk psychology is partly a biologically given human ability to reason about the mental states of others, the way people actually do so has been greatly influenced in recent times by Freudian ideas and by behaviorism. Unfortunately, the popular version of scientific ideas is often quite skewed or out of date.
I think there are two modern ideas where the public’s intuitive understanding is especially out of date, and there is a lot of benefit to be gained by improving that understanding. These ideas are not new at all, but they still have not propagated to the public very well. The two ideas are randomness and responsibility. Chaos and consequentialism. Probability and expected utility.
Randomness. Most people have an intuitive understanding of randomness which looks something like the D&D “chaotic” alignment: something looks random to the degree that it is unexpected and surprising. The gambler’s fallacy can follow from thinking as if events are actively trying to look well-mixed. States of maximal chaos are imagined to hold unexpected ordered objects (think of Douglas Adams’ Infinite Improbability Drive), when in fact maximum-entropy states tend to be rather boring.
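To make the independence point concrete, here is a minimal Python sketch (the simulation is my own illustration, not something from this post). It estimates the chance of heads immediately after a run of three tails; the gambler’s fallacy predicts heads is “due”, but the estimate stays at one half:

```python
import random

random.seed(0)

flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Look at the flip immediately after three tails in a row.
# The gambler's fallacy says heads should be "due"; independence says 0.5.
after_streak = [flips[i + 3]
                for i in range(len(flips) - 3)
                if not any(flips[i:i + 3])]  # all three flips were tails

print(f"P(heads | just saw TTT) = {sum(after_streak) / len(after_streak):.3f}")
# Prints approximately 0.500: the coin has no memory.
```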
Responsibility. People have some strange intuitions about how responsibility and blame should work. I find that people act as if blame is conserved: if you can attribute fault to one thing, there’s a feeling of release which makes you much less likely to look for other sources of fault. If blame does get spread out among many things or people, it seems to “stretch thin”, so that less rests on the shoulders of each point of blame. This does not make very much sense. If a fault has many causes, each needs to be addressed in its own right. This view implies, in particular, that you can’t get out of your share of responsibility just by pointing out someone else’s. In the aspiring rationalist community, this is called heroic responsibility. The point is sane reasoning about the consequences of your actions. There could be a moral-duty aspect, if you want to speak of such things, but it’s also just a brute fact of reality: if you act in ways which tend to improve the chances of getting what you want, you’ll tend to get what you want more often; the same cannot be said in favor of putting blame elsewhere. I’ve also heard this idea referred to as an “internal locus of control”.
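As a toy illustration of why blame shouldn’t stretch thin, consider an accident with two independent causes. The failure probabilities below are invented for the sketch; the point is that fixing one cause does not diminish the value of fixing the other:

```python
# A toy model of non-conserved blame; the probabilities are invented.
# An accident happens if either of two independent causes fires.

p_a, p_b = 0.1, 0.1  # P(cause A fires), P(cause B fires)

def p_accident(pa: float, pb: float) -> float:
    """An accident occurs unless both causes are absent."""
    return 1 - (1 - pa) * (1 - pb)

print(f"fix nothing: {p_accident(p_a, p_b):.2f}")  # 0.19
print(f"fix A only:  {p_accident(0.0, p_b):.2f}")  # 0.10
print(f"fix B only:  {p_accident(p_a, 0.0):.2f}")  # 0.10
print(f"fix both:    {p_accident(0.0, 0.0):.2f}")  # 0.00
# Fixing A does not shrink the benefit of fixing B: each cause
# needs addressing in its own right, regardless of the others.
```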
You can’t really impose this kind of responsibility on someone else. It’s compatible with constructive criticism, but not with blame. The kind of responsibility I’m talking about is a favor to yourself, not to other people. (I mean, it may also be a favor to other people, if you care about those people and decide to help them. But then it’s because you decided you care.)
Now, there isn’t a perfect consensus on these issues. For probability, there’s the debate between Bayesians and frequentists. I happen to think the Bayesian perspective is superior, and that it points to a specific understanding of randomness as a subjective phenomenon (so that randomness and uncertainty are really the same thing). I will say things slanted toward that perspective, but I think there’s something to be gained just from the uncontroversial laws of probability theory, applied to the kinds of events everyone would agree we can apply them to.
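For a taste of what the uncontroversial laws already buy you, here is a minimal worked example of Bayes’ rule (the test and its numbers are invented for illustration). A positive result on a fairly accurate test for a rare condition still leaves the posterior well below the test’s accuracy, because the base rate matters:

```python
# A minimal Bayes'-rule sketch; the numbers are invented for illustration.
# A test for a rare condition: how much should a positive result move you?

prior = 0.01           # P(condition): 1% base rate
sensitivity = 0.95     # P(positive | condition)
false_positive = 0.05  # P(positive | no condition)

# Law of total probability, then Bayes' rule.
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive) = {posterior:.3f}")  # ~0.161, not 0.95
```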
Similarly, there are many versions of, and alternatives to, consequentialism. There’s the debate between causal decision theory and evidential decision theory, and there are the alternatives of deontology and virtue ethics. Again, although my remarks will be a little biased toward consequentialist thinking, I think what I’m pointing at is mostly common ground, though it isn’t codified by an uncontroversial set of mathematical laws the way probability theory is. The perspective I’m putting forward here can be understood through the lens of expected utility theory, but I suspect it makes roughly as much sense in alternative frameworks.
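The expected-utility lens can be stated in a few lines. This is a minimal sketch with hypothetical actions, outcome probabilities, and utilities of my own choosing: score each action by its probability-weighted utility and pick the best:

```python
# A minimal expected-utility sketch. The actions, probabilities, and
# utilities are hypothetical placeholders, not anything from this post.

actions = {
    "take the safe route":  [(1.0, 5.0)],                # (probability, utility)
    "take the risky route": [(0.3, 20.0), (0.7, 0.0)],
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over an action's outcomes."""
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: EU = {expected_utility(outcomes):.2f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(f"choose: {best}")  # the risky route, at EU 6.00 vs 5.00
```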
Now, I can’t just say “do probability correctly” or “decide what you want and go about getting it in a sane manner” and call it good. Both of these are complicated skills which take a significant amount of development. However, something useful I can do is make a list of the important things you can try to get right.
Awareness of the general shape of each of these is (I think) quite helpful. Of course, turning explicit awareness into a deeper intuition which shapes your reflexes regarding randomness and responsibility is more difficult. It requires noticing what intuitions are currently shaping your thinking, and stepping in to re-shape those intuitions by thinking in new ways until the new ways become habit.
I don’t think any of this is too surprising to readers here, but I think there is something to be gained by arranging it in this way. The two categories correspond to epistemic rationality and instrumental rationality. By no means have I listed all the important points (or even the most important ones) which go under those two headings, but I encourage you to try extending the list yourself.
(Thanks to Philip Parker for some conversation about this post and ideas for points.)