Comments: |
I love this essay.
I love it because it's an articulation of a serious argument that I respect but still end up ultimately opposed to.
I've spent a lot of time considering "What should a person do about weird claims?" The stuff that *sounds* like the ideas of a crackpot, but potentially a crackpot so clever that you can't see a hole in his reasoning -- and, also, potentially not a crackpot at all but an insightful, correct thinker. I used to have roughly the same conclusion as you. And roughly the same problem with a tendency to believe the last thing I read, and along with it, a fear of reading things that might delude me.
But the thing is, I've come to the conclusion that it's not actually that hard to make your own judgments about ideas. I was confused about strong AI for a while. What did I do? I read a bunch of papers and textbooks. I talked to my friends who were AI researchers. I still don't *really* know what's going on because I never really learned mathematical logic, but it's a hell of a lot better than a black box. I know *some* mathematics, and I can tell the difference between a proof and a hand-wavy argument, and I've had independent confirmation of the falseness of the ideas I was skeptical about...I'm pretty sure, sure enough to go on with my life, that my picture of "what's up with AI" is more or less accurate.
I'm learning how to do this with biomedical research papers. I am not a biologist so I have to black-box a lot, but not *everything*. I can tell that claims with five conjunctive hypotheses are less likely than claims with one. I can tell when a study was done with 15 subjects or 15,000. I can certainly evaluate statistical methodology. I can come to estimates of my true beliefs -- not high confidence, but not all that biased, and way better than learned helplessness.
I don't go to the trouble of doing this with everything. I haven't checked out climate change skeptics, because I don't know fluid dynamics and I'm a little scared of the work involved in learning. But mostly, my heuristic is, "When confronted with a weird claim that would be really interesting if true and isn't immediately obvious as bullshit, it's worth checking Wikipedia and reading one scholarly paper. If I'm still uncertain and still interested, it's worth reading several more scholarly papers and asking experts I know."
A lot of bunk is not that hard to debunk. I looked through an 1880 book of materia medica (herbal medicine) once; most treatments were not just useless but poisonous, and it took 30 seconds of googling to find that out. (Oil of tansy will *fuck you up*, ladies and gentlemen.)
A good all-purpose scientist can more or less trust his/her bullshit-o-meter. You should know where you're least able to evaluate claims explicitly (for me, that's physics, chemistry, and anything to do with war or foreign policy) and use implicit meta-techniques (were their results reproducible? do they make a lot of conjunctive claims? that sort of thing). But often, I can just *go in and check the math.* Tim Ferriss makes arithmetic errors in his books. You don't have to be a fitness expert to catch them.
I'm no longer afraid of being deluded by charlatans. I wouldn't go to a Scientology meeting, because they engage in physical brainwashing, but I can read racists without becoming a racist, read homeopaths without becoming a homeopath, and so on. I've banged my brain against a *lot* of things, and come out more or less clean.
Maybe not everyone can do this (my education certainly helped a lot), but it is *possible*, and I think most people who are comparably educated and bright (e.g. you) can get better at evaluating weird claims themselves and do better than they would with epistemic learned helplessness.
But I know people with science PhDs who sound just as self-aware and confident, but who think global warming and Keynesianism are hoaxes, that there's some huge cover-up going on regarding Benghazi, and that Obama's coming for our guns any day now. (This was before the election, so before the Sandy Hook mass shooting.)
>Jonah tells me of a guy in Seattle who is now living according to the principles of Islam
I heard it was Catholicism. (Unless there's a second guy in Seattle who believes in Pascal's Wager and destroying nature to reduce animal suffering, which would be...surprising.) But he's taken down that page on his site, so he might have changed his mind.
I've never met him, but as far as I can tell, he does take his ideas seriously.
I thought I heard Islam...or, actually I think I just inferred he chose Islam from hearing the general story and then a separate comment that he's keeping halal, but he might be a Catholic who keeps halal to hedge his bets.
From: (Anonymous) 2013-01-03 02:12 pm (UTC)
"Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging "
The doomsday argument doesn't belong on this list. If you follow what Bostrom calls the "self-indication assumption," then the doomsday argument is obviously false. The alternative to the self-indication assumption that Bostrom uses to make the argument seems nonsensical; note, for example, that if there is another civilization in parallel with the one you care about, the probabilities change under his alternative.
The trouble with rationalist skills is that the opposite of every rationalist skill is also a rationalist skill. We have the Inside View, and the Outside View. Overconfidence is a problem, but so is underconfidence. You're supposed to listen to the tiniest note of mental discord, yet sometimes it's necessary to shut loud mental voices out. And while knowing the standard catalog of biases is obviously crucial for the aspiring rationalist, it can also hurt you. Et cetera, et cetera. Furthermore, everything exists for a reason -- including things we've decided are bad. Which means that bad things are inevitably -- or at least typically -- going to be good for something, some of the time. Yet they're still bad.
Epistemic learned helplessness may have its uses, but it's certainly not something I would want to celebrate, or -- heaven forbid -- teach. "Normal" people have it by default already, and they already err too much in that direction (to the point, some would argue, of literally killing themselves, e.g. by not signing up for cryonics). I think I'd gladly accept an increased number of homeopaths and terrorists in order to gain an increase in the average rationality of the population as a whole. "Think for yourself" is still a good meme, despite the fact that for most people it's actually a bad idea and they would do better by just following the right guru. (How do you know which guru is the right one in the first place?)
""Normal" people have it by default already, and they already err too much in that direction (to the point, some would argue, of literally killing themselves, e.g. by not signing up for cryonics). I think I'd gladly accept an increased number of homeopaths and terrorists in order to gain an increase in the average rationality of the population as a whole."
I think your argument that they err too much in that direction requires more support than you give it here. If we relaxed the average person's epistemic learned helplessness, I think we would get many new terrorists and homeopaths for each new better-than-average person we got.
Worth reading, for those who haven't: Anna Salamon's "Making your explicit reasoning trustworthy." Key quote: "When some lines of argument point one way and some another, don't give up or take a vote. Instead, notice that you're confused, and (while guarding against confirmation bias!) seek follow-up information."
Thomas Aquinas deliberately wrote in the flattest style he could muster, so that his errors would not be swept along in rhetorical charms.
I think you're making this into something bigger than it is. Arguments are mental models of reality. Mental models are incredibly error prone. Don't trust a mental model of reality that hasn't been tested against reality. Know how to test a mental model against reality. (Caveat: some mental models are designed such that their flaws are made obvious, like math but not like formal logic with inductive premises.)
In software engineering, this is called unit testing.
Unfortunately, testing against reality is exactly the problem. Usually the evidence leaves a certain number of degrees of freedom (which of various conflicting studies you believe were done well vs. poorly, how you interpret the evidence, etc.).
I agree the questions you can trivially test against reality (like simple physics questions) are the ones that are least vulnerable to crackpottery.
From: (Anonymous) 2013-01-03 02:58 pm (UTC)
On "destroy nature guy": I've previously had the thought that maybe the world would be a better place with far fewer non-human animals in it. What kept me from exploring this possibility further is that, if I'm honest with myself, I don't care all that much about animal suffering.
To give you a better idea of the extent of my (non-)caring: I care enough to have been a lacto-ovo vegetarian for a few years, but then someone persuaded me that eggs may contribute more to animal suffering than beef, and I said, "okay... I care about animal suffering, but not enough to go full vegan," and went back to being an omnivore.
Oops. That was my comment. I accidentally failed to select my usual "use Facebook to post" option.
>"If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don't want to hear about it."
Hey, wait a minute -- didn't you say somewhere before that you liked reading contrarian arguments?
I'm not sure what exact quote you're thinking of, but it seems plausible. But I mostly like them when I expect to learn something from them, not when I expect to be bewildered by them.
For 99% of the cases you're worried about, I think a better solution than "don't trust your own reason" is "remember that sound pure *a priori* arguments are very rare, and that believing one person's argument without further investigation is just trusting them to get the empirical stuff right, not only by not saying anything false but also by not omitting relevant evidence."
But it seems we've had very different formative experiences in this area. My experience reading replies and counter-replies in things like the evolution-creationism debate, or Christian apologetics more generally, is that it does eventually become clear who's right and who's full of shit.
My experiences are similar to celandine13's in this way. I wouldn't necessarily say it's "not that hard," as she does, but "doable eventually with time," yeah.
We may also differ psychologically, in that if I read one thing and *don't* have time to read the replies and counter-replies, I find it easy to suspend judgment. Your previous comments about your reluctance to read "Good Calories, Bad Calories" suggest you find this hard, so I'd point out that what you need to do to compensate for that problem doesn't necessarily apply to other people.
The evolution/creation debate is a special case for a few reasons. First, the prior is so skewed in favor of evolution that it's hard to take creationism seriously. Even in the rare cases where there's a superficially good creationist argument (right now this and my uncle's version of irreducible complexity are my two go-to examples of creationists who at least seem to be putting a little effort into their sophistry), I've never been at risk of taking it seriously; I always just think, "Wow, these people are quite skilled at sophistry." Other fields where I am less certain of the consensus position don't give me that feeling, so I get less of an advantage from hindsight bias.
Second, there is a really good community of evolutionists, some of them experts in the field, who devote a lot of effort to point-by-point rebuttals of creationist arguments. This is incredibly valuable; some of the better arguments I don't think I would be able to rebut on my own without a daunting amount of work and research. But this is pretty uncommon: real historians rarely address pseudohistorians (Sagan's critique of Velikovsky was a welcome counterexample), and I've never been able to find a mainstream nutritionist really address the paleo people. I am constantly disappointed in the skeptic community, who tend to be domain non-experts in these fields, fail to take the arguments seriously, resort to ad hominems, or don't even bother to understand the opposing position (for example, the number of people who try to tell homeopaths they're wrong because their concoctions don't contain even an atom of the active ingredient, even though homeopaths understand this and their theories actually depend upon it, is amazing). So the arguments on many of these topics are very one-sided, which isn't a problem evolution arguments have.
Last of all, I'm surprised you've found Christian apologetics in general to be an easy issue. I've been constantly impressed with tektonics.org, and every time I look at them I end up thinking their defenses of certain Biblical points are much stronger than the atheist attacks upon them (this could be because atheists massively overattack the Bible; the Bible being mostly historically accurate, or not having that many contradictions, is perfectly consistent with religion being wrong in general). The camel issue comes to mind as the last time I had this feeling, although apparently that's not tektonics at all and I might be confusing my apologetics sites.
>Also, he wants to destroy nature in order to decrease animal suffering.
I don't. But I do think that what to do about the "Darwinian holocaust" is a troubling problem for consequentialism.
http://lesswrong.com/lw/82g/on_the_openness_personality_trait_rationality/ seems relevant.
> (This is the correct Bayesian action, by the way. If I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way, and I should ignore it and stick with my prior.)
It's correct, I suspect, only with additional assumptions -- for example, that you are already average or above average, so that accepting new arguments at random hurts you. If you aren't, then you can do better. For example, if you hold 50% false beliefs, but 90% of the arguments you are given are true and 10% are false, and the false ones are exactly as convincing as the true ones, then you'll still improve on your 50% falsity by ignoring convincingness and believing everything you're told.
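A minimal sketch of that arithmetic in Python, using only the made-up numbers from the comment (50% initially-true beliefs, a 90%-true argument stream); the variable names and the 10,000-topic count are illustrative assumptions, not anything from the comment:

```python
import random

# Toy model of the commenter's point: an agent starts with beliefs that
# are true only 50% of the time, then hears one argument per topic from
# a stream that is 90% true / 10% false, and believes every argument,
# ignoring how convincing each one sounds.
random.seed(0)

n_topics = 10_000        # illustrative number of independent topics
p_prior_true = 0.5       # fraction of initial beliefs that are true
p_argument_true = 0.9    # fraction of incoming arguments that are true

# Before: each belief is independently true with probability 0.5.
beliefs = [random.random() < p_prior_true for _ in range(n_topics)]
print("accuracy before:", sum(beliefs) / n_topics)   # ~0.50

# After: adopt every argument's conclusion wholesale.
beliefs = [random.random() < p_argument_true for _ in range(n_topics)]
print("accuracy after: ", sum(beliefs) / n_topics)   # ~0.90
```

Which is the commenter's point: when the argument stream is mostly true, indiscriminate credulity beats sticking with a 50%-accurate prior.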
It would be a neat trick to acquire 50% false beliefs in an environment where 90% of what you're told is true.
The basic defense against Pascal's Mugging and such is to treat "epsilon" probabilities as equal to zero. So it doesn't matter how severe the offered consequence is since it's getting multiplied by zero anyway.
One of my preferred approaches is construction of a Pascal's Mugging compelling a conflicting course of action. If there's no practical way to judge which "infinity is larger", inaction wins by default.
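A minimal sketch of the epsilon-truncation rule from the comment two above, in Python; the function name, the threshold value, and the example numbers are my own illustrative choices, not anything the commenter specified:

```python
def truncated_expected_loss(p, loss, epsilon=1e-9):
    """Treat probabilities below epsilon as exactly zero before
    multiplying, so an arbitrarily huge claimed loss can't dominate."""
    if p < epsilon:
        p = 0.0
    return p * loss

# A mugger claims a 1-in-10^20 chance of a 10^100-unit disaster:
print(truncated_expected_loss(1e-20, 1e100))   # 0.0 -- the threat is ignored
# An ordinary risk is unaffected by the truncation:
print(truncated_expected_loss(0.01, 1000))     # 10.0
```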
I very much agree with this post!
Another point that complements yours: people often rationalize to convince themselves of something. People also love to argue and to convince others of things. Smart people are better at this, so they do it more.
So smart people are open to good arguments, because the best arguments they hear are usually their own. They not only lack negative associations from harmful arguments that convinced them in the past, but they have positive associations with arguments they themselves made up, which convinced others.
How high a level did your business friend want to work at? I mean, there's certainly plenty of room to argue about capitalism (I've seen otherwise rational-seeming people passionately arguing that the only possible economic system is free market capitalism, which if done properly is completely impeccable and divinely preserved from all sin - presumably courtesy of the 'invisible hand' - and is the only one true way to liberty, happiness, democracy and all good things) but perhaps your friend just means he wants people who will believe, after being presented with the evidence, that option A is the only one that will work in this situation and that no, it is not because "You don't care about social justice!" or whatever.
>Bostrom's simulation argument, the anthropic doomsday argument, Pascal's Mugging - I've never heard anyone give a coherent argument against any of these, but I've also never met anyone who fully accepts them and lives life according to their implications.
When I think about these arguments, I don't actually see how I'd change my life if I believed them, not in any meaningful way. I actually do believe Bostrom's simulation argument, in the sense that my prior for it was ~0%, but now it's more like 60%, which is a huge move. How has it changed my life? It means I can argue with singularitarian atheists in a more entertaining way, by pointing out that if they are in a sim, then someone created it, and that someone can be considered our God for all intents and purposes. But other than debates, I don't think my life is much different. I also don't believe in free will, but there's no particular way to operationalize that belief. (And if there were, could I do it?)
The others are pretty similar. Pascal's Mugging, which I cheerfully fail to believe -- human reasoning about morality is completely horrible when the numbers get big, so I don't even try -- doesn't affect me in any real way regardless of what I believe. If someone actually tried to Pascal's-mug me, I think that would be an entertaining novelty. And I can't think of why the anthropic doomsday argument should change my behavior either, though I'm very suspicious of an argument that would have been just as convincing, but totally wrong, at earlier points in history. So what am I missing? If someone believed those things, how could you tell from their behavior?
From: (Anonymous) 2013-01-03 10:55 pm (UTC)
If you really think you're likely living in a simulation, you may be interested in this essay by Robin Hanson about how you should change your behaviour if you are:
http://www.transhumanist.com/volume7/simulation.html