LessWrong-2025-09-24 / lesswrong_2006.json
{
"data": {
"posts": {
"results": [
{
"_id": "6hfGNLf4Hg5DXqJCF",
"title": "A Fable of Science and Politics",
"pageUrl": "https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of-science-and-politics",
"postedAt": "2006-12-23T04:50:47.000Z",
"baseScore": 370,
"voteCount": 321,
"commentCount": 102,
"url": null,
"contents": {
"documentId": "6hfGNLf4Hg5DXqJCF",
"html": "<p>In the time of the Roman Empire, civic life was divided between the Blue and Green factions. The Blues and the Greens murdered each other in single combats, in ambushes, in group battles, in riots. Procopius said of the warring factions: “So there grows up in them against their fellow men a hostility which has no cause, and at no time does it cease or disappear, for it gives place neither to the ties of marriage nor of relationship nor of friendship, and the case is the same even though those who differ with respect to these colors be brothers or any other kin.”<sup>1</sup> Edward Gibbon wrote: “The support of a faction became necessary to every candidate for civil or ecclesiastical honors.”<sup>2</sup></p><p>Who were the Blues and the Greens? They were sports fans—the partisans of the blue and green chariot-racing teams.</p><p>Imagine a future society that flees into a vast underground network of caverns and seals the entrances. We shall not specify whether they flee disease, war, or radiation; we shall suppose the first Undergrounders manage to grow food, find water, recycle air, make light, and survive, and that their descendants thrive and eventually form cities. Of the world above, there are only legends written on scraps of paper; and one of these scraps of paper describes the <i>sky</i>, a vast open space of air above a great unbounded floor. The sky is cerulean in color, and contains strange floating objects like enormous tufts of white cotton. But the meaning of the word “cerulean” is controversial; some say that it refers to the color known as “blue,” and others that it refers to the color known as “green.”</p><p>In the early days of the underground society, the Blues and Greens contested with open violence; but today, truce prevails—a peace born of a growing sense of pointlessness. Cultural mores have changed; there is a large and prosperous middle class that has grown up with effective law enforcement and become unaccustomed to violence. 
The schools provide some sense of historical perspective; how long the battle between Blues and Greens continued, how many died, how little changed as a result. Minds have been laid open to the strange new philosophy that people are people, whether they be Blue or Green.</p><p>The conflict has not vanished. Society is still divided along Blue and Green lines, and there is a “Blue” and a “Green” position on almost every contemporary issue of political or cultural importance. The Blues advocate taxes on individual incomes, the Greens advocate taxes on merchant sales; the Blues advocate stricter marriage laws, while the Greens wish to make it easier to obtain divorces; the Blues take their support from the heart of city areas, while the more distant farmers and watersellers tend to be Green; the Blues believe that the Earth is a huge spherical rock at the center of the universe, the Greens that it is a huge flat rock circling some other object called a Sun. Not every Blue or every Green citizen takes the “Blue” or “Green” position on every issue, but it would be rare to find a city merchant who believed&nbsp;the sky was blue, and yet advocated an individual tax and freer marriage laws.</p><p>The Underground is still polarized; an uneasy peace. A few folk genuinely think that Blues and Greens should be friends, and it is now common for a Green to patronize a Blue shop, or for a Blue to visit a Green tavern. Yet from a truce originally born of exhaustion, there is a quietly growing spirit of tolerance, even friendship.</p><p>One day, the Underground is shaken by a minor earthquake. A sightseeing party of six is caught in the tremblor while looking at the ruins of ancient dwellings in the upper caverns. They feel the brief movement of the rock under their feet, and one of the tourists trips and scrapes her knee. The party decides to turn back, fearing further earthquakes. 
On their way back, one person catches a whiff of something strange in the air, a scent coming from a long-unused passageway. Ignoring the well-meant cautions of fellow travellers, the person borrows a powered lantern and walks into the passageway. The stone corridor wends upward . . . and upward . . . and finally terminates in a hole carved out of the world, a place where all stone ends. Distance, endless distance, stretches away into forever; a gathering space to hold a thousand cities. Unimaginably far above, too bright to look at directly, a searing spark casts light over all visible space, the naked filament of some huge light bulb. In the air, hanging unsupported, are great incomprehensible tufts of white cotton. And the vast glowing ceiling above . . . the <i>color</i> . . . is . . .</p><p>Now history branches, depending on which member of the sightseeing party decided to follow the corridor to the surface.</p><blockquote><p>Aditya the Blue stood under the blue forever, and slowly smiled. It was not a pleasant smile. There was hatred, and wounded pride; it recalled every argument she’d ever had with a Green, every rivalry, every contested promotion. <i>“You were right all along,” </i>the sky whispered down at her, <i>“and now you can prove it.” </i>For a moment Aditya stood there, absorbing the message, glorying in it, and then she turned back to the stone corridor to tell the world. As Aditya walked, she curled her hand into a clenched fist. “The truce,” she said, “is over.”</p></blockquote><p>&nbsp;</p><blockquote><p>Barron the Green stared uncomprehendingly at the chaos of colors for long seconds. Understanding, when it came, drove a pile-driver punch into the pit of his stomach. Tears started from his eyes. 
Barron thought of the Massacre of Cathay, where a Blue army had massacred every citizen of a Green town, including children; he thought of the ancient Blue general, Annas Rell, who had declared Greens “a pit of disease; a pestilence to be cleansed”; he thought&nbsp;of the glints of hatred he’d seen in Blue eyes and something inside him cracked. <i>“How can you be on their side?” </i>Barron screamed at the sky, and then he began to weep; because he knew, standing under the malevolent blue glare, that the universe had always been a place of evil.</p></blockquote><p>&nbsp;</p><blockquote><p>Charles the Blue considered the blue ceiling, taken aback. As a professor in a mixed college, Charles had carefully emphasized that Blue and Green viewpoints were equally valid and deserving of tolerance: The sky was a metaphysical construct, and cerulean a color that could be seen in more than one way. Briefly, Charles wondered whether a Green, standing in this place, might not see a green ceiling above; or if perhaps the ceiling would be green at this time tomorrow; but he couldn’t stake the continued survival of civilization on that. This was merely a natural phenomenon of some kind, having nothing to do with moral philosophy or society . . . but one that might be readily misinterpreted, Charles feared. Charles sighed, and turned to go back into the corridor. Tomorrow he would come back alone and block off the passageway.</p></blockquote><p>&nbsp;</p><blockquote><p>Daria, once Green, tried to breathe amid the ashes of her world. <i>I will not flinch</i>, Daria told herself, <i>I will not look away</i>. She had been Green all her life, and now she must be Blue. Her friends, her family, would turn from her. <i>Speak the truth, even if your voice trembles,</i> her father had told her; but her father was dead now, and her mother would never understand. Daria stared down the calm blue gaze of the sky, trying to accept it, and finally her breathing quietened. 
<i>I was wrong</i>, she said to herself mournfully; <i>it’s not so complicated, after all</i>. She would find new friends, and perhaps her family would forgive her . . . or, she wondered with a tinge of hope, rise to this same test, standing underneath this same sky? “The sky is blue,” Daria said experimentally, and nothing dire happened to her; but she couldn’t bring herself to smile. Daria the Blue exhaled sadly, and went back into the world, wondering what she would say.</p></blockquote><p>&nbsp;</p><blockquote><p>Eddin, a Green, looked up at the blue sky and began to laugh cynically. The course of his world’s history came clear at last; even he couldn’t believe they’d been such fools. “Stupid,” Eddin said, “stupid,&nbsp;<i>stupid, </i>and all the time it was right here.” Hatred, murders, wars, and all along it was just a <i>thing </i>somewhere, that someone had written about like they’d write about any other thing. No poetry, no beauty, nothing that any sane person would ever care about,&nbsp;just one pointless thing that had been blown out of all proportion. Eddin leaned against the cave mouth wearily, trying to think of a way to prevent this information from blowing up the world, and wondering if they didn’t all deserve it.</p></blockquote><p>&nbsp;</p><blockquote><p>Ferris gasped involuntarily, frozen by sheer wonder and delight. Ferris’s eyes darted hungrily about, fastening on each sight in turn before moving reluctantly to the next; the blue&nbsp;<i>sky</i>, the white <i>clouds</i>, the vast unknown <i>outside</i>, full of places and things (and people?) that no Undergrounder had ever seen. “Oh, so <i>that’s </i>what color it is,” Ferris said, and went exploring.</p></blockquote><hr><p><sup>1</sup> Procopius, <i>History of the Wars</i>, ed. Henry B. Dewing, vol. 1 (Harvard University Press, 1914).</p><p><sup>2</sup> Edward Gibbon, <i>The History of the Decline and Fall of the Roman Empire</i>, vol. 4 (J. &amp; J. Harper, 1829).</p>"
}
},
{
"_id": "Pm83rA8MTYYeR4Ci4",
"title": "\"I don't know.\"",
"pageUrl": "https://www.lesswrong.com/posts/Pm83rA8MTYYeR4Ci4/i-don-t-know",
"postedAt": "2006-12-21T18:27:10.000Z",
"baseScore": 57,
"voteCount": 62,
"commentCount": 22,
"url": null,
"contents": {
"documentId": "Pm83rA8MTYYeR4Ci4",
"html": "<p>An edited transcript of a long instant-messenger conversation that took place regarding the phrase, \"I don't know\", sparked by Robin Hanson's previous post, \"You Are Never Entitled to Your Opinion.\"</p>\n<p><a id=\"more\"></a></p>\n<p class=\"MsoNormal\">[08:50] Eliezer: http://www.overcomingbias.com/2006/12/you_are_never_e.html</p>\n<p class=\"MsoNormal\">[09:01] X: it still seems that saying \"i don't know\" in some situations is better than giving your best guess</p>\n<p class=\"MsoNormal\">[09:01] X: especially if you are dealing with people who will take you at your word who are not rationalists</p>\n<p class=\"MsoNormal\">[09:02] Eliezer: in real life, you have to choose, and bet, at some betting odds</p>\n<p class=\"MsoNormal\">[09:02] Eliezer: i.e., people who want to say \"I don't know\" for cryonics still have to sign up or not sign up, and they'll probably do the latter</p>\n<p class=\"MsoNormal\">[09:03] Eliezer: \"I don't know\" is usually just a screen that people think is defensible and unarguable before they go on to do whatever they feel like, and it's usually the wrong thing because they refused to admit to themselves what their guess was, or examine their justifications, or even realize that they're guessing</p>\n<p class=\"MsoNormal\">[09:02] X: how many apples are in a tree outside?</p>\n<p class=\"MsoNormal\">[09:02] X: i've never seen it and neither have you</p>\n<p class=\"MsoNormal\">[09:02] Eliezer: 10 to 1000</p>\n<p class=\"MsoNormal\">[09:04] Eliezer: if you offer to bet me a million dollars against one dollar that the tree outside has fewer than 20 apples, when neither of us have seen it, I will take your bet</p>\n<p class=\"MsoNormal\">[09:04] X: is it better to say \"maybe 10 to 1000\" to make it clear that you are guessing when talking to people</p>\n<p class=\"MsoNormal\">[09:04] Eliezer: therefore I have assigned a nonzero and significant probability to apples &lt; 20 whether I admit it or not</p>\n<p 
class=\"MsoNormal\">[09:05] Eliezer: what you *say* is another issue, especially when speaking to nonrationalists, and then it is well to bear in mind that words don't have fixed meanings; the meaning of the sounds that issue from your lips is whatever occurs in the mind of the listener. If they're going to misinterpret something then you shouldn't say it to them no matter what the words mean inside your own head</p>\n<p class=\"MsoNormal\">[09:06] Eliezer: often you are just screwed unless you want to go back and teach them rationality from scratch, and in a case like that, all you can do is say whatever creates the least inaccurate image</p>\n<p class=\"MsoNormal\">[09:06] X: 10 to 1000 is misleading when you say it to a nonrationalist?</p>\n<p class=\"MsoNormal\">[09:06] Eliezer: \"I don't know\" is a good way to duck when you say it to someone who doesn't know about probability distributions</p>\n<p class=\"MsoNormal\">[09:07] Eliezer: if they thought I was certain, or that my statement implied actual knowledge of the tree</p>\n<p class=\"MsoNormal\">[09:07] Eliezer: then the statement would mislead them</p>\n<p class=\"MsoNormal\">[09:07] Eliezer: and if I knew this, and did it anyway for my own purposes, it would be a lie</p>\n<p class=\"MsoNormal\">[09:08] Eliezer: if I just couldn't think of anything better to say, then it would be honest but not true, if you can see the distinction</p>\n<p class=\"MsoNormal\">[09:08] Eliezer: honest for me, but the statement that formed in their minds would still not be true</p>\n<p class=\"MsoNormal\">[09:09] X: most people will say to you.... 
but you said....10-1000 apples</p>\n<p class=\"MsoNormal\">[09:09] Eliezer: then you're just screwed</p>\n<p class=\"MsoNormal\">[09:10] Eliezer: nothing you can do will create in their minds a true understanding, not even \"I don't know\"</p>\n<p class=\"MsoNormal\">[09:10] X: why bother, why not say i don't know?</p>\n<p class=\"MsoNormal\">[09:10] Eliezer: honesty therefore consists of misleading them the least and telling them the most</p>\n<p class=\"MsoNormal\">[09:10] X: it's better than misleading them with 10-1000</p>\n<p class=\"MsoNormal\">[09:10] Eliezer: as for \"why bother\", well, if you're going to ask that question, just don't reply to their email or whatever</p>\n<p class=\"MsoNormal\">[09:11] Eliezer: what if you're dealing with someone who thinks my saying \"I don't know\" is a license for them to make up their own ideas, which will be a lot worse?</p>\n<p class=\"MsoNormal\">[09:11] X: they may act on your guess, and then say \"but you said....\" and lose money or get in trouble or have less respect for you</p>\n<p class=\"MsoNormal\">[09:11] Eliezer: then you choose to wave them off</p>\n<p class=\"MsoNormal\">[09:11] Eliezer: with \"I don't know\"</p>\n<p class=\"MsoNormal\">[09:11] Eliezer: but it's for your own sake, not for their sake</p>\n<p class=\"MsoNormal\">[09:12] X: [09:11] Eliezer: what if you're dealing with someone who thinks my saying \"I don't know\" is a license for them to make up their own ideas, which will be a lot worse?</p>\n<p class=\"MsoNormal\">[09:12] X: here i could see why</p>\n<p class=\"MsoNormal\">[09:12] X: but it's difficult working with typical people in the real world</p>\n<p class=\"MsoNormal\">[09:13] Eliezer: the first thing to decide is, are you trying to accomplish something for yourself (like not getting in trouble) or are you trying to improve someone else's picture of reality</p>\n<p class=\"MsoNormal\">[09:13] Eliezer: \"I don't know\" is often a good way of not getting in trouble</p>\n<p 
class=\"MsoNormal\">[09:13] Eliezer: as for it being difficult to work with people in the real world, well, yeah</p>\n<p class=\"MsoNormal\">[09:13] X: if you say...10-1000, and you are wrong, and they are mad, then you say, i don't know, they will be even madder</p>\n<p class=\"MsoNormal\">[09:13] Eliezer: are you trying not to get in trouble?</p>\n<p class=\"MsoNormal\">[09:14] Eliezer: or are you trying to improve their picture of reality?</p>\n<p class=\"MsoNormal\">[09:14] Eliezer: these are two different tasks</p>\n<p class=\"MsoNormal\">[09:14] X: especially if they have lost money or have been proven wrong by someone else</p>\n<p class=\"MsoNormal\">[09:14] Eliezer: if they intersect you have to decide what your tradeoff is</p>\n<p class=\"MsoNormal\">[09:14] Eliezer: which is more important to you</p>\n<p class=\"MsoNormal\">[09:14] Eliezer: then decide whether to explain for their benefit or say \"I don't know\" for yours</p>\n<p class=\"MsoNormal\">[09:15] X: well, if it was my job, i would say i don't know rather than be wrong, because who knows what your boss will do after he loses money listening to you</p>\n<p class=\"MsoNormal\">[09:15] Eliezer: okay</p>\n<p class=\"MsoNormal\">[09:16] Eliezer: just be clear that this is not because \"I don't know\" is the rational judgment, but because \"I don't know\" is the political utterance</p>\n<p class=\"MsoNormal\">[09:16] X: he may take your guess, and try to turn it into an actual anwser because no one around you has a better plan</p>\n<p class=\"MsoNormal\">[09:17] Eliezer: you can easily see this by looking at your stated reason: you didn't talk about evidence and reality and truth, but, how you might get in trouble based on someone's reaction</p>\n<p class=\"MsoNormal\">[09:17] X: yes</p>\n<p class=\"MsoNormal\">[09:17] X: that's what you have to put up with in the real world</p>\n<p class=\"MsoNormal\">[09:17] Eliezer: if you're really worried about your boss's welfare then you should consider that if 
you say \"I don't know\" he must do something anyway - refusing to choose is also a choice, and refusing to act is like refusing to let time pass - and he will construct that plan based on some information, which doesn't include your information</p>\n<p class=\"MsoNormal\">[09:18] Eliezer: if your life isn't worth more than someone else's, neither it is worth any less, and it is often proper to let fools make their own mistakes</p>\n<p class=\"MsoNormal\">[09:18] Eliezer: you can only throw yourself in front of so many bullets before you run out of flesh to stop them with</p>\n<p class=\"MsoNormal\">[09:19] X: ?</p>\n<p class=\"MsoNormal\">[09:19] Eliezer: in other words, you cannot always save people from themselves</p>\n<p class=\"MsoNormal\">[09:23] Eliezer: but all of this is wandering away from the original point, which is true and correct, that no one is ever entitled to their own opinion</p>\n<p class=\"MsoNormal\">[09:26] X: what is his name?</p>\n<p class=\"MsoNormal\">[09:26] Eliezer: ?</p>\n<p class=\"MsoNormal\">[09:26] X: a man outside</p>\n<p class=\"MsoNormal\">[09:26] X: random guy</p>\n<p class=\"MsoNormal\">[09:26] Eliezer: It's probably not \"Xpchtl Vaaaaaarax\"</p>\n<p class=\"MsoNormal\">[09:26] X: probably not</p>\n<p class=\"MsoNormal\">[09:27] Eliezer: I suppose I could construct a second-order Markov transition diagram for the letters in names expressed in English, weighted by their frequency</p>\n<p class=\"MsoNormal\">[09:27] Eliezer: but that would be a lot of work</p>\n<p class=\"MsoNormal\">[09:28] Eliezer: so I could say \"I don't know\" as shorthand for the fact that, although I possess a lot of knowledge about possible and probable names, I don't know anything *more* than you do</p>\n<p class=\"MsoNormal\">[09:28] X: ok, so you say ruling out what you see as likely not correct is ok?</p>\n<p class=\"MsoNormal\">[09:28] Eliezer: what I'm saying is that I possess a large amount of knowledge about possible names</p>\n<p 
class=\"MsoNormal\">[09:28] Eliezer: all of which influences what I would bet on</p>\n<p class=\"MsoNormal\">[09:28] Eliezer: if I had to take a real-world action, like, guessing someone's name with a gun to my head</p>\n<p class=\"MsoNormal\">[09:29] Eliezer: if I had to choose it would suddenly become very relevant that I knew Michael was one of the most statistically common names, but couldn't remember for which years it was the most common, and that I knew Michael was more likely to be a male name than a female name</p>\n<p class=\"MsoNormal\">[09:29] Eliezer: if an alien had a gun to its head, telling it \"I don't know\" at this point would not be helpful</p>\n<p class=\"MsoNormal\">[09:29] Eliezer: because there's a whole lot I know that it doesn't</p>\n<p class=\"MsoNormal\">[09:30] X: ok</p>\n<p class=\"MsoNormal\">[09:33] X: what about a question for which you really don't have any information?</p>\n<p class=\"MsoNormal\">[09:33] X: like something only an alien would know</p>\n<p class=\"MsoNormal\">[09:34] Eliezer: if I have no evidence I use an appropriate Ignorance Prior, which distributes probability evenly across all possibilities, and assigns only a very small amount to any individual possibility because there are so many</p>\n<p class=\"MsoNormal\">[09:35] Eliezer: if the person I'm talking to already knows to use an ignorance prior, I say \"I don't know\" because we already have the same probability distribution and I have nothing to add to that</p>\n<p class=\"MsoNormal\">[09:35] Eliezer: the ignorance prior tells me my betting odds</p>\n<p class=\"MsoNormal\">[09:35] Eliezer: it governs my choices</p>\n<p class=\"MsoNormal\">[09:35] X: and what if you don't know how to use an ignorance prior</p>\n<p class=\"MsoNormal\">[09:36] X: have never heard of it etc</p>\n<p class=\"MsoNormal\">[09:36] Eliezer: if I'm dealing with someone who doesn't know about ignorance priors, and who is dealing with the problem by making up this huge elaborate hypothesis 
with lots of moving parts and many places to go wrong, then the truth is that I automatically know s/he's wrong</p>\n<p class=\"MsoNormal\">[09:36] Eliezer: it may not be possible to explain this to them, short of training them from scratch in rationality</p>\n<p class=\"MsoNormal\">[09:36] Eliezer: but it is true</p>\n<p class=\"MsoNormal\">[09:36] Eliezer: and if the person trusts me for a rationalist, it may be both honest and helpful to tell them, \"No, that's wrong\"</p>\n<p class=\"MsoNormal\">[09:36] X: what if that person says, \"i don't know what their name is\", that ok?</p>\n<p class=\"MsoNormal\">[09:37] Eliezer: in real life you cannot choose \"I don't know\", it's not an option on your list of available actions</p>\n<p class=\"MsoNormal\">[09:37] Eliezer: in real life it's always, \"I don't know, so I'm going to say Vkktor Blackdawn because I think it sounds cool\"</p>\n<p class=\"MsoNormal\">[09:39] Eliezer: Vkktor Blackdawn is as (im)probable as anything else, but if you start assigning more probability to it than the ignorance prior calls for - because it sounds cool, because you don't have room in your mind for more than one possibility, or because you've started to construct an elaborate mental explanation of how the alien might end up named Vkktor Blackdawn</p>\n<p class=\"MsoNormal\">[09:39] Eliezer: then I know better</p>\n<p class=\"MsoNormal\">[09:40] Eliezer: and if you trust me, I may be able to honestly and usefully tell you so</p>\n<p class=\"MsoNormal\">[09:40] Eliezer: rather than saying \"I don't know\", which is always something to say, not to think</p>\n<p class=\"MsoNormal\">[09:40] Eliezer: this is important if someone asks you, \"At what odds would you bet that the alien is named Vkktor Blackdawn?\"</p>\n<p class=\"MsoNormal\">[09:41] Eliezer: or if you have to do anything else, based on your guesses and the weight you assign to them</p>\n<p class=\"MsoNormal\">[09:41] Eliezer: which is what probability is all about</p>\n<p 
class=\"MsoNormal\">[09:41] X: and if they say \"I don't know, I don't know anything about probability\"?</p>\n<p class=\"MsoNormal\">[09:41] Eliezer: then either they trust me blindly or I can't help them</p>\n<p class=\"MsoNormal\">[09:41] Eliezer: that's how it goes</p>\n<p class=\"MsoNormal\">[09:41] Eliezer: you can't always save people from themselves</p>\n<p class=\"MsoNormal\">[09:42] X: trust you blindly about what you are saying or about your guess as to what the alien's name is?</p>\n<p class=\"MsoNormal\">[09:42] Eliezer: trust me blindly when I tell them, \"Don't bet at those odds.\"</p>"
}
},
{
"_id": "NKECtGX4RZPd7SqYp",
"title": "The Modesty Argument",
"pageUrl": "https://www.lesswrong.com/posts/NKECtGX4RZPd7SqYp/the-modesty-argument",
"postedAt": "2006-12-10T21:42:55.000Z",
"baseScore": 62,
"voteCount": 50,
"commentCount": 40,
"url": null,
"contents": {
"documentId": "NKECtGX4RZPd7SqYp",
"html": "<p>The Modesty Argument states that when two or more human beings have common knowledge that they disagree about a question of simple fact, they should each adjust their probability estimates in the direction of the others'.&nbsp; (For example, they might adopt the common mean of their probability distributions.&nbsp; If we use the <a href=\"http://yudkowsky.net/bayes/technical.html\">logarithmic scoring rule</a>, then the score of the average of a set of probability distributions is better than the average of the scores of the individual distributions, by Jensen's inequality.)</p>\n<p>Put more simply:&nbsp; When you disagree with someone, even after talking over your reasons, the Modesty Argument claims that you should each adjust your probability estimates toward the other's, and keep doing this until you agree.&nbsp; The Modesty Argument is inspired by Aumann's Agreement Theorem, a very famous and oft-generalized result which shows that genuine Bayesians literally <em>cannot</em> agree to disagree; if genuine Bayesians have common knowledge of their individual probability estimates, they must all have the same probability estimate.&nbsp; (\"Common knowledge\" means that I know you disagree, you know I know you disagree, etc.)</p>\n<p>I've always been suspicious of the Modesty Argument.&nbsp; It's been a long-running debate between myself and Robin Hanson.</p>\n<p><a id=\"more\"></a></p>\n<p>Robin seems to endorse the Modesty Argument in papers such as <a href=\"http://hanson.gmu.edu/deceive.pdf\">Are Disagreements Honest?</a>&nbsp; I, on the other hand, have held that it can be rational for an individual to not adjust their own probability estimate in the direction of someone else who disagrees with them.</p>\n<p>How can I maintain this position in the face of Aumann's Agreement Theorem, which proves that genuine Bayesians cannot have common knowledge of a dispute about probability estimates?&nbsp; If genuine Bayesians will always agree with each other
once they've exchanged probability estimates, shouldn't we Bayesian wannabes do the same?</p>\n<p>To explain my reply, I begin with a metaphor:&nbsp; If I have five different <em>accurate</em> maps of a city, they will all be consistent with each other.&nbsp; Some philosophers, inspired by this, have held that \"rationality\" consists of having beliefs that are consistent among themselves.&nbsp; But, although accuracy necessarily implies consistency, consistency does not necessarily imply accuracy.&nbsp; If I sit in my living room with the curtains drawn, and make up five maps that are consistent with each other, but I don't actually walk around the city and make lines on paper that correspond to what I see, then my maps will be consistent but not accurate.&nbsp; When genuine Bayesians agree in their probability estimates, it's not because they're <em>trying</em> to be consistent - Aumann's Agreement Theorem doesn't invoke any explicit drive on the Bayesians' part to be consistent.&nbsp; That's what makes AAT surprising!&nbsp; Bayesians only try to be accurate; in the course of seeking to be accurate, they end up consistent.&nbsp; The Modesty Argument, that we can end up accurate in the course of seeking to be consistent, does not necessarily follow.</p>\n<p>How can I maintain my position in the face of my admission that disputants will always improve their average score if they average together their individual probability distributions?</p>\n<p>Suppose a creationist comes to me and offers:&nbsp; \"You believe that natural selection is true, and I believe that it is false.&nbsp; Let us both agree to assign 50% probability to the proposition.\"&nbsp; And suppose that by drugs or hypnosis it was actually possible for both of us to contract to adjust our probability estimates in this way.&nbsp; This unquestionably improves our combined log-score, and our combined squared error.&nbsp; If as a matter of altruism, I value the creationist's accuracy as much as my own - 
if my loss function is symmetrical around the two of us - then I should agree.&nbsp; But what if I'm trying to maximize only my own individual accuracy?&nbsp; In the former case, the question is absolutely clear, and in the latter case it is not absolutely clear, to me at least, which opens up the possibility that they are different questions.</p>\n<p>If I agree to a contract with the creationist in which we both use drugs or hypnosis to adjust our probability estimates, because I know that the group estimate <em>must</em> be improved thereby, I regard that as pursuing the goal of social altruism.&nbsp; It doesn't make creationism actually <em>true</em>, and it doesn't mean that I think creationism is true when I agree to the contract.&nbsp; If I thought creationism was 50% probable, I wouldn't need to sign a contract - I would have already updated my beliefs!&nbsp; It is tempting but <em>false</em> to regard adopting someone else's beliefs as a favor to them, and rationality as a matter of fairness, of equal compromise.&nbsp; Therefore it is written:&nbsp; \"Do not believe you do others a favor if you accept their arguments; the favor is to you.\"&nbsp; Am I really doing <em>myself</em> a favor by agreeing with the creationist to take the average of our probability distributions?</p>\n<p>I regard rationality in its purest form as an individual thing - not because rationalists have only selfish interests, but because of the form of the only admissible question:&nbsp; \"Is it actually true?\"&nbsp; Other considerations, such as the collective accuracy of a group that includes yourself, may be legitimate goals, and an important part of human existence - but they differ from that single pure question.</p>\n<p>In Aumann's Agreement Theorem, all the individual Bayesians are trying to be accurate as individuals.&nbsp; If their explicit goal was to maximize group accuracy, AAT would not be surprising.&nbsp; So the improvement of group score is not a knockdown argument as
to what an <em>individual</em> should do if they are trying purely to maximize their own accuracy, and it is that last quest which I identify as rationality.&nbsp; It is written:&nbsp; \"Every step of your reasoning must cut through to the correct answer in the same movement.&nbsp; More than anything, you must think of carrying your map through to reflecting the territory.&nbsp; If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.\"&nbsp; From the standpoint of social altruism, someone may wish to be Modest, and enter a drug-or-hypnosis-enforced contract of Modesty, even if they fail to achieve a correct answer thereby.</p>\n<p>The central argument for Modesty proposes something like a Rawlsian veil of ignorance - how can you know which of you is the honest truthseeker, and which the stubborn self-deceiver?&nbsp; The creationist believes that <em>he</em> is the sane one and <em>you</em> are the fool.&nbsp; Doesn't this make the situation symmetric around the two of you?&nbsp; If you average your estimates together, one of you must gain, and one of you must lose, since the shifts are in opposite directions; but by Jensen's inequality it is a positive-sum game.&nbsp; And since, by something like a Rawlsian veil of ignorance, you don't know which of you is really the fool, you ought to take the gamble.&nbsp; This argues that the socially altruistic move is also always the individually rational move.</p>\n<p>And there's also the obvious reply:&nbsp; \"But I know perfectly well who the fool is.&nbsp; It's the other guy.&nbsp; It doesn't matter that he says the same thing - he's <em>still</em> the fool.\"</p>\n<p>This reply sounds bald and unconvincing when you consider it abstractly.&nbsp; But if you actually face a creationist, then it certainly <em>feels</em> like the correct answer - you're right, he's wrong, and you have valid evidence to know that, even if the creationist can recite exactly the same claim in front of a TV 
audience.</p>\n<p>Robin Hanson sides with symmetry - this is clearest in his paper <a href=\"http://hanson.gmu.edu/prior.pdf\">Uncommon Priors Require Origin Disputes</a> - and therefore endorses the Modesty Argument.&nbsp; (Though I haven't seen him analyze the particular case of the creationist.)</p>\n<p>I respond:&nbsp; Those who dream do not know they dream; but when you wake you know you are awake.&nbsp; Dreaming, you may think you are awake.&nbsp; You may even be convinced of it.&nbsp; But right now, when you really <em>are</em> awake, there isn't any doubt in your mind - nor should there be.&nbsp; If you, persuaded by the clever argument, decided to start doubting right now that you're really awake, then your Bayesian score would go down and you'd become that much less accurate.&nbsp; If you seriously tried to make yourself doubt that you were awake - in the sense of wondering if you might be in the midst of an ordinary human REM cycle - then you would probably do so because you wished to appear to yourself as rational, or because it was how you conceived of \"rationality\" as a matter of moral duty.&nbsp; Because you wanted to act with propriety.&nbsp; Not because you felt genuinely curious as to whether you were awake or asleep.&nbsp; Not because you felt you might <em>really and truly</em> be asleep.&nbsp; But because you didn't have an answer to the clever argument, just an (ahem) incommunicable insight that you were awake.</p>\n<p>Russell Wallace put it thusly:&nbsp; \"That we can postulate a mind of sufficiently low (dreaming) or distorted (insane) consciousness as to genuinely not know whether it's Russell or Napoleon doesn't mean I (the entity currently thinking these thoughts) could have been Napoleon, any more than the number 3 could have been the number 7. If you doubt this, consider the extreme case: a rock doesn't know whether it's me or a rock. 
That doesn't mean I could have been a rock.\"</p>\n<p>There are other problems I see with the Modesty Argument, pragmatic matters of human rationality - if a fallible human tries to follow the Modesty Argument in practice, does this improve or disimprove personal rationality?&nbsp; To me it seems that the adherents of the Modesty Argument tend to <a href=\"/lw/gq/the_proper_use_of_humility/\">profess</a> Modesty but not actually practice it.</p>\n<p>For example, let's say you're a scientist with a controversial belief - like the Modesty Argument itself, which is hardly a matter of common accord - and you spend some substantial amount of time and effort trying to prove, argue, examine, and generally forward this belief.&nbsp; Then one day you encounter the Modesty Argument, and it occurs to you that you should adjust your belief toward the modal belief of the scientific field.&nbsp; But then you'd have to give up your cherished hypothesis.&nbsp; So you do the obvious thing - I've seen at least two people do this on two different occasions - and say:&nbsp; \"Pursuing my personal hypothesis has a net expected utility to Science.&nbsp; Even if I don't really believe that my theory is correct, I can still pursue it because of the categorical imperative: Science as a whole will be better off if scientists go on pursuing their own hypotheses.\"&nbsp; And then they continue <em>exactly</em> as before.</p>\n<p>I am skeptical to say the least.&nbsp; Integrating the Modesty Argument as new evidence ought to produce a <em>large</em> effect on someone's life and plans.&nbsp; If it's being really integrated, that is, rather than <a href=\"/lw/gq/the_proper_use_of_humility/\">flushed down a black hole</a>.&nbsp; Your personal anticipation of success, the bright emotion with which you anticipate the confirmation of your theory, should diminish by literally orders of magnitude after accepting the Modesty Argument.&nbsp; The reason people buy lottery tickets is that the bright 
anticipation of winning ten million dollars, the dancing visions of speedboats and mansions, is not sufficiently diminished - as a strength of emotion - by the probability factor, the odds of a hundred million to one.&nbsp; The ticket buyer may even profess that the odds are a hundred million to one, but they don't <em>anticipate</em> it properly - they haven't integrated the mere verbal phrase \"hundred million to one\" on an emotional level.</p>\n<p>So, when a scientist integrates the Modesty Argument as new evidence, should the resulting nearly total loss of hope have <em>no effect</em> on real-world plans originally formed in blessed ignorance and joyous anticipation of triumph?&nbsp; Especially when you consider that the scientist knew about the social utility to start with, while making the original plans?&nbsp; I think that's around as plausible as maintaining your exact original investment profile after the expected returns on some stocks change by a factor of a hundred.&nbsp; What's actually happening, one naturally suspects, is that the scientist finds that the Modesty Argument has uncomfortable implications; so they reach for an excuse, and invent on-the-fly the argument from social utility as a way of exactly cancelling out the Modesty Argument and preserving all their original plans.</p>\n<p>But of course if I say that this is an argument against the Modesty Argument, that is pure <em>ad hominem tu quoque</em>.&nbsp; If its adherents fail to use the Modesty Argument properly, that does not imply it has any less force as logic.</p>\n<p>Rather than go into more detail on the manifold ramifications of the Modesty Argument, I'm going to close with the thought experiment that initially convinced me of the falsity of the Modesty Argument.&nbsp; In the beginning it seemed to me reasonable that if feelings of 99% certainty were associated with a 70% frequency of true statements, on average across the global population, then the state of 99% certainty was like 
a \"pointer\" to 70% probability.&nbsp; But at one point I thought:&nbsp; \"What should an (AI) superintelligence say in the same situation?&nbsp; Should it treat its 99% probability estimates as 70% probability estimates because so many <em>human beings</em> make the same mistake?\"&nbsp; In particular, it occurred to me that, on the day the first true superintelligence was born, it would be undeniably true that - across the whole of Earth's history - the enormously vast majority of entities who had believed themselves superintelligent would be wrong.&nbsp; The majority of the referents of the pointer \"I am a superintelligence\" would be schizophrenics who believed they were God.</p>\n<p>A superintelligence doesn't just believe the bald statement that it is a superintelligence - it presumably possesses a very detailed, very accurate self-model of its own cognitive systems, tracks in detail its own calibration, and so on.&nbsp; But if you tell this to a mental patient, the mental patient can immediately respond:&nbsp; \"Ah, but I too possess a very detailed, very accurate self-model!\"&nbsp; The mental patient may even come to sincerely believe this, in the moment of the reply.&nbsp; Does that mean the superintelligence should wonder if it is a mental patient?&nbsp; This is the opposite extreme of Russell Wallace asking if a rock could have been you, since it doesn't know if it's you or the rock.</p>\n<p>One obvious reply is that human beings and superintelligences occupy different classes - we do not have the same ur-priors, or we are not part of the same anthropic reference class; some sharp distinction renders it impossible to group together superintelligences and schizophrenics in probability arguments.&nbsp; But one would then like to know exactly what this \"sharp distinction\" is, and how it is justified relative to the Modesty Argument.&nbsp; Can an evolutionist and a creationist also occupy different reference classes?&nbsp; It sounds astoundingly 
arrogant; but when I consider the actual, pragmatic situation, it seems to me that this is genuinely the case.</p>\n<p>Or here's a more recent example - one that inspired me to write today's blog post, in fact.&nbsp; It's the true story of a customer struggling through five levels of Verizon customer support, all the way up to floor manager, in an ultimately futile quest to find someone who could understand the difference between .002 dollars per kilobyte and .002 cents per kilobyte.&nbsp; <a href=\"http://media.putfile.com/Verizon-Bad-Math\">Audio</a> [27 minutes], <a href=\"http://verizonmath.blogspot.com/2006/12/transcription-jt.html\">Transcript</a>.&nbsp; It has to be heard to be believed.&nbsp; Sample of conversation:&nbsp; \"Do you recognize that there's a difference between point zero zero two dollars and point zero zero two cents?\"&nbsp; \"No.\"</p>\n<p>The key phrase that caught my attention and inspired me to write today's blog post is from the floor manager:&nbsp; \"You already talked to a few different people here, and they've all explained to you that you're being billed .002 cents, and if you take it and put it on your calculator... 
we take the .002 as everybody has told you that you've called in and spoken to, and as our system bills accordingly, is correct.\"</p>\n<p>Should George - the customer - have started doubting his arithmetic, because five levels of Verizon customer support, some of whom cited multiple years of experience, told him he was wrong?&nbsp; Should he have adjusted his probability estimate in their direction?&nbsp; A straightforward extension of Aumann's Agreement Theorem to impossible possible worlds, that is, uncertainty about the results of computations, <em>proves</em> that, had all parties been genuine Bayesians with common knowledge of each other's estimates, they would have had the same estimate.&nbsp; Jensen's inequality proves even more straightforwardly that, if George and the five levels of tech support had averaged together their probability estimates, they would have improved their average log score.&nbsp; If such arguments fail in this case, why do they succeed in other cases?&nbsp; And if you claim the Modesty Argument carries in this case, are you <em>really</em> telling me that if George had wanted <em>only</em> to find the truth for himself, he would have been wise to adjust his estimate in Verizon's direction?&nbsp; I know this is an argument from personal incredulity, but I think it's a good one.</p>\n<p>On the whole, and in practice, it seems to me like Modesty is sometimes a good idea, and sometimes not.&nbsp; I exercise my individual discretion and judgment to decide, even knowing that I might be biased or self-favoring in doing so, because the alternative of being Modest in every case seems to me much worse.</p>\n<p>But the question also seems to have a definite anthropic flavor.&nbsp; Anthropic probabilities still confuse me; I've read arguments but I have been unable to resolve them to my own satisfaction.&nbsp; Therefore, I confess, I am not able to give a full account of how the Modesty Argument is resolved.</p>\n<p>Modest, aren't I?</p>"
}
},
{
"_id": "GrDqnMjhqoxiqpQPw",
"title": "The Proper Use of Humility",
"pageUrl": "https://www.lesswrong.com/posts/GrDqnMjhqoxiqpQPw/the-proper-use-of-humility",
"postedAt": "2006-12-01T19:55:08.000Z",
"baseScore": 200,
"voteCount": 181,
"commentCount": 54,
"url": null,
"contents": {
"documentId": "GrDqnMjhqoxiqpQPw",
"html": "\n\n\n\n \n\n \n\n <p>It is widely recognized that good science requires some kind of humility. <em>What sort</em> of humility is more controversial. </p>\n\n <p>Consider the creationist who says: &#x201C;But who can really know whether evolution is correct? It is just a theory. You should be more humble and open-minded.&#x201D; Is this humility? The creationist practices a very selective underconfidence, refusing to integrate massive weights of evidence in favor of a conclusion they find uncomfortable. I would say that whether you call this &#x201C;humility&#x201D; or not, it is the wrong step in the dance.</p>\n\n <p>What about the engineer who humbly designs fail-safe mechanisms into machinery, even though they&#x2019;re damn sure the machinery won&#x2019;t fail? This seems like a good kind of humility to me. Historically, it&#x2019;s not unheard-of for an engineer to be damn sure a new machine won&#x2019;t fail, and then it fails anyway.</p>\n\n <p>What about the student who humbly double-checks the answers on their math test? Again I&#x2019;d categorize that as good humility. The student who double-checks their answers <em>wants to become stronger</em>; they react to a possible inner flaw by doing what they can to repair the flaw.</p>\n\n <p>What about a student who says, &#x201C;Well, no matter how many times I check, I can&#x2019;t ever be <em>certain</em> my test answers are correct,&#x201D; and therefore doesn&#x2019;t check even once? Even if this choice stems from an emotion similar to the emotion felt by the previous student, it is less wise.</p>\n\n <p>You suggest studying harder, and the student replies: &#x201C;No, it wouldn&#x2019;t work for me; I&#x2019;m not one of the smart kids like you; nay, one so lowly as myself can hope for no better lot.&#x201D; This is social modesty, not humility. It has to do with regulating status in the tribe, rather than scientific process. 
If you ask someone to &#x201C;be more humble,&#x201D; by default they&#x2019;ll associate the words to social modesty&#x2014;which is an intuitive, everyday, ancestrally relevant concept. Scientific humility is a more recent and rarefied invention, and it is not inherently social. Scientific humility is something you would practice even if you were alone in a spacesuit, light years from Earth with no one watching. Or even if you received an absolute guarantee that no one would ever criticize you again, no matter what you said or thought of yourself. You&#x2019;d still double-check your calculations if you were wise.</p>\n\n <p>The student says: &#x201C;But I&#x2019;ve seen other students double-check their answers and then they still turned out to be wrong. Or what if, by the problem of induction, 2 + 2 = 5 this time around? No matter what I do, I won&#x2019;t be sure of myself.&#x201D; It sounds very profound, and very modest. But it is not coincidence that the student wants to hand in the test quickly, and go home and play video games.</p>\n\n <p>The end of an era in physics does not always announce itself with thunder and trumpets; more often it begins with what seems like a small, small flaw . . . But because physicists have this arrogant idea that their models should work <em>all</em> the time, not just <em>most</em> of the time, they follow up on small flaws. Usually, the small flaw goes away under closer inspection. Rarely, the flaw widens to the point where it blows up the whole theory. Therefore it is written: &#x201C;If you do not seek perfection you will halt before taking your first steps.&#x201D;</p>\n\n <p>But think of the social audacity of trying to be right <em>all</em> the time! 
I seriously suspect that if Science claimed that evolutionary theory is true most of the time but not all of the time&#x2014;or if Science conceded that maybe on some days the Earth <em>is</em> flat, but who really knows&#x2014;then scientists would have better social reputations. Science would be viewed as less confrontational, because we wouldn&#x2019;t have to argue with people who say the Earth is flat&#x2014;there would be room for compromise. When you argue a lot, people look upon you as confrontational. If you repeatedly refuse to compromise, it&#x2019;s even worse. Consider it as a question of tribal status: scientists have certainly earned some extra status in exchange for such socially useful tools as medicine and cellphones. But this social status does not justify their insistence that <em>only</em> scientific ideas on evolution be taught in public schools. Priests also have high social status, after all. Scientists are getting above themselves&#x2014;they won a little status, and now they think they&#x2019;re chiefs of the whole tribe! They ought to be more humble, and compromise a little.</p>\n\n <p>Many people seem to possess rather hazy views of &#x201C;rationalist humility.&#x201D; It is dangerous to have a prescriptive principle which you only vaguely comprehend; your mental picture may have so many degrees of freedom that it can adapt to justify almost any deed. Where people have vague mental models that can be used to argue anything, they usually end up believing whatever they started out wanting to believe. This is so convenient that people are often reluctant to give up vagueness. But the purpose of our ethics is to move us, not be moved by us.</p>\n\n <p>&#x201C;Humility&#x201D; is a virtue that is often misunderstood. This doesn&#x2019;t mean we should discard the concept of humility, but we should be careful using it. 
It may help to look at the <em>actions</em> recommended by a &#x201C;humble&#x201D; line of thinking, and ask: &#x201C;Does acting this way make you stronger, or weaker?&#x201D; If you think about the problem of induction as applied to a bridge that needs to stay up, it may sound reasonable to conclude that nothing is certain no matter what precautions are employed; but if you consider the real-world difference between adding a few extra cables, and shrugging, it seems clear enough what makes the stronger bridge.</p>\n\n <p>The vast majority of appeals that I witness to &#x201C;rationalist&#x2019;s humility&#x201D; are excuses to shrug. The one who buys a lottery ticket, saying, &#x201C;But you can&#x2019;t <em>know</em> that I&#x2019;ll lose.&#x201D; The one who disbelieves in evolution, saying, &#x201C;But you can&#x2019;t <em>prove</em> to me that it&#x2019;s true.&#x201D; The one who refuses to confront a difficult-looking problem, saying, &#x201C;It&#x2019;s probably too hard to solve.&#x201D; The problem is motivated skepticism a.k.a. disconfirmation bias&#x2014;more heavily scrutinizing assertions that we don&#x2019;t want to believe.<sup><a href=\"#fn1x2\" id=\"fn1x2-bk\">1</a></sup><span id=\"x6-7001f1\"> Humility, in its most commonly misunderstood form, is a fully general excuse not to believe something; since, after all, you can&#x2019;t be <em>sure</em>. Beware of fully general excuses!</span></p>\n\n <p>A further problem is that humility is all too easy to <em>profess.</em> Dennett, in <em>Breaking the Spell: Religion as a Natural Phenomenon</em>, points out that while many religious assertions are very hard to believe, it is easy for people to believe that they <em>ought</em> to believe them. Dennett terms this &#x201C;belief in belief.&#x201D; What would it mean to really assume, to really believe, that three is equal to one? 
It&#x2019;s a lot easier to believe that you <em>should</em>, somehow, believe that three equals one, and to make this response at the appropriate points in church. Dennett suggests that much &#x201C;religious belief&#x201D; should be studied as &#x201C;religious profession&#x201D;&#x2014;what people think they should believe and what they know they ought to say.</p>\n\n <p>It is all too easy to meet every counterargument by saying, &#x201C;Well, of course I could be wrong.&#x201D; Then, having dutifully genuflected in the direction of Modesty, having made the required obeisance, you can go on about your way without changing a thing.</p>\n\n <p>The temptation is always to claim the most points with the least effort. The temptation is to carefully integrate all incoming news in a way that lets us change our beliefs, and above all our <em>actions</em>, as little as possible. John Kenneth Galbraith said: &#x201C;Faced with the choice of changing one&#x2019;s mind and proving that there is no need to do so, almost everyone gets busy on the proof.&#x201D;<sup><a href=\"#fn2x2\" id=\"fn2x2-bk\">2</a></sup><span id=\"x6-7002f2\"> And the greater the <em>inconvenience</em> of changing one&#x2019;s mind, the more effort people will expend on the proof.</span></p>\n\n <p>But y&#x2019;know, if you&#x2019;re gonna <em>do</em> the same thing anyway, there&#x2019;s no point in going to such incredible lengths to rationalize it. Often I have witnessed people encountering new information, apparently accepting it, and then carefully explaining why they are going to do exactly the same thing they planned to do previously, but with a different justification. The point of thinking is to <em>shape</em> our plans; if you&#x2019;re going to keep the same plans anyway, why bother going to all that work to justify it? When you encounter new information, the hard part is to <em>update</em>, to <em>react</em>, rather than just letting the information disappear down a black hole. 
And humility, properly misunderstood, makes a wonderful black hole&#x2014;all you have to do is admit you could be wrong. Therefore it is written: &#x201C;To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.&#x201D;</p>\n\n <div class=\"footnotes\">\n \n\n <p><span><sup><a href=\"#fn1x2-bk\" id=\"fn1x2\">1</a></sup></span><span id=\"cite.0.Taber.2006\">Charles S. Taber and Milton Lodge, &#x201C;Motivated Skepticism in the Evaluation of Political Beliefs,&#x201D; <em>American Journal of Political Science</em> 50, no. 3 (2006): 755&#x2013;769</span>.</p>\n\n <p><span><sup><a href=\"#fn2x2-bk\" id=\"fn2x2\">2</a></sup></span><span id=\"cite.0.Galbraith.1981\">John Kenneth Galbraith, <em>Economics, Peace and Laughter</em> (Plume, 1981), 50</span>.</p>\n </div>\n\n"
}
},
{
"_id": "jnZbHi873v9vcpGpZ",
"title": "What's a Bias?",
"pageUrl": "https://www.lesswrong.com/posts/jnZbHi873v9vcpGpZ/what-s-a-bias",
"postedAt": "2006-11-27T01:50:34.000Z",
"baseScore": 206,
"voteCount": 204,
"commentCount": 18,
"url": null,
"contents": {
"documentId": "jnZbHi873v9vcpGpZ",
"html": "<p>The availability heuristic is a cognitive shortcut humans use to reach conclusions; and where this shortcut reliably causes inaccurate conclusions, we can say that an availability bias is at work. Scope insensitivity is another example of a <em>cognitive bias</em>. </p>\n\n <p>“Cognitive biases” are those obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by <em>the shape of our own mental machinery</em>. For example, our mental processes might be evolutionarily adapted to specifically believe some things that aren’t true, so that we could win political arguments in a tribal context. Or the mental machinery might be adapted not to particularly care whether something is true, such as when we feel the urge to believe what others believe to get along socially. Or the bias may be a side-effect of a useful reasoning heuristic. The availability heuristic is not itself a bias, but it gives rise to biases; the machinery uses an algorithm (give things more evidential weight if they come to mind more readily) that does some good cognitive work but also produces systematic errors. </p>\n\n <p>Our brains are doing something wrong, and after a lot of experimentation and/or heavy thinking, someone identifies the problem verbally and concretely; then we call it a “(cognitive) bias.” Not to be confused with the colloquial “that person is biased,” which just means “that person has a skewed or prejudiced attitude toward something.”</p>\n\n <p>In cognitive science, “biases” are distinguished from errors that arise from <em>cognitive content</em>, such as learned false beliefs. These we call “mistakes” rather than “biases,” and they are much easier to correct, once we’ve noticed them for ourselves. (Though the source of the mistake, or the source of the source of the mistake, may ultimately be some bias.) 
</p>\n\n <p>“Biases” are also distinguished from errors stemming from damage to an individual human brain, or from absorbed cultural mores; biases arise from machinery that is humanly universal.</p>\n\n <p>Plato wasn’t “biased” because he was ignorant of General Relativity—he had no way to gather that information, his ignorance did not arise from the shape of his mental machinery. But if Plato believed that philosophers would make better kings because he himself was a philosopher—and this belief, in turn, arose because of a universal adaptive political instinct for self-promotion, and not because Plato’s daddy told him that everyone has a moral duty to promote their own profession to governorship, or because Plato sniffed too much glue as a kid—then that was a bias, whether Plato was ever warned of it or not.</p>\n\n <p>While I am not averse (as you can see) to discussing definitions, I don’t want to suggest that the project of better wielding our own minds rests on a particular choice of terminology. If the term “cognitive bias” turns out to be unhelpful, we should just drop it.</p>\n\n <p>We don’t start out with a moral duty to “reduce bias,” simply because biases are bad and evil and Just Not Done. This is the sort of thinking someone&nbsp;might end up with if they acquired a deontological duty of “rationality” by social osmosis, which leads to people trying to execute techniques without appreciating the reason for them. (Which is bad and evil and Just Not Done, according to <em>Surely You’re Joking, Mr. Feynman</em>, which I read as a kid.) A bias is an obstacle to our goal of obtaining truth, and thus <em>in our way</em>. </p>\n\n <p>We are here to pursue the great human quest for truth: for we have desperate need of the knowledge, and besides, we're curious. To this end let us strive to overcome whatever obstacles lie in our way, whether we call them “biases” or not.</p>\n\n"
}
},
{
"_id": "YshRbqZHYFoEMqFAu",
"title": "Why Truth?",
"pageUrl": "https://www.lesswrong.com/posts/YshRbqZHYFoEMqFAu/why-truth",
"postedAt": "2006-11-27T01:49:28.000Z",
"baseScore": 189,
"voteCount": 186,
"commentCount": 61,
"url": null,
"contents": {
"documentId": "YshRbqZHYFoEMqFAu",
"html": "<p>The goal of instrumental rationality mostly speaks for itself. Some commenters have wondered, on the other hand, why rationalists care about truth. Which invites a few different answers, depending on who you ask; and these different answers have differing characters, which can shape the search for truth in different ways.</p><p>You might hold the view that pursuing truth is inherently noble, important, and worthwhile. In which case your priorities will be determined by your ideals about which truths are most important, or about when truthseeking is most virtuous.</p><p>This motivation tends to have a moral character to it. If you think it your duty to look behind the curtain, you are a lot more likely to believe that someone <i>else</i> should look behind the curtain too, or castigate them if they deliberately close their eyes.</p><p>I tend to be suspicious of morality as a motivation for rationality, <i>not</i> because I reject the moral ideal, but because it invites certain kinds of trouble. It is too easy to acquire, as learned moral duties, modes of thinking that are dreadful missteps in the dance.</p><p>Consider Spock, the naive archetype of rationality. Spock's affect is always set to “calm,” even when wildly inappropriate. He often gives many significant digits for probabilities that are grossly uncalibrated.<sup>1</sup> Yet this popular image is how many people conceive of the duty to be “rational”—small wonder that they do not embrace it wholeheartedly.</p><p>To make rationality into a moral duty is to give it all the dreadful degrees of freedom of an arbitrary tribal custom. People arrive at the wrong answer, and then indignantly protest that they acted with propriety, rather than learning from their mistake.</p><p>What other motives are there?</p><p>Well, you might want to accomplish some specific real-world goal, like building an airplane, and therefore you need to know some specific truth about aerodynamics. 
Or more mundanely, you want chocolate milk, and therefore you want to know whether the local grocery has chocolate milk, so you can choose whether to walk there or somewhere else.</p><p>If this is the reason you want truth, then the priority you assign to your questions will reflect the expected utility of their information—how much the possible answers influence your choices, how much your choices matter, and how much you expect to find an answer that changes your choice from its default.</p><p>To seek truth merely for its instrumental value may seem impure—should we not desire the truth for its own sake?—but such investigations are extremely important because they create an outside criterion of verification: if your airplane drops out of the sky, or if you get to the store and find no chocolate milk, it's a hint that you did something wrong. You get back feedback on which modes of thinking work, and which don't.</p><p>Another possibility: you might care about what's true because, damn it, you're <i>curious</i>.</p><p>As a reason to seek truth, curiosity has a special and admirable purity. If your motive is curiosity, you will assign priority to questions according to how the questions, themselves, tickle your aesthetic sense. A trickier challenge, with a greater probability of failure, may be worth more effort than a simpler one, just because it's more fun.</p><p>Curiosity and morality can both attach an intrinsic value to truth. Yet being curious about what's behind the curtain is a very different state of mind from believing that you have a moral duty to look there. If you're curious, your priorities will be determined by which truths you find most intriguing, not most important or most useful.</p><p>Although pure curiosity is a wonderful thing, it may not linger too long on verifying its answers, once the attractive mystery is gone. Curiosity, as a human emotion, has been around since long before the ancient Greeks. 
But what set humanity firmly on the path of Science was noticing that certain modes of thinking uncovered beliefs that let us <i>manipulate the world</i>—truth as an instrument. As far as sheer curiosity goes, spinning campfire tales of gods and heroes satisfied that desire just as well, and no one realized that anything was wrong with that.</p><p>At the same time, if we're going to improve our skills of rationality, go beyond the standards of performance set by hunter-gatherers, we'll need deliberate beliefs about how to think—things that look like norms of rationalist “propriety.” When we write new mental programs for ourselves, they start out as explicit injunctions, and are only slowly (if ever) trained into the neural circuitry that underlies our core motivations and habits.</p><p>Curiosity, pragmatism, and quasi-moral injunctions are all key to the rationalist project. Yet if you were to ask me which of these is most foundational, I would say: “curiosity.” I have my principles, and I have my plans, which may well tell me to look behind the curtain. But then, I also just really want to know. What will I see? The world has handed me a puzzle, and a solution feels tantalizingly close.</p><hr><p><sup>1</sup> E.g., “Captain, if you steer the Enterprise directly into that black hole, our probability of surviving is only 2.234%.” Yet nine times out of ten the <i>Enterprise</i> is not destroyed. What kind of tragic fool gives four significant digits for a figure that is off by two orders of magnitude?</p>"
}
},
{
"_id": "teaxCFgtmCQ3E9fy8",
"title": "The Martial Art of Rationality",
"pageUrl": "https://www.lesswrong.com/posts/teaxCFgtmCQ3E9fy8/the-martial-art-of-rationality",
"postedAt": "2006-11-22T20:00:00.000Z",
"baseScore": 330,
"voteCount": 308,
"commentCount": 50,
"url": null,
"contents": {
"documentId": "teaxCFgtmCQ3E9fy8",
"html": "<p>I often use the metaphor that <a href=\"http://wiki.lesswrong.com/wiki/Rationality\">rationality</a> is the <a href=\"http://wiki.lesswrong.com/wiki/Rationality_as_martial_art\">martial art of mind</a>. You don’t need huge, bulging muscles to learn martial arts—there’s a tendency toward more athletic people being more likely to learn martial arts, but that may be a matter of <i>enjoyment</i> as much as anything else. If you have a hand, with tendons and muscles in the appropriate places, then you can learn to make a fist.</p><p>Similarly, if you have a brain, with cortical and subcortical areas in the appropriate places, you might be able to learn to use it properly. If you’re a fast learner, you might learn faster—but the art of rationality isn’t about that; it’s about training brain machinery we all have in common. And where there are systematic errors human brains tend to make—like an insensitivity to scope—rationality is about fixing those mistakes, or finding work-arounds.</p><p>Alas, our minds respond less readily to our will than our hands. Our ability to control our muscles is evolutionarily ancient; our ability to reason about our own reasoning processes is a much more recent innovation. We shouldn’t be surprised, then, that muscles are easier to use than brains. But it is not wise to neglect the latter training because it is more difficult. It is not by bigger muscles that the human species rose to prominence upon Earth.</p><p>If you live in an urban area, you probably don’t need to walk very far to find a martial arts dojo. Why aren’t there dojos that teach rationality?</p><p>One reason, perhaps, is that it’s harder to verify skill. To rise a level in Tae Kwon Do, you might need to break a board of a certain width. If you succeed, all the onlookers can see and applaud. If you fail, your teacher can watch how you shape a fist, and check if you shape it correctly. 
If not, the teacher holds out a hand and makes a fist correctly, so that you can observe how to do so.</p><p>Within martial arts schools, techniques of muscle have been refined and elaborated over generations. Techniques of rationality are harder to pass on, even to the most willing student.</p><p>Very recently—in just the last few decades—the human species has acquired a great deal of new knowledge about human rationality. The most salient example would be the <a href=\"https://en.wikipedia.org/wiki/Heuristics_in_judgment_and_decision-making\">heuristics and biases</a> program in experimental psychology. There is also the <a href=\"https://wiki.lesswrong.com/wiki/Bayesian\">Bayesian</a> systematization of probability theory and statistics; evolutionary psychology; social psychology. Experimental investigations of empirical human psychology; and theoretical probability theory to interpret what our experiments tell us; and evolutionary theory to explain the conclusions. These fields give us new focusing lenses through which to view the landscape of our own minds. With their aid, we may be able to see more clearly the muscles of our brains, the fingers of thought as they move. We have a shared vocabulary in which to describe problems and solutions. Humanity may finally be ready to synthesize the martial art&nbsp;of mind: to refine, share, systematize, and pass on techniques of personal rationality.</p><p>Such understanding as I have of rationality, I acquired in the course of wrestling with the challenge of artificial general intelligence (an endeavor which, to actually succeed, would require sufficient mastery of rationality to build a complete working rationalist out of toothpicks and rubber bands). In most ways the AI problem is enormously more demanding than the personal art of rationality, but in some ways it is actually easier. 
In the martial art of mind, we need to acquire the realtime procedural skill of pulling the right levers at the right time on a large, pre-existing thinking machine whose innards are not end-user-modifiable. Some of the machinery is optimized for evolutionary selection pressures that run directly counter to our declared goals in using it. Deliberately we decide that we want to seek only the truth; but our brains have hardwired support for rationalizing falsehoods. We can try to compensate for what we choose to regard as flaws of the machinery; but we can’t actually rewire the neural circuitry. Nor may martial artists plate titanium over their bones—not today, at any rate.</p><p>Trying to synthesize a personal art of rationality, using the science of rationality, may prove awkward: One imagines trying to invent a martial art using an abstract theory of physics, game theory, and human anatomy.</p><p>But humans aren’t reflectively blind. We do have a native instinct for introspection. The inner eye isn’t sightless, though it sees blurrily, with systematic distortions. We need, then, to <i>apply</i> the science to our intuitions, to use the abstract knowledge to <i>correct</i> our mental movements and <i>augment</i> our metacognitive skills.</p><p>We aren’t writing a computer program to make a string puppet execute martial arts forms; it is our own mental limbs that we must move. Therefore we must connect theory to practice. We must come to see what the science means, for ourselves, for our daily inner life.</p>"
}
},
{
"_id": "7ZqGiPHTpiDMwqMN2",
"title": "Twelve Virtues of Rationality",
"pageUrl": "https://www.lesswrong.com/posts/7ZqGiPHTpiDMwqMN2/twelve-virtues-of-rationality",
"postedAt": "2006-01-01T08:00:05.370Z",
"baseScore": 387,
"voteCount": 236,
"commentCount": 16,
"url": null,
"contents": {
"documentId": "7ZqGiPHTpiDMwqMN2",
"html": "<div class=\"ory-row\"><div class=\"ory-cell ory-cell-sm-12 ory-cell-xs-12\"><div class=\"ory-row ory-cell-inner\"><div class=\"ory-cell ory-cell-sm-12 ory-cell-xs-12\"><div class=\"ory-cell-inner ory-cell-leaf\"><div><p>The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. To feel the burning itch of curiosity requires both that you be ignorant, and that you desire to relinquish your ignorance. If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction. Curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. The glory of glorious mystery is to be solved, after which it ceases to be mystery. Be wary of those who speak of being open-minded and modestly confess their ignorance. </p><p>There is a time to confess your ignorance and a time to relinquish your ignorance.</p><p>The second virtue is relinquishment. P. C. Hodgell said: “That which can be destroyed by the truth should be.”[1] Do not flinch from experiences that might destroy your beliefs. The thought you cannot think controls you more than thoughts you speak aloud. Submit yourself to ordeals and test yourself in fire. Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts. If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm. Evaluate your beliefs first and then arrive at your emotions. Let yourself say: “If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool.” Beware lest you become attached to beliefs you may not want.\n</p><p>The third virtue is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. 
Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting, the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy. If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims. For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.</p><p>\nThe fourth virtue is evenness. One who wishes to believe says, “Does the evidence permit me to believe?” One who wishes to disbelieve asks, “Does the evidence force me to believe?” Beware lest you place huge burdens of proof only on propositions you dislike, and then defend yourself by saying: “But it is good to be skeptical.” If you attend only to favorable evidence, picking and choosing from your gathered data, then the more data you gather, the less you know. If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider. If you first write at the bottom of a sheet of paper \n“And therefore, the sky is green!” it does not matter what arguments you write above it afterward; the conclusion is already written, and it is already correct or already wrong. To be clever in argument is not rationality but rationalization. Intelligence, to be useful, must be used for something other than defeating itself. 
Listen to hypotheses as they plead their cases before you, but remember that you are not a hypothesis; you are the judge. Therefore do not seek to argue \nfor one side or another, for if you knew your destination, you would already be there.\n</p><p>The fifth virtue is argument. Those who wish to fail must first prevent their friends from helping them. Those who smile wisely and say “I will not argue” remove themselves from help and withdraw from the communal effort. In argument strive for exact honesty, for the sake of others and also yourself: the part of yourself that distorts what you say to others also distorts your own thoughts. Do not believe you do others a favor if you accept their arguments; the favor is to you. Do not think that fairness to all sides means balancing yourself evenly between positions; truth is not handed out in equal portions before the start of a debate. You cannot move forward on factual questions by fighting with fists or insults. Seek a test that lets reality judge between you.\n</p><p>The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots? What tree nourishes us without fruit? If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says,\n“No it does not, for there is no auditory processing in any brain.” Though they argue, one saying “Yes,” and one saying “No,” the two do not anticipate any different experience of the forest. Do not ask which beliefs to profess, but which experiences to anticipate. Always know which difference of experience you argue about. Do not let the argument wander and become about something else, such as someone’s virtue as a rationalist. Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.”[2] Do not be blinded by words. 
When words are subtracted, anticipation remains.\n</p><p>The seventh virtue is simplicity. Antoine de Saint-Exupéry said: “Perfection is achieved not when there is nothing left to add, but when there is nothing left to take away.”[3] Simplicity is virtuous in belief, design, planning, and justification. When you profess a huge belief with many details, each additional detail is another chance for the belief to be wrong. Each specification adds to your burden; if you can lighten your burden you must do so. There is no straw that lacks the power to break your back. Of artifacts it is said: The most reliable gear is the one that is designed out of the machine. Of plans: A tangled web\n breaks. A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere. In mathematics a mountain of good deeds cannot atone for a single sin. Therefore, be careful on every step.\n</p><p>The eighth virtue is humility. To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty. Who are most humble? Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans. Because this world contains many whose grasp of rationality is abysmal, beginning students of rationality win arguments and acquire an exaggerated view of their own abilities. But it is useless to be superior: Life is not graded on a curve. The best physicist in ancient Greece could not calculate the path of a falling apple. There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse. If you compare yourself to others you will not see the biases that all humans share. To be human is to make ten thousand errors. No one in this world achieves perfection.\n</p><p>The ninth virtue is perfectionism. 
The more errors you correct in yourself, the more you notice. As your mind becomes more silent, you hear more noise. When you notice an error in yourself, this signals your readiness to seek advancement to the next level. If you tolerate the error rather than correcting it, you will not advance to the next level and you will not gain the skill to notice new errors. In every art, if you do not seek perfection you will halt before taking your first steps. If perfection is impossible that is no excuse for not trying. Hold yourself to the highest standard you can imagine, and look for one still higher. Do not be content with the answer that is almost right; seek one that is exactly right.\n</p><p>The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: The quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge \nof the blade. As with the map, so too with the art of mapmaking: The Way is a precise Art. Do not walk to the truth, but dance. On each and every step of that dance your foot comes down in exactly the right spot. Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must study probability theory. Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.\n</p><p>The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. 
If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinge upon rationality: evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion.</p><p>\nBefore these eleven virtues is a virtue which is nameless. </p><p>Miyamoto Musashi wrote, in <em>The Book of Five Rings</em>:[4]</p><blockquote><p>\nThe primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him.</p></blockquote><p>\nEvery step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.\n</p><p>If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.\n</p><p>How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think, “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.\n</p><p>Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. 
If you speak overmuch of the Way you will not attain it.</p><p>You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory.” But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.\n</p><p>If for many years you practice the techniques and submit yourself to strict constraints, it may be that you will glimpse the center. Then you will see how all techniques are one technique, and you will move correctly without feeling constrained. Musashi wrote: “When you appreciate the power of nature, knowing the rhythm of any situation, you will be able to hit the enemy naturally and strike naturally. All this is the Way of the Void.”</p><p>\nThese then are twelve virtues of rationality:\n</p><p>Curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void.\n</p></div></div></div></div><div class=\"ory-row ory-cell-inner\"><div class=\"ory-cell ory-cell-sm-12 ory-cell-xs-12\"><div class=\"ory-cell-inner ory-cell-leaf\"><hr class=\"ory-plugins-content-divider\"></div></div></div><div class=\"ory-row ory-cell-inner\"><div class=\"ory-cell ory-cell-sm-12 ory-cell-xs-12\"><div class=\"ory-cell-inner ory-cell-leaf\"><div><h6>1. Patricia C. Hodgell, <em>Seeker’s Mask</em> (Meisha Merlin Publishing, Inc., 2001).</h6><h6>\n2. Cleaver, <em>Immediate Fiction: A Complete Writing Course</em>.</h6><h6>\n3. Antoine de Saint-Exupéry, <em>Terre des Hommes </em>(Paris: Gallimard, 1939).</h6><h6>\n4. 
Musashi,<em> Book of Five Rings</em>.</h6></div></div></div></div><div class=\"ory-row ory-cell-inner\"><div class=\"ory-cell ory-cell-sm-12 ory-cell-xs-12\"><div class=\"ory-cell-inner ory-cell-leaf\"><hr class=\"ory-plugins-content-divider\"></div></div></div><div class=\"ory-row ory-cell-inner\"><div class=\"ory-cell ory-cell-sm-12 ory-cell-xs-12\"><div class=\"ory-cell-inner ory-cell-leaf\"><div><p style=\"text-align:center;\"><em>The first publication of this post is </em><a href=\"http://yudkowsky.net/rational/virtues/\"><em>here</em></a><em>.</em></p></div></div></div></div></div></div>"
}
}
]
}
}
}