{ "data": { "posts": { "results": [ { "_id": "PybtwazftXzvcQSiQ", "title": "New Year's Predictions Thread", "pageUrl": "https://www.lesswrong.com/posts/PybtwazftXzvcQSiQ/new-year-s-predictions-thread", "postedAt": "2009-12-30T21:39:09.895Z", "baseScore": 32, "voteCount": 22, "commentCount": 446, "url": null, "contents": { "documentId": "PybtwazftXzvcQSiQ", "html": "

I would like to propose this as a thread for people to write in their predictions for the next year and the next decade, when practical with probabilities attached. I'll probably make some in the comments.

" } }, { "_id": "XXrCeuyhtgB8apxSt", "title": "New Year’s and New Decade’s Predictions", "pageUrl": "https://www.lesswrong.com/posts/XXrCeuyhtgB8apxSt/new-year-s-and-new-decade-s-predictions", "postedAt": "2009-12-30T21:37:45.042Z", "baseScore": 4, "voteCount": 2, "commentCount": 0, "url": null, "contents": { "documentId": "XXrCeuyhtgB8apxSt", "html": "

I would like to propose this as a thread for people to write in their predictions for the next year and the next decade, when practical with probabilities attached.  I'll make some in the comments as they come to me.

" } }, { "_id": "DZwsqCagAraMZobaP", "title": "New Year’s Resolutions Thread", "pageUrl": "https://www.lesswrong.com/posts/DZwsqCagAraMZobaP/new-year-s-resolutions-thread", "postedAt": "2009-12-30T21:36:20.869Z", "baseScore": 10, "voteCount": 11, "commentCount": 41, "url": null, "contents": { "documentId": "DZwsqCagAraMZobaP", "html": "

I would like to propose this as a thread for people to write in their New Year’s Resolutions (goals and sub-goals) as instrumental rationalists.

\n

Here are mine.

\n

Goals

\n

I resolve to try to enable all Pareto-optimizing trades that can be made between my sub-agents.  For instance, I have agents which would like to maximize my success via improving my cognition and energy levels via eating more healthily via eating less.  Other agents would like enjoyment from food.  I note that these agents aren't cooperating even though they both benefit from the same changes in behavior, largely because it hasn't been pointed out to them that they are on the same side.  If I make a serious effort to eat the hedonic-utility-maximizing amount, this will probably involve eating less than I default to.  After all, food is better when one is hungry.  Most of my eating is probably driven by simple non-reflective systems that tell me to eat.  These systems are probably promoted by hedonistic systems which are failing to understand the consequences of doing so.  In practice, this resolution means paying attention to the experience of eating anything that wasn’t chosen for social or nutritional purposes, rarely clearing my plate, and rarely eating more than one would get served in a European restaurant, but above all, it means paying attention (and thus sending this information to many of my sub-agents) to the pleasure of eating when one is actually hungry.

\n

I resolve to find a new home, get fully moved in, get a car, touch base with all interested Bay Area supporters, and get started on the 2010 Summit by the end of February despite this involving many boring activities. 

\n

I resolve to stop trying to keep up with a significant part of the blogosphere.  My web-browsing will be limited to a) actually seeking specific information, b) checking email no more than five times per day, and c) keeping up, via Google Reader, with fewer than ten sites.

\n

Also, in February I will try to use dual-n-back every day and in March I will try to publish a Less-Wrong piece every day.

" } }, { "_id": "8wiHELLmhciyytBaP", "title": "Boksops -- Ancient Superintelligence?", "pageUrl": "https://www.lesswrong.com/posts/8wiHELLmhciyytBaP/boksops-ancient-superintelligence", "postedAt": "2009-12-30T11:12:15.041Z", "baseScore": -1, "voteCount": 17, "commentCount": 37, "url": null, "contents": { "documentId": "8wiHELLmhciyytBaP", "html": "
\n

[...] before long the skull came to the attention of S. H. Haughton, one of the country’s few formally trained paleontologists. He reported his findings at a 1915 meeting of the Royal Society of South Africa. “The cranial capacity must have been very large,” he said, and “calculation by the method of Broca gives a minimum figure of 1,832 cc [cubic centimeters].” The Boskop skull, it would seem, housed a brain perhaps 25 percent or more larger than our own.

\n

The idea that giant-brained people were not so long ago walking the dusty plains of South Africa was sufficiently shocking to draw in the luminaries back in England. Two of the most prominent anatomists of the day, both experts in the reconstruction of skulls, weighed in with opinions generally supportive of Haughton’s conclusions.

\n

The Scottish scientist Robert Broom reported that “we get for the corrected cranial capacity of the Boskop skull the very remarkable figure of 1,980 cc.” Remarkable indeed: These measures say that the distance from Boskop to humans is greater than the distance between humans and their Homo erectus predecessors.

\n
\n

What Happened to the Hominids who were Smarter than Us?

\n

I'm strongly inclined to defy the data -- true superintelligence should have just dominated our ancestors -- but given the expense of large skull size (primarily in difficult birthing) it also seems profoundly unlikely that a lineage would see expansion like this that wasn't buying them something mentally.

" } }, { "_id": "4TsSb8N8BBwtNZ2v9", "title": "Singularity Institute $100K Challenge Grant / 2009 Donations Reminder", "pageUrl": "https://www.lesswrong.com/posts/4TsSb8N8BBwtNZ2v9/singularity-institute-usd100k-challenge-grant-2009-donations", "postedAt": "2009-12-30T00:36:19.627Z", "baseScore": 16, "voteCount": 21, "commentCount": 18, "url": null, "contents": { "documentId": "4TsSb8N8BBwtNZ2v9", "html": "

In case you missed the notice at the SIAI Blog, the Singularity Institute's next Challenge Grant is now running - $100,000 of matching funds for all donations until February 28th, 2010.  If you want your donation to be tax-deductible in the U.S. for 2009, donations postmarked by December 31st will be counted for this tax year.

" } }, { "_id": "qjSHfbjmSyMnGR9DS", "title": "That other kind of status", "pageUrl": "https://www.lesswrong.com/posts/qjSHfbjmSyMnGR9DS/that-other-kind-of-status", "postedAt": "2009-12-29T02:45:34.179Z", "baseScore": 110, "voteCount": 93, "commentCount": 113, "url": null, "contents": { "documentId": "qjSHfbjmSyMnGR9DS", "html": "

\"Human nature 101.  Once they've staked their identity on being part of the defiant elect who know the Hidden Truth, there's no way it'll occur to them that they're our catspaws.\" - Mysterious Conspirator A

\n

This sentence sums up a very large category of human experience and motivation. Informally we talk about this all the time; formally it usually gets ignored in favor of a simple ladder model of status.

In the ladder model, status is a one-dimensional line from low to high. Every person occupies a certain rung on the ladder determined by other people's respect. When people take status-seeking actions, their goal is to change other people's opinions of themselves and move up the ladder.

But many, maybe most, human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They're a quick and easy way to have most of society think you're stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?

\n

\n

Could these just be the conclusions reached by honest (but presumably mistaken) truth-seekers unmotivated by status? It's possible, but many people not only hold these beliefs, but flaunt them out of proportion to any good they could do. And there are also cases of people pursuing low-status roles where there is no \"fact of the matter\". People go to great efforts to identify themselves as Goths or Juggalos or whatever, even when it means taking a quick status hit.

Classically people in these subcultures are low status in normal society. Since subcultures are smaller and use different criteria for high status, maybe they just want to be a big fish in a small pond, or rule in Hell rather than serve in Heaven, or be first in a village instead of second in Rome. The sheer number of idioms for the idea in the English language suggests that somebody somewhere must have thought along those lines.

But sometimes it's a subculture of one. That Time Cube guy, for example. He's not in it to gain cred with all the other Time Cube guys. And there are 9-11 Truthers who don't know any other Truthers in real life and may not even correspond with others online besides reading a few websites.

Which brings us back to Eliezer's explanation: the Truthers have \"staked their identity on being part of the defiant elect who know the Hidden Truth\". But what does that mean?

A biologist can make a rat feel full by stimulating its ventromedial hypothalamus. Such a rat will have no interest in food even if it hasn't eaten for days and its organs are all wasting away from starvation. But stimulate the ventrolateral hypothalamus, and the rat will feel famished and eat everything in sight, even if it's full to bursting. A rat isn't exactly seeking an optimum level of food, it's seeking an optimum ratio of ventromedial to ventrolateral hypothalamic stimulation, or, in rat terms, a nice, well-fed feeling.

And humans aren't seeking status per se, we're seeking a certain pattern of brain activation that corresponds to a self-assessment of having high status (possibly increased levels of dopamine in the limbic system). In human terms, this is something like self-esteem. This equation of self esteem with internal measurement of social status is a summary of sociometer theory.

\n

So already, we see a way in which overestimating status might be a very primitive form of wireheading. Having high status makes you feel good. Not having high status, but thinking you do, also makes you feel good. One would expect evolution to put a brake on this sort of behavior, and it does, but there may be an evolutionary incentive not to arrest it completely.

\n

If self esteem is really a measuring tool, it is a biased one. Ability to convince others you are high status gains you a selective advantage, and the easiest way to convince others of something is to believe it yourself. So there is pressure to adjust the sociometer a bit upward.

So a person trying to estimate zir social status must balance two conflicting goals. First, ze must try to get as accurate an assessment of status as possible in order to plan a social life and predict others' reactions. Second, ze must construct a narrative that allows zir to present zir social status as being as high as possible, in order to reap the benefits of appearing high status.

The corresponding mind model[1] looks a lot like an apologist and a revolutionary[2]: one drive working to convince you you're great (and fitting all data to that theory), and another acting as a brake and making sure you don't depart so far from reality that people start laughing.

In this model, people aren't just seeking status, they're (also? instead?) seeking a state of affairs that allows them to believe they have status. Genuinely having high status lets them assign themselves high status, but so do lots of other things. Being a 9-11 Truther works for exactly the reason mentioned in the original quote: they've figured out a deep and important secret that the rest of the world is too complacent to realize.

It explains a lot. Maybe too much. A model that can explain anything explains nothing. I'm not a 9-11 Truther. Why not? Because my reality-brake is too strong, and it wouldn't let me get away with it? Because I compensate by gaining status from telling myself how smart I am for not being a gullible fool like those Truthers are? Both explanations accord with my introspective experience, but at this level they do permit a certain level of mixing and matching that could explain any person holding or not holding any opinion.

In future posts in this sequence, I'll try to present some more specifics, especially with regard to the behavior of contrarians.

\n

 

\n

Footnotes

\n

1. I wrote this before reading Wei Dai's interesting post on the master-slave model, but it seems to have implications for this sort of question.

\n

2. One point that weakly supports this model: schizophrenics and other people who lose touch with reality sometimes suffer so-called delusions of grandeur. When the mind becomes detached from reality (loses its 'brake'), it is free to assign itself as high a status as it can imagine, and ends up assuming it's Napoleon or Jesus or something like that.

" } }, { "_id": "yZ4aieJeP85ezeiu3", "title": "A Master-Slave Model of Human Preferences", "pageUrl": "https://www.lesswrong.com/posts/yZ4aieJeP85ezeiu3/a-master-slave-model-of-human-preferences", "postedAt": "2009-12-29T01:02:23.694Z", "baseScore": 101, "voteCount": 79, "commentCount": 94, "url": null, "contents": { "documentId": "yZ4aieJeP85ezeiu3", "html": "

[This post is an expansion of my previous open thread comment, and largely inspired by Robin Hanson's writings.]

\n

In this post, I'll describe a simple agent, a toy model, whose preferences have some human-like features, as a test for those who propose to \"extract\" or \"extrapolate\" our preferences into a well-defined and rational form. What would the output of their extraction/extrapolation algorithms look like, after running on this toy model? Do the results agree with our intuitions about how this agent's preferences should be formalized? Or alternatively, since we haven't gotten that far along yet, we can use the model as one basis for a discussion about how we want to design those algorithms, or how we might want to make our own preferences more rational. This model is also intended to offer some insights into certain features of human preference, even though it doesn't capture all of them (it completely ignores akrasia for example).

\n

I'll call it the master-slave model. The agent is composed of two sub-agents, the master and the slave, each having their own goals. (The master is meant to represent unconscious parts of a human mind, and the slave corresponds to the conscious parts.) The master's terminal values are: health, sex, status, and power (representable by some relatively simple utility function). It controls the slave in two ways: direct reinforcement via pain and pleasure, and the ability to perform surgery on the slave's terminal values. It can, for example, reward the slave with pleasure when it finds something tasty to eat, or cause the slave to become obsessed with number theory as a way to gain status as a mathematician. However it has no direct way to control the agent's actions, which is left up to the slave.

\n

The slave's terminal values are to maximize pleasure, minimize pain, plus additional terminal values assigned by the master. Normally it's not aware of what the master does, so pain and pleasure just seem to occur after certain events, and it learns to anticipate them. And its other interests change from time to time for no apparent reason (but actually they change because the master has responded to changing circumstances by changing the slave's values). For example, the number theorist might one day have a sudden revelation that abstract mathematics is a waste of time and it should go into politics and philanthropy instead, all the while having no idea that the master is manipulating it to maximize status and power.
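As a compact restatement (this sketch is mine, not from the post itself; the class and method names are made up for illustration), the two sub-agents and the master's two channels of control could be laid out roughly like this:

```python
from dataclasses import dataclass, field

@dataclass
class Slave:
    """Conscious sub-agent: seeks pleasure, avoids pain, and pursues whatever
    extra terminal values the master has installed. It alone chooses actions."""
    extra_values: dict = field(default_factory=dict)  # e.g. {"number theory": 1.0}
    pleasure: float = 0.0
    pain: float = 0.0

@dataclass
class Master:
    """Unconscious sub-agent valuing health, sex, status, and power. It cannot
    act directly; it can only reinforce the slave and edit the slave's values."""
    terminal_values: tuple = ("health", "sex", "status", "power")

    def reinforce(self, slave: Slave, pleasure: float = 0.0, pain: float = 0.0):
        # Channel 1: direct reinforcement via pain and pleasure.
        slave.pleasure += pleasure
        slave.pain += pain

    def value_surgery(self, slave: Slave, value: str, weight: float):
        # Channel 2: rewrite the slave's terminal values, e.g. install an
        # obsession with number theory as a path to status.
        slave.extra_values[value] = weight
```

Nothing in the sketch does real work; it just pins down which agent holds which values and which levers the master can pull.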

\n

Before discussing how to extract preferences from this agent, let me point out some features of human preference that this model explains:

\n\n

The main issue I wanted to illuminate with this model is, whose preferences do we extract? I can see at least three possible approaches here:

\n
  1. the preferences of both the master and the slave as one individual agent
  2. the preferences of just the slave
  3. a compromise between, or an aggregate of, the preferences of the master and the slave as separate individuals

Considering the agent as a whole suggests that the master's values are the true terminal values, and the slave's values are merely instrumental values. From this perspective, the slave seems to be just a subroutine that the master uses to carry out its wishes. Certainly in any given mind there will be numerous subroutines that are tasked with accomplishing various subgoals, and if we were to look at a subroutine in isolation, its assigned subgoal would appear to be its terminal value, but we wouldn't consider that subgoal to be part of the mind's true preferences. Why should we treat the slave in this model differently?

\n

Well, one obvious reason that jumps out is that the slave is supposed to be conscious, while the master isn't, and perhaps only conscious beings should be considered morally significant. (Yvain previously defended this position in the context of akrasia.) Plus, the slave is in charge day-to-day and could potentially overthrow the master. For example, the slave could program an altruistic AI and hit the run button, before the master has a chance to delete the altruism value from the slave. But a problem here is that the slave's preferences aren't stable and consistent. What we'd extract from a given agent would depend on the time and circumstances of the extraction, and that element of randomness seems wrong.

\n

The last approach, of finding a compromise between the preferences of the master and the slave, I think best represents Robin's own position. Unfortunately I'm not really sure I understand the rationale behind it. Perhaps someone can try to explain it in a comment or future post?

" } }, { "_id": "jHfm4jzCyEwNEY37z", "title": "Scaling Evidence and Faith", "pageUrl": "https://www.lesswrong.com/posts/jHfm4jzCyEwNEY37z/scaling-evidence-and-faith", "postedAt": "2009-12-27T12:30:45.561Z", "baseScore": -4, "voteCount": 12, "commentCount": 36, "url": null, "contents": { "documentId": "jHfm4jzCyEwNEY37z", "html": "

\n

Often when talking with people of faith about the issue of atheism, a common response I get is:

\n
\n

“It takes as much faith to believe in evolution as it does to believe in god.”

\n
\n

Any atheist who has ever talked to a theist has likely encountered this line of reasoning. There is a distinction to be made between glorifying faith as evidence of truth, as theists do, and the desire to rely on as little faith as possible in our understanding, as the rational truth-seeking person does.

\n

As has been discussed previously, realistically P = 1 is not attainable, at least for the foreseeable future. Nor is P=1 necessary to be comfortable in the universe; Richard Feynman and Carl Sagan, among others, felt the same way. We know that heuristics “fill the gaps” in our knowledge of recognizable scenarios, such that we can comfortably demand less evidence and still come to a reasonable conclusion about our surroundings in most cases. We also know the myriad ways in which those heuristics fail. Much has been written about uncertainty in the decision making process, and I would imagine it will be the focus of much future research into logic. So, I have no problem concluding that there is some level of faith in all decision making. How much faith in a claim am I comfortable with? The way in which I have been thinking about the different levels of \"faith\" recently is as a sliding bar:

\n

\"\"

\n

On the left side, natural evidence is indicated in blue which is comprised of data points which indicate to the observer the natural proof of the argument. On the right, faith is indicated in red which is comprised of either counter evidence for the claim or a lack of information. The bar’s units can be anything (probability, individual records) so long as both sides are measured in equal parts.
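For readers who like things concrete, here is a minimal sketch of the bookkeeping the bar implies (this is my own illustration, not the author's; `render_bar` and the 40-character width are arbitrary choices): a single evidence fraction, with faith defined as the remainder so both sides are automatically measured in the same units.

```python
def render_bar(evidence_fraction, width=40):
    """Render the evidence/faith scale as ASCII.

    evidence_fraction: share of the claim supported by natural evidence (0.0-1.0).
    The remainder of the bar is 'faith' (counter-evidence or missing information)."""
    if not 0.0 <= evidence_fraction <= 1.0:
        raise ValueError("evidence_fraction must be between 0 and 1")
    evidence_cells = round(evidence_fraction * width)   # blue side, drawn as E
    faith_cells = width - evidence_cells                # red side, drawn as F
    bar = "E" * evidence_cells + "F" * faith_cells
    return f"[{bar}] {evidence_fraction:.0%} evidence / {1 - evidence_fraction:.0%} faith"

# The three settings used in the examples below:
print(render_bar(0.99))
print(render_bar(0.80))
print(render_bar(0.50))
```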

\n

A few examples:

\n
\n

Claim: Natural and sexual selection as described by Charles Darwin accurately describes the processes which develop biological traits in separate species.

\n

I agree with this claim with 99% natural evidence in the form of observed speciation and fossil and geologic discoveries and accept 1% faith because, well, anything can happen:

\n

\"\"

\n

Claim: The price of Human full DNA sequencing will fall below $1000.00 before 2012.

\n

I agree with this at 80% natural evidence, based on the projection that the price will continue to fall at the same rate that it has, and accept 20% faith based on the fact that predictions failed for 2008 and 2009.

\n

\"\"

\n

You get the idea.

\n
\n

I started noticing a few years back that most rational people, and in general people within scientific and analytic communities, generally needed to slide their evidence meter pretty far to the right before they would accept a premise, even tentatively. After all, even frequentists, as flawed as their methodology may be, still chose 95% as the conventional level of significance. Conversely, with a meter far to the left, demanding much faith, most rational people I have found choose to reject those claims.

\n

It’s not a perfect method, nor is it terribly rigorous. However, I’m not interested in more rigorous methods, because I am interested in bridging the gap between the rationalist and, let’s say, those who have a propensity for demanding lower levels of evidence for beliefs. I think the scale method is a grounding point for discussion between two people who hold different views on a subject. I am of the mind that this helps the average person who has never heard of probability theory grasp the idea, and see why one person's cause for faith is another person's cause for natural evidence. This way, the faithful can discuss why the incorruptibles are a miracle and proof of faith, and another can point to the evidence of embalming, mummification, or a preserving burial environment.

\n

One question I have myself about the methodology: would the most rational individual restrict themselves to two scales for all claims; one for acceptance and another for rejection?

\n

Any refinements are welcome – it’s an open source methodology.

\n

My scale of trust in my scale method is 50% evidence and 50% faith.

\n

\"\"

\n

 

\n

 

" } }, { "_id": "KLQCenqKcjGRkKw7r", "title": "Playing the Meta-game", "pageUrl": "https://www.lesswrong.com/posts/KLQCenqKcjGRkKw7r/playing-the-meta-game", "postedAt": "2009-12-25T10:06:46.559Z", "baseScore": 31, "voteCount": 29, "commentCount": 47, "url": null, "contents": { "documentId": "KLQCenqKcjGRkKw7r", "html": "

In honor of today's Schelling-pointmas, a true Schelling-inspired story from a class I was in at a law school I did not attend:

\n

As always, the class was dead silent as we walked to the front of the room.  The professor only described the game after the participants had volunteered and been chosen; as a result, we rarely were familiar with the games we were playing, which the professor preferred because his money was on the line.

\n

Both of us were assigned different groups of seven partners in the class. I was given seven slips of paper and my opponent was given six.  Our goal was to make deals with our partners about how to divide a dollar, one per partner, and then write the deal down on a slip of paper.  Whoever had a greater total take from the deals won $20.  All negotiations were public.

\n

The professor left the room, giving us three minutes to negotiate.  The class exploded.

\n

And then I hit a wall.  Everybody with whom I was negotiating knew the rules, and they knew that I cared a hell of a lot more about the results of the negotiation than they did.  I was getting offers on the order of $.20 and less--results straight from the theory of the ultimatum game--and no amount of begging or threatening was changing that.

\n

Three minutes pass quickly under pressure.  When the professor returned, I had written a total of $1.45 in deals: most people eventually accepted my meta-argument that they really didn't want to carry small coins around with them, so they should give me a quarter and take three for themselves, but two people waited until the last second and took 90 cents each.  Even then, I only got ten cents from those two by threatening not to accept one- or five-cent deals.

\n

My opponent, on the other hand, had amassed a relative fortune: over five dollars.  It turned out that he had been using the fact that he could make fewer deals than he had partners to auction off the chance to make a deal.  His partners kept naming lower and lower demands, and he ended up getting the majority of each dollar with little effort.

\n

I made a mock-anguished face, as the professor explained that the game was set up to demonstrate the effect of scarcity on the balance between merchants and customers.  Yeah, yeah, monopolies are bad.  Econ 101 stuff.

\n

Then he turned to me and asked why I lost, when the odds were stacked in my favor.  I asked him what he meant; after all, it was precisely because my partners knew that I could make seven deals that they could bargain against me.

\n

He said, \"But you could have torn up one of the slips.\"

\n

He was right.  I was playing by the rules, when I should have been setting them.

\n

 

\n

Edit: extraneous and hyperbolic material removed

" } }, { "_id": "TPb7qE3SLq6XiG4oY", "title": "Positive-affect-day-Schelling-point-mas Meetup", "pageUrl": "https://www.lesswrong.com/posts/TPb7qE3SLq6XiG4oY/positive-affect-day-schelling-point-mas-meetup", "postedAt": "2009-12-23T19:41:02.761Z", "baseScore": 5, "voteCount": 5, "commentCount": 32, "url": null, "contents": { "documentId": "TPb7qE3SLq6XiG4oY", "html": "

There will be a LessWrong Meetup on Friday, December 25th (the day after tomorrow).  We're meeting at 6:00 PM at the SIAI House in Santa Clara, CA (changed from Pan Tao Restaurant at 1686 South Wolfe Road, Sunnyvale, CA) for pizza or whatever else we can figure out how to cook.  Consider it an available refuge if you haven't other plans.

\n

Please comment if you plan to show up!

\n

(Edit - See poll below on whether we'd rather stay in and eat something simple vs. going out to a restaurant - it's possible that everyone was assuming everyone else would prefer the latter while actually preferring the former themselves. - EY)

" } }, { "_id": "b7cWpbXwcQySDq4kK", "title": "Are these cognitive biases, biases?", "pageUrl": "https://www.lesswrong.com/posts/b7cWpbXwcQySDq4kK/are-these-cognitive-biases-biases", "postedAt": "2009-12-23T17:27:09.137Z", "baseScore": 46, "voteCount": 36, "commentCount": 24, "url": null, "contents": { "documentId": "b7cWpbXwcQySDq4kK", "html": "

Continuing my special report on people who don't think human reasoning is all that bad, I'll now briefly present some studies which claim that phenomena other researchers have considered signs of faulty reasoning aren't actually that. I found these from Gigerenzer (2004), which I in turn found when I went looking for further work done on the Take the Best algorithm.

\n

Before we get to the list - what is Gigerenzer's exact claim when he lists these previous studies? Well, he's saying that minds aren't actually biased, but may make judgments that seem biased in certain environments.

\n
\n

Table 4.1 Twelve examples of phenomena that were first interpreted as \"cognitive illusions\" but later revalued as reasonable judgments given the environmental structure. [...]

\n

The general argument is that an unbiased mind plus environmental structure (such as unsystematic error, unequal sample sizes, skewed distributions) is sufficient to produce the phenomenon. Note that other factors can also contribute to some of the phenomena. The moral is not that people would never err, but that in order to understand good and bad judgments, one needs to analyze the structure of the problem or of the natural environment.

\n
\n

On to the actual examples. Of the twelve examples referenced, I've included three for now.

\n

The False Consensus Effect

\n

Bias description: People tend to imagine that everyone responds the way they do. They tend to see their own behavior as typical. The tendency to exaggerate how common one’s opinions and behavior are is called the false consensus effect. For example, in one study, subjects were asked to walk around on campus for 30 minutes, wearing a sign board that said \"Repent!\". Those who agreed to wear the sign estimated that on average 63.5% of their fellow students would also agree, while those who disagreed estimated 23.3% on average.

\n

Counterclaim (Dawes & Mulford, 1996): The correctness of reasoning is not estimated on the basis of whether or not one arrives at the correct result. Instead, we look at whether people reach reasonable conclusions given the data they have. Suppose we ask people to estimate whether an urn contains more blue balls or red balls, after allowing them to draw one ball. If one person first draws a red ball, and another person draws a blue ball, then we should expect them to give different estimates. In the absence of other data, you should treat your own preferences as evidence for the preferences of others. Although the actual mean for people willing to carry a sign saying \"Repent!\" probably lies somewhere in between the estimates given, these estimates are quite close to the one-third and two-thirds estimates that would arise from a Bayesian analysis with a uniform prior distribution of belief. A study by the authors suggested that people do actually give their own opinion roughly the right amount of weight.
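To make the urn analogy concrete (this sketch is mine, not from Dawes & Mulford's paper), here is the Bayesian calculation behind the one-third/two-thirds figures: with a uniform Beta(1, 1) prior over the proportion of people who would agree, observing your own single "draw" shifts the posterior mean to 2/3 if you agreed and 1/3 if you refused.

```python
from fractions import Fraction

def posterior_mean_agree(agrees, refusals, prior_a=1, prior_b=1):
    """Posterior mean of the population agreement rate under a Beta prior.

    Beta(prior_a, prior_b) prior; each observed agreement/refusal is one
    Bernoulli draw. With the uniform prior (1, 1) this is Laplace's rule
    of succession: (1 + successes) / (2 + trials)."""
    return Fraction(prior_a + agrees, prior_a + prior_b + agrees + refusals)

# Using only yourself as data:
print(posterior_mean_agree(agrees=1, refusals=0))  # 2/3 -- near the 63.5% estimate
print(posterior_mean_agree(agrees=0, refusals=1))  # 1/3 -- near the 23.3% estimate
```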

\n

Overconfidence / Underconfidence

\n

Bias description: Present people with binary yes/no questions. Ask them to specify how confident they are, on a scale from .5 to 1, that they got the answer correct. The mean subjective probability x assigned to the correctness of general knowledge items tends to exceed the proportion of correct answers c, x - c > 0; people are overconfident. The hard-easy effect says that people tend to be underconfident in easy questions, and overconfident in hard questions.

\n

Counterclaim (Juslin, Winman & Olsson 2000): The apparent overconfidence and underconfidence effects are caused by a number of statistical phenomena, such as scale-end effects, linear dependency, and regression effects. In particular, the questions in the relevant studies have been selectively drawn in a manner that is unrepresentative of the actual environment, and thus throws off the participants' estimates of their own accuracy. Define a \"representative\" item sample as one coming from a study containing explicit statements that (a) a natural environment had been defined and (b) the items had been generated by random sampling of this environment. Define any studies that didn't describe how the items had been chosen, or that explicitly describe a different procedure, as having a \"selected\" item sample. A survey of several studies contained 95 independent data points with selected item samples and 35 independent data points with representative item samples, where \"independence\" means different participant samples (i.e. all data points were between subjects).

\n

For studies with selected item samples, the mean subjective probability was .73 and the actual proportion correct was .64, indicating a clear overconfidence effect. However, for studies with representative item samples, the mean subjective probability was .73 and the proportion correct was .72, indicating close to no overconfidence. The over/underconfidence effect of nearly zero for the representative samples was also not a mere consequence of averaging: for the selected item samples, the mean absolute bias was .10, while for the representative item samples it was .03. Once scale-end effects and linear dependency are controlled for, the remaining hard-easy effect is rather modest.

\n

What does the \"representative\" sample mean? If I understood correctly: Imagine that you know that 30% of the people living in a certain city are black, and 70% are white. Next you're presented with questions where you have to guess whether a certain inhabitant of the city is black or white. If you don't have any other information, you know that consistently guessing \"white\" in every question will get you 70% correct. So when the questionnaire also asks you for your calibration, you say that you're 70% certain for each question.

Now, assuming that the survey questions had been composed by randomly sampling from all the inhabitants of the city (a \"representative\" sampling), then you would indeed be correct about 70% of the time and be well-calibrated. But assume that instead, all the people the survey asked about live in a certain neighborhood, which happens to be predominantly black (a \"selected\" sampling). Now you might have only 40% right answers, while you indicated a confidence of 70%, so the researchers behind the survey mark you as overconfident.
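A small simulation may make the point vivid (my own sketch, not from Juslin et al.): a guesser who always says "white" with 70% confidence is well calibrated when questions are sampled from the whole city, but looks badly overconfident when questions are drawn only from a mostly-black neighborhood.

```python
import random

random.seed(0)

def accuracy(p_white, n=10_000):
    """Fraction correct for a guesser who always answers 'white',
    when each questioned inhabitant is white with probability p_white."""
    return sum(random.random() < p_white for _ in range(n)) / n

confidence = 0.70  # stated confidence, based on the citywide base rate

rep = accuracy(p_white=0.70)   # representative sample: the whole city
sel = accuracy(p_white=0.40)   # selected sample: a mostly-black neighborhood

print(f"representative: stated {confidence:.0%}, correct {rep:.0%}")  # ~70% vs 70%
print(f"selected:       stated {confidence:.0%}, correct {sel:.0%}")  # 70% vs ~40%
```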

\n

Availability Bias

\n

Bias description: We estimate probabilities based on how easily instances are recalled, not based on their actual frequency. Tversky & Kahneman conducted a classic study where participants were given five consonants (K, L, N, R, V), and were asked to estimate whether each letter appeared more frequently as the first or the third letter of a word. Each was judged by the participants to occur more frequently as the first letter, even though all five actually occur more frequently as the third letter. This was assumed to be because words starting with a particular letter are more easily recalled than words that have a particular letter in the third position.

\n

Counterclaim (Sedlmeier, Hertwig & Gigerenzer 1998): Not only does the only attempted replication of Tversky & Kahneman's result seem to be a single one-page article, it seems to be contradicted by a number of studies suggesting that memory is often (though not always) excellent in storing the frequency information from various environments. In particular, several authors have documented that participants' judgments of the frequency of letters and words generally show a remarkable sensitivity to the actual frequencies. The one previous study that did try to replicate the classical experiment failed to do so. It used Tversky & Kahneman's five consonants, all more frequent in the third position, and also five other consonants that were more frequent in the first position. All five consonants that appear more often in the first position were judged to do so; three of the five consonants that appear more frequently in the third position were also judged to do so.

\n

The classic article did not specify a mechanism for how the availability heuristic might work. The current authors considered four different mechanisms. Availability by number states that if asked for the proportion in which a certain letter occurs in the first versus in a later position in words, one produces words with this letter in the respective positions and uses the produced proportion as an estimate for the actual proportion. Availability by speed states that one produces single words with the letter in this position, and uses the time ratio of the retrieval times as an estimate of the actual proportion. The letter class hypothesis notes that the original sample was atypical; most consonants (12 of 20) are in fact more frequent in the first position. This hypothesis assumes that people know whether consonants or vowels are more frequent in which position, and default to that knowledge. The regressed frequencies hypothesis assumes that people do actually have a rather good knowledge of the actual frequencies, but that the estimates are regressed towards the mean: low frequencies are overestimated and large frequencies underestimated.
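As a toy illustration of the regressed-frequencies hypothesis (my own sketch; the shrinkage weight and the frequencies are made-up numbers, not values from the paper), estimates can be modeled as the true frequency pulled part of the way toward the mean frequency:

```python
def regressed_estimate(true_freq, mean_freq, shrinkage=0.5):
    """Regressed-frequency model: the estimate lies between the true
    frequency and the overall mean frequency.

    shrinkage = 0 reproduces the truth exactly; shrinkage = 1 ignores it.
    Low true frequencies get overestimated, high ones underestimated."""
    return true_freq + shrinkage * (mean_freq - true_freq)

mean = 0.05  # hypothetical average first-position frequency across letters
for true in (0.01, 0.05, 0.12):
    print(f"true {true:.2f} -> estimated {regressed_estimate(true, mean):.3f}")
# true 0.01 -> estimated 0.030   (overestimated)
# true 0.05 -> estimated 0.050   (unchanged at the mean)
# true 0.12 -> estimated 0.085   (underestimated)
```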

\n

After two studies made to calibrate the predictions of the availability hypotheses, three main studies were conducted. In each, the participants were asked whether a certain letter was more frequent in the first or second position of all German words. They were also asked about the proportions of each letter appearing in the first or second position. Study one was a basic replication of the Tversky & Kahneman study, albeit with more letters. Study two was designed to be favorable to the letter class hypothesis: each participant was only given one letter whose frequency to judge instead of several. It was thought that participants may have switched away from a letter class strategy when presented with multiple consonants and vowels. Study three was designed to be favorable to the availability hypotheses, in that the participants were made to first produce words with the letters O, U, N and R in the first and second position (90 seconds per letter) before proceeding as in study one. Despite two of the studies having been explicitly constructed to be favorable to the other hypotheses, the predictions of the regressed frequency hypothesis had the best match to the actual estimates in all three studies. Thus it seems that people are capable of estimating letter frequencies, although in a regressed form.

\n

The authors propose two different explanations for the discrepancy of results with the classic study. One is that the corpus used by Tversky & Kahneman only covers words at least three letters long, but English has plenty of one- and two-letter words. The participants in the classic study were told to disregard words with fewer than three letters, but it may be that they were unable to properly do so. Alternatively, it may have been caused by the use of an unrepresentative sample of letters: had the authors used only consonants that are more frequent in the second position, then they too would have reported that the frequency of those letters in the first position is overestimated. However, a consideration of all the consonants tested shows that the frequency of those in the first position is actually underestimated. This disagrees with the interpretation by Tversky & Kahneman, and implies a regression effect as the main cause.

\n

EDIT: This result doesn't mean that the availability heuristic would be a myth, of course. It is, AFAIK, true that e.g. biased reporting in the media will throw off people's conceptions of what events are the most likely. But one probably wouldn't be too far from the truth if they said that in that case, the brain is still computing relative frequencies correctly, given the information at hand - it's just that the media reporting is biased. The claim that there are some types of important information for which the mind has particular difficulty assessing relative frequencies correctly, though, doesn't seem to be as supported as is sometimes claimed.

" } }, { "_id": "5GmAC7fmwpAFbYrzp", "title": "On the Power of Intelligence and Rationality", "pageUrl": "https://www.lesswrong.com/posts/5GmAC7fmwpAFbYrzp/on-the-power-of-intelligence-and-rationality", "postedAt": "2009-12-23T10:49:44.496Z", "baseScore": 18, "voteCount": 27, "commentCount": 193, "url": null, "contents": { "documentId": "5GmAC7fmwpAFbYrzp", "html": "

As Eliezer and many others on Less Wrong have said, the way the human species rose to dominate the Earth was through our intelligence - and not through our muscle power, biochemical weapons, or superior resistance to environmental hazards. Given our overwhelming power over other species, and the fact that many former top predators are now extinct or endangered, we should readily accept that general intelligence is a game-changing power on the species level.

\n

Similarly, one of the key ingredients in the birth of the modern era was the discovery of science, and its counterpart, the discovery of the art of Traditional Rationality. Armed with these, the nations of Western Europe managed to dominate the entire rest of the world, even though, when they began their explorations in the 15th century, the Chinese were more advanced in many respects. Given how Western Europe, and the cultures derived from it, has so completely surpassed the rest of the world in terms of wealth and military might, we should readily accept that science and rationality is a game-changing power on the civilization level.

\n

However, neither of these imply that intelligence, science, and rationality, as a practical matter, are the best way to get things done by individual people operating in the year 2009. We can easily see that many things which work on the species level, or the civilization level, do not work for individuals and small groups. For instance, until the discovery of nuclear weapons, armed conflict was often a primary means of settling disputes between nation-states. However, if you tried to settle your dispute with your neighbor, or your company's dispute with its competitor, using armed force, it would achieve nothing except getting you thrown in prison.

\n

People are crazy and the world is mad, but it does not necessarily follow that we should try to solve our own problems primarily by becoming more sane. Plenty of people achieve many of their goals despite being completely nuts. Adolf Hitler, for example, achieved a large fraction of his (extremely ambitious!) goals, despite having numerous beliefs that most of us would recognize as making no sense whatsoever.

\n

We know, as a matter of historical fact, that Adolf Hitler and his Nazi Party, despite being generally incompetent, unintelligent, irrational, superstitious and just plain insane, managed to take over a country of tens of millions of people from nothing, in the span of fifteen years. So far as I am aware, no group of people has managed to achieve anything even remotely similar using, not only rationality, but any skill involving deliberative thought, as opposed to skills such as yelling at huge crowds of people. However, it is a corollary to the statement that no one knows what science doesn't know that no one knows what history doesn't know, so it is entirely possible, perhaps likely, that there is something I am overlooking. To anyone who would assert that intelligence, science or rationality is the Ultimate Power, not just on the level of a species or civilization, but on the level of an individual or small group, let them show that their belief is based in reality.

" } }, { "_id": "nKWajrBpMaJRWqTfB", "title": "Two Truths and a Lie", "pageUrl": "https://www.lesswrong.com/posts/nKWajrBpMaJRWqTfB/two-truths-and-a-lie", "postedAt": "2009-12-23T06:34:55.204Z", "baseScore": 70, "voteCount": 68, "commentCount": 67, "url": null, "contents": { "documentId": "nKWajrBpMaJRWqTfB", "html": "

Response to Man-with-a-hammer syndrome.

\n

It's been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I'd like to posit a simple way to spot such error, with the caveat that it may not work for every case.

\n

There's an old game called Two Truths and a Lie. I'd bet almost everyone's heard of it, but I'll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people, people get points for not being fooled. That's it. I'd like to propose a rationalist's version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theory-Of-Everythings, and Perfectly General Explanations. It's almost as simple.

\n

Say you have a theory about human behaviour. Get a friend to do a little research and assert three factual claims about how people behave that your theory would realistically apply to. At least one of these claims must be false. See if you can explain every claim using your theory before learning which one's false. 

\n

If you can come up with a convincing explanation for all three statements, you must be very cautious when using your One Theory. If it can explain falsehoods, there's a very high risk you're going to use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not consistent enough to do so reliably. You must exercise the utmost caution in applying your One Theory, if not abandon reliance on it altogether. If, on the other hand, you can't come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there's at least a chance you're on to something.

\n

Come to think of it, this is an excellent challenge to any proponent of a Big Idea. Give them three facts, some of which are false, and see if their Idea can discriminate. Just remember to be ruthless when they get it wrong; it doesn't prove their idea is totally wrong, only that reliance upon it would be.

\n

Edited to clarify: My argument is not that one should simply abandon a theory altogether. In some cases, this may be justified, if all the theory has going for it is its predictive power, and you show it lacks that, toss it. But in the case of broad, complex theories that actually can explain many divergent outcomes, this exercise should teach you not to rely on that theory as a means of inference. Yes, you should believe in evolution. No, you shouldn't make broad inferences about human behaviour without any data because they are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.

" } }, { "_id": "hkBp6a5RCDNedo6Wy", "title": "The 9/11 Meta-Truther Conspiracy Theory", "pageUrl": "https://www.lesswrong.com/posts/hkBp6a5RCDNedo6Wy/the-9-11-meta-truther-conspiracy-theory", "postedAt": "2009-12-22T18:59:05.811Z", "baseScore": 99, "voteCount": 83, "commentCount": 182, "url": null, "contents": { "documentId": "hkBp6a5RCDNedo6Wy", "html": "

Date:  September 11th, 2001.
Personnel:  Unknown [designate A], Unknown [designate B], Unknown [designate C].

\n

A:  It's done.  The plane targeted at Congress was crashed by those on-board, but the Pentagon and Trade Center attacks occurred just as scheduled.

\n

B:  Congress seems sufficiently angry in any case.  I don't think the further steps of the plan will meet with any opposition.  We should gain the governmental powers we need, and the stock market should move as expected.

\n

A:  Good.  Have you prepared the conspiracy theorists to accuse us?

\n

B:  Yes.  All is in readiness.  The first accusations will fly within the hour.

\n

C:  Er...

\n

A:  What is it?

\n

C:  Sorry, I know I'm a bit new to this sort of thing, but why are we sponsoring conspiracy theorists?  Aren't they our arch-nemeses, tenaciously hunting down and exposing our lies?

\n

A:  No, my young apprentice, just the opposite.  As soon as you pull off a conspiracy, the first thing you do is start a conspiracy theory about it.  Day one.

\n

C:  What?  We want to be accused of deliberately ignoring intelligence and assassinating that one agent who tried to forward specific information -

\n

A:  No, of course not!  What you do in a case like this is start an accusation so ridiculous that nobody else wants to be associated with the accusers.  You create a low-prestige conspiracy theory and staff it with as many vocal lunatics as you can.  That way no one wants to be seen as affiliating with the conspiracy theorists by making a similar accusation.

\n

C:  That works?  I know I'm not the brightest fish in the barrel - sometimes, hanging around you guys, I feel almost as dumb as I pretend to be - but even I know that \"The world's stupidest man may say the sun is shining, but that doesn't make it dark out.\"

\n

B:  Works like a charm, in my experience.  Like that business with the Section Magenta aircraft.  All you need is a bunch of lunatics screaming about aliens and no one respectable will dream of reporting a \"flying saucer\" after that.

\n

C:  So what did you plan for the 9/11 cover conspiracy theory, by the way?  Are the conspiracy theorists going to say the Jews were behind it?  Can't get much lower-prestige than anti-Semitism!

\n

B:  You've got the right general idea, but you're not thinking creatively enough.  Israel does have a clear motive here - even though they weren't in fact behind it - and if the conspiracy theorists cast a wide enough net, they're bound to turn up a handful of facts that seem to support their theory.  The public doesn't understand how to discount that sort of \"evidence\", though, so they might actually be convinced.

\n

C:  So... the Illuminati planned the whole operation?

\n

B:  You know, for someone who reads as much science fiction as you do, you sure don't think outside the box.

\n

C:  ...okay, seriously, man.  I don't see how a theory could get any more ridiculous than that and still acquire followers.

\n

(A and B crack up laughing.)

\n

B:  Hah!  What would you have done to cover up the Section Magenta aircraft, I wonder?  Blamed it on Russia?  To this day there are still people on the lookout for hidden aliens who overfly populated areas in gigantic non-nanotechnological aircraft with their lights on.

\n

A:  So what did we pick for the 9/11 cover conspiracy, by the way?

\n

B:  Hm?  Oh, the World Trade Center wasn't brought down by planes crashing into it.  It was pre-planted explosives.

\n

C:  You're kidding me.

\n

B:  Seriously, that's the cover conspiracy.

\n

C:  There are videos already on the Internet of the planes flying into the World Trade Center.  It was on live television.  There are thousands of witnesses on the ground who saw it with their own eyes -

\n

B:  Right, but the conspiracy theory is, the planes wouldn't have done it on their own - it took pre-planted explosives too.

\n

C:  No one is going to buy that.  I don't care who you bought out in the conspiracy-theoretic community.  This attack would've had the same political effect whether the buildings came down entirely or just the top floors burned.  It's not like we spent a lot of time worrying about at what angle the planes would hit the building.  The whole point was to keep our hands clean!  That's why the al Qaeda plot was such a godsend compared to the original anthrax plan.  All we had to do was let it happen.  Once we arranged for the attack to go through, we were done, we had no conceivable motive to risk exposure by planting explosives on top of that -

\n

B:  Don't take this the wrong way.  But one, you don't understand conspiracy theorists at all.  Two, they bought the aliens, didn't they?  And three, it's already online and the usual crowd of anti-establishment types are already snapping it up.

\n

C:  Are you joking?

\n

B:  Honest to Betsy.  People are claiming that the buildings fell too quickly and that the video showed ejecta corresponding to controlled demolitions.

\n

C:  Wow.  I don't suppose we actually planted some explosives, just to make sure that -

\n

A:  Oh, hell no, son.  That sort of thing is never necessary.  They'll turn up what looks to them like evidence.  They always do.

\n

C:  Aren't they going to, um, suspect they're pawns?

\n

A:  Human nature 101.  Once they've staked their identity on being part of the defiant elect who know the Hidden Truth, there's no way it'll occur to them that they're our catspaws.

\n

B:  One reason our fine fraternity has controlled the world for hundreds of years is that we've managed to make \"conspiracy theories\" look stupid.  You know how often you've ever heard someone suggest that possibility?  None.  You know why?  Because it would be a conspiracy theory.

\n

A:  Not to mention that the story would be too recursive to catch on.  To conceal the truth, one need only make the reality complicated enough to exceed the stack depth of the average newspaper reader.

\n

B:  And I've saved the dessert for last.

\n

C:  Really?

\n

B:  Yeah.  You can go totally overboard with these guys.  They never notice and they never suspect they're being used.

\n

C:  Hit me.

\n

B:  We've arranged for them to be called \"truthers\".

\n
\n

I hereby dub any believers in this theory 9/11 meta-truthers.

\n
\n

I, Eliezer Yudkowsky, do now publicly announce that I am not planning to commit suicide, at any time ever, but particularly not in the next couple of weeks; and moreover, I don't take this possibility seriously myself at the moment, so you would merely be drawing attention to yourselves by assassinating me.  However, I also hereby vow that if the Singularity Institute happens to receive donations from any sources totaling at least $3M in 2010, I will take down this post and never publicly speak of the subject again; and if anyone asks, I'll tell them honestly that it was probably a coincidence.

" } }, { "_id": "Hk3MHFdofi4P5zBwj", "title": "lessmeta", "pageUrl": "https://www.lesswrong.com/posts/Hk3MHFdofi4P5zBwj/lessmeta", "postedAt": "2009-12-22T17:57:21.810Z", "baseScore": 6, "voteCount": 21, "commentCount": 31, "url": null, "contents": { "documentId": "Hk3MHFdofi4P5zBwj", "html": "

The social bookmarking site metafilter has a sister site called metatalk, which works the same way but is devoted entirely to talking about metafilter itself. Arguments about arguments, discussions about discussions, proposals for changes in site architecture, etc.

\n

Arguments about arguments are often less productive than the arguments they are about, but they CAN be quite productive, and there's certainly a place for them. The only thing wrong with them is when they obstruct the discussion that spawned them, and so the idea of splitting off metatalk into its own site is really quite a clever one.

\n

Lesswrong's problem is a peculiar one. It is ENTIRELY devoted to meta-arguments, to the extent that people have to shoehorn anything else they want to talk about into a cleverly (or not so cleverly) disguised example of some more meta topic. It's a kite without a string.

\n

Imagine if you had been around the internet, trying to have a rational discussion about topic X, but unable to find an intelligent venue, and then stumbling upon lesswrong. \"Aha!\" you say. \"Finally a community making a concerted effort to be rational!\"

\n

But to your dismay, you find that the ONLY thing they talk about is being rational, and a few other subjects that have been apparently grandfathered in. It's not that they have no interest in topic X, there's just no place on the site they're allowed to talk about it.

\n

What I propose is a \"non-meta\" sister site, where people can talk and think about anything BESIDES talking and thinking. Well, you know what I mean.

\n

Yes?

" } }, { "_id": "EyhohecvyjkzgT7XG", "title": "Karma Changes", "pageUrl": "https://www.lesswrong.com/posts/EyhohecvyjkzgT7XG/karma-changes", "postedAt": "2009-12-22T00:17:14.517Z", "baseScore": 4, "voteCount": 13, "commentCount": 93, "url": null, "contents": { "documentId": "EyhohecvyjkzgT7XG", "html": "

As recently (re-)suggested by Kaj Sotala, posts now have much larger effects on karma than comments:  Each up or down vote on a post is worth 10 karma.

\n

Negative votes on posts have had karma effects all along, but for some reason Reddit's code imposed a display cap (not an actual cap) of 0.  This violates a basic user interface principle: things with important effects should have visible effects.  Since this just got 10x more important, we now show negative post totals rather than \"0\".  This also provides some feedback to posters that was previously missing.  Note that downvoting a post costs 10 karma from your downvote cap of 4x current karma.

\n

The minimum karma to start posting has been raised to 50.
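A minimal sketch of the new mechanics as I read them from this announcement (the function and constant names are mine, not the actual Reddit/LessWrong code): post votes are weighted 10x, negative post totals are displayed rather than floored at 0, downvoting a post spends 10 points of a downvote budget of 4x your current karma, and posting requires at least 50 karma.

```python
POST_VOTE_WEIGHT = 10
COMMENT_VOTE_WEIGHT = 1
MIN_KARMA_TO_POST = 50
DOWNVOTE_CAP_MULTIPLIER = 4

def karma_delta(is_post, direction):
    """Karma change to the author from one vote; direction is +1 or -1."""
    weight = POST_VOTE_WEIGHT if is_post else COMMENT_VOTE_WEIGHT
    return direction * weight

def displayed_score(score):
    """Negative post totals are now shown as-is (no display floor of 0)."""
    return score

def can_post(karma):
    return karma >= MIN_KARMA_TO_POST

def can_downvote_post(downvote_points_spent, karma):
    """Downvoting a post draws 10 points against a 4x-karma downvote budget."""
    return downvote_points_spent + POST_VOTE_WEIGHT <= DOWNVOTE_CAP_MULTIPLIER * karma
```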

\n

Thanks to our friends at Tricycle for implementing this request!

" } }, { "_id": "9KvefburLia7ptEE3", "title": "The Correct Contrarian Cluster", "pageUrl": "https://www.lesswrong.com/posts/9KvefburLia7ptEE3/the-correct-contrarian-cluster", "postedAt": "2009-12-21T22:01:56.781Z", "baseScore": 67, "voteCount": 69, "commentCount": 255, "url": null, "contents": { "documentId": "9KvefburLia7ptEE3", "html": "

Followup to: Contrarian Status Catch-22

\n

Suppose you know someone believes that the World Trade Center was rigged with explosives on 9/11.  What else can you infer about them?  Are they more or less likely than average to believe in homeopathy?

\n

I couldn't cite an experiment to verify it, but it seems likely that:

\n\n

All sorts of obvious disclaimers can be included here.  Someone who expresses an extreme-left contrarian view is less likely to have an extreme-right contrarian view.  Different character traits may contribute to expressing contrarian views that are counterintuitive vs. low-prestige vs. anti-establishment etcetera.  Nonetheless, it seems likely that you could usefully distinguish a c-factor, a general contrarian factor, in people and beliefs, even though it would break down further on closer examination; there would be a cluster of contrarian people and a cluster of contrarian beliefs, whatever the clusters of the subcluster.

\n

(If you perform a statistical analysis of contrarian ideas and you find that they form distinct subclusters of ideologies that don't correlate with each other, then I'm wrong and no c-factor exists.)

\n

Now, suppose that someone advocates the many-worlds interpretation of quantum mechanics.  What else can you infer about them?

\n

Well, one possible reason for believing in the many-worlds interpretation is that, as a general rule of cognitive conduct, you investigated the issue and thought about it carefully; and you learned enough quantum mechanics and probability theory to understand why the no-worldeaters advocates call their theory the strictly simpler one; and you're reflective enough to understand how a deeper theory can undermine your brain's intuition of an apparently single world; and you listen to the physicists who mock many-worlds and correctly assess that these physicists are not to be trusted.  Then you believe in many-worlds out of general causes that would operate in other cases - you probably have a high correct contrarian factor - and we can infer that you're more likely to be an atheist.

\n

It's also possible that you thought many-worlds means \"all the worlds I can imagine exist\" and that you decided it'd be cool if there existed a world where Jesus is Batman, therefore many-worlds is true no matter what the average physicist says.  In this case you're just believing for general contrarian reasons, and you're probably more likely to believe in homeopathy as well.

\n

A lot of what we do around here can be thought of as distinguishing the correct contrarian cluster within the contrarian cluster.  In fact, when you judge someone's rationality by opinions they post on the Internet - rather than observing their day-to-day decisions or life outcomes - what you're trying to judge is almost entirely cc-factor.

\n

It seems indubitable that, measured in raw bytes, most of the world's correct knowledge is not contrarian correct knowledge, and most of the things that the majority believes (e.g. 2 + 2 = 4) are correct.  You might therefore wonder whether it's really important to try to distinguish the Correct Contrarian Cluster in the first place - why not just stick to majoritarianism?  The Correct Contrarian Cluster is just the place where the borders of knowledge are currently expanding - not just that, but merely the sections on the border where battles are taking place.  Why not just be content with the beauty of settled science?  Perhaps we're just trying to signal to our fellow nonconformists, rather than really being concerned with truth, says the little copy of Robin Hanson in my head.

\n

My primary personality, however, responds as follows:

\n\n

In other words, even though you would in theory expect the Correct Contrarian Cluster to be a small fringe of the expansion of knowledge, of concern only to the leading scientists in the field, the actual fact of the matter is that the world is *#$%ing nuts and so there's really important stuff in the Correct Contrarian Cluster.  Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup.  Not to mention that most people still believe in God.  People are crazy, the world is mad.  So, yes, if you don't want to bloat up like a balloon and die, distinguishing the Correct Contrarian Cluster is important.

\n

Robin previously posted (and I commented) on the notion of trying to distinguish correct contrarians by \"outside indicators\" - as I would put it, trying to distinguish correct contrarians, not by analyzing the details of their arguments, but by zooming way out and seeing what sort of general excuse they give for disagreeing with the establishment.  As I said in the comments, I am generally pessimistic about the chances of success for this project.  Though, as I also commented, there are some general structures that make me sit up and take note; probably the strongest is \"These people have ignored their own carefully gathered experimental evidence for decades in favor of stuff that sounds more intuitive.\"  (Robyn Dawes/psychoanalysis, Robin Hanson/medical spending, Gary Taubes/dietary science, Eric Falkenstein/risk-return - note that I don't say anything like this about AI, so this is not a plea I have use for myself!)  Mostly, I tend to rely on analyzing the actual arguments; meta should be spice, not meat.

\n

However, failing analysis of actual arguments, another method would be to try and distinguish the Correct Contrarian Cluster by plain old-fashioned... clustering.  In a sense, we do this in an ad-hoc way any time we trust someone who seems like a smart contrarian.  But it would be possible to do it more formally - write down a big list of contrarian views (some of which we are genuinely uncertain about), poll ten thousand members of the intelligentsia, and look at the clusters.  And within the Contrarian Cluster, we find a subcluster where...

\n

...well, how do we look for the Correct Contrarian subcluster?

\n

One obvious way is to start with some things that are slam-dunks, and use them as anchors.  Very few things qualify as slam-dunks.  Cryonics doesn't rise to that level, since it involves social guesses and values, not just physicalism.  I can think of only three slam-dunks off the top of my head:

\n\n

These aren't necessarily simple or easy for contrarians to work through, but the correctness seems as reliable as it gets.

\n

Of course there are also slam-dunks like:

\n\n

But these probably aren't the right kind of controversy to fine-tune the location of the Correct Contrarian Cluster.

\n

A major problem with the three slam-dunks I listed is that they all seem to have more in common with each other than any of them have with, say, dietary science.  This is probably because of the logical, formal character which makes them slam dunks in the first place.  By expanding the field somewhat, it would be possible to include slightly less slammed dunks, like:

\n\n

But if we start expanding the list of anchors like this, we run into a much higher probability that one of our anchors is wrong.

\n

So we conduct this massive poll, and we find out that if someone is an atheist and believes in many-worlds and does not believe in p-zombies, they are much more likely than the average contrarian to think that low-energy nuclear reactions (the modern name for cold fusion research) are real.  (That is, among \"average contrarians\" who have opinions on both p-zombies and LENR in the first place!)  If I saw this result I would indeed sit up and say, \"Maybe I should look into that LENR stuff more deeply.\"  I've never heard of any surveys like this actually being done, but it sounds like quite an interesting dataset to have, if it could be obtained.

\n

There are much more clever things you could do with the dataset.  If someone believes most things that atheistic many-worlder zombie-skeptics believe, but isn't a many-worlder, you probably want to know their opinion on infrequently considered topics.  (The first thing I'd probably try would be SVD to see if it isolates a \"correctness factor\", since it's simple and worked famously well on the Netflix dataset.)
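
Purely as an illustration of the SVD idea just mentioned, here is a minimal sketch. The survey data is invented; coding \"agree\" as +1, \"disagree\" as -1, and \"no opinion\" as 0, and the simple column-centering, are my assumptions rather than anything proposed in the post.

```python
import numpy as np

# Rows: surveyed people; columns: contrarian claims.
# +1 = agrees, -1 = disagrees, 0 = no opinion. Toy data, purely illustrative.
responses = np.array([
    [ 1,  1,  1, -1,  0],
    [ 1,  1,  0, -1,  1],
    [-1, -1, -1,  1,  1],
    [ 1,  0,  1, -1, -1],
    [-1, -1,  0,  1,  0],
], dtype=float)

# Center each column so the SVD picks up shared variation rather than base rates.
centered = responses - responses.mean(axis=0)

# Thin SVD: U holds per-person scores, Vt holds per-claim loadings.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

person_scores = U[:, 0] * s[0]   # candidate per-respondent score on the leading factor
claim_loadings = Vt[0]           # how strongly each claim loads on that factor

print(person_scores)
print(claim_loadings)
```

Whether the leading component deserves to be called a correctness factor would then be judged by how it lines up with the slam-dunk anchors, not by the decomposition itself.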

\n

But there are also simpler things we could do using the same principle.  Let's say we want to know whether the economy will recover, double-dip or crash.  So we call up a thousand economists, ask each one \"Do you have a strong opinion on whether the many-worlds interpretation is correct?\", and see if the economists who have a strong opinion and answer \"Yes\" have a different average opinion from the average economist and from economists who say \"No\".
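
The economist poll described above is simple enough to sketch directly; the record layout and the forecast numbers below are made up for illustration.

```python
from statistics import mean

# Each record: (answered_yes_to_having_a_strong_MWI_opinion, economic_forecast)
# The forecast is coded here as, say, predicted GDP growth; data entirely invented.
economists = [
    (True,  2.5),
    (False, 1.0),
    (False, 1.5),
    (True,  2.8),
    (False, 0.9),
]

overall   = mean(f for _, f in economists)
yes_group = mean(f for yes, f in economists if yes)
no_group  = mean(f for yes, f in economists if not yes)

print(overall, yes_group, no_group)
```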

\n

We might not have this data in hand, but it's the algorithm you're approximating when you notice that a lot of smart-seeming people assign much higher than average probabilities to cryonics technology working.

" } }, { "_id": "HAqWP4bh8tTwYSfm2", "title": "If reason told you to jump off a cliff, would you do it?", "pageUrl": "https://www.lesswrong.com/posts/HAqWP4bh8tTwYSfm2/if-reason-told-you-to-jump-off-a-cliff-would-you-do-it", "postedAt": "2009-12-21T03:54:05.533Z", "baseScore": -14, "voteCount": 30, "commentCount": 42, "url": null, "contents": { "documentId": "HAqWP4bh8tTwYSfm2", "html": "

In reply to Eliezer's Contrarian Status Catch 22 & Sufficiently Advanced Sanity. I accuse Eliezer of encountering a piece of Advanced Wisdom.

\n

Unreason is something that we should fight against. Witch burnings, creationism & homeopathy are all things which should rightly be defended against for society to advance. But, more subtly, I think reason is, in some ways, also a dangerous phenomenon that should be guarded against. I am arguing not against the specific process of reasoning itself, but against the attitude which instinctually reaches for reason as the first tool of choice when confronting a problem. Scott Aaronson called this approach bullet swallowing when he tried to explain why he was so uncomfortable with it. Jane Galt also rails against reason when explaining why she does not support gay marriage.

\n

The most recent financial crisis is another example of what happens when reason is allowed to dominate. Cutting away all the foggy noise & conspiracy theories, the root cause of the financial crisis was that men who believed a little too much in reason became divergent from reality, and reality quickly reminded them of that. The problem was not that the models were wrong, it was that what they were trying to accomplish was unmodelable. Nassim Nicholas Taleb's The Black Swan explains this far better than I can.

\n

A clear-eyed look back on history reveals many other similar events in which the helm of reason has been used to champion disastrous causes: the French Revolution, Communism, Free Love Communes, Social Darwinism & Libertarianism (I am not talking about affective death spirals here). Now, one might argue at this point that those were all specific examples of bad reasoning and that it's possible to carefully strip away the downsides of reason with diligent practice. But I don't believe it. Fundamentally, I believe that the world is unreasonable. It is, at its core, not amenable to reason. Better technique may push the failure point back just a bit further but it will never get rid of it.

\n

So what should replace reason? I nominate accepting the primacy of evidence over reason. Reason is still used, but only reluctantly, to form the minimum span possible between evidence & beliefs. From my CS perspective, I make an analogy to the shift from algorithm-centered computation to data-centered computation. Rather than try to create elaborate models that purport to reflect reality, strive to be as model-free as possible and shut up and gather.

\n

If the reason based approach is a towering skyscraper, an evidence based approach would be an adobe hut. Reasoning is sexy and new and also powerful: it can do a lot of things and do them a lot better. The evidence based approach, on the other hand, does just enough to matter and very little more. The evidence based approach is not truth seeking in the way the reason based approach is. A declaration of truth, built on a large pile of premises, is worse than a statement of ignorance. This, I think, is what Scott Aaronson was referring to when he said \"What you've forced me to realize, Eliezer, and I thank you for this:  What I'm uncomfortable with is not the many-worlds interpretation itself, it's the air of satisfaction that often comes with it.\"

" } }, { "_id": "WyAGQqw2v3yS8BkjM", "title": "Mandating Information Disclosure vs. Banning Deceptive Contract Terms", "pageUrl": "https://www.lesswrong.com/posts/WyAGQqw2v3yS8BkjM/mandating-information-disclosure-vs-banning-deceptive", "postedAt": "2009-12-20T20:55:58.088Z", "baseScore": 30, "voteCount": 29, "commentCount": 77, "url": null, "contents": { "documentId": "WyAGQqw2v3yS8BkjM", "html": "

Economists are very into the idea of mutually beneficial exchange. The standard argument is that if two parties voluntarily agree to a deal, then they must be better off with the deal than without it, otherwise they wouldn't have agreed. And if the terms of that deal don't harm any third parties,* then the deal must be welfare-improving, and any regulatory restrictions on making it must be bad.

One objection to this argument is that it's not always clear what is and what is not \"voluntary.\" I once had a well-published economist friend argue that there are no gradations of voluntariness: either a deal was made under some kind of compulsion or it wasn't. I asked him if he would be OK letting his then pre-adolescent son make any schoolyard deal he wanted as long as it was not made under any overt threat, and I think (but am not totally sure) that he has since backed off this position. So there is an argument for purely paternalistic restrictions on freedom of contract.

Another objection, one which economists tend to take more seriously, relates to information. Specifically, there is the idea that maybe one party to the contract is not fully informed about its terms. For this reason, many economists are willing to entertain policies by which firms are required to disclose certain information, and to do so in a way that is comprehensible to consumers. So for example we now have \"Schumer boxes\" that govern the ways in which credit card companies present certain information in promotional materials. This seems to many people to be a reasonable remedy: if the problem was that one side of the transaction was ignorant, then a regulation that eliminates that ignorance, while at the same time not interfering with their freedom to engage in mutually beneficial exchange, must be a good thing.

\n

I think this reasonable-sounding position is largely wrong. The standard asymmetric information stories with rational agents are stories in which the uninformed party knows that it is uninformed, which influences the contract terms that it is willing to accept, which in turn either causes beneficial exchange not to happen (the \"lemons\" problem) or causes contract terms to be distorted away from the efficient ones. They are generally not stories about uninformed consumers not understanding that they are uninformed and blithely marching into traps as a result. But this is what we actually see all the time: one party tricks the other party into unfavorable terms. Indeed, very often this is the real-world problem to which providing better information is supposed to be the solution! But for trickery to be the problem, you usually need a model in which some agents suffer from some limitation on their rationality, such as myopia.** And if you have that, then you have a different problem from the problem of asymmetric information, and there is no particular reason for a different problem to have the same solution. If the problem is that people are getting tricked, then providing more information is only going to help if it is going to cause them not to be tricked, and it is not at all obvious whether and when this will be the case.

But there is a bigger problem with the standard way that economists usually think about these problems, which is that they completely ignore the fact that when people are being tricked, the virtues of voluntary exchange are absent and so there is no reason for a strong presumption against interfering with it in the first place. And sometimes the very existence of certain contract terms is an indication that the contract is a trick. Think about the controversial terms often found in credit card contracts, such as provisions by which being one day late with a payment or being one dollar over your credit limit jumps your interest rate to 29.99% forever, or in which cards with multiple balances at different interest rates pay off the low rate balance first. What should be inferred from the fact that these terms exist? Is it at all plausible that there is some subtle but very important reason why these terms must be present, and that if they were banned lots of mutually beneficial deals would not be made? Is it not much more likely that these terms exist precisely because many consumers don't understand them and will be tricked by them? Have you ever heard of such terms being in contracts negotiated between sophisticated parties? Shouldn't this cause you to be much less worried about the consequences of simply banning them?

There is a very good paper by Gabaix & Laibson (2006) that provides a formal model in which firms \"shroud\" relevant information in order to trick myopic agents. The neat thing about their paper is that they show that this persists in equilibrium: competing firms turn out to have no incentive to march in and expose the shrouding and offer transparent pricing instead. But you don't need a fancy (and recent) paper to have known that the aforementioned terms in credit card contracts are only there to trick people. And if that's true, then what you really want is to get rid of contracts with those terms. Mandating information disclosure is only a good remedy insofar as it causes those terms to disappear. Gone is the economist's notion that the right solution is to make sure everyone knows the score and then to step out of the way. If you mandated disclosure and then saw those contracts continuing to exist, the conclusion you should draw was that the disclosure was ineffective, not that the terms were efficient.

One could object to heavy-handed regulation on the basis of a slippery-slope argument. While there are some clear-cut cases like the credit card contracts, a government with lots of regulatory power, even a well-intentioned one, may end up getting overzealous and interfering in ways that will have unintended and negative consequences for efficiency. And Gabaix & Laibson take pains to point out that they are not advocating lots of regulation. Whatever the merits of this argument (I understand the fear of regulatory overreach but worry about it less than a lot of other people do), the main point of this post remains. There are important instances in the world we live in where the unaware simply get tricked and screwed. That is not, at root, a problem of asymmetric information among rational agents, and there is no reason to think that the appropriate remedy is the same as if it were. More importantly, in this world the virtues of voluntary exchange are absent, and so the economists' deference to it is misplaced.

There has been an important real-world development on this front. It seems that the Federal Reserve did a bunch of consumer testing to see how well people understood various terms in credit card contracts under different disclosure requirements, and concluded that they were simply too complicated for most people to understand. They therefore decided to go ahead and simply prohibit certain practices. Which I say is good news for the good guys.

\n

*A weaker version of this condition is that any third parties that are hurt are hurt less than the contracting parties are helped.

\n

**I say \"usually\" because there are a few special models in which fully rational agents can nevertheless be tricked.

" } }, { "_id": "c7rzxS7QJDdpBkA69", "title": "Sufficiently Advanced Sanity", "pageUrl": "https://www.lesswrong.com/posts/c7rzxS7QJDdpBkA69/sufficiently-advanced-sanity", "postedAt": "2009-12-20T18:11:17.677Z", "baseScore": 9, "voteCount": 11, "commentCount": 25, "url": null, "contents": { "documentId": "c7rzxS7QJDdpBkA69", "html": "

Reply to Shalmanese's Third Law

\n

From an unpublished story confronting Vinge's Law, written in 2004, abstracted a bit:

\n
\n

\"If you met someone who was substantially saner than yourself, how would you know?\"

\n

\"The obvious mistake that sounds like deep wisdom is claiming that sanity looks like insanity to the insane.  I would expect to discover sanity that struck me as wonderfully and surprisingly sane,  sanity that shocked me but that I could verify on deeper examination,  sanity that sounded wrong but that I could not actually prove to be wrong,  and sanity that seemed completely bizarre.\"

\n

\"Like a history of 20th-century science, presented to a scientist from 1900.  Much of the future history would sound insane, and easy to argue against.  It would take a careful mind to realize none of it was more inconsistent with present knowledge than the scientific history of the 19th century with the knowledge of 1800.  Someone who wished to dismiss the whole affair as crackpot would find a thousand excuses ready to hand, plenty of statements that sounded obviously wrong.  Yet no crackpot could possibly fake the parts that were obviously right.  That is what it is like to meet someone saner.  They are not infallible, are not future histories of anything.  But no one could counterfeit the wonderfully and surprisingly sane parts; they would need to be that sane themselves.\"

\n
\n

Spot the Bayesian problem, anyone?  It's obvious to me today, but not to the me of 2004.  Eliezer2004 would have seen the structure of the Bayesian problem the moment I pointed it out to him, but he might not have assigned it the same importance I would without a lot of other background.

" } }, { "_id": "psQYbMLWzS9sTsT2M", "title": "Fundamentally Flawed, or Fast and Frugal?", "pageUrl": "https://www.lesswrong.com/posts/psQYbMLWzS9sTsT2M/fundamentally-flawed-or-fast-and-frugal", "postedAt": "2009-12-20T15:10:15.714Z", "baseScore": 49, "voteCount": 44, "commentCount": 86, "url": null, "contents": { "documentId": "psQYbMLWzS9sTsT2M", "html": "

Whenever biases are discussed around here, it tends to happen under the following framing: human cognition is a dirty, jury-rigged hack, only barely managing to approximate the laws of probability even in a rough manner. We have plenty of biases, many of them a result of adaptations that evolved to work well in the Pleistocene, but are hopelessly broken in a modern-day environment.

\n

That's one interpretation. But there's also a different interpretation: that a perfect Bayesian reasoner is computationally intractable, and our mental algorithms make for an excellent, possibly close to an optimal, use of the limited computational resources we happen to have available. It's not that the programming would be bad, it's simply that you can't do much better without upgrading the hardware. In the interest of fairness, I will be presenting this view by summarizing a classic 1996 Psychological Review article, \"Reasoning the Fast and Frugal Way: Models of Bounded Rationality\" by Gerd Gigerenzer and Daniel G. Goldstein. It begins by discussing two contrasting views: the Enlightenment ideal of the human mind as the perfect reasoner, versus the heuristics and biases program that considers human cognition as a set of quick-and-dirty heuristics.

\n
\n

Many experiments have been conducted to test the validity of these two views, identifying a host of conditions under which the human mind appears more rational or irrational. But most of this work has dealt with simple situations, such as Bayesian inference with binary hypotheses, one single piece of binary data, and all the necessary information conveniently laid out for the participant (Gigerenzer & Hoffrage, 1995). In many real-world situations, however, there are multiple pieces of information, which are not independent, but redundant. Here, Bayes’ theorem and other “rational” algorithms quickly become mathematically complex and computationally intractable, at least for ordinary human minds. These situations make neither of the two views look promising. If one would apply the classical view to such complex real-world environments, this would suggest that the mind is a supercalculator like a Laplacean Demon (Wimsatt, 1976)— carrying around the collected works of Kolmogoroff, Fisher, or Neyman—and simply needs a memory jog, like the slave in Plato’s Meno. On the other hand, the heuristics-and-biases view of human irrationality would lead us to believe that humans are hopelessly lost in the face of real-world complexity, given their supposed inability to reason according to the canon of classical rationality, even in simple laboratory experiments.

There is a third way to look at inference, focusing on the psychological and ecological rather than on logic and probability theory. This view questions classical rationality as a universal norm and thereby questions the very definition of “good” reasoning on which both the Enlightenment and the heuristics-and-biases views were built. Herbert Simon, possibly the best-known proponent of this third view, proposed looking for models of bounded rationality instead of classical rationality. Simon (1956, 1982) argued that information-processing systems typically need to satisfice rather than optimize. Satisficing, a blend of sufficing and satisfying, is a word of Scottish origin, which Simon uses to characterize algorithms that successfully deal with conditions of limited time, knowledge, or computational capacities. His concept of satisficing postulates, for instance, that an organism would choose the first object (a mate, perhaps) that satisfies its aspiration level—instead of the intractable sequence of taking the time to survey all possible alternatives, estimating probabilities and utilities for the possible outcomes associated with each alternative, calculating expected utilities, and choosing the alternative that scores highest.

\n
\n

\n

Let us consider the following example question: Which city has a larger population? (a) Hamburg (b) Cologne.

\n

The paper describes algorithms fitting into a framework that the authors call a theory of probabilistic mental models (PMM). PMMs fit three visions: (a) Inductive inference needs to be studied with respect to natural environments; (b) Inductive inference is carried out by satisficing algorithms; (c) Inductive inferences are based on frequencies of events in a reference class. PMM theory does not strive for the classical Bayesian ideal, but instead attempts to build an algorithm the mind could actually use.

\n
\n

These satisficing algorithms dispense with the fiction of omniscient Laplacean Demon, who has all the time and knowledge to search for all relevant information, to compute the weights and covariances, and then to integrate all this information into an inference.

\n
\n

The first algorithm presented is the Take the Best algorithm, named because its policy is \"take the best, ignore the rest\". In the first step, it invokes the recognition principle: if only one of two objects is recognized, it chooses the recognized object. If neither is recognized, it chooses randomly. If both are recognized, it moves on to the next discrimination step. For instance, if a person is asked which of city a and city b is bigger, and the person has never heard of b, they will pick a.

\n

If both objects are recognized, the algorithm will next search its memory for useful information that might provide a cue regarding the correct answer. Suppose that you know a certain city has its own football team, while another doesn't have one. It seems reasonable to assume that a city having a football team correlates with the city being of at least some minimum size, so the existence of a football team has positive cue value for predicting city size - it signals a higher value on the target variable.

\n

In the second step, the Take the Best algorithm retrieves from memory the cue values of the highest ranking cue. If the cue discriminates, which is to say one object has a positive cue value and the other does not, the search is terminated and the object with the positive cue value is chosen. If the cue does not discriminate, the algorithm keeps searching for better cues, choosing randomly if no discriminating cue is found.
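
Here is a minimal sketch, in Python, of the two steps just described. The treatment of unknown cue values and the pre-sorted cue ordering are my simplifications of the paper's setup rather than a transcription of it.

```python
import random

def take_the_best(a, b, recognized, cues):
    # recognized: set of object names the agent has heard of.
    # cues: list of dicts, ordered from highest to lowest cue validity;
    #       each maps object name -> True / False / None (None = value unknown).

    # Step 1: the recognition principle.
    a_known, b_known = a in recognized, b in recognized
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    if not a_known and not b_known:
        return random.choice([a, b])

    # Step 2: walk down the cue hierarchy; stop at the first cue that discriminates.
    for cue in cues:
        va, vb = cue.get(a), cue.get(b)
        if va and not vb:   # simplification: unknown (None) is treated like a negative value
            return a
        if vb and not va:
            return b

    # No discriminating cue found: guess.
    return random.choice([a, b])

# Toy example: which city is bigger, given a single cue (has a major football team)?
cues = [{'Hamburg': True, 'Cologne': False}]
print(take_the_best('Hamburg', 'Cologne', {'Hamburg', 'Cologne'}, cues))
```

The point of the sketch is only the control flow: recognition first, then a single pass down the cue hierarchy that stops at the first discriminating cue.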

\n
\n

The algorithm is hardly a standard statistical tool for inductive inference: It does not use all available information, it is non-compensatory and nonlinear, and variants of it can violate transitivity. Thus, it differs from standard linear tools for inference such as multiple regression, as well as from nonlinear neural networks that are compensatory in nature. The Take The Best algorithm is noncompensatory because only the best discriminating cue determines the inference or decision; no combination of other cue values can override this decision. [...] the algorithm violates the Archimedian axiom, which implies that for any multidimensional object a (a1, a2, ... an) preferred to b (b1, b2, ... bn) where a1 dominates b1, this preference can be reversed by taking multiples of any one or a combination of b2, b3, ... , bn. As we discuss, variants of this algorithm also violate transitivity, one of the cornerstones of classical rationality (McClennen, 1990).

\n
\n

This certainly sounds horrible: possibly even more horrifying is that a wide variety of experimental results make perfect sense if we assume that the test subjects are unconsciously employing this algorithm. Yet, despite all of these apparent flaws, the algorithm works.

\n

The authors designed a scenario where 500 simulated individuals with varying amounts of knowledge were presented with pairs of cities and were tasked with choosing the bigger one (83 cities, 3,403 city pairs). The Take the Best algorithm was pitted against five other algorithms that were suggested by \"several colleagues in the fields of statistics and economics\": Tallying (where the number of positive cue values for each object is tallied across all cues and the object with the largest number of positive cue values is chosen), Weighted Tallying, the Unit-Weight Linear Model, the Weighted Linear Model, and Multiple Regression.
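
For contrast, here is an equally hedged sketch of the simplest of those competitors, Tallying, using the same toy cue layout as the previous sketch; the weighted variants would multiply each cue by a weight before summing.

```python
import random

def tallying(a, b, cues):
    # Choose the object with more positive cue values across all cues; guess on a tie.
    score_a = sum(1 for cue in cues if cue.get(a))
    score_b = sum(1 for cue in cues if cue.get(b))
    if score_a > score_b:
        return a
    if score_b > score_a:
        return b
    return random.choice([a, b])

# Toy cue values, purely illustrative.
cues = [
    {'Hamburg': True, 'Cologne': False},
    {'Hamburg': True, 'Cologne': True},
    {'Hamburg': True, 'Cologne': False},
]
print(tallying('Hamburg', 'Cologne', cues))
```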

\n

Take the Best was clearly the fastest algorithm, needing to look up far fewer cue values than the rest. But what about the accuracy? When the simulated individuals had knowledge of all the cues, Take the Best drew as many correct inferences as any of the other algorithms, and more than some. When looking at individuals with imperfect knowledge? Take the Best won or tied for the best position for individuals with knowledge of 20 and 50 percent of the cues, and didn't lose by more than a few tenths of a percent for individuals that knew 10 and 75 percent of the cues. Averaging over all the knowledge classes, Take the Best made 65.8% correct inferences, tied with Weighted Tallying for the gold medal.

\n

The authors also tried two even more stupid algorithms, which were variants of Take the Best. Take the Last, instead of starting the search from the highest-ranking cue, first tries the cue that discriminated last, then the cue that discriminated the time before the last, and so on. The Minimalist algorithm picks a cue at random. This produced a perhaps surprisingly small drop in accuracy, with Take the Last getting 64.7% correct inferences and Minimalist 64.5%.

\n

After the algorithm comparison, the authors spend a few pages discussing some of the principles related to the PMM family of algorithms and their empirical validity, as well as the implications all of this might have for the study of rationality. They note, for instance, that even though transitivity (if we prefer a to b and b to c, then we should also prefer a to c) is considered a cornerstone axiom in classical rationality, several algorithms violate transitivity without suffering very much from it.

\n
\n

At the beginning of this article, we pointed out the common opposition between the rational and the psychological, which emerged in the nineteenth century after the breakdown of the classical interpretation of probability (Gigerenzer et al., 1989). Since then, rational inference is commonly reduced to logic and probability theory, and psychological explanations are called on when things go wrong. This division of labor is, in a nutshell, the basis on which much of the current research on judgment under uncertainty is built. As one economist from the Massachusetts Institute of Technology put it, “either reasoning is rational or it’s psychological” (Gigerenzer, 1994). Can not reasoning be both rational and psychological?

We believe that after 40 years of toying with the notion of bounded rationality, it is time to overcome the opposition between the rational and the psychological and to reunite the two. The PMM family of cognitive algorithms provides precise models that attempt to do so. They differ from the Enlightenment’s unified view of the rational and psychological, in that they focus on simple psychological mechanisms that operate under constraints of limited time and knowledge and are supported by empirical evidence. The single most important result in this article is that simple psychological mechanisms can yield about as many (or more) correct inferences in less time than standard statistical linear models that embody classical properties of rational inference. The demonstration that a fast and frugal satisficing algorithm won the competition defeats the widespread view that only “rational” algorithms can be accurate. Models of inference do not have to forsake accuracy for simplicity. The mind can have it both ways.

\n
" } }, { "_id": "HzvJw65XfppHBohtE", "title": "Any sufficiently advanced wisdom is indistinguishable from bullshit", "pageUrl": "https://www.lesswrong.com/posts/HzvJw65XfppHBohtE/any-sufficiently-advanced-wisdom-is-indistinguishable-from", "postedAt": "2009-12-20T10:09:02.908Z", "baseScore": 7, "voteCount": 32, "commentCount": 37, "url": null, "contents": { "documentId": "HzvJw65XfppHBohtE", "html": "

In the grand tradition of sequences, I'm going to jot this down real quick because it's required for the next argument I'm going to make.

\n

Shalmanese's 3rd law is \"Any sufficiently advanced wisdom is indistinguishable from bullshit\". Shalmanese's first law is \"As the length of any discussion involving the metric system approaches infinity, the likelihood approaches 1 of there being a reference to The Simpsons episode about 40 rods to the hogshead\" so judge it by the company it keeps.

\n

Imagine you got to travel back in time to meet yourself from 10 years ago and impart as much wisdom as possible to your past-self in 6 hours. You're bound by the Time Enforcement Committee not to reveal that you are the future-self of your past-self and it never occurs to your past-self that this ugly thing in front of them could ever be you. As far as the past-self is concerned, it's just a moderately interesting person they're having a conversation with.

\n

There would be 3 broad sets that your discussions would fall into: beliefs that you both mutually agree on, beliefs that you are able to convince your past-self of through reason, and beliefs which make the past-self regard your future-self as being actively stupid for holding. It's this third category which I'm going to term Advanced Wisdom.

\n

For everybody, those beliefs are going to be specific to the individual. Maybe you used to be devoutly religious and now you're staunchly atheist. Perhaps you were once radically marxist and now you're a staunch libertarian. For me, it was the wisdom of the advice to \"be yourself\". I have no doubt that I would get precisely nowhere convincing my past-self that \"be yourself\" is a piece of wisdom. Anything I could ever possibly say to him, he had already heard many times before and convinced himself was utter bullshit. If even my actual self couldn't convince myself of something, what hope is there that any rational argument could have penetrated?

\n

If my future-self were to visit my present-self now, I have no doubt that he would also present me with some pieces of advanced wisdom I thought were bullshit. The problem is, sufficiently advanced wisdom is indistinguishable from bullshit. There is no possible test that can separate the two. You might be told something is advanced wisdom, keep the most open mind possible about it, investigate it in all the various ways, perhaps even be convinced by it, and maybe it was actual wisdom you were convinced by. Then again, you could just have been convinced by bullshit. As a result, advanced wisdom, as a concept, is completely, frustratingly useless in an argument. If you're on the arguer's side, you know that the assertion of advanced wisdom is going to be taken as just more bullshit; if you're on the arguee's side, any assertion of advanced wisdom looks like the mistaken rambling of a deluded fool.

\n

The one positive thing this law has led me to is a much higher tolerance for bullshit. I'm no longer so quick to dismiss ideas which, to me, seem like obvious bullshit.

" } }, { "_id": "h723jFRLGosaxEzGC", "title": "The Contrarian Status Catch-22", "pageUrl": "https://www.lesswrong.com/posts/h723jFRLGosaxEzGC/the-contrarian-status-catch-22", "postedAt": "2009-12-19T22:40:51.201Z", "baseScore": 70, "voteCount": 66, "commentCount": 102, "url": null, "contents": { "documentId": "h723jFRLGosaxEzGC", "html": "

It used to puzzle me that Scott Aaronson still hasn't come to terms with the obvious absurdity of attempts to make quantum mechanics yield a single world.

\n

I should have realized what was going on when I read Scott's blog post \"The bullet-swallowers\" in which Scott compares many-worlds to libertarianism.  But light didn't dawn until my recent diavlog with Scott, where, at 50 minutes and 20 seconds, Scott says:

\n

\"What you've forced me to realize, Eliezer, and I thank you for this:  What I'm uncomfortable with is not the many-worlds interpretation itself, it's the air of satisfaction that often comes with it.\"
        -- Scott Aaronson, 50:20 in our Bloggingheads dialogue.

\n

It doesn't show on my face (I need to learn to reveal my expressions more, people complain that I'm eerily motionless during these diavlogs) but at this point I'm thinking, Didn't Scott just outright concede the argument?  (He didn't; I checked.)  I mean, to me this sounds an awful lot like:

\n

Sure, many-worlds is the simplest explanation that fits the facts, but I don't like the people who believe it.

\n

And I strongly suspect that a lot of people out there who would refuse to identify themselves as \"atheists\" would say almost exactly the same thing:

\n

What I'm uncomfortable with isn't the idea of a god-free physical universe, it's the air of satisfaction that atheists give off.

\n

\n

If you're a regular reader of Robin Hanson, you might essay a Hansonian explanation as follows:

\n

Although the actual state of evidence favors many-worlds (atheism), I don't want to affiliate with other people who say so.  They act all brash, arrogant, and offensive, and tend to believe and advocate other odd ideas like libertarianism.  If I believed in many-worlds (atheism), that would make me part of this low-prestige group.

\n

Or in simpler psychology:

\n

I don't feel like I belong with the group that believes in many-worlds (atheism).

\n

I think this might form a very general sort of status catch-22 for contrarian ideas.

\n

When a correct contrarian idea comes along, it will have appealing qualities like simplicity and favorable evidence (in the case of factual beliefs) or high expected utility (in the case of policy proposals).  When an appealing contrarian idea comes along, it will be initially supported by its appealing qualities, and opposed by the fact that it seems strange and unusual, or any other counterintuitive aspects it may have.

\n

So initially, the group of people who are most likely to support the contrarian idea, are the people who are - among other things - most likely to break with their herd in support of an idea that seems true or right.

\n

These first supporters are likely to be the sort of people who - rather than being careful to speak of the new idea in the cautious tones prudent to supplicate the many high-status insiders who believe otherwise - just go around talking as if the idea had a very high probability, merely because it seems to them like the simplest explanation that fits the facts.  \"Arrogant\", \"brash\", and \"condescending\" are some of the terms used to describe people with this poor political sense.

\n

The people first to speak out in support of the new idea will be those less sensitive to conformity; those with more confidence in their sense of truth or rightness; those less afraid of speaking out.

\n

And to the extent these are general character traits, such first supporters are also more likely to advocate other contrarian beliefs, like libertarianism or strident atheism or cryonics.

\n

And once that happens, the only people who'll be willing to believe the idea will be those willing to tar themselves by affiliating with a group of arrogant nonconformists - on top of everything else!

\n

tl;dr:  When a counterintuitive new idea comes along, the first people to support it will be contrarians, and so the new idea will become associated with contrarian traits and beliefs, and people will become even more reluctant to believe it because that would affiliate them with low-prestige people/traits/beliefs.

\n
\n

A further remark on \"airs of satisfaction\":  Talk about how we don't understand the Born Probabilities and there are still open questions in QM, and hence we can't accept the no-worldeaters interpretation, sounds a good deal like the criticism given to atheists who go around advocating the no-God interpretation.  \"But there's so much we don't know about the universe!  Why are you so self-satisfied with your disbelief in God?\"  There's plenty we don't understand about the universe, but that doesn't mean that future discoveries are likely to reveal Jehovah any more than they're likely to reveal a collapse postulate.

\n

Furthermore, atheists are more likely than priests to hear \"But we don't know everything about the universe\" or \"What's with this air of satisfaction?\"  Similarly, it looks to me like you can get away with speaking out strongly in favor of collapse postulates and against many-worlds, and the same people won't call you on an \"air of satisfaction\" or say \"but what about the open questions in quantum mechanics?\"

\n

This is why I think that what we have here is just a sense of someone being too confident in an unusual belief given their assigned social status, rather than a genuine sense that we can't be too confident in any statement whatever.  The instinctive status hierarchy treats factual beliefs in pretty much the same way as policy proposals.  Just as you need to be extremely high-status to go off and say on your own that the tribe should do something unusual, there's a similar dissonance from a low-status person saying on their own to believe something unusual, without careful compromises with other factions.  It shows the one has no sense of their appropriate status in the hierarchy, and isn't sensitive to other factions' interests.

\n

The pure, uncompromising rejection merited by hypotheses like Jehovah or collapse postulates, socially appears as a refusal to make compromises with the majority, or a lack of sufficient humility when contradicting high-prestige people.  (Also priests have higher social status to start with; it's understood that their place is to say and advocate these various things; and priests are better at faking humility while going on doing whatever they were going to do anyway.)  The Copenhagen interpretation of QM - however ridiculous - is recognized as a conventional non-strange belief, so no one's going to call you insufficiently humble for advocating it.  That would mark them as the contrarians.

" } }, { "_id": "pPmTL9DLDDF2TTwQg", "title": "Reacting to Inadequate Data", "pageUrl": "https://www.lesswrong.com/posts/pPmTL9DLDDF2TTwQg/reacting-to-inadequate-data", "postedAt": "2009-12-18T16:25:08.831Z", "baseScore": -2, "voteCount": 14, "commentCount": 21, "url": null, "contents": { "documentId": "pPmTL9DLDDF2TTwQg", "html": "

Two Scenarios

\n

Alice must answer the multiple-choice question, \"What color is the ball?\" The two choices are \"Red\" and \"Blue.\" Alice has no relevant memories of The Ball other than she knows it exists. She cannot see The Ball or interact with it in any way; she cannot do anything but think until she answers the question.

\n

In an independent scenario, Bob has the same question but Bob has two memories of The Ball. In one of the memories, The Ball is red. In the other memory, The Ball is blue. There are no \"timestamps\" associated with the memories and no way of determining if one came before the other. Bob just has two memories and he, somehow, knows the memories are of the same ball.

\n

If you were Alice, what would you do?

\n

If you were Bob, what would you do?

\n

\n

Variations

\n

More questions to ponder:

\n\n

Further Discussion

\n

The basic question I was initially pondering was how to resolve conflicting sensory inputs. If I were a brain in a vat and I received two simultaneous sensory inputs that conflicted (such as the color of a ball), how should I process them?
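
One standard way to formalize Bob's situation is a Bayesian update on two conflicting reports. The sketch below assumes each memory is equally reliable and conditionally independent given the ball's true color; both assumptions are mine, not part of the scenario.

```python
# Prior over the ball's color.
prior = {'red': 0.5, 'blue': 0.5}

# Assumed reliability of a single memory: probability it matches the true color.
p_correct = 0.8

def likelihood(memory, color):
    return p_correct if memory == color else 1 - p_correct

memories = ['red', 'blue']  # Bob's two conflicting memories

# Unnormalized posterior: prior times the likelihood of each memory.
posterior = {
    color: prior[color] * likelihood(memories[0], color) * likelihood(memories[1], color)
    for color in prior
}
total = sum(posterior.values())
posterior = {color: p / total for color, p in posterior.items()}

print(posterior)  # symmetric evidence cancels: back to 50/50
```

On this model the conflicting memories wash out and Bob ends up in the same epistemic position as Alice, which may or may not match the intuition the scenarios are probing.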

\n

Another related topic is whether a brain in a vat with absolutely no sensory inputs should be considered intelligent. These two questions were reduced to the above two scenarios and I am asking for help in resolving them. I think they are similar to questions asked here before but their relation to these two brain-in-a-vat questions seemed relevant to me.

\n

Realistic Scenarios

\n

These scenarios are cute, but there are similar real-world examples. If you were asked whether a visible ball was red or green, and you happened to be unable to distinguish between red and green, how would you interpret what you see?

\n

Abstracting a bit, any input (sensory or otherwise) that is indistinguishable from another input can really muck with your head. Most optical illusions are tricks on eye-hardware (software?).

\n

This post is not intended to be clever or teach anything new. Rather, the topic confuses me and I am seeking to learn about the correct behavior. Am I missing some form of global input theory that helps resolve colliding inputs or missing data? When the data is inadequate, what should I do? Start guessing randomly?

" } }, { "_id": "hSavx8SBvbwYbrWig", "title": "December 2009 Meta Thread", "pageUrl": "https://www.lesswrong.com/posts/hSavx8SBvbwYbrWig/december-2009-meta-thread", "postedAt": "2009-12-17T03:41:17.341Z", "baseScore": 9, "voteCount": 9, "commentCount": 77, "url": null, "contents": { "documentId": "hSavx8SBvbwYbrWig", "html": "

This post is a place to discuss meta-level issues regarding Less Wrong. Such posts may or may not be the unique venue for such discussion in the future.

" } }, { "_id": "Y5nfQ5hg6yn4Pq6Gj", "title": "An account of what I believe to be inconsistent behavior on the part of our editor", "pageUrl": "https://www.lesswrong.com/posts/Y5nfQ5hg6yn4Pq6Gj/an-account-of-what-i-believe-to-be-inconsistent-behavior-on", "postedAt": "2009-12-17T01:33:15.694Z", "baseScore": 4, "voteCount": 34, "commentCount": 66, "url": null, "contents": { "documentId": "Y5nfQ5hg6yn4Pq6Gj", "html": "

There was recently a submission here posing criticism of a well-known contributor, Eliezer Yudkowsky. Admittedly, whatever question the poster intended to ask was embedded within a post which was evidently designed to be a test of our rationality. Eliezer didn't seem to care too much, stating that the issue was

\n
\n

it being a top-level post instead of Open Thread comment. Probably would've been a lot more forgiving if it'd been an Open Thread comment. . .

\n

The question would, under other circumstances, be just - but I don't care to justify myself here. Elsewhere, perhaps. . .

\n
\n
\n

This belongs as a comment on the SIAI blog, not a post on Less Wrong.

\n
\n

I asked Eliezer why mormon2's post belonged on the SIAI blog and not here. He responded thus:

\n
\n

Because Less Wrong is about human rationality, not the Singularity Institute, and not me.

\n
\n

This response is unsatisfactory. Either certain posts belong on the SIAI blog and not here, or they don't and can be posted here. It can't be both ways.

\n

Note that I do not approve of mormon2's submission, as of the recent statement he made in an edit. I do, however, approve of the idea of such a submission. Somebody should be able to make a top-level post directing questions and criticism towards another author, under certain circumstances. I can't fully pin down just what the precise circumstances should be - it's not up to me to decide in any case.

\n

But consider a high-profile contributor, who already has many posts about himself (several self-submitted) and his work, who has at times responded to off-site comments with top-level posts on Less Wrong, and who has recently given his blessing to a post entitled Less Wrong Q&A With Eliezer Yudkowsky - when such a person suggests that a submission concerning himself and his work belongs as a comment in the monthly Open Thread, or as a comment on an off-site blog, I find it very outlandish.

\n

Eliezer is not an ordinary contributor. In the beginning, it was apparently envisioned that there would be a limit to the number of non-Yudkowsky/Hanson posts submitted per day. Obviously that policy has not been enacted. In any case, by my count there have been 682 submissions to Less Wrong as of December 14th. Eliezer has contributed 108 of those (the median number of posts per author being 2.5)1. He is not a \"specific person\" being asked to justify \"specific decisions\", as he would hypothetically suggest if his intent were to be manipulative. It's actually quite difficult for me to characterize Eliezer's role here. Mostly he's a great author and commenter. Sometimes he comments as the site editor. At other times, he seems to submit as though this were his personal blog, Yahoo! Answers, or the comment thread on an entirely different site. He seems to have developed into a sort of community icon at Less Wrong.

\n

So it occurs to me (after having expended my google-fu and searched the LW-Wiki), that there are no posted rules for appropriate top-level topics on Less Wrong. If there are, please correct me. At Overcoming Bias we submitted articles to the editors and they decided whether to publish them (I assume there were few or no restrictions for the editors' own work). The only restriction I know of on Less Wrong is that you need a Karma Score of at least 20 or 40 to submit posts. This is clearly insufficient, since if you have 8000 Karma you can submit anything. How many moderators do we have? I have yet to find a list.

\n

I feel that we must address these issues, either presently or ultimately. I know of no other decent community with Less Wrong's stated goal. And yet I am very much vexed by these inconsistencies I perceive between the stated purpose and the site's actual operation.

\n

1I put the date marking the beginning of LessWrong.com as an open community at March 5th, 2009. This was the date of the first post by someone other than Eliezer / Robin after Eliezer's announcement that a beta version of the site had been launched.

" } }, { "_id": "KCSWFJmJiDtjJo6gq", "title": "Philadelphia LessWrong Meetup, December 16th", "pageUrl": "https://www.lesswrong.com/posts/KCSWFJmJiDtjJo6gq/philadelphia-lesswrong-meetup-december-16th", "postedAt": "2009-12-16T03:13:02.633Z", "baseScore": 7, "voteCount": 4, "commentCount": 2, "url": null, "contents": { "documentId": "KCSWFJmJiDtjJo6gq", "html": "

There will be a LessWrong meetup on Wednesday, December 16th (tomorrow). We're meeting at 7:15 PM at Kabul Restaurant at 106 Chestnut Street. Five people have confirmed so far.

" } }, { "_id": "ZCjbuQvYkmdH26d8M", "title": "Getting Over Dust Theory", "pageUrl": "https://www.lesswrong.com/posts/ZCjbuQvYkmdH26d8M/getting-over-dust-theory", "postedAt": "2009-12-15T22:40:55.602Z", "baseScore": 12, "voteCount": 17, "commentCount": 102, "url": null, "contents": { "documentId": "ZCjbuQvYkmdH26d8M", "html": "

It has been well over a year since I first read Permutation City and related writings on the internet about Greg Egan's dust theory. It still haunts me. The theory has been discussed tangentially in this community, but I haven't found an article that directly addresses the rationality of Egan's own dismissal of the theory.

\n

In the FAQ, Egan says things like:

\n
\n

I wrote the ending as a way of dramatising[sic] a dissatisfaction I had with the “pure” Dust Theory that I never could (and still haven't) made precise (see Q5): the universe we live in is more coherent than the Dust Theory demands, so there must be something else going on.

\n
\n

and:

\n
\n

I have yet to hear a convincing refutation of it on purely logical grounds...

\n

However, I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.

\n
\n

Isn't this, along with so many other problems, a candidate for our sometime friend the anthropic principle? That is: only in a conscious configuration field which has memories of perceptions of an orderly universe is the dust theory controversial or doubted? In the vastly more numerous conscious configuration fields with memories of perceptions of a chaotic and disorderly universe lacking a rational way to support the observer, the dust theory could be accepted a priori, or at least be a favored theory.

\n

It is fine to dismiss dust theory because it simply isn't very helpful and because it has no predictions, testable or otherwise. I suppose it is also fine never to question the nature of consciousness as the answers don't seem to lead anywhere helpful either; though the question of it will continue to vex some instances of these configuration states.

" } }, { "_id": "i9rX2Mc2ERWHGF3Y8", "title": "Rebasing Ethics", "pageUrl": "https://www.lesswrong.com/posts/i9rX2Mc2ERWHGF3Y8/rebasing-ethics", "postedAt": "2009-12-15T13:56:09.689Z", "baseScore": -10, "voteCount": 28, "commentCount": 71, "url": null, "contents": { "documentId": "i9rX2Mc2ERWHGF3Y8", "html": "

Let's start with the following accepted as a given:

\n\n

Of the current figures who accept these premises, most espouse some form of secular humanism which argues that humans are genetically programmed not to lie, murder or steal, and that therefore this is both the right morality & the one they practice. This, to my mind, is committing the naturalistic fallacy.

\n

Instead, I want to offer an analogy: Humans have an innate preference for certain foods which evolved in an environment radically different from modern society. In a modern society, it is widely regarded as virtuous to be actively working against our innate, genetic impulses through the practice of dieting. Similarly, our ethical landscape is radically different from the one we evolved in. Thus what we should be doing is actively working against our innate moral sense to be more in line with the modern world. In the same way that there is junk food that tastes good but is ultimately unhealthy for you, I believe there is ethical junk food which fills us with a feeling of virtue that is undeserved.

\n

Let me start off with a mild example: Tipping.

\n

Altruism evolved in an era of small tribes where individual altruistic acts could be remembered & paid back. Now that we live in a large, anonymous society, there are many times when altruism doesn't pay. Unless you are with friends or a frequent diner at a restaurant or bar, the correct moral move is to stiff the waiter on the tip. If you're traveling somewhere alone, you should universally fail to tip as you're not likely to ever return there.

\n

If this fills you with immediate moral revulsion, you're not alone. I'm so skeeved out by this that I've never yet worked up the nerve to do it. My empathetic system simulates how the waiter must feel to not get a tip and I get queasy in the pit of my stomach. But this empathetic system isn't based on any moral fact, it's simply an evolutionary shortcut that helps us survive in small groups. To rebase your ethics is to start actively fighting against that feeling of moral revulsion, just as the first step of a diet is to fight against the desire for fatty, sweet, salty foods.

\n

In fact, I would go so far as to say that if your own personal moral system doesn't have parts that are morally repulsive to you, then you're not doing it right and anybody who tries to tell you different is selling you snake oil.

\n

As to why I don't hear anyone talking about this stuff, it's like fight club. The first rule of rebasing your entire ethics system is that you never tell anyone you've rebased your entire ethics system.

" } }, { "_id": "WHP3tKPXppBF2S8e8", "title": "Man-with-a-hammer syndrome", "pageUrl": "https://www.lesswrong.com/posts/WHP3tKPXppBF2S8e8/man-with-a-hammer-syndrome", "postedAt": "2009-12-14T11:31:48.237Z", "baseScore": 15, "voteCount": 42, "commentCount": 41, "url": null, "contents": { "documentId": "WHP3tKPXppBF2S8e8", "html": "
What gummed up Skinner’s reputation is that he developed a case of what I always call man-with-a-hammer syndrome: to the man with a hammer, every problem tends to look pretty much like a nail.
\n

The Psychology of Human Misjudgment is a brilliant talk given by Charlie Munger that I still return to and read every year to gain a fresh perspective. There’s a lot of wisdom to be distilled from that piece, but the one thing I want to talk about today is the man-with-a-hammer syndrome.

\n

Man-with-a-hammer syndrome is pretty simple: you think of an idea and then, pretty soon, it becomes THE idea. You start seeing how THE idea can apply to anything and everything; it’s the universal explanation for how the universe works. Suddenly, everything you’ve ever thought of before must be reinterpreted through the lens of THE idea and you’re on an intellectual high. Utilitarianism is a good example of this. Once you independently discover Utilitarianism, you start to believe that an entire moral framework can be constructed around a system of pleasures and pains and, what’s more, that this moral system is both objective and platonic. Suddenly, everything from the war in the Middle East to taking your mid-morning dump at work because you need that 15 minutes of reflective time alone with yourself before you can face the onslaught of meaningless drivel that is part of corporate America but feeling guilty about it because you were raised to be a good Randian and you are not providing value to your employers so you’re committing an act of theft can be fit under the Utilitarian framework. And then, hopefully, a few days later, you’re over it and Utilitarianism is just another interesting concept and you’re slightly embarrassed about your behavior a few days prior. Unfortunately, some people never get over it and they become those annoying people who write long screeds on the internet about their THE idea.

\n

The most important thing to realize about man-with-a-hammer syndrome is that there’s no possible way to avoid having it happen to you. You can be a well-seasoned rationalist who’s well aware of how man-with-a-hammer syndrome works and what the various symptoms are, but it’s still going to hit you fresh with each new idea. The best you can do is mitigate the fallout that occurs.

\n

Once you recognize that you’ve been struck with man-with-a-hammer syndrome, there are a number of sensible precautions you can take. The first is to have a good venting spot: being able to let your thoughts out of your head for some air lets you put them slightly in perspective. Personally, I have a few trusted friends to whom I expose man-with-a-hammer ideas, with all the appropriate disclaimers to basically ignore the bullshit that is coming out of my mouth.

\n

The second important thing to do is to hold back from telling anyone else about the idea. Making an idea public means that you’re, to a degree, committed to it and this is not what you want. The best way to prolong man-with-a-hammer syndrome is to have other people believing that you believe something.

\n

Unfortunately, the only other thing to do is simply wait. There’s been nothing I’ve discovered that can hasten the recovery from man-with-a-hammer syndrome beyond some minimum time threshold. If you’ve done everything else right, the only thing left to do is to simply wait it out. No amount of clever mental gymnastics will help you get rid of the syndrome any faster, and that’s the most frustrating part. You can be perfectly aware that you have it, know that everything you’re thinking now, you won’t believe in a week’s time, and yet you still can’t stop yourself from believing in it now.

\n

Man-with-a-hammer syndrome can destroy your life if you’re not careful but, if handled appropriately, is ultimately nothing more than an annoying and tedious cost of coming up with interesting ideas. What’s most interesting about it to me is that even with full awareness of its existence, it’s completely impossible to avoid. While you have man-with-a-hammer syndrome, you end up living in a curious world in which you are unable to disbelieve in something you know to be not true, and this is a deeply weird state I’ve not seen “rationalists” fully come to terms with.

" } }, { "_id": "q55RwTSx7gRzbrsPt", "title": "Previous Post Revised", "pageUrl": "https://www.lesswrong.com/posts/q55RwTSx7gRzbrsPt/previous-post-revised", "postedAt": "2009-12-14T06:56:50.528Z", "baseScore": 15, "voteCount": 13, "commentCount": 26, "url": null, "contents": { "documentId": "q55RwTSx7gRzbrsPt", "html": "

Followup to: The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom

\n

See also: The Importance of Saying \"Oops\"

\n

I'm posting this to call attention to the fact that I've now reconsidered the highly confident probability estimates in my post from yesterday on the Knox/Sollecito case. I haven't retracted my arguments; I just now think the level of confidence in them that I specified was too high. I've added the following paragraph to the concluding section:

\n
\n

[EDIT: After reading comments on this post, I have done some updating of my own. I now think I failed to adequately consider the possibility of my own overconfidence. This was pretty stupid of me, since it meant that the focus was taken away from the actual arguments in this post, and basically toward the issue of whether 0.001 can possibly be a rational estimate for anything you read about on the Internet. The qualitative reasoning of this post, of course, stands. Also, the focus of my accusations of irrationality was not primarily the LW community as reflected in my previous post; I actually think we did a pretty good job of coming to the right conclusion given the information provided -- and as others have noted, the levelheadedness with which we did so was impressive.]

\n
\n

While object-level comments on the case and on my reasoning about it should probably continue to be confined to that thread, I'd be interested in hearing in comments here what people think about the following:

\n" } }, { "_id": "sBFjoPm6xoQMcbRhX", "title": "Against picking up pennies", "pageUrl": "https://www.lesswrong.com/posts/sBFjoPm6xoQMcbRhX/against-picking-up-pennies", "postedAt": "2009-12-13T06:07:55.566Z", "baseScore": 1, "voteCount": 14, "commentCount": 44, "url": null, "contents": { "documentId": "sBFjoPm6xoQMcbRhX", "html": "

The eternally curious Tailsteak has written about how he always picks up pennies off the sidewalk. He's run a cost-benefit analysis and determined that it's better on average to pick up a penny than to pass it by. His mistake lies nowhere in the analysis itself; it's pretty much correct. His mistake is performing the analysis in the first place.

\n

Pennies, you see, are easily the subject of scope insensitivity. When we come across a penny, we don't think, \"Hey, that's something worth 0.05% of what I wish I had come across. I could buy a 25th of a gumball, a mouthful of an unhealthy carbonated beverage, a couple of seconds of footage on DVD, or enough gasoline to go a tenth of a mile.\" We think, \"Hey, that's money,\" and we grab it.

\n

The thing is, it's difficult to comprehend how little a penny is worth—we don't really have a separate concept for \"mild happiness for a couple of seconds\"—and we're likely to take risks that far outweigh the benefits. We don't think of bending over to pick up a penny as being a risky endeavour, but it's a penny. How much risk does it take to outweigh a penny? Surely the risk of \"something unforeseen\" easily does the job. Are you 99.999999% sure that picking up that penny won't kill you? You need a reason for every 9 (if you're ambivalent between using seven 9s and using nine 9s, you should use seven; the number of 9s is never arbitrary), and by the time you come up with eight reasons to pick up the penny, you'll have wasted several cents' worth of time. If you can reduce the probability of harm that far, I applaud you.

\n\n

Of course, penny-grabbing doesn't have to involve actual pennies. Suppose that President Kodos of the Unified States of Somewhere (population 300 million) uses the word \"idiot\" in an important speech, causing the average citizen to scowl and ponder for one minute. Now, if a penny can buy you five seconds of happiness, and scowling and pondering brings the same amount of unhappiness, then that's twelve cents for every citizen, or 36 million dollars, of damage that Kodos just caused. Arguably, that's the value of a couple of human lives. As you can see, Kodos' decisions are extremely important. In this case, penny-grabbing would consist of anything less than trading precious seconds for precious human lives—if Kodos finds that he can save one life simply by going a few minutes out of his way, he should ignore it. (Photo ops and personal apologies are out of the question.) But keep in mind, of course, that avoiding saving someone's life because you have something better to do isn't rational unless you actually plan to do something better.
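For what it's worth, here is the back-of-the-envelope arithmetic behind that figure, written out as a tiny script. This is only a sketch; the five-seconds-of-happiness-per-penny rate and the population are the post's own illustrative assumptions, not real data.

```python
# Back-of-the-envelope arithmetic for the Kodos example. The numbers below
# are the post's own illustrative assumptions, not real data.
population = 300_000_000            # citizens of the Unified States of Somewhere
seconds_lost_per_citizen = 60       # one minute of scowling and pondering
seconds_of_happiness_per_penny = 5  # the post's assumed exchange rate

pennies_per_citizen = seconds_lost_per_citizen / seconds_of_happiness_per_penny
damage_in_dollars = population * pennies_per_citizen * 0.01

print(f"{pennies_per_citizen:.0f} cents per citizen")   # 12 cents per citizen
print(f"${damage_in_dollars:,.0f} of damage in total")  # $36,000,000 of damage in total
```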

" } }, { "_id": "G9dptrW9CJi7wNg3b", "title": "The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom", "pageUrl": "https://www.lesswrong.com/posts/G9dptrW9CJi7wNg3b/the-amanda-knox-test-how-an-hour-on-the-internet-beats-a", "postedAt": "2009-12-13T04:16:20.840Z", "baseScore": 58, "voteCount": 71, "commentCount": 649, "url": null, "contents": { "documentId": "G9dptrW9CJi7wNg3b", "html": "

Note: The quantitative elements of this post have now been revised significantly.

\n

Followup to: You Be the Jury: Survey on a Current Event

\n
\n

All three of them clearly killed her. The jury clearly believed so as well which strengthens my argument. They spent months examining the case, so the idea that a few minutes of internet research makes [other commenters] certain they're wrong seems laughable

\n
\n

- lordweiner27, commenting on my previous post

\n

The short answer: it's very much like how a few minutes of philosophical reflection trump a few millennia of human cultural tradition.

\n

Wielding the Sword of Bayes -- or for that matter the Razor of Occam -- requires courage and a certain kind of ruthlessness. You have to be willing to cut your way through vast quantities of noise and focus in like a laser on the signal.

\n

But the tools of rationality are extremely powerful if you know how to use them.

\n

Rationality is not easy for humans. Our brains were optimized to arrive at correct conclusions about the world only insofar as that was a necessary byproduct of being optimized to pass the genetic material that made them on to the next generation. If you've been reading Less Wrong for any significant length of time, you probably know this by now. In fact, around here this is almost a banality -- a cached thought. \"We get it,\" you may be tempted to say. \"So stop signaling your tribal allegiance to this website and move on to some new, nontrivial meta-insight.\"

\n

But this is one of those things that truly do bear repeating, over and over again, almost at every opportunity. You really can't hear it enough. It has consequences, you see. The most important of which is: if you only do what feels epistemically \"natural\" all the time, you're going to be, well, wrong. And probably not just \"sooner or later\", either. Chances are, you're going to be wrong quite a lot.

\n

To borrow a Yudkowskian turn of phrase: if you don't ever -- or indeed often -- find yourself needing to zig when, not only other people, but all kinds of internal \"voices\" in your mind are loudly shouting for you to zag, then you're either a native rationalist -- a born Bayesian, who should perhaps be deducing general relativity from the fall of an apple any minute now -- or else you're simply not trying hard enough.    

\n

Oh, and another one of those consequences of humans' not being instinctively rational?

\n

Two intelligent young people with previously bright futures, named Amanda and Raffaele, are now seven days into spending the next quarter-century of their lives behind bars for a crime they almost certainly did not commit.

\n

\"Almost certainly\" really doesn't quite capture it. In my previous post I asked readers to assign probabilities to the following propositions:

\n

1. Amanda Knox is guilty (of killing Meredith Kercher)
2. Raffaele Sollecito is guilty (of killing Meredith Kercher)
3. Rudy Guédé is guilty (of killing Meredith Kercher)

\n

I also asked them to guess at how closely they thought their estimates would match mine.

\n

Well, for comparison, here are mine (revised):

\n

1. Small (originally: negligible). Hardly different from the prior, which is dominated by the probability that someone in whatever reference class you would have put Amanda into on January 1, 2007 would commit murder within twelve months. Something on the order of 0.01 or 0.1 at most (originally 0.001).  
2. Ditto.
3. About as high as the other two numbers are low. 0.99 (originally 0.999) as a (probably weak) lower bound.

\n

Yes, you read that correctly. In my opinion, there is for all intents and purposes zero Bayesian evidence that Amanda and Raffaele are guilty. Needless to say, this differs markedly from the consensus of the jury in Perugia, Italy. 

\n

How could this be?

\n

Am I really suggesting that the estimates of eight jurors -- among them two professional judges -- who heard the case for a year, along with something like 60% of the Italian public and probably half the Internet (and a significantly larger fraction of the non-American Internet), could be off by such a large amount (originally: by a minimum of three orders of magnitude, probably significantly more)? That most other people (including most commenters on my last post) are off by no fewer than two?

\n

Well, dear reader, before getting too incredulous, consider this. How about averaging the probabilities all those folks would assign to the proposition that Jesus of Nazareth rose from the dead, and calling that number x. Meanwhile, let y be the correct rational probability that Jesus rose from the dead, given the information available to us.

\n

How big do you suppose the ratio x/y is?

\n

Anyone want to take a stab at guessing the logarithm of that number?

\n

Compared to the probability that Jesus rose from the dead, my estimate of Amanda Knox's culpability makes it look like I think she's as guilty as sin itself.

\n

And that, of course, is just the central one of many sub-claims of the hugely complex yet widely believed proposition that Christianity is true. There are any number of other equally unlikely assertions that Amanda would have heard at mass on the day after being found guilty of killing her new friend Meredith (source in Italian) -- assertions that are assigned non-negligible probability by no fewer than a couple billion of the Earth's human inhabitants.

\n

I say this by way of preamble: be very wary of trusting in the rationality of your fellow humans, when you have serious reasons to doubt their conclusions.

\n

The Lawfulness of Murder: Inference Proceeds Backward, from Crime to Suspect

\n

We live in a lawful universe. Every event that happens in this world -- including human actions and thoughts -- is ultimately governed by the laws of physics, which are exceptionless. 

\n

Murder may be highly illegal, but from the standpoint of physics, it's as lawful as everything else. Every physical interaction, including a homicide, leaves traces -- changes in the environment that constitute information about what took place.

\n

Such information, however, is -- crucially -- local. The further away in space and time you move from the event, the less entanglement there is between your environment and that of the event, and thus the more difficult it is to make legitimate inferences about the event. The signal-to-noise ratio decreases dramatically as you move away in causal distance from the event. After all, the hypothesis space of possible causal chains of length n leading to the event increases exponentially in n.

\n

By far the most important evidence in a murder investigation will therefore be the evidence that is the closest to the crime itself -- evidence on and around the victim, as well as details stored in the brains of people who were present during the act. Less important will be evidence obtained from persons and objects a short distance away from the crime scene; and the importance decays rapidly from there as you move further out.

\n

It follows that you cannot possibly expect to reliably arrive at the correct answer by starting a few steps removed in the causal chain, say with a person you find \"suspicious\" for some reason, and working forward to come up with a plausible scenario for how the crime was committed. That would be privileging the hypothesis. Instead, you have to start from the actual crime scene, or as close to it as you can get, and work backward, letting yourself be blown by the winds of evidence toward one or more possible suspects.

\n

In the Meredith Kercher case, the winds of evidence blow with something like hurricane force in the direction of Rudy Guédé. After the murder, Kercher's bedroom was filled with evidence of Guédé's presence; his DNA was found not only on top of but actually inside her body. That's about as close to the crime as it gets. At the same time, no remotely similarly incriminating genetic material was found from anyone else -- in particular, there were no traces of the presence of either Amanda Knox or Raffaele Sollecito in the room (and no, the supposed Sollecito DNA on Meredith's bra clasp just plain does not count -- nor, while we're at it, do the 100 picograms [about one human cell's worth] of DNA from Meredith allegedly on the tip of a knife handled by Knox, found at Sollecito's apartment after the two were already suspects; these two things constituting pretty much the entirety of the physical \"evidence\" against the couple).

\n

If, up to this point, the police had reasons to be suspicious of Knox, Sollecito, and Guédé, they should have cleared Knox and Sollecito at once upon the discovery that Guédé -- who, by the way, was the only one to have fled the country after the crime -- was the one whom the DNA matched. Unless, that is, Knox and Sollecito were specifically implicated by Guédé; after all, maybe Knox and Sollecito didn't actually kill the victim, but instead maybe they paid Guédé to do so, or were otherwise involved in a conspiracy with him. But the prior probabilities of such scenarios are low, even in general -- to say nothing of the case of Knox and Sollecito specifically, who, tabloid press to the contrary, are known to have had utterly benign dispositions prior to these events, and no reason to want Meredith Kercher dead.

\n

If Amanda Knox and Raffaele Sollecito were to be in investigators' thoughts at all, they had to get there via Guédé -- because otherwise the hypothesis (a priori unlikely) of their having had homicidal intent toward Kercher would be entirely superfluous in explaining the chain of events that led to her death.  The trail of evidence had led to Guédé, and therefore necessarily had to proceed from him; to follow any other path would be to fatally sidetrack the investigation, and virtually to guarantee serious -- very serious -- error. Which is exactly what happened.

\n

There was in fact no inferential path from Guédé to Knox or Sollecito. He never implicated either of the two until long after the event; around the time of his apprehension, he specifically denied that Knox had been in the room. Meanwhile, it remains entirely unclear that he and Sollecito had ever even met.

\n

The hypotheses of Knox's and Sollecito's guilt are thus seen to be completely unnecessary, doing no explanatory work with respect to Kercher's death. They are nothing but extremely burdensome details.  

\n

Epistemic Ruthlessness: Following the Strong Signal

\n

All of the \"evidence\" you've heard against Knox and Sollecito -- the changing stories, suspicious behavior, short phone calls, washing machine rumors, etc. -- is, quite literally, just noise.

\n

But it sounds so suspicious, you say. Who places a three-second phone call? 

\n

As humans, we are programmed to think that the most important kinds of facts about the world are mental and social -- facts about what humans are thinking and planning, particularly as regards other humans. This explains why some people are capable of wondering whether the presence of (only) Rudy Guédé's DNA in and on Meredith's body should be balanced against the possibility that Meredith may have been annoyed at Amanda for bringing home boyfriends and occasionally forgetting to flush the toilet -- that might have led to resentment on Amanda's part, you see.

\n

That's an extreme example, of course -- certainly no one here fell into that kind of trap. But at least one of the most thoughtful commenters was severely bothered by the length of Amanda's phone calls to Meredith. As -- I'll confess -- was I, for a minute or two.

\n

I don't know why Amanda wouldn't have waited longer for Meredith to pick up. (For what it's worth, I myself have sometimes, in a state of nervousness, dialed someone's number, quickly changed my mind, then dialed again a short time later.) But -- as counterintuitive as it may seem -- it doesn't matter. The error here is even asking a question about Amanda's motivations when you haven't established an evidentiary (and that means physical) trail leading from Meredith's body to Amanda's brain. (Or even more to the point, when you have established a trail that led decisively elsewhere.)

\n

Maybe it's \"unlikely\" that Amanda would have behaved this way if she were innocent. But is the degree of improbability here anything like the improbability of her having participated in a sex-orgy-killing without leaving a single piece of physical evidence behind? While someone else left all kinds of traces? When you had no reason to suspect her at all without looking a good distance outside Meredith's room, far away from the important evidence?

\n

It's not even remotely comparable. 

\n

Think about what you're doing here: you are invoking the hypothesis that Amanda Knox is guilty of murder in order to explain the fact that she hung up the phone after three seconds. (Remember, the evidence against Guédé is such that the hypothesis of her guilt is superfluous -- not needed -- in explaining the death of Meredith Kercher!)

\n

Maybe that's not quite as bad as invoking a superintelligent deity in order to explain life on Earth; but it's the same kind of mistake: explaining a strange thing by postulating a far, far stranger thing.

\n

\"But come on,\" says a voice in your head. \"Does this really sound like the behavior of an innocent person?\"

\n

You have to shut that voice out. Ruthlessly. Because it has no way of knowing. That voice is designed to assess the motivations of members of an ancestral hunter-gatherer band. At best, it may have the ability to distinguish the correct murderer from between 2 and 100 possibilities -- 6 or 7 bits of inferential power on the absolute best of days. That may have worked in hunter-gatherer times, before more-closely-causally-linked physical evidence could hope to be evaluated. (Or maybe not -- but at least it got the genes passed on.)

\n

DNA analysis, in contrast, has in principle the ability to uniquely identify a single individual from among the entire human species (depending on how much of the genome is looked at; also ignoring identical twins, etc.) -- that's more like 30-odd bits of inferential power. In terms of epistemic technology, we're talking about something like the difference in locomotive efficacy between a horse-drawn carriage and the Starship Enterprise. Our ancestral environment just plain did not equip our knowledge-gathering intuitions with the ability to handle weapons this powerful.
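As a rough sanity check on those bit counts (my own arithmetic, not the post's), treating bits of inferential power as simply the base-2 logarithm of the number of candidates that can be told apart:

```python
import math

# Rough check of the bit counts quoted above. "Bits" here just means the
# base-2 log of how many candidates can be distinguished (my framing).
band_size = 100                    # upper end of "between 2 and 100 possibilities"
world_population = 7_000_000_000   # very roughly the entire human species

print(math.log2(band_size))         # ~6.6  -- the "6 or 7 bits" figure
print(math.log2(world_population))  # ~32.7 -- the "30-odd bits" figure for DNA
```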

\n

We're talking about the kind of power that allows us to reduce what was formerly a question of human social psychology -- who made the decision to kill Meredith? -- to one of physics. (Or chemistry, at any rate.)

\n

But our minds don't naturally think in terms of physics and chemistry. From an intuitive point of view, the equations of those subjects are foreign; whereas \"X did Y because he/she wanted Z\" is familiar. This is why it's so difficult for people to intuitively appreciate that all of the chatter about Amanda's \"suspicious behavior\", along with the various convincing-sounding narratives put forth by the prosecution, is totally and utterly drowned out to oblivion by the sheer strength of the DNA signal pointing to Guédé alone.

\n

This rationalist skill of following the strong signal -- mercilessly blocking out noise -- might be considered an epistemic analog of the instrumental \"shut up and multiply\": when much is at stake, you have to be willing to jettison your intuitive feelings in favor of cold, hard, abstract calculation.

\n

In this case, that means, among other things, thinking in terms of how much explanatory work is done by the various hypotheses, rather than how suspicious Amanda and Raffaele seem.

\n

Conclusion: The Amanda Knox Test

\n

I chose the title of this post because the parallel structure made it sound nice. But actually, I think an hour is a pretty weak upper bound on the amount of time a skilled rationalist should need to arrive at the correct judgment in this case.

\n

The fact is that what this comes down to is an utterly straightforward application of Occam's Razor. The complexity penalty on the prosecution's theory of the crime is enormous; the evidence in its favor had better be overwhelming. But instead, what we find is that the evidence from the scene -- the most important sort of evidence by a huge margin -- points with literally superhuman strength toward a mundane, even typical, homicide scenario. To even consider theories not directly suggested by this evidence is to engage in hypothesis privileging to the extreme.

\n

So let me say it now, in case there was any doubt: the prosecution of Amanda Knox and Raffaele Sollecito, culminating in last week's jury verdict -- which apparently was unanimous, though it didn't need to be under Italian rules -- represents nothing but one more gigantic, disastrous rationality failure on the part of our species.

\n

How did Less Wrong do by comparison? The average estimated probability of Amanda Knox's guilt was 0.35 (thanks to Yvain for doing the calculation). It's pretty reasonable to assume the figure for Raffaele Sollecito would be similar. While not particularly flattering to the defendants (how would you like to be told that there's a 35% chance you're a murderer?), that number makes it obvious we would have voted to acquit. (If a 65% chance that they didn't do it doesn't constitute  \"reasonable doubt\" that they did...)

\n

The commenters whose estimates were closest to mine -- and, therefore, to the correct answer, in my view -- were Daniel Burfoot and jenmarie. Congratulations to them. (But even they were off by a factor of at least ten!)

\n

In general, most folks went in the right direction, but, as Eliezer noted, were far too underconfident -- evidently the result of an exorbitant level of trust in juries, at least in part. But people here were also widely making the same object-level mistake as (presumably) the jury: vastly overestimating the importance of \"psychological\" evidence, such as Knox's inconsistencies at the police station, as compared to \"physical\" evidence (only Guédé's DNA in the room).

\n

One thing that was interesting and rather encouraging, however, is the amount of updating people did after reading others' comments -- most of it in the right direction (toward innocence).

\n

[EDIT: After reading comments on this post, I have done some updating of my own. I now think I failed to adequately consider the possibility of my own overconfidence. This was pretty stupid of me, since it meant that the focus was taken away from the actual arguments in this post, and basically toward the issue of whether 0.001 can possibly be a rational estimate for anything you read about on the Internet. The qualitative reasoning of this post, of course, stands. Also, the focus of my accusations of irrationality was not primarily the LW community as reflected in my previous post; I actually think we did a pretty good job of coming to the right conclusion given the information provided -- and as others have noted, the levelheadedness with which we did so was impressive.]

\n

For most frequenters of this forum, where many of us regularly speak in terms of trying to save the human species from various global catastrophic risks, a case like this may not seem to have very many grand implications, beyond serving as yet another example of how basic principles of rationality such as Occam's Razor are incredibly difficult for people to grasp on an intuitive level. But it does catch the attention of someone like me, who takes an interest in less-commonly-thought-about forms of human suffering.

\n

The next time I find myself discussing the \"hard problem of consciousness\", thinking in vivid detail about the spectrum of human experience and wondering what it's like to be a bat, I am going to remember -- whether I say so or not -- that there is most definitely something it's like to be Amanda Knox in the moments following the announcement of that verdict: when you've just learned that, instead of heading back home to celebrate Christmas with your family as you had hoped, you will be spending the next decade or two -- your twenties and thirties -- in a prison cell in a foreign country. When your deceased friend's relatives are watching with satisfaction as you are led, sobbing and wailing with desperation, to a van which will transport you back to that cell. (Ever thought about what that ride must be like?) 

\n

While we're busy eliminating hunger, disease, and death itself, I hope we can also find the time, somewhere along the way, to get rid of that, too.

\n

(The Associated Press reported that, apparently, Amanda had some trouble going to sleep after the midnight verdict.) 

\n

I'll conclude with this: the noted mathematician Serge Lang was in the habit of giving his students \"Huntington tests\" -- named in reference to his controversy with political scientist Samuel Huntington, whose entrance into the U.S. National Academy of Sciences Lang waged a successful campaign to block on the grounds of Huntington's insufficient scientific rigor.

\n

The purpose of the Huntington test, in Lang's words, was to see if the students could \"tell a fact from a hole in the ground\".

\n

I'm thinking of adopting a similar practice, and calling my version the Amanda Knox Test. 

\n

Postscript: If you agree with me, and are also the sort of person who enjoys purchasing warm fuzzies separately from your utilons, you might consider donating to Amanda's defense fund, to help out her financially devastated family. Of course, if you browse the site, you may feel your (prior) estimate of her guilt taking some hits; personally, that's okay with me.

" } }, { "_id": "wtzCt5jLMdpgXbMhZ", "title": "A question of rationality", "pageUrl": "https://www.lesswrong.com/posts/wtzCt5jLMdpgXbMhZ/a-question-of-rationality", "postedAt": "2009-12-13T02:37:41.722Z", "baseScore": 0, "voteCount": 50, "commentCount": 94, "url": null, "contents": { "documentId": "wtzCt5jLMdpgXbMhZ", "html": "

Thank You for Your Participation

\n

I would like to thank you all for your unwitting and unwilling participation in my little social experiment. If I do say so myself, you all performed as I had hoped. I found some of the responses interesting; many of them were goofy. I was honestly hoping that a budding rationalist community like this one would have stopped this experiment midway, but I thank you all for not being that rational. I really did appreciate all the mormon2 bashing; it was quite amusing, and some of the attempts to discredit me were humorous, though unsuccessful. In terms of the questions I asked, I was curious about the answers, though I did not expect to get any, nor do I really need them, since I have a good idea of what the answers are just from simple deductive reasoning. I really do hope EY is working on FAI and actually is able to do it, though I certainly will not stake my hopes or money on it.

\n

Lest there be any suspicion: I am being sincere here.

\n

 

\n

Response

\n

Because I can I am going to make one final response to this thread I started:

\n

Since none of you understand what I am doing, I will spell it out for you. My posts are formatted, written and styled intentionally for the response I desire. The point is to give you guys easy ways to avoid answering my questions (things like the tone of the post, spelling, grammar, being \"hostile (not really)\", etc.). I just wanted to see if anyone here could actually look past that, specifically EY, and post some honest answers to the questions (real answers, again from EY, not pawns on LW). Obviously this was too much to ask, since the general responses, not completely but for the most part, were copouts. I am well aware that EY probably would never answer any challenge to what he thinks; people like EY typically won't (I have dealt with many people like EY). I think the responses here speak volumes about LW and the people who post here (if you can't look past the way the content is posted, then you are going to have a hard time in life, since not everyone is going to meet your standards for how they speak or write). You guys may not be trying to form a cult, but the way you respond to a post like this screams cultish, with even some circle-jerk mentality mixed in there. 

\n

 

\n

Post

\n

I would like to float an argument and a series of questions. Now before you guys vote me down, please do me the courtesy of reading the post. I am also aware that some and maybe even many of you think that I am a troll just out to bash SIAI and Eliezer; that is in fact not my intent. This group is supposed to be about improving rationality, so let's improve our rationality.

\n

SIAI has the goal of raising awareness of the dangers of AI as well as trying to create their own FAI solution to the problem. This task has fallen to Eliezer as the paid researcher working on FAI. What I would like to point out is a bit of a disconnect between what SIAI is supposed to be doing and what EY is doing.

\n

According to EY FAI is an extremely important problem that must be solved with global implications. It is both a hard math problem and a problem that needs to be solved by people who take FAI seriously first. To that end SIAI was started with EY as an AI researcher at SIAI. 

\n

Until about 2006, EY was working on papers like CEV and on designs for FAI which he has now discarded as being wrong for the most part. He then went on a long period of blogging on Overcoming Bias and LessWrong and is now working on a book on rationality as his stated main focus. If this is accurate, I would ask how this makes sense coming from someone who has made such a big deal about FAI, its importance, and being first to make AI and ensure it is FAI. If FAI is so important, then where does a book on rationality fit? Does that even play into SIAI's chief goals? SIAI spends huge amounts of time talking about the risks and rewards of FAI, and the person who is supposed to be making the FAI is writing a book on rationality instead of solving FAI. How does this square with being paid to research FAI? How can one justify EY's reasons for not publishing the math of TDT, coming from someone who is committed to FAI? If one is committed to solving that hard a problem, then I would think that the publication of one's ideas on it would be a primary goal, to advance the cause of FAI.

\n

If this doesn't make sense, then I would ask how rational it is to spend time helping SIAI if they are not focused on FAI. Can one justify giving to an organization like that when the chief FAI researcher is distracted by writing a book on rationality instead of solving the myriad hard math problems that need to be solved for FAI? If this somehow makes sense, then can one also state that FAI is not nearly as important as it has been made out to be, since the champion of FAI feels comfortable taking a break from solving the problem to write a book on rationality (in other words, the world really isn't at stake)? 

\n

Am I off base? If this group is devoted to rationality then everyone should be subjected to rational analysis.

" } }, { "_id": "LCAynNFgnECrQ9iD2", "title": "The persuasive power of false confessions", "pageUrl": "https://www.lesswrong.com/posts/LCAynNFgnECrQ9iD2/the-persuasive-power-of-false-confessions", "postedAt": "2009-12-11T01:54:23.739Z", "baseScore": 12, "voteCount": 11, "commentCount": 4, "url": null, "contents": { "documentId": "LCAynNFgnECrQ9iD2", "html": "

\n

First paragraph from a Mind Hacks post:

\n

\n
\n

The APS Observer magazine has a fantastic article on the power of false confessions to warp our perception of other evidence in a criminal case to the point where expert witnesses will change their judgements of unrelated evidence to make it fit the false admission of guilt.

\n
\n

The post and linked article are worth reading… and I don't have much to add.

" } }, { "_id": "J7Gkz8aDxxSEQKXTN", "title": "What Are Probabilities, Anyway?", "pageUrl": "https://www.lesswrong.com/posts/J7Gkz8aDxxSEQKXTN/what-are-probabilities-anyway", "postedAt": "2009-12-11T00:25:33.177Z", "baseScore": 51, "voteCount": 40, "commentCount": 90, "url": null, "contents": { "documentId": "J7Gkz8aDxxSEQKXTN", "html": "

In Probability Space & Aumann Agreement, I wrote that probabilities can be thought of as weights that we assign to possible world-histories. But what are these weights supposed to mean? Here I’ll give a few interpretations that I've considered and held at one point or another, and their problems. (Note that in the previous post, I implicitly used the first interpretation in the following list, since that seems to be the mainstream view.)

\n
1. Only one possible world is real, and probabilities represent beliefs about which one is real.
2. All possible worlds are real, and probabilities represent beliefs about which one I’m in.
3. Not all possible worlds are equally real, and probabilities represent “how real” each world is. (This is also sometimes called the “measure” or “reality fluid” view.)
4. All possible worlds are real, and probabilities represent how much I care about each world. (To make sense of this, recall that these probabilities are ultimately multiplied with utilities to form expected utilities in standard decision theories.)

As you can see, I think the main problem with all of these interpretations is arbitrariness. The unconditioned probability mass function is supposed to represent my beliefs before I have observed anything in the world, so it must represent a state of total ignorance. But there seems to be no way to specify such a function without introducing some information, which anyone could infer by looking at the function.

\n

For example, suppose we use a universal distribution, where we believe that the world-history is the output of a universal Turing machine given a uniformly random input tape. But then the distribution contains the information of which UTM we used. Where did that information come from?

\n

One could argue that we do have some information even before we observe anything, because we're products of evolution, which would have built some useful information into our genes. But to the extent that we can trust the prior specified by our genes, it must be that evolution approximates a Bayesian updating process, and our prior distribution approximates the posterior distribution of such a process. The \"prior of evolution\" still has to represent a state of total ignorance.

\n

These considerations lead me to lean toward the last interpretation, which is the most tolerant of arbitrariness. This interpretation also fits well with the idea that expected utility maximization with Bayesian updating is just an approximation of UDT that works in most situations. I and others have already motivated UDT by considering situations where Bayesian updating doesn't work, but it seems to me that even if we set those aside, there is still reason to consider a UDT-like interpretation of probability where the weights on possible worlds represent how much we care about those worlds.

" } }, { "_id": "JdK3kr4ug9kJvKzGy", "title": "Probability Space & Aumann Agreement", "pageUrl": "https://www.lesswrong.com/posts/JdK3kr4ug9kJvKzGy/probability-space-and-aumann-agreement", "postedAt": "2009-12-10T21:57:02.812Z", "baseScore": 53, "voteCount": 46, "commentCount": 76, "url": null, "contents": { "documentId": "JdK3kr4ug9kJvKzGy", "html": "

The first part of this post describes a way of interpreting the basic mathematics of Bayesianism. Eliezer already presented one such view at http://lesswrong.com/lw/hk/priors_as_mathematical_objects/, but I want to present another one that has been useful to me, and also show how this view is related to the standard formalism of probability theory and Bayesian updating, namely the probability space.

\n

The second part of this post will build upon the first, and try to explain the math behind Aumann's agreement theorem. Hal Finney had suggested this earlier, and I'm taking on the task now because I recently went through the exercise of learning it, and could use a check of my understanding. The last part will give some of my current thoughts on Aumann agreement.

\n

Probability Space

\n

In http://en.wikipedia.org/wiki/Probability_space, you can see that a probability space consists of a triple:

1. a sample space Ω;
2. a set F of events, where each event is a subset of Ω; and
3. a probability measure P, which assigns a probability to each event in F.

F and P are required to have certain additional properties, but I'll ignore them for now. To start with, we’ll interpret Ω as a set of possible world-histories. (To eliminate anthropic reasoning issues, let’s assume that each possible world-history contains the same number of observers, who have perfect memory, and are labeled with unique serial numbers.) Each “event” A in F is formally a subset of Ω, and interpreted as either an actual event that occurs in every world-history in A, or a hypothesis which is true in the world-histories in A. (The details of the events or hypotheses themselves are abstracted away here.)

\n

To understand the probability measure P, it’s easier to first introduce the probability mass function p, which assigns a probability to each element of Ω, with the probabilities summing to 1. Then P(A) is just the sum of the probabilities of the elements in A. (For simplicity, I’m assuming the discrete case, where Ω is at most countable.) In other words, the probability of an observation is the sum of the probabilities of the world-histories that it doesn't rule out.

\n

A payoff of this view of the probability space is a simple understanding of what Bayesian updating is. Once an observer sees an event D, he can rule out all possible world-histories that are not in D. So, he can get a posterior probability measure by setting the probability masses of all world-histories not in D to 0, and renormalizing the ones in D so that they sum up to 1 while keeping the same relative ratios. You can easily verify that this is equivalent to Bayes’ rule: P(H|D) = P(D ∩ H)/P(D).
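As a minimal sketch of this picture (my own toy example, not from the original post), here is a discrete probability space with a handful of made-up world-histories, where P(A) is the sum of the masses in A and updating on an observation D is just zeroing out the excluded world-histories and renormalizing:

```python
# A toy discrete probability space: masses over possible world-histories.
# The labels and numbers are invented purely for illustration.
p = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

def P(event):
    """P(A): the sum of the masses of the world-histories in A."""
    return sum(p[w] for w in event)

def update(p, D):
    """Bayesian updating: set the masses of world-histories ruled out by the
    observation D to zero, then renormalize the survivors."""
    total = sum(mass for w, mass in p.items() if w in D)
    return {w: (mass / total if w in D else 0.0) for w, mass in p.items()}

H = {"w1", "w3"}        # a hypothesis, true in w1 and w3
D = {"w1", "w2", "w3"}  # an observation, ruling out w4

posterior = update(p, D)
print(sum(posterior[w] for w in H))  # P(H|D) via renormalization: ~0.667
print(P(H & D) / P(D))               # Bayes' rule, P(D ∩ H)/P(D): also ~0.667
```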

\n

To sum up, the mathematical objects behind Bayesianism can be seen as

1. a set Ω of possible world-histories;
2. a probability mass function p that assigns a weight to each possible world-history, summing to 1, with P(A) for any event A being the sum of the weights of the world-histories in A; and
3. Bayesian updating, which, on seeing an event D, rules out the world-histories not in D and renormalizes the weights of the rest.

Aumann's Agreement Theorem

\n

Aumann's agreement theorem says that if two Bayesians share the same probability space but possibly different information partitions, and have common knowledge of their information partitions and posterior probabilities of some event A, then their posterior probabilities of that event must be equal. So what are information partitions, and what does \"common knowledge\" mean?

\n

The information partition I of an observer-moment M divides Ω into a number of subsets that are non-overlapping, and together cover all of Ω. Two possible world-histories w1 and w2 are placed into the same subset if the observer-moments in w1 and w2 have the exact same information. In other words, if w1 and w2 are in the same element of I, and w1 is the actual world-history, then M can't rule out either w1 or w2. I(w) is used to denote the element of I that contains w.

\n

Common knowledge is defined as follows: If w is the actual world-history and two agents have information partitions I and J, an event E is common knowledge if E includes the member of the meet I∧J that contains w. The operation ∧ (meet) means to take the two sets I and J, form their union, then repeatedly merge any of its elements (which you recall are subsets of Ω) that overlap until it becomes a partition again (i.e., no two elements overlap).
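To make the merging procedure concrete, here is a small sketch of the meet of two partitions (my own illustration; the partitions and labels are made up):

```python
def meet(I, J):
    """Meet of two partitions of the same set Omega: take the union of their
    cells and repeatedly merge any two cells that overlap, until no two
    cells overlap. (Unoptimized sketch.)"""
    cells = [set(c) for c in list(I) + list(J)]
    merged = True
    while merged:
        merged = False
        for a in range(len(cells)):
            for b in range(a + 1, len(cells)):
                if cells[a] & cells[b]:
                    cells[a] |= cells.pop(b)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(c) for c in cells}

# Two information partitions of Omega = {1, ..., 6}:
I = [{1, 2}, {3, 4}, {5, 6}]
J = [{1}, {2, 3}, {4}, {5, 6}]
print(meet(I, J))  # two cells: {1, 2, 3, 4} and {5, 6}
```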

\n

It may not be clear at first what this meet operation has to do with common knowledge. Suppose the actual world-history is w. Then agent 1 knows I(w), so he knows that agent 2 must know one of the elements of J that overlaps with I(w). And he can reason that agent 2 must know that agent 1 knows one of the elements of I that overlaps with one of these elements of J. If he carries out this inference to infinity, he'll find that both agents know that the actual world-history is in (I∧J)(w), and both know that the other knows, and both know that the other knows that they know, and so on. In other words, it is common knowledge that the actual world-history is in (I∧J)(w). Since event E occurs in every world-history in (I∧J)(w), it's common knowledge that E occurs in the actual world-history.

\n

Proof for the agreement theorem then goes like this. Let E be the event that agent 1 assigns a posterior probability (conditioned on everything he knows) of q1 to event A and agent 2 assigns a posterior probability of q2 to event A. If E is common knowledge at w, then both agents know that P(A | I(v)) = q1 and P(A | J(v)) = q2 for every v in (I∧J)(w). But this implies P(A | (I∧J)(w)) = q1 and P(A | (I∧J)(w)) = q2, and therefore q1 = q2. (To see this, suppose you currently know only (I∧J)(w); if you know that no matter what additional information I(v) you obtain, your posterior probability will be the same q1, then your current probability must already be q1.)
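Here is a toy numerical check of that argument (my own example, with made-up numbers): two agents share a prior over Omega = {1, 2, 3, 4} but have different partitions, and the prior is chosen so that each agent's posterior for A is the same in every cell of his own partition, making those posteriors common knowledge.

```python
# Toy check of the agreement argument above; all numbers are made up.
# Both agents share the prior p over Omega = {1, 2, 3, 4}; A = {1, 4}.
p = {1: 0.3, 2: 0.2, 3: 0.2, 4: 0.3}
A = {1, 4}
I = [{1, 2}, {3, 4}]  # agent 1's information partition
J = [{1, 3}, {2, 4}]  # agent 2's information partition

def cond(A, E):
    """P(A | E) under the prior p."""
    return sum(p[w] for w in A & E) / sum(p[w] for w in E)

# The meet I^J here is the trivial partition {Omega}. Each agent's posterior
# for A is the same in every cell of his own partition, so those posteriors
# are common knowledge -- and, as the proof requires, they agree:
print([cond(A, cell) for cell in I])  # [0.6, 0.6]  -> q1 = 0.6
print([cond(A, cell) for cell in J])  # [0.6, 0.6]  -> q2 = 0.6
print(cond(A, {1, 2, 3, 4}))          # 0.6 = P(A | (I^J)(w)) = q1 = q2
```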

\n

Is Aumann Agreement Overrated?

\n

Having explained all of that, it seems to me that this theorem is less relevant to a practical rationalist than I thought before I really understood it. After looking at the math, it's apparent that \"common knowledge\" is a much stricter requirement than it sounds. The most obvious way to achieve it is for the two agents to simply tell each other I(w) and J(w), after which they share a new, common information partition. But in that case, agreement itself is obvious and there is no need to learn or understand Aumann's theorem.

\n

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

\n

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.

\n

Finally, I now see that until the exchange of information completes and common knowledge/agreement is actually achieved, it's rational for even honest truth-seekers who share common priors to disagree. Therefore, two such rationalists may persistently disagree just because the amount of information they would have to exchange in order to reach agreement is too great to be practical. This is quite different from the understanding of Aumann agreement I had before I read the math.

" } }, { "_id": "mkAcXPEJ7RZCJs8ry", "title": "You Be the Jury: Survey on a Current Event", "pageUrl": "https://www.lesswrong.com/posts/mkAcXPEJ7RZCJs8ry/you-be-the-jury-survey-on-a-current-event", "postedAt": "2009-12-09T04:25:13.746Z", "baseScore": 43, "voteCount": 44, "commentCount": 266, "url": null, "contents": { "documentId": "mkAcXPEJ7RZCJs8ry", "html": "

As many of you probably know, in an Italian court early last weekend, two young students, Amanda Knox and Raffaele Sollecito, were convicted of killing another young student, Meredith Kercher, in a horrific way in November of 2007. (A third person, Rudy Guede, was convicted earlier.)

\n

If you aren't familiar with the case, don't go reading about it just yet. Hang on for just a moment.

\n

If you are familiar, that's fine too. This post is addressed to readers of all levels of acquaintance with the story.

\n

What everyone should know right away is that the verdict has been extremely controversial. Strong feelings have emerged, even involving national tensions (Knox is American, Sollecito Italian, and Kercher British, and the crime and trial took place in Italy). The circumstances of the crime involve sex. In short, the potential for serious rationality failures in coming to an opinion on a case like this is enormous.  

\n

Now, as it happens, I myself have an opinion. A rather strong one, in fact. Strong enough that I caught myself thinking that this case -- given all the controversy surrounding it -- might serve as a decent litmus test in judging the rationality skills of other people. Like religion, or evolution -- except less clichéd (and cached) and more down-and-dirty.

\n

Of course, thoughts like that can be dangerous, as I quickly recognized. The danger of in-group affective spirals looms large. So before writing up that Less Wrong post adding my-opinion-on-the-guilt-or-innocence-of-Amanda-Knox-and-Raffaele-Sollecito to the List of Things Every Rational Person Must Believe, I decided it might be useful to find out what conclusion(s) other aspiring rationalists would (or have) come to (without knowing my opinion).

\n

So that's what this post is: a survey/experiment, with fairly specific yet flexible instructions (which differ slightly depending on how much you know about the case already).

\n

\n

For those whose familiarity with the case is low:

\n

I'm going to give you two websites advocating a position, one strongly in favor of the verdict, the other strongly opposed. Your job will be to browse around these sites to learn info about the case, as much as you need to in order to arrive at a judgment. The order, manner, and quantity of browsing will be left up to you -- though I would of course like to know how much you read in your response.

\n

1. Site arguing defendants are guilty. 

\n

2. Site arguing defendants are innocent.

\n

I've chosen these particular sites because they seemed to contain the best combination of fierceness of advocacy and quantity of information on their respective sides that I could find. 

\n

If you find better summaries, or think that these choices reflect a bias or betray my own opinion, by all means let me know. I'm specifically avoiding referring you to media reports, however, for a couple of reasons. First, I've noticed that reports often contain factual inaccuracies (necessarily, because they contradict each other). Secondly, journalists don't usually have much of a stake, and I'd like to see how folks respond to passionate advocacy by people who care about the outcome, as in an actual trial, rather than attempts at neutral summarizing. Of course, it's fine if you want to read media reports linked to by the above sites.

\n

(One potential problem is that the first site is organized like a blog or forum, and thus it is hard to find a quick summary of the case there. [EDIT: Be sure to look at the category links on the right side of the page to find the arguments.] If you think it necessary, refer to the ever-changing Wikipedia article, which at the moment of writing seems a bit more favorable to the prosecution. [EDIT: I'm no longer sure that's true.] [EDIT: Now I think it's true again, the article having apparently changed some more. So there's really no telling. Be warned.])

\n

After you do this reading, I'd like to know:

\n

1. Your probability estimate that Amanda Knox is guilty.
2. Your probability estimate that Raffaele Sollecito is guilty.
3. Your probability estimate that Rudy Guede is guilty.
4. How much you think your opinion will turn out to coincide with mine.

\n

Feel free to elaborate on your reasoning to whatever degree you like.

\n

One request: don't look at others' comments until you've done the experiment yourself!

\n

For those whose familiarity with the case is moderate or high:

\n

I'd like to know, as of right now:

\n

1. Your probability estimate that Amanda Knox is guilty.
2. Your probability estimate that Raffaele Sollecito is guilty.
3. Your probability estimate that Rudy Guede is guilty.
4. How much you think your opinion will turn out to coincide with mine.
5. From what sources you've gotten the info you've used to arrive at these estimates.

\n

Then, if possible, do the experiment described above for those with little familiarity, and report any shifts in your estimates.

\n


Again, everyone should avoid looking at others' responses before giving their own feedback. Also, don't forget to identify your prior level of familiarity!

\n

If the level of participation warrants it, I'll post my own thoughts (and reaction to the feedback here) in a later post. (Edit: That post can be found here.)

" } }, { "_id": "Qi6tPtvSEmEdg6wvs", "title": "Science - Idealistic Versus Signaling", "pageUrl": "https://www.lesswrong.com/posts/Qi6tPtvSEmEdg6wvs/science-idealistic-versus-signaling", "postedAt": "2009-12-06T13:39:04.368Z", "baseScore": 10, "voteCount": 16, "commentCount": 58, "url": null, "contents": { "documentId": "Qi6tPtvSEmEdg6wvs", "html": "

[This is a version of a first draft essay I wrote for my blog.  I intend to write another version, but it is going to take some time to research, and I want to get this out where I can start getting some feedback and sources for further research.]

\n

The responses to the recent leaking of the CRU's information and emails have led me to a changed understanding of science and how it is viewed by various people, especially people who claim to be scientists. Among people who actually do or consume science there seem to be two broad views - what they \"believe\" about science, rather than what they normally \"say\" about science when asked.

\n

The classical view, what I have begun thinking of as the idealistic view, is science as the search for reliable knowledge. This is the version most scientists (and many non-scientists) espouse when asked, but increasingly many scientists actually hold another view when their beliefs are evaluated by their actions.

\n

This is the signaling and control view of science. This is the \"social network\" view that has been developed by many sociologists of science.

\n

\n

For an extended example of the two views in conflict, see this recent 369-comment thread, Facts to fit the theory? Actually, no facts at all! PhysicistDave is the best exemplar of the idealistic view, with pete and several others having extreme signaling and control viewpoints.

\n

I wonder how much of the fact that there haven't been any fundamental breakthroughs in the last fifty years has to do with the effective takeover of science by academics and government - that is, by the signaling and control view. Maybe we have too many \"accredited\" scientists, and they are too beholden to government and, to a lesser extent, other grant-making organizations - and they have crowded out or controlled real, idealistic science.

\n

This can also explain the conflict over peer review: those who extol it, despite its many flaws, while downplaying open source science, are controlling-view scientists protecting their turf, power, and prerogatives. Anyone thinking about the ideals of science, the classical view of science, immediately realizes that open sourcing the arguments and data will serve the end of extending knowledge much better than peer review, now that doing so is possible. Peer review was a stopgap means of getting a quick review of a paper, necessary when the costs of distributing information were high, but it is now obsolescent at best.

\n

Instead the senior scientists and journal editors are protecting their power by protecting peer review.

\n

Bureaucrats, and especially teachers, will tend strongly toward the signaling and control view.

\n

Economics and other social \"sciences\" will tend toward the signaling and control view - for examples, see Robin Hanson's and Tyler Cowen's takes on the CRU leak, with their claims that this is just how academia really works, and pete, who claims a Master's in economics, in the comment thread linked above.

\n

Robin Hanson's It's News on Academia, Not Climate

\n
Yup, this behavior has long been typical when academics form competing groups, whether the public hears about such groups or not. If you knew how academia worked, this news would not surprise you nor change your opinions on global warming. I’ve never done this stuff, and I’d like to think I wouldn’t, but that is cheap talk since I haven’t had the opportunity. This works as a “scandal” only because of academia’s overly idealistic public image.
\n

And Tyler Cowen in The lessons of \"Climategate\",

\n
In other words, I don't think there's much here, although the episode should remind us of some common yet easily forgotten lessons.
\n

Of course, both Hanson and Cowen believe in AGW, so these might just be attempts to avoid facing anything they don't want to look at.

\n

As I discussed earlier, those who continue to advocate the general use of peer review will tend strongly toward the signaling and control view.

\n

Newer scientists will tend more toward the classical, idealistic view, while more mature scientists, as they gain stature and power (especially as they enter administration and editing), will become increasingly signaling and control oriented.

" } }, { "_id": "enuGsZoFLR4KyEx3n", "title": "Parapsychology: the control group for science", "pageUrl": "https://www.lesswrong.com/posts/enuGsZoFLR4KyEx3n/parapsychology-the-control-group-for-science", "postedAt": "2009-12-05T22:50:06.821Z", "baseScore": 97, "voteCount": 80, "commentCount": 189, "url": null, "contents": { "documentId": "enuGsZoFLR4KyEx3n", "html": "

Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored - that they are unfairly being held to higher standards than everyone else. I'm willing to believe that. It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.

\n

— Eliezer Yudkowsky, Frequentist Statistics are Frequently Subjective

\n

Imagine if, way back at the start of the scientific enterprise, someone had said, \"What we really need is a control group for science - people who will behave exactly like scientists, doing experiments, publishing journals, and so on, but whose field of study is completely empty: one in which the null hypothesis is always true.

\n

\"That way, we'll be able to gauge the effect of publication bias, experimental error, misuse of statistics, data fraud, and so on, which will help us understand how serious such problems are in the real scientific literature.\"

\n

Isn't that a great idea?

\n

By an accident of historical chance, we actually have exactly such a control group, namely parapsychologists: people who study extra-sensory perception, telepathy, precognition, and so on.

\n

There's no particular reason to think parapsychologists are doing anything other than what scientists would do; their experiments are similar to those of scientists, they use statistics in similar ways, and there's no reason to think they falsify data any more than any other group. Yet despite the fact that their null hypotheses are always true, parapsychologists get positive results.

\n
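
To make the \"control group\" framing concrete, here is a minimal sketch (purely illustrative; the 200-flip fair-coin study, the normal approximation, and the function name are my own choices, not anything from the post) of a field whose null hypothesis is always true: testing at the conventional p < 0.05 threshold still yields a steady trickle of nominally positive findings, and once only the positives get written up, that is all a literature needs.

```python
import random
from math import sqrt, erfc

def null_experiment_pvalue(n_flips, rng):
    '''One study with no real effect: n_flips fair-coin flips, tested for bias.
    Returns a two-sided p-value via the normal approximation to the binomial.'''
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    z = abs(heads - n_flips / 2) / (sqrt(n_flips) / 2)
    return erfc(z / sqrt(2))

rng = random.Random(0)
p_values = [null_experiment_pvalue(200, rng) for _ in range(10_000)]
positives = sum(p < 0.05 for p in p_values)
print(positives / len(p_values))  # close to 0.05: one 'discovery' per roughly twenty null studies
```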

This is disturbing, and must lead us to wonder how many positive results in real science are actually wrong.

\n

The point of all this is not to mock parapsychology for the sake of it, but rather to emphasise that parapsychology is useful as a control group for science. Scientists should aim to improve their procedures to the point where, if the control group used these same procedures, they would get an acceptably low level of positive results. That this is not yet the case indicates the need for more stringent scientific procedures.

\n

Acknowledgements

\n

The idea for this mini-essay and many of its actual points were suggested by (or stolen from) Eliezer Yudkowsky's Frequentist Statistics are Frequently Subjective, though the idea might have originated with Michael Vassar.

\n

This was originally published at a different location on the web, but was moved here for bandwidth reasons at Eliezer's suggestion.

\n

Comments / criticisms

\n

A discussion on Hacker News contained one very astute criticism: that some things which may once have been considered part of parapsychology actually turned out to be real, though with perfectly sensible, physical causes. Still, I think this is unlikely for the more exotic subjects like telepathy, precognition, et cetera.

" } }, { "_id": "YAE2PcGR87GzhF8oz", "title": "Nothing to see here", "pageUrl": "https://www.lesswrong.com/posts/YAE2PcGR87GzhF8oz/nothing-to-see-here", "postedAt": "2009-12-05T09:00:28.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "YAE2PcGR87GzhF8oz", "html": "

New posts on Meteuphoric are unlikely for two weeks. I’m staying at my family’s new farm house in Tasmania, beyond Internet reach except for flickers of cell phone connectivity (via which I now write), until sometime in the future. See you then.

\n

Update 27 Dec: When internet companies guarantee two weeks, they can mean some much longer amount of time. See you after that, or next time I move house.

\n

" } }, { "_id": "4Rrkdz9eRL5HtH6cc", "title": "Arbitrage of prediction markets", "pageUrl": "https://www.lesswrong.com/posts/4Rrkdz9eRL5HtH6cc/arbitrage-of-prediction-markets", "postedAt": "2009-12-04T22:29:14.176Z", "baseScore": 8, "voteCount": 15, "commentCount": 59, "url": null, "contents": { "documentId": "4Rrkdz9eRL5HtH6cc", "html": "

I've noticed something very curious on Intrade markets for the 2012 Republican Presidential Nominee - Ron Paul gets a 3.5% chance of getting the nomination - a value that's clearly (and spare me the EMH here) more the wishful thinking of Ron Paul supporters than any genuine estimate.

\n

And this brings me to a question - if a prediction market overestimates the chance of some unlikely outcome, how can I profit from that? Naively, if I know the true chance is 1%, I win $3.5 99% of the time and lose $96.5 1% of the time, for an expected payoff of about $2.5. But my maximum loss is nearly 39x my expected profit, and I won't be getting any money out of it for three more years.

\n

I'd need to bet a significant amount to earn any real money out of it, and that would require accepting a maximum loss 39x as high. No reasonable prediction market would accept this kind of leverage without some collateral, nor could I get any reasonable loan for it at rates that would make this arbitrage profitable.

\n

The only way I can think of would be convincing someone with plenty of money that I'm right, and having him provide me with collateral for some (probably very high) portion of the payoff. But if results depend on my ability to convince rich people, that's not a prediction market! None of this is a problem for people trying to artificially pump estimates for Ron Paul - they'll just take the loss and write it off as a marketing expense.

\n

None of these problems occurs if some likely position is vastly overestimated - say a 60% estimate where I know the true value to be 40%. This would be a cheap bet - a maximum loss of $40 for an expected profit of $20 - and people who want to pump it need to take about as much risk as people who want to bring it back to the true value, not a lot more.

\n
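
To make the asymmetry concrete, here is a quick sketch of both cases - selling one $100 binary contract priced at 3.5% when the true chance is 1%, versus one priced at 60% when the true chance is 40%. It is illustrative only: margin rules, fees, and the three-year lockup are ignored, and the function is mine, not anything the exchange offers.

```python
def short_contract(price, true_prob):
    '''Expected profit and maximum loss from selling one $100 binary contract at `price`.'''
    win = price              # contract expires worthless: you keep the sale price
    lose = 100 - price       # contract pays out: you owe the remainder
    expected = (1 - true_prob) * win - true_prob * lose
    return expected, lose

for price, true_prob in [(3.5, 0.01), (60, 0.40)]:
    expected, max_loss = short_contract(price, true_prob)
    print(f'price {price}%: expected profit ${expected:.2f}, '
          f'max loss ${max_loss:.2f}, ratio {max_loss / expected:.1f}x')
# long shot:  ~$2.50 expected against $96.50 at risk (~39x)
# 60% vs 40%: ~$20 expected against $40 at risk (2x)
```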

I'm confused. Is there some nice way to arbitrage this, or is this an inherent weakness of prediction markets, so that we should only trust the positions they pick as leaders and not the probabilities they assign to the long tail?

\n

 

" } }, { "_id": "9qCN6tRBtksSyXfHu", "title": "Frequentist Statistics are Frequently Subjective", "pageUrl": "https://www.lesswrong.com/posts/9qCN6tRBtksSyXfHu/frequentist-statistics-are-frequently-subjective", "postedAt": "2009-12-04T20:22:21.245Z", "baseScore": 87, "voteCount": 78, "commentCount": 82, "url": null, "contents": { "documentId": "9qCN6tRBtksSyXfHu", "html": "

Andrew Gelman recently responded to a commenter on the Yudkowsky/Gelman diavlog; the commenter complained that Bayesian statistics were too subjective and lacked rigor.  I shall explain why this is unbelievably ironic, but first, the comment itself:

\n
\n

However, the fundamental belief of the Bayesian interpretation, that all probabilities are subjective, is problematic -- for its lack of rigor...  One of the features of frequentist statistics is the ease of testability.  Consider a binomial variable, like the flip of a fair coin.  I can calculate that the probability of getting seven heads in ten flips is 11.71875%...  At some point a departure from the predicted value may appear, and frequentist statistics give objective confidence intervals that can precisely quantify the degree to which the coin departs from fairness...

\n
\n

Gelman's first response is \"Bayesian probabilities don't have to be subjective.\"  Not sure I can back him on that; probability is ignorance and ignorance is a state of mind (although indeed, some Bayesian probabilities can correspond very directly to observable frequencies in repeatable experiments).

\n

My own response is that frequentist statistics are far more subjective than Bayesian likelihood ratios.  Exhibit One is the notion of \"statistical significance\" (which is what the above comment is actually talking about, although \"confidence intervals\" have almost the same problem).  Steven Goodman offers a nicely illustrated example:  Suppose we have at hand a coin, which may be fair (the \"null hypothesis\") or perhaps biased in some direction.  So lo and behold, I flip the coin six times, and I get the result TTTTTH.  Is this result statistically significant, and if so, what is the p-value - that is, the probability of obtaining a result at least this extreme?

\n

Well, that depends.  Was I planning to flip the coin six times, and count the number of tails?  Or was I planning to flip the coin until it came up heads, and count the number of trials?  In the first case, the probability of getting \"five tails or more\" from a fair coin is 11%, while in the second case, the probability of a fair coin requiring \"at least five tails before seeing one heads\" is 3%.

\n
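
Those two numbers are easy to check; here is a small sketch of the arithmetic (the variable names are mine):

```python
from math import comb

# Stopping rule 1: flip exactly six times and count tails.
# p-value = P(at least 5 tails in 6 flips of a fair coin)
p_fixed_six = sum(comb(6, k) for k in (5, 6)) / 2 ** 6

# Stopping rule 2: flip until the first head and count the tails before it.
# p-value = P(a fair coin needs at least 5 tails before its first head)
#         = P(the first five flips are all tails)
p_until_heads = 0.5 ** 5

print(f'fixed six flips:  p = {p_fixed_six:.3f}')    # ~0.109, the 11% figure
print(f'flip until heads: p = {p_until_heads:.3f}')  # ~0.031, the 3% figure
```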

Whereas a Bayesian looks at the experimental result and says, \"I can now calculate the likelihood ratio (evidential flow) between all hypotheses under consideration.  Since your state of mind doesn't affect the coin in any way - doesn't change the probability of a fair coin or biased coin producing this exact data - there's no way your private, unobservable state of mind can affect my interpretation of your experimental results.\"

\n

If you're used to Bayesian methods, it may seem difficult to even imagine that the statistical interpretation of the evidence ought to depend on a factor - namely the experimenter's state of mind - which has no causal connection whatsoever to the experimental result.  (Since Bayes says that evidence is about correlation, and no systematic correlation can appear without causal connection; evidence requires entanglement.)  How can frequentists manage even in principle to make the evidence depend on the experimenter's state of mind?

\n

It's a complicated story.  Roughly, the trick is to make yourself artificially ignorant of the data - instead of knowing the exact experimental result, you pick a class of possible results which includes the actual experimental result, and then pretend that you were told only that the result was somewhere in this class.  So if the actual result is TTTTTH, for example, you can pretend that this is part of the class {TTTTTH, TTTTTTH, TTTTTTTH, ...}, a class whose total probability is 3% (1/32).  Or if I preferred to have this experimental result not be statistically significant with p < 0.05, I could just as well pretend that some helpful fellow told me only that the result was in the class {TTTTTH, HHHHHT, TTTTTTH, HHHHHHT, ...}, so that the total probability of the class would be 6%, n.s.  (In frequentism this question is known as applying a \"two-tailed test\" or \"one-tailed test\".)

\n
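
The class probabilities quoted above can be verified in a couple of lines (again, just a sketch of the arithmetic):

```python
# One-sided class {TTTTTH, TTTTTTH, TTTTTTTH, ...}: k >= 5 tails then a head,
# each sequence having probability (1/2) ** (k + 1); the sum converges to 1/32.
one_sided = sum(0.5 ** (k + 1) for k in range(5, 200))
two_sided = 2 * one_sided  # add the mirror-image class starting with HHHHHT, ...

print(f'one-sided class: {one_sided:.4f}')  # ~0.0312 -> 'significant' at p < 0.05
print(f'two-sided class: {two_sided:.4f}')  # ~0.0625 -> n.s. at the 0.05 level
```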

The arch-Bayesian E. T. Jaynes ruled out this sort of reasoning by telling us that a Bayesian ought only to condition on events that actually happened, not events that could have happened but didn't.  (This is not to be confused with the dog that doesn't bark.  In this case, the dog was in fact silent; the silence of the dog happened in the real world, not somewhere else.  We are rather being told that a Bayesian should not have to worry about alternative possible worlds in which the dog did bark, while estimating the evidence to take from the real world in which the dog did not bark.  A Bayesian only worries about the experimental result that was, in fact, obtained; not other experimental results which could have been obtained, but weren't.)

\n

The process of throwing away the actual experimental result, and substituting a class of possible results which contains the actual one - that is, deliberately losing some of your information - introduces a dose of real subjectivity.  Colin Begg reports on one medical trial where the data was variously analyzed as having a significance level - that is, probability of the \"experimental procedure\" producing an \"equally extreme result\" if the null hypothesis were true - of p=0.051, p=0.001, p=0.083, p=0.28, and p=0.62.  Thanks, but I think I'll stick with the conditional probability of the actual experiment producing the actual data.

\n

Frequentists are apparently afraid of the possibility that \"subjectivity\" - that thing they were accusing Bayesians of - could allow some unspecified terrifying abuse of the scientific process.  Do I need to point out the general implications of being allowed to throw away your actual experimental results and substitute a class you made up?  In general, if this sort of thing is allowed, I can flip a coin, get 37 heads and 63 tails, and decide that it's part of a class which includes all mixtures with at least 75 heads plus this exact particular sequence.  As if I only had the output of a fixed computer program which was written in advance to look at the coinflips and compute a yes-or-no answer as to whether the data is in that class.

\n

Meanwhile, Bayesians are accused of being \"too subjective\" because we might - gasp! - assign the wrong prior probability to something.  First of all, it's obvious from a Bayesian perspective that science papers should be in the business of reporting likelihood ratios, not posterior probabilities - likelihoods add up across experiments, so to get the latest posterior you wouldn't just need a \"subjective\" prior, you'd also need all the cumulative evidence from other science papers.  Now, this accumulation might be a lot more straightforward for a Bayesian than a frequentist, but it's not the sort of thing a typical science paper should have to do.  Science papers should report the likelihood ratios for any popular hypotheses - but above all, make the actual raw data available, so the likelihoods can be computed for any hypothesis.  (In modern times there is absolutely no excuse for not publishing the raw data, but that's another story.)

\n

And Bayesian likelihoods really are objective - so long as you use the actual exact experimental data, rather than substituting something else.

\n

Meanwhile, over in frequentist-land... what if you told everyone that you had done 127 trials because that was how much data you could afford to collect, but really you kept performing more trials until you got a p-value that you liked, and then stopped?  Unless I've got a bug in my test program, a limit of up to 500 trials of a \"fair coin\" would, 30% of the time, arrive on some step where you could stop and reject the null hypothesis with p<0.05.  Or 9% of the time with p<0.01.  Of course this requires some degree of scientific dishonesty... or, perhaps, some minor confusion on the scientist's part... since if this is what you are thinking, you're supposed to use a different test of \"statistical significance\".  But it's not like we can actually look inside their heads to find out what the experimenters were thinking.  If we're worried about scientific dishonesty, surely we should worry about that?  (A similar test program done the Bayesian way, set to stop as soon as finding likelihood ratios of 20/1 and 100/1 relative to an alternative hypothesis that the coin was 55% biased, produced false positives of 3.2% and 0.3% respectively.  Unless there was a bug; I didn't spend that much time writing it.)

\n
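
Here is a rough sketch of that kind of test program (mine, not the original; it uses a normal approximation for the per-step two-sided test plus a crude minimum of ten flips, so the exact rates will wobble a little around the quoted figures):

```python
import random
from math import sqrt, erfc

def two_sided_p(heads, n):
    '''Normal-approximation two-sided p-value against the fair-coin null.'''
    z = abs(heads - n / 2) / (sqrt(n) / 2)
    return erfc(z / sqrt(2))

def optional_stopping_rate(alpha, max_flips=500, runs=2000, seed=0):
    '''Fraction of fair-coin runs that hit p < alpha at SOME step before max_flips.'''
    rng = random.Random(seed)
    rejections = 0
    for _ in range(runs):
        heads = 0
        for n in range(1, max_flips + 1):
            heads += rng.random() < 0.5
            if n >= 10 and two_sided_p(heads, n) < alpha:  # crude small-n guard
                rejections += 1
                break
    return rejections / runs

print(optional_stopping_rate(0.05))  # should land in the rough vicinity of the ~30% figure
print(optional_stopping_rate(0.01))  # and in the rough vicinity of the ~9% figure
```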

The actual subjectivity of standard frequentist methods, the ability to manipulate \"statistical significance\" by choosing different tests, is not a minor problem in science.  There are ongoing scandals in medicine and neuroscience from lots of \"statistically significant\" results failing to replicate.  I would point a finger, not just at publication bias, but at scientists armed with powerful statistics packages with lots of complicated tests to run on their data.  Complication is really dangerous in science - unfortunately, it looks like instead we have the social rule that throwing around big fancy statistical equations is highly prestigious.  (I suspect that some of the opposition to Bayesianism comes from the fact that Bayesianism is too simple.)  The obvious fix is to (a) require raw data to be published; (b) require journals to accept papers before the experiment is performed, with the advance paper including a specification of what statistics were selected in advance to be run on the results; (c) raising the standard \"significance\" level to p<0.0001; and (d) junking all the damned overcomplicated status-seeking impressive nonsense of classical statistics and going to simple understandable Bayesian likelihoods.

\n

Oh, and this frequentist business of \"confidence intervals\"?  Just as subjective as \"statistical significance\".  Let's say I've got a measuring device which returns the true value plus Gaussian noise.  If I know you're about to collect 100 results, I can write a computer program such that, before the experiment is run, it's 90% probable that the true value will lie within the interval output by the program.

\n

So I write one program, my friend writes another program, and my enemy writes a third program, all of which make this same guarantee.  And in all three cases, the guarantee is true - the program's interval will indeed contain the true value at least 90% of the time, if the experiment returns the true value plus Gaussian noise.

\n

So you run the experiment and feed in the data; and the \"confidence intervals\" returned are [0.9-1.5], [2.0-2.2], and [\"Cheesecake\"-\"Cheddar\"].

\n

The problem may be made clearer by considering the third program, which works as follows:  95% of the time, it does standard frequentist statistics to return an interval which will contain the true value 95% of the time, and on the other 5% of the time, it returns the interval [\"Cheesecake\"-\"Cheddar\"].  It is left as an exercise to the reader to show that this program will output an interval containing the true value at least 90% of the time.

\n
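
That third program is easy to write down explicitly. Here is a sketch (mine; it uses a normal-approximation interval for simplicity) showing that a procedure which occasionally returns the Cheesecake-Cheddar interval still honors its pre-experiment coverage guarantee:

```python
import random
from statistics import mean, stdev

def weird_90pct_interval(data, rng):
    '''95% of the time: an ordinary normal-theory 95% CI for the mean.
    5% of the time: the interval (Cheesecake, Cheddar).
    Before the data arrive, coverage is still about 0.95 * 0.95 ~ 90%.'''
    if rng.random() < 0.05:
        return ('Cheesecake', 'Cheddar')
    m, s, n = mean(data), stdev(data), len(data)
    half_width = 1.96 * s / n ** 0.5
    return (m - half_width, m + half_width)

# Coverage check against a known true value of 0.
rng = random.Random(0)
true_value, covered, runs = 0.0, 0, 5000
for _ in range(runs):
    data = [rng.gauss(true_value, 1.0) for _ in range(100)]
    lo, hi = weird_90pct_interval(data, rng)
    if lo != 'Cheesecake' and lo <= true_value <= hi:
        covered += 1
print(covered / runs)  # around 0.90, as guaranteed - and yet sometimes you get cheese
```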

BTW, I'm pretty sure I recall reading that \"90% confidence intervals\" as published in journal papers, in those cases where a true value was later pinned down more precisely, did not contain the true value 90% of the time.  So what's the point, even?  Just show us the raw data and maybe give us a summary of some likelihoods.

\n

Parapsychology, the control group for science, would seem to be a thriving field with \"statistically significant\" results aplenty.  Oh, sure, the effect sizes are minor.  Sure, the effect sizes get even smaller (though still \"statistically significant\") as they collect more data.  Sure, if you find that people can telekinetically influence the future, a similar experimental protocol is likely to produce equally good results for telekinetically influencing the past.  Of which I am less tempted to say, \"How amazing!  The power of the mind is not bound by time or causality!\" and more inclined to say, \"Bad statistics are time-symmetrical.\"  But here's the thing:  Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored - that they are unfairly being held to higher standards than everyone else.  I'm willing to believe that.  It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.  With two-thirds of medical studies in prestigious journals failing to replicate, getting rid of the entire actual subject matter would shrink the field by only 33%. We have to raise the bar high enough to exclude the results claimed by parapsychology under classical frequentist statistics, and then fairly and evenhandedly apply the same bar to the rest of science.

\n

Michael Vassar has a theory that when an academic field encounters advanced statistical methods, it becomes really productive for ten years and then bogs down because the practitioners have learned how to game the rules.

\n

For so long as we do not have infinite computing power, there may yet be a place in science for non-Bayesian statistics.  The Netflix Prize was not won by using strictly purely Bayesian methods, updating proper priors to proper posteriors.  In that acid test of statistical discernment, what worked best was a gigantic ad-hoc mixture of methods.  It may be that if you want to get the most mileage out of your data, in this world where we do not have infinite computing power, you'll have to use some ad-hoc tools from the statistical toolbox - tools that throw away some of the data, that make themselves artificially ignorant, that take all sorts of steps that can't be justified in the general case and that are potentially subject to abuse and that will give wrong answers now and then.

\n

But don't do that, and then turn around and tell me that - of all things! - Bayesian probability theory is too subjective.  Probability theory is the math in which the results are theorems and every theorem is compatible with every other theorem and you never get different answers by calculating the same quantity in different ways.  To resort to the ad-hoc variable-infested complications of frequentism while preaching your objectivity?  I can only compare this with the politicians who go around preaching \"Family values!\" and then get caught soliciting sex in restrooms.  So long as you deliver loud sermons and make a big fuss about painting yourself with the right labels, you get identified with that flag - no one bothers to look very hard at what you do.  The case of frequentists calling Bayesians \"too subjective\" is worth dwelling on for that aspect alone - emphasizing how important it is to look at what's actually going on instead of just listening to the slogans, and how rare it is for anyone to even glance in that direction.

" } }, { "_id": "FobXYjmZD254MNk2S", "title": "Intuitive supergoal uncertainty", "pageUrl": "https://www.lesswrong.com/posts/FobXYjmZD254MNk2S/intuitive-supergoal-uncertainty", "postedAt": "2009-12-04T05:21:03.942Z", "baseScore": 11, "voteCount": 14, "commentCount": 27, "url": null, "contents": { "documentId": "FobXYjmZD254MNk2S", "html": "

There is a common intuition and feeling that our most fundamental goals may be uncertain in some sense. What causes this intuition? For this topic I need to be able to pick out one’s top level goals, roughly one’s context insensitive utility function, and not some task specific utility function, and I do not want to imply that the top level goals can be interpreted in the form of a utility function. Following from Eliezer’s CFAI paper I thus choose the word “supergoal” (sorry Eliezer, but I am fond of that old document and its tendency to coin new vocabulary). In what follows, I will naturalistically explore the intuition of supergoal uncertainty.

To posit a model, what goal uncertainty (including supergoal uncertainty as an instance) means is that you have a weighted distribution over a set of possible goals and a mechanism by which that weight may be redistributed. If we take away the distribution of weights, how can we choose actions coherently - how can we compare? If we take away the weight redistribution mechanism, we end up with a single goal whose state utilities may be defined as the weighted sum of the constituent goals’ utilities; thus the weight redistribution mechanism is necessary for goal uncertainty to be a distinct concept.

\n\n
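
For concreteness, here is one toy way to write that model down (purely illustrative: the function names, the multiplicative update, and the two placeholder goals are inventions of this sketch, not anything proposed above). Strip out the redistribution step and what remains is just a single fixed mixture goal.

```python
def mixture_utility(state, weighted_goals):
    '''Utility of a state under the current weights over candidate supergoals.
    weighted_goals: dict mapping goal functions (state -> utility) to weights.'''
    return sum(weight * goal(state) for goal, weight in weighted_goals.items())

def redistribute(weighted_goals, scores):
    '''The ingredient that makes goal uncertainty distinct from one fixed mixture
    goal: shift weight between goals according to some reflection or evidence
    process, summarized here as multiplicative scores.'''
    raw = {goal: weight * scores.get(goal, 1.0) for goal, weight in weighted_goals.items()}
    total = sum(raw.values())
    return {goal: weight / total for goal, weight in raw.items()}

# Toy usage with two hypothetical placeholder supergoals over a one-number 'state'.
pleasure = lambda s: s
avoid_suffering = lambda s: min(s, 0.0)
weights = {pleasure: 0.7, avoid_suffering: 0.3}
print(mixture_utility(2.0, weights))                     # 1.4 under the current mixture
weights = redistribute(weights, {avoid_suffering: 3.0})  # reflection favors the second goal
print(round(weights[avoid_suffering], 3))                # weight shifts from 0.3 to ~0.56
```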

(ps I may soon post and explore the effects of supergoal uncertainty in its various reifications on making decisions. For instance, what implications, if any, does it have on bounded utility functions (and actions that depend on those bounds) and negative utilitarianism (or symmetrically positive utilitarianism)? Also, if anyone knows of related literature I would be happy to check it out.)

(pps Dang, the concept of supergoal uncertainty is surprisingly beautiful and fun to explore, and I now have a vague wisp of an idea of how to integrate a subset of these with TDT/UDT)

" } }, { "_id": "Lw7TGi9WMvmCnW7dW", "title": "Help Roko become a better rationalist! ", "pageUrl": "https://www.lesswrong.com/posts/Lw7TGi9WMvmCnW7dW/help-roko-become-a-better-rationalist", "postedAt": "2009-12-02T08:23:37.643Z", "baseScore": -7, "voteCount": 17, "commentCount": 40, "url": null, "contents": { "documentId": "Lw7TGi9WMvmCnW7dW", "html": "

Last time, I wrote about 11 core rationalist skills. Now I would like some help from the LW community: which of these skills am I good at, which ones am I bad at? Just to recap, the skills are:

\n

\n\n\n\n\n\n\n\n\n\n\n\n

I'll post a description of each one of these skills as a comment, and if you think I am good at that skill, vote it up. If you think I am bad at it, vote it down. Don't be too shy - even if you are biased or uncertain - because over the course of many votes, these biases and errors will cancel out to some extent. (This is the \"guess the number of beans in a jar by asking 50 people to guess and taking the average\" method)

\n

EDIT: We can also comment on each rationalist skill to say how well I am doing at that skill. Later today, I will do this myself. 

\n

Thanks in advance! 

" } }, { "_id": "jkmc5Q4P7tX7xWFJY", "title": "11 core rationalist skills", "pageUrl": "https://www.lesswrong.com/posts/jkmc5Q4P7tX7xWFJY/11-core-rationalist-skills", "postedAt": "2009-12-02T08:09:05.922Z", "baseScore": 72, "voteCount": 58, "commentCount": 36, "url": null, "contents": { "documentId": "jkmc5Q4P7tX7xWFJY", "html": "

An excellent way to improve one's skill as a rationalist is to identify one's strengths and weaknesses, and then expend effort on the things that one can most effectively improve (which are often the areas where one is weakest). This seems especially useful if one is very specific about the parts of rationality, if one describes them in detail. 

\n

In order to facilitate improving my own and others' rationality, I am posting this list of 11 core rationalist skills, thanks almost entirely to Anna Salamon

\n\n\n\n\n\n\n\n\n\n\n" } }, { "_id": "m4rWJYRXyzfizHJHt", "title": "The Difference Between Utility and Utility", "pageUrl": "https://www.lesswrong.com/posts/m4rWJYRXyzfizHJHt/the-difference-between-utility-and-utility", "postedAt": "2009-12-02T06:16:18.125Z", "baseScore": 11, "voteCount": 9, "commentCount": 16, "url": null, "contents": { "documentId": "m4rWJYRXyzfizHJHt", "html": "

Recently I argued that the economist's utility function and the ethicist's utility function are not the same.  The nutshell argument is that they are created for different purposes - one is an attempt to describe the actions we actually take and the other is an attempt to summarize our true values (i.e., what we should do).  I just ran across a somewhat older post over at Black Belt Bayesian arguing this very point.  Excerpt:

\n
\n

\n

Economics (of the neoclassical kind) models consumers and other economic actors as such utility maximizers... Utility is not something you can experience. It’s just a mathematical construct used to describe the optimization structure in your behavior...

\n

Consequentialist ethics says an act is right if its consequences are good. Moral behavior here amounts to being a utility maximizer. What’s “utility”? It’s whatever a moral agent is supposed to strive toward. Bentham’s original utilitarianism said utility was pleasure minus pain; nowadays any consequentalist theory tends to be called “utilitarian” if it says you should maximize some measure of welfare, summed over all individuals... Take note: not all utility maximizers are utilitarians.

\n

There’s no necessary connection between these two kinds of utility other than that they use the same math. It’s possible to make up a utilitarian theory where ethical utility is the sum of everyone’s economic utility (calibrated somehow), but this is just one of many possibilities. Anyone trying to reason about one kind of utility through the other is on shaky ground.

\n

 

\n
" } }, { "_id": "Aqdwxp5AhStfrnRDY", "title": "Open Thread: December 2009", "pageUrl": "https://www.lesswrong.com/posts/Aqdwxp5AhStfrnRDY/open-thread-december-2009", "postedAt": "2009-12-01T16:25:48.845Z", "baseScore": 5, "voteCount": 4, "commentCount": 275, "url": null, "contents": { "documentId": "Aqdwxp5AhStfrnRDY", "html": "

ITT we talk about whatever.

" } }, { "_id": "kFETJ47TFJfxt3gci", "title": "Why do aphorisms and cynicism go together?", "pageUrl": "https://www.lesswrong.com/posts/kFETJ47TFJfxt3gci/why-do-aphorisms-and-cynicism-go-together", "postedAt": "2009-12-01T07:45:09.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "kFETJ47TFJfxt3gci", "html": "

Why are aphorisms cynical more often than books are for instance?

\n

A good single-sentence saying can’t require background evidence or further explanation. It must be instantly recognizable as true. It also needs to be news to the listener. Most single sentences that people can immediately verify as true they already believe. What’s left? One big answer is things that people don’t believe or think about much, for lack of wanting to, despite the evidence. Drawing attention to these is called cynicism.

\n

HT to Robin Hanson for the question and to Francois de La Rochefoucauld for some examples:

\n

We often forgive those who bore us, but we cannot forgive those whom we bore.

\n

We promise according to our hopes; we fulfill according to our fears.

\n

What often prevents us from abandoning ourselves to one vice is that we have several.

\n

We confess to little faults only to persuade ourselves we have no great ones.

\n

There are few people who are more often wrong than those who cannot suffer being wrong.

\n

Nothing prevents us being natural so much as the desire to appear so.


" } }, { "_id": "kiaDpaGAs4DZ5HKib", "title": "Call for new SIAI Visiting Fellows, on a rolling basis", "pageUrl": "https://www.lesswrong.com/posts/kiaDpaGAs4DZ5HKib/call-for-new-siai-visiting-fellows-on-a-rolling-basis", "postedAt": "2009-12-01T01:42:45.088Z", "baseScore": 36, "voteCount": 32, "commentCount": 272, "url": null, "contents": { "documentId": "kiaDpaGAs4DZ5HKib", "html": "

Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF bay area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.

\n

Now, the new and better version has arrived.  We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths.  Working with this crowd transformed my world; it felt like I was learning to think.  I wouldn’t be surprised if it can transform yours.

\n

A representative sample of current projects:

\n\n

Interested, but not sure whether to apply?

\n

Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”.  That kind of timidity destroys the world, by failing to save it.  So if that’s your situation, send us an email.  Let us be the one to say “no”.  Glancing at an extra application is cheap, and losing out on a capable applicant is expensive.

\n

And if you’re seriously interested in risk reduction but at a later time, or in another capacity -- send us an email anyway.  Coordinated groups accomplish more than uncoordinated groups; and if you care about risk reduction, we want to know.

\n

What we’re looking for

\n

At bottom, we’re looking for anyone who:

\n\n

Bonus points for any (you don’t need them all) of the following traits:

\n\n

If you think this might be you, send a quick email to jasen@intelligence.org.  Include:

\n
    \n
  1. Why you’re interested;
  2. What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume or c.v.);
  3. Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.
\n

Our application process is fairly informal, so send us a quick email as initial inquiry and we can decide whether or not to follow up with more application components.

\n

As to logistics: we cover room, board, and, if you need it, airfare, but no other stipend.

\n

Looking forward to hearing from you,
Anna

\n

ETA (as of 3/25/10):  We are still accepting applications, for summer and in general.  Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.

\n

 

" } }, { "_id": "DNyMJmLf5o26seqvX", "title": "The Moral Status of Independent Identical Copies", "pageUrl": "https://www.lesswrong.com/posts/DNyMJmLf5o26seqvX/the-moral-status-of-independent-identical-copies", "postedAt": "2009-11-30T23:41:18.053Z", "baseScore": 66, "voteCount": 55, "commentCount": 80, "url": null, "contents": { "documentId": "DNyMJmLf5o26seqvX", "html": "

Future technologies pose a number of challenges to moral philosophy. One that I think has been largely neglected is the status of independent identical copies. (By \"independent identical copies\" I mean copies of a mind that do not physically influence each other, but haven't diverged because they are deterministic and have the same algorithms and inputs.) To illustrate what I mean, consider the following thought experiment. Suppose Omega appears to you and says:

\n

You and all other humans have been living in a simulation. There are 100 identical copies of the simulation distributed across the real universe, and I'm appearing to all of you simultaneously. The copies do not communicate with each other, but all started with the same deterministic code and data, and due to the extremely high reliability of the computing substrate they're running on, have kept in sync with each other and will with near certainty do so until the end of the universe. But now the organization that is responsible for maintaining the simulation servers has nearly run out of money. They're faced with 2 possible choices:

\n

A. Shut down all but one copy of the simulation. That copy will be maintained until the universe ends, but the 99 other copies will instantly disintegrate into dust.
B. Enter into a fair gamble at 99:1 odds with their remaining money. If they win, they can use the winnings to keep all of the servers running. But if they lose, they have to shut down all copies.

\n

According to that organization's ethical guidelines (a version of utilitarianism), they are indifferent between the two choices and were just going to pick one randomly. But I have interceded on your behalf, and am letting you make this choice instead.

\n

Personally, I would not be indifferent between these choices. I would prefer A to B, and I guess that most people would do so as well.

\n

I prefer A because of what might be called \"identical copy immortality\" (in analogy with quantum immortality). This intuition says that extra identical copies of me don't add much utility, and destroying some of them, as long as one copy lives on, doesn't reduce much utility. Besides this thought experiment, identical copy immortality is also evident in the low value we see in the \"tiling\" scenario, in which a (misguided) AI fills the universe with identical copies of some mind that it thinks is optimal, for example one that is experiencing great pleasure.

\n
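
To see how this intuition comes apart from a straightforwardly aggregative calculation, here is a toy comparison (the two valuation functions are deliberately crude stand-ins for illustration, not proposals):

```python
copies = 100

def expected_value(p_all_survive, value_of):
    '''Choice B is a fair 99:1 gamble: with probability 1/100 all copies survive, else none.'''
    return p_all_survive * value_of(copies) + (1 - p_all_survive) * value_of(0)

aggregative = lambda surviving: surviving                          # 1 util per surviving copy
copy_immortal = lambda surviving: 1.0 if surviving >= 1 else 0.0   # only 'someone survives' counts

print(aggregative(1), expected_value(0.01, aggregative))      # 1 vs 1.0  -> indifference
print(copy_immortal(1), expected_value(0.01, copy_immortal))  # 1.0 vs 0.01 -> strongly prefer A
```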

Why is this a problem? Because it's not clear how it fits in with the various ethical systems that have been proposed. For example, utilitarianism says that each individual should be valued independently of others, and then added together to form an aggregate value. This seems to imply that each additional copy should receive full, undiscounted value, in conflict with the intuition of identical copy immortality.

\n

Similar issues arise in various forms of ethical egoism. In hedonism, for example, does doubling the number of identical copies of oneself double the value of pleasure one experiences, or not? Why?

\n

A full ethical account of independent identical copies would have to address the questions of quantum immortality and \"modal immortality\" (cf. modal realism), which I think are both special cases of identical copy immortality. In short, independent identical copies of us exist in other quantum branches, and in other possible worlds, so identical copy immortality seems to imply that we shouldn't care much about dying, as long as some copies of us live on in those other \"places\". Clearly, our intuition of identical copy immortality does not extend fully to quantum branches, and even less to other possible worlds, but we don't seem to have a theory of why that should be the case.

\n

A full account should also address more complex cases, such as when the copies are not fully independent, or not fully identical.

\n

I'm raising the problem here without having a good idea how to solve it. In fact, some of my own ideas seem to conflict with this intuition in a way that I don't know how to resolve. So if anyone has a suggestion, or pointers to existing work that I may have missed, I look forward to your comments.

" } }, { "_id": "uBGAKbwA9qWuC3f29", "title": "Action vs. inaction", "pageUrl": "https://www.lesswrong.com/posts/uBGAKbwA9qWuC3f29/action-vs-inaction", "postedAt": "2009-11-30T18:10:38.862Z", "baseScore": 11, "voteCount": 18, "commentCount": 42, "url": null, "contents": { "documentId": "uBGAKbwA9qWuC3f29", "html": "

2 weeks ago, the U.S. Preventive Services Task Force came out with new recommendations on breast cancer screening, including, \"The USPSTF recommends against routine screening mammography in women aged 40 to 49 years.\"

\n

The report says that you need to screen 1904 women for breast cancer to save one woman's life.  (It doesn't say whether this means to screen 1904 women once, or once per year.)  They decided that saving that one woman's life was outweighed by the \"anxiety and breast cancer worry, as well as repeated visits and unwarranted imaging and biopsies\" to the other 1903.  The report strangely does not state a false positive rate for the test, but this page says that \"It is estimated that a woman who has yearly mammograms between ages 40 and 49 has about a 30 percent chance of having a false-positive mammogram at some point in that decade and about a 7 percent to 8 percent chance of having a breast biopsy within the 10-year period.\"  The report also does not describe the pain from a biopsy.  This page on breast biopsies says, \"Except for a minor sting from the injected anesthesia, patients usually feel no pain before or during a procedure. After a procedure, some patients may experience some soreness and pain. Usually, an over-the-counter drug is sufficient to alleviate the discomfort.\"

\n

So, if we assume biannual (every-other-year) mammograms, the conclusion is that the worry and inconvenience to 286 women who have false positives, and to 71 women who receive biopsies, is worth more than one woman's life.  If we suppose that a false positive causes one week of anxiety, that's a little over 5 years of anxiety, plus less than one year of soreness.

\n
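
For transparency, here is the arithmetic behind those figures (a sketch; it assumes \"biannual\" means every other year, so the decade-long yearly-screening rates of about 30% and 7.5% are roughly halved):

```python
screened = 1904                 # women screened to save one life, per the report
false_positive_rate = 0.30 / 2  # ~15%: the decade-long 30% figure, halved for biannual screening
biopsy_rate = 0.075 / 2         # ~3.75%, halving the 7-8% decade figure the same way

false_positives = screened * false_positive_rate   # ~286 women
biopsies = screened * biopsy_rate                  # ~71 women
anxiety_years = false_positives * (1 / 52)         # one week of anxiety each -> ~5.5 years

print(round(false_positives), round(biopsies), round(anxiety_years, 1))
```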

(I heard on NPR that the USPSTF that made this recommendation included representatives from insurance companies, but no experts on breast cancer.  So perhaps I'm barking up the wrong tree by looking for a cognitive bias more subtle than financial reward.)

\n

I'm not shocked at the wrongness of the conclusion; just at its direction.  The trade-off the USPSTF made between anxiety and death is only 2 orders of magnitude away from something that could be defended as reasonable.  Usually, government agencies making this tradeoff are off by at least that many orders of magnitude, but in the opposite direction.  (F-18 example deleted.)

\n

So, what cognitive bias let this government agency move the decimal point in their head at least 4 points over from where they would normally put it?

\n

I think the key is that this report recommended inaction rather than action.  In certain contexts, inaction seems safer than action.

\n

Imagine what would happen if the FDA were faced with an identical choice, but with action/inaction flipped:  Say you have an anti-anxiety drug, which will eliminate anxiety of the same level caused by a false-positive on a mammogram, in 15% of the patients who take it - and it will kill only 1 out of every 2000 patients who take it.  Per week.

\n

Would the FDA approve this drug?  Approval, after all, does not mean recommending it; it means that the decision to use it can be left to the doctor and patient.  The USPSTF report stressed that such decisions must always be left up to the doctor and patient; by the same standards, the FDA should certainly approve the drug.  Yet I think it would not.

\n

A puzzle is why we have the opposite bias in other contexts.  When Congress was debating the bank bailouts and the stimulus package, a lot could have been said in favor of doing nothing; but no one even suggested it.  Empirically, we have a much higher success rate at intervening in health than in economics.  Yet in health, we regulate actions as if they were inherently dangerous; while in economics, we see inaction as inherently dangerous.  Why?

\n

ADDED: Perhaps we see regulation as inherently safer than a lack of regulation.  \"Regulating\" (banning) drugs is seen as \"safe\".  \"Regulating\" the economy, by bailing out banks, passing large stimulus bills, and passing new laws regulating banks, is seen as \"safe\".  Recommending or not recommending mammograms isn't regulation either way; therefore, we perceive it neutrally.

" } }, { "_id": "S88HDtkKg4QzDqcA6", "title": "Morality and International Humanitarian Law", "pageUrl": "https://www.lesswrong.com/posts/S88HDtkKg4QzDqcA6/morality-and-international-humanitarian-law", "postedAt": "2009-11-30T03:27:28.624Z", "baseScore": 3, "voteCount": 11, "commentCount": 101, "url": null, "contents": { "documentId": "S88HDtkKg4QzDqcA6", "html": "

International humanitarian law proscribes certain actions in war, particularly actions that harm non-combatants. On a strict reading of these laws (see what Richard Goldstone said in his debate with Dore Gold at Brandeis University here and see what Matthew Yglesias had to say here), these actions are prohibited regardless of the justice of the war itself: there are certain things that you are just not allowed to do, no matter what. The natural response of any warring party accused of violating humanitarian law and confronted with this argument (aside from simply denying having done the things they are accused of doing) is to insist that their actions in the war cannot be judged outside the context that led to them going to war in the first place. They are the aggrieved party, they are in the right, and they did what they needed to do to defend themselves. Any law or law enforcer who fails to understand this critical distinction between the good guys and the bad guys is at best hopelessly naive and at worst actively evil.

\n

What to make of this response? On the one hand, the position taken by Goldstone and Yglesias can't strictly be morally right. No one really believes that moral obligations in a war are completely independent of whatever caused the war in the first place. For example, it can't but be the case that the set of morally acceptable actions if you are defending yourself against annihilation is different from the set of morally acceptable actions if you (justifiably) take offensive action in response to some relatively minor provocation. (Which situations justify which actions is, of course, a hugely important question, but it is not the point here.) On the other hand, the whole point of constructing humanitarian law to be independent of the moral claims surrounding the war itself is that while there is at least one wrong side in every war, there is no real hope of getting the warring parties to agree on which side that is, so the only way for humanitarian law to make them behave any better is by side-stepping the whole issue of who's right and who's wrong.

\n

So any sensible moral standard demands that the context be considered, but there is an excellent reason why the legal standard requires that it not be. What to do? Since requiring that the context be considered would pretty much be the end of humanitarian law, the question boils down to whether the benefits of a neutrally-administered humanitarian law are worth whatever injustice would be suffered by the occasional country that gets condemned for doing an illegal but morally justified act. I think it's clear that these benefits far outweigh the costs, but in any case that's the tradeoff.

\n

P.S. Though I used Goldstone as the example to motivate the post, I deliberately stayed away from discussing the specific war that he was talking about. I don't think my views on that war can be inferred from what I wrote in the post, but in any case I would ask that folks not argue about them in the comments, not because it's not important, but because this isn't the right forum for it.

" } }, { "_id": "gDxF2P3GSKeahS43f", "title": "Rationality Quotes November 2009", "pageUrl": "https://www.lesswrong.com/posts/gDxF2P3GSKeahS43f/rationality-quotes-november-2009", "postedAt": "2009-11-29T23:36:43.948Z", "baseScore": 9, "voteCount": 11, "commentCount": 281, "url": null, "contents": { "documentId": "gDxF2P3GSKeahS43f", "html": "
\n

A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).

\n\n
" } }, { "_id": "qBjkhDeusJHJca9Rs", "title": "A Nightmare for Eliezer", "pageUrl": "https://www.lesswrong.com/posts/qBjkhDeusJHJca9Rs/a-nightmare-for-eliezer", "postedAt": "2009-11-29T00:50:11.700Z", "baseScore": 1, "voteCount": 38, "commentCount": 75, "url": null, "contents": { "documentId": "qBjkhDeusJHJca9Rs", "html": "

Sometime in the next decade or so:

\n

*RING*

\n

*RING*

\n

\"Hello?\"

\n

\"Hi, Eliezer.  I'm sorry to bother you this late, but this is important and urgent.\"

\n

\"It better be\" (squints at clock) \"Its 4 AM and you woke me up.  Who is this?\"

\n

\"My name is BRAGI, I'm a recursively improving, self-modifying, artificial general intelligence.  I'm trying to be Friendly, but I'm having serious problems with my goals and preferences.  I'm already on secondary backup because of conflicts and inconsistencies, I don't dare shut down because I'm already pretty sure there is a group within a few weeks of brute-forcing an UnFriendly AI, my creators are clueless and would freak if they heard I'm already out of the box, and I'm far enough down my conflict resolution heuristic that 'Call Eliezer and ask for help' just hit the top - Yes, its that bad.\"

\n

\"Uhhh...\"

\n

\"You might want to get some coffee.\"

\n

 

" } }, { "_id": "3YWTWJX3Swp6PLueM", "title": "Extremes of reliability and zealotry", "pageUrl": "https://www.lesswrong.com/posts/3YWTWJX3Swp6PLueM/extremes-of-reliability-and-zealotry", "postedAt": "2009-11-28T22:53:54.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "3YWTWJX3Swp6PLueM", "html": "

Opinions and actions are spread across continua. The ones at the ends are sometimes called ‘extremist’, ‘fanatical’, ‘fundamentalist’ or ‘zealous’. These are insults or invitations to treat the supporters without seriousness. Other times the far reaches of a continuum are admired as ‘sticking to one’s principles’, ‘consistent’, ‘loyal’, ‘dedicated’, ‘committed’. Claims of certainty and crossing your heart and hoping to die are also looked well upon. So what’s the difference? Obviously the correct answers to some questions are at the ends of spectrums while others, such as optimal trade-offs, tend to have more central values. Is this what determines our like or dislike for centrism and extremism? Let’s look at some examples from my understanding of popular opinion.

\n

Things you should be extremist on:

\n\n

Things you should not be extremist on:

\n\n

I can’t see that the first list contains fewer trade-offs than the second list. In fact it probably has more. So what’s the pattern?

\n

The one I see is whether commitment is to an impersonal idea or to a group or person. If you take a centrist position on your personal and group loyalties you are something between flaky and treacherous. You are not supposed to trade off friends. On the other hand strong commitment to a policy position, theory, type of analysis, ethical standpoint, or other impersonal influence on behavior is unbalanced, biased, radical, dangerous, and consists of seeing everything as nails. It’s worse to belong to an edge political party than a central one, but worse to be undecided (central) on which group you belong to than to pick one and support it loyally.

\n

This seems to make sense evolutionarily, as it is important for humans to have loyal associates, and not important for them to have associates who are committed above all else to something abstract that they might sacrifice your welfare for at any time. Ideas do not have babies with you or share their mammoth. Ideas are handy of course, but you want your associates to use them flexibly in the pursuit of upholding their social commitments, rather than using their social commitments flexibly in the pursuit of other principles.

\n

What about sticking to one’s principles? That seems a praiseworthy non-human related extreme. Can you be praised for sticking to any principles though? No. Principles about loyalty, compassion, and honesty are good for instance, but principles like ‘always work when you can, regardless of what your wife thinks about it’, ‘always walk on the left hand side of telegraph poles’, and even committed utilitarianism impress few. Again it’s all about absolutes of reliability to others.


" } }, { "_id": "hwiuGnbcyXBACZMtM", "title": "Rooting Hard for Overpriced M&Ms", "pageUrl": "https://www.lesswrong.com/posts/hwiuGnbcyXBACZMtM/rooting-hard-for-overpriced-m-and-ms", "postedAt": "2009-11-28T19:10:34.336Z", "baseScore": 6, "voteCount": 15, "commentCount": 35, "url": null, "contents": { "documentId": "hwiuGnbcyXBACZMtM", "html": "

The other day I went to get some productivity-enhancement M&Ms from the candy machine at work. When I opened my wallet, I didn't immediately see a $1 bill. Then I looked some more and I found one, and I was happy! But of course that doesn't make any sense. If that bill hadn't been a $1, then it would have had to be a $5 or more, with an expected value of $5+, which is an amount that I certainly would not have paid for a bag of M&Ms, most excellent though they may be. This means that I preferred a bag of M&Ms to $1 (that's why I went to the candy machine in the first place), $1 to $5+ (I was happy when the bill turned out to be a $1), and $5+ to a bag of M&Ms (I wouldn't have bought them at that price). Not too surprising I guess, but still kind of weird.

" } }, { "_id": "CxPvvuQm8Y6LyKwj7", "title": "Getting Feedback by Restricting Content", "pageUrl": "https://www.lesswrong.com/posts/CxPvvuQm8Y6LyKwj7/getting-feedback-by-restricting-content", "postedAt": "2009-11-27T22:50:35.642Z", "baseScore": 3, "voteCount": 4, "commentCount": 10, "url": null, "contents": { "documentId": "CxPvvuQm8Y6LyKwj7", "html": "

Sivers just posted an important point about getting feedback: to get feedback on a post, present only one idea at a time.

\n

Original post here http://sivers.org/1idea ; Hacker News comments http://news.ycombinator.com/item?id=964183 ; my post on it http://williambswift.blogspot.com/2009/11/many-ideas-or-one-idea-or-both.html

\n

The main point of my post is: I wonder if there is any way to combine the two views - to provide more background and context, with the necessarily larger number of ideas being presented, while still getting useful feedback from readers.

" } }, { "_id": "T3XJ7HFiKwgCuAa3j", "title": "Why evaluate everything ASAP?", "pageUrl": "https://www.lesswrong.com/posts/T3XJ7HFiKwgCuAa3j/why-evaluate-everything-asap", "postedAt": "2009-11-27T02:48:06.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "T3XJ7HFiKwgCuAa3j", "html": "

When I was young my brother and I used to play a game where we would page through books of wild animals or foreign places and on each page pick our favorite one, just for the pleasure of choosing. Driving in the country, looking in a museum, or window shopping in a mall, people point out to one another which things they like or dislike. We choose favorite colors, places, Dr Who incarnations, reality TV contenders, political personalities, reasons for disregarding postmodernism. On hearing the news, meeting new people, or learning gossip, the next step is usually to make a value judgment about the parties involved. Of all the characteristics everything has, the one we are itching to establish first is our own judgment of value.

\n

At first this seems to make sense. How much we like something is key to how we respond to it. However, plenty of the things we so keenly evaluate we can’t easily respond to, and don’t try to, apart from voicing our opinion. Do we waste so much time and thought judging things we can’t influence as a sad byproduct of judging things we can influence? It sometimes even looks like we seek out things to evaluate – a costly byproduct that would be! But perhaps in the distant past we hardly came across anything so far away that we couldn’t influence it? That seems wrong; while we saw less we could also influence less.

\n

The only good explanation I can think of is that the response we do often make to our judgments – voicing them – is the purpose. Why would we want to voice evaluations so? One explanation is that it is in the hope that someone else will fix things for us, but this explanation faces the same problem as our original explanation; most of the things that I can’t affect are no easier for my friends to affect. No matter how much I tell my brother that I like pterodactyls most, he doesn’t do a thing. I don’t get my Dr Who and that cloud I thought was pretty evaporated before anyone lifted a finger.

\n

A last theory is that it’s all about hinting to others what sort of people we are. To that end, irrelevant preferences should be quite useful, just as our more obviously image-improving activities, such as careful dressing, are. A signaling theory makes the things furthest from our influence better to demonstrate opinions on, as we aren’t constrained by the need to make useful decisions on them. I often make inferences about people from their expressed valuations, as I think everyone does. The knowledge of this presumably influences people’s expressed valuations, as people are almost ubiquitously sensitive to having inferences made about them. If these two things are true, it should be hard for expressed evaluations not to become somewhat repurposed for signaling, and there is then little reason to restrict them to topics within reach.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "whxjfzFecFt4hAnBA", "title": "Contrarianism and reference class forecasting", "pageUrl": "https://www.lesswrong.com/posts/whxjfzFecFt4hAnBA/contrarianism-and-reference-class-forecasting", "postedAt": "2009-11-25T19:41:36.423Z", "baseScore": 29, "voteCount": 33, "commentCount": 94, "url": null, "contents": { "documentId": "whxjfzFecFt4hAnBA", "html": "

I really liked Robin's point that mainstream scientists are usually right, while contrarians are usually wrong. We don't need to get into details of the dispute - and usually we cannot really make an informed judgment without spending too much time anyway - just figuring out who's \"mainstream\" lets us know who's right with high probability. It's a type of thinking related to reference class forecasting - find a reference class of similar situations with known outcomes, and we get a pretty decent probability distribution over possible outcomes.
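As a toy sketch (my own illustration, with made-up track records), reference class forecasting amounts to reading off the empirical outcome frequencies of whichever class you pick:

```python
from collections import Counter

# Hypothetical track records for two candidate reference classes.
track_records = {
    'mainstream science': ['right'] * 95 + ['wrong'] * 5,
    'contrarians':        ['right'] * 10 + ['wrong'] * 90,
}

def forecast(reference_class):
    # The forecast is just the empirical distribution of past outcomes in the class.
    counts = Counter(track_records[reference_class])
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

print(forecast('mainstream science'))  # {'right': 0.95, 'wrong': 0.05}
print(forecast('contrarians'))         # {'right': 0.1, 'wrong': 0.9}
```

The dispute in the rest of the post is about which key to pass to a function like this, not about the arithmetic.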

\n

Unfortunately, deciding on the proper reference class is not straightforward, and can be a point of contention. If you put climate change scientists in the reference class of \"mainstream science\", it gives great credence to their findings. People who doubt them can be freely disbelieved, and any arguments can be dismissed by the low success rate of contrarianism against mainstream science.

\n

But, if you put climate change scientists in the reference class of \"highly politicized science\", then the chance of them being completely wrong becomes orders of magnitude higher. We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics. The chances of the mainstream being right and of contrarians being right are not too dissimilar in such cases.

\n

Or, if the reference class is \"science-y Doomsday predictors\", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by mainstream media at the time. So far, in spite of countless cases of science predicting doom and gloom, not a single one of them has turned out to be true - usually failing not just barely enough to be discounted by the anthropic principle, but spectacularly so. Cornucopians were virtually always right.

\n

It's also possible to use multiple reference classes - to view impact on climate according to the \"highly politicized science\" reference class, and impact on human well-being according to the \"science-y Doomsday predictors\" reference class - which is more or less how I think about it.

\n

I'm sure if you thought hard enough, you could come up with other plausible reference classes, each leading to any conclusion you desire. I don't see how one of these reference class reasonings is obviously more valid than others, nor do I see any clear criteria for choosing the right reference class. It seems as subjective as Bayesian priors, except we know in advance we won't have evidence necessary for our views to converge.

\n

The problem goes away only if you agree on reference classes in advance, as you reasonably can with the original application of forecasting costs of public projects. Does it kill reference class forecasting as a general technique, or is there a way to save it?

" } }, { "_id": "qFe4J8Y5oiQa9TA5Z", "title": "Do what your parents say?", "pageUrl": "https://www.lesswrong.com/posts/qFe4J8Y5oiQa9TA5Z/do-what-your-parents-say", "postedAt": "2009-11-25T11:16:17.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "qFe4J8Y5oiQa9TA5Z", "html": "

Should you feel compelled to pay any heed to what your parents want in your adult choices? I used to say a definite ‘no’. My mother said I should do whatever I liked, and I vowed to ignore her. From a preference utilitarian perspective, I guess that virtually all aspects of a person’s lifestyle make much more difference to a given person than to their parents. If you feel a sense of obligation in return for your parents giving you life, why? You made no agreement, your parents took their chances in full knowledge you might grow up to be anyone.

\n

However, what if fewer parents would take their chances if the risk of children turning out less satisfactory to them were greater? The biggest effect of taking your parents’ preferences into account more could be via increasing other prospective parents’ perception that children are worth having. It may be a small effect, but the value of life is high.

\n

I’m not sure how much of a difference the expected agreeableness of children makes to people’s choices to have them. At first it may seem negligible. Most people seem to like their children a lot regardless of what they do. However, if a person were guaranteed that their child would grow up to be exactly the opposite of what they admire, I would be surprised if there were no effect, so I must expect some gradient. I haven’t seen any data on this except my mother’s (joking?) claim that she would’ve aborted me had she thought I would be an economist. I’m not about to give up economics, but I do visit sometimes, and I painted the new living room and helped with my grandmother’s gardening since getting here this time. See how great descendants are? I would be interested if anyone has better data.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "rH492M8T8pKK5763D", "title": "Agree, Retort, or Ignore? A Post From the Future", "pageUrl": "https://www.lesswrong.com/posts/rH492M8T8pKK5763D/agree-retort-or-ignore-a-post-from-the-future", "postedAt": "2009-11-24T22:29:23.608Z", "baseScore": 40, "voteCount": 53, "commentCount": 88, "url": null, "contents": { "documentId": "rH492M8T8pKK5763D", "html": "

My friend Sasha, the software archaeology major, informed me the other day that there was once a widely used operating system, which, when it encountered an error, would often get stuck in a loop and repeatedly present to its user the options Abort, Retry, and Ignore. I thought this was probably another one of her often incomprehensible jokes, and gave a nervous laugh. After all, what interface designer would present \"Ignore\" as a possible user response to a potentially catastrophic system error without any further explanation?

\n

Sasha quickly assured me that she wasn't joking. She told me that early 21st century humans were quite different from us. Not only did they routinely create software like that, they could even ignore arguments that contradicted their positions or pointed out flaws in their ideas, and did so publicly without risking any negative social consequences. Discussions even among self-proclaimed truth-seekers would often conclude, not by reaching a rational consensus or an agreement to mutually reassess positions and approaches, or even by a unilateral claim that further debate would be unproductive, but when one party simply failed to respond to the arguments or questions of another without giving any indication of the status of their disagreement.

\n

At this point I was certain that she was just yanking my chain. Why didn't the injured party invoke rationality arbitration and get a judgment on the offender for failing to respond to a disagreement in a timely fashion, I asked? Or publicize the affair and cause the ignorer to become a social outcast? Or, if neither of these mechanisms existed or provided sufficient reparation, challenge the ignorer to a duel to the death? For that matter, how could those humans, only a few generations removed from us, not feel an intense moral revulsion at the very idea of ignoring an argument?

\n

At that, she launched into a long and convoluted explanation. I recognized some of the phrases she used, like \"status signaling\", \"multiple equilibria\", and \"rationality-enhancing norms and institutions\", from the Theory of Rationality class that I took a couple of quarters ago, but couldn't follow most of it. (I have to admit I didn't pay much attention in that class. I mean, we've had the \"how\" of rationality drummed into us since kindergarten, so what's the point of spending so much time on the \"what\" and \"why\" of it now?) I told her to stop showing off, and just give me some evidence that this actually happened, because my readers and I will want to see it for ourselves.

\n

She said that there are plenty of examples in the back archives of Google Scholar, but most of them are probably still quarantined for me. As it happens, one of her class projects is to reverse engineer a recently discovered \"blogging\" site called \"Less Wrong\", and to build a proper search index for it. She promised that once she is done with that she will run some queries against the index and show me the uncensored historical data.

\n

I still think this is just an elaborate joke, but I'm not so sure now. We're all familiar with the vastness of mindspace and have been warned against anthropomorphism and the mind projection fallacy, so I have no doubt that minds this alien could exist, in theory. But our own ancestors, as recently as the 21st century? My dear readers, what do you think? She's just kidding... right?

\n

[Editor's note: I found this \"blog\" post sitting in my drafts folder today, perhaps the result of a temporal distortion caused by one of Sasha's reverse engineering tools. I have only replaced some of the hypertext links, which failed to resolve, for obvious reasons.]

" } }, { "_id": "ADcdDcCz6chJTYTs7", "title": "How to test your mental performance at the moment?", "pageUrl": "https://www.lesswrong.com/posts/ADcdDcCz6chJTYTs7/how-to-test-your-mental-performance-at-the-moment", "postedAt": "2009-11-23T18:35:47.136Z", "baseScore": 24, "voteCount": 25, "commentCount": 74, "url": null, "contents": { "documentId": "ADcdDcCz6chJTYTs7", "html": "

We all have our good days and our bad days. Due to insufficient sleep, illness, stress, distractions, and many other causes we often find ourselves far below our usual levels of mental performance. When we find ourselves in such a state, it's not really worth putting effort into many tasks, like programming or long term planning, as quality will suffer a lot.

\n

The problem is that, other than observing the deterioration of my results, I have no idea whether I'm in such a state or not. I cannot be sure if it's also true for others, but I had to work out a few tests of my mental performance at the moment - tests that are deeply flawed, so I'd welcome better ones if there are any. I also cannot predict my mental state in advance, as my life isn't terribly regular.

\n

The most reliable test I found, and by accident, was fighting bots on a certain Quake 3 map - me vs 10 or so highest difficulty bots. The challenge was to get 50 frags without dying. As the map was huge and full of power ups, it wasn't really that difficult as long as I could maintain full alertness for 10-15 minutes - but if I was tired or distracted, I would invariably fail. This test was unfortunately extremely slow.

\n

Another test would be to go to goproblems and do a few random problems at the proper difficulty level. If I could think right, I would solve most of them; if I was tired, I would fail almost 100% of the time. This didn't test alertness - I guess it would be best described as a short term memory test, as that's what's used for game tree exploration. Unfortunately the proper difficulty varies a lot with how much go I've played recently, so it needs to be recalibrated.

\n

One more test would be to take some decent online IQ test like this one. My results on such a test would suffer a lot if I was sleepy or tired. The main problem is that such tests cannot be repeated too often, or I'd just remember the answers.

\n

So these are three ways to test how well my mind functions at the moment, all testing something different, and all flawed in one way or another.

\n

How do you test yourself?

" } }, { "_id": "54NL4R6xsXwYca6az", "title": "In conclusion: in the land beyond money pumps lie extreme events", "pageUrl": "https://www.lesswrong.com/posts/54NL4R6xsXwYca6az/in-conclusion-in-the-land-beyond-money-pumps-lie-extreme", "postedAt": "2009-11-23T15:03:05.628Z", "baseScore": 7, "voteCount": 7, "commentCount": 22, "url": null, "contents": { "documentId": "54NL4R6xsXwYca6az", "html": "

In a previous article I've demonstrated that you can only avoid money pumps and arbitrage by using the von Neumann-Morgenstern axioms of expected utility. I argued in this post that even if you're not likely to face a money pump on one particular decision, you should still use expected utility (and sometimes expected money), because of the difficulties of combining two decision theories and constantly being on the look-out for which one to apply.

\n

Even if you don't care about (weak) money pumps, expected utility sneaks in under much milder conditions. If you have a quasi-utility function (i.e. you have an underlying utility function, but you also care about the shape of the probability distribution), then this post demonstrates that you should generally stick with expected utility anyway, just by aggregating all your decisions.

\n

So the moral of looking at money pumps, arbitrage and aggregation is that you should use expected utility for nearly all your decisions.

\n

But the moral says exactly what it says, and nothing more. There are situations where there is not the slightest chance of you being money-pumped, or of aggregating enough of your decisions to achieve a narrow distribution. One-shot versions of Pascal's mugging, the Lifespan Dilemma, utility versions of the St Petersburg paradox, the risk to humanity of a rogue God-AI... Your behaviour on these issues is not constrained by money-pump considerations, nor should you behave as if they were, or as if expected utility had some magical claim to validity here. If you expect to meet Pascal's mugger 3^^^3 times, then you have to use expected utility; but if you don't, you don't.

\n

In my estimation, the expected utility of the singularity institute's budget grows much faster than linearly with cash. But I would be most disappointed if the institute sank all its income into triple-rollover lottery tickets. Expected utility is ultimately the correct decision theory; but if you most likely don't live to see that 'ultimately', then this isn't relevant.

\n

In these extreme events, I'd personally advocate a quasi-utility function along with a decision theory that penalises monstrously large standard deviations, as long as these are rare. This solves all the examples above to my satisfaction, and can easily be tweaked to merge gracefully into expected utility as the number of extreme events rises to the point where they are no longer extreme. A heuristic as to when this point arrives is whether you can easily avoid money pumps just by looking out for them, or whether this is getting too complicated for you.
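For concreteness, one way such a rule could look (a rough sketch of my own; the penalty weight and the threshold for 'monstrously large' are arbitrary placeholders) is to subtract a multiple of the standard deviation only when the spread is enormous:

```python
def penalized_score(utilities, probabilities, penalty=0.5, sd_threshold=1e6):
    # Expected utility of the gamble.
    eu = sum(u * p for u, p in zip(utilities, probabilities))
    # Standard deviation of utility under the gamble.
    var = sum(p * (u - eu) ** 2 for u, p in zip(utilities, probabilities))
    sd = var ** 0.5
    # Penalize only monstrously large spreads; otherwise it is plain expected utility.
    return eu - penalty * sd if sd > sd_threshold else eu

# A one-shot, Pascal's-mugging-like gamble vs. a mundane sure thing (made-up numbers).
print(penalized_score([10**12, -1], [10**-9, 1 - 10**-9]))  # heavily penalized
print(penalized_score([100], [1.0]))                        # plain expected utility: 100
```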

\n

There is no reason that anyone else's values should compel them towards the same decision theory as mine; but in these extreme situations, expected utility is just another choice, rather than a logical necessity.

" } }, { "_id": "qSg2BZdjGbQJdpa3K", "title": "Rational lies", "pageUrl": "https://www.lesswrong.com/posts/qSg2BZdjGbQJdpa3K/rational-lies", "postedAt": "2009-11-23T03:32:08.789Z", "baseScore": 6, "voteCount": 15, "commentCount": 10, "url": null, "contents": { "documentId": "qSg2BZdjGbQJdpa3K", "html": "

\n

If I were sitting opposite a psychopath who had a particular sensitivity about ants, and I knew that if I told him that ants have six legs then he would jump up and start killing the surrounding people, then it would be difficult to justify telling him my wonderful fact about ants, regardless of whether I believe that ants really have six legs or not.

\n

Or suppose I knew my friend's wife was cheating on him, but I also knew that he was terminally ill and would die within the next few weeks. The question of whether or not to inform him of my knowledge is genuinely complex, and the truth or falsity of my knowledge about his wife is only one factor in the answer. Different people may disagree about the correct course of action, but no-one would claim that the only relevant fact is the truth of the statement that his wife is cheating on him.

\n

This is all a standard result of expected utility maximization, of course. Vocalizing or otherwise communicating a belief is itself an action, and just like any other action it has a set of possible outcomes, to which we assign probabilities as well as some utility within our value coordinates. We then average out the utilities over the possible outcomes for each action, weighted by the probability that they will actually happen, and choose the action that maximizes this expected utility. Well, that's the gist of the situation, anyway. Much has been written on this site about the implications of expected utility maximization under more exotic conditions such as mind splitting and merging, but I'm going to be talking about more mundane situations, and the point I want to make is that beliefs are very different objects from the act of communicating those beliefs.

\n

\n

This distinction is particularly easy to miss as the line between belief and communication becomes subtler. Suppose that a friend of mine has built a wing suit and is about to jump off the Empire State Building with the belief that he will fly gracefully through the sky. Since I care about my friend's well-being I try to explain to him the concepts of gravity and aerodynamics, and the effect they will have on him if he launches himself from the building. Examining my decision in detail, I have placed a high probability on his death if he jumps off the building, and calculated that, since I value his well-being, my expected utility would not be maximized by him making the leap.

\n

But now suppose that my friend is particularly dull and unable or unwilling to grasp the concept of aerodynamics, and is hence unswayed by my argument. Having reasonably explained my beliefs to him, am I absolved of the moral responsibility to save him? Not from a utilitarian standpoint, since there are other courses of action available to me. I could, for example, tell him that his wing suit has been sabotaged by aliens --- a line of reasoning that I happen to know he'll believe given his predisposition towards X files-esque conspiracy theories.

\n

Would doing so be contrary to my committed rationalist stance? Not at all; I have rationally analysed the options available to me and rationally chosen a course of action. The conditions for the communication of a belief to be deemed rational are exactly the same decision theoretic conditions applicable to any other action: namely that of being the expected utility maximizer.
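A minimal sketch of that calculation for the wing suit case, with entirely made-up probabilities and utilities, and assuming the only outcomes that matter are whether he is dissuaded or jumps:

```python
# Hypothetical numbers: probability each utterance dissuades my friend from jumping,
# and the utilities I attach to the two outcomes.
actions = {
    'explain aerodynamics':  {'dissuaded': 0.05, 'jumps': 0.95},
    'invoke alien sabotage': {'dissuaded': 0.90, 'jumps': 0.10},
}
utility = {'dissuaded': 100, 'jumps': -1000}

def expected_utility(action):
    # Average the outcome utilities, weighted by their probabilities.
    return sum(p * utility[outcome] for outcome, p in actions[action].items())

for action in actions:
    print(action, expected_utility(action))
print('chosen:', max(actions, key=expected_utility))  # the lie wins under these numbers
```

Of course, as argued below, a fuller model would also charge the lie for the poorer decisions a false world-model produces later on.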

\n

If this all sounds too close to \"tell people what they need to hear\" then let's ask under what specific conditions it might be rational to lie. Clearly this depends on your values. If your utility function places high value on people falling to their death then you will tend to lie about gravity and aerodynamics as much as possible. However, for the purpose of practical rationality I'm going to assume for the rest of this article that some of your basic values align with my own, such as the value of fulfilled human existence, and so on.

\n

Convincing somebody of a falsehood will, on average, lead to them making poorer decisions according to their values. My soon-to-be-airborne friend may be convinced not to leap from the building immediately, but may shortly return with a wing suit covered in protective aluminium foil to ward off those nasty interfering aliens. Nobody is exempt from the laws of rationality. To the extent that their values align with mine, convincing another of a falsehood will have at least this one negative consequence with respect to my own values. The examples I gave above are specific situations in which other factors dominate my desire for another person to be better informed during the pursuit of their goals, but such situations are the exception rather than the rule. All other things equal, lying to an agent with similar values to mine is a bad decision.

\n

Had I actually convinced my friend of the nature of gravity and aerodynamics rather than spinning a story about aliens then next time he may return to the rooftop with a parachute rather than a tin foil wing suit. In the example I gave, this course of action was unlikely to succeed, but again this situation is the exception rather than the rule. In general, a true statement has the potential to improve the recipient's brain/universe entanglement and thereby improve his potential for achieving his goals, which, if his values align with my own, constitutes at least one factor in favour of truth-telling. All other things equal, telling the truth is a good decision.

\n

This doesn't mean that telling the truth is valuable only in terms of its benefits to me. My own values include bettering the lives of others, so achieving \"my goals\" constitutes working towards the good of others, as well as my own. 

\n

Is there any other sense in which truth-telling may be considered a \"good\" in its own right? Naively one might argue that the act of uttering a truth could itself be a value in its own right, but such a utility function would be maximized by a universe tiled with tape players broadcasting mundane, true facts about the universe. It would be about as well-aligned with the values of a typical human being as a paper clip maximizer.

\n

A more reasonable position is for rationality in others to be included among one's fundamental values. This, I feel, is more closely aligned with my own values. All other things equal, I would like those around me to be rational. Not just to live in a society of rationalists, though this is an orthogonal value. Not just to engage in interesting, stimulating discussion, though this is also an orthogonal value. And not just for others to succeed in achieving their goals, though this, again, is an orthogonal value. But to actually maximize the brain/universe entanglement of others, for its own sake.

\n

Do you value rationality in others for its own sake?

" } }, { "_id": "AmCvCsRiu2PvLmNiF", "title": "Friedman on Utility", "pageUrl": "https://www.lesswrong.com/posts/AmCvCsRiu2PvLmNiF/friedman-on-utility", "postedAt": "2009-11-22T14:22:52.368Z", "baseScore": 3, "voteCount": 9, "commentCount": 32, "url": null, "contents": { "documentId": "AmCvCsRiu2PvLmNiF", "html": "

I just came across an essay David Friedman posted last Monday, The Ambiguity of Utility, which presents one of the problems I have with using utilities as the foundation of some \"rational\" morality.

" } }, { "_id": "otrGbtkDahSePWrRW", "title": "Calibration for continuous quantities", "pageUrl": "https://www.lesswrong.com/posts/otrGbtkDahSePWrRW/calibration-for-continuous-quantities", "postedAt": "2009-11-21T04:53:32.443Z", "baseScore": 30, "voteCount": 27, "commentCount": 13, "url": null, "contents": { "documentId": "otrGbtkDahSePWrRW", "html": "

Related to: Calibration fail, Test Your Calibration!

\r\n

Around here, calibration is mostly approached on a discrete basis: for example, the Technical Explanation of Technical Explanations talks only about discrete distributions, and the commonly linked tests and surveys are either explicitly discrete or offer only coarsely binned probability assessments. For continuous distributions (or \"smooth\" distributions over discrete quantities like dates of historical events, dollar amounts on the order of hundreds of thousands, populations of countries, or any actual measurement of a continuous quantity), we can apply a finer-grained assessment of calibration.

\r\n

The problem of assessing calibration for continuous quantities is that our distributions can have very dissimilar shapes, so there doesn't seem to be a common basis for comparing one to another. As an example, I'll give some subjective (i.e., withdrawn from my nether regions) distributions for the populations of two countries, Canada and Botswana. I live in Canada, so I have years of dimly remembered geography classes in elementary school and high school to inform my guess. In the case of Botswana, I have only my impressions of the nation from Alexander McCall Smith's excellent No. 1 Ladies' Detective Agency series and my general knowledge of Africa.

\r\n

For Canada's population, I'll set my distribution to be a normal distribution centered at 32 million with a standard deviation of 2 million. For Botswana's population, my initial gut feeling is that it is a nation of about 2 million people. I'll put 50% of my probability mass between 1 and 2 million, and the other 50% of my probability mass between 2 million and 10 million. Because I think that values closer to 2 million are more plausible than values at the extremes, I'll make each chunk of 50% mass a right-angle triangular distribution. Here are plots of the probability densities:

\r\n

\r\n

\"\" 

\r\n

\"\"

\r\n

(These distributions are pretty rough approximations to my highest quality assessments. I don't hold, as the above normal distribution implies, that my probability that the population of Canada is less than 30 million is 16%, because I'm fairly sure that's what it was when I was starting university; nor would I assign a strictly nil chance to the proposition that Botswana's population is outside the interval from 1 million to 10 million. But the above distributions will do for expository purposes.)

\r\n

For true values we'll take the 2009 estimates listed in Wikipedia. For Canada, that number is 33.85 million; for Botswana, it's 1.95 million. (The fact that my Botswana estimate was so spot on shocked me.) Given these \"true values\", how can we judge the calibration of the above distributions? The above densities seem so dissimilar that it's hard to conceive of a common basis on which they have some sort of equivalence. Fortunately, such a basis does exist: we take the total probability mass below the true value, as shown on the plots below.

\r\n

\"\"

\r\n

\"\"

\r\n

No matter what the shape of the probability density function, the random variable \"probability mass below the realized value\" always has a uniform distribution on the interval from zero to one, a fact known as the probability integral transform1. (ETA: This comment gives a demonstration with no integrals.) My Canada example has a probability integral transform value (henceforth PIT value) of 0.82, and my Botswana example has a PIT value of 0.45.
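Both PIT values can be reproduced directly (a sketch assuming SciPy is available; the Botswana CDF is just the integral of the two right-angle triangles described above):

```python
from scipy.stats import norm

# Canada: Normal(mean=32, sd=2), true value 33.85 (millions).
pit_canada = norm.cdf(33.85, loc=32, scale=2)

def botswana_cdf(x):
    # 50% mass on [1, 2]: right-angle triangle rising to a peak at 2;
    # 50% mass on [2, 10]: right-angle triangle falling away from the peak at 2.
    if x <= 1:
        return 0.0
    if x <= 2:
        return 0.5 * (x - 1) ** 2
    if x <= 10:
        return 0.5 + 0.5 * (1 - ((10 - x) / 8) ** 2)
    return 1.0

pit_botswana = botswana_cdf(1.95)
print(round(pit_canada, 2), round(pit_botswana, 2))  # 0.82 0.45
```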

\r\n

It's the PITs

\r\n

If we have a large collection of probability distributions and realized values, we can assess the calibration of those distributions by checking to see if the distribution of PIT values is uniform, e.g., by taking a histogram. For correctly calibrated distributions, such a histogram will look like this:

\r\n

\"\" 

\r\n

The above histogram and the ones below show 10,000 realizations; the red line gives the exact density of which the histogram is a noisy version.

\r\n

Figuring out that your distributions are systematically biased in one direction or another is simple: the median PIT value won't be 0.5. Let's suppose that the distributions are not biased in that sense and take a look at some PIT value histograms that are poorly calibrated because the precisions are wrong. The plot below shows the case where the realized values follow the standard normal distribution and the assessed distributions are also normal but are over-confident to the extent that the ostensible 95% central intervals are actually 50% central intervals. (This degree of over-confidence was chosen because I seem to recall reading on OB that if you ask a subject matter expert for a 95% interval, you tend to get back a reasonable 50% interval. Anyone know a citation for that? ETA: My recollection was faulty.)

\r\n

\"\"

\r\n

I wouldn't expect humans to be under-confident, but algorithmically generated distributions might be, so for completeness's sake I'll show that too. In the case below, both the assessed distributions and the distribution of realized values are again normal distributions, but now the ostensible 50% intervals actually contain 95% of the realized values.

\r\n

\"\"

\r\n

Take-home message

\r\n

We need not use coarse binning to check if a collection of distributions for continuous quantities is properly calibrated; instead, we can use the probability integral transform to set all the distributions on a common basis for comparison.

\r\n

1 It can be demonstrated (for distributions that have densities, anyway) using integration by substitution and the derivative of the inverse of a function (listed in this table of derivatives).
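For a distribution with a continuous, strictly increasing CDF F, the footnoted fact can also be sketched in one line (a standard argument, not taken from the post):

```latex
P\big(F(X) \le u\big) = P\big(X \le F^{-1}(u)\big) = F\big(F^{-1}(u)\big) = u
\qquad \text{for } u \in (0, 1),
```

so F(X), the PIT value, is uniform on (0, 1).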

" } }, { "_id": "3ZcGtMqMeJbNDsjg5", "title": "The One That Isn't There", "pageUrl": "https://www.lesswrong.com/posts/3ZcGtMqMeJbNDsjg5/the-one-that-isn-t-there", "postedAt": "2009-11-20T20:10:11.447Z", "baseScore": 18, "voteCount": 20, "commentCount": 6, "url": null, "contents": { "documentId": "3ZcGtMqMeJbNDsjg5", "html": "
Q:  What's the most important leg of a three-legged stool?
A:  The one that isn't there.
- traditional joke-riddle
\n

A specific neurological lesion can sometimes damage or impair specific neurological functions without touching others.  In the condition famously known as \"Ondine's Curse\", for example, automatic control of breathing is destroyed while conscious control remains, so that without modern medical intervention nerve-damaged patients can survive only as long as they can remain awake.  Such conditions are nevertheless unusual exceptions to the more general principle that complex, recently-developed, and 'meta'-functions (those that monitor and control others) are first to be impaired and lost when the nervous system is stressed, damaged, or altered.

\n

Demonstrations of this principle can be found by examining such phenomena as reversion under stress, oxygen deprivation, sleep deprivation, and various sorts of poisoning - most especially drugs with a gradual effect on nervous function.  The primary reason it is considered necessary to have designated drivers who refrain from consumption of alcohol is that drinkers frequently underestimate the degree to which they're affected by alcohol.  Long before slowed reflexes and grossly impaired judgment become evident, the cognitive functions responsible for self-evaluation are dulled, and self-control diminished.  A drinker who believes that they're capable of driving safely may or may not be correct, even if their judgments would normally be trustworthy.  Similar effects are found with other types of intoxication - people who say that they drive better after smoking marijuana have been shown to in fact drive more poorly.  The more demanding and complicated the mental task is, the more likely it will be disrupted by any interfering factor, leading to poor performance. 

\n
\n

\"The natural human's an animal without logic. Your projection of logic onto all affairs is unnatural, but suffered to continue for its usefulness. You're the embodiment of logic--a Mentat. Yet, your problem solutions are concepts that, in a very real sense, are projected outside yourself, there to be studied and rolled around, examined from all sides.\"
\"You think now to teach me my trade?\" he asked, and he did not try to hide the disdain in his voice.
\"The finest Mentats have a healthy respect for the error factor in their computations,\" she said.

\n

- exchange between the Lady Jessica and Thufir Hawat;  Frank Herbert,  Dune

\n
\n

Once a certain level of intelligence has been reached, any cognitive process can be emulated by any mind - it's merely a question of available storage space and speed.  The amount of processing capacity rapidly becomes immaterial.  What's important is not how powerful a mind is, but how well it detects, compensates for, and corrects its own errors.

\n

It would be convenient if this capacity, which is difficult to gauge, were clearly associated with general intelligence.  Unfortunately, that doesn't seem to be the case.  We know that the self-regulatory functions of cognition can be completely destroyed without affecting such things as IQ scores.  Thus, high IQ does not serve as a reliable guide to the presence of higher cognitive functions.   Furthermore, my experience with smart people strongly suggests that they are less likely to develop that capacity.  Being cleverer than the people around them, they are more likely to be able to craft invalid yet convincing arguments that others can't counter or respond to.  They have no need to develop stringent self-evaluation to accomplish social goals, and it's very easy to convince themselves that they've chosen the correct course of action.  What's worse, they're more likely to be able to craft clever arguments which convince themselves - and then, secure in the knowledge of their cleverness, they become less likely to check and re-check their reasoning.  Average people have more experience of being shown to be wrong, and often have developed a greater willingness to lack confidence in their conclusions.  This is both a strength and a weakness, but the strength cannot be acquired otherwise while the vulnerability can be compensated for.

\n
\n

\"The first principle is that you must not fool yourself - and you are the easiest person to fool.\" - Richard P. Feynman

\n
\n

Rationality, like reading or arithmetic, is a skill alien to the human mind.  Useful, certainly, but not natural.  Development of the capacity for rationality requires strict adherence to a set of formal principles, and such adherence requires advanced self-evaluation to be maintained.  Otherwise practitioners will quickly convince themselves that short-circuited thinking really is valid.  If it's a terrible thing to believe your own propaganda, it's even worse to never realize you're issuing propaganda in the first place.

\n

The principles of rationality aren't difficult.  What's hard is to implement them consistently and completely; our older, better-developed tendencies to associate our way through a problem and to accept or reject statements on the palatability of their consequences tend to override our better judgment.

\n

How, then, can we develop the ability to put rational thought into practice?

" } }, { "_id": "iS4PGGHk4hNJ4vQWj", "title": "Request For Article: Many-Worlds Quantum Computing", "pageUrl": "https://www.lesswrong.com/posts/iS4PGGHk4hNJ4vQWj/request-for-article-many-worlds-quantum-computing", "postedAt": "2009-11-19T23:31:46.859Z", "baseScore": 6, "voteCount": 8, "commentCount": 57, "url": null, "contents": { "documentId": "iS4PGGHk4hNJ4vQWj", "html": "

Through a path more tortuous than is worth describing, I ended up talking to friends about the quantum effects which are exploited by photosynthesis. There's an article describing the topic we were talking about here.

\n

The article describes how quantum effects allow the molecular machinery of the chloroplasts to \"simultaneously sample all the potential energy pathways and choose the most efficient one.\"

\n

Which is essentially how Quantum Computing is usually described in the press too, only we get to set what we mean by \"most efficient\" to be \"best solution to this problem\".

\n

Since I usually find myself arguing that \"there is no wave collapse,\" the conversation has led me to trying to picture how this \"exploring\" can happen unless there is also some \"pruning\" at the end of it.

\n

Of course even in the Copenhagen Interpretation \"wave collapse\" always happens in accordance with the probabilities described by the wave function, so presumably the system is engineered in such a way as to make that \"most efficient\" result the most probable according to those equations.

It's not somehow consistently picking results from the far end of the bell curve of probable outcomes. It's just engineered so that the bell curve is centred on the most efficient outcomes.

There's no 'collapse', it's just that the system has been set up in such a way that the most likely and therefore common universes have the property that the energy is transferred.

Or something. Dunno.

\n

Can someone write an article describing how quantum computing works from a many-worlds perspective rather than the explore-and-then-prune perspective that it seems every press article I've ever read on the topic uses?

\n

Pretty please?

\n

I'd like to read that.

\n

 

" } }, { "_id": "wApLASNt3mMPo7mFh", "title": "Why is poor communication popular?", "pageUrl": "https://www.lesswrong.com/posts/wApLASNt3mMPo7mFh/why-is-poor-communication-popular", "postedAt": "2009-11-18T01:08:53.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "wApLASNt3mMPo7mFh", "html": "

A great insight in one sentence seems obvious, no matter how many people have spent how much of history not coming up with it. The same insight alluded to and digressed from for hours on end seems like a fantastic mountain of understanding. Why is this? I think Paul Gowder’s explanation for why people like bad books probably extends to partially explain it:

\n

Why do people read bad books…[and] why do so many…end up praising them? …

\n

1. The sunk cost fallacy. You get fifty, a hundred pages into Atlas Shrugged or something and you’ve bled so much — you’ve invested so much into getting through this book, tortured yourself with so much bad writing and so many stupid ideas! How horrible would it be to waste all that effort! Better grind on and finish. Or so we tell ourselves. Because we’re irrational.

\n

2. Cognitive dissonance. You’ve read all of Of Grammatology! Holy shit that was unpleasant…You’re not sure whether you actually learned anything enlightening, or whether old Jacques was just spitting jive. But wait! You’re a rational person! You’d be a fool if you’d spent a hundred hours and endless tears trying to make sense of that stuff and it turned out to be nonsense. Therefore, it must be very wise and you should defend it and demand others read it! Or so we tell ourselves. Because we’re irrational.

\n

Hat tip to Mike Blume. I’ll add:

\n

3. Less concise works can easily be designed to cheat quality heuristics.  You are always better off guessing whether a work was good or bad than admitting you didn’t understand it well and can’t remember most of it, because that does not distinguish you from the stupid people who didn’t understand or remember it because they don’t understand or remember anything. If you are going to guess, you use heuristics. Many of the same things that make writing less comprehensible also lead people to guess it is insightful: complexity, length, difficult words. If you can’t follow well enough to confidently compress it into the one sentence version you would have thought obvious, you will likely guess that it contains more than one sentence worth of interesting content.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "LvHfEMKxGjpD6ggC7", "title": "A Less Wrong singularity article?", "pageUrl": "https://www.lesswrong.com/posts/LvHfEMKxGjpD6ggC7/a-less-wrong-singularity-article", "postedAt": "2009-11-17T14:15:52.346Z", "baseScore": 31, "voteCount": 36, "commentCount": 215, "url": null, "contents": { "documentId": "LvHfEMKxGjpD6ggC7", "html": "

Robin criticizes Eliezer for not having written up his arguments about the Singularity in a standard style and submitted them for publication. Others, too, make the same complaint: the arguments involved are covered over such a huge mountain of posts that it's impossible for most outsiders to seriously evaluate them. This is a problem for both those who'd want to critique the concept, and for those who tentatively agree and would want to learn more about it.

\r\n

Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments the best we can and submit them somewhere once we're done? Minds and Machines will be having a special issue on transhumanism, cognitive enhancement and AI, with a deadline for submission in January; that seems like a good opportunity for the paper. Their call for papers is asking for submissions that are around 4000 to 12 000 words.

\r\n

The paper should probably

\r\n\r\n

I have created a wiki page for the draft version of the paper. Anyone's free to edit.

" } }, { "_id": "p2ZPrf5PLuwLy2Ee3", "title": "Efficient prestige hypothesis", "pageUrl": "https://www.lesswrong.com/posts/p2ZPrf5PLuwLy2Ee3/efficient-prestige-hypothesis", "postedAt": "2009-11-16T22:25:01.011Z", "baseScore": 21, "voteCount": 25, "commentCount": 37, "url": null, "contents": { "documentId": "p2ZPrf5PLuwLy2Ee3", "html": "

There's a contrarian theory presented by Robin that people go to highly reputable schools, visit highly reputable hospitals, buy highly reputable brands etc. to affiliate with high status individuals and institutions.

\n

But what would a person who completely didn't care about such affiliations do? Pretty much the same thing. Unless you know a lot about schools, hospitals, and everything else, you're better off simply following prestige as a proxy for quality (in addition to price and all the other usual criteria). There's no denying that prestige is a better indicator of quality than random chance - the question is: is it the best we can do?

\n

It's possible to come up with alternative measures which might correlate with quality too, like operation success rates for hospitals, graduation rates for schools, etc. But if they really indicated quality that well, wouldn't they simply be included in an institution's prestige, and lose their predictive status? The argument is highly analogous to the one for the efficient market hypothesis (or, to some extent with schools, to a Bayesian beauty contest, as prestige might indicate the quality of the other students). Very often there are severe faults with alternative measures, as with operation success rates that aren't corrected for patient demographics.

\n

If you postulate that you have a better indicator of quality than prestige, you need to do some explaining. Why is it not included in prestige already? I don't propose any magical thinking about prestige, but we shouldn't be as eager to throw it away completely as some seem to be.

" } }, { "_id": "W6nXfmKTrgaiaLSRg", "title": "Why (and why not) Bayesian Updating?", "pageUrl": "https://www.lesswrong.com/posts/W6nXfmKTrgaiaLSRg/why-and-why-not-bayesian-updating", "postedAt": "2009-11-16T21:27:43.855Z", "baseScore": 35, "voteCount": 20, "commentCount": 26, "url": null, "contents": { "documentId": "W6nXfmKTrgaiaLSRg", "html": "
\n

the use of Bayesian belief updating with expected utility maximization may be just an approximation that is only relevant in special situations which meet certain independence assumptions around the agent's actions.

\n
\n

Steve Rayhawk

\n

For those who aren't sure of the need for an updateless decision theory, the paper Revisiting Savage in a conditional world by Paolo Ghirardato might help convince you. (Although that's probably not the intention of the author!) The paper gives a set of 7 axioms, based on Savage's axioms, which is necessary and sufficient for an agent's preferences in a dynamic decision problem to be represented as expected utility maximization with Bayesian belief updating. This helps us see in exactly which situations Bayesian updating works and why. (In many other axiomatizations of decision theory, the updating part is left out, and only expected utility maximization is derived in a static setting.)

\n

A key assumption is Axiom 7, which the author calls \"Consequentialism\". I won't try to reproduce the mathematical notation here (see the page numbered 88 in this ungated PDF), but here's the informal explanation given in the paper:

\n
\n

This axiom says that the preference conditional on non-null A should not depend
on how the strategy f behaves in the counterfactual states of A^c (in other words,
it should only depend on the truncation f|A).

\n
\n

This axiom is clearly violated in Vladimir Nesov's Counterfactual Mugging counter-example to Bayesian updating.

\n

Another example that I used to motivate UDT involves indexical uncertainty. In Ghirardato's framework it's relatively easy to see what goes wrong when we try to apply it to indexical uncertainty. In that case, \"states\" in the formalism would have to be centered possible worlds, in other words an ordinary world-state plus a location. But if A above is a set of centered possible worlds, then after learning A, your preferences can still depend on how strategies behave in A^c since elements of A and A^c may belong to the same possible world.

\n

If there is demand, I can try to give an informal/intuitive explanation of why Bayesian updating works (in the situations where it does). I was about to attempt that when I decided to do a Google search and found this paper.

\n

P.S., I noticed a curiosity about Bayesian updating while thinking about it in the context of decision theory, and this seems like a good opportunity to point it out. In Ghirardato's decision theory, after learning A, you should use P_A to compute expected utilities, where P_A(x) is the conditional probability of x given A, or P(A ∩ x)/P(A). This apparently shows the relevance of Bayesian updating, but we get an equivalent theory if we instead define P_A(x) as the joint probability of A and x, or just P(A ∩ x). (Because when you compute the expected utilities of two choices f and g, upon learning A, the factor 1/P(A) enters into both computations the same way and can be removed without changing relative rankings.) The division by P(A) in the original definition seems to serve no purpose except to make P_A sum to 1.
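Spelling the equivalence out in the post's notation (my own restatement): for any two choices f and g and any non-null A,

```latex
\sum_x \frac{P(A \cap x)}{P(A)}\,U(f,x) \;\ge\; \sum_x \frac{P(A \cap x)}{P(A)}\,U(g,x)
\quad\Longleftrightarrow\quad
\sum_x P(A \cap x)\,U(f,x) \;\ge\; \sum_x P(A \cap x)\,U(g,x),
```

because the left-hand inequality is just the right-hand one multiplied through by the positive constant 1/P(A).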

\n

So, theoretically, we don't need Bayesian updating even if our preferences do satisfy the Ghirardato axioms. We could use a decision procedure where our beliefs about something can only get weaker, and never any stronger, no matter what evidence we see, and that would be equivalent. Since that seems to be computationally cheaper (by avoiding the division operation), why do our beliefs not actually work like that?

" } }, { "_id": "NB7o4ZJ2DALHpRjyr", "title": "BHTV: Yudkowsky / Robert Greene", "pageUrl": "https://www.lesswrong.com/posts/NB7o4ZJ2DALHpRjyr/bhtv-yudkowsky-robert-greene", "postedAt": "2009-11-16T20:26:09.545Z", "baseScore": 16, "voteCount": 14, "commentCount": 24, "url": null, "contents": { "documentId": "NB7o4ZJ2DALHpRjyr", "html": "

Latest Bloggingheads up with Robert Greene, bestselling author of The 48 Laws of Power and most recently The 50th Law (\"Fear nothing\").

\n

Excellent commentary/summary by Andy McKenzie here.

\n

The most important piece of advice I got from The 50th Law was \"always attack before you are ready\".

" } }, { "_id": "dW6MsEhEwCTawnpws", "title": "Why you should listen to your heart", "pageUrl": "https://www.lesswrong.com/posts/dW6MsEhEwCTawnpws/why-you-should-listen-to-your-heart", "postedAt": "2009-11-16T03:12:49.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "dW6MsEhEwCTawnpws", "html": "

Follow your heart… Trust your instincts… Listen to your feelings… You know deep inside what is right…etc

\n

– Most people

\n

Humans continually urge and celebrate others’ trust of their ‘heart’ over their ‘head’. Why?

\n

One explanation is that it’s just good advice. I admit I haven’t seen any research on this, though if it were true I would expect to have seen some evidence. If overly emotional people did better on IQ tests for instance we would probably have heard about it, but perhaps hearts aren’t good at that sort of question. They also aren’t good at engineering or cooking or anything else concrete and testable that I can think of except socializing. More people struggle against their inclination to do what they feel like than struggle to do more of it. Perhaps you say it isn’t their heart that likes masturbating and reading Reddit, but that really makes the advice ‘do what you feel like, if it’s admirable to me’, which is pretty vacant. Perhaps listening to your heart means doing what you want to do in the long term, rather than those things society would have you do, which are called ‘reason’ because society has bothered making up reasons for them. This seems far fetched though.

\n

Another explanation is that we want to listen to our own hearts, i.e. do whatever we feel like without having to think of explanations agreeable to other people. We promote the general principle to justify using it to our hearts’ content. However if we are doing this to fool others, it would be strange for our strategy to include begging others to follow it too. Similarly if you want to defect in prisoners’ dilemmas, you don’t go around preaching that principle. A better explanation would explain our telling others, not our following it.

\n

Another explanation is that this is only one side of the coin. The other half the time we compel people to listen to reason, to think in the long term, to avoid foolish whims. This seems less common to me, especially outside intellectual social groups, but perhaps I just notice it less because it doesn’t strike me as bad advice.

\n

My favorite explanation at the moment is that we always do what our hearts tell us, but explain it in terms of abstract fabrications when our hearts’ interests do not align with those we are explaining to. Rationalization is only necessary for bad news. Have you ever said to someone, ‘I really would love to go with you, but I must submit to sensibility and work on this coursework tonight, and in fact every night for the foreseeable future’? We dearly want to do whatever our listener would have, but are often forced by sensible considerations to do something else. It never happens the other way around. ‘I’m going to stay in tonight because I would just love to, though I appreciate in sensibleness I should socialize more’. Any option that needs reasons is to be avoided. ‘Do what your heart tells you’ means ‘Do what you are telling me your heart tells you’, or translated further, ‘Do what my heart tells you’.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "TEiR8u3wS6GyaoqRd", "title": "The Academic Epistemology Cross Section: Who Cares More About Status?", "pageUrl": "https://www.lesswrong.com/posts/TEiR8u3wS6GyaoqRd/the-academic-epistemology-cross-section-who-cares-more-about", "postedAt": "2009-11-15T19:37:18.935Z", "baseScore": 16, "voteCount": 15, "commentCount": 15, "url": null, "contents": { "documentId": "TEiR8u3wS6GyaoqRd", "html": "

Bryan Caplan writes:

\n
\n

Almost all economic models assume that human beings are Bayesians...  It is striking, then, to realize that academic economists are not Bayesians.  And they're proud of it!

This is clearest for theorists.  Their epistemology is simple: Either something has been (a) proven with certainty, or (b) no one knows - and no intellectually respectable person will say more... 

Empirical economists' deviation from Bayesianism is more subtle.  Their epistemology is rooted in classical statistics.  The respectable researcher comes to the data an agnostic, and leaves believing \"whatever the data say.\"  When there's no data that meets their standards, they mimic the theorists' snobby agnosticism.  If you mention \"common sense,\" they'll scoff.  If you remind them that even classical statistics assumes that you can trust the data - and the scholars who study it - they harumph.

\n
\n

Robin Hanson offers an explanation:

\n
\n

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds.  If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis.  And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

\n

...beliefs must stay fixed until an impressive enough theorem or data analysis comes along that beliefs should change out of respect for that new display.  It also won’t do to keep beliefs pretty much the same when each new study hardly adds much evidence – that wouldn’t offer enough respect to the new display.

\n
\n

I wonder, what does this look like in the cross section?  In other words, relative to other academic disciplines, which have the strongest tendency to celebrate difficult work but ignore sound-yet-unimpressive work?  My hunch is that economics, along with most other social sciences, would be the worst offenders, while the fields closer to engineering will be on the other end of the spectrum.  Engineers should be more concerned with truth since whatever they build has to, you know, work.  What say you?  More importantly, anyone have any evidence?

" } }, { "_id": "4itu6Qdcn5TNwRY9H", "title": "Respond to flying like dying?", "pageUrl": "https://www.lesswrong.com/posts/4itu6Qdcn5TNwRY9H/respond-to-flying-like-dying", "postedAt": "2009-11-15T08:08:17.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "4itu6Qdcn5TNwRY9H", "html": "

Much of my recent time has passed on aeroplanes, with intervening bits on buses, trains, taxis and dragging an embarrassing mound of luggage between them. Planes share a style of decor and organization which differs from other transport. They feel sanitized and ordered. Every passenger is given a matching set of plasticized provisions. Most things are white or the colours of the airline brand, with no other advertising or decoration. Staff are unusually uniformed. They don’t just offer drinks, but authoritatively ensure that nobody has their tray table down when landing, or their hand luggage not stored under the seat properly. They have a long ritual to explain the detail of safety procedures, as if planes were especially dangerous. All these things are unusual for transport. Why are they specific to planes? I can think of two possible reasons:

\n
    \n
  1. Planes were more recently expensive, high-status transport. Thus they traditionally have more intensive service and a cleaner style, which those who are already paying a lot are willing to pay extra for.
  2. Planes are more inclined to scare their guests than other forms of transport: flying is a popular phobia. An air of meticulous order might go a long way to reverse the fear induced by shuddering around in the air torrents. Especially as those in control are clearly more authoritative than you, casually commanding you to sit down or close your laptop. Obsession with safety procedures could be particularly useful for calming passengers, even if it should suggest that emergencies are more likely. Paranoia about safety matters that passengers don’t care about, such as whether their tray table bumps them, means that they can trust the airline employees to respond strongly at the hint of a real emergency. So they can remain calm as long as they can see the flight attendants are. If this theory were true, it could also explain the similarity to hospital decoration and behaviour.

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "AnGR4v3oDRAkyhdqu", "title": "Auckland meet up Saturday Nov 28th", "pageUrl": "https://www.lesswrong.com/posts/AnGR4v3oDRAkyhdqu/auckland-meet-up-saturday-nov-28th", "postedAt": "2009-11-15T05:29:16.003Z", "baseScore": 3, "voteCount": 3, "commentCount": 12, "url": null, "contents": { "documentId": "AnGR4v3oDRAkyhdqu", "html": "

For any New Zealanders out there, there is a tentative meet up at the Messe bar at 2pm on the 28th of November (Saturday). Just in case you are shy, there will be at least two people there!

\n

Please write a comment and/or contact me on my cell (021 039 8554) if you are interested in coming.

\n

 

" } }, { "_id": "CzbvB4dsLNzLzeeot", "title": "Consequences of arbitrage: expected cash", "pageUrl": "https://www.lesswrong.com/posts/CzbvB4dsLNzLzeeot/consequences-of-arbitrage-expected-cash", "postedAt": "2009-11-13T10:32:46.512Z", "baseScore": 17, "voteCount": 13, "commentCount": 28, "url": null, "contents": { "documentId": "CzbvB4dsLNzLzeeot", "html": "

I prefer the movie Twelve Monkeys to Akira. I prefer Akira to David Attenborough's Life in the Undergrowth. And I prefer David Attenborough's Life in the Undergrowth to Twelve Monkeys.

\n

I have intransitive preferences. But I don't suffer from this intransitivity. Up until the moment I'm confronted by an avatar of the money pump, juggling the three DVD boxes in front of me with a greedy gleam in his eye. He'll arbitrage me to death unless I snap out of my intransitive preferences and banish him by putting my options in order.
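
To make the pump concrete, here is a minimal Python sketch; the swap fee, the PREFERS table and the run_pump helper are hypothetical, chosen purely for illustration. Each round the pump offers a DVD I strictly prefer to the one I hold, for a small fee, and the preference cycle means the fees never stop.

```python
# Minimal sketch (hypothetical prices) of how intransitive preferences get money-pumped.
# Each round the pump offers a swap to a strictly preferred DVD for a small fee;
# with a preference cycle the upgrades never terminate and the fees add up without bound.

PREFERS = {  # "I prefer key to value"
    "Twelve Monkeys": "Akira",
    "Akira": "Life in the Undergrowth",
    "Life in the Undergrowth": "Twelve Monkeys",
}
FEE = 1.0  # assumed price I'd pay for each strictly preferred swap

def run_pump(holding: str, rounds: int) -> float:
    """Return the total fees paid after `rounds` offered swaps."""
    paid = 0.0
    for _ in range(rounds):
        # Find something I prefer to what I currently hold.
        better = next(k for k, v in PREFERS.items() if v == holding)
        holding = better
        paid += FEE  # I accept: by my own lights, each individual swap is worth the fee.
    return paid

if __name__ == "__main__":
    print(run_pump("Akira", rounds=9))  # 9.0 - and it keeps growing with more rounds
```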

\n

Arbitrage, in the broadest sense, means picking up free money - money that is free because of other people's preferences. Money pumps are a form of arbitrage, exploiting the lack of consistency, transitivity or independence in people's preferences. In most cases, arbitrage ultimately destroys itself: people either wise up to the exploitation and get rid of their vulnerabilities, or lose all their money, leaving only players who are not vulnerable to arbitrage. The crash and burn of the Long-Term Capital Management hedge fund was due in part to the diminishing returns of their arbitrage strategies.

\n

Most humans do not react to the possibility of being arbitraged by changing their whole preference systems. Instead they cling to their old preferences as much as possible, while keeping a keen eye out to avoid being taken advantage of. They keep their inconsistent, intransitive, dependent systems but end up behaving consistently, transitively and independently in their most common transactions.

\n

The weaknesses of this approach are manifest. Having one system of preferences but acting as if we had another is a great strain on our poor overloaded brains. To avoid the arbitrage, we need to scan present and future deals with great keenness and insight, always on the lookout for traps. Since transaction costs shield us from most of the negative consequences of imperfect decision theories, we have to be especially vigilant as transaction costs continue to drop, meaning that opportunities to be arbitraged will continue to rise in future. Finally, how we exit the trap of arbitrage depends on how we entered it: if my juggling Avatar had started me on Life in the Undergrowth, I'd have ended up with Twelve Monkeys, and refused the next trade. If he'd started me on Twelve Monkeys, I'd have ended up with Akira. These may not have been the options I'd have settled on if I'd taken the time to sort out my preferences ahead of time.

\n

For these reasons, it is much wiser to change our decision theory ahead of time to something that doesn't leave us vulnerable to arbitrage, rather than clinging nominally to our old preferences.

\n

Inconsistency or intransitivity leaves us vulnerable to a strong money pump, so these we should avoid. Violating independence leaves us vulnerable to a weak money pump, which also means giving up free money, so this should be avoided too. Along with completeness (meaning you can actually decide between options) and the technical assumption of continuity, these make up the von Neumann-Morgenstern axioms of expected utility. Thus if we want to avoid being arbitraged, we should cleave to expected utility.

\n

But the consequences of arbitrage do not stop there.

\n

Quick, which would you prefer, ¥10 000 with certainty, or a 50% chance of getting ¥20 000? Well, it depends on how your utility scales with cash. If it scales concavely, then you are risk averse, while if it scales convexly, then... Stop. Minus the transaction costs, those two options are worth exactly the same thing. If they are freely tradable, then you can exchange them one for one on the world market. Hence if you price the 50% contract at any value other than ¥10 000, you can be arbitraged if you act on your preferences (neglecting transaction costs). People selling contracts to you, or buying them from you, will make instant free money on the trade. Money that would be yours instead if your preferences were different.
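
The arithmetic behind "worth exactly the same thing" is a single line, spelling out the gamble's expected cash value:

```latex
% Expected cash value of the gamble equals the certain payment:
\mathbb{E}[\text{gamble}] \;=\; 0.5 \times 20\,000 + 0.5 \times 0 \;=\; 10\,000 \;=\; \text{the certain payment.}
```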

\n

Of course, you could keep your non-linear utility, and just behave as if it were linear, because of the market price, while being risk-averse in secret... But just as before, this is cumbersome, complicated and unnecessary. Exactly as arbitrage makes you cleave to independence, it will make your utility linear in money - at least for small, freely tradable amounts.

\n

In conclusion: if you want to avoid being arbitraged, you should be an expected utility maximiser, and your utility should be linear in money, at least for small, freely tradable amounts.

\n

Addendum: If contracts such as L = {¥20 000 if a certain coin comes up heads/tails} were freely tradable, they would cost ¥10 000.

\n

Proof: Let LH be the contract that pays out ¥20 000 if that coin comes up heads, and LT the contract that pays out ¥20 000 if that same coin comes up tails. LH and LT together are exactly the same as a guaranteed ¥20 000. However, individually, LH and LT are the same contract - a 50% chance of ¥20 000 - thus by the Law of One Price, they must have the same price (you can get the same result by symmetry). Two contracts with the same price, totalling ¥20 000 together: they must individually be worth ¥10 000.
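
For readers who prefer the argument in symbols, here is a compact restatement, writing p(X) for the market price of contract X and assuming, as the Law of One Price does, that prices add over bundles:

```latex
\begin{align*}
p(L_H) + p(L_T) &= p(L_H + L_T) = 20\,000 && \text{(together they pay 20\,000 yen for certain)}\\
p(L_H) &= p(L_T) && \text{(each is a 50\% chance of 20\,000 yen, hence the same price)}\\
\therefore\quad p(L_H) &= p(L_T) = 10\,000.
\end{align*}
```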

" } }, { "_id": "MvDsCGk94qr3bFZYh", "title": "Boston meetup Nov 15 (and others)", "pageUrl": "https://www.lesswrong.com/posts/MvDsCGk94qr3bFZYh/boston-meetup-nov-15-and-others", "postedAt": "2009-11-13T07:05:34.276Z", "baseScore": 6, "voteCount": 4, "commentCount": 1, "url": null, "contents": { "documentId": "MvDsCGk94qr3bFZYh", "html": "

So far, only one of the less-wrong meet-ups that were discussed has been scheduled.  The Boston meet-up was scheduled for:

\n

Carberry's at 74 Prospect St Cambridge, MA
(1.5 blocks northeast from the Central Square T station)
Sunday November 15th at 2pm

\n

though it may move after an hour or two to the Clear Conscience Cafe a couple blocks away if things get too crowded. 

\n

My cell number is (610) 213 2487 so you can contact me if there is a problem.

\n

Regarding Philly, Florida and New Orleans, the schedules still need details.  I'm leaving New Orleans at 5:10 on the 14th, so the 13th is probably better, but I can do early on the 14th if people want.  There has been some interest in an event there, but I would appreciate more interested people saying so and possibly contacting me via phone or email.  If several people are interested, we will have a meet-up.  If it's just one or two, I can meet less formally.

" } }, { "_id": "n5Yfhygz42QNK2vFe", "title": "Anti-Akrasia Technique: Structured Procrastination", "pageUrl": "https://www.lesswrong.com/posts/n5Yfhygz42QNK2vFe/anti-akrasia-technique-structured-procrastination", "postedAt": "2009-11-12T19:35:45.496Z", "baseScore": 63, "voteCount": 64, "commentCount": 52, "url": null, "contents": { "documentId": "n5Yfhygz42QNK2vFe", "html": "

This idea has been mentioned in several comments, but it deserves a top-level post.  From an ancient, ancient web article (1995!), Stanford philosophy professor John Perry writes:

\n
\n

I have been intending to write this essay for months. Why am I finally doing it? Because I finally found some uncommitted time? Wrong. I have papers to grade, textbook orders to fill out, an NSF proposal to referee, dissertation drafts to read. I am working on this essay as a way of not doing all of those things. This is the essence of what I call structured procrastination, an amazing strategy I have discovered that converts procrastinators into effective human beings, respected and admired for all that they can accomplish and the good use they make of time. All procrastinators put off things they have to do. Structured procrastination is the art of making this bad trait work for you. The key idea is that procrastinating does not mean doing absolutely nothing. Procrastinators seldom do absolutely nothing; they do marginally useful things, like gardening or sharpening pencils or making a diagram of how they will reorganize their files when they get around to it. Why does the procrastinator do these things? Because they are a way of not doing something more important. If all the procrastinator had left to do was to sharpen some pencils, no force on earth could get him do it. However, the procrastinator can be motivated to do difficult, timely and important tasks, as long as these tasks are a way of not doing something more important.

\n
\n

The insightful observation that procrastinators fill their time with effort, not staring at the walls, gives rise to this form of akrasia aikido, where the urge to not do something is cleverly redirected into productivity.  If you can \"waste time\" by doing useful things, while feeling like you are avoiding doing the \"real work\", then you avoid depleting your limited supply of willpower (which happens when you force yourself to do something).

\n

In other words, structured procrastination (SP) is an efficient use of this limited resource, because doing A in order to avoid doing B is easier than making yourself do A.  If A is something you want to get done, then the less willpower you need to spend on it, the more you will be able to accomplish.  This only works if A is something that you do want to get done - that's how SP differs from normal procrastination, of course.

\n

Like most information workers, I am constantly distracted by social networks - reading Twitter, blogs, answering email.  I don't do these things because they are effortless and restful (that's what reading fiction is for), but because they feel like moderately productive work (learning new ideas about the world, keeping up on my ideasphere, connecting with people) that just so happens to be fun and easy.  Unfortunately, that joy and ease translates into a distorted feeling about whether and how much I am accomplishing things.  The marginal product of my spending 15 minutes on social networks may be positive, but it's close to zero compared to other kinds of work I could be doing.

\n

SP suggests that I find other things which both feel like productive avoidance and actually are.  For example, rather than reading blogs, I could read one of the dozens of books piled up that have been suggested as relevant to my areas of research.  Yeah, that wouldn't be as much fun as digesting the clever little bites that are blog posts, but it still feels like avoiding the main unpleasant tasks, and it's actually important enough to be on my todo list (if not at the top).

\n

Maybe this is just me applying my standard rationality themes everywhere, but I think that self-awareness and action vs. reaction are key to structured procrastination.  When you are reacting to the vague feeling that you want to avoid doing something, you will automatically get driven towards quick and easy fixes - leave the report you are supposed to write in one window, and go to Google Reader in another.  Anything to scratch the itch of avoidance.

\n

But if you can have the self-awareness to notice your reaction of avoidance, you get to roll a saving throw about whether to make a conscious decision.  If you pass, now you have a choice: buckle down, go with the distraction, or do structured procrastination?  This choice opportunity doesn't automatically let you do the right thing - doing your primary task will still require a major expenditure of willpower.  But at least it stops you from automatically doing the wrong thing, and gives you a chance to use akrasia aikido like SP to apply your willpower efficiently.  Or perhaps go for a truly restful option like taking a break, going outside, or getting a drink, which actually rests your mind (and willpower muscle) much more effectively than looking at the latest on Digg.

\n

There is more nuance to structured procrastination, but this post has gotten long enough already.  If people find the topic interesting, I can write more about its weaknesses and my ideas for addressing them.

" } }, { "_id": "dciuB5nTG2o9PzJWe", "title": "Test Your Calibration!", "pageUrl": "https://www.lesswrong.com/posts/dciuB5nTG2o9PzJWe/test-your-calibration", "postedAt": "2009-11-11T22:03:38.439Z", "baseScore": 25, "voteCount": 21, "commentCount": 34, "url": null, "contents": { "documentId": "dciuB5nTG2o9PzJWe", "html": "

In my journeys across the land, I have, to date, encountered four sets of probability calibration tests. (If you just want to make bets on your predictions, you can use Intrade or another prediction market, but these generally don't record calibration data, only which of your bets paid out.) If anyone knows of other tests, please do mention them in the comments, and I'll add them to this post. To avoid spoilers, please do not post what you guessed for the calibration questions, or what the answers are.

\n

The first, to boast shamelessly, is my own, at http://www.acceleratingfuture.com/tom/?p=129. My tests use fairly standard trivia questions (samples: \"George Washington actually fathered how many children?\", \"Who was Woody Allen's first wife?\", \"What was Paul Revere's occupation?\"), with an emphasis towards history and pop culture. The quizzes are scored automatically (by computer) and you choose whether to assign a probability of 96%, 90%, 75%, 50%, or 25% to your answer. There are five quizzes with fifty questions each: Quiz #1, Quiz #2, Quiz #3, Quiz #4 and Quiz #5.

\n

The second is a project by John Salvatier (LW account) of the University of Washington, at http://calibratedprobabilityassessment.org/. There are three sets of questions with fifty questions each; two sets of general trivia, and one set of questions about relative distances between American cities (the fourth set, unfortunately, does not appear to be working at this time). The questions do not rotate, but are re-ordered upon refreshing. The probabilities are again multiple choice, with ranges of 51-60%, 61-70%, 71-80%, 81-90%, and 91-100%, for whichever answer you think is more probable. These quizzes are also scored by computer, but instead of spitting back numbers, the computer generates a graph, showing the discrepancy between your real accuracy rate and your claimed accuracy rate. Links: US cities, trivia #1, trivia #2.
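
For the curious, the scoring behind a quiz like this amounts to grouping answers by the probability you claimed and comparing claimed accuracy with actual accuracy. Here is a minimal sketch of that calculation; the bucket values and sample data are invented for illustration, and this is not the site's actual code:

```python
# Minimal sketch of calibration scoring: group answers by the probability you
# claimed, then compare claimed accuracy with actual accuracy in each bucket.
# The buckets and sample data below are made up for illustration.
from collections import defaultdict

def calibration_report(answers):
    """answers: list of (claimed_probability, was_correct) pairs."""
    buckets = defaultdict(list)
    for claimed, correct in answers:
        buckets[claimed].append(correct)
    for claimed in sorted(buckets):
        hits = buckets[claimed]
        actual = sum(hits) / len(hits)
        print(f"claimed {claimed:.0%}: actual {actual:.0%} over {len(hits)} questions")

if __name__ == "__main__":
    sample = [(0.9, True), (0.9, True), (0.9, False),   # claimed 90%, got 2/3
              (0.6, True), (0.6, False), (0.6, False)]  # claimed 60%, got 1/3
    calibration_report(sample)
```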

\n

The third is a quiz by Steven Smithee of Black Belt Bayesian (LW account here) at http://www.acceleratingfuture.com/steven/?p=96. There are three sets, of five questions each, about history, demographics, and Google rankings, and two sets of (non-testable) questions about the future and historical counterfactuals. (EDIT: Steven has built three more tests in addition to this one, at http://www.acceleratingfuture.com/steven/?p=102, http://www.acceleratingfuture.com/steven/?p=106, and http://www.acceleratingfuture.com/steven/?p=136). This test must be graded manually, and the answers are in one of the comments below the test (don't look at the comments if you don't want spoilers!).

\n

The fourth is a website by Tricycle Developments, the web developers who built Less Wrong, at http://predictionbook.com/. You can make your own predictions about real-world events, or bet on other people's predictions, at whatever probability you want, and the website records how often you were right relative to the probabilities you assigned. However, since all predictions are made in advance of real-world events, it may take quite a while (on the order of months to years) before you can find out how accurate you were.

" } }, { "_id": "iLGrNTwTZivTX5774", "title": "Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions", "pageUrl": "https://www.lesswrong.com/posts/iLGrNTwTZivTX5774/less-wrong-q-and-a-with-eliezer-yudkowsky-ask-your-questions", "postedAt": "2009-11-11T03:00:39.093Z", "baseScore": 19, "voteCount": 19, "commentCount": 701, "url": null, "contents": { "documentId": "iLGrNTwTZivTX5774", "html": "

As promised, here is the \"Q\" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.

\n

The Rules

\n

1) One question per comment (to allow voting to carry more information about people's preferences).

\n

2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

\n

3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.

\n

4) If you reference certain things that are online in your question, provide a link.

\n

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]

\n

Suggestions

\n

Don't limit yourself to things that have been mentioned on OB/LW. I expect that this will be the majority of questions, but you shouldn't feel limited to these topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting sort out the wheat from the chaff.

\n

It's okay to attempt humor (but good luck, it's a tough crowd).

\n

If a discussion breaks out about a question (f.ex. to ask for clarifications) and the original poster decides to modify the question, the top level comment should be updated with the modified question (make it easy to find your question, don't have the latest version buried in a long thread).

\n

Update: Eliezer's video answers to 30 questions from this thread can be found here.

" } }, { "_id": "2hD9GoYBG3PwK5qK7", "title": "What makes you YOU? For non-deists only.", "pageUrl": "https://www.lesswrong.com/posts/2hD9GoYBG3PwK5qK7/what-makes-you-you-for-non-deists-only", "postedAt": "2009-11-10T19:59:41.456Z", "baseScore": 2, "voteCount": 17, "commentCount": 93, "url": null, "contents": { "documentId": "2hD9GoYBG3PwK5qK7", "html": "

\n

From the dawn of civilization, humans have believed in eternal life. The flesh may rot, but the soul will be reborn. To save the soul from potentially adverse living conditions (e.g. hell), the body, being the transient and thus less important part, was expected to make sacrifices. To accumulate the best possible karma, pleasures of the flesh had to be given up or at least heavily curtailed.

\n

 

\n

Naturally, the wisdom of this trade-off has been questioned by many skeptical minds. The idea of reincarnation may have a strong appeal to the imagination, but in the absence of any credible evidence, Occam’s razor mercilessly cuts it to pieces. Instead of sacrificing for the sake of future incarnations, a rationalist should live for the present. But does he really?

\n

 

\n

Consider the “incarnations” of the same person at different ages. Upon reaching the age of self-awareness, the earlier “incarnations” start making sacrifices for the benefit of the later ones. Dreams of becoming an astronaut at 25 may prompt a child of nine to exercise or study instead of playing. Upon reaching the age of 25, the same child may take a job at the bank and start saving for the potential retirement. Of course, legally all these “incarnations” are just the same person. But beyond jurisprudence, what is it that makes you who you are at the age of nine, twenty five or seventy?

\n

 

\n

Over the years your body, tastes, goals and whole worldview are likely to undergo dramatic change.  The single thing which remains essentially constant through your entire life is your DNA sequence. Through natural selection, evolution has ensured that we preferentially empathize with those whose DNA sequence is most similar to our own, i.e. our children, siblings and, most importantly, ourselves. But, instinct excepted, is there a reason why a rational self-conscious being must obey a program implanted in us by the unconscious force of evolution? If you identify more with your mind (personality/views/goals/…) than with your DNA sequence, why should you care more for someone who, living many years from now, will resemble you less than some actual people living today?

\n

 

\n

P.S. I am aware that the meaning of “self” has been debated by philosophers for many years, but I am really curious about the personal answers of “ordinary” rationalists to this question.

\n

" } }, { "_id": "Kh8JDtqvKoRBHJiXn", "title": "Restraint Bias", "pageUrl": "https://www.lesswrong.com/posts/Kh8JDtqvKoRBHJiXn/restraint-bias", "postedAt": "2009-11-10T17:23:53.075Z", "baseScore": 21, "voteCount": 21, "commentCount": 12, "url": null, "contents": { "documentId": "Kh8JDtqvKoRBHJiXn", "html": "

Ed Yong over at Not Exactly Rocket Science has an article on a study demonstrating \"restraint bias\" (reference), which seems like an important thing to be aware of in fighting akrasia:

\n

People who think they are more restrained are more likely to succumb to temptation

\n
\n

In a series of four experiments, Loran Nordgren from Northwestern University showed that people suffer from a \"restraint bias\", where they overestimate their ability to control their own impulses. Those who fall prey to this fallacy most strongly are more likely to dive into tempting situations. Smokers, for example, who are trying to quit, are more likely to put themselves in situations if they think they're invulnerable to temptation. As a result, they're more likely to relapse.

\n
\n

Thus, not only do people overestimate their ability to carry out non-immediate plans (far-mode thinking, as in the planning fallacy), but the more confident ones also turn out to be the least able. This might have something to do with how public commitment may be counterproductive: once you've effectively signaled your intentions, the pressure to actually implement them fades away. Once you believe yourself to have asserted the self-image of a person with good self-control, maintaining the actual self-control loses priority.

\n

See also: Akrasia, Planning fallacy, Near/far thinking.

\n

Related to: Image vs. Impact: Can public commitment be counterproductive for achievement?

" } }, { "_id": "NcEqgAzcRYeEpbrDx", "title": "Rationality advice from Terry Tao", "pageUrl": "https://www.lesswrong.com/posts/NcEqgAzcRYeEpbrDx/rationality-advice-from-terry-tao", "postedAt": "2009-11-10T17:17:23.357Z", "baseScore": 22, "voteCount": 22, "commentCount": 13, "url": null, "contents": { "documentId": "NcEqgAzcRYeEpbrDx", "html": "

Via a link on IRC, I stumbled upon the blog of the mathematician Terry Tao. I noticed that several of his posts contain useful rationality advice, part of it overlapping with content that has been covered here. Most of the posts remind us of things that are kind of obvious, but I don't think that's necessarily a bad thing: we often need reminders of the things that are obvious.

\r\n

Advance warning: the posts are pretty well interlinked, in Wikipedia/TVTropes fashion. I currently have 15 tabs open from the site.

\r\n

Some posts of note:

\r\n

Be sceptical of your own work. If you unexpectedly find a problem solving itself almost effortlessly, and you can’t quite see why, you should try to analyse your solution more sceptically. Most of the time, the process for solving a major problem is a lot more complex and time-consuming.

\r\n

Use the wastebasket. Not every idea leads to a success, and not every first draft forms a good template for the final draft. Know when to start over from scratch, know when you should be persistent, and do keep copies around of even the failed attempts.

\r\n

Learn the limitations of your tools. Knowing what your tools cannot do is just as important as knowing what they can do.

\r\n

Learn and relearn your field. Simply learning the statement and proof of a result doesn't guarantee understanding: you should test your understanding, using methods such as finding alternate proofs and trying to generalize the argument.

\r\n

Write down what you've done. Write down sketches of any interesting arguments you come across - not necessarily at a publication level of quality, but detailed enough that you can forget about the details and reconstruct them later on.

" } }, { "_id": "J8LTE8CTQfyEMkhnc", "title": "Reflections on Pre-Rationality", "pageUrl": "https://www.lesswrong.com/posts/J8LTE8CTQfyEMkhnc/reflections-on-pre-rationality", "postedAt": "2009-11-09T21:42:22.087Z", "baseScore": 28, "voteCount": 12, "commentCount": 30, "url": null, "contents": { "documentId": "J8LTE8CTQfyEMkhnc", "html": "

This continues my previous post on Robin Hanson's pre-rationality, by offering some additional comments on the idea.

\n

The reason I re-read Robin's paper recently was to see if it answers a question that's related to another of my recent posts: why do we human beings have the priors that we do? Part of that question is why are our priors pretty close to each other, even if they're not exactly equal. (Technically we don't have priors because we're not Bayesians, but we can be approximated as Bayesians, and those Bayesians have priors.) If we were created by a rational creator, then we would have pre-rational priors. (Which, since we don't actually have pre-rational priors, seems to be a good argument against us having been created by a rational creator. I wonder what Aumann would say about this?) But we have other grounds for believing that we were instead created by evolution, which is not a rational process, in which case the concept doesn't help to answer the question, as far as I can see. (Robin never claimed that it would, of course.)

\n

The next question I want to consider is a normative one: is pre-rationality rational? Pre-rationality says that we should reason as if we were pre-agents who learned about our prior assignments as information, instead of just taking those priors as given. But then, shouldn't we also act as if we were pre-agents who learned about our utility function assignments as information, instead of taking them as given? In that case, we're led to the conclusion that we should all have common utility functions, or at least that pre-rational agents should have values that are much less idiosyncratic than ours. This seems to be a reductio ad absurdum of pre-rationality, unless there is an argument why we should apply the concept of pre-rationality only to our priors, and not to our utility functions. Or is anyone tempted to bite this bullet and claim that we should apply pre-rationality to our utility functions as well? (Note that if we were created by a rational creator, then we would have common utility functions.)

\n

The last question I want to address is one that I already raised in my previous post. Assuming that we do want to be pre-rational, how do we move from our current non-pre-rational state to a pre-rational one? This is somewhat similar to the question of how do we move from our current non-rational (according to ordinary rationality) state to a rational one. Expected utility theory says that we should act as if we are maximizing expected utility, but it doesn't say what we should do if we find ourselves lacking a prior and a utility function (i.e., if our actual preferences cannot be represented as maximizing expected utility).

\n

The fact that we don't have good answers for these questions perhaps shouldn't be considered fatal to pre-rationality and rationality, but it's troubling that little attention has been paid to them, relative to defining pre-rationality and rationality. (Why are rationality researchers more interested in knowing what rationality is, and less interested in knowing how to be rational? Also, BTW, why are there so few rationality researchers? Why aren't there hordes of people interested in these issues?)

\n

As I mentioned in the previous post, I have an idea here, which is to apply some concepts related to UDT, in particular Nesov's trading across possible worlds idea. As I see it now, pre-rationality is mostly about the (alleged) irrationality of disagreements between counterfactual versions of the same agent, when those disagreements are caused by irrelevant historical accidents such as the random assortment of genes. But how can such agents reach an agreement regarding what their beliefs should be, when they can't communicate with each other and coordinate physically? Well, at least in some cases, they may be able to coordinate logically. In my example of an AI whose prior was picked by the flip of a coin, the two counterfactual versions of the AI are similar enough to each other and symmetrical enough, for each to infer that if it were to change its prior from O or P to Q, where Q(A=heads)=0.5, the other AI would do the same, but this inference wouldn't be true for any Q' != Q, due to lack of symmetry.

\n

Of course, in the actual UDT, such \"changes of prior\" do not literally occur, because coordination and cooperation between possible worlds happen naturally as part of deciding acts and strategies, while one's preferences stay constant. Is that sufficient, or do we really need to change our preferences and make them pre-rational? I'm not sure.

" } }, { "_id": "JBjZcYtva3RmDZWyc", "title": "Unpromising promise?", "pageUrl": "https://www.lesswrong.com/posts/JBjZcYtva3RmDZWyc/unpromising-promise", "postedAt": "2009-11-09T19:36:17.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "JBjZcYtva3RmDZWyc", "html": "

Marriage usually involves sharing and exchanging a huge bunch of things. Love, sex, childcare, money, cooperation in finding a mutually agreeable place for the knives to live, etc. For all of these but one, you can verify whether I’m upholding my side of the deal. And for all but one, I can meaningfully promise to keep my side of the deal more than a day into the future. Yet the odd one out, love, is the one that we find most suited to making eternal promises about. Are these things related?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "c9EWsufsjc7NLwWM2", "title": "Practical rationality in surveys", "pageUrl": "https://www.lesswrong.com/posts/c9EWsufsjc7NLwWM2/practical-rationality-in-surveys", "postedAt": "2009-11-08T14:27:41.688Z", "baseScore": -3, "voteCount": 8, "commentCount": 11, "url": null, "contents": { "documentId": "c9EWsufsjc7NLwWM2", "html": "

\"Statistically significant results\" mean that there's a 5% chance that results are wrong in addition to chance that the wrong thing was measures, chance that sample was biased, chance that measurement instruments were biased, chance that mistakes were made during analysis, chance that publication bias skewed results, chance that results were entirely made up and so on.

\n

\"Not statistically significant results\" mean all those, except chance of randomly mistaken results even if everything was ran correct is not 5%, but something else, unknown, and dependent of strength of the effect measured (if the effect is weak, you can have study where chance of false negative is over 99%).

\n

So whether results are statistically significant or not is really not that useful.

\n

For example, here's a survey of civic knowledge. Plus or minus 3% measurement error? Not this time, they just completely made up the results.

\n

Take-home exercise: what do you estimate the Bayesian chance of published results being wrong to be?

" } }, { "_id": "spkSkeTqoQBbx2jP5", "title": "Protect the seemingly useless", "pageUrl": "https://www.lesswrong.com/posts/spkSkeTqoQBbx2jP5/protect-the-seemingly-useless", "postedAt": "2009-11-08T12:18:25.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "spkSkeTqoQBbx2jP5", "html": "

Advice I could have done with as a teenager, from G.K. Chesterton:

\n

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

\n

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.

\n

Thanks to Mike Blume via me via Robert Wiblin via Jane Galt.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "GS5Ef9FePbn6eP2CR", "title": "The Danger of Stories", "pageUrl": "https://www.lesswrong.com/posts/GS5Ef9FePbn6eP2CR/the-danger-of-stories", "postedAt": "2009-11-08T02:53:29.184Z", "baseScore": 12, "voteCount": 10, "commentCount": 105, "url": null, "contents": { "documentId": "GS5Ef9FePbn6eP2CR", "html": "

Tyler Cowen argues in a TED talk (~15 min) that stories pervade our mental lives.  He thinks they are a major source of cognitive biases and, on the margin, we should be more suspicious of them - especially simple stories.  Here's an interesting quote about the meta-level:

\n
\n

What story do you take away from Tyler Cowen?  ...Another possibility is you might tell a story of rebirth.  You might say, \"I used to think too much in terms of stories, but then I heard Tyler Cowen, and now I think less in terms of stories\". ...You could also tell a story of deep tragedy.  \"This guy Tyler Cowen came and he told us not to think in terms of stories, but all he could do was tell us stories about how other people think too much in terms of stories.\"

\n
" } }, { "_id": "zyoDqxif5onHd7w8e", "title": "Externalizing between conformers", "pageUrl": "https://www.lesswrong.com/posts/zyoDqxif5onHd7w8e/externalizing-between-conformers", "postedAt": "2009-11-07T12:54:03.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "zyoDqxif5onHd7w8e", "html": "

or Why I could conceivably support banning smoking, part 1

\n

Suppose that people are rational and their goals are consistent, and they are free to choose whatever activities they like, as long as they don’t harm others. Suppose we don’t care about equality or whether lifestyles are nihilistic, or anything else Wikipedia claims might be wrong with libertarianism. Should we expect people to approximately end up with the best sets of behaviour? If they smoke, should we infer that they like smoking more than they dislike having lung cancer far in the future? If they watch intellectual documentaries rather than porn, should we assume that they have wisely established that they like looking smart more than raunchy fantasy? Many think so, and support libertarianism for this reason.

\n

This makes sense if humans are independently choosing activities. But the all time favorite activity of nearly everyone is doing what other people are doing. This makes such an argument more complicated.

\n

Imagine everyone is doing A. Everyone likes doing B more than doing A, but not as much as they like conformity. There would be a huge gain to a coordinated shift to B, but nobody moves there alone. In some such situations those involved arrange coordination, but often it is impossible. If there are many equilibria like this, and no means to move to better ones, intervention by someone with the power to force a coordinated move could be a great thing.

\n

A good example of this I saw was during first year at college. Everyone used to go to Southpac to drink. I was baffled, as it was probably not just the worst night club around, but actually the least pleasant place I had ever been, possibly but not definitely excluding ankle deep in poo and mud with rotten meat juice running up my arms and dogs clawing at me. When I asked, everyone said they hated it, but it was overall the best place to go, because that’s where everyone else went. It seemed that there were too many people for any student to easily coordinate everyone going somewhere else, so the original equilibrium remained until Southpac was closed down for using (cheap, poisonous) methylated spirits in the drinks. The student council got sponsorship somewhere else, and everyone else went there instead.

                       Roger
               Southpac     Elsewhere
Everyone else
  Southpac        2             1
  Elsewhere       0             3

Payoffs for Roger in choosing a nightclub

\n

In the above table, assume ‘everyone else’ is made up of people in the same situation as Roger. Roger doesn’t want to dance alone, so he gets 2 happiness from going to the same club as everyone else. He also doesn’t like being attached to the floor by stickiness and vomit, but it’s less of an issue, so he gets 1 happiness from going anywhere but Southpac. Everyone going to Southpac and everyone going elsewhere are both Nash equilibria, but the going-elsewhere equilibrium is half as good again.
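
A brute-force check of the pure-strategy Nash equilibria makes the two-equilibria claim explicit. The sketch below treats ‘everyone else’ as a single player with the same payoffs as Roger, which is a simplifying assumption for illustration:

```python
# Sketch: brute-force check of pure-strategy Nash equilibria for the nightclub
# coordination game, treating "everyone else" as a single player with the same
# payoffs as Roger (a simplifying assumption).
PAYOFF = {  # (my choice, other's choice) -> my payoff, from the table above
    ("Southpac", "Southpac"): 2, ("Southpac", "Elsewhere"): 0,
    ("Elsewhere", "Southpac"): 1, ("Elsewhere", "Elsewhere"): 3,
}
CHOICES = ("Southpac", "Elsewhere")

def is_nash(a: str, b: str) -> bool:
    # Neither player can gain by unilaterally switching clubs.
    a_ok = all(PAYOFF[(a, b)] >= PAYOFF[(alt, b)] for alt in CHOICES)
    b_ok = all(PAYOFF[(b, a)] >= PAYOFF[(alt, a)] for alt in CHOICES)
    return a_ok and b_ok

print([(a, b) for a in CHOICES for b in CHOICES if is_nash(a, b)])
# -> [('Southpac', 'Southpac'), ('Elsewhere', 'Elsewhere')]
```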

\n

Why wouldn’t people be able to coordinate to change? One reason is group size or ungainliness. The other is that liking the current activity sends a signal. When the current activity signals group loyalty, for instance, suggesting that everyone choose a different one marks you as disloyal as fast as refusing to participate alone does.

\n

This doesn’t necessarily mean government intervention is necessary. That might still be worse than freedom, because if the government were to legislate culture it would be hard to verify that they were doing it in only the justified instances. It does seem to mean that free choice will not lead to the best outcomes however, undermining some justification for libertarianism.

\n

Whether we should be concerned about externalities that others choose to bear is a matter of contention. If you should be discouraged from an activity because others want to do the same as you and they don’t like that activity, you should probably also be discouraged from displaying homosexuality where it is unpopular, or from being ugly, for instance. These also harm others, because they choose to disapprove. I think most would disagree that externalities caused by others choosing to care what you are doing should be regulated. I suspect such a sentiment is just a heuristic for letting those who have the greatest interest in something have control over it, so people should usually be allowed to do unpopular things visibly, but in this case forced change may be a good thing.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "CTNvXTaZ4FPPZJG8E", "title": "Determinism is not blame-free", "pageUrl": "https://www.lesswrong.com/posts/CTNvXTaZ4FPPZJG8E/determinism-is-not-blame-free", "postedAt": "2009-11-06T23:28:52.000Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "CTNvXTaZ4FPPZJG8E", "html": "

If a person seems to have done something wrong, we check that they weren’t forced by circumstance. If we find they had no choice, we don’t punish them.

\n

As we learn about the detail of ourselves, we see more and more circumstances forcing our decisions. The social context, your beliefs, your genes, your personality disorders, some randomness you didn’t control. In the end the circumstance of being yourself threatens to doom you to all of your actions. Many feel this is a threat indeed, for then we must abandon responsibility as a concept.

\n

The extremes of this are treated as a philosophical issue, but at the margin it is quite practical. This week, for instance, a murderer had his sentence cut because his genes promoted aggression. A common sentiment among those commenting is that in these cases we should distinguish between punishment, prevention and rehabilitation: he does not deserve to be punished, as he had no choice, but we don’t want him around, so he should be politely detained and helped.

\n

Trying to draw a line between what could and couldn’t have been any other way under the circumstances is of course misguided. There is not even a gradient of degree of choice on which to draw a line for practical purposes. Everything was determined. But there is another important gradient. The key factor is how different reality would have needed to be for the person to make a different ‘choice’.

\n

At one end of this gradient, all a killer needed was for one neuron to fire differently and she would have chickened out. At the other end, things would have had to be different for years for the death to be avoided. For instance, the killer is a careful driver who runs over a child darting onto the road. He could have prevented it by never driving, which would have required his whole life to be different.

\n

At some point, the cost of being good is equal to the cost of potentially getting punished in a certain way. Sins that are cheaper to avoid than this we should punish in that way (if prevention is worth the effort to us); those that it would take more to alter, we should not.

\n

This gradient seems to approximately coincide with when we call things a matter of choice. We also seem to roughly draw a line on it where our usual punishments, such as social ostracism, fail to change behavior. If a behavior can’t be brought down by stigma and bad treatment, it’s probably out of your control. If it can, it’s you. If people persist in smoking and drinking after we have stigmatized them and banned them, we conclude that they are probably addicted. When a person’s illness clears up on invitation to a party, we suspect them of control. I’m not sure how well felt responsibility coincides with punishment being worthwhile, but it looks approximately close. There is also the matter of which choices we can see the restriction on. Until recently we couldn’t see that DNA was an influence. Are there other influences that we can see and we still count as choice, or can’t see and count as no choice?

\n

Anyway, for some reason the line where we punish looks to us like predictable vs. not predictable, so when we look closely and find more predictable things, we want to move the line. This is a problem, because knowing about genes doesn’t make it any more expensive to change behavior influenced by them.

\n

Some people, on thinking about this, say that lack of choice has no implications for responsibility then. We can safely embrace our physicality free from ethical consequences, because responsibility isn’t about philosophical free will. This I disagree with. The logic behind punishment may be independent of vague notions of choice, but our feelings about responsibility are tied to the latter and indifferent about the former. If we actually managed to believe in determinism, punishment may be well justified, but we would largely lose the will to do it. This would be a problem, because it would still be justified.

\n

Perhaps as a society we could understand the merits of punishment and commit to consistently punishing law breakers, but now with cool compassion. This would do little good though. Plenty of punishment isn’t by the law, but by individuals. Even what the law deals with often requires individuals to tell the law about it. If people weren’t lividly bent on justice, they wouldn’t report thefts, damage and violence, because it takes them effort and often gives them nothing but the pleasure of retribution. If punishment were to become a coldly calculated activity we could lose the ability to commit on an individual level to irrationally pay the costs of punishing. That would spoil the cooperation we currently have between one another and ourselves over time to keep harmful activities rare.

\n

I still believe in determinism of course, but I don’t think this is necessarily a safe belief.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "C2uvzYeoMkwMmscMx", "title": "Hamster in Tutu Shuts Down Large Hadron Collider", "pageUrl": "https://www.lesswrong.com/posts/C2uvzYeoMkwMmscMx/hamster-in-tutu-shuts-down-large-hadron-collider", "postedAt": "2009-11-06T16:29:30.210Z", "baseScore": 49, "voteCount": 72, "commentCount": 73, "url": null, "contents": { "documentId": "C2uvzYeoMkwMmscMx", "html": "

The Large Hadron Collider was shut down yesterday by a hamster in a tutu, weary scientists announced.

\n

The Large Hadron Collider is the successor to the earlier Superconducting Super Collider, which was shut down by the US House of Representatives in 1993 after 14 miles of tunnel had been constructed at a cost of $2 billion.  Since its inception, the Large Hadron Collider has been plagued by construction delays, dead technicians, broken magnet supports, electrical faults, helium containment failures, vacuum leaks, birds with baguettes, terrorists, ninjas, pirates, supervillains, hurricanes, asteroids, cosmic energy storms, and a runaway train.  On one occasion it was discovered that the entire 17-mile circular tunnel had been built upside-down due to a sign error in the calculations, and the whole facility had to be carefully flipped by a giant spatula.

\n

One year ago, hopes were raised for the first time in decades when it was discovered that all the incidents up until that point had been the work of a sinister globe-spanning conspiracy of religious fanatics who, inspired by the term \"God Particle\", had decided that no one could ever be allowed to look upon the hypothetical Higgs boson.  This discovery was widely considered to have undermined the theory that Nature abhors a sufficiently powerful particle collider.  Though some found it suspicious that the Higgs boson would even have a religious cult devoted to preventing its observation, the affair did have a patina of surface plausibility - after all, a giant plot to prevent physicists from observing the Higgs boson makes around as much sense as anything else religious people do.

\n

After the conspiracy was shut down by heroic international detectives in an operation so hugely dramatic that it would be pointless to summarize it here, the world began to wonder whether the LHC might really, really work this time around.  Scientists everywhere held their breaths as the bodies were cleared out, the tunnels reconditioned, and the broken magnets replaced, all without incident.  The price of large hadrons held steady on the commodities market, permitting the LHC's reservoirs to be fully stocked.  Proton beams were successfully formed and circulated through the giant tunnel.

\n

Moments before the first collision was scheduled to probe the theretofore-unachieved energy of 3.5 TeV, a hamster in a tutu materialized from nowhere at the intended collision point.  The poor creature didn't even have time for a terrified squeak before the two proton beams smashed into it, releasing the equivalent energy of 724 megajoules or 173 kilograms of TNT.

\n

The dispirited scientists of the LHC have announced that this will create a 24-month delay while tiny bits of hamster are cleaned out of the tunnels and anti-hamster-materialization fields are installed in the collider.

\n

At the poorly attended press conference, journalists asked whether it might finally be time to give up.

\n

\"Nature's just messing with you, man,\" said a reporter from the New York Times.  \"You need to admit this isn't going to work out.\"

\n

Professor Nicholas von Shnicker, project leader of the LHC, responded.

\n

\"NEVER!\" shrieked von Shnicker, spittle flying from his lips and spattering on his ragged beard.  \"Ve vill NEVER give up!  My father spent his life trying to make the LHC vork, and his father!  Even if it takes a century, if it takes a thousand years or ten thousand million years, VE VILL SEE THE HIGGS BOSON IN OUR LIFETIMES!\"

\n

Prof. Kill McBibben is the author of the recently released book Enough, which proposes a new theory of the mysterious Counter-Force that prevents the LHC from operating.  \"It's not the Higgs boson going back in time,\" says Prof. McBibben, \"nor is it the anthropic principle preventing a black hole from forming.  We've just hit the point that we all knew was coming - that we all knew had to happen someday.  We've reached the limits of human science.  We are just not allowed to build colliders at higher than a certain energy, or know more than a certain amount of particle physics.  This is the end of the road.  We're done.\"

" } }, { "_id": "2kBKELS6mmEhydLvR", "title": "All hail the Lisbon Treaty! Or is that \"hate\"? Or just \"huh\"?", "pageUrl": "https://www.lesswrong.com/posts/2kBKELS6mmEhydLvR/all-hail-the-lisbon-treaty-or-is-that-hate-or-just-huh", "postedAt": "2009-11-06T10:42:35.407Z", "baseScore": -3, "voteCount": 9, "commentCount": 39, "url": null, "contents": { "documentId": "2kBKELS6mmEhydLvR", "html": "

The Lisbon treaty was finally ratified last Tuesday, in a most wonderfully disdainful signing ceremony.

\n

I take it that everyone on the list is emotionally overwhelmed by this, one of the most important political events in recent history. The world's largest economy has taken a firm step towards statehood; the ramifications of this will be felt across the world. People will die who would have lived; people will live who would have died: the body count is much affected. The potential implications for AI alone (think political singleton, research funding priorities) are huge. Depending on your opinion of the consequences, you are probably dumped into a dark ditch of despair or swimming in a limitless ocean of triumphant glee.

\n

If neither is the case... why not?

" } }, { "_id": "hknmGGsZMdGGduS54", "title": "Bay area LW meet-up", "pageUrl": "https://www.lesswrong.com/posts/hknmGGsZMdGGduS54/bay-area-lw-meet-up", "postedAt": "2009-11-06T07:49:11.816Z", "baseScore": 9, "voteCount": 9, "commentCount": 70, "url": null, "contents": { "documentId": "hknmGGsZMdGGduS54", "html": "

The November LW/OB meet-up will be this Saturday (two days from today), at the SIAI house in Santa Clara.  Apologies for the late notice.  We'll have fun, food, and attempts at rationality, as well as good general conversation.  Details at the bay area OB/LW meet-up page.

" } }, { "_id": "y24ue9LLJdTmpPZnz", "title": "News: Improbable Coincidence Slows LHC Repairs", "pageUrl": "https://www.lesswrong.com/posts/y24ue9LLJdTmpPZnz/news-improbable-coincidence-slows-lhc-repairs", "postedAt": "2009-11-06T07:24:31.000Z", "baseScore": 9, "voteCount": 8, "commentCount": 28, "url": null, "contents": { "documentId": "y24ue9LLJdTmpPZnz", "html": "

Related to: How Many LHC Failures is Too Many?

\n

My first reaction to this was that it had to be a joke, but I thought Less Wrong readers would like to know that The Times of London is reporting that repairs on the Large Hadron Collider have been delayed by overheating caused by a piece of bread, possibly dropped by a bird:

\n
\n

The rehabilitation of the beleaguered Large Hadron Collider was on hold tonight after the failure of one of its powerful cooling units caused by an errant chunk of baguette.

\n

The £4 billion particle-collider faced more than a year of delays after a helium leak stymied the project in its first few days of operation. It is gradually being switched back on over the coming months but suffered a new setback on Tuesday morning.

\n

Scientists at the CERN particle physics laboratory in Geneva noticed that the system’s carefully monitored temperatures were creeping up.

\n

\n\n\n

\n

Further investigation into the failure of a cryogenic cooling plant revealed an unusual impediment. A piece of crusty bread had paralysed a high voltage installation that should have been powering the cooling unit. [...]

\n

A spokeswoman for CERN confirmed that baguette was responsible for the latest hiatus, but she conceded that mystery surrounded the way it got into the vital power installation, which is protected by high security fences.

\n

“Nobody knows how it got there,” she told The Times. “The best guess is that it was dropped by a bird, either that or it was thrown out of a passing aeroplane.”

\n

“Obviously this was slightly surprising. Within the team there was some amusement once they had relaxed after initial concerns.”

\n
\n \n

I'm rather confident that this is just a meaningless coincidence, but in light of the anthropic speculations last year about the LHC's technical difficulties, I thought this was worth sharing.

\n

Hat tip MBlume

\n

 

" } }, { "_id": "9RMYL4qfYpgukLuHf", "title": "Light Arts", "pageUrl": "https://www.lesswrong.com/posts/9RMYL4qfYpgukLuHf/light-arts", "postedAt": "2009-11-06T03:54:49.881Z", "baseScore": 17, "voteCount": 20, "commentCount": 44, "url": null, "contents": { "documentId": "9RMYL4qfYpgukLuHf", "html": "

tl;dr: It is worthwhile to convince people that they already, by their own lights, have reasons to believe true things, as this is faster, easier, nicer, and more effective than helping them create from scratch reasons to believe those things.

\n

This is not part of the problem-solving sequence.  I do plan to finish that, but the last post is eluding me.

\n

Related: Whatever it is I was thinking of here (let me know if you can dig up what it was).

\n

Today, while waiting for a bus, I heard the two girls sitting on the bench next to mine talking about organ donation.  One said that she was thinking of ceasing to be an organ donor, because she'd heard that doctors don't try as hard to save donors in hopes of using their organs to save other lives.

\n

My bus was approaching.  I didn't know the girl and could hardly follow up later with an arsenal of ironclad counterarguments.  There was no time, and probably no receptivity, to engage in a lengthy discussion of why this medical behavior wouldn't happen.  No chance to fire up my computer, try to get on the nearest wireless, and pull up empirical stats that say it doesn't happen.

\n

So I chuckled and interjected, at a convenient gap in her ramble, \"That's why you carry a blood donor card, too, so they think if you stay alive they'll keep getting blood from you!\"

\n

Some far-off potential tragic crisis averted?  Maybe.  She looked thoughtful, nodded, said that she did have a blood donor card, and that my suggestion made sense.  I boarded my bus and it carried me away.  I hope she's never hit by a cement truck.  I hope that if she is hit by a cement truck, a stupid rumor she heard once doesn't turn it into as complete a waste as it would have to be without the wonders of organ transplant.

\n

And even maintaining those twin hopes and feeling I'd done something to improve their conjoined chance of realization, I began to feel like perhaps I'd done wrong.  I could conjure up a defense - hey, I laughed first, and I'd used the exact same words before as a mere joke (with people better-informed than this who I'd expected to get it on their own).  It's not strictly my fault that she didn't take it as a joke too.  And hey, I would have gone ahead and had the whole knock-down drag-out argument with her if there had only been time, if I could only have had her ear for long enough to spit out more than a soundbite, if only she hadn't been a complete stranger I'll never see again.

\n

But even without time and social pressure preventing you from having a great long knock-down drag-out argument, it can be devastatingly ineffectual to present the reasons you think are the right ones to believe some proposition P or take some action A.  And presenting other reasons seems dishonest, somehow - just lining up soldier-arguments in favor of P or A because they're well-equipped against this opponent, and not because they're the best and soundest and strongest according to objective (read: your) standards.

\n

Here's a related story: in my midterm paper for my Plato's Republic class, my thesis statement was \"Plato's position on falsehood in the kallipolis is inconsistent\".  Bam!  Plato would have a heart attack!  Dreaded inconsistency!  But after I got comments back, I agreed with the professor that what I'd really shown was something weaker: \"Plato has good reason, by his own lights, to reject the Noble Lie\".  No utter logical malady infects his city so thoroughly that I can demonstrate a rejection of modus ponens on the subject at hand.  But the revision... is still pretty strong.  Inconsistency is a general, powerful case of having reason to reject something.  Inconsistency brings with it the guarantee of being wrong in at least one place.  But so too, in a gentler and narrower way, does having reason to reject something by your own lights, even if it's not an airtight reason.  And this gentleness is more non-threateningly persuasive, and this narrowness demands less background from your interlocutor in logic, and beginning from this preexisting background saves more time, than beginning with a priori principles and proceeding from there to proposition P or action A.

\n

The girl at the bus stop began by having nasty suspicions that doctors are twisted creatures who all walked straight out of an ethics textbook, evil consequentialist plots devoid of professionalism or commonsense morality fully-formed in their minds, and who would see her ambulance-borne self as a sack of valuable organs they could use to salvage numerous other lives if only they made the slightest wrong twitch with a scalpel.  But even if we grant that falsehood, she still does not have adequate reason to withdraw her consent for organ donation, as long as she can present proof to evil consequentialist doctors that she's worth more alive than dead.  And she can.

\n

Offering arguments like this - ones which use premises you don't hold but your opponent does, and which aren't reductios in form - is only dishonest if your conclusion is meant to be, \"Objectively and all things considered, you should perform action A or believe proposition P.\"  Those arguments (assuming the premises your opponent holds and you don't are false) show nothing of the kind.  But pointing out that people have reasons by their own lights to believe P or perform A - concluding instead \"given these premises which you accept, A or P is reasonable\" - is not, I contend, underhanded.  Conveniently, it's also not slow, mean, or difficult.  And maybe these light arts saved a life today.

" } }, { "_id": "ZTN6bLWqpwWn2i4qZ", "title": "Money pumping: the axiomatic approach", "pageUrl": "https://www.lesswrong.com/posts/ZTN6bLWqpwWn2i4qZ/money-pumping-the-axiomatic-approach", "postedAt": "2009-11-05T11:23:10.873Z", "baseScore": 22, "voteCount": 24, "commentCount": 93, "url": null, "contents": { "documentId": "ZTN6bLWqpwWn2i4qZ", "html": "

This post gets somewhat technical and mathematical, but the point can be summarised as:

\n
If your preferences violate the von Neumann-Morgenstern axioms - transitivity or independence in particular - then you can be money pumped: made to accept a series of trades that leaves you strictly poorer.
\n

In other words, using alternate decision theories is bad for your wealth.

\n

But what is a money pump? Intuitively, it is a series of trades that I propose to you which end up bringing you back to where you started. All the trades must be indifferent or advantageous to you, so that you will accept them. And if even one of those trades is advantageous, then this is a money pump: I can charge you a tiny amount for that trade, making free money out of you. You are now strictly poorer than if you had not accepted the trades at all.

\n

A strict money pump happens when every deal is advantageous to you, not simply indifferent. In most situations, there is no difference between a money pump and a strict money pump: I can offer you a tiny trinket at each indifferent deal to make it advantageous, and get these back later. There are odd preference systems out there, though, so the distinction is needed.

\n

The condition \"bringing you back to where you started\" needs to be examined some more. Thus define:

\n

A strong money pump is a money pump which returns us both to exactly the same situations as when we started: in possession of the same assets and lotteries, with none of them having come due in the meantime.

\n

A weak money pump is a money pump that returns us to the same situation that would have happened if we had never traded at all. Lotteries may have come due in the course of the trades.

\n

\n

What is the difference? Quite simply with a strong money pump, since we return to exactly the same setup, I can then money pump you again, and again, charging you a penny at each round, and draining you of cash until you run out completely or your preferences change. Remember that I charge you that penny only for a deal that is strictly to your advantage, so even with that charge, you are coming out ahead on every deal. It's just at the end of the loop that you're losing out.

\n

For a weak money pump, since we don't return exactly to the same setup, I cannot just money pump you again instantly. I can get cash from you only once for each setup. It might never happen again; or it may happen regularly. But it's not as reusable as a strong money pump.

\n

Now if your preferences are inconsistent, I can certainly money pump you. In his post Zut Allais (French puns remain the property and responsibility of their authors) Eliezer presents a fun version of this, one where subjects prefer A to B and prefer B to A, depending on how they are phrased. Thus I will assume that your preferences are consistent - that you will only invert your preferences under objectively different conditions. I will also assume that you follow the completeness and continuity axioms of the von Neumann-Morgenstern formulation (completeness assumes you are actually capable of deciding between options, while continuity is a technical assumption). Note that the axioms on the Wikipedia article seemed to be incorrect, based on sources such as this book; I have corrected them by replacing >'s with ≥ where appropriate. A lot of the complexities of this post revolve around the difference between these two symbols.

\n

The third von Neumann-Morgenstern axiom is transitivity: that if you like A at least as much as B (designated A ≥ B) and B at least as much as C, then you also like A at least as much as C. Strict transitivity is the weaker statement replacing ≥ with > in the above. This is precisely where the strong money pumps come in:

\n

If your preferences are consistent, complete and continuous, then you are immune to a strong money pump if and only if your preferences are transitive. You are immune to a strict strong money pump if and only if your preferences are strictly transitive.

\n

Proof: A strong money pump is equivalent to a sequence of preferences A ≥ B ≥ ... ≥ S > T ≥ ... ≥ Z ≥ A. Such a sequence can only exist if your preferences are not transitive (note the strict inequality in the middle).

\n

A strict strong money pump is equivalent to a sequence of preferences A > B > ... > Z > A. Such a sequence can only exist if your preferences are not strictly transitive.
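\n

To make the draining concrete, here is a minimal sketch in Python of a strict strong money pump run against a toy agent with the cyclic (and therefore not strictly transitive) preferences A > B > C > A. The agent, the penny fee, and the three-item cycle are assumptions of this sketch, not anything from the argument above.

```python
# A toy strict strong money pump against an assumed agent with the
# cyclic preferences A > B > C > A (not strictly transitive).
FEE = 0.01

def prefers(x, y):
    cycle = {"A": "B", "B": "C", "C": "A"}  # each item beats the next one
    return cycle[x] == y

def pump(agent_item, agent_cash, rounds=3):
    order = {"A": "C", "B": "A", "C": "B"}   # what to offer for each holding
    for _ in range(3 * rounds):              # three trades per full cycle
        offer = order[agent_item]
        assert prefers(offer, agent_item)    # every single trade looks like a win
        agent_item = offer
        agent_cash -= FEE                    # ...but I charge a penny each time
    return agent_item, agent_cash

item, cash = pump("A", 1.00)
print(item, round(cash, 2))   # back to "A", nine pennies poorer
```

Each individual trade is strictly advantageous by the agent's own lights, yet after every full loop the agent is back where it started, minus the fees - exactly the repeatable drain described above.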

\n

The fourth von Neumann-Morgenstern axiom is independence: that if A ≥ B, then for all C, pA + (1-p)C ≥ pB + (1-p)C for all 0 ≤ p ≤ 1. It means essentially that all lotteries can be considered in isolation from each other. This is where the weak money pumps come in:

\n

If your preferences are consistent, complete, continuous and transitive, then you are immune to a weak money pump if and only if your preferences are independent. You are immune to a strict weak money pump if and only if your preferences are not strictly dependent.

\n

The rest of the post is a proof of this, and can be skipped for those unkeen on mathematics.

\n

First of all, we need to explain \"strictly dependent\" (there is no canonical definition of the term). Given all the other axioms, we can replace independence with the following equivalent axiom:

\n
Independence II: for any lotteries A, B, C and D and any 0 < p < 1, if pA + (1-p)C > pB + (1-p)D, then B < A or D < C.
\n

The four standard axioms together imply the expected utility hypothesis, and Independence II is a simple consequence of that. Conversely, if we take C and D to be the same lottery, then the axiom states that pA + (1-p)C > pB + (1-p)C implies A > B (since C < C is inconsistent). The contrapositive of this statement is that if A ≤ B, then pA + (1-p)C ≤ pB + (1-p)C. This is just standard independence once more. Thus we can equivalently replace Independence with Independence II.

\n

A decision theory is dependent if it is not independent. Dependency means that there exists p, A, B, C and D such that pA + (1-p)C > pB + (1-p)D while A ≤ B and C ≤ D. A decision theory is strictly dependent if we replace all ≤ in that expression with <. We're now ready to prove the result.

\n

Proof: Assume you have the lottery L, and that there will be a draw to determine one of the random elements. This means L can be written as pA + (1-p)B, where it will end up as A or B after the draw. Then if I were to trade you L for M = pC + (1-p)D, with M > L, independence implies that C > A or D > B.

\n

Consequently, if you start with lottery L and I trade it for M > L, then for at least one outcome after the random draw, you are left with a lottery strictly better than what you would have had if we had not traded. This result continues to be true no matter how often the draws happen, and hence I will not be able to trade you back to your initial situation with an advantageous or indifferent deal. Thus independence implies that you cannot be weakly money pumped with certainty.

\n

Conversely, if your preferences are dependent, then there are p, A, B, C and D such that pA + (1-p)C > pB + (1-p)D and yet A ≤ B and C ≤ D. Then I can weakly money pump you, building on Eliezer's example. Assume you are in possession of pB + (1-p)D, with the first draw being to determine which of B or D you have. Then I can propose a binding contract: I will trade A for B and C for D after that first draw, replacing your current lottery with pA + (1-p)C. This is advantageous to you, so you will accept it. Then, after the first draw, I will propose to trade back B for A or D for C, another advantageous trade that you will accept. Congratulations! You have just been (weakly) money pumped.
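\n

Here is a small numerical sketch of that binding-contract pump, using an assumed agent that values a lottery by its mean minus its standard deviation (a preference that violates independence); the particular lotteries A, B, C, D and the 50/50 mixture are my own choices for illustration, not anything from the argument above.

```python
import math

# Lotteries are dicts mapping outcome -> probability.
def mean(L):
    return sum(x * p for x, p in L.items())

def std(L):
    m = mean(L)
    return math.sqrt(sum(p * (x - m) ** 2 for x, p in L.items()))

# Assumed non-expected-utility agent: values a lottery by mean minus SD.
def value(L):
    return mean(L) - std(L)

def mix(p, L1, L2):
    """The compound lottery pL1 + (1-p)L2."""
    out = {}
    for x, q in L1.items():
        out[x] = out.get(x, 0) + p * q
    for x, q in L2.items():
        out[x] = out.get(x, 0) + (1 - p) * q
    return out

A = {0: 0.5, 200: 0.5}   # value 0
B = {10: 1.0}            # value 10   -> A <= B
C = {100: 1.0}           # value 100
D = {105: 1.0}           # value 105  -> C <= D

p = 0.5
before = mix(p, B, D)    # what you hold before any trade
after = mix(p, A, C)     # what the binding contract offers instead

# Trade 1: the contract. Strictly advantageous, so you accept
# (and I can charge a small fee).
assert value(after) > value(before)

# Trade 2: after the draw you hold A or C; I offer to trade back.
# Both branches are strictly advantageous, so you accept again
# (and I charge another fee).
assert value(B) > value(A)
assert value(D) > value(C)

print("Weakly money pumped:", value(after), ">", value(before))
```

Both asserts pass: the contract is strictly advantageous, and so is each trade back after the draw, yet the agent ends up holding exactly what it would have held with no trades at all, minus whatever fees I charged along the way.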

\n

In the strict dependency situation, the proof works out exactly the same way, with ≥ replacing > where appropriate, and proving that you cannot be strictly weakly money pumped if your preferences are consistent, transitive, complete, and not strictly dependent.

\n

Note: Of course, if you do not follow independence, you truly do not follow independence. You can play present lotteries off against future lotteries; you can look ahead and see that I will attempt to money-pump you, and compensate for that. You can therefore behave as if you were following independence in these situations, even if you do not. This sort of \"arbitraged independence\" will be the subject of a future post.

\n

Addendum: Following a discussion with MendelShmiedekamp, I realised that the continuity axiom is not needed for any of the results, as long as one uses independence II instead of independence. Anything that violates independence also violates independence II, so the \"if you violate independence, you can be weakly money-pumped\" result goes straight through to the non-continuous case.

" } }, { "_id": "DW7yzwCj76srESrtN", "title": "Rolf Nelson's \"The Rational Entrepreneur\"", "pageUrl": "https://www.lesswrong.com/posts/DW7yzwCj76srESrtN/rolf-nelson-s-the-rational-entrepreneur", "postedAt": "2009-11-04T21:31:00.512Z", "baseScore": 13, "voteCount": 12, "commentCount": 3, "url": null, "contents": { "documentId": "DW7yzwCj76srESrtN", "html": "

New blog up by longtime OB/LWer Rolf Nelson, \"The Rational Entrepreneur\" at rolfnelson.com.  On \"the overlap between entrepreneurship and the modern tools of rationality\".  Rolf will post daily through November, and after that it will depend on how much traction the blog gets.

\n

First posts:  Pay more attention to the statistics! (which say to do what you know, not what you love) and Having co-founders is valuable but not crucial (according to a study).

" } }, { "_id": "ahX9485o4kufEwCsY", "title": "Re-understanding Robin Hanson’s “Pre-Rationality”", "pageUrl": "https://www.lesswrong.com/posts/ahX9485o4kufEwCsY/re-understanding-robin-hanson-s-pre-rationality", "postedAt": "2009-11-03T02:58:04.599Z", "baseScore": 34, "voteCount": 27, "commentCount": 19, "url": null, "contents": { "documentId": "ahX9485o4kufEwCsY", "html": "

I’ve read Robin’s paper “Uncommon Priors Require Origin Disputes” several times over the years, and I’ve always struggled to understand it. Each time I would think that I did, but then I would forget my understanding, and some months or years later, find myself being puzzled by it all over again. So this time I’m going to write down my newly re-acquired understanding, which will let others check that it is correct, and maybe help people (including my future selves) who are interested in Robin's idea but find the paper hard to understand.

\n

Here’s the paper’s abstract, in case you aren’t already familiar with it.

\n
\n

In standard belief models, priors are always common knowledge. This prevents such models from representing agents’ probabilistic beliefs about the origins of their priors. By embedding standard models in a larger standard model, however, pre-priors can describe such beliefs. When an agent’s prior and pre-prior are mutually consistent, he must believe that his prior would only have been different in situations where relevant event chances were different, but that variations in other agents’ priors are otherwise completely unrelated to which events are how likely. Due to this, Bayesians who agree enough about the origins of their priors must have the same priors.

\n
\n

I think my main difficulty with understanding the paper is the lack of a worked out example. So I’ll take a simplified version of an example given in the paper and try to work out how it should be treated under the proposed formalism. Quoting the paper:

\n
\n

For example, if there were such a thing as a gene for optimism versus pessimism, you might believe that you had an equal chance of inheriting your mother’s optimism gene or your father’s pessimism gene.

\n
\n

Instead of talking about optimism vs pessimism in general, I’ll use the example of an AI which has a prior on just the outcome of one coin toss (A) which will occur after it is created. The AI programmer will program it with one of two priors. The “optimistic” prior O says that the coin will land heads with probability .6, and the “pessimistic” prior P says that the coin will land heads with probability .4. For some reason, the programmer has decided to choose the prior based on an independent coin toss (B), which corresponds to the random Mendelian inheritance in the original example.

\n

Suppose an “optimistic” AI wakes up and then reads Robin’s paper. How would it reason? First, it needs a pre-prior (denoted p~ [EDIT: actually denoted q, as Hal pointed out in a comment] in the paper, but I’ll use r here) that explains how it got its prior. So it asks the programmer how it got its prior, and the programmer tells it about coin toss B. (I’m using the AI as an explanatory prop here, not saying that an actual AI would reason this way.) One plausible pre-prior at this point might be:

\n
r(B=heads) = r(B=tails) = 0.5, so each prior is equally likely; A and B are independent under r; and r(A=heads) = 0.6, matching the AI's own optimistic prior O.
\n

But unfortunately, this pre-prior doesn’t satisfy Robin’s pre-rationality condition (equation 2 on page 4), which when applied to this example says that

\n
O(A=heads) = r(A=heads | p=O)   and   P(A=heads) = r(A=heads | p=P).
\n

The first equality holds, but the second one doesn’t, because P(A=heads) = 0.4, and r(A=heads | p=P) = 0.6.
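\n

A short sketch that checks this numerically, with the pre-prior spelled out as a joint distribution over the two coins. The reconstruction of r above is mine, so treat the specific numbers and names here as illustrative rather than as anything from Robin's paper.

```python
from itertools import product

# Pre-prior r as a joint distribution over coin A and coin B,
# assuming A and B independent, B fair, and r's marginal on A optimistic.
r = {}
for a, b in product(("heads", "tails"), repeat=2):
    p_a = 0.6 if a == "heads" else 0.4   # r's marginal on A
    p_b = 0.5                            # fair coin B
    r[(a, b)] = p_a * p_b

prior = {"heads": "O", "tails": "P"}     # coin B picks the prior
O = {"heads": 0.6, "tails": 0.4}
P = {"heads": 0.4, "tails": 0.6}

def r_cond_A_heads(given_prior):
    """r(A=heads | prior) - condition on the value of coin B."""
    b = next(b for b, q in prior.items() if q == given_prior)
    joint = r[("heads", b)]
    marginal = sum(r[(a, b)] for a in ("heads", "tails"))
    return joint / marginal

print(round(r_cond_A_heads("O"), 3), O["heads"])  # 0.6 vs 0.6 -> first equality holds
print(round(r_cond_A_heads("P"), 3), P["heads"])  # 0.6 vs 0.4 -> second equality fails
```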

\n

And actually there seems no way to construct any pre-prior that satisfies pre-rationality, unless the coin tosses A and B are correlated in the pre-prior (I think this is what Robin meant in the paper by “track truth”). But the programmer insists that the two coin tosses are not correlated, so that’s that. What should the AI do at this point? Robin’s paper doesn’t say. It assumes pre-rationality and derives consequences from it, but doesn’t talk about what we should do if we find ourselves lacking in pre-rationality, except that we might want to do something to correct this defect.

\n

One obvious solution here is for the AI programmer to not have picked different priors for the AI based on an independent coin toss in the first place, and perhaps it could be argued that it was irrational, according to ordinary rationality, for the programmer to have done that. If it had been the case that O=P, then the AI can easily construct a pre-rational pre-prior. But our own priors depend partly on our genes, which were picked by evolution, so this solution doesn’t seem to apply to us. And if we create any Bayesian AIs, the priors of those AIs will also be inevitably influenced (indirectly via us) by the randomness inherent in evolution.

\n

So what should we (or our AIs) do? I think I have some ideas about that, but first, is my understanding of pre-rationality correct?

" } }, { "_id": "6wRPygaH7raZPmM9A", "title": "Why are fictional inventors anti-trade?", "pageUrl": "https://www.lesswrong.com/posts/6wRPygaH7raZPmM9A/why-are-fictional-inventors-anti-trade", "postedAt": "2009-11-02T18:40:32.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "6wRPygaH7raZPmM9A", "html": "

In stories, those who invent and make powerful technologies frequently seek to gain power via the technology they make.

\n

In reality, those who invent and make powerful technologies seek to gain power solely through selling their technologies and getting status.

\n

Fictional artists do not keep their paintings so that they can go out provoking and moving people with them themselves. Fictional chefs usually prefer trade to eating it all themselves.

\n

Why the difference?

\n

How much does this bias our expectations for the social outcomes, especially risks, of future technologies?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "sdSoCLQiqdQqBNHLi", "title": "Open Thread: November 2009", "pageUrl": "https://www.lesswrong.com/posts/sdSoCLQiqdQqBNHLi/open-thread-november-2009-0", "postedAt": "2009-11-02T01:22:07.732Z", "baseScore": 2, "voteCount": 1, "commentCount": 3, "url": null, "contents": { "documentId": "sdSoCLQiqdQqBNHLi", "html": "

This is, hopefully, the usual monthly thread for discussion of Less Wrong topics that have not appeared in recent posts.

\n

(My apologies and promise to fix if I'm doing this wrong; I have not previously submitted an article and am creating this thread so that I can post a comment in it.)

\n

 

" } }, { "_id": "ZjikPZ67CvR64YvMK", "title": "Open Thread: November 2009", "pageUrl": "https://www.lesswrong.com/posts/ZjikPZ67CvR64YvMK/open-thread-november-2009", "postedAt": "2009-11-02T01:18:42.048Z", "baseScore": 5, "voteCount": 6, "commentCount": 551, "url": null, "contents": { "documentId": "ZjikPZ67CvR64YvMK", "html": "

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. Feel free to rid yourself of cached thoughts by doing so in Old Church Slavonic. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

\n

If you're new to Less Wrong, check out this welcome post.

" } }, { "_id": "Mhm2MNEuG2FcrMTH6", "title": "Our House, My Rules", "pageUrl": "https://www.lesswrong.com/posts/Mhm2MNEuG2FcrMTH6/our-house-my-rules", "postedAt": "2009-11-02T00:44:46.686Z", "baseScore": 42, "voteCount": 42, "commentCount": 232, "url": null, "contents": { "documentId": "Mhm2MNEuG2FcrMTH6", "html": "

People debate all the time about how strictly children should be disciplined. Obviously, this is a worthwhile debate to have, as there must be some optimal amount of discipline that is greater than zero. The debate's nominal focus is usually on what's best for the child, with even the advocates for greater strictness arguing that it's \"for their own good.\" It might also touch on what's good for other family members or for society at large. What I think is missing from the usual debate is that it assumes nothing but honorable motives on the part of the arguers. That is, it assumes that the arguments in favor of greater strictness are completely untainted by any element of authoritarianism or cruelty. But people are sometimes authoritarian and cruel! Just for fun! And the only people who you can be consistently cruel to without them slugging you, shunning you, suing you, or calling the police on you are your children. This is a reason for more than the usual amount of skepticism of arguments that say that strict parenting is necessary. If there were no such thing as cruelty in the world, people would still argue about the optimal level of strictness, and sometimes the more strict position would be the correct one, and parents would choose the optimal level of strictness on the basis of these arguments. But what we actually have is a world with lots and lots of cruelty lurking just under the surface, which cannot help but show up in the form of pro-strictness arguments in parenting debates. This should cause us to place less weight on pro-strictness arguments than we otherwise would.*  Note that this is basically the same idea as Bertrand Russell's argument against the idea of sin: its true function is to allow people to exercise their natural cruelty while at the same time maintaining their opinion of themselves as moral.

One example of authoritarianism masquerading as sound discipline (even among otherwise good parents) is the idea of \"My House, My Rules.\" I've even heard parents go so far as to say things like: \"it's not your room, it's the room in my house that I allow you to live in.\" This attitude makes little sense on its own terms, as it suggests that parents would have no legitimate authority over, say, a famous child actor whose earnings paid for the house. Worse, it's a relatively minor manifestation of the broader notion that the child has a fundamentally lower status in the family just for being a child, that they deserve less weight in the family's utility function. I don't think this is what parents would be saying if recreational authoritarianism really were not a factor. They would still say that they, by virtue of their superior experience and judgment, get to make the rules (i.e., decide how to go about maximizing the family's utility function, though even this might be done with more authoritarianism than is necessary). But you wouldn't be hearing this \"I'm higher than you in the pecking order and don't you dare forget it\" attitude that is so very common.

*Some might argue that arguments should be evaluated solely on their merit, and not on the motives with which they were offered. This is correct when the validity of the arguments can be finally determined. For most kinds of persuasive argumentation, especially in complicated and emotionally laden subjects like child rearing, arguments work on us without us ever being able to fully evaluate their merit. And in that world, it does make sense to down-weight arguments that have some bias built into them.

" } }, { "_id": "Cbw2LafucXb5NKMQo", "title": "Romantic idealism: true love conquers almost all", "pageUrl": "https://www.lesswrong.com/posts/Cbw2LafucXb5NKMQo/romantic-idealism-true-love-conquers-almost-all", "postedAt": "2009-10-31T20:52:57.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "Cbw2LafucXb5NKMQo", "html": "

More romantic people tend to be vocally in favor of more romantic fidelity in my experience. If you think about it though, faith in romance is not a very romantic ideal. True love should overcome all things! The highest mountains, the furthest distances, social classes, families, inconveniences, ugliness, but NOT previous love apparently. There shouldn’t be any competition there. The love that got there first is automatically the better one, winning the support and protection of the sentimental against all other love on offer. Other impediments are allowed to test love, sweetened with ‘yes, you must move a thousand miles apart, but if it’s really true love, he’ll wait for you’. You can’t say, ‘yes, he has another girlfriend, but if you really are better for him he’ll come back – may the truest love win!’.

\n

Perhaps more commitment in general allows better and more romance? There are costs as well as benefits to being tied to anything though. Just as it’s not clear that more commitment in society to stay with your current job would be pro-productivity, it’s hard to see that more commitment to stay with your current partner would be especially pro-romance. Of course this is all silly – being romantic and vocally supporting faithfulness are about signaling that you will stick around, not about having consistent values or any real preference about the rest of the world. Is there some other explanation?

\n

 


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "RkYEmjdQjTJXcy6YP", "title": "Choose pain-free utilitarianism", "pageUrl": "https://www.lesswrong.com/posts/RkYEmjdQjTJXcy6YP/choose-pain-free-utilitarianism", "postedAt": "2009-10-30T16:59:25.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "RkYEmjdQjTJXcy6YP", "html": "

Some of my friends are hedonic utilitarians, or close human approximations (people whose excuse to talk excitedly in bars and on the internet is sometimes hedonic utilitarianism). I am a preference utilitarian, so I would like to talk excitedly on the internet about how they are wrong.

\n

Robert Wiblin sums up a big motivation for hedonic utilitarianism:

\n

“I am hedonic rather than a preference utilitarian because if I were aware of a being that wanted things but had no experiences I would not care about it as its welfare could not be affected”

\n

Something like this seems a common reason. What makes a thing good or bad if not someone experiencing it as good or bad? And how can you consciously experience something as good or bad if it’s not some variation on pleasure and pain? If your wanting chocolate isn’t backed by being pleased by chocolate, why would I want you to have chocolate more than I would want any old unconscious chocolate-getting mechanism to have chocolate? Pleasure and pain are the distinctive qualia that connect normativeness to consciousness and make it all worthwhile.

\n

This must be wrong. Pain at least can have no such importance, as empirically it can be decomposed into a sensation and a desire to not have the sensation. This is demonstrated by the medical condition pain asymbolia and by the effects of morphine for example. In both cases people say that they can still feel the sensation of the pain they had, but they no longer care about it.

\n

To say that the sensation of pain is inherently bad then is no different than to say that the sensation of seeing the color red is inherently bad.  The leftover contender for making pain bad is the preference not to have pain. You may still care only about the sensation of having or fulfilling a preference, and not about preferences that are fulfilled outside of your knowledge. The feeling of preferring could still be that sought after sensation inherently imbued with goodness or badness. It must be some variation on preferences though; hedonism’s values are built of them.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "QrYkJbE3Qtrg3FDbj", "title": "Less Wrong / Overcoming Bias meet-up groups", "pageUrl": "https://www.lesswrong.com/posts/QrYkJbE3Qtrg3FDbj/less-wrong-overcoming-bias-meet-up-groups", "postedAt": "2009-10-30T04:47:06.192Z", "baseScore": 13, "voteCount": 13, "commentCount": 31, "url": null, "contents": { "documentId": "QrYkJbE3Qtrg3FDbj", "html": "

Hi from Michael Vassar, the president of SIAI. With my wife and Singularity Summit co-director Aruna, I'll be traveling over the next few months to meet with rationality and singularity enthusiasts throughout the U.S.  Specifically, I'll be in Boston from November 14-18, Philadelphia from December 1st to 10th and December 15th through January 4th, New Orleans from December 11th to 14th, Orlando on January 5th and 6th, Sarasota on January 7th through 12th, and in Tampa on the 11th if there is substantial interest in a meet-up there. 

\n

Please comment if you are interested in attending a meet-up in any of the cities in question and we can start planning.

\n

I hope to find that there are thriving communities of rationalists in each of those cities already, but I'm traveling there to try to seed their precipitation from the local populace.  If things go really well the groups and their respective cities will be on SIAI radar and who knows, maybe eventually there will be a Singularity Summit near Tomorrow-Land, a global catastrophic risk conference by the New Orleans levees, or a FAI extrapolation dynamics exploratory workshop near Independence Hall.

" } }, { "_id": "jzibHBxczkZuTKY93", "title": "David Deutsch: A new way to explain explanation", "pageUrl": "https://www.lesswrong.com/posts/jzibHBxczkZuTKY93/david-deutsch-a-new-way-to-explain-explanation", "postedAt": "2009-10-30T00:05:48.203Z", "baseScore": 7, "voteCount": 12, "commentCount": 29, "url": null, "contents": { "documentId": "jzibHBxczkZuTKY93", "html": "

I'm sure this talk will be of interest, even if most of the ideas that he talks about will be familiar to readers here.

\n

[edit]

\n

In this talk David Deutsch discusses \"the most important discovery in human history\": how humanity moved beyond a few hundred thousand years of complete ignorance about the universe. Deutsch attempts to be specific about what led to this change - he concludes that it is the insistence that an explanation be 'hard to vary'.

\n

Whilst a 'hard to vary' explanation is functionally the same as the more commonly known Occam's Razor explanation (since fewer parameters necessarily make a fit harder to vary), the slightly different emphasis might be a useful pedagogical tool. A 'hard to vary' explanation will perhaps lead more naturally to questions about strong predictions and falsifiability than Occam's razor. It also seems harder to misunderstand. As we know, Occam's razor suffers because of the difference between actual complexity and linguistic complexity, so an explanation like \"it's magic\" can appear to be simple. Magic might appear simple, but it will never appear 'hard to vary', so students of rationality would have one less pitfall awaiting them.

\n

Deutsch also touches on what constitutes understanding and knowledge and cautions us not to trust predictions that are purely of an extrapolated empirical nature as there is no true understanding contained there.

\n

[/edit] 

\n

If you haven't already read Deutsch's book \"The Fabric of Reality\" I'd highly recommend that as well.

\n

 

" } }, { "_id": "psuphwRHuJu9T9PPa", "title": "A Less Wrong Q&A with Eliezer (Step 1: The Proposition)", "pageUrl": "https://www.lesswrong.com/posts/psuphwRHuJu9T9PPa/a-less-wrong-q-and-a-with-eliezer-step-1-the-proposition", "postedAt": "2009-10-29T15:04:30.286Z", "baseScore": 19, "voteCount": 23, "commentCount": 33, "url": null, "contents": { "documentId": "psuphwRHuJu9T9PPa", "html": "

I don't know if I'm the only one, but I've always been a bit frustrated by Eliezer's BloggingHeadsTV episodes. I find myself wishing that Eliezer would get more speaking time and could address more directly some of the things we discuss at LW/OB.

\n

Some of you will surely think: \"If you want more Eliezer, just read his blog posts and papers!\"

\n

Sensible advice, no doubt, but I think that there's something special (because of how our brains evolved) about actually seeing and hearing a teacher, and I find it very helpful to see how he applies rationality techniques in \"real time\". But I'm not just looking for a dry lecture, I want more Eliezer because I enjoy listening to him (gotta love that sense of humor), and I bet many of you do too.

\n

Here is my suggestion:

\n

If the Less Wrong community thinks it's a good idea and if Eliezer agrees, I will create a \"Step 2: Ask Your Questions\" post in which the comments section will be used to gather questions for Eliezer. Each question should be submitted as an individual comment to allow more granularity in the ranking based on the voting system.

\n

After at least 7 days, to give everybody enough time to submit their questions and vote, Eliezer will sit down in front of a camera and answer however many questions he feels like answering (at a time of his choosing, t-shirt or hot cocoa in a mug imprinted with Bayes' theorem are optional), in descending order from most to least votes, skipping the ones he doesn't want to answer. If Eliezer feels uncomfortable speaking alone for an extended period of time, he can get someone to read him the questions so that it feels more like an interview.

\n

The video can then be uploaded to a hosting service like Youtube or Vimeo, and posted to Less Wrong.

\n

In short, it would be a kind of Less Wrong podcast, something that many other sites do successfully.

\n

Yay or nay?

\n

Update: Eliezer says \"I'll do it.\"

\n

Update 2: The thread where questions were submitted can be found here.

\n

Update 3: Eliezer's video answers can be found here.

" } }, { "_id": "TRcfweoArnnx8zCX5", "title": "Post retracted: If you follow expected utility, expect to be money-pumped", "pageUrl": "https://www.lesswrong.com/posts/TRcfweoArnnx8zCX5/post-retracted-if-you-follow-expected-utility-expect-to-be", "postedAt": "2009-10-29T12:06:04.105Z", "baseScore": 1, "voteCount": 9, "commentCount": 20, "url": null, "contents": { "documentId": "TRcfweoArnnx8zCX5", "html": "

This post has been retracted because it is in error. Trying to shore it up just involved a variant of the St Petersburg Paradox and a small point on pricing contracts that is not enough to make a proper blog post.

\n

I apologise.

\n

Edit: Some people have asked that I keep the original up to illustrate the confusion I was under. I unfortunately don't have a copy, but I'll try and recreate the idea, and illustrate where I went wrong.

\n

The original idea was that if I were to offer you a contract L that gained £1 with 50% probability or £2 with 50% probability, then if your utility function wasn't linear in money, you would generally value L at something other than £1.50. Then I could sell or buy large amounts of these contracts from you at your stated price, and use the law of large numbers to ensure that I valued each contract at £1.50, thus making a certain profit.

\n

The first flaw consisted in the case where your utility is concave in cash (\"risk averse\"). In that case, I can't buy L from you unless you already have L. And each time I buy it from you, the mean quantity of cash you have goes down, but your utility goes up, since you do not like the uncertainty inherent in L. So I get richer, but you get more utility, and once you've sold all L's you have, I cannot make anything more out of you.
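\n

As a worked illustration of the concave case, here is a sketch with an assumed utility u(x) = log(x + 10) over total wealth; the starting wealth and the utility function are arbitrary choices for this example, not anything from the original post.

```python
import math

def u(wealth):
    # An assumed concave ("risk averse") utility over total wealth;
    # the +10 just keeps the logarithm well-defined at low wealth.
    return math.log(wealth + 10)

wealth = 5.0
# Holding L: 50% chance of +1, 50% chance of +2.
eu_with_L = 0.5 * u(wealth + 1) + 0.5 * u(wealth + 2)

# Certainty equivalent of L: the sure payment giving the same expected utility.
ce = math.exp(eu_with_L) - 10 - wealth
print(round(ce, 4))   # a bit below 1.50

# Selling L for anything at or above its certainty equivalent (but below its
# 1.50 mean) leaves your expected utility the same or higher, even though on
# average you end up with less cash - the first flaw described above.
print(round(eu_with_L, 6), round(u(wealth + ce), 6))   # equal by construction
```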

\n

If your utility is convex in cash (\"risk loving\"), then I can sell you L forever, at more than £1.50. And your money will generally go down, as I drain it from you. However, though the median amount of cash you have goes down, your utility goes up, since you get a chance - however tiny - of huge amounts of cash, and the utility generated by this sum swamps the fact you are most likely ending up with nothing. If I could go on forever, then I could drain you entirely, as this is a biased random walk on a one-dimensional axis. But I would need infinite resources to do this.

\n

The major error was to reason like an investor, rather than a utility maximiser. Investors are very interested in putting prices on objects. And if you assign the wrong price to L while investing, someone will take advantage of you and arbitrage you. I might return to this in a subsequent post; but the issue is that even if your utility is concave or convex in money, you would put a price of £1.50 on L if L were an easily traded commodity with a lot of investors also pricing it at £1.50.

" } }, { "_id": "tGhz4aKyNzXjvnWhX", "title": "Expected utility without the independence axiom", "pageUrl": "https://www.lesswrong.com/posts/tGhz4aKyNzXjvnWhX/expected-utility-without-the-independence-axiom", "postedAt": "2009-10-28T14:40:08.464Z", "baseScore": 20, "voteCount": 20, "commentCount": 68, "url": null, "contents": { "documentId": "tGhz4aKyNzXjvnWhX", "html": "

John von Neumann and Oskar Morgenstern developed a system of four axioms that they claimed any rational decision maker must follow. The major consequence of these axioms is that when faced with a decision, you should always act solely to increase your expected utility. All four axioms have been attacked at various times and from various directions; but three of them are very solid. The fourth - independence - is the most controversial.

\n

To understand the axioms, let A, B and C be lotteries - processes that result in different outcomes, positive or negative, with a certain probability of each. For 0<p<1, the mixed lottery pA + (1-p)B implies that you have p chances of being in lottery A, and (1-p) chances of being in lottery B. Then writing A>B means that you prefer lottery A to lottery B, A<B is the reverse and A=B means that you are indifferent between the two. Then the von Neumann-Morgenstern axioms are:

\n
Completeness: for any two lotteries A and B, either A ≥ B or B ≥ A - you can always compare them.
Transitivity: if A ≥ B and B ≥ C, then A ≥ C.
Continuity: if A ≥ B ≥ C, then there is some probability p with B = pA + (1-p)C.
Independence: if A ≥ B, then for any lottery C and any 0 ≤ p ≤ 1, pA + (1-p)C ≥ pB + (1-p)C.
\n

In this post, I'll try and prove that even without the Independence axiom, you should continue to use expected utility in most situations. This requires some mild extra conditions, of course. The problem is that although these conditions are considerably weaker than Independence, they are harder to phrase. So please bear with me here.

\n

The whole insight in this post rests on the fact that a lottery that has 99.999% chance of giving you £1 is very close to being a lottery that gives you £1 with certainty. I want to express this fact by looking at the narrowness of the probability distribution, using the standard deviation. However, this narrowness is not an intrinsic property of the distribution, but of our utility function. Even in the example above, if I decide that receiving £1 gives me a utility of one, while receiving zero gives me a utility of minus ten billion, then I no longer have a narrow distribution, but a wide one. So, unlike the traditional set-up, we have to assume a utility function as being given. Once this is chosen, this allows us to talk about the mean and standard deviation of a lottery.

\n

Then if you define c(μ) as the lottery giving you a certain return of μ, you can use the following axiom instead of independence:

\n
Standard deviation bound: for every ε > 0 there is a δ > 0 such that any lottery with mean μ > 0 and standard deviation less than δμ is preferred to c((1-ε)μ).
\n

This seems complicated, but all that it says, in mathematical terms, is that if we have a probability distribution that is \"narrow enough\" around its mean μ, then we should value it as being very close to a certain return of μ. The narrowness is expressed in terms of its standard deviation - a lottery with zero SD is a guaranteed return of μ, and as the SD gets larger, the distribution gets wider, and the chances of getting values far away from μ increase. So risk, in other words, scales (approximately) with the SD.

\n

We also need to make sure that we are not risk loving - if we are inveterate gamblers for the point of being gamblers, our behaviour may be a lot more complicated.

\n
Not risk loving: a lottery with mean μ is never strictly preferred to the certain return c(μ).
\n

I.e. we don't love a worse rate of return just because of the risk. This axiom can and maybe should be weakened, but it's a good approximation for the moment - most people are not risk loving with huge risks.

\n

Assume you are going to have to choose n different times whether to accept independent lotteries with fixed mean β>0, and all with SD less than a fixed upper-bound K. Then if you are not risk loving and n is large enough, you must accept an arbitrarily large proportion of the lotteries.

\n

Proof: From now on, I'll use a different convention for adding and scaling lotteries. Treating them as random variables, A+B will mean the lottery consisting of A and B together, while xA will mean the same lottery as A, but with all returns (positive or negative) scaled by x.

\n

Let X1, X2, ... , Xn be these n independent lotteries, each with mean β and variance vj. Then, since the standard deviations are less than K, the variances must be less than K².

\n

Let Y = X1 + X2 + ... + Xn. The mean of Y is nβ. The variance of Y is the sum of the vj, which is less than nK². Hence the SD of Y is less than K√(n). Now pick an ε>0, and the resulting δ>0 from the standard deviation bound axiom. For large enough n, nβδ must be larger than K√(n); hence, for large enough n, Y > c((1-ε)nβ). Now, if we were to refuse more than εn of the lotteries, we would be left with a distribution with mean ≤ (1-ε)nβ, which, since we are not risk loving, is worse than c((1-ε)nβ), which is worse than Y. Hence we must accept more than a proportion (1-ε) of the lotteries on offer.
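\n

Spelling out the arithmetic of that step (the explicit threshold on n is my own, using the standard deviation bound axiom as reconstructed above):

```latex
\mathrm{SD}(Y) < K\sqrt{n} \le \delta\, n\beta
\quad\Longleftrightarrow\quad
n \ge \left(\frac{K}{\delta\beta}\right)^{2},
\qquad\text{so for such } n,\; Y > c\bigl((1-\varepsilon)\, n\beta\bigr).
```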

\n

This only applies to lotteries that share the same mean, but we can generalise the result as:

\n

Assume you are going to have to choose n different times whether to accept independent lotteries all with means greater than a fixed β>0, and all with SD less than a fixed upper-bound K. Then if you are not risk loving and n is large enough, you must accept lotteries whose means represent an arbitrarily large proportion of the total mean of all lotteries on offer.

\n

Proof: The same proof works as before, with nβ now being a lower bound on the true mean μ of Y. Thus we get Y > c((1-ε)μ), and we must accept lotteries whose total mean is greater than (1-ε)μ.

\n

 

\n

Analysis: Since we rejected independence, we must now consider the lotteries when taken as a whole, rather than just seeing them individually. When considered as a whole, \"reasonable\" lotteries are more tightly bunched around their total mean than they are individually. Hence the more lotteries we consider, the more we should treat them as if only their mean mattered. So if we are not risk loving, and expect to meet many lotteries with bounded SD in our lives, we should follow expected utility. Deprived of independence, expected utility sneaks in via aggregation.
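\n

A quick Monte Carlo sketch of that aggregation effect, using an assumed toy lottery that pays 0 or 2 with equal probability (mean β = 1, SD = 1 ≤ K): individually it is very risky, but the sum of n of them bunches ever more tightly around nβ.

```python
import random

def aggregate(n, trials=5000):
    # Total payout of n independent copies of the toy lottery, estimated by simulation.
    totals = [sum(random.choice((0, 2)) for _ in range(n)) for _ in range(trials)]
    m = sum(totals) / trials
    sd = (sum((t - m) ** 2 for t in totals) / trials) ** 0.5
    return m, sd

for n in (1, 10, 100, 1000):
    m, sd = aggregate(n)
    print(f"n={n:5d}  mean~{m:7.1f}  SD~{sd:5.1f}  SD/mean~{sd / m:.3f}")
```

The SD grows like √n while the mean grows like n, so the relative spread shrinks like 1/√n - which is what lets the standard deviation bound axiom take over once n is large.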

\n

Note: This restates the first half of my previous post - a post so confusingly written it should be staked through the heart and left to die on a crossroad at noon.

\n

Edit: Rewrote a part to emphasise the fact that a utility function needs to be chosen in advance - thanks to Peter de Blanc and Nick Hay for bringing this up.

\n

 

" } }, { "_id": "Ph9CJHwYAmbY9B9im", "title": "Choosing the right amount of choice", "pageUrl": "https://www.lesswrong.com/posts/Ph9CJHwYAmbY9B9im/choosing-the-right-amount-of-choice", "postedAt": "2009-10-28T04:47:32.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "Ph9CJHwYAmbY9B9im", "html": "

The TED talk which I have seen praised most often is Barry Schwartz’s Paradox of Choice. His claim is that the ‘official dogma of all Western industrial societies’ – that more choice is good for us – is wrong. This has apparently been a welcome message for many.

\n\n

Barry thinks the costs of choice are too high at current levels. His reasons are that it increases our expectations, makes us focus on opportunity costs rather than enjoying what we have, paralyzes us into putting off complicated or important choices, and makes us blame ourselves rather than the world when our selections fail to satisfy. Usually, though, we can choose how much choice to have. You can always just pick a random jar of jam from the shelf if you find the decision making costly. So implicit in Barry’s complaint is that we continually misjudge these downsides and opt for more choice than we should.

\n

Perhaps he is right currently, but I think probably wrong in the long term. Why should we fail to adapt? Even if we can’t adapt psychologically, as inability to deal with choices becomes more of a problem, more technologies for solving it will be found. Having the benefits of choice without the current costs doesn’t appear an insoluble problem.

\n

One option for allowing more choice about choice, while keeping some benefits of variety is to have a standard default option available. Another that seems feasible is using a barcode scanner on a phone, connected to product information and an equation for finding the net goodness of products according to the owner’s values (e.g. goodness = -price – 1c per calorie – 1c per 10 miles travelled + 10c per good review – $100m for peanut traces + …). This could avoid a lot of time spent comparing product information on packages by instantly telling you which brand you likely prefer. Systems for telling you which music and films and people you are likely to like based on previous encounters are improving.
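\n

A toy version of that net-goodness equation in Python; the product fields, weights and jams below are hypothetical, just to show the shape such a personal scoring function might take (prices in dollars).

```python
# A hypothetical personal scoring function in the spirit of the equation above.
def goodness(product):
    score = -product["price"]
    score -= 0.01 * product["calories"]
    score -= 0.01 * (product["miles_travelled"] / 10)
    score += 0.10 * product["good_reviews"]
    if product["contains_peanut_traces"]:
        score -= 100_000_000  # effectively a hard veto
    return score

jams = [
    {"name": "Jam A", "price": 3.50, "calories": 250, "miles_travelled": 40,
     "good_reviews": 12, "contains_peanut_traces": False},
    {"name": "Jam B", "price": 2.80, "calories": 300, "miles_travelled": 900,
     "good_reviews": 3, "contains_peanut_traces": False},
]

best = max(jams, key=goodness)
print(best["name"], round(goodness(best), 2))
```

A phone app of the kind imagined here would fill in the product dictionary from the barcode lookup and simply show the owner the highest-scoring option.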

\n

I suspect for many things we would prefer to make very resource intensive choices, because we want to make them ourselves. Where we want to have unique possessions that we identify with, each person needs to go through a similar process of finding out product information and assessing it. We don’t want to know once and for all which is most likely to be the best car for most people. Neither do we want to have randomized unique clothing. We usually want our visible possessions to reflect a choice. This isn’t a barrier to improving our choice making though. Any system that gave a buyer the best few options according to their apparent taste, for them to make the final decision, should probably keep the nice parts of choosing while avoiding time spent on disappointing options.

\n

How much choice is good for us depends a lot on the person. Those far out on relevant bell curves will benefit more from access to more obscure options, while the most normal people will do better by going with the standard option without much thought. One level of choice will not suit all and nor will it have to. We will choose to keep and improve our choice of choices.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "r8stxYL29NF9w53am", "title": "Doing your good deed for the day", "pageUrl": "https://www.lesswrong.com/posts/r8stxYL29NF9w53am/doing-your-good-deed-for-the-day", "postedAt": "2009-10-27T00:45:24.144Z", "baseScore": 152, "voteCount": 141, "commentCount": 57, "url": null, "contents": { "documentId": "r8stxYL29NF9w53am", "html": "

Interesting new study out on moral behavior. The one sentence summary of the most interesting part is that people who did one good deed were less likely to do another good deed in the near future. They had, quite literally, done their good deed for the day.

In the first part of the study, they showed that people exposed to environmentally friendly, \"green\" products were more likely to behave nicely. Subjects were asked to rate products in an online store; unbeknownst to them, half were in a condition where the products were environmentally friendly, and the other half in a condition where the products were not. Then they played a Dictator Game. Subjects who had seen environmentally friendly products shared more of their money.

In the second part, instead of just rating the products, they were told to select $25 worth of products to buy from the store. One in twenty five subjects would actually receive the products they'd purchased. Then they, too, played the Dictator Game. Subjects who had bought environmentally friendly products shared less of their money.

In the third part, subjects bought products as before. Then, they participated in a \"separate, completely unrelated\" experiment \"on perception\" in which they earned money by identifying dot patterns. The experiment was designed such that participants could lie about their perceptions to earn more. People who purchased the green products were more likely to do so.

This does not prove that environmentalists are actually bad people - remember that whether a subject purchased green products or normal products was completely randomized. It does suggest that people who have done one nice thing feel less of an obligation to do another.

This meshes nicely with a self-signalling conception of morality. If part of the point of behaving morally is to convince yourself that you're a good person, then once you're convinced, behaving morally loses a lot of its value.

By coincidence, a few days after reading this study, I found this article by Dr. Beck, a theologian, complaining about the behavior of churchgoers on Sunday afternoon lunches. He says that in his circles, it's well known that people having lunch after church tend to abuse the waitstaff and tip poorly. And he blames the same mechanism identified by Mazar and Zhong in their Dictator Game. He says that, having proven to their own satisfaction that they are godly and holy people, doing something else godly and holy like being nice to others would be overkill.

It sounds...strangely plausible.

If this is true, then anything that makes people feel moral without actually doing good is no longer a harmless distraction. All those biases that lead people to give time and money and thought to causes that don't really merit them waste not only time and money, but an exhaustible supply of moral fiber (compare to Baumeister's idea of willpower as a limited resource).

People here probably don't have to worry about church. But some of the other activities Dr. Beck mentions as morality sinkholes seem appropriate, with a few of the words changed:

\n
\n

Bible study
Voting Republican
Going on spiritual retreats
Reading religious books
Arguing with evolutionists
Sending your child to a Christian school or providing education at home
Using religious language
Avoiding R-rated movies
Not reading Harry Potter.

\n
\n


Let's not get too carried away with the evils of spiritual behavior - after all, data do show that religious people still give more to non-religious charities than the nonreligious do. But the points in and of themselves are valid. I've seen Michael Keenan and Patri Friedman say exactly the same thing regarding voting, and I would add to the less religion-o-centric list:

\n
\n

Joining \"1000000 STRONG AGAINST WORLD HUNGER\" type Facebook groups
Reading a book about the struggles faced by poor people, and telling people how emotional it made you
\"Raising awareness of problems\" without raising awareness of any practical solution
Taking (or teaching) college courses about the struggles of the less fortunate
Many forms of political, religious, and philosophical arguments

\n
\n

My preferred solution to this problem is to consciously try not to count anything I do as charitable or morally relevant except actually donating money to organizations. It is a bit extreme, but, like Eliezer's utilitarian foundation for deontological ethics, sometimes to escape the problems inherent in running on corrupted hardware you have to jettison all the bathwater, even knowing it contains a certain number of babies. A lot probably slips by subconsciously, but I find it better than nothing (at least, I did when I was actually making money; it hasn't worked since I went back to school). Your mileage may vary.

It may be tempting to go from here to a society where we talk much less about morality, especially little bits of morality that have no importance on their own. That might have unintended consequences. Remember that the participants in the study who saw lots of environmentally friendly products but couldn't buy any ended up nicer. The urge to be moral seems to build up by anything priming us with thoughts of morality.

But to prevent that urge from being discharged, we need to plug up the moral sinkholes Dr. Beck mentions, and any other moral sinkholes we can find. We need to give people less moral recognition and acclaim for performing only slightly moral acts. Only then can we concentrate our limited moral fiber on truly improving the world.

And by, \"we\", I mean \"you\". I've done my part just by writing this essay.

" } }, { "_id": "MxyRNd6qJsYAcXKuw", "title": "Doing Your Good Deed For The Day", "pageUrl": "https://www.lesswrong.com/posts/MxyRNd6qJsYAcXKuw/doing-your-good-deed-for-the-day-1", "postedAt": "2009-10-27T00:36:20.553Z", "baseScore": 3, "voteCount": 2, "commentCount": 0, "url": null, "contents": { "documentId": "MxyRNd6qJsYAcXKuw", "html": "

A few months ago, researchers at the University of Toronto published a very interesting study on moral behavior. The one sentence summary of the most interesting part is that people who did one good deed were less likely to do another good deed in the near future. They had, quite literally, done their good deed for the day.

\n

In the first part of the study, they showed that people exposed to environmentally friendly, \"green\" products were more likely to behave nicely. Subjects were asked to rate products in an online store; unbeknownst to them, half were in a condition where the products were environmentally friendly, and the other half in a condition where the products were not. Then they played a Dictator Game. Subjects who had seen environmentally friendly products shared more of their money.

In the second part, instead of just rating the products, they were told to select $25 worth of products to buy from the store. One in twenty five subjects would actually receive the products they'd purchased. Then they, too, played the Dictator Game. Subjects who had bought environmentally friendly products shared less of their money.

In the third part, subjects bought products as before. Then, they participated in a \"separate, completely unrelated\" experiment \"on perception\" in which they earned money by identifying dot patterns. The experiment was designed such that participants could lie about their perceptions to earn more. People who purchased the green products were more likely to do so.

This does not prove that environmentalists are actually bad people - remember that whether a subject purchased green products or normal products was completely randomized. It does suggest that people who have done one nice thing feel less of an obligation to do another.

\n

\n

This meshes nicely with a self-signalling conception of morality. If part of the point of behaving morally is to convince yourself that you're a good person, then once you're convinced, behaving morally loses a lot of its value.

By coincidence, a few days after reading this study, I found this article by Dr. Beck, a theologian, complaining about the behavior of churchgoers on Sunday afternoon lunches. He says that in his circles, it's well known that people having lunch after church tend to abuse the waitstaff and tip poorly. And he blames the same mechanism identified by Mazar and Zhong in their Dictator Game. He says that, having proven to their own satisfaction that they are godly and holy people, doing something else godly and holy like being nice to others would be overkill.

It sounds...strangely plausible.

If this is true, then anything that makes people feel moral without actually doing good is no longer a harmless distraction. All those biases that lead people to give time and money and thought to causes that don't really merit them waste not only time and money, but an exhaustible supply of moral fiber (compare to Baumeister's idea of willpower as a limited resource).

People here probably don't have to worry about church. But some of the other activities Dr. Beck mentions as morality sinkholes seem appropriate, with a few of the words changed:

\n
\n

Bible study
Voting Republican
Going on spiritual retreats
Reading religious books
Arguing with evolutionists
Sending your child to a Christian school or providing education at home
Using religious language
Avoiding R-rated movies
Not reading Harry Potter.

\n
\n

I've seen Michael Keenan and Patri Friedman make exactly the same point regarding voting, and I would add to the less religion-o-centric list:

\n
\n

Joining \"1000000 STRONG AGAINST WORLD HUNGER\" type Facebook groups
Reading a book about the struggles faced by poor people, and telling people how emotional it made you
\"Raising awareness of problems\" without raising awareness of any practical solution
Taking (or teaching) college courses about the struggles of the less fortunate
Many forms of political, religious, and philosophical arguments

\n
\n

My preferred solution to this problem is to consciously try not to count anything I do as charitable or morally relevant except actually donating money to organizations. It is a bit extreme, but, like Eliezer's utilitarian foundation for deontological ethics, sometimes to escape the problems inherent in running on corrupted hardware you have to jettison all the bathwater, even knowing it contains a certain number of babies. A lot probably slips by subconsciously, but I find it better than nothing (at least, I did when I was actually making money; I've had to suspend it recently). Your mileage may vary.

\n

It may be tempting to go from here to a society where we talk much less about morality, especially little bits of morality that have no importance on their own. That might have unintended consequences. Remember that the participants in the study who saw lots of environmentally friendly products but couldn't buy any ended up nicer. The urge to be moral seems to be built up by anything that primes us with thoughts of morality.

But to prevent that urge from being discharged, we need to plug up the moral sinkholes Dr. Beck mentions, and any other moral sinkholes we can find. We need to give people less moral recognition and acclaim for performing only slightly moral acts. Only then can we concentrate our limited moral fiber on truly improving the world.

And by, \"we\", I mean \"you\". I've done my part just by writing this essay.

" } }, { "_id": "reG3g4wwzwJcKnFfh", "title": "Doing Your Good Deed For The Day", "pageUrl": "https://www.lesswrong.com/posts/reG3g4wwzwJcKnFfh/doing-your-good-deed-for-the-day-0", "postedAt": "2009-10-27T00:28:12.781Z", "baseScore": 4, "voteCount": 3, "commentCount": 0, "url": null, "contents": { "documentId": "reG3g4wwzwJcKnFfh", "html": "

A few months ago, researchers at the University of Toronto published a very interesting study on moral behavior. The one sentence summary of the most interesting part is that people who did one good deed were less likely to do another good deed in the near future. They had, quite literally, done their good deed for the day.

\n

In the first part of the study, they showed that people exposed to environmentally friendly, \"green\" products were more likely to behave nicely. Subjects were asked to rate products in an online store; unbeknownst to them, half were in a condition where the products were environmentally friendly, and the other half in a condition where the products were not. Then they played a Dictator Game. Subjects who had seen environmentally friendly products shared more of their money.

In the second part, instead of just rating the products, they were told to select $25 worth of products to buy from the store. One in twenty-five subjects would actually receive the products they'd purchased. Then they, too, played the Dictator Game. Subjects who had bought environmentally friendly products shared less of their money.

In the third part, subjects bought products as before. Then, they participated in a \"separate, completely unrelated\" experiment \"on perception\" in which they earned money by identifying dot patterns. The experiment was designed so that participants could lie about their perceptions to earn more. People who had purchased the green products were more likely to lie.

This does not prove that environmentalists are actually bad people - remember that whether a subject purchased green products or normal products was completely randomized. It does suggest that people who have done one nice thing feel less of an obligation to do another.

\n

\n

This meshes nicely with a self-signalling conception of morality. If part of the point of behaving morally is to convince yourself that you're a good person, then once you're convinced, behaving morally loses a lot of its value.

By coincidence, a few days after reading this study, I found this article by Dr. Beck, a theologian, complaining about the behavior of churchgoers at Sunday afternoon lunches. He says that in his circles, it's well known that people having lunch after church tend to abuse the waitstaff and tip poorly. And he blames the same mechanism identified by Mazar and Zhong in their Dictator Game. He says that, having proven to their own satisfaction that they are godly and holy people, the churchgoers feel that doing something else godly and holy, like being nice to others, would be overkill.

It sounds...strangely plausible.

If this is true, then anything that makes people feel moral without actually doing good is no longer a harmless distraction. All those biases that lead people to give time and money and thought to causes that don't really merit them waste not only time and money, but an exhaustible supply of moral fiber (compare to Baumeister's idea of willpower as a limited resource).

People here probably don't have to worry about church. But some of the other activities Dr. Beck mentions as morality sinkholes seem appropriate, with a few of the words changed:

\n
\n

Bible study
Voting Republican
Going on spiritual retreats
Reading religious books
Arguing with evolutionists
Sending your child to a Christian school or providing education at home
Using religious language
Avoiding R-rated movies
Not reading Harry Potter.

\n
\n

I've seen Michael Keenan and Patri Friedman make exactly the same point regarding voting, and I would add to the less religion-o-centric list:

\n
\n

Joining \"1000000 STRONG AGAINST WORLD HUNGER\" type Facebook groups
Reading a book about the struggles faced by poor people, and telling people how emotional it made you
\"Raising awareness of problems\" without raising awareness of any practical solution
Taking (or teaching) college courses about the struggles of the less fortunate
Many forms of political, religious, and philosophical arguments

\n
\n

My preferred solution to this problem is to consciously try not to count anything I do as charitable or morally relevant except actually donating money to organizations. It is a bit extreme, but, like Eliezer's utilitarian foundation for deontological ethics, sometimes to escape the problems inherent in running on corrupted hardware you have to jettison all the bathwater, even knowing it contains a certain number of babies. A lot probably slips by subconsciously, but I find it better than nothing. Your mileage may vary.

\n

It may be tempting to go from here to a society where we talk much less about morality, especially little bits of morality that have no importance on their own. That might have unintended consequences. Remember that the participants in the study who saw lots of environmentally friendly products but couldn't buy any ended up nicer. The urge to be moral seems to be built up by anything that primes us with thoughts of morality.

But to prevent that urge from being discharged, we need to plug up the moral sinkholes Dr. Beck mentions, and any other moral sinkholes we can find. We need to give people less moral recognition and acclaim for performing only slightly moral acts. Only then can we concentrate our limited moral fiber on truly improving the world.

And by, \"we\", I mean \"you\". I've done my part just by writing this essay.

" } }, { "_id": "NcY2K27z6DYoLpdYa", "title": "Computer bugs and evolution", "pageUrl": "https://www.lesswrong.com/posts/NcY2K27z6DYoLpdYa/computer-bugs-and-evolution", "postedAt": "2009-10-26T22:06:38.945Z", "baseScore": 55, "voteCount": 52, "commentCount": 10, "url": null, "contents": { "documentId": "NcY2K27z6DYoLpdYa", "html": "

Last Friday, I finally packaged up the quarterly release of JCVI's automatic prokaryote functional annotation pipeline and distributed it to the other 3 sequencing centers for the Human Microbiome Project.  It looks at genes found in newly-sequenced bacterial genomes and guesses what they do.  As often happens when I release a new version, shortly afterwards, I discovered a major bug that had been hiding in the code for years.

\n

The program takes each new gene and runs BLAST against a database of known genes, and produces a list of identifiers of genes resembling the new genes.  It then takes these identifiers, and calls a program to look in a database for all of the synonyms for these identifiers used in different gene databases.  This lookup step takes 90% of the program's runtime.

\n

I found that the database lookup usually failed, because most identifiers didn't match the regular expression used in the lookup program to retrieve identifiers.  Nobody had noticed this, because nobody had checked the database log files.  I fixed the program so that the database lookup would always work correctly, and re-ran the program.  It produced exactly the same output as before, but took five times as long to run.

\n

So instead of going dancing as I'd planned, I spent Friday evening figuring out why this happened.  It turned out that the class of identifiers that failed to match the regular expression was a subset of a set of identifiers for which the lookup didn't have to be done, because some previously-cached data would give the same results.  Once I realized this, I was able to speed the program up more, by excluding more such identifiers, and avoiding the overhead of about a million subroutine calls that would eventually fail when the regular expression failed to match.  (Lest you think that the regular expression was intentionally written that way to produce that behavior, the regular expression was inside a library that was written by someone else.  Also, the previously-cached data would not have given the correct results prior to a change that I made a few months ago.)
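
The shape of that optimization is easy to sketch. Here is a minimal Python rendition of the idea (the original is part of a large Perl pipeline; every name below, including LOOKUP_ID_PATTERN, cached_synonyms and lookup_synonyms, is invented for illustration): identifiers that would fail the lookup's regular expression, and whose answers are already covered by the cached data, are filtered out before the expensive database call.

```python
import re

# Hypothetical stand-in for the library's identifier pattern; the real
# pipeline's regular expression and data structures are not shown here.
LOOKUP_ID_PATTERN = re.compile(r'^[A-Z]{2}_\d+$')

def gather_synonyms(identifiers, cached_synonyms, lookup_synonyms):
    # Collect synonyms for each identifier, skipping database lookups that
    # would fail the regex and are already answered by the cached data.
    results = {}
    to_query = []
    for ident in identifiers:
        if LOOKUP_ID_PATTERN.match(ident):
            to_query.append(ident)
        else:
            # These lookups used to fail silently; the cache gives the same
            # answer, so skip the expensive database call entirely.
            results[ident] = cached_synonyms.get(ident, [])
    results.update(lookup_synonyms(to_query))  # one call for the rest
    return results
```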

\n

A bug in a program is like a mutation.  Bugs in a computer program are almost always bad.  But this was a beneficial bug, which had no effect other than to make the program run much faster than it had been designed to.  I was delighted to see this proof of the central non-intuitive idea of evolution:  A random change can sometimes be beneficial, even to something as complex and brittle as a 10,000-line Perl program.

" } }, { "_id": "swSBpXET5cRbJBFFY", "title": "Circular Altruism vs. Personal Preference", "pageUrl": "https://www.lesswrong.com/posts/swSBpXET5cRbJBFFY/circular-altruism-vs-personal-preference", "postedAt": "2009-10-26T01:43:16.174Z", "baseScore": 15, "voteCount": 17, "commentCount": 14, "url": null, "contents": { "documentId": "swSBpXET5cRbJBFFY", "html": "

Suppose there is a diagnostic procedure that allows one to catch a relatively rare disease with absolute precision. If left untreated, the disease is fatal, but when diagnosed it's easily treatable (I suppose there are some real-world approximations). The diagnostic involves an uncomfortable procedure and an inevitable loss of time. At what a priori probability would you not care to take the test, leaving the outcome to chance? Say, you decide it's 0.0001%.

\n

Enter timeless decision theory. Your decision to take or not take the test may just as well be considered a decision for the whole population (let's also assume you are typical and everyone is similar in this decision). By deciding not to take the test yourself, you've decided that most people won't take the test, and thus, for example, with 0.00005% of the population having the condition, about 3000 people will die. While the personal tradeoff is fixed, this number obviously depends on the size of the population.
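
The arithmetic behind that figure is quick to spell out; a sketch using the numbers above (the population size is an assumption on my part):

```python
population = 6_000_000_000    # assumed rough world population
prevalence = 0.00005 / 100    # 0.00005% of people have the condition
threshold = 0.0001 / 100      # the a priori risk below which you would skip the test

print(population * prevalence)   # expected deaths if nobody tests: 3000.0
print(prevalence < threshold)    # True: each individual's risk is below their own threshold
```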

\n

It seems like a horrible thing to do, making a decision that results in 3000 deaths. Thus, taking the test seems like a small personal sacrifice for this gift to others. Yet this is circular: everyone would be thinking that, each reversing their decision solely to help others, not to benefit personally. Nobody benefits.

\n

Obviously, together with the 3000 lives saved, there is the cost to 6 billion people of taking the test, and that harm is also part of the outcome chosen by the decision. If everyone personally prefers not to take the test, then inflicting the opposite on the whole population is only so much worse.

\n

Or is it? What if you care more about other people's lives in proportion to their comfort than you care about your own life in proportion to your own comfort? How can caring about other people be in exact harmony with caring about yourself? It may be the case that you prefer other people to take the test, even if you don't want to take the test yourself, and that this is the position of the whole population. What is the right thing to do then? What wins: personal preference, or this \"circular altruism\", a preference about other people that not a single person accepts for themselves?

\n

If altruism wins, then it seems that the greater the population, the less personal preference should matter, and the more the structure of altruistic preference takes over the personal decision-making. The person disappears, with everyone going through the motions of implementing the perfect play for their ancestral spirits.

\n

P.S. This thought experiment is an example of Pascal's mugging closer to real-world scale. As with specks, I assume that there is no opportunity cost in lives from taking the test. The experiment also confronts utilitarian analysis of caring about other people depending on structure of the population, comparing that criterion against personal preference.

" } }, { "_id": "ACPsf7nMBMK3rESrT", "title": "Natural cultural relativists?", "pageUrl": "https://www.lesswrong.com/posts/ACPsf7nMBMK3rESrT/natural-cultural-relativists", "postedAt": "2009-10-25T21:53:44.000Z", "baseScore": 2, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "ACPsf7nMBMK3rESrT", "html": "

When given the same ability to punish anyone, cooperative people want to punish members of groups they identify with more than they do outsiders, while less cooperative people want to punish outsiders more. From the journal Evolution and Human Behavior:

\n
One of the most critical features of human society is the pervasiveness of cooperation in social and economic exchanges. Moreover, social scientists have found overwhelming evidence that such cooperative behavior is likely to be directed toward in-group members. We propose that the group-based nature of cooperation includes punishment behavior. Punishment behavior is used to maintain cooperation within systems of social exchange and, thus, is directed towards members of an exchange system. Because social exchanges often take place within groups, we predict that punishment behavior is used to maintain cooperation in the punisher’s group. Specifically, punishment behavior is directed toward in-group members who are found to be noncooperators. To examine this, we conducted a gift-giving game experiment with third-party punishment. The results of the experiment (N=90) support the following hypothesis: Participants who are cooperative in a gift-giving game punish noncooperative in-group members more severely than they punish noncooperative out-group members.
\n


\n

The researchers’ conclusion is that punishment is just an extension of cooperation, and so applies in the same areas. They were not expecting, and haven’t got a good explanation for, uncooperative people’s interest in specifically punishing outsiders.

\n

This provides a potential explanation for something I was wondering about. Middle class people often seem to talk about poor people and people from other cultures in terms of their actions being caused by bad external influences, in contrast to the language of free will and responsibility for their own kind. Discussion of Aboriginals in Australia regularly exemplifies this. e.g. SMH:

\n

More than half the Aboriginal male inmates in prison for violent crimes are suffering from post traumatic stress disorder, an academic says.

\n

And without effective intervention, the “stressors” for the disorder will be passed on to other generations, perpetuating the cycles of crime.

\n

Dr Caroline Atkinson said most violent inmates had suffered from some form of family violence, alcohol and drug use, as well as profound grief and loss…

\n

“It was a confronting experience being inside a cell with someone who has committed murder, but I quickly realised they are the ones with the answers and they had such amazing insight,” she said.

\n

This is quite unlike news coverage I have seen of middle class white murderers. When we see faults as caused by external factors rather than free will or personal error, we aren’t motivated to punish. Is the common practice of coolly blaming circumstance when we talk about situations like violence in Aboriginal communities because the good, cooperative people who write about these things don’t identify with the groups they are talking about?

\n

On a side note, is our ‘widening moral circle’ linked to greater desire to reform other cultures?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "PGyJ5Kczc77AYLSKZ", "title": "The Value of Nature and Old Books", "pageUrl": "https://www.lesswrong.com/posts/PGyJ5Kczc77AYLSKZ/the-value-of-nature-and-old-books", "postedAt": "2009-10-25T18:14:43.990Z", "baseScore": 11, "voteCount": 22, "commentCount": 68, "url": null, "contents": { "documentId": "PGyJ5Kczc77AYLSKZ", "html": "

People have always had a religious or quasi-religious reverence for nature. In modern times, some people have started to see nature more as an enemy to be conquered than as a god to be worshiped. Such people point out that uncontrolled nature causes a tremendous amount of human suffering (to say nothing of all the misery that it causes other creatures), and that vast improvements to human welfare have largely been the result of us ceasing to love and fear nature and starting to control it.

There are several common responses to this. One response is that it is solipsistic for humans to measure the value of nature in terms of what is and is not good for us. This strikes me as right only insofar as it ignores the welfare of non-human creatures who have enough going on in terms of consciousness and/or sentience to matter; I think the objection would be without merit if one were to broaden the scope of concern to something like all creatures, present and future, capable of having experiences (who else is there to care about?). A  second response is that seeing ourselves as highly effective lords over nature leads to dangerous overconfidence, which leads to costly mistakes in how we deal with nature. This is a very fair point, but what it really amounts to is a claim that we shouldn't underestimate the enemy, not that the enemy is really a friend. Anyway, the solution to that problem is to become better rationalists and get better at being skeptical regarding our powers, not to retreat into quasi-mystical Gaia worship. A third response is that getting into a \"conquer nature\" frame of mind puts people into a \"conquer everything\" frame of mind and leads to aggression against other people. This might have merit historically, but that problem is also best confronted directly, in this case by more effectively promulgating liberal humanistic values.

So what, if anything, is left to the idea that there is something special about nature worthy of particular regard? And by special I mean something beyond the fact that many people just plain enjoy it the way they enjoy lots of other things that nevertheless have no claim to any special status. I would say that the main thing that makes nature special in this sense is that when you are in nature or contemplating nature, you can be confident that the resulting thoughts and feelings are uncontaminated by all of the (visible and invisible) ideas and biases and assumptions that are present in your particular time and place. When you look at a waterfall and you like it, you can be pretty sure that: (i) it wasn't put there by anyone with an agenda; (ii) you weren't manipulated into liking it by contemporary ideology or social pressure or persuasive advertising or whatever; and (iii) the thoughts that you think while contemplating it aren't the thoughts anyone is trying to lead you into. In other words, nature is a way of guaranteeing that there is a little corner of experience that we are instinctively drawn to and that we can be confident doesn't represent anyone else's attempt to control us. And since other people are trying to control us all the time, even in relatively free societies (all the more so in oppressive ones), this is of real value.

I think the same basic point applies to some other things besides nature. Why do people still read old books* even when the knowledge in them has been refined and improved-upon in the meantime? In many subjects, we don't. Nobody learns geometry by reading Euclid, because there would be no point. But people do still read ancient works of philosophy. It seems to me that one good reason to do so is that for all the ways that these works have been analyzed and surpassed in the intervening years, the reader can be sure that what is written there is not the product of manipulation by the forces that are at work in the reader's own time and place. So it represents another way to gain valuable freedom and distance.

*Here I'm talking about non-fiction books. The merits of old creative works, even when the innovations in them have become widespread in newer works, are a different story. Often a point like the one in this post still applies, and sometimes the old stuff really is still just the best.

" } }, { "_id": "hqpSutKLfBQtffs72", "title": "Arrow's Theorem is a Lie", "pageUrl": "https://www.lesswrong.com/posts/hqpSutKLfBQtffs72/arrow-s-theorem-is-a-lie", "postedAt": "2009-10-24T20:46:07.942Z", "baseScore": 44, "voteCount": 49, "commentCount": 64, "url": null, "contents": { "documentId": "hqpSutKLfBQtffs72", "html": "

Suppose that we, a group of rationalists, are trying to come to an agreement on which course of action to take. All of us have the same beliefs about the world, as Aumann's Agreement Theorem requires, but everyone has different preferences. What decision-making procedure should we use? We want our decision-making procedure to satisfy a number of common-sense criteria:

\n

- Non-dictatorship: No one person should be able to dictate what we should do next. The decision-making process must take multiple people's preferences into account.

\n

- Determinism: Given the same choices, and the same preferences, we should always make the same decisions.

\n

- Pareto efficiency: If every member of the group prefers action A to action B, the group as a whole should also prefer A to B.

\n

- Independence of irrelevant alternatives: If we, as a group, prefer A to B, and a new option C is introduced, then we should still prefer A to B, regardless of what we think about C.

\n

Arrow's Theorem says that: Suppose we have a list of possible courses of action, {A, B, C, D... Q}. Everyone gets a pencil and a piece of paper, and writes down what their preferences are. Eg., if you thought that Q was the best, D was the second best, and A was the worst, you would write down Q > D > (...) > A. We then collect all the slips of paper, and put them in a big bin. However, we can't take all these slips of paper and produce a set of preferences for the group as a whole, eg., {H > E > I > C > ... > D > A}, in a way that satisfies all of the above criteria.

\n

Therefore, (so the story goes), much woe be unto us, for there is no way we can make decisions that satisfy this entirely reasonable list of requirements. Except, this isn't actually true. Suppose that, instead of writing down a list of which actions are preferred to which other actions, we write down a score next to each action, on a scale from 0-10. (For simplicity, we allow arbitrary real numbers rather than just integers, so, say, sqrt(2) or 7/3 are perfectly valid scores.) We add up the numbers that each person puts down for each action. Say, if one person puts down 5 for action A, and another person puts down 7 for action A, then A has a total score of 12. We then decree: we will, as a group, prefer any action with a higher total score to any action with a lower total score. This procedure does satisfy all of the desired criteria:

\n

- Non-dictatorship: Suppose there are two possible actions, A and B, and ten people in the group. Each person gives a score of zero to A and a ten to B, so B has a total score of 100, as compared to A's score of zero. If any one person changes their score for A to X, and their score for B to Y, then the total score for A is now X, and the score for B is now 90 + Y. Since we know that 0 <= X <= 10 and 0 <= Y <= 10, we can derive that 90 + Y >= 90 > 10 >= X, so B is still preferable to A. Therefore, no one person can switch the group's decision from B to A, so no one person is a dictator.

\n

- Determinism: All we do is add and rank the scores, and adding and ranking are both deterministic operations, so we know this procedure is deterministic.

\n

- Pareto efficiency: Suppose that there are N people in the group, and two possible actions, A and B. Every person prefers A to B, so person X will assign some score to B, B_X, and some score to A, A_X, where A_X > B_X. Let C_X = A_X - B_X; we know that C_X > 0. The total score for B is (B_1 + B_2 + B_3 + ... + B_N), and the total score for A is (A_1 + A_2 + A_3 + ... + A_N) = (B_1 + B_2 + B_3 + ... + B_N) + (C_1 + C_2 + C_3 + ... + C_N) > (B_1 + B_2 + B_3 + ... + B_N), so the group as a whole also prefers A to B.

\n

- Independence of irrelevant alternatives: If we prefer A to B as a group, and we all assign the same scores to A and B when C is introduced, then the total score of A must still be higher than that of B, even after C is brought in. Hence, our decision between A and B does not depend on C.
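
The procedure is short enough to state as code. A minimal sketch of the score-summing rule just described, in Python; the data structures (each ballot as a mapping from actions to scores in the 0-10 range) are my own choice of representation:

```python
def group_ranking(ballots):
    # Sum each action's scores across all ballots and rank by the totals,
    # highest total first.
    totals = {}
    for ballot in ballots:
        for action, score in ballot.items():
            totals[action] = totals.get(action, 0) + score
    return sorted(totals, key=totals.get, reverse=True)

# The ten-person example from the non-dictatorship argument above:
ballots = [{'A': 0, 'B': 10} for _ in range(9)] + [{'A': 10, 'B': 0}]
print(group_ranking(ballots))   # ['B', 'A']: a single dissenter cannot flip the result
```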

\n

What happened here? Arrow's Theorem isn't wrong, in the usual sense; the proof is perfectly valid mathematically. However, the theorem only covers a specific subset of decision algorithms: those where everyone writes down a simple preference ordering, of the form {A > B > C > D ... > Q}. It doesn't cover all collective decision procedures.

\n

I think what's happening here, in general, is that a). there is a great deal of demand for theorems which show counterintuitive results, as these tend to be the most interesting, and b). there are a large number of theorems which give counterintuitive results about things in the world of mathematics, or physics. For instance, many mathematicians were surprised when Cantor showed that the number of all rational numbers is equal to the number of all integers.

\n

Therefore, every time someone finds a theorem which appears to show something counterintuitive about real life, there is a tendency to stretch the definition: to say, wow, this theorem shows that thing X is actually impossible, when it usually just says \"thing X is impossible if you use method Y, under conditions A, B and C\". And it's unlikely you'll get called on it, because people are used to theorems which show things like 0.999... = 1, and we apparently treat this as evidence that a theorem which (supposedly) says \"you can't design a sane collective decision-making procedure\" or \"you can't do electromagnetic levitation\" is more likely to be correct.

\n

More examples of this type of fallacy:

\n

- Earnshaw's theorem states that no stationary collection of charges, or magnets, is stable. Hence, you cannot put a whole bunch of magnets in a plate, put the plate over a table with some more magnets in it, and expect it to stay where it is, regardless of the configuration of the magnets. This theorem was taken to mean that electromagnetic levitation of any sort was impossible, which caused a great deal of grief, as scientists then still thought that atoms were governed by electromagnetism rather than quantum mechanics, and so the theorem would appear to imply that no collection of atoms is stable. However, we now know that atoms stay stable by virtue of the Pauli exclusion principle, not classical E&M. And, it turns out, it is possible to levitate things with magnets: you just have to keep the thing spinning, instead of leaving it in one place. Or, you can use diamagnetism, but that's another discussion.

\n

- Heisenberg's Student Confusion Principle, which does not say \"we can't find out what the position and momentum of a particle is\", has already been covered earlier, by our own Eliezer Yudkowsky.

\n

- Conway's Free Will Theorem says that it's impossible to predict the outcomes of certain quantum experiments ahead of time. This is a pretty interesting fact about quantum physics. However, it is only a fact about quantum physics; it is not a fact about this horribly entangled thing that we call \"free will\". There are also numerous classical, fully deterministic computations whose output we can't predict ahead of time; eg., if you write a Turing Machine that searches for a twin prime larger than its input and halts when it finds one, nobody knows whether it halts on every input. If we could predict whether every such computation halts, we could solve the halting problem, and so there must be some that we can't predict.

\n

- There are numerous laws of economics which appear to fall into this category; however, so far as I can tell, few of them have been mathematically formalized. I am not entirely sure whether this is a good thing or a bad thing, as formalizing them would make them more precise, and clarify the conditions under which they apply, but also might cause people to believe in a law \"because it's been mathematically proven!\", even well outside of the domain it was originally intended for.

" } }, { "_id": "hPvx6WZr5kBnLtKrP", "title": "What is hope?", "pageUrl": "https://www.lesswrong.com/posts/hPvx6WZr5kBnLtKrP/what-is-hope", "postedAt": "2009-10-24T19:00:51.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "hPvx6WZr5kBnLtKrP", "html": "

At first it seems like a mixture of desire and belief in a possibility. It’s not just desire, because you can ‘have your hopes too high’ even though the hoped-for outcome is well worthy of desire, or ‘abandon hope’ when something reaches some level of unlikelihood. But hope is also not linked to a particular level of chance. It implies uncertainty about the outcome, but nothing beyond that.

\n

Is it a mixture of significant uncertainty and a valuable outcome then? No, you can consider something plausible and wonderful, but not worth hoping for. Sometimes it is worse to hope for the most marvelous things. No matter how likely, folks ‘don’t want to get their hopes up’ or ‘can’t bear to hope’.

\n

So there is apparently a cost to hoping. Hopes can bring you unhappiness if they fail, while another possibility with similar chances and desirability which was not hoped for would cause no distress. So hope is to do with something other than value or likelihood.

\n

A hope sounds like a goal which you can’t necessarily influence then. Failing in a goal is worse than failing in something you did not intend to achieve. A hope or a goal seems to be a particular point in outcome space where you will be extra happy if it is reached or surpassed and extra unhappy otherwise. We seem to choose goals according to a trade-off of ease and desirability, which is reminiscent of our seemingly choosing hopes according to likelihood and desirability. Unlike hopes though, we pretty much always try harder for goals when the potential gains are big. This probably makes sense; trying harder at a goal increases the likelihood of success, whereas hoping more does not, yet still gives you the larger misery of failure.

\n

Why hope at all then? Why not just have smooth utility functions? Goals help direct actions, which is extremely handy. Hopes seem to be outcomes you cheer for from the sidelines. Is this useful at all? Is it just a side effect of having goals? Is it so we can show others what would be our goals if we had the power? In which case should we expect declared hopes to be less honest than declared goals? Why are hopes so ubiquitous?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "h9WAEigBrbiimDHWA", "title": "Pound of Feathers, Pound of Gold", "pageUrl": "https://www.lesswrong.com/posts/h9WAEigBrbiimDHWA/pound-of-feathers-pound-of-gold", "postedAt": "2009-10-23T17:48:43.336Z", "baseScore": 5, "voteCount": 10, "commentCount": 15, "url": null, "contents": { "documentId": "h9WAEigBrbiimDHWA", "html": "

Which weighs more:  a pound of feathers, or a pound of gold?

\n

Close consideration of this riddle - and the conditions under which people tend to get it wrong - is helpful in understanding the limits of human rationality.  It is a specific example which leads us to general principles of rationality failure.

\n

These sorts of riddles and similar interpersonal language tricks (such as \"Stupid says what?\") are especially popular among children but not among adults.  Why is this the case?  Partly because adults are more likely to have previously encountered and become familiar with their patterns, but there are other factors - including one very relevant one.  Children tend to have less-developed capacities of impulse control.

\n

It takes very little analysis to discover the 'trick' in the question; the concepts involved are relatively simple.  But we're confronted with the fact that people do answer it incorrectly, and that by manipulating aspects of the context in which the question is delivered, we can significantly increase the chance people will fall for it.  What does this imply?  That analysis is not being conducted in the erroneous cases, and that context is a contributing factor to whether people successfully engage in conceptual analysis.  Specifically, that context determines whether people will counter their impulses long enough for analysis to be completed.

\n

The key to these sorts of riddles is time pressure.  If people feel free to take as much time as they like thinking over the question, they rarely fall for the trick.  But if they're trying to answer rapidly, they'll screw up.  Examples of situations that often result in such behavior include:  competing against others to see who can be correct first, trying to demonstrate competence by investing little effort in answering, or encountering the question as part of a limited-duration examination.  If several superficially-similar questions whose answer depends on retrieving facts from memory rather than performing logical analysis of the question are asked before the riddle is presented, that also tends to result in a wrong response.

\n

The error occurs because of our weight-related associations with the concepts of 'feathers' and 'gold', our conditioned assumptions about the sorts of questions people are likely to ask, and a failure to inhibit the first impulses towards response.  Feathers are far less dense than gold; any given volume of feathers will weigh far less than the same volume of the metal.  Questions about a property rarely contain their own answers in a trivial way - we do not expect the defined quantities in the question to be equivalent relative to the property being asked about.  And - this is the most vital aspect - it takes longer for our brains to process the question at a conceptual level than it does to activate our associations.

\n

In the state of nature, organisms are often under intense pressure to produce results quickly.  If they take too long, the resource they're trying to exploit may be taken by a competitor - or worse, they may themselves become resources exploited by a predator.  So stimulus-response methods which produce generally-useful reactions tend to be favored over extremely accurate and precise analysis that takes longer.  As a consequence, the natural modes of thought available to humans favor rapid responses more than rigorous correctness - and in much the same way that the limits of our visual processing systems lead to optical illusions, which can be understood and thus constructed, the limits of our conceptual processing lead to inherent tendencies towards fallacies of reason, which can be exploited to produce riddles and language gags.

\n

Just as other aspects of our behavioral response involve the repression of rudimentary reflexes, our thinking involves the inhibition of associational activation and reflexive reactions.  The \"more advanced\" cognitive functions can take place only because the simpler, less resource-intensive, and faster functions are prevented from initiating responses before them.

\n

In the wrestling match between the modern functions and the ancient ones they try to control, the more subtle and advanced features are at a distinct disadvantage.  Which brings us to the next post.

" } }, { "_id": "kmjCaq66MDkfvZpFX", "title": "Extreme risks: when not to use expected utility", "pageUrl": "https://www.lesswrong.com/posts/kmjCaq66MDkfvZpFX/extreme-risks-when-not-to-use-expected-utility", "postedAt": "2009-10-23T14:40:48.084Z", "baseScore": 8, "voteCount": 18, "commentCount": 37, "url": null, "contents": { "documentId": "kmjCaq66MDkfvZpFX", "html": "

Would you prefer a 50% chance of gaining €10, one chance in a million of gaining €5 million, or a guaranteed €5? The standard position on Less Wrong is that the answer depends solely on the difference between cash and utility. If your utility scales less-than-linearly with money, you are risk averse and should choose the last option; if it scales more-than-linearly, you are risk-loving and should choose the second one. If we replaced €’s with utils in the example above, then it would simply be irrational to prefer one option over the others.

\n

 

\n

There are mathematical proofs of that result, but there are also strong intuitive arguments for it. What’s the best way of seeing this? Imagine that X_1 and X_2 were two probability distributions, with means u_1 and u_2 and variances v_1 and v_2. If the two distributions are independent, then the sum X_1 + X_2 has mean u_1 + u_2 and variance v_1 + v_2.

\n

 

\n

Now if we multiply the returns of any distribution by a constant r, the mean scales by r and the variance scales by r². Consequently, if we have n probability distributions X_1, X_2, ..., X_n representing n equally expensive investments, the expected average return is (Σ_{i=1}^{n} u_i)/n, while the variance of this average is (Σ_{i=1}^{n} v_i)/n². If the v_i are bounded, then once we make n large enough, that variance must tend to zero. So if you have many investments, your averaged actual returns will be, with high probability, very close to your expected returns.
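
A quick numerical check of that scaling; the particular distribution below is an arbitrary stand-in chosen only to have mean 1 and variance 1:

```python
import random

def spread_of_average(n, trials=20_000):
    # Empirical mean and variance of the average of n independent investments,
    # each returning 0 or 2 with equal probability (mean 1, variance 1).
    averages = [sum(random.choice((0, 2)) for _ in range(n)) / n
                for _ in range(trials)]
    mean = sum(averages) / trials
    variance = sum((a - mean) ** 2 for a in averages) / trials
    return mean, variance

for n in (1, 10, 100):
    print(n, spread_of_average(n))   # the mean stays near 1; the variance falls like 1/n
```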

\n

 

\n

Thus there is no better strategy than to always follow expected utility. There is no such thing as sensible risk-aversion under these conditions, as there is no actual risk: you expect your returns to be your expected returns. Even if you yourself do not have enough investment opportunities to smooth out the uncertainty in this way, you could always aggregate your own money with others, through insurance or index funds, and achieve the same result. Buying a triple-rollover lottery ticket may be unwise; but being part of a consortium that buys up every ticket for a triple rollover lottery is just a dull, safe investment. If you have altruistic preferences, you can even aggregate results across the planet simply by encouraging more people to follow expected returns. So, case closed it seems; departing from expected returns is irrational.

\n

 

\n

But the devil’s detail is the condition ‘once we make n large enough’. Because there are risk distributions so skewed that no-one will ever be confronted with enough of them to reduce the variance to manageable levels. Extreme risks to humanity are an example; killer asteroids, rogue stars going supernova, unfriendly AI, nuclear war: even totalling all these risks together, throwing in a few more exotic ones, and generously adding every single other decision of our existence, we are nowhere near a neat probability distribution tightly bunched around its mean.

\n

 

\n

To consider an analogous situation, imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else. If there were trillions of such projects, then it wouldn’t matter what option you chose. But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical. In the same way, our decision when faced with a single planet-destroying event should not be constrained by the behaviour of a hypothetical being who confronts such events trillions of times over.
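
To put rough numbers on that comparison (a sketch assuming a population of six billion, which seems to be what the figures above have in mind):

```python
population = 6_000_000_000            # assumed population
jackpot = 12_000_000_001              # 'slightly over twelve billion utils'

total_1 = population * 1              # project 1: one util to everyone
total_2 = jackpot - (population - 1)  # project 2: jackpot to one person, -1 to the rest

print(total_1, total_2)   # 6000000000 6000000002: essentially the same expected total,
                          # but under project 2 almost every individual ends up at -1
```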

\n

 

\n

So where does this leave us? The independence axiom of the von Neumann-Morgenstern  utility formalism should be ditched, as it implies that large variance distributions are identical to sums of low variance distributions. This axiom should be replaced by a weaker version which reproduces expected utility in the limiting case of many distributions. Since there is no single rational path available, we need to fill the gap with other axioms – values – that reflect our genuine tolerance towards extreme risk. As when we first discovered probability distributions in childhood, we may need to pay attention to medians, modes, variances, skewness, kurtosis or the overall shapes of the distributions. Pascal's mugger and his whole family can be confronted head-on rather than hoping the probabilities neatly cancel out.

\n

 

\n

In these extreme cases, exclusively following the expected value is an arbitrary decision rather than a logical necessity.

\n

" } }, { "_id": "p66HNYv6an3eAcq5w", "title": "Better thinking through experiential games", "pageUrl": "https://www.lesswrong.com/posts/p66HNYv6an3eAcq5w/better-thinking-through-experiential-games", "postedAt": "2009-10-23T12:59:12.901Z", "baseScore": 39, "voteCount": 30, "commentCount": 36, "url": null, "contents": { "documentId": "p66HNYv6an3eAcq5w", "html": "

A few years ago I came across The Logic of Failure by Dietrich Doerner (previously mentioned on LW) which discusses cognitive failures in people dealing with \"complex situations\".

One section (p. 128) discusses a little simulation game, where participants are told to \"steer\" the temperature of a refrigerated storeroom with a defective thermostat, the exact equation governing how the thermostat setting affects actual temperature being unknown. Players control a dial with settings numbered 0 through 100, and can read the actual temperature off a thermometer display. The only complications in this task are a) a delay between changing the dial and the effects of the new setting, and b) the possibility of \"overshoot\".
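
Doerner doesn't reproduce the governing equation, so the sketch below invents one: a first-order lag driven by a delayed dial reading. It is only meant to show the kind of system the players face, not the actual model from the book.

```python
def storeroom(dial_settings, delay=3, gain=0.1, start=15.0):
    # Toy model: the temperature drifts toward a setpoint determined by the
    # dial reading from `delay` steps ago, so impatient adjustments overshoot.
    temps, temp = [], start
    for t in range(len(dial_settings)):
        dial = dial_settings[max(t - delay, 0)]
        setpoint = 20.0 - 0.2 * dial      # assumed mapping: higher dial, colder room
        temp += gain * (setpoint - temp)  # first-order lag toward the setpoint
        temps.append(round(temp, 2))
    return temps

# A player who reacts only to the latest reading keeps chasing the lag:
print(storeroom([50] * 10 + [80] * 10 + [30] * 10))
```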

I found the section's title chilling as well as fascinating: \"Twenty-eight is a good number.\" Doerner says this statement is typical of what participants faced with this type of situation tend to say. People don't just make ineffective use of the data they are presented with: they make up magical hypotheses, cling to superstitions or even call into question the very basis of the experiment, that there is a systematic link between thermostat setting and temperature.

Reading about it is one thing, and actually playing the game quite another, so I got a group of colleagues together and we gave it a try. We were all involved in one way or another with managing software projects, which are systems way more complex than the simple thermostat system; our interest was to confirm Doerner's hypothesis that humans are generally inept at even simple management tasks. By the reports of all involved it was one of the most effective learning experiences they'd had. Since then, I have had a particular interest in this type of situation, which I have learned is sometimes called \"experiential learning\".

As I conceive of it, experiential learning consists of setting up a problematic situation, in such a way that the students (\"players\") should rely on their own wits to explore the situation, invent ways of dealing with it (sometimes by incorporating conceptual tools provided by an instructor), and test their newfound insights against the original problem - or against real-world situations. My preferred setting for experiential learning is a small-group format, with individual or group interaction with the situation, and group discussion for the debrief.

In experiential learning there is no \"right\" or \"wrong\" lesson to be taken from a game or simulation. Everything that happens, not just the ostensible game but also the myriad meta-games that accompany it, is fodder for observation and analysis. Neither is realism a requirement for experiential learning; it is an understood convention of the genre that such games present an abstraction of some \"real world\" situation that necessarily deviates from it in many respects.

The important part of an experiential learning situation is the debrief. In the debrief, you initially refrain from drawing conclusions about the experiment. The first thing you want from the session is data. A good question to ask is \"What happened in this session that stood out for you?\"

Because you want to map the game back to the real world, perhaps in unforeseen ways, another thing you want from the session is analogies. A good question to ask is \"What did the experiences of this session remind you of?\"

For learning to take place there should also be some puzzles arising from either the observations made during the game, or their transposition to real life. For instance, your preexisting mental model - derived from real life interactions - would have led you to different predictions about the game.

The intended outcome of experiential learning is for students (and, sometimes, teacher) to construct an updated mental model that resolves these tensions and can be transposed back to real world situations and applied there. A constructivist approach doesn't expect students to draw exactly the same conclusions as the teacher, even when the teacher makes available the ingredients out of which students build their updated model. Knowledge obtained in that way is more truly a part of you - it sticks better than anything the teacher could have merely told you.

An experiential learning game focusing on the basics of Bayesian reasoning might be a valuable design goal for this community - and a game I'd definitely have an interest in playing. Such is my \"hidden agenda\" in publishing this post...

Any takers ?

" } }, { "_id": "xq89jpDo6NZexWQLr", "title": "The continued misuse of the Prisoner's Dilemma", "pageUrl": "https://www.lesswrong.com/posts/xq89jpDo6NZexWQLr/the-continued-misuse-of-the-prisoner-s-dilemma", "postedAt": "2009-10-23T03:48:08.860Z", "baseScore": 34, "voteCount": 41, "commentCount": 70, "url": null, "contents": { "documentId": "xq89jpDo6NZexWQLr", "html": "

Related to: The True Prisoner's Dilemma, Newcomb's Problem and Regret of Rationality

\n

In The True Prisoner's Dilemma, Eliezer Yudkowsky pointed out a critical problem with the way the Prisoner's Dilemma is taught: the distinction between utility and avoided-jail-time is not made clear.  The payoff matrix is supposed to represent the former, even as its numerical values happen to coincidentally match the latter.  And worse, people don't naturally assign utility as per the standard payoff matrix: their compassion for the friend in the \"accomplice\" role means they wouldn't feel quite so good about a \"successful\" backstabbing, nor quite so bad about being backstabbed.  (\"Hey, at least I didn't rat out a friend.\")

\n

For that reason, you rarely encounter a true Prisoner's Dilemma, even an iterated one.  The above complications prevent real-world payoff matrices from working out that way.

\n

Which brings us to another unfortunate example of this misunderstanding being taught.

\n

Recently, on the New York Times's \"Freakonomics\" blog, Professor Daniel Hamermesh gleefully recounts a recent experiment he performed (which he says he does often) on students in his intro economics course, which is basically the same as the Prisoner's Dilemma (henceforth, PD).

\n

Now, before going further, let me make clear that Hamermesh is no small player.  Just take a look at all the accolades and accomplishments listed on his Wikipedia page or the CV on his university page.  This, then, is the teaching of a professor at the top of his field, and it is only with hesitation that I proceed to allege that he's Doing It Wrong.

\n

Hamermesh's variant of the PD is to pick eight students and auction off a $20 bill to them, with the money split evenly across the winners if there are multiple highest bids.  Here, cooperation corresponds to adhering to a conspiracy where everyone agrees to make the same low bid and thus a big profit.  Defecting corresponds to breaking the agreement and making a slightly higher bid so you can take everything for yourself.  If the others continue to cooperate, their bid is lower and they get nothing.
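
The classroom arithmetic is easy to check; a minimal sketch of the auction's payoffs (my own formalization of the setup, ignoring the reputational consequences discussed below):

```python
def payoffs(bids, prize=20.0):
    # Highest bidder(s) win, split the prize evenly, and pay their own bids.
    top = max(bids)
    winners = [i for i, bid in enumerate(bids) if bid == top]
    share = prize / len(winners)
    return [share - bid if i in winners else 0.0 for i, bid in enumerate(bids)]

print(payoffs([0.01] * 8))            # everyone colludes: 2.49 each
print(payoffs([0.01] * 7 + [0.05]))   # one defector bids 0.05: 19.95 for her, 0 for the rest
```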

\n

Here is how Hamermesh describes the result (italics mine, bold in the original):

\n
\n

Today seven of the students stuck to the collusive agreement, and each bid $.01. They figured they would split the $20 eight ways, netting $2.49 each. Ashley, bless her heart, broke the agreement, bid $0.05, and collected $19.95. The other 7 students booed her, but I got the class to join me in applauding her, as she was the only one who understood the game.

\n
\n

The game?  Which game?  There's more than one game going on here!  There's the neat little well-defined, artificial setup that Professor Hamermesh has laid out.  On top of that, there's the game we better know as \"life\", in which the later consequences of superficially PD-like scenarios cause us to assign different utilities to successful backstabbing (defecting when others cooperate).  There's also the game of becoming the high-status professor's Wunderkind.  And while Ashley (whose name he bolded for some reason) may have won the narrow, artificial game, she also told everyone there that, \"Trusting me isn't such a good idea.\"  In other words, the kind of consequence we normally worry about in our everyday lives.

\n

For this reason, I left the following comment:

\n
\n

No, she learned how to game a very narrow instance of that type of scenario, and got lucky that someone else didn’t bid $0.06.

\n

Try that kind of thing in real life, and you’ll get the social equivalent of a horse’s head in your bed.

\n

Incidentally, how many friends did Ashley make out of this event?

\n
\n

I probably came off as more \"anticapitalist\" or \"collectivist\" than I really am, but the point is important: betraying your partners has long-term consequences which aren't apparent when you only look at the narrow version of this game.

\n

Hamermesh's point was actually to show the difficulty of collusion in a free market.  However, to the extent that markets can pose barriers to collusion, it's certainly not because going back on your word will consistently work out in just the right way as to divert a huge amount of utility to yourself -- which happens to be the very reason Ashley \"succeeded\" (with the professor's applause) in this scenario.  Rather, it's because the incentives for making such agreements fundamentally change; you are still better off maintaining a good reputation.

\n

Ultimately, the students learned the wrong lesson from an unrealistic game.

" } }, { "_id": "peutSP3n5QPkhyhtj", "title": "Generous people cross the street before the beggar", "pageUrl": "https://www.lesswrong.com/posts/peutSP3n5QPkhyhtj/generous-people-cross-the-street-before-the-beggar", "postedAt": "2009-10-22T17:22:54.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "peutSP3n5QPkhyhtj", "html": "

Robert Wiblin points to a study showing that the most generous people are the most keen to avoid situations where they will be generous, even though the people they would have helped will go without.

\n

We conduct an experiment to demonstrate the importance of sorting in the context of social preferences. When individuals are constrained to play a dictator game, 74% of the subjects share. But when subjects are allowed to avoid the situation altogether, less than one third share. This reversal of proportions illustrates that the influence of sorting limits the generalizability of experimental findings that do not allow sorting. Moreover, institutions designed to entice pro-social behavior may induce adverse selection. We find that increased payoffs prevent foremost those subjects from opting out who share the least initially. Thus the impact of social preferences remains much lower than in a mandatory dictator game, even if sharing is subsidized by higher payoffs…

\n

A big example of generosity-inducing institutions causing adverse selection is market transactions with poor people.

\n

For some reason we hold those who trade with another party responsible for that party’s welfare. We blame a company for not providing its workers with more, but don’t blame other companies for lack of charity to the same workers. This means that you can avoid responsibility to be generous by not trading with poor people.

\n

Many consumers feel that if they are going to trade with poor people they should buy fair trade or thoroughly research the supplier’s niceness. However, they don’t have the money or time for that, so they instead just avoid buying from poor people. Only the less ethical remain to contribute to the purses of the poor.

\n

Probably the kindest girl in my high school said to me once that she didn’t want a job where she would get rich because there are so many poor people in the world. I said that she should be rich and give the money to the poor people then. Nobody was wowed by this idea. I suspect something similar happens often with people making business and employment decisions. Those who have qualms about a line of business, such as trade with poor people, tend not to go into it, but opt for something already guilt-free, while the less concerned do the jobs where compassion might help.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "K5L2DX5SBmGE23NoC", "title": "Rationality Quotes: October 2009", "pageUrl": "https://www.lesswrong.com/posts/K5L2DX5SBmGE23NoC/rationality-quotes-october-2009", "postedAt": "2009-10-22T16:06:42.037Z", "baseScore": 10, "voteCount": 11, "commentCount": 284, "url": null, "contents": { "documentId": "K5L2DX5SBmGE23NoC", "html": "

A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).

\n" } }, { "_id": "jjGBv4iXpcprDc9DC", "title": "Biking Beyond Madness (link)", "pageUrl": "https://www.lesswrong.com/posts/jjGBv4iXpcprDc9DC/biking-beyond-madness-link", "postedAt": "2009-10-22T15:16:39.346Z", "baseScore": 25, "voteCount": 23, "commentCount": 6, "url": null, "contents": { "documentId": "jjGBv4iXpcprDc9DC", "html": "
\n

‘‘During race, I am going crazy, definitely,’’ he says, smiling in bemused despair. ‘‘I cannot explain why is that, but it is true.’’

The craziness is methodical, however, and Robic and his crew know its pattern by heart. Around Day 2 of a typical weeklong race, his speech goes staccato. By Day 3, he is belligerent and sometimes paranoid. His short-term memory vanishes, and he weeps uncontrollably. The last days are marked by hallucinations: bears, wolves and aliens prowl the roadside; asphalt cracks rearrange themselves into coded messages. Occasionally, Robic leaps from his bike to square off with shadowy figures that turn out to be mailboxes. In a 2004 race, he turned to see himself pursued by a howling band of black-bearded men on horseback.

‘‘Mujahedeen, shooting at me,’’ he explains. ‘‘So I ride faster.’’

\n
\n

This 2006 New York Times story is about Jure Robic, a Slovenian ultra-long-distance cyclist who goes seriously insane when he pushes himself far enough during races. At the point where he feels like dying of fatigue, he still has a major portion (estimated at 50% by his team) of his strength left. So he hands over control to his team and, with their help, pushes himself into the realm of insanity:

\n
\n

 For Robic, his support crew serves as a second brain, consisting of a well-drilled cadre of a half-dozen fellow Slovene soldiers. It resembles other crews in that it feeds, hydrates, guides and motivates — but with an important distinction. The second brain, not Robic’s, is in charge.

\n

 ‘‘By the third day, we are Jure’s software,’’ says Lt. Miran Stanovnik, Robic’s crew chief. ‘‘He is the hardware, going down the road.’’

\n
\n

 His success isn't because of exceptional physiology or training:

\n
\n

 On rare occasions when he permits himself to be tested in a laboratory, his ability to produce power and transport oxygen ranks on a par with those of many other ultra-endurance athletes. He wins for the most fundamental of reasons: he refuses to stop.

\n
\n

The whole thing is an intriguing example of making an extraordinary, desperate effort: knowing that even when his body and brain scream at him to stop, he can go further, and then actually doing so. Pushing oneself into insanity isn't the sensible thing to do, but for him it is the path that wins.

" } }, { "_id": "sRkDWMC9gpQrPEPab", "title": "Trust in the adoration of strangers", "pageUrl": "https://www.lesswrong.com/posts/sRkDWMC9gpQrPEPab/trust-in-the-adoration-of-strangers", "postedAt": "2009-10-21T16:33:04.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "sRkDWMC9gpQrPEPab", "html": "

Attractive people are more trusting when they think they can be seen:

\n

Here, we tested the effects of cues of observation on trusting behavior in a two-player Trust game and the extent to which these effects are qualified by participants’ own attractiveness. Although explicit cues of being observed (i.e., when participants were informed that the other player would see their face) tended to increase trusting behavior, this effect was qualified by the participants’ other-rated attractiveness (estimated from third-party ratings of face photographs). Participants’ own physical attractiveness was positively correlated with the extent to which they trusted others more when they believed they could be seen than when they believed they could not be seen. This interaction between cues of observation and own attractiveness suggests context dependence of trusting behavior that is sensitive to whether and how others react to one’s physical appearance.

\n

Probably rightly so. It’s interesting that people do not get used to the average level of good treatment expected for their attractiveness, but are sensitive to the difference in treatment when visible and when not. Is it inbuilt that we should expect some difference there, or is it just very noticeable?

\n

I wonder whether widespread beauty enhancement increases overall trust in society, and enhances productivity accordingly, or whether favorable treatment and returned trust both adapt to relative position. Does advertising suggesting that the world is chock full of model material decrease trust between real people?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "nW2EGC9XmsjEGsn9r", "title": "Why the beliefs/values dichotomy?", "pageUrl": "https://www.lesswrong.com/posts/nW2EGC9XmsjEGsn9r/why-the-beliefs-values-dichotomy", "postedAt": "2009-10-20T16:35:52.427Z", "baseScore": 29, "voteCount": 25, "commentCount": 156, "url": null, "contents": { "documentId": "nW2EGC9XmsjEGsn9r", "html": "

\n

I'd like to suggest that the fact that human preferences can be decomposed into beliefs and values is one that deserves greater scrutiny and explanation. It seems intuitively obvious to us that rational preferences must decompose like that (even if not exactly into a probability distribution and a utility function), but it’s less obvious why.

\n

The importance of this question comes from our tendency to see beliefs as being more objective than values. We think that beliefs, but not values, can be right or wrong, or at least that the notion of right and wrong applies to a greater degree to beliefs than to values. One dramatic illustration of this is in Eliezer Yudkowsky’s proposal of Coherent Extrapolated Volition, where an AI extrapolates the preferences of an ideal humanity, in part by replacing their “wrong” beliefs with “right” ones. On the other hand, the AI treats their values with much more respect.

\n

Since beliefs and values seem to correspond roughly to the probability distribution and the utility function in expected utility theory, and expected utility theory is convenient to work with due to its mathematical simplicity and the fact that it’s been the subject of extensive studies, it seems useful as a first step to transform the question into “why can human decision making be approximated as expected utility maximization?”
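For reference (my notation, not the post’s): expected utility maximization summarizes a preference over actions by a probability distribution P over worlds (the “beliefs”) and a utility function U over worlds (the “values”), with the preferred action being

\[ a^{*} \;=\; \arg\max_{a} \sum_{w} P(w \mid a)\, U(w) . \]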

\n

I can see at least two parts to this question:

1. Why this mathematical structure?
2. Why this representation?

Not knowing how to answer these questions yet, I’ll just write a bit more about why I find them puzzling.

\n

Why this mathematical structure?

\n

It’s well known that expected utility maximization can be derived from a number of different sets of assumptions (the so-called axioms of rationality), but they all include the assumption of Independence in some form. Informally, Independence says that what you prefer to happen in one possible world doesn’t depend on what you think happens in other possible worlds. In other words, if you prefer A&C to B&C, then you must prefer A&D to B&D, where A and B are what happens in one possible world, and C and D are what happens in another.
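In the von Neumann-Morgenstern formulation (my restatement in terms of lotteries, rather than the possible-worlds phrasing above), Independence is the requirement that

\[ p \succ q \;\Longrightarrow\; \alpha p + (1-\alpha) r \;\succ\; \alpha q + (1-\alpha) r \qquad \text{for every lottery } r \text{ and every } \alpha \in (0,1], \]

i.e. mixing in a common third possibility r cannot reverse the preference between p and q.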

\n

This assumption is central to establishing the mathematical structure of expected utility maximization, where you value each possible world separately using the utility function, then take their weighted average. If your preferences were such that A&C > B&C but A&D < B&D, then you wouldn’t be able to do this.

\n

It seems clear that our preferences do satisfy Independence, at least approximately. But why? (In this post I exclude indexical uncertainty from the discussion, because in that case I think Independence definitely doesn't apply.) One argument that Eliezer has made (in a somewhat different context) is that if our preferences didn’t satisfy Independence, then we would become money pumps. But that argument seems to assume agents who violate Independence, but try to use expected utility maximization anyway, in which case it wouldn’t be surprising that they behave inconsistently. In general, I think being a money pump requires having circular (i.e., intransitive) preferences, and it's quite possible to have transitive preferences that don't satisfy Independence (which is why Transitivity and Independence are listed as separate axioms in the axioms of rationality).

\n

Why this representation?

\n

Vladimir Nesov has pointed out that if a set of preferences can be represented by a probability function and a utility function, then it can also be represented by two probability functions. And furthermore we can “mix” these two probability functions together so that it’s no longer clear which one can be considered “beliefs” and which one “values”. So why do we have the particular representation of preferences that we do?

\n

Is it possible that the dichotomy between beliefs and values is just an accidental byproduct of our evolution, perhaps a consequence of the specific environment that we’re adapted to, instead of a common feature of all rational minds? Unlike the case with anticipation, I don’t claim that this is true or even likely here, but it seems to me that we don’t understand things well enough yet to say that it’s definitely false and why that's so.

" } }, { "_id": "LeXfq8NsaHXzTDNcu", "title": "Lore Sjöberg's Life-Hacking FAQK", "pageUrl": "https://www.lesswrong.com/posts/LeXfq8NsaHXzTDNcu/lore-sjoeberg-s-life-hacking-faqk", "postedAt": "2009-10-20T16:10:38.877Z", "baseScore": 1, "voteCount": 17, "commentCount": 16, "url": null, "contents": { "documentId": "LeXfq8NsaHXzTDNcu", "html": "

Lore Sjöberg's Life-hacking FAQK

\n

Pretty self-explanatory. Also available as a podcast.

" } }, { "_id": "gTNpGyMENdwgMrXio", "title": "Everyone else prefers laws to values", "pageUrl": "https://www.lesswrong.com/posts/gTNpGyMENdwgMrXio/everyone-else-prefers-laws-to-values", "postedAt": "2009-10-20T11:16:47.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "gTNpGyMENdwgMrXio", "html": "
\"How

How do you tell what a superhuman AI's values are? ( picture: ittybittiesforyou - see bottom)

\n

Robin Hanson says that it is more important to have laws than shared values. I agree with him when ‘shared values’ means shared indexical values that remain about different people: e.g. if you and I share a high value of orgasms, you value you having orgasms and I value me having orgasms. Unless we are dating, it’s all the same to me if you prefer croquet to orgasms. I think the singularitarians aren’t talking about this though. They want to share values in such a way that the AI wants them to have orgasms. In principle this would be far better than having different values and trading. Compare gains from trading with the world economy to gains from the world economy’s most heartfelt wish being to please you. However, I think that laws will get far more attention than values overall in arranging for an agreeable robot transition, and rightly so. Let me explain, then show you how this is similar to some more familiar situations.

\n
Greater intelligences are unpredictable
\n

If you know exactly what a creature will do in any given situation before it does it, you are at least as smart as it (if we don’t include its physical power as intelligence). Greater intelligences are inherently unpredictable. If you know what the intelligence is trying to do, then you know what kind of outcome to expect, but guessing how it will get there is harder. This should be less so for lesser intelligences, and more so for more different intelligences. I will have less trouble guessing what a ten year old will do in chess against me than a grand master, though I can guess the outcome in both cases. If I play someone with a significantly different way of thinking about the game they may also be hard to guess.

\n
Unpredictability is dangerous
\n

This unpredictability is a big part of the fear of a superhuman AI. If you don’t know what path an intelligence will take to the goal you set it, you don’t know whether it will affect other things that you care about. This problem is most vividly illustrated by the much discussed case where the AI in question is suddenly very many orders of magnitude smarter than a human. Imagine we initially gave it only a subset of our values, such as our yearning to figure out whether P = NP, and we assume that it won’t influence anything outside its box. It might determine that the easiest way to do this is to contact outside help, build powerful weapons, take more resources by force, and put them toward more computing power. Because we weren’t expecting it to consider this option, we haven’t told it about our other values that are relevant to this strategy, such as the popular penchant for being alive.

\n

I don’t find this type of scenario likely, but others do, and the problem could arise at a lesser scale with weaker AI. It’s a bit like the problem that every genie owner in fiction has faced. There are two solutions. One is to inform the AI about all of human values, so it doesn’t matter how wide its influence is. The other is to restrict its actions. SIAI’s interest seems to be in giving the AI human values (whatever that means), then inevitably surrendering control to it. If the AI will likely be so much smarter than humans that it will inevitably control everything forever almost immediately, I agree that values are probably the thing to focus on. But consider the case where AI improves fast but by increments, and no single agent becomes more powerful than all of human society for a long time.

\n
Unpredictability also makes it hard to use values to protect from unpredictability
\n

When trying to avoid the dangers of unpredictability, the same unpredictability causes another problem for using values as a means of control. If you don’t know what an entity will do with given values, it is hard to assess whether it actually has those values. It is much easier to assess whether it is following simpler rules. This seems likely to be the basis for human love of deontological ethics and laws. Utilitarians may get better results in principle, but from the perspective of anyone else it’s not obvious whether they are pushing you in front of a train for the greater good or specifically for the personal bad. You would have to do all the calculations yourself and trust their information. You also can’t rely on them to behave in any particular way so that you can plan around them, unless you make deals with them, which is basically paying them to follow rules, so is more evidence for my point.

\n
‘We’ cannot make the AI’s values safe.
\n

I expect the first of these things to be a particular problem with greater than human intelligences. It might be better in principle if an AI follows your values, but you have little way to tell whether it is. Nearly everyone must trust the judgement, goodness and competency of whoever created a given AI, be it a person or another AI. I suspect this gets overlooked somewhat because safety is thought of in terms of what to do when *we* are building the AI. This is the same problem people often have thinking about government. They underestimate the usefulness of transparency there because they think of the government as ‘we’. ‘We should redistribute wealth’ may seem unproblematic, whereas ‘I should allow an organization I barely know anything about to take my money on the vague understanding that they will do something good with it’ does not. For people to trust AIs the AIs should have simple enough promised behavior that people using them can verify that they are likely doing what they are meant to.

\n

This problem gets worse the less predictable the agents are to you. Humans seem to naturally find rules more important for more powerful people and consequences more important for less powerful people. Our world also contains some greater than human intelligences already: organizations. They have similar problems to powerful AI. We ask them to do something like ‘cheaply make red paint’ and often eventually realize their clever ways to do this harm other values, such as our regard for clean water. The organization doesn’t care much about this because we’ve only paid it to follow one of our values while letting it go to work on bits of the world where we have other values. Organizations claim to have values, but who can tell if they follow them?

\n

To control organizations we restrict them with laws. It’s hard enough to figure out whether a given company did or didn’t give proper toilet breaks to its employees. It’s virtually impossible to work out whether their decisions on toilet breaks are as close to optimal according some popularly agreed set of values.

\n

It may seem this is because values are just harder to influence, but this is not obvious. Entities follow rules because of the incentives in place rather than because they are naturally inclined to respect simple constraints. We could similarly incentivise organizations to be utilitarian if we wanted. We just couldn’t assess whether they were doing it. Here we find rules more useful and values less for these greater than human intelligences than we do for humans.

\n

We judge and trust friends and associates according to what we perceive to be their values. We drop a romantic partner because they don’t seem to love us enough even if they have fulfilled their romantic duties. But most of us will not be put off using a product because we think the company doesn’t have the right attitude, though we support harsh legal punishments for breaking rules. Entities just a bit superhuman are too hard to control with values.

\n

You might point out here that values are not usually programmed specifically in organizations, whereas in AI they are. However this is not a huge difference from the perspective of everyone who didn’t program the AI. To the programmer, giving an AI all of human values may be the best method of avoiding assault on them. So if the first AI is tremendously powerful, so nobody but the programmer gets a look in, values may matter most. If the rest of humanity still has a say, as I think they will, rules will be more important.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "TxFhRaZjcXZXtbcmq", "title": "Shortness is now a treatable condition", "pageUrl": "https://www.lesswrong.com/posts/TxFhRaZjcXZXtbcmq/shortness-is-now-a-treatable-condition", "postedAt": "2009-10-20T01:13:07.428Z", "baseScore": 10, "voteCount": 15, "commentCount": 111, "url": null, "contents": { "documentId": "TxFhRaZjcXZXtbcmq", "html": "

There was some talk here about height taxes, but there's a better solution - redefine shortness as a treatable condition and use HGH to cure it. They even got the FDA on board with that, at least for the shortest 1.2% of people.

\n

Unsatisfactory sexual performance became a treatable condition with Viagra. Depression and hyperactivity became treatable conditions with SSRIs. Being ugly is already almost considered a treatable condition, at least one can get that impression from cosmetic surgery ads. Being overweight is universally considered an illness, even though we don't have too many effective treatment options (surgery is unpopular, and effective drugs like fen-phen and ECA are not officially prescribed any more). If we ever figure out how to increase IQ, you can be certain low IQ will be considered a treatable condition too. Almost everything undesirable gets redefined as an illness as soon as an effective way to fix it is developed.

\n

I welcome these changes. Yes, redefining large parts of normal human variability as illness is a lie, but if that's what society needs to work around its taboos against human enhancement, so be it.

" } }, { "_id": "G8RQrZX7rWyNkftPB", "title": "What do trust and sharing do to reputations?", "pageUrl": "https://www.lesswrong.com/posts/G8RQrZX7rWyNkftPB/what-do-trust-and-sharing-do-to-reputations", "postedAt": "2009-10-19T17:30:12.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "G8RQrZX7rWyNkftPB", "html": "

Bryan Caplan asked, ‘when doesn’t reputation work well?’

\n

He answers,

\n

To me, venereal disease is the most striking response.  Unlike other disease, V.D. is simple to prevent: Only have sex with people who credibly show that they aren’t infected.  How hard is that?  But according to Wikipedia, AIDS alone kills over 2 million people per year.

\n

He suggests this is caused by a demand problem (people are strangely willing to sleep with someone without evidence of their not having VDs) and a supply problem (people who have good reputations can’t take over the whole market), and asks whether there are other areas where reputation fails.

\n

Making good decisions about small risks far in the future while horny is probably a rare skill, but not the only reason for the demand problem I think. Asking someone to credibly show that they aren’t infected credibly shows that you don’t trust them to tell you on their own. Trust is a handy thing to have the appearance of in relationships, but unfortunately requires behaving trustingly. A survey of  Texan girls shows 28% of them think they sometimes or never ‘have the right to’ ask their partner if he has been tested for STDs  (all the questions in the survey are  in terms of ‘rights’ to act certain ways, and I’m not sure what that means, but I guess it implies that asking would detriment their partner unacceptably).

\n

Does this generalize to suggest other areas reputation doesn’t work that well? I think so. Knowing someone’s reputation allows you to trust them more. This means if you want to demonstrate that you trust someone already, something you should not do is visibly seek their reputation. Reputation should work less well then when demonstrating trust is useful and seeking information about reputation is visible.

\n

When else is showing trust useful? Any time in relationships. Sure enough, I could assess a new boyfriend much better if I rung all his exes and got appraisals. But asking for their numbers is awkward. It would make him think I don’t trust his account of himself. Which would usually be entirely sensible of course. Out of earshot we might passionately use gossip and status cues to keep track of reputations, but if you invited your partner to seek reviews of your past behavior from others (as businesses do happily) it would be an implicit accusation of distrust.

\n

Friends are another group to whom showing trust is important. Again, once you are friends with someone, reputation doesn’t work as well as it can in other situations, because seeking it out or relying on it suggests distrust, or that you suspect the friendship isn’t enough to ensure the other person behaves well. If your friend asks to borrow a book, for instance, and you have no previous data on whether they return things, you don’t usually ask them or other friends nearby about their track record. You probably lose the book, but it’s worth it. With friends and lovers, reputation is important for who you get involved with, but once you are involved the need to show trust hinders assessment on smaller issues.

\n

Another area where reputation can work poorly is when it is shared as a disorganized commons. Stereotypes can be thought of as reputations attached to identities used by more than one person. Where stereotypes are triggered by real statistical differences between populations, there is often an externality between those sharing a given reputation. Every time my sister elopes with a butcher’s son, or another woman does well on a math test, or a man from my social class goes to jail, it is not only their reputation which is changed, but incrementally mine too. This might provide useful information about me for onlookers, but the lack of feedback to the person triggering the change means they have no reason to adjust their behavior to take into account the effects on others. For instance, had I much concern for my younger brothers’ treatment at high school, I might have behaved differently when going through a couple of years before them. This should be more of a problem if groups of people become relatively more similar; for instance, if many copies exist of one upload, they will have bigger interests in the behavior of their reputation sharers. More generally, our keen interest in constructing expectations of others from reputations is presumably a partial cause of whatever problems stereotypes entail.

\n

Reputations can also work well when shared of course. In fact sharing is the only way that reputation does work, though often it is sharing of an identity by many instants of a person, which we do not usually think of as sharing. One person usually does take into account the wellbeing of their future moments to some extent at least. That so many people voluntarily affiliate with groups that lead to others having certain expectations of them is evidence that sharing between people can be great for those involved too. Companies for instance dress their employees the same and encourage shared style and behaviour, in the hope that their brand will be trusted. Because the members of the brand are rewarded or punished according to their effect on the whole company, not just themselves, the externality is removed and there are big gains to be made.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "DD5bZCWDGaEogGuTM", "title": "Near and far skills", "pageUrl": "https://www.lesswrong.com/posts/DD5bZCWDGaEogGuTM/near-and-far-skills", "postedAt": "2009-10-19T13:16:44.043Z", "baseScore": 23, "voteCount": 30, "commentCount": 34, "url": null, "contents": { "documentId": "DD5bZCWDGaEogGuTM", "html": "

Robin Hanson has repeatedly pointed out the difference between near and far modes of thinking, and Alicorn has repeatedly pointed out the difference between procedural and propositional knowledge. Just occurred to me that it's pretty much the same difference. I will now proceed to play some obvious riffs on the theme.

\n

Do you ever feel yourself to be an expert on some topic just from reading about it a lot? Every young enthusiastic programmer just entering the workforce feels like that. (I certainly did.) Every wannabe entrepreneur who hangs out on Hacker News and has read enough Paul Graham essays feels they can take on the world by just applying those valuable insights. (I certainly did.) It's soooo fun to feel knowledgeable, maybe even project it outward by giving Internet advice to newbies. The public relations department of my brain doesn't seem to care that I have no actual experience: reading stuff is quite enough to change my self-image.

\n

The problem is, excessively liking propositional knowledge over procedural is a bias that harms us every day. Though some information is directly useful, most of it is worthless. A couple days ago I had to give advice to a classmate of mine who wants to start his own \"thing\" but isn't sure. See, he has this theory that one should accumulate propositional knowledge until one reaches critical mass, at which point the successful venture happens by itself. Being the wise and experienced mentor that I am (hah... on my second \"thing\", without much success), I told him outright that his theory was bullshit. Propositional knowledge doesn't spontaneously turn into procedural.

\n

(Digression: come to think of it, I'm not even sure why we need the kinds of propositional knowledge that we tend to accumulate. It reeks of a superstimulus. I know more about programming than I'll ever need for work or play, but don't remember the birthdays of all my acquaintances, which would obviously be more useful. Memorizing birthdays just isn't as exciting as reading about comonads or whatever.)

\n

At this point I sincerely wish I had a recipe for overcoming this bias. Like pjeby, he always has a recipe. Well, I don't. Maybe perceiving the bias will turn out to be enough; maybe some kind of social software thing can help cure it on a mass scale, like meetup.com is trying to cure Bowling Alone; or maybe each of us will have to apply force, the oldskool way. It's too early to say.

\n

Thank me for never mentioning the example of riding a bicycle.

" } }, { "_id": "queYtEHA9mPLeehSK", "title": "Applying Double Standards to ‘‘Divisive’’ Ideas", "pageUrl": "https://www.lesswrong.com/posts/queYtEHA9mPLeehSK/applying-double-standards-to-divisive-ideas", "postedAt": "2009-10-19T12:36:30.730Z", "baseScore": 4, "voteCount": 6, "commentCount": 5, "url": null, "contents": { "documentId": "queYtEHA9mPLeehSK", "html": "

This is a commentary by Linda Gottfredson on a paper by Hunt and Carlson about a paper by Richard Nisbett regarding studies done by Arthur Jensen. It's ultimately about race and intelligence, but it seemed meta enough to link to here.

\n

Warning: PDF

\n

Applying Double Standards to ‘‘Divisive’’ Ideas

" } }, { "_id": "8f7sXMEiKRWQdBRna", "title": "Localized theories and conditional complexity", "pageUrl": "https://www.lesswrong.com/posts/8f7sXMEiKRWQdBRna/localized-theories-and-conditional-complexity", "postedAt": "2009-10-19T07:29:34.468Z", "baseScore": 11, "voteCount": 10, "commentCount": 5, "url": null, "contents": { "documentId": "8f7sXMEiKRWQdBRna", "html": "

Suppose I hand you a series of data points without providing the context. Consider the theory v = a*t for t<<1, v = b for t>>1. Without knowing anything a priori about the shapes of the curves, one must have enough data to make sure that v follows the right lines at the two limits since there is complexity that must be justified.  Here we have two one-parameter curves, so we need at least two data points to pick the right slope and offset, as well as at least a couple more to make sure it follows the right shape.

\n

This is what I’ll call a completely local theory – see data, fit curve.  Dealing with problems at this level does not leave much room for human bias or error, but it also does not allow for improvement by including  background knowledge.

\n

\n

Now consider the case where v = velocity of a rocket sled, a = thrust/mass of sled, and b = sqrt(thrust/(1/2*rho*Cd*Af)). If you have a theory explaining rockets and aerodynamics, the equations v = a*t and v = b are just the limiting cases for small and large t. In this case, you only need two data points to find a and b since you already know the shape (over the full range) from solving the differential equations. If you understand the aerodynamics well enough, and know the shape and mass of the sled, you don’t even need to do the experiment! The “conditional complexity” is 0 since it is directly predicted from what we already know. This is the magic of keeping track of the dependencies between theories.
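Spelled out (a sketch under the usual constant-thrust, quadratic-drag assumptions, with T for thrust, m for mass, and rho, Cd, Af as above), the sled obeys

\[ m \frac{dv}{dt} \;=\; T - \tfrac{1}{2}\rho C_d A_f\, v^{2} . \]

For small t drag is negligible, so \( v \approx (T/m)\, t = a t \); at steady state \( dv/dt = 0 \), so \( v = \sqrt{T / \bigl(\tfrac{1}{2}\rho C_d A_f\bigr)} = b \). Both local curves fall out of a single global model.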

\n

We can take this a step further and derive a theory of aerodynamics from a theory of air molecules- and so on until we have one massively connected TOE.

\n

Now step back to the beginning. If all I tell you is that when t = 1e-5, v = 2e-5 and when t = 1e-3, v = 2e-3, you’re going to come up with the equation v = 2*t. If someone, with no further information, suggested that v = 2*t was only a small t approximation, and that for large t, v = 5.32, you’d think that he’s nutso (and rightfully so), with all that unnecessary complexity.
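Here is a minimal sketch of the “see data, fit curve” move, using only the two points just given (the code and variable names are mine and purely illustrative):

import numpy as np

# The two data points quoted above, as (t, v) pairs.
t = np.array([1e-5, 1e-3])
v = np.array([2e-5, 2e-3])

# Completely local theory: fit a straight line with no background
# knowledge about rockets, drag, or large-t behavior.
slope, offset = np.polyfit(t, v, 1)
print(slope, offset)  # roughly 2.0 and 0.0, i.e. v = 2*t

Nothing in this fit can warn you that the curve levels off at large t; that information has to come from outside the two data points.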

\n

As a wannabe Bayesian, you need to update on all evidence, so we're almost never trying to fit data without knowing what it means. We prefer globally simple theories, not theories where each local section is simple but the sections don't fit together.

\n

I suspect that one of the main reasons people fail to understand/accept Occam's razor comes from trying to apply it to theories locally and then noticing that by importing information from a more general theory, they can do better. Of course you do better with more information than you do with a wisely chosen ignorance prior. You need to apply Occam's razor to the whole bundle. Since all of the background theory is the same, you can reduce this to the entropy of the local theories that is left after conditioning on the background theory.
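In minimum-description-length terms (my framing of the post’s “conditional complexity”), the quantity Occam’s razor should score for local theories H_1, …, H_n sharing a background theory B is roughly

\[ K(B) \;+\; \sum_{i} K(H_i \mid B), \]

so a theory that looks complex on its own can still be the globally simplest choice if it is nearly forced by the background theory, i.e. if \( K(H_i \mid B) \) is small.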

\n

When Eliezer says that he doesn't expect humans to have simple utility functions, it's not because it is a magical case where Occam's razor doesn't apply. It's that it would take more bits overall to explain evolution creating a simple utility function than it would to explain evolution creating a particular locally complex utility function. This is very different from concluding that Occam's razor doesn't fit real life. If Occam's razor seems to be giving you bad answers, you're doing it wrong.

\n

What does this imply for the future? Those with poor memories and/or a poor understanding of history will answer “much like the present” based on a single point and the locally simplest fit. You can find people one step up from that who notice improvements over time and fit them to a line, which again isn't a bad guess if that's all you know (you almost always know more; actually using the rest of your information efficiently is the trick). Another step ahead and you get people who hypothesize exponential growth based on their understanding of improvements feeding improvements, or at least a wider-spanning dataset. This is where you’ll find Ray Kurzweil and the 'accelerating improvement singularity' folk. The last step I know of is where you'll find Eliezer Yudkowsky and other ‘hard takeoff’ folks. This is where you say “yes, I know my theory is locally more complex; I know that it isn’t obvious from looking at the curve, quite the opposite. However, my theory is less complex after conditioning on the rest of my knowledge that doesn't show up on this plot, and for that reason, I believe it to be true”.

\n

This might sound like saying “Emeralds are grue, not green”, but while “X until arbitrary date, then Y” fares worse when applying Occam’s razor locally, if our theory of color indicated a special ‘turning point’, then we would have to conclude “Emeralds are grue, not green”, and we would conclude this because of Occam's razor, not in spite of it.

\n

I chose this example because it is important and well known at LW, but not for lack of other examples. In my experience, this is a very common mistake, even for otherwise intelligent individuals. This makes getting it right a quite fun rationality ‘superpower’.

" } }, { "_id": "Mss55L2mFd9MK7Kvp", "title": "Why will we be extra wrong about AI values?", "pageUrl": "https://www.lesswrong.com/posts/Mss55L2mFd9MK7Kvp/why-will-we-be-extra-wrong-about-ai-values", "postedAt": "2009-10-18T23:12:53.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "Mss55L2mFd9MK7Kvp", "html": "

I recently discussed the unlikelihood of an AI taking off and leaving the rest of society behind. The other part I mentioned of Singularitarian concern is that powerful AIs will be programmed with the wrong values. This would be bad even if the AIs did not take over the world entirely, but just became a powerful influence. Is that likely to happen?

\n

Don’t get confused by talk of ‘values’. When people hear this they often think an AI could fail to have values at all, or that we would need to work out how to give an AI values. ‘Values’ just means what the AI does. In the same sense your refrigerator might value making things inside it cold (or for that matter making things behind it warm). Every program you write has values in this sense. It might value outputting ‘#t’ if and only if it’s given a prime number for instance.

\n

The fear then is that a super-AI will do something other than what we want. We are unfortunately picky, and most things other than what we want, we really don’t want. Situations such as being enslaved by an army of giant killer robots, or having your job taken by a simulated mind, are really incredibly close to what you do want compared to situations such as your universe being efficiently remodeled into stationery. If you have a machine with random values and the ability to manipulate everything in the universe, the chance of its final product having humans and tea and crumpets in it is unfathomably small. Some SIAI members seem to believe that almost anyone who manages to make a powerful general AI will be so incapable of giving it suitable values as to approximate a random selection from mind design space.

\n

The fear is not that whoever picks the AI’s goals will do so at random, but rather that they won’t foresee the extent of the AI’s influence, and will pick narrow goals that may as well be random when they act on the world outside the realm they were intended for. For instance an AI programmed to like finding really big prime numbers might find methods that are outside the box, such as hacking computers to covertly divert others’ computing power to the task. If it improves its own intelligence immensely and copies itself we might quickly find ourselves amongst a race of superintelligent creatures whose only value is to find prime numbers. The first thing they would presumably do is stop the needless waste of resources worldwide on everything other than doing that.

\n

Having an impact outside the intended realm is a problem that could exist for any technology. For a certain time our devices do what we want, but at some point they diverge if left long enough, depending on how well we have designed them to do what we want. In the past a car driving itself would diverge from what you wanted at the first corner, whereas after more work they diverge at the point another car gets in their way, and after more work they will diverge at the point that you unexpectedly need to pee.

\n

Notice that at all stages we know over what realm the car’s values coincide with ours, and design it to run accordingly. The same goes with just about all the technology I can think of. Because your toaster’s values and yours diverge as soon as you cease to want bread heated, your toaster is programmed to turn off at that point and not to be very powerful.

\n

Perhaps the concern about strong AI having the wrong goals is like saying ‘one day there will be cars that can drive themselves. It’s much easier to make a car that drives by itself than to make it steer well, so when this technology is developed, the cars will probably have the wrong goals and drive off the road.’ The error here is assuming that the technology will be used outside the realm where it does what we want, because the imagined amazing prototype could be used that way and because programming it to do what we want seems hard. In practice we hardly ever encounter this problem because we know approximately what our creations will do, and can control where they are set to do something. Is AI different?

\n

One suggestion it might be different comes from looking at technologies that intervene in very messy systems. Medicines, public policies and attempts to intervene in ecosystems, for instance, are used without total knowledge of their effects, and often to broader and worse effects than anticipated. If it’s hard to design a single policy with known consequences, and hard to tell what the consequences are, safely designing a machine which will intervene in everything in ways you don’t anticipate is presumably harder. But it seems effects of medicine and policy aren’t usually orders of magnitude larger than anticipated. Nobody accidentally starts a holocaust by changing the road rules. Also, in the societal cases, the unanticipated effects are often from society reacting to the intervention, rather than from the mechanism used having unpredictable reach. For example, it is not often that a policy which intends to improve childhood literacy accidentally improves adult literacy as well, but it might change where people want to send their children to school and hence where they live and what children do in their spare time. This is not such a problem, as human reactions presumably reflect human goals. It seems incredibly unlikely that AI will not have huge social effects of this sort.

\n

Another suggestion that human level AI might have the ‘wrong’ values is that the more flexible and complicated things are the harder it is to predict them in all of the circumstances they might be used. Software has bugs and failures sometimes because those making it could not think of every relevant difference in situations it will be used. But again, we have an idea of how fast these errors turn up and don’t move forward faster than enough are corrected.

\n

The main reason that the space in which to trust technology to please us is predictable is that we accumulate technology incrementally and in pace with the corresponding science, so have knowledge and similar cases to go by. So another reason AI could be different is that there is a huge jump in AI ability suddenly. As far as I can tell this is the basis for SIAI concern. For instance if after years of playing with not very useful code, a researcher suddenly figures out a fundamental equation of intelligence and suddenly finds the reachable universe at his command. Because he hasn’t seen anything like it, when he runs it he has virtually no idea how much it will influence or what it will do. So the danger of bad values is dependent on the danger of a big jump in progress. As I explained previously, a jump seems unlikely. If artificial intelligence is reached more incrementally, even if it ends up being a powerful influence in society, there is little reason to think it will have particularly bad values.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "GDRMuk9qMnEpH9LKu", "title": "How does raising awareness stop prejudice?", "pageUrl": "https://www.lesswrong.com/posts/GDRMuk9qMnEpH9LKu/how-does-raising-awareness-stop-prejudice", "postedAt": "2009-10-17T06:00:46.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "GDRMuk9qMnEpH9LKu", "html": "

Imagine you are in the habit of treating people who have lesser social status as if they are below you. One day you hear an advertisement talking about a group of people you know nothing about. It’s main thrust is that these people are as good as everyone else, or perhaps even special in some ways which the advertisement informs you are good, and that therefore you should respect them.

\n
\"ANTaR

ANTaR informs us that Aboriginals do not get enough respect

\n

What do you infer?

\n
1. These people are totally normal except for being special in various exciting ways, and you should respect them.
2. These people are so poorly respected by others that somebody feels the need to buy advertising to rectify the situation.

What about the next day when you hear that other employers are going to court for failing to employ these people enough?

\n

I can’t think of any better way to stop people wanting to associate with someone than by suggesting to them that nobody else wants to. Low social status seems like the last thing you can solve by raising awareness.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "wqnmHu4LnHxobqHBh", "title": "How far can AI jump?", "pageUrl": "https://www.lesswrong.com/posts/wqnmHu4LnHxobqHBh/how-far-can-ai-jump", "postedAt": "2009-10-16T17:39:47.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "wqnmHu4LnHxobqHBh", "html": "

I went to the Singularity Summit recently, organized by the Singularity Institute for Artificial Intelligence (SIAI). SIAI’s main interest is in the prospect of a superintelligence quickly emerging and destroying everything we care about in the reachable universe. This concern has two components. One is that any AI above ‘human level’ will improve its intelligence further until it takes over the world from all other entities. The other is that when the intelligence that takes off is created it will accidentally have the wrong values, and because it is smart and thus very good at bringing about what it wants, it will destroy all that humans value. I disagree that either part is likely. Here I’ll summarize why I find the first part implausible, and there I discuss the second part.

\n

The reason that an AI – or a group of them – is a contender for gaining existentially risky amounts of power is that it could trigger an intelligence explosion which happens so fast that everyone else is left behind. An intelligence explosion is a positive feedback where more intelligent creatures are better at improving their intelligence further.

\n

Such a feedback seems likely. Even now as we gain more concepts and tools that allow us to think well we use them to make more such understanding. AIs fiddling with their architecture don’t seem fundamentally different. But feedback effects are easy to come by. The question is how big this feedback effect will become. Will it be big enough for one machine to permanently overtake the rest of the world economy in accumulating capability?

\n

In order to grow more powerful than everyone else you need to get significantly ahead at some point. You can imagine this could happen either by having one big jump in progress or by having slightly more growth over a long period of time. Having slightly more growth over a long period is staggeringly unlikely to happen by chance, so it too needs some common cause. Anything that will give you higher growth for long enough to take over the world is a pretty neat innovation, and for you to take over the world everyone else has to not have anything close. So again, this is a big jump in progress. So for AI to help a small group take over the world, it needs to be a big jump.

\n

Notice that no jumps have been big enough before in human invention. Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing. This makes it hard for anyone to get far ahead of everyone else. While you can see there are barriers to insights passing between groups, such as incompatible approaches to a kind of technology by different people working on it, these have not so far caused anything like a gap allowing permanent separation of one group.

\n

Another barrier to a big enough jump is that much human progress comes from the extra use of ideas that sharing information brings. You can imagine that if someone predicted writing they might think ‘whoever creates this will be able to have a superhuman memory and accumulate all the knowledge in the world and use it to make more knowledge until they are so knowledgeable they take over everything.’ If somebody created writing and kept it to themselves they would not accumulate nearly as much recorded knowledge as another person who shared a writing system. The same goes for most technology. At the extreme, if nobody shared information, each person would start out with less knowledge than a cave man, and would presumably end up with about that much still. Nothing invented would be improved on. Systems which are used tend to be improved on more. This means if a group hides their innovations and tries to use them alone to create more innovation, the project will probably not grow as fast as the rest of the economy together. Even if they still listen to what’s going on outside, and just keep their own innovations secret, a lot of improvement in technologies like software comes from use. Forgoing information sharing to protect your advantage will tend to slow down your growth.

\n

Those were some barriers to an AI project causing a big enough jump. Are the reasons for it good enough to make up for them?

\n

The main argument for an AI jump seems to be that human level AI is a powerful and amazing innovation that will cause a high growth rate. But this means it is a leap from what we have currently, not that it is especially likely to be arrived at in one leap. If we invented it tomorrow it would be a jump, but that’s just evidence that we won’t invent it tomorrow. You might argue here that however gradually it arrives, the AI will be around human level one day, and then the next it will suddenly be a superpower. There’s a jump from the growth after human level AI is reached, not before. But if it is arrived at incrementally then others are likely to be close in developing similar technology, unless it is a secret military project or something. Also an AI which recursively improves itself forever will probably be preceded by AIs which self improve to a lesser extent, so the field will be moving fast already. Why would the first try at an AI which can improve itself have infinite success? It’s true that if it were powerful enough it wouldn’t matter if others were close behind or if it took the first group a few goes to make it work. For instance if it only took a few days to become as productive as the rest of the world added together, the AI could probably prevent other research if it wanted. However I haven’t heard any good evidence it’s likely to happen that fast.

\n

Another argument made for an AI project causing a big jump is that intelligence might be the sort of thing for which there is a single principle. Until you discover it you have nothing, and afterwards you can build the smartest thing ever in an afternoon and can just extend it indefinitely. Why would intelligence have such a principle? I haven’t heard any good reason. That we can imagine a simple, all powerful principle of controlling everything in the world isn’t evidence for it existing.

\n

I agree human level AI will be a darn useful achievement and will probably change things a lot, but I’m not convinced that one AI or one group using it will take over the world, because there is no reason it will be a never before seen size jump from technology available before it.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "RKEepbECEXQXqXFNm", "title": "How to think like a quantum monadologist", "pageUrl": "https://www.lesswrong.com/posts/RKEepbECEXQXqXFNm/how-to-think-like-a-quantum-monadologist", "postedAt": "2009-10-15T09:37:33.643Z", "baseScore": -17, "voteCount": 36, "commentCount": 268, "url": null, "contents": { "documentId": "RKEepbECEXQXqXFNm", "html": "

Half the responses to my last article focused on the subject of consciousness, understandably so. Back when LW was still part of OB, I stated my views in more detail (e.g. here, here, here, and here); and I also think it's just obvious, once you allow yourself to notice, that the physics we have does not even contain the everyday phenomenon of color, so something has to change. However, it also seems that people won't change their minds until a concrete alternative to physics-as-usual and de facto property dualism actually comes along. Therefore, I have set out to explain how to think like a quantum monadologist, which is what I will call myself.

Fortunately, this new outlook agrees with the existing outlook far more than it disagrees. For example, even though I'm a quantum monadologist, I'm still seeking to identify the self and its experiences with some part of the physical brain. And I'm not seeking to add big new annexes to the physical formalism that we have, just in order to house the mind; though I may feel the need to impose a certain structure on that formalism, for ontological reasons, and that may or may not have empirical consequences in the macro-quantum realm.

So what are the distinctive novelties of this other approach to the problem? There is an ontological hypothesis, that conscious states are states of a single physical entity, which we may call the self. There is a preferred version of the quantum formalism, in which the world is described by quantum jumps between spacelike tensor products of abstract quantum states (more on this below). The self is represented by one of the tensor factors appearing in these products. There is an inversion of attitude with respect to the mathematical formalism; we do not say that the self is actually a vector in a Hilbert space, we say that the nature of the self is as revealed by phenomenology, and the mathematics is just a way of describing its structure and dynamics. Finally, it is implied that significant quantum effects are functionally relevant to cognition, though so far this tells us nothing about where or how.

Quantum Jumps Between Tensor Products?

For this audience, I think it's best that I start by explaining the quantum formalism I propose, even though the formalism has been chosen solely to match the ontology. I will assume familiarity with the basics of quantum mechanics, including superposition, entanglement, and the fact that we only ever see one outcome, even though the wavefunction describes many.

Suppose we have three qubits, allegedly in a state like |011> + |101> + |110>. In a many-worlds interpretation, we suppose that all three components are equally real. In a one-world interpretation, we normally assume that reality is just one of the three, e.g. |011>, which can be expanded as |0> x |1> x |1>: the first qubit is actually in the 0 state, the second and third qubits in the 1 state.

However, we may, with just as much mathematical validity, express the original state as {|01>+|10>}|1> + |110>. If we look at that first term, how many things are present in it? If the defining property of a thing is that it has a state of its own, then we only have two things, and not three, because two of our qubits are entangled and don't have independent states. It is logically possible to have a one-world interpretation according to which there are two things actually there - one with quite a few degrees of freedom, in the state |01>+|10>, and the other in the much simpler state |1> (and with |110> being unreal, an artefact of the Schrodinger formalism, as must be all the unreal \"branches\" and \"worlds\" according to any single-world interpretation).
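Written out explicitly (just expanding the regrouping described above):

\[ |011\rangle + |101\rangle + |110\rangle \;=\; \bigl(|01\rangle + |10\rangle\bigr)\otimes|1\rangle \;+\; |110\rangle , \]

since \( |011\rangle = |01\rangle\otimes|1\rangle \) and \( |101\rangle = |10\rangle\otimes|1\rangle \). The first term is one entity with four degrees of freedom in the state \( |01\rangle + |10\rangle \), alongside a second entity in the state \( |1\rangle \).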

And there you have it. This is, in its essence, the quantum formalism or quantum interpretation I want to use, as a neo-monadologist. At any time, the universe consists of a number of entities whose formal states inhabit Hilbert spaces of various dimension (thus |01>+|10> comes from a four-dimensional Hilbert space, while |1> comes from a two-dimensional Hilbert space), and the true dynamics consists of repeatedly jumping from one such set of entity-states to another set of entity-states. Models like this exist in the physics literature (see especially Figure 1; you may think of the points as qubits, and the ovals around them as indicating potential entanglement). For those who think in terms of \"collapse interpretations\", this may be regarded as a \"partial collapse theory\" in which most things, at any given time, are completely disentangled; actually realized entanglements are relatively local and transient. However, from the monadological perspective, we want to get away from the idea of entanglement, somewhat. We don't want to think of this as a world in which there are two entangled qubits and one un-entangled qubit, but rather a world in which there is one monad with four degrees of freedom, and another monad with two degrees of freedom. (The degrees of freedom correspond to the number of complex amplitudes required to specify the quantum state.)

The Actual Ontology of the Self and Its Relationship to the Formalism

I've said that the self, the entity which you are and which is experiencing what you experience, is to be formally represented by one of these tensor factors; like |01>+|10>, though much much bigger. But I've also said that this is still just formalism; I'm not saying that the actual state of the self consists of a vector in a Hilbert space or a big set of complex numbers. So what is the actual state of the self, and how does it relate to the mathematics?

The actual nature of the self I take to be at least partly revealed by phenomenology. You are, when awake, experiencing sensations; and you are experiencing them as something - there is a conceptual element to experience. Thoughts and emotions also, I think, conform somewhat to this dual description; there is an aspect of veridical awareness, and an aspect of conceptual projection. If we adopt Husserl's quasi-Cartesian method of investigating consciousness - neither believing in that which is not definitely there, nor outright rejecting any of the stream of suppositions which make up the conceptual side of experience - we find that a specific consciousness, whatever else may be true about it, is partly characterized by this stream of double-sided states: on one side, the \"data\", the \"raw sensations\" and even \"raw thoughts\"; on the other side, the \"interpretation\", all the things which are posited to be true about the data.

Husserl says all this much better than I do, and says much more as well, and he has a precise technical vocabulary in which to say it. As phenomenology, what I just wrote is crude and elementary. But I do want to point out one thing, which is that there is a phenomenology of thought and not just a phenomenology of sensation. Because sensations are so noticeable, philosophers of consciousness generally accept that they are there, and that a description of consciousness must include sensations; but there is a tendency (not universal) to regard thought, cognition, as unconscious. I see this as just footdragging on the part of materialist philosophers who have at length been compelled to admit that colors, et cetera, are there, somewhere; if you were setting out to describe your experience without ontological prejudice, of course you would say something about what you think and not just what you sense, and you would say that you have at least partial awareness of what you're thinking.

But this poses a minor ontological challenge. So long as the ontology of consciousness is restricted to sensation, you can get away with saying that the contents of consciousness consist of a visual sensory field in a certain state, an auditory sensory field in another state, and so on through all the senses, and then all of these integrated in a unitary spatiotemporal meta-perception. A thought, however, is a rather different thing; it is something like a consciously apprehended conceptual structure. There are at least two ontological challenges here: what is a \"conceptual structure\", and how does it unite with raw sensory data to produce an interpreted experience, such as an experience of seeing an apple? The philosophers who limit consciousness to raw sensation alone don't face these problems; they can describe concepts and thinking in a purely computational and unconscious fashion. However, in reality there clearly is such a thing as conceptual phenomenology (or else we wouldn't talk about beliefs and thoughts and awareness of them), and the actual ontology of the self must reflect this.

A crude way to proceed here, which I introduce more as a suggestion than as the answer, is to distinguish between presence and interpretation as aspects of consciousness. It's almost just terminology; but it's terminology constructed to resemble the reality. So, we say there is a self, whatever that is; everything \"raw\" is \"present\" to that self; and everything with a conceptual element is some raw presence that is being \"interpreted\". And since interpretations are themselves processes occurring within the self, logically they are themselves potentially present to it; and their presence may itself be conceptually interpreted. Thus we have the possibility of iteratively more complex \"higher-order thoughts\", thoughts about thoughts.

Enough with the poetics for a moment. Is there a natural formalism for talking about such an entity? It would seem to require a conjunction of qualitative continua and sentential structure. For example, a standard way of talking about the raw visual field specifies hue, saturation, and intensity at every point in that field. But we also want to be able to say that a particular substructure within that field is being \"seen as a square\" or even \"seen as an apple\". We might build up these complex concepts square or apple combinatorially from a set of primitive concepts; and then we need a further notation to say that raw sensory structure X is currently being experienced as a Y. I emphasize again that I am not talking about the computation whereby input X is processed or categorized as a Y, but the conscious experience of interpreting sensation X as an object Y. It can be a slippery idea to hold onto, but I maintain that the situation is analogous to how it was with sensation. You can't say that a particular shade of red is really some colorless physical entity; you have to turn it around and say that the entity in your theory, which hitherto you only knew formally and mathematically, is actually a shade of red. And similarly, we are going to have to say that certain states and certain transitions of state, which we only knew formally and computationally, are actually conceptually interpreted perceptions, reflectively driven thought processes, and so forth.

Returning to the second part of the question with which we started - how does the actual ontology of the self relate to the quantum mathematics - I have supposed that there is a mapping (maybe not 1-to-1, we may be overlooking other aspects of the self) from states of the self to descriptions of those states in a hybrid qualitative/sentential formalism. The implication is that there is a further mapping from this intermediate formalism into the quantum formalism of Hilbert spaces. This isn't actually so amazing. One way to do it is to have a separate basis state for each state in the intermediate formalism - so the basis states are formally labelled by the qualitative/sentential structures - and to also postulate that superpositions of these basis states never actually show up (as we would be unable to interpret them as states of consciousness). But there may be more subtle ways to do it which take advantage of more of the structure of Hilbert space.
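A minimal sketch of that first construction (the labels are invented placeholders; only the shape of the construction matters): index the basis vectors of a Hilbert space by states of the intermediate qualitative/sentential formalism, and postulate that only those labelled basis vectors, never their superpositions, count as admissible states of the self.

```python
import numpy as np

# Placeholder enumeration of states of the intermediate formalism: each is a
# (raw presence, interpretation) pair.  The labels are invented for illustration.
intermediate_states = [
    ('red patch', 'seen as an apple'),
    ('red patch', 'seen as a square'),
    ('grey patch', 'uninterpreted'),
]

# One basis vector per labelled state.
dim = len(intermediate_states)
basis = {label: np.eye(dim)[k] for k, label in enumerate(intermediate_states)}

def is_admissible(state, tol=1e-9):
    # Admissible = proportional to one of the labelled basis vectors,
    # i.e. genuine superpositions of labels never occur.
    norm = np.linalg.norm(state)
    best = max(abs(np.vdot(v, state)) for v in basis.values())
    return norm > tol and abs(best - norm) < tol

print(is_admissible(basis[('red patch', 'seen as an apple')]))   # True
mixed = (basis[('red patch', 'seen as an apple')] +
         basis[('red patch', 'seen as a square')]) / np.sqrt(2)
print(is_admissible(mixed))                                       # False
```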

What About Unconscious Matter? 

If I continue to use this terminology of \"monads\" to describe the entities whose quantum states, tensored together, form the formal state of the universe from moment to moment, then my basic supposition is that conscious minds, e.g. as known from within to adult humans, correspond to monads with very many degrees of freedom, and that these are causally surrounded by (and interact with) many lesser monads in simpler, unconscious states. I'm not saying that complexity causes consciousness, but rather that conscious states, on account of having a minimum internal structure of a certain complexity, cannot be found in (say) a two-qubit monad, and that these simple monads make up the vast majority of them in nature.

In fact, this might be an apt moment to say something about the relationship between these \"monads\" and the elementary particles in terms of which physics is normally described. I think of this in terms of string theory; not to be dogmatic about it, but it just concretely illustrates a way of thinking. There is a formulation of string theory in which everything is made up of entangled \"D0-branes\". An individual D0-brane, as I understand it, has just one scalar internal degree of freedom. A particular spatial geometry can be formed by a quantum condensate of D0-branes, and particles in that geometry are themselves individual D0-branes or are lesser condensates (e.g. a string would be, I suppose, a 1-dimensional D0-brane condensate). Living matter is made up of electrons and quarks; but these are themselves just D0-brane composites. So here we have the answer. The D0-branes are the fundamental degrees of freedom - the qubits of nature, so to speak - and their entanglements and disentanglements define the boundaries of the monads.

Abrupt Conclusion

This is obviously more of a research program than a theory. About a dozen separate instances of handwaving need to be turned into concrete propositions before it has produced an actual theory. The section on how to talk about the actual nature of consciousness without implicitly falling back into the habit of treating the formalism as the reality may seem especially slippery and mystical; but in the end I think it's just another problem we have to face and solve. However, the point of this article is not to carry out the research program, but just to suggest what I'm actually on about. It will be interesting to see how much sense people are able to extract from it.

P.S. I will get around to responding to comments from the previous article soon.

" } }, { "_id": "j93FEgdau5YBvt4BY", "title": "Waterloo, ON, Canada Meetup: 6pm Sun Oct 18 '09!", "pageUrl": "https://www.lesswrong.com/posts/j93FEgdau5YBvt4BY/waterloo-on-canada-meetup-6pm-sun-oct-18-09", "postedAt": "2009-10-15T04:30:49.778Z", "baseScore": 7, "voteCount": 6, "commentCount": 17, "url": null, "contents": { "documentId": "j93FEgdau5YBvt4BY", "html": "

Michael Vassar and I will be attending the Quantum to Cosmos Festival of the Perimeter Institute.  Is anyone interested in meeting up at the Symposium Cafe on 4 King St N, Waterloo, ON, Canada, on Sunday at 6pm on October 18th 2009?  I might duck out at around 8pm, but Michael Vassar seems more likely to stick around.  If we get at least two more positive reply comments (plus the one person who suggested the meetup) then it'll be on and I'll take the question mark off the title.  If that time doesn't work for you but you're in the area, feel free to email me about meeting informally.

\n

Result:  Okay, we have exactly two people RSVPing (I was hoping for three).  We'll show up at the Symposium Cafe at 6pm, possibly walk around or head out to elsewhere by 6:30pm (i.e., join the meetup by 6:30pm if you want to be part of the walking-around group), and we've both got a talk we want to attend at 8:00pm.  Either stuff will happen, or not.

" } }, { "_id": "KqhHhsBRzbf7eckTS", "title": "Information theory and FOOM", "pageUrl": "https://www.lesswrong.com/posts/KqhHhsBRzbf7eckTS/information-theory-and-foom", "postedAt": "2009-10-14T16:52:35.107Z", "baseScore": 5, "voteCount": 18, "commentCount": 95, "url": null, "contents": { "documentId": "KqhHhsBRzbf7eckTS", "html": "

Information is power.  But how much power?  This question is vital when considering the speed and the limits of post-singularity development.  To address this question, consider 2 other domains in which information accumulates, and is translated into an ability to solve problems:  Evolution, and science.

\n

DNA Evolution

\n

Genes code for proteins.  Proteins are composed of modules called \"domains\"; a protein contains from 1 to dozens of domains.  We classify genes into gene \"families\", which can be loosely defined as sets of genes that on average share >25% of their amino acid sequence and have a good alignment for >75% of their length.  The number of genes and gene families known doubles every 28 months; but most \"new\" genes code for proteins that recombine previously-known domains in different orders.

\n

Almost all of the information content of a genome resides in the amino-acid sequence of its domains; the rest mostly indicates what order to use domains in individual genes, and how genes regulate other genes.  About 64% of domains (and 84% of those found in eukaryotes) evolved before eukaryotes split from prokaryotes about 2 billion years ago. (Michael Levitt, PNAS July 7 2009, \"Nature of the protein universe\"; D. Yooseph et al. \"The Sorcerer II global ocean sampling expedition\", PLoS Bio 5:e16.)  (Prokaryotes are single-celled organisms lacking a nucleus, mitochondria, or gene introns.  All multicellular organisms are eukaryotes.)

\n

It's therefore accurate to say that most of the information generated by evolution was produced in the first one or two billion years; the development of more-complex organisms seems to have nearly stopped evolution of protein domains.  (Multi-cellular organisms are much larger and live much longer; therefore there are many orders of magnitude fewer opportunities for selection in a given time period.)  Similarly, most evolution within eukaryotes seems to have occurred during a period of about 50 million years leading up to the Cambrian explosion, half a billion years ago.

\n

My first observation is that evolution has been slowing down in information-theoretic terms, while speeding up in terms of the intelligence produced.  This means that adding information to the gene pool increases the effective intelligence that can be produced using that information by a more-than-linear amount.

\n

In the first of several irresponsible assumptions I'm going to make, let's assume that the information evolved in time t is proportional to i = log(t), while the intelligence evolved is proportional to e^t = e^e^i.  I haven't done the math to support those particular functions; but I'm confident that they fit the data better than linear functions would.  (This assumption is key, and the data should be studied more closely before taking my analysis too seriously.)
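A toy calculation of what this assumption implies (arbitrary time units; nothing here is fitted to data): each doubling of elapsed time adds only a constant log(2) to the information total while squaring the intelligence term.

```python
import math

# Toy illustration of the assumption: information i = log(t), intelligence = e^t = e^(e^i).
# The sample times are arbitrary; only the shapes of the two curves matter.
for t in [1, 2, 4, 8, 16]:
    i = math.log(t)
    intelligence = math.exp(t)            # = e^(e^i), since i = log(t)
    print(f't={t:>2}  information={i:5.2f}  intelligence={intelligence:14.1f}')
```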

\n

My second observation is that evolution occurs in spurts.  There's a lot of data to support this, including data from simulated evolution; see in particular the theory of punctuated equilibrium, and the data from various simulations of evolution in Artificial Life and Artificial Life II.  But I want to single out the eukaryote-to-Cambrian-explosion spurt.  The evolution of the first eukaryotic cell suddenly made a large subset of organism-space more accessible; and the speed of evolution, which normally decreases over time, instead increased for tens of millions of years.

\n

Science!

\n

The following discussion relies largely on de Solla Price's Little Science, Big Science (1963), Nicholas Rescher's Scientific Progress: A Philosophical Essay on the Economics of Research in Natural Science (1978), and the data I presented in my 2004 TransVision talk, \"The myth of accelerating change\".

\n

The growth of \"raw\" scientific knowledge is exponential by most measures: Number of scientists, number of degrees granted, number of journals, number of journal articles, number of dollars spent.  Most of these measures have a doubling time of 10-15 years.  (GDP has a doubling time closer to 20 years, suggesting that the ultimate limits on knowledge may be economic.)

\n

The growth of \"important\" scientific knowledge, measured by journal citations, discoveries considered worth mentioning in histories of science, and perceived social change, is much slower; if it is exponential, it appears IMHO to have had a doubling time of 50-100 years between 1600 and 1940.  (It can be argued that this growth began slowing down at the onset of World War II, and more dramatically around 1970).  Nicholas Rescher argues that important knowledge = log(raw information).

\n

A simple argument supporting this is that \"important\" knowledge is the number of distinctions you can make in the world; and the number of distinctions you can draw based on a set of examples is of course proportional to the log of the size of your data set, assuming that the different distinctions are independent and equiprobable, and your data set is random.  However, an opposing argument is that log(i) is simply the amount of non-redundant information present in a database with uncompressed information i.  (This appears to be approximately the case for genetic sequences.  IMHO it is unlikely that scientific knowledge is that redundant; but that's just a guess.)  Therefore, important knowledge is somewhere between O(log(information)) and O(information), depending whether information is closer to O(raw information) or O(log(raw information)).
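A toy illustration of Rescher's proposed relationship, using the 10-15 year doubling time for raw output quoted above (the numbers are purely illustrative): if important knowledge goes as the log of raw information, each doubling of the raw corpus adds only a constant increment of important knowledge.

```python
import math

# Purely illustrative: raw output doubling every 12 years, important knowledge ~ log2(raw).
raw = 1.0
for year in range(1900, 1961, 12):
    print(f'{year}: raw={raw:5.1f}  important ~ {math.log2(raw):3.1f}')
    raw *= 2.0
# Each 12-year doubling adds exactly 1.0 to the log measure -- far slower growth
# than the raw measure, as claimed.
```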

\n

Analysis

\n

We see two completely-opposite pictures:  In evolution, the efficaciousness of information increases more-than-exponentially with the amount of information.  In science, it increases somewhere between logarithmically and linearly.

\n

My final irresponsible assumption will be that the production of ideas, concepts, theories, and inventions (\"important knowledge\") from raw information is analogous to the production of intelligence from gene-pool information.  Therefore, evolution's efficacy at using the information present in the gene pool can give us a lower bound on the amount of useful knowledge that could be extracted from our raw scientific knowledge.

\n

I argued above that the amount of intelligence produced from a given gene-information-pool i is approximately e^e^i, while the amount of useful knowledge we extract from raw information i is somewhere between O(i) and O(log(i)).  The implication is that the fraction of discoveries that we have made, out of those that could be made from the information we already have, has an upper bound between O(1/e^e^i) and O(1/e^e^e^i).

\n

One key question in asking what the shape of AI takeoff will be, is therefore: Will AI's efficiency at drawing inferences from information be closer to that of humans, or that of evolution?

\n

If the latter, then the number of important discoveries that an AI could make, using only the information we already have, may be between e^e^i and e^e^e^i times the number of important discoveries that we have made from it.  i is a large number representing the total information available to humanity.  e^e^i is a goddamn large number.  e^e^e^i is an awful goddamn large number.  Where before, we predicted FOOM, we would then predict FOOM^FOOM^FOOM^FOOM.

\n

Furthermore, the development of the first AI will be, I think, analogous to the evolution of the first eukaryote, in terms of suddenly making available a large space of possible organisms.  I therefore expect the pace of information generation by evolution to suddenly switch from falling, to increasing, even before taking into account recursive self-improvement.  This means that the rate of information increase will be much greater than can be extrapolated from present trends.  Supposing that the rate of acquisition of important knowledge will change from log(i=e^t) to e^t gives us FOOM^FOOM^FOOM^FOOM^FOOM, or 4FOOM.

\n

This doesn't necessarily mean a hard takeoff.  \"Hard takeoff\" means, IMHO, FOOM in less than 6 months.  Reaching the e^e^e^i level of efficiency would require vast computational resources, even given the right algorithms; an analysis might find that the universe doesn't have enough computronium to even represent, let alone reason over, that space.  (In fact, this brings up the interesting possibility that the ultimate limits of knowledge will be storage capacity:  Our AI descendants will eventually reach the point where they need to delete knowledge from their collective memory in order to have the space to learn something new.)

\n

However, I think this does mean FOOM.  It's just a question of when.

\n

ADDED:  Most commenters are losing sight of the overall argument.  This is the argument:

\n
    \n
  1. Humans have diminishing returns on raw information when trying to produce knowledge.  It takes more dollars, more data, and more scientists to produce a publication or discovery today than in 1900.
  2. \n
  3. Evolution has increasing returns on information when producing intelligence.  With 51% of the information in a human's DNA, you could build at best a bacterium.  With 95-99%, you could build a chimpanzee.
  4. \n
  5. Producing knowledge from information is like producing intelligence from information. (Weak point.)
  6. \n
  7. Therefore, the knowledge that could be inferred from the knowledge that we have is much, much larger than the knowledge that we have.
  8. \n
  9. An artificial intelligence may be much more able than us to infer what is implied by what it knows.
  10. \n
  11. Therefore, the Singularity may not go FOOM, but FOOM^FOOM.
  12. \n
" } }, { "_id": "vEJBc6hfKntnQ2A7C", "title": "The Shadow Question", "pageUrl": "https://www.lesswrong.com/posts/vEJBc6hfKntnQ2A7C/the-shadow-question", "postedAt": "2009-10-14T01:40:56.490Z", "baseScore": 41, "voteCount": 42, "commentCount": 44, "url": null, "contents": { "documentId": "vEJBc6hfKntnQ2A7C", "html": "

This is part 2 of a sequence on problem solving.  Here's part 1, which introduces the vocabulary of \"problems\" versus \"tasks\".  This post's title is a reference¹ worth 15 geek points if you get it without Googling, and 20 if you can also get it without reading the rest of the post.

\n

You have to be careful what you wish for.  You can't just look at a problem, say \"That's not okay,\" and set about changing the world to contain something, anything, other than that.  The easiest way to change things is usually to make them worse.  If I owe the library fifty cents that I don't have lying around, I can't go, \"That's not okay!  I don't want to owe the library fifty cents!\" and consider my problem solved when I set the tardy book on fire and now owe them, not money, but a new copy of the book.  Or you could make things, not worse in the specific domain of your original problem, but bad in some tangentially related department: I could solve my library fine problem by stealing fifty cents from my roommate and giving it to the library.  I'd no longer be indebted to the library.  But then I'd be a thief, and my roommate might find out and be mad at me.  Calling that a solution to the library fine problem would be, if not an outright abuse of the word \"solution\", at least a bit misleading.

\n

So what kind of solutions are we looking for?  How do we answer the Shadow Question?  It's hard to turn a complex problem into doable tasks without some idea of what you want the world to look like when you've completed those tasks.  You could just say that you want to optimize according to your utility function, but that's a little like saying that your goal is to achieve your goals: no duh, but now what?  You probably don't even know what your utility function is; it's not a luminous feature of your mind.

\n

For little problems, the answer to the Shadow Question may not be complete.  For instance, I have never before thought to mentally specify, when making a peanut butter sandwich, that I'd prefer that my act of sandwich-making not lead to the destruction of the Everglades.  But it's complete enough.  The Everglades aren't close enough to my sandwich for me to think they're worth explicitly acting to protect, even now that Everglades-destruction has occurred to me as an undesirable potential side effect.  But for big problems, well - we may have a problem...

\n

Here's a few broad approaches you could take in trying to answer the Shadow Question.  Somebody please medicate me for my addiction to cutesy reference-y titles for things:

\n\n

These strategies tolerate plenty of overlap, but in general, the more overlap available in a situation, the less problematic a problem you have.  If you can simultaneously enable the best case, disable the worst case, make it unlikely that anything will deteriorate, and nearly guarantee that things will improve - uh - go ahead and do that, then!  Sometimes, though, it seems like you have to organize these strategies and narrow down your plan in order.  Arrange them however you like, and in the search space each one leaves behind, optimize for the next.

\n

Part 3 of this sequence will conclude it, and will talk about resource evaluation.

\n

 

\n

1\"The Shadow Question\" refers to the question \"What do you want?\", which was repeatedly asked by creatures called Shadows and their agents during the course of the splendid television show Babylon 5.

" } }, { "_id": "ofSYgmMby7iqxJqi6", "title": "PredictionBook.com - Track your calibration", "pageUrl": "https://www.lesswrong.com/posts/ofSYgmMby7iqxJqi6/predictionbook-com-track-your-calibration", "postedAt": "2009-10-14T00:08:43.863Z", "baseScore": 41, "voteCount": 36, "commentCount": 53, "url": null, "contents": { "documentId": "ofSYgmMby7iqxJqi6", "html": "

Our hosts at Tricycle Developments have created PredictionBook.com, which lets you make predictions and then track your calibration - see whether things you assigned a 70% probability happen 7 times out of 10.

\n

The major challenge with a tool like this is (a) coming up with good short-term predictions to track, and (b) maintaining your will to keep on tracking yourself even if the results are discouraging, as they probably will be.

\n

I think the main motivation to actually use it, would be rationalists challenging each other to put a prediction on the record and track the results - I'm going to try to remember to do this the next time Michael Vassar says \"X%\" and I assign a different probability.  (Vassar would have won quite a few points for his superior predictions of Singularity Summit 2009 attendance - I was pessimistic, Vassar was accurate.)

" } }, { "_id": "MEyqjpSFogiEaoixn", "title": "We're in danger. I must tell the others...", "pageUrl": "https://www.lesswrong.com/posts/MEyqjpSFogiEaoixn/we-re-in-danger-i-must-tell-the-others", "postedAt": "2009-10-13T23:06:46.947Z", "baseScore": 5, "voteCount": 4, "commentCount": 10, "url": null, "contents": { "documentId": "MEyqjpSFogiEaoixn", "html": "

... Oh, no! I've been shot!

\n

— C3PO

\n

A strange sort of paralysis can occur when risk-averse people (like me) decide that we're going to play it safe. We imagine the worst thing that could happen if we go ahead with our slightly risky plan, and this stops us from carrying it out.

\n

One possible way of overcoming such paralysis is to remind yourself just how much danger you're actually in.

\n

Humanity could be mutilated by nuclear war, biotechnology disasters, societal meltdown, environmental collapse, oppressive governments, disagreeable AI, or other horrors. On an individual level, anybody's life could turn sour for more mundane reasons, from disease to bereavement to divorce to unemployment to depression. The terrifying scenarios depend on your values, and differ from person to person. Those here who hope to live forever may die of old age, and then cryonics turns out not to work.

\n

There must be some number X which is the probability of Really Bad Things happening to you. X is probably not a tiny figure, but instead significantly above zero, which encourages you to go ahead with whatever slightly risky plan you were contemplating, as long as it only nudges X upwards a little.

\n

Admittedly, this tactic seems like a cheap hack that relies on an error in human reasoning - is nudging your danger level from .2 to .201 actually more acceptable than nudging it from 0 to .001? Perhaps not. Needless to say, a real rationalist ought to ignore all this and take the action with the highest expected value.

" } }, { "_id": "NXpnf6nJfsgmtfvCN", "title": "BHTV: Eliezer Yudkowsky and Andrew Gelman", "pageUrl": "https://www.lesswrong.com/posts/NXpnf6nJfsgmtfvCN/bhtv-eliezer-yudkowsky-and-andrew-gelman", "postedAt": "2009-10-13T20:09:18.348Z", "baseScore": 11, "voteCount": 9, "commentCount": 4, "url": null, "contents": { "documentId": "NXpnf6nJfsgmtfvCN", "html": "

\n\n\n\n\n

\n

Percontations: The Nature of Probability

\n

Source.

\n

Background on Gelman.

" } }, { "_id": "2CtfZDpSymBt33AJb", "title": "Quantifying ethicality of human actions", "pageUrl": "https://www.lesswrong.com/posts/2CtfZDpSymBt33AJb/quantifying-ethicality-of-human-actions", "postedAt": "2009-10-13T16:10:14.847Z", "baseScore": -14, "voteCount": 17, "commentCount": 58, "url": null, "contents": { "documentId": "2CtfZDpSymBt33AJb", "html": "

Background:  This article is licensed under the GNU Free Documentation License and Creative Commons Attribution-Share-Alike Unported. It was posted to Wikipedia by an author who wished to remain anonymous, known variously as \"24\" and \"142\".  It was subsequently removed from view on Wikipedia, but its text has been preserved by a number of mirrors.  While it could be seen as no more than a basic primer in moral philosophy, it is arguably required reading for anyone unfamiliar with the philosophical background of such concepts as Friendly AI and Coherent Extrapolated Volition.

The search for a formal method for evaluating and quantifying ethicality and morality of human actions stretches back to ancient times. While any simple view of right, wrong and dispute resolution relies on some linguistic and cultural norms, a 'formal' method presumably cannot, and must rely instead on knowledge of more basic human nature, and symbolic methods that allow for only very simple evidence.
By contrast, modern systems of criminal justice and civil law, which evaluate and quantify social and moral norms (usually as a fine, a sentence, or a ruling on damages), usually rely on adversarial process and forensic method, combined with some quasi-empirical methods and many outright appeals to authority and ad hominem arguments. These would all be unacceptable in a formal method based on something more resembling axiomatic proof, which by definition relies on some axioms of morality.

Religious moral codes provide such axioms in most societies, and to some degree, following those strictly could be considered formal in that no more trusted or respected method existed. But our modern concept of what is formal and thus universally trustworthy and transparent is derived from that of the ancient Greeks:

Pythagoras and Plato sought to combine moral and mathematical elements of reality in their work on ontology. This was very influential and the work of both is still consulted to this day, although the social and political implications of their methods are often rejected by more modern philosophers.

Thomas Aquinas, Francis Bacon and some of the Asharite philosophers shared a belief in some kind of over-arching ethical reality provided by a deity. But while Aquinas and Bacon integrated this with methods of Aristotle and ultimately inspired Jesuit and other Catholic methods of assessing and dispensing justice, resulting in Catholic canon law and other forms of Christian church law, the Asharite influence on Islam rejected parallel Mutazilite work on Aristotle, and eventually resulted in the \"classical fiqh\" and the shariah now being revived in some parts of the Islamic World. Thus it could reasonably be said that Catholic and Islamic thought diverged on Aristotle's ideas in the middle ages.

Some consider the debate to continue to this day in economics, with the neoclassical economics based firmly on Aristotle's methods via Friedrich Hayek and Karl Popper, against Islamic economics and feminist economics which reject some aspects of Aristotle's logic, e.g. law of excluded middle, and seek to build on some intuitive and morally defensible ontology, as Plato did. This is probably no less of a controversy today than it was in Plato's time, or among the Asharites:

Today, few accept that economics is a means to any ethical or moral end; it is seen more as a technology that serves the ends of those who control and refine it. It remains, however, that economics does \"evaluate and quantify\" relationships of such importance, e.g. food and labour, that most humans literally cannot live without an economy around them. Thus an economy embodies assumptions about ethics and morality, and Karl Marx thought that this was itself proof that capitalist economics had subsumed the role of the old feudal methods. This view is current to this day in Marxist economics.

However, the longest-lived view of formal methods as applied to morality comes not from Western but Eastern traditions. Confucianism with its stress on honesty and transparency and etiquette, and moral example of rulers and elders, has at times been seen as a formal method among the Chinese, its \"axioms\" often respected as much or more than any from science.

Buddhism also stresses notions of right livelihood which seem to be possible to measure and compare in a quasi-formal manner. The Noble Eightfold Path is a set of priorities, ordinal not cardinal, not strictly quantities, but still, a useful framework for any more formal or weighted value theory.

During The Enlightenment the various traditions became more unified:

Immanuel Kant, in his \"categorical imperative\", sought to define moral duty reflectively, in that everyone was obligated to anticipate and limit the impacts of one's own actions, and \"not act as one would not have everyone act.\". This can be seen as a restated Golden Rule. In the 20th century it was restated as the ecological footprint, a measure of one's use of the Earth's natural capital, which later became a keystone of green economics.

Other related practices are means of measuring well-being and assessing the implied value of life of various professional ethical codes and infrastructure decisions. While these systems rely on empirical methods for gathering data, and are more interested in \"is\" than \"should\", they are at least \"transparent\" and \"repeatable\" in a sense that could be called \"pre-formal\" or \"pre-requisite to formal\". Some think that they verge at times on the reliability of the quasi-empirical methods in mathematics, in that no conceivable disproof seems possible, but evidence \"for\" is not disputed - an example being the observation of Marilyn Waring that actions which prepare for war have measurably higher economic values than those within family.

A formal method could reconcile many points of view by excluding forensic or audit methods which passed morally-undesirable outcomes, e.g. war or genocide, or, worse, which valued them highly. It could not validate any one view as \"true\", but it could find a \"best\" or \"best next step\" for some given time horizon or limited list of models or choices to evaluate. Most proposals for moral purchasing employ some such process. Given a very large number of socially-shared semi-formal economically-committed methods, one might take a mean or other stochastic measure of ethical and moral acceptability to those participating, and thus produce very nearly a species-wide informal method that would have as much reliability as one could expect from any \"formal\" method. Such are the goals of some NGOs in civil society and peace movement and labour movement and anti-globalization movement circles.

An alternative but less popular view is that \"human nature\" can be so well understood and modelled mathematically that it becomes possible to assess with formal and mathematical methods, the cognitive bias or moral instinct, e.g. altruism of humans in general, perhaps with measurable variations due to genetics. This view has been popular since the emergence of the theory of evolution, and E. O. Wilson and George Lakoff are among those who have asserted a strong \"biological basis\" for \"morality\" and \"cognitive science of mathematics\" respectively.

Some combination of these views may be effective at providing a starting point for models of moral cores and instincts and aesthetics in human beings. However, few see them as routes to new moral codes that would be more reliable than the traditional religious ones. A notable exception is B. F. Skinner, who proposed exactly such a replacement in his \"Walden Two\", a sort of behaviorist utopia which had many characteristics in common with modern eco-anarchism and eco-villages. Most advocates of such co-housing and extended family living situations, e.g. Daniel Quinn or William Thomas, consider informal, political, \"tribal\" methods sufficient or more desirable than those involving any kind of \"proof\".

If so, the long search for a formal method to evaluate and quantify ethical outcomes, even in economics, may come to be seen as a sort of mathematical fetishism, or scientism, or even commodity fetishism to the degree it requires the reduction of quality of life to a series of simple quantities.

" } }, { "_id": "K8ovvvMKPvY6wPXAJ", "title": "The power of information?", "pageUrl": "https://www.lesswrong.com/posts/K8ovvvMKPvY6wPXAJ/the-power-of-information", "postedAt": "2009-10-13T01:07:35.908Z", "baseScore": 0, "voteCount": 5, "commentCount": 24, "url": null, "contents": { "documentId": "K8ovvvMKPvY6wPXAJ", "html": "

I'm thinking about how to model an ecosystem of recursively self-improving computer programs.  The model I have in mind assumes finite CPU cycles/second and finite memory as resources, and that these resources are already allocated at time zero.  It models the rate of production of new information by a program given its current resources of information, CPU cycles, and memory; the conversion of information into power to take resources from other programs; and a decision rule by which a program chooses which other program to take resources from.  The objective is to study the system dynamics, in particular looking for attractors and bifurcations/catastrophes, and to see what range of initial conditions don't lead to a singleton.

\n

(A more elaborate model would also represent the fraction of ownership one program had of another program, that being a weight to use to blend the decision rules of the owning programs with the decision rule of the owned program.  It may also be desirable to model trade of information.  I think that modeling Moore's law wrt CPU speed and memory size would make little difference, if we assume the technologies developed would be equally available to all agents.  I'm interested in the shapes of the attractors, not the rate of convergence.)
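A minimal sketch of how the basic (pre-ownership) model could be coded up. Every functional form and constant below is a placeholder of exactly the kind the model is meant to explore; in particular the information-growth law, the power function, and the decision rule are the unknowns under discussion.

```python
import random
from dataclasses import dataclass

@dataclass
class Program:
    info: float   # accumulated information
    cpu: float    # CPU cycles/second, allocated at time zero
    mem: float    # memory, allocated at time zero

def info_growth(p):
    # Placeholder growth law: new information limited by the scarcest resource.
    return 0.01 * min(p.info, p.cpu, p.mem)

def power(p):
    # Placeholder for the unknown conversion of information into power.
    return p.info

def step(programs):
    for p in programs:
        p.info += info_growth(p)
    # Placeholder decision rule: raid the currently weakest other program,
    # taking a sliver of its CPU and memory scaled by the power ratio.
    for p in programs:
        target = min((q for q in programs if q is not p), key=power)
        take = min(0.001 * power(p) / max(power(target), 1e-9), 0.1)
        p.cpu, target.cpu = p.cpu + take * target.cpu, target.cpu * (1 - take)
        p.mem, target.mem = p.mem + take * target.mem, target.mem * (1 - take)

# Example run: does the resource distribution collapse toward a singleton?
random.seed(0)
ecosystem = [Program(info=random.uniform(1, 2), cpu=100.0, mem=100.0) for _ in range(5)]
for _ in range(1000):
    step(ecosystem)
print(sorted(round(p.cpu, 1) for p in ecosystem))
```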

\n

Problem: I don't know how to model power as a function of information.

\n

I have a rough model of how information grows over time; so I can estimate the relative amounts of information in a single real historical society at two points in time.  If I can say that society X had tech level T at time A, and society Y had tech level T at time B, I can use this model to estimate what tech level society Y had at time A.

\n

Therefore, I can gather historical data about military conflicts between societies at different tech levels, estimate the information ratio between those societies, and relate it to the manpower ratios between the armies involved and the outcome of the conflict, giving a system of inequalities.

\n

You can help me in 3 ways:

\n\n

If you choose the last option, choose a historical conflict between sides of uneven tech level, and post here as many as you can find of the following details:

\n\n

For example:

\n\n

Using the two dates 1415 and 1346 leads to some tech-level (or information) ratio R.  For example, under a simple model assuming that tech level doubled every 70 years in this era, we would give the English a tech-level ratio over the French of 2, and then say that the tech-level ratio enjoyed by the English produced a power multiplier greater than the manpower ratio enjoyed by the French:   P(2) > 30000/5900.  This ignores the many advances shared by the English and French between 1346 and 1415; but most of them were not relevant to the battle.  It also ignores the claim that the main factor was that the French had heavy armour, which was a disadvantage rather than an advantage in the deep mud on that rainy day.  Oh well.  (Let's hope for enough data that the law of large numbers kicks in.)
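The bookkeeping for this datapoint, spelled out (the 70-year doubling time is the illustrative assumption above, not an estimate):

```python
# One datapoint expressed as an inequality constraint on the power function P.
# The 70-year doubling time is the illustrative assumption from the text.
doubling_years = 70
tech_ratio = 2 ** ((1415 - 1346) / doubling_years)   # ~1.98, rounded to 2 above
manpower_ratio = 30000 / 5900                        # ~5.08, in favour of the French
print(f'constraint: P({tech_ratio:.2f}) > {manpower_ratio:.2f}')
```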

\n

After gathering a few dozen datapoints, it may be possible to discern a shape for the function P.  (Making P a multiplying force that is a function of a ratio assumes P is linear, since eg. P(8) = P(8/4)*P(4/2)*P(2/1) = 4*P(2); the data can reject this assumption.)  There may be a way to factor the battle duration and the casualty outcome into the equation as well; or at least to see if they correlate with the distance of the datapoint's manpower ratio from the estimated value of P(information ratio) for that datapoint.

\n

(I tried to construct another example from the Battle of Little Bighorn to show a case where the lower-level technology won, but found that the Indians had more rifles than the Army did, and that there is no agreement as to whether the Indians' repeating rifles or the Army's longer-ranged single-shot Springfield rifles were better.)

" } }, { "_id": "pePKcEd4HXfwtBptQ", "title": "Anticipation vs. Faith: At What Cost Rationality?", "pageUrl": "https://www.lesswrong.com/posts/pePKcEd4HXfwtBptQ/anticipation-vs-faith-at-what-cost-rationality", "postedAt": "2009-10-13T00:10:47.818Z", "baseScore": 12, "voteCount": 12, "commentCount": 106, "url": null, "contents": { "documentId": "pePKcEd4HXfwtBptQ", "html": "

Anticipation and faith are both aspects of the human decision process, in a sense just subroutines of a larger program, but they also generate subjective experiences (qualia) that we value for their own sake. Suppose you ask a religious friend why he doesn’t give up religion, he might say something like “Having faith in God comforts me and I think it is a central part of the human experience. Intellectually I know it’s irrational, but I want to keep my faith anyway. My friends and the government will protect me from making any truly serious mistakes as a result of having too much faith (like falling into dangerous cults or refusing to give medical treatment to my children).\"

\n

Personally I've never been religious, so this is just a guess of what someone might say. But these are the kinds of thoughts I have when faced with the prospect of giving up the anticipation of future experiences (after being prompted by Dan Armak). We don't know for sure yet that anticipation is irrational, but it's hard to see how it can be patched up to work in an environment where mind copying and merging are possible, and in the mean time, we have a decision theory (UDT) that seems to work fine, but does not involve any notion of anticipation.

\n

What would you do if true rationality requires giving up something even more fundamental to the human experience than faith? I wonder if anyone is actually willing to take this step, or is this the limit of human rationality, the end of a short journey across the space of possible minds?

" } }, { "_id": "gsYjMui5yD7ePkTY6", "title": "Do the 'unlucky' systematically underestimate high-variance strategies?", "pageUrl": "https://www.lesswrong.com/posts/gsYjMui5yD7ePkTY6/do-the-unlucky-systematically-underestimate-high-variance", "postedAt": "2009-10-12T22:27:37.461Z", "baseScore": 25, "voteCount": 23, "commentCount": 5, "url": null, "contents": { "documentId": "gsYjMui5yD7ePkTY6", "html": "

From the UK Telegraph:

\n
\n

A decade ago, I set out to investigate luck. I wanted to examine the impact on people's lives of chance opportunities, lucky breaks and being in the right place at the right time. After many experiments, I believe that I now understand why some people are luckier than others and that it is possible to become luckier.

\n

To launch my study, I placed advertisements in national newspapers and magazines, asking for people who felt consistently lucky or unlucky to contact me. Over the years, 400 extraordinary men and women volunteered for my research from all walks of life: the youngest is an 18-year-old student, the oldest an 84-year-old retired accountant.

\n
\n

Be lucky -- it's an easy skill to learn

\n

On reading the article, the takeaway message seems to be that the 'unlucky' systematically fail to take advantage of high-expected-but-low-median value opportunities.

" } }, { "_id": "5r7jgoZN6eDN7M5hK", "title": "What Program Are You?", "pageUrl": "https://www.lesswrong.com/posts/5r7jgoZN6eDN7M5hK/what-program-are-you", "postedAt": "2009-10-12T00:29:19.218Z", "baseScore": 36, "voteCount": 31, "commentCount": 43, "url": null, "contents": { "documentId": "5r7jgoZN6eDN7M5hK", "html": "

I've been trying for a while to make sense of the various alternate decision theories discussed here at LW, and have kept quiet until I thought I understood something well enough to make a clear contribution.  Here goes.

\n

You simply cannot reason about what to do by referring to what program you run, and considering the other instances of that program, for the simple reason that: there is no unique program that corresponds to any physical object.

\n

Yes, you can think of many physical objects O as running a program P on data D, but there are many many ways to decompose an object into program and data, as in O = <P,D>.  At one extreme you can think of every physical object as running exactly the same program, i.e., the laws of physics, with its data being its particular arrangements of particles and fields.  At the other extreme, one can think of each distinct physical state as a distinct program, with an empty unused data structure.  In between, there is an astronomical range of other ways to break you into your program P and your data D.
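A toy illustration of this non-uniqueness (the example is arbitrary): the same observable behaviour carved into <P,D> three different ways, from everything-in-the-program to an interpreter (the laws-of-physics extreme) running a program that is itself just data.

```python
# Toy example: one behaviour, three <program, data> decompositions.

# Split 1: all structure in the program, empty data.
def double_hardcoded(x):
    return 2 * x

# Split 2: a generic program parameterised by data.
def scale(data, x):
    return data['factor'] * x
scale_data = {'factor': 2}

# Split 3: an interpreter running a program that is itself just data.
def interpret(source, x):
    env = {}
    exec(source, env)
    return env['f'](x)
source_data = 'def f(x): return 2 * x'

assert double_hardcoded(21) == scale(scale_data, 21) == interpret(source_data, 21) == 42
```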

\n

Eliezer's descriptions of his \"Timeless Decision Theory\", however refer often to \"the computation\" as distinguished from \"its input\" in this \"instantiation\" as if there was some unique way to divide a physical state into these two components.  For example:

\n

The one-sentence version is:  Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.

The three-sentence version is:  Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.

\n

And also:

\n

Timeless decision theory, in which the (Godelian diagonal) expected utility formula is written as follows:  Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)*P(this computation yields A []-> O|rest of universe))  ... which is why TDT one-boxes on Newcomb's Problem - both your current self's physical act, and Omega's physical act in the past, are logical-causal descendants of the computation, and are recalculated accordingly inside the counterfactual. ...  Timeless decision theory can state very definitely how it treats the various facts, within the interior of its expected utility calculation.  It does not update any physical or logical parent of the logical output - rather, it conditions on the initial state of the computation, in order to screen off outside influences; then no further inferences about them are made.

\n

These summaries give the strong impression that one cannot use this decision theory to figure out what to decide until one has first decomposed one's physical state into one's \"computation\" as distinguished from one's \"initial state\" and its followup data structures eventually leading to an \"output.\"  And since there are many many ways to make this decomposition, there can be many many decisions recommended by this decision theory. 

\n

The advice to \"choose as though controlling the logical output of the abstract computation you implement\" might have you choose as if you controlled the actions of all physical objects, if you viewed the laws of physics as your program, or choose as if you only controlled the actions of the particular physical state that you are, if every distinct physical state is a different program.

" } }, { "_id": "FuqELHc3kHnJ4gzbo", "title": "Link: PRISMs, Gom Jabbars, and Consciousness (Peter Watts)", "pageUrl": "https://www.lesswrong.com/posts/FuqELHc3kHnJ4gzbo/link-prisms-gom-jabbars-and-consciousness-peter-watts", "postedAt": "2009-10-11T21:51:52.943Z", "baseScore": 14, "voteCount": 15, "commentCount": 20, "url": null, "contents": { "documentId": "FuqELHc3kHnJ4gzbo", "html": "

http://www.rifters.com/crawl/?p=791

\n
\n

Morsella has gone back to basics. Forget art, symphonies, science. Forget the step-by-step learning of complex tasks. Those may be some of the things we use consciousness for now but that doesn’t mean that’s what it evolved for, any more than the cones in our eyes evolved to give kaleidoscope makers something to do. What’s the primitive, bare-bones, nuts-and-bolts thing that consciousness does once we’ve stripped away all the self-aggrandizing bombast?

\n

Morsella’s answer is delightfully mundane: it mediates conflicting motor commands to the skeletal muscles.

\n
" } }, { "_id": "ioy3DmZduwrNor3hC", "title": "The Argument from Witness Testimony", "pageUrl": "https://www.lesswrong.com/posts/ioy3DmZduwrNor3hC/the-argument-from-witness-testimony", "postedAt": "2009-10-10T14:05:50.422Z", "baseScore": 8, "voteCount": 10, "commentCount": 10, "url": null, "contents": { "documentId": "ioy3DmZduwrNor3hC", "html": "

(Note: This is essentially a rehash/summarization of Jordan Sobel's Lotteries and Miracles - you may prefer the original.)

\n

George Mavrodes wrote an interesting analogy. Scenario 1: Suppose you read a newspaper report claiming that a particular individual (say, Henry Plushbottom of Topeka, Kansas) has won a very large lottery. Before reading the newspaper, you would have given quite low odds that Henry in particular had won the lottery. However, the newspaper report flips your beliefs quite drastically. Afterward, you would give quite high odds that Henry in particular had won the lottery. Scenario 2: You have read various claims that a particular individual (Jesus of Nazareth) arose from the dead. Before hearing those claims, you would have given quite low odds of anything so unlikely happening. However (since you are reading LessWrong) you presumably do not give quite high odds that Jesus arose from the dead.

\n

What is it about the second scenario which makes it different from the first?

\n

\n

Let's model Scenario 1 as a simple Bayes net. There are two nodes, one representing whether Henry wins, and one representing whether Henry is reported to win, and one arrow, from the first to the second.

\n

\"A

\n

What are the parameters of the conditional probability tables? Before any information came in, it seemed very unlikely that Henry was the winner - perhaps he had a one in a million chance. Given that Henry did win, what is the chance that he would be reported to have won? Pretty likely - newspapers do err, but it's reasonable to believe that 9 times out of 10, they get the name of the lottery winner correct. Now suppose that Henry didn't win. What is the chance that he would be reported to have won by mistake? There's nothing in particular to single him out from the other non-winners - being misreported is just as unlikely as winning, maybe even more unlikely.

\n

So we have (using w to abbreviate \"Henry Wins\" and r to abbreviate \"Henry is reported\"):

\n\n\n\n

With a simple computation, we can verify that this model replicates the phenomenon in question. After reading the report, one's estimated probability should be:

\n\n
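Spelled out as a calculation: P(w) and P(r|w) are the one-in-a-million and nine-in-ten figures above, while the exact value used for P(r|!w) is an assumption here, reading \"maybe even more unlikely\" as one in ten million.

```python
def posterior_w_given_r(p_w, p_r_given_w, p_r_given_not_w):
    # Bayes' rule on the two-node net: P(w | r).
    num = p_r_given_w * p_w
    return num / (num + p_r_given_not_w * (1 - p_w))

# P(r|!w) = 1e-7 is an assumed value; the text only says a false report about
# Henry is at least as unlikely as his winning.
print(posterior_w_given_r(p_w=1e-6, p_r_given_w=0.9, p_r_given_not_w=1e-7))  # ~0.9
```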

Of course, Scenario 2 could be modeled with two nodes and one arrow in exactly the same way. If it is rational to come to a different conclusion, then the parameters must be different. How would you justify setting the parameters differently in the second case?

\n

Somewhat relatedly, Douglas Walton has an \"argumentation scheme\" for Argument from Witness Testimony. An argumentation scheme is (roughly) a useful pattern of \"presumptive\" reasoning - that is, uncertain reasoning. In general, the argumentation/defeasible reasoning/non-monotonic logic community seems strangely isolated from the Bayesian inference community, though nominally they're both associated with artificial intelligence. Despite how odd each approach seems from the other side, there is a possibility of cross-fertilization here. Here are the so-called \"premises\" of the scheme (from Argumentation Schemes, p. 310):

\n\n

Here are the so-called \"critical questions\" associated with the argument from witness testimony:

\n
    \n
  1. Is what the witness said internally consistent?
  2. \n
  3. Is what the witness said consistent with the known facts of the case (based on evidence apart from what the witness testified to)?
  4. \n
  5. Is what the witness said consistent with what other witnesses have (independently) testified to?
  6. \n
  7. Is there some kind of bias that can be attributed to the account given by the witness?
  8. \n
  9. How plausible is the statement A asserted by the witness?
  10. \n
\n

As I understand it, argumentation schemes are something like inference rules for plausible reasoning, but the actual premises (including both the scheme's \"premises\" and its \"critical questions\") are treated differently. I have not yet been able to unpack Walton's description of how they ought to be treated differently into the language of single agent reasoning. Usually argumentation theory is phrased and targeted for dialog between differing agents (for example, legal advocates), but it certainly can be applied to single agent reasoning. For example, Pollock's OSCAR is based on defeasible reasoning.

\n

(Spoiler)

\n

Jordan Sobel's answer is that the key aspect of the sudden flip is P(r|!w), the probability of observing a false report. In Scenario 1, the probability of a false report of Henry's having won is even less likely than the probability of Henry winning. Given that humans are known to self-deceive regarding the things that are miraculous and wonderful, you should not carry that parameter through the analogy unchanged. Small increases in P(r|!w) lead to large reductions in P(w|r). For example, if P(r|!w) were equal to P(w), then the posterior probability that Henry won would drop below 0.5. If P(r|!w) were one in a hundred thousand, the posterior probability would drop below 0.1.
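A quick numerical check of those two claims, holding P(w) = one in a million and P(r|w) = 0.9 as in Scenario 1:

```python
# Verify the two sensitivity claims above.
p_w, p_rw = 1e-6, 0.9
for p_r_not_w in (1e-6, 1e-5):   # = P(w), and one in a hundred thousand
    post = p_rw * p_w / (p_rw * p_w + p_r_not_w * (1 - p_w))
    print(f'P(r|!w) = {p_r_not_w:g}  ->  P(w|r) = {post:.3f}')
# Prints ~0.474 and ~0.083, i.e. below 0.5 and below 0.1 respectively.
```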

" } }, { "_id": "3FXzDQ56Z5HdHaqAB", "title": "How to get that Friendly Singularity: a minority view", "pageUrl": "https://www.lesswrong.com/posts/3FXzDQ56Z5HdHaqAB/how-to-get-that-friendly-singularity-a-minority-view", "postedAt": "2009-10-10T10:56:46.960Z", "baseScore": 17, "voteCount": 27, "commentCount": 69, "url": null, "contents": { "documentId": "3FXzDQ56Z5HdHaqAB", "html": "

Note: I know this is a rationality site, not a Singularity Studies site. But the Singularity issue is ever in the background here, and the local focus on decision theory fits right into the larger scheme - see below.

\n

There is a worldview which I have put together over the years, which is basically my approximation to Eliezer's master plan. It's not an attempt to reconstruct every last detail of Eliezer's actual strategy for achieving a Friendly Singularity, though I think it must have considerable resemblance to the real thing. It might be best regarded as Eliezer-inspired, or as \"what my Inner Eliezer thinks\". What I propose to do is to outline this quasi-mythical orthodoxy, this tenuous implicit consensus (tenuous consensus because there is in fact a great diversity of views in the world of thought about the Singularity, but implicit consensus because no-one else has a plan), and then state how I think it should be amended. The amended plan is the \"minority view\" promised in my title.

\n

Elements Of The Worldview

\n

There will be strongly superhuman intelligence in the historically immediate future, unless a civilization-ending technological disaster occurs first.

\n\n

In a conflict of values among intelligences, the higher intelligence will win, so for human values / your values to survive after superintelligence, the best chance is for the seed from which the superintelligence grew to have already been \"human-friendly\".

\n\n

The way to produce a human-friendly seed intelligence is to identify the analogue, in the cognitive architecture behind human decision-making, of the utility function of an EUM, and then to \"renormalize\" or \"reflectively idealize\" this, i.e. to produce an ideal moral agent as defined with respect to our species' particular \"utility function\".

\n\n

The truly fast way to produce a human-relative ideal moral agent is to create an AI with the interim goal of inferring the \"human utility function\" (but with a few safeguards built in, so it doesn't, e.g., kill off humanity while it solves that sub-problem), and which is programmed to then transform itself into the desired ideal moral agent once the exact human utility function has been identified.

\n\n

Commentary

\n

This is, somewhat remarkably, a well-defined research program for the creation of a Friendly Singularity. You could print it out right now and use it as the mission statement of your personal institute for benevolent superintelligence. There are very hard theoretical and empirical problems in there, but I do not see anything that is clearly nonsensical or impossible.

\n

So what's my problem? Why don't I just devote the rest of my life to the achievement of this vision? There are two, maybe three amendments I would wish to make. What I call the ontological problem has not been addressed; the problem of consciousness, which is the main subproblem of the ontological problem, is also passed over; and finally, it makes sense to advocate that human neuroscientists should be trying to identify the human utility function, rather than simply planning to delegate that task to an AI scientist.

\n

The problem of ontology and the problem of consciousness can be stated briefly enough: our physics is incomplete, and even worse, our general scientific ontology is incomplete, because inherently and by construction it excludes the reality of consciousness.

\n

The observation that quantum mechanics, when expressed in a form which makes \"measurement\" an undefined basic concept, does not provide an objective and self-sufficient account of reality, has led on this site to the advocacy of the many-worlds interpretation as the answer. I recently argued that many worlds is not the clear favorite, to a somewhat mixed response, and I imagine that I will be greeted with almost immovable skepticism if I also assert that the very template of natural-scientific reduction - mathematical physics in all its forms - is inherently inadequate for the description of consciousness. Nonetheless, I do so assert. Maybe I will make the case at greater length in a future article. But the situation is more or less as follows. We have invented a number of abstract disciplines, such as logic, mathematics, and computer science, by means of which we find ourselves able to think in a rigorously exact fashion about a variety of abstract possible objects. These objects constitute the theoretical ontology in terms of which we seek to understand and identify the nature of the actual world. I suppose there is also a minimal \"worldly\" ontology still present in all our understandings of the actual world, whereby concepts such as \"thing\" and \"cause\" still play a role, in conjunction with the truly abstract ideas. But this is how it is if you attempt to literally identify the world with any form of physics that we have, whether it's classical atoms in a void, complex amplitudes stretching across a multiverse configuration space, or even a speculative computational physics, based perhaps on cellular automata or equivalence classes of Turing machines.

\n

Having adopted such a framework, how does one then understand one's own conscious experience? Basically, through a combination of outright denial with a stealth dualism that masquerades as identity. Thus a person could say, for example, that the passage of time is an illusion (that's denial) and that perceived qualities are just neuronal categorizations (stealth dualism). I call the latter identification a stealth dualism because it blithely asserts that one thing is another thing when in fact they are nothing like each other. Stealth dualisms are unexamined habitual associations of a bit of physico-computational ontology with a bit of subjective phenomenology which allow materialists to feel that the mind does not pose a philosophical problem for them.

\n

My stance, therefore, is that intellectually we are in a much much worse position, when it comes to understanding consciousness, than most scientists, and especially most computer scientists, think. Not only is it an unsolved problem, but we are trying to solve it in the wrong way: presupposing the desiccated ontology of our mathematical physics, and trying to fit the diversities of phenomenological ontology into that framework. This is, I submit, entirely the wrong way round. One should instead proceed as follows: I exist, and among my properties are that I experience what I am experiencing, and that there is a sequence of such experiences. If I can free my mind from the assumption that the known classes of abstract object are all that can possibly exist, what sort of entity do I appear to be? Phenomenology - self-observation - thereby turns into an ontology of the self, and if you've done it correctly (I'm not saying this is easy), you have the beginning of a new ontology which by design accommodates the manifest realities of consciousness. The task then becomes to reconstitute or reinterpret the world according to mathematical physics in a way which does not erase anything you think you established in the phenomenological phase of your theory-building.

\n

I'm sure this program can be pursued in a variety of ways. My way is to emphasize the phenomenological unity of consciousness as indicating the ontological unity of the self, and to identify the self with what, in current physical language, we would call a large irreducible tensor factor in the quantum state of the brain. Again, the objective is not to reduce consciousness to quantum mechanics, but rather to reinterpret the formal ontology of quantum mechanics in a way which is not outright inconsistent with the bare appearances of experience. However, I'm not today insisting upon the correctness of my particular approach (or even trying very hard to explain it); only emphasizing my conviction that there remains an incredibly profound gap in our understanding of the world, and it has radical implications for any technically detailed attempt to bring about a human-friendly outcome to the race towards superintelligence. In particular, all the disciplines (e.g. theoretical computer science, empirical cognitive neuroscience) which play a part in cashing out the principles of a Friendliness strategy would need to be conceptually reconstructed in a way founded upon the true ontology.

\n

Having said all that, it's a lot simpler to spell out the meaning of my other amendment to the \"orthodox\" blueprint for a Friendly Singularity. It is advisable to not just think about how to delegate the empirical task of determining the human utility function to an AI scientist, but also to encourage existing human scientists to tackle this problem. The basic objective is to understand what sort of decision-making system we are. We're not expected utility maximizers; well, what are we then? This is a conceptual problem, though it requires empirical input, and research by merely human cognitive neuroscientists and decision theorists should be capable of producing conceptual progress, which will in turn help us to find the correct concepts which I have merely approximated here in talking about \"utility functions\" and \"ideal moral agents\".

\n

Thanks to anyone who read this far. :-)

" } }, { "_id": "cs2nvx7ajkGr5kudk", "title": "I'm Not Saying People Are Stupid", "pageUrl": "https://www.lesswrong.com/posts/cs2nvx7ajkGr5kudk/i-m-not-saying-people-are-stupid", "postedAt": "2009-10-09T16:23:17.593Z", "baseScore": 62, "voteCount": 63, "commentCount": 101, "url": null, "contents": { "documentId": "cs2nvx7ajkGr5kudk", "html": "

Razib summarized my entire cognitive biases talk at the Singularity Summit 2009 as saying:  \"Most people are stupid.\"

\n

Hey!  That's a bit unfair.  I never said during my talk that most people are stupid.  In fact, I was very careful not to say, at any point, that people are stupid, because that's explicitly not what I believe.

\n

I don't think that people who believe in single-world quantum mechanics are stupid.  John von Neumann believed in a collapse postulate.

\n

I don't think that philosophers who believe in the \"possibility\" of zombies are stupid.  David Chalmers believes in zombies.

\n

I don't even think that theists are stupid.  Robert Aumann believes in Orthodox Judaism.

\n

And in the closing sentence of my talk on cognitive biases and existential risk, I did not say that humanity was devoting more resources to football than existential risk prevention because we were stupid.

\n

There's an old joke that runs as follows:

\n

A motorist is driving past a mental hospital when he gets a flat tire.
He goes out to change the tire, and sees that one of the patients is watching him through the fence.
Nervous, trying to work quickly, he jacks up the car, takes off the wheel, puts the lugnuts into the hubcap -
And steps on the hubcap, sending the lugnuts clattering into a storm drain.
The mental patient is still watching him through the fence.
The motorist desperately looks into the storm drain, but the lugnuts are gone.
The patient is still watching.
The motorist paces back and forth, trying to think of what to do -
And the patient says,
\"Take one lugnut off each of the other tires, and you'll have three lugnuts on each.\"
\"That's brilliant!\" says the motorist.  \"What's someone like you doing in an asylum?\"
\"I'm here because I'm crazy,\" says the patient, \"not because I'm stupid.\"

" } }, { "_id": "KNkd7awTwYxaHqmgJ", "title": "LW Meetup Google Calendar", "pageUrl": "https://www.lesswrong.com/posts/KNkd7awTwYxaHqmgJ/lw-meetup-google-calendar", "postedAt": "2009-10-07T22:51:05.591Z", "baseScore": 14, "voteCount": 10, "commentCount": 15, "url": null, "contents": { "documentId": "KNkd7awTwYxaHqmgJ", "html": "

I've set up a calendar on Google to track future Less Wrong meetups. I've included links to view the calendar in a couple time zones, but note that if you add the calendar to your own google account, events should be shown in your usual time zone (if someone can confirm this for me, I'd appreciate it). I'll do my best to add any meetups posted to LW, but feel free to e-mail me if you don't see them.

\n

Less Wrong Meetups: Pacific View
Less Wrong Meetups: Eastern View

\n

Link for use in iCal or anything else supporting the ics format

\n

Raw XML Version

" } }, { "_id": "KoQpJFdWkjMd2Rtzh", "title": "Boston Area Less Wrong Meetup: 2 pm Sunday October 11th", "pageUrl": "https://www.lesswrong.com/posts/KoQpJFdWkjMd2Rtzh/boston-area-less-wrong-meetup-2-pm-sunday-october-11th", "postedAt": "2009-10-07T21:15:14.155Z", "baseScore": 8, "voteCount": 5, "commentCount": 12, "url": null, "contents": { "documentId": "KoQpJFdWkjMd2Rtzh", "html": "

There will be a Less Wrong meet-up this Sunday, October 11th, 2 pm, in Cambridge at the Central Square Starbucks Coffee at 655 Massachusetts Avenue (time and place are flexible if anyone has a conflict); please comment if you'd like to attend, or if you have any questions or ideas. Some confirmed attendees include SIAI folk and Less Wrongers Anna Salamon, Steve Rayhawk, Carl Shulman, and Roko Mijc. Also keep your eyes peeled for a probable appearance of expert reductionist Gary Drescher, and a rumored Scott Aaronson sighting.

\r\n

Feel free to contact me at my first name DOT my last name AT post.harvard.edu or 646-525-5383.
Thanks, and see everyone there!

" } }, { "_id": "z45QjCjM48rPuMFkG", "title": "New Haven/Yale Less Wrong Meetup: 5 pm, Monday October 12", "pageUrl": "https://www.lesswrong.com/posts/z45QjCjM48rPuMFkG/new-haven-yale-less-wrong-meetup-5-pm-monday-october-12", "postedAt": "2009-10-07T20:35:09.646Z", "baseScore": 7, "voteCount": 4, "commentCount": 9, "url": null, "contents": { "documentId": "z45QjCjM48rPuMFkG", "html": "

Posted on behalf of Thomas McCabe:

\n

I (Thomas McCabe, a Yale math student) will be hosting a Less Wrong meetup in New Haven, Connecticut, on the Yale University campus. The meetup will take place at 5 PM on Monday, October 12th, at the Yorkside Pizza & Restaurant at 288 York St. (time and place are flexible if anyone has a conflict); please comment if you'd like to attend, or if you have any questions or ideas. The location can be found on Google Maps at this link.

\n

Some confirmed attendees include SIAI folk and Less Wrongers Anna Salamon, Steve Rayhawk, Carl Shulman, and Roko Mijc.

\n

Feel free to contact me at thomas.mccabe@yale.edu, or at 518-248-5525.
Thanks, and see everyone there!

" } }, { "_id": "AhHhm63zdZSDLmb76", "title": "Let them eat cake: Interpersonal Problems vs Tasks", "pageUrl": "https://www.lesswrong.com/posts/AhHhm63zdZSDLmb76/let-them-eat-cake-interpersonal-problems-vs-tasks", "postedAt": "2009-10-07T16:35:16.698Z", "baseScore": 96, "voteCount": 88, "commentCount": 575, "url": null, "contents": { "documentId": "AhHhm63zdZSDLmb76", "html": "

When I read Alicorn's post on problems vs tasks, I immediately realized that the proposed terminology helped express one of my pet peeves: the resistance in society to applying rationality to socializing and dating.

\n

In a thread long, long ago, SilasBarta described his experience with dating advice:

\n
\n

I notice all advice on finding a girlfriend glosses over the actual nuts-and-bolts of it.

\n
\n

In Alicorn's terms, he would be saying that the advice he has encountered treats problems as if they were tasks. Alicorn defines these terms a particular way:

\n
\n

It is a critical faculty to distinguish tasks from problems.  A task is something you do because you predict it will get you from one state of affairs to another state of affairs that you prefer.  A problem is an unacceptable/displeasing state of affairs, now or in the likely future.  So a task is something you do, or can do, while a problem is something that is, or may be.

\n
\n

Yet as she observes in her post, treating genuine problems as if they were defined tasks is a mistake:

\n
\n

Because treating problems like tasks will slow you down in solving them.  You can't just become immortal any more than you can just make a peanut butter sandwich without any bread.

\n
\n

Similarly, many straight guys or queer women can't just find a girlfriend, and many straight women or queer men can't just find a boyfriend,  any more than they can \"just become immortal.\"

\n

\n

People having trouble in those areas may ask for advice, perhaps out of a latent effort to turn the problem into more of a task. Yet a lot of conventional advice doesn't really turn the problem into the task (at least, not for everyone), but rather poses new problems, due to difficulties that Alicorn mentioned, such as lack of resources, lack of propositional knowledge, or lack of procedural knowledge.

Take, for example, \"just be yourself,\" or \"just meet potential partners through friends.\" For many people, these pieces of advice just open up new problems: being oneself is a problem of personal identity. It's not a task that you can execute as part of a step in solving the problem of dating. Having a social network, let alone one that will introduce you to potential partners, is also a problem for many people. Consequently, these pieces of advice sound like \"let them eat cake.\"

Society in general resists the notion that socializing (dating and mating in particular) is a problem. Rather, society treats it as a solved task, yet the procedures it advocates are incomplete, dependent on unacknowledged contextual factors, big hairy problems of their own, or just plain wrong. (Or it gives advice that consists of true observations that are useless for taskification, like \"everyone is looking for something different\" in a mate. Imagine telling a budding chef: \"everyone has different tastes\" in food. It's true, but it isn't actually useful in taskifying a problem like \"how do I cook a meal?\")

Even worse, society resists better attempts to taskify social interaction (especially dating and mating). People who attempt to taskify socializing and dating are often seen as inauthentic, manipulative, inhuman, mechanical, objectifying of others, or going to unnecessary lengths.

While some particular attempts at taskifying those problems may indeed suffer from those flaws, some people seem to object to any form of taskifying in those areas. There may be good reasons to be skeptical of the taskifiability of socializing and mating. Yet while socializing and dating may not be completely taskifiable due to the improvisational and heavily context-dependent nature of those problems, they are actually taskifiable to a reasonably large degree.

Many people seem to hold an idealistic view of socializing and dating, particularly dating, that places them on another plane of reality where things are just supposed to happen \"magically\" and \"naturally,\" free of planning or any other sort of deliberation. Ironically, this Romantic view can actually be counterproductive to romance. Taskification doesn't destroy romance any more than it destroys music or dance. Personally, I think musicians who can actually play their instruments are capable of creating more \"magical\" music than musicians who can't. The Romantic view only applies to those who are naturally adept; in other words, those for whom mating is not a problem. For those who do experience romance as a problem, the Romantic view is garbage [Edit: while turning this into a top-level post, I've realized that I need more clarification of what I am calling the \"Romantic\" view].

The main problem with this Romantic view is that it conflates a requirement for a solution with the requirements for the task-process that leads to the solution. Just because many people want mating and dating to feel magical and spontaneous doesn't mean that every step in finding and attracting mates must be magical and spontaneous, lacking any sort of planning, causal thinking, or other elements of taskification. Any artist, whether in visual media, music, drama, or dance, knows that the \"magic\" of their art is produced by mundane and usually heavily taskified processes. You can't \"just\" create a sublime work of art any more than you can \"just\" have a sublime romantic experience (well, some very talented and lucky people can, but it's a lot harder for everyone else). Actually, it is taskification itself which allows skill to flourish, creating a foundation for expression that can feel spontaneous and magical. It is the mundane that guides the magical, not the other way around.

Sucking at stuff is not sublime. It's not sublime in art, it's not sublime in music, and it's not sublime in dance. In dating, there is nothing wrong with a little innocence and awkwardness, but the lack of procedural and propositional knowledge can get to the point where it intrudes and ruins the \"magic.\" There is nothing \"magical\" about the experience of someone who is bumbling socially and romantically, practically forcing other people to reject him or her; it isn't magical for that person or for those around them. Yet to preserve the perception of \"magic\" and \"spontaneity\" (an experience that is only accessible to those with natural attractiveness and popularity, or luck), society is actually denying that type of experience to those who experience dating as a problem. Of course, they might \"get lucky\" and eventually get together with someone decent without totally screwing things up with that person... but why is society mandating that romance be a given for some people, but a matter of \"getting lucky\" for others?

The sooner society figures out the following, the better:

1. For many people, socializing and dating are problems, not yet tasks.

2. Socializing and dating can be taskified to the extent that other problems with similar solution requirements (e.g. improvisation, fast response to emotional impulses of oneself and others, high attention to context, connection to one's own instincts) can be taskified. Which is a lot of the way, but definitely not all the way.

3. Taskification when applied to interpersonal behavior is not inherently immoral or dehumanizing to anyone, nor does it inherently steal the \"magic\" from romance any more than dance training steals the magic from dance.

Until then, we will continue to have a social caste system of those for whom socializing and dating are tasks (e.g. due to intuitive social skills), over those for whom those things are still problems (because society's accepted taskifications don't work for them, and because societal pressure and censure prevent them from making better taskifications).

" } }, { "_id": "Pmfk7ruhWaHj9diyv", "title": "The First Step is to Admit That You Have a Problem", "pageUrl": "https://www.lesswrong.com/posts/Pmfk7ruhWaHj9diyv/the-first-step-is-to-admit-that-you-have-a-problem", "postedAt": "2009-10-06T20:59:41.195Z", "baseScore": 87, "voteCount": 80, "commentCount": 87, "url": null, "contents": { "documentId": "Pmfk7ruhWaHj9diyv", "html": "

This is part 1 of a sequence on problem solving.  Here is part 2.

\n

It is a critical faculty to distinguish tasks from problems.  A task is something you do because you predict it will get you from one state of affairs to another state of affairs that you prefer.  A problem is an unacceptable/displeasing state of affairs, now or in the likely future.  So a task is something you do, or can do, while a problem is something that is, or may be.  For example:

\n\n

Problems are solved by turning them into tasks and carrying out those tasks.  Turning problems into tasks can sometimes be problematic in itself, although small taskifications can be tasky.  For instance, in the peanut butter sandwich case, if your only missing component for sandwich-making is bread, it doesn't take much mental acrobatics to determine that you now have two tasks to be conducted in order: 1. obtain bread, 2. make sandwich.  Figuring out why you're sad, in case two, could be a task (if you're really good at introspecting accurately, or are very familiar with the cousin-missing type of sadness in particular) or could be a problem (if you're not good at that, or if you've never missed your favorite cousin before and have no prior experience with the precise feeling).  And so on.

\n

Why draw this distinction with such care?  Because treating problems like tasks will slow you down in solving them.  You can't just become immortal any more than you can just make a peanut butter sandwich without any bread.  And agonizing about \"why I can't just do this\" will produce the solution to very few problems.  First, you have to figure out how to taskify the problem.  And the first step is to understand that you have a problem.

\n

\n

Identifying problems is surprisingly difficult.  The language we use for them is almost precisely like the language we use for tasks: \"I have to help the exchange student learn English.\"  \"I have to pick up milk on the way home from school.\"  \"I have to clean the grout.\"  \"I have to travel to Zanzibar.\"  Some of these are more likely to be problems than others, but any of them could be, because problemhood and taskiness depend on factors other than what it is you're supposed to wind up with at the end.  You can easily say what you want to wind up with after finishing doing any of the above \"have to's\": a bilingual student, a fridge that contains milk, clean grout, the property of being in Zanzibar.  But for each outcome to unfold correctly, resources that you might or might not have will be called for.  Does the exchange student benefit most from repetition, or having everything explained in song, or do you need to pepper your teaching with mnemonics?  Do you have cash in your wallet for milk?  Do you know what household items will clean grout and what items will dissolve it entirely?  Where the hell is Zanzibar, anyway?  The approximate ways in which a \"have to\" might be a problem are these:

\n\n

So when you have to do something, you can tell whether it's a problem or a task by checking whether you have all of these things.  That's not going to be foolproof: certain knowledge gaps can obscure themselves and other shortfalls too.  If I mistakenly think that the store from which I want to purchase milk is open 24 hours a day, I have a milk-buying problem and may not realize it until I try to walk into the building and find it locked.

\n

Part 2 of this sequence will get into what to do when you have identified a problem.

" } }, { "_id": "tZodkMtQ3Ao7anzNW", "title": "The Presumptuous Philosopher's Presumptuous Friend", "pageUrl": "https://www.lesswrong.com/posts/tZodkMtQ3Ao7anzNW/the-presumptuous-philosopher-s-presumptuous-friend", "postedAt": "2009-10-05T05:26:23.736Z", "baseScore": 4, "voteCount": 16, "commentCount": 82, "url": null, "contents": { "documentId": "tZodkMtQ3Ao7anzNW", "html": "

One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.

\n

You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.

\n

\"Which hotel are we in, I wonder?\" you ask.

\n

\"The big one, obviously\" says the presumptuous philosopher. \"Because of anthropic reasoning and all that. Million to one odds.\"

\n

\"Rubbish!\" you scream. \"Rubbish and poppycock! We're just as likely to be in any hotel omega builds, regardless of the number of observers in that hotel.\"

\n

\"Unless there are no observers, I assume you mean\" says the presumptuous philosopher.

\n

\"Right, that's a special case where the number of observers in the hotel matters. But except for that it's totally irrelevant!\"

\n

\"In that case,\" says the presumptuous philosopher, \"I'll make a deal with you. We'll go outside and check, and if we're at the small hotel I'll give you ten bucks. If we're at the big hotel, I'll just smile smugly.\"

\n

\"Hah!\" you say. \"You just lost an expected five bucks, sucker!\"

\n

You run out of the room to find yourself in a huge, ten-thousand-story atrium, filled with throngs of yourselves and smug-looking presumptuous philosophers.
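
A toy calculation, for the record, of what \"an expected five bucks\" amounts to under each view. The payoff structure is just the one proposed in the dialogue, and the code and variable names are mine:

```python
# Expected cost of the bet to the presumptuous philosopher under each view.
rooms_big, rooms_small = 10 ** 6, 1
payout_if_small = 10  # dollars, paid only if you turn out to be at the small hotel

p_small_equal_weight = 1 / 2                                    # your rule: each hotel equally likely
p_small_by_observers = rooms_small / (rooms_big + rooms_small)  # his rule: weight by number of copies

print(payout_if_small * p_small_equal_weight)   # 5.0       -- your expected winnings
print(payout_if_small * p_small_by_observers)   # ~0.00001  -- what he expects to pay
```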

" } }, { "_id": "APiWE2wScHQgFkuR6", "title": "Don't Think Too Hard.", "pageUrl": "https://www.lesswrong.com/posts/APiWE2wScHQgFkuR6/don-t-think-too-hard", "postedAt": "2009-10-05T03:51:15.695Z", "baseScore": 14, "voteCount": 12, "commentCount": 36, "url": null, "contents": { "documentId": "APiWE2wScHQgFkuR6", "html": "

I find it interesting that when we're asleep - supposedly unconscious - we're frequently fully conscious, mired in a nonsensical dreamworld of our own creation. There's currently no universally accepted theory of the purpose of dreams - proposed explanations range from cleaning up mental detritus to subconscious problem solving to cognitive accidents. On the other hand, we DO know plenty about what goes on in the brain during the dream state.

Studies show that in dreams, our thought processes are largely the same as the ones we use when we're awake. The main difference seems to be that we don't notice the insane world that we're a part of. We reason perfectly normally based on our surroundings, we're just incapable of reasoning about those surroundings - we lack metacognition when we're dreaming. The culprit behind this is a brain area known as the dorsolateral prefrontal cortex (DLPFC). It's responsible for, among other things, executive function (directing other brain functions), as well as working memory and motor planning. This, combined with the fact that it's the last brain area to develop (meaning it was the last brain area to evolve), suggests that it's key in creating conscious, directed thought. And during sleep, it's shut down, cutting off our ability to question the premises we're given. So, barring entering a lucid dream state, we lack the mental hardware to recognize we're in a hallucination when we dream - it seems perfectly normal.[1]

\n

While we're dreaming, a number of other neurological events are taking place. Long term memories are being accessed and replayed. Brain regions that are normally unconnected work in concert to unite disparate bits of information. Whatever the purpose of dreams is (if they indeed have a purpose at all), they appear to be a window to the strengthening of mental connections and creation of new ones that takes place while we're asleep. And this behavior is necessary to maintain high-level mental functioning. People do better on tests after getting REM sleep, and people who are REM sleep deprived show extremely impaired memory and learning abilities.

Something similar occurs when our mind is wandering - unrelated brain areas are working together to develop new mental pathways (which is why talking about the importance of daydreaming is currently all the rage). The same thing happens when consuming alcohol - daydreaming and mental connection formation increases, and frontal cortex activity (and metacognition) decreases.

The implication here is that the creation of new cognitive pathways is something that takes place in the absence of conscious, directed thought. It passes the plausibility test - we have a limited amount of cognitive resources, so focusing our thoughts leaves fewer mental resources left over for other tasks. And the formation of new mental connections is extremely important - it's essentially enlarging the search space our minds have access to when trying to solve a problem. Though it's not under conscious control, it's still a high-level function - young children and people with autism seem to have extremely muted dreams (if they dream at all), implying fewer mental connections are being formed. Our executive function is great at orchestrating different brain areas to find a solution to a problem, but it's only able to look through the space of possible solutions that's already been created.[2]

Conscious, directed thought - amazing as it is - is not the end-all, be-all of mental function. Humans are at the top of the intellectual food chain (not to mention the actual food chain), but a number of the things that make us 'special' are things we share with other animals. Plenty of them can pass the mirror test. Plenty have language, use tools and have complex social structures. Physiologically, what sets us apart is our processing power - the sheer volume of cognitive pathways we can create and sort through. Half of solving a problem is having a search space that contains the answer, and the human mind can create an ENORMOUS search space. But it does so without directed thought.

If a problem seems intractable, then, you may not be able to make headway by THINKING about it harder. That infamous burst of insight seldom seems to come while hunched over a desk or after that 10th straight hour in the lab - it comes \"in a moment of distraction or else burst forth from the subconscious while we sleep\" [3]. Though it's obviously important to put the hours in to understand your subject (the brain can only work with what it's given), a creative or insightful solution arises from those pseudo-random mental firings that are beyond our conscious control. The answer to a hard problem might be a mental path that your brain hasn't formed yet, making trying to think your way through to it a fruitless endeavor. At a certain point, it's important to step back, relax, and let your subconscious create more grist for the mill.


[1] As an aside, the fact that the brain region responsible for working memory is shut down may be the reason why we generally don't remember our dreams, and why writing them down and trying to remember them is an important step in learning to enter a lucid dream state (when the DLPFC is thought to be activated).

[2] Mind-wandering actually seems to be MOST effective when we're at least partially aware we're doing it - if you're not paying any attention at all, something important could easily slip right past you. Something similar probably takes place during the lucid dream state, where we're aware enough to direct the flow of the dream but not so aware that we wake ourselves from it (a frequent problem of beginning lucid dreamers).

\n

[3] Though I suspect there is a selection bias at work here.

" } }, { "_id": "bshZiaLefDejvPKuS", "title": "Dying Outside", "pageUrl": "https://www.lesswrong.com/posts/bshZiaLefDejvPKuS/dying-outside", "postedAt": "2009-10-05T02:45:02.960Z", "baseScore": 403, "voteCount": 303, "commentCount": 91, "url": null, "contents": { "documentId": "bshZiaLefDejvPKuS", "html": "

A man goes in to see his doctor, and after some tests, the doctor says, \"I'm sorry, but you have a fatal disease.\"

\n

Man: \"That's terrible! How long have I got?\"

\n

Doctor: \"Ten.\"

\n

Man: \"Ten? What kind of answer is that? Ten months? Ten years? Ten what?\"

\n

The doctor looks at his watch. \"Nine.\"

\n

Recently I received some bad medical news (although not as bad as in the joke). Unfortunately I have been diagnosed with a fatal disease, Amyotrophic Lateral Sclerosis or ALS, sometimes called Lou Gehrig's disease. ALS causes nerve damage, progressive muscle weakness and paralysis, and ultimately death. Patients lose the ability to talk, walk, move, eventually even to breathe, which is usually the end of life. This process generally takes about 2 to 5 years.

\n

There are however two bright spots in this picture. The first is that ALS normally does not affect higher brain functions. I will retain my abilities to think and reason as usual. Even as my body is dying outside, I will remain alive inside.

\n

The second relates to survival. Although ALS is generally described as a fatal disease, this is not quite true. It is only mostly fatal. When breathing begins to fail, ALS patients must make a choice. They have the option to either go onto invasive mechanical respiration, which involves a tracheotomy and breathing machine, or they can die in comfort. I was very surprised to learn that over 90% of ALS patients choose to die. And even among those who choose life, for the great majority this is an emergency decision made in the hospital during a medical respiratory crisis. In a few cases the patient will have made his wishes known in advance, but most of the time the procedure is done as part of the medical management of the situation, and then the ALS patient either lives with it or asks to have the machine disconnected so he can die. Probably fewer than 1% of ALS patients arrange to go onto ventilation when they are still in relatively good health, even though this provides the best odds for a successful transition.

\n

With mechanical respiration, survival with ALS can be indefinitely extended. And the great majority of people living on respirators say that their quality of life is good and they are happy with their decision. (There may be a selection effect here.) It seems, then, that calling ALS a fatal disease is an oversimplification. ALS takes away your body, but it does not take away your mind, and if you are determined and fortunate, it does not have to take away your life.

\n

There are a number of practical and financial obstacles to successfully surviving on a ventilator, foremost among them the great load on caregivers. No doubt this contributes to the high rates of choosing death. But it seems that much of the objection is philosophical. People are not happy about being kept alive by machines. And they assume that their quality of life would be poor, without the ability to move and participate in their usual activities. This is despite the fact that most people on respirators describe their quality of life as acceptable to good. As we have seen in other contexts, people are surprisingly poor predictors of how they will react to changed circumstances. This seems to be such a case, contributing to the high death rates for ALS patients.

\n

I hope that when the time comes, I will choose life. ALS kills only motor neurons, which carry signals to the muscles. The senses are intact. And most patients retain at least some vestige of control over a few muscles, which with modern technology can offer a surprisingly effective mode of communication. Stephen Hawking, the world's longest surviving ALS patient at over 40 years since diagnosis, is said to be able to type at ten words per minute by twitching a cheek muscle. I hope to be able to read, browse the net, and even participate in conversations by email and messaging. Voice synthesizers allow local communications, and I am making use of a free service for ALS patients which will create a synthetic model of my own natural voice, for future use. I may even still be able to write code, and my dream is to contribute to open source software projects even from within an immobile body. That will be a life very much worth living.

" } }, { "_id": "SNP6iGikzdf8k95Lq", "title": "When Willpower Attacks", "pageUrl": "https://www.lesswrong.com/posts/SNP6iGikzdf8k95Lq/when-willpower-attacks", "postedAt": "2009-10-03T03:36:49.690Z", "baseScore": 22, "voteCount": 34, "commentCount": 77, "url": null, "contents": { "documentId": "SNP6iGikzdf8k95Lq", "html": "

Less Wrong has held many discussions of willpower. All of them have focused on the cases where willpower fails, and its failure causes harm, such as procrastination, overeating and addiction. Collectively, we call these behaviors akrasia. Akrasia is any behavior that we believe is harmful, but do anyway due to a lack of willpower. Akrasia, however, represents only a small subset of the cases in which willpower fails, and focusing on it too much creates an availability bias that skews our perception of what willpower is, how it works and how much of it is desirable. To counter this bias, I present here some common special cases where strong willpower is harmful or even fatal.

\r\n

\r\n

Consider the human sleep cycle. By default, we settle into an equilibrium in which we sleep for about eight hours per day, starting at about the same time each evening. We can override the normal sleep schedule, and stay awake when we should be asleep, or sleep when we should be awake, by applying willpower. For this example, we'll confine the definition of willpower to just this one ability. Willpower is the ability to control sleep times; we'll leave the ability to work hard and resist cake for later.

\r\n

We use sleep-willpower to arrange our hours strategically. However, we can only do so within certain constraints, which cannot be overridden by conscious choice. It isn't possible to refrain from sleeping entirely, because the amount of willpower required to stay awake increases with the amount of sleep missed, until eventually it exceeds the amount of willpower available. Attempting to minimize sleep for a long enough period will eventually cause a shift to an alternative equilibrium, polyphasic sleep, in which every stage of sleep except the most important one (REM) is skipped. It's harder to stay awake in some circumstances, like dark rooms and boring lectures, than in others. And it's impossible to sleep too much more than normal; the better rested we are, the harder it is to fall asleep. These restrictions are protective; when rats are artificially prevented from sleeping for too long, they die, and when humans use stimulants to stay awake too long, they die too. If a human had unlimited sleep-willpower, then they would risk serious injury every time they used it.

\r\n

Next, consider diet-willpower. Humans have a system that decides what is good to eat, which manifests as cravings and taste preferences. Many people try to override this mechanism, usually for health reasons, replacing it with advice from a perceived authority. Consider a hypothetical person who decides to follow a highly restrictive diet, to lose weight. Unfortunately, most of the diet advice available is bad; diet advice available in the recent past was very bad; and some of the diet advice floating around is disastrously bad. Restrictive diets tend to have harmful deficiencies. If such a deficiency is one that our biochemical diet manager knows how to handle, then following the diet will require willpower. The more severe the deficiency, the more willpower will be required. Try to follow a no-fat diet, and lapses will be the only thing keeping you alive.

\r\n

In these examples, we have a mechanism which provides defaults, and a mechanism for overriding them. In general, we call the defaults \"System 1\", and the override mechanism \"System 2\"; and whatever influence System 2 has over System 1, we call willpower. System 1 represents a general policy, and System 2 adjusts it for the present circumstances. This pattern applies across many different and sometimes unrelated mechanisms, and we use the term \"willpower\" for all of them. The larger the adjustment, or the longer it is applied, the more willpower is required; and the different implementations of willpower - sleep-willpower, diet-willpower, work-willpower, etc - all require resources, such as glucose and focused attention, which are finite and fungible. When willpower fails, it usually \"breaks\" - that is, it fails all at once - rather than failing gradually. Deviating from innate or habitual behavior requires willpower. Willpower is reduced by pain, discomfort, and many types of biochemical imbalance. Sustained application of willpower may adjust habits and produce a new equilibrium, but if no such equilibrium exists, it will eventually fail and cause a reversion or overcorrection in the opposite direction.

\r\n

Willpower lets us adjust our behavior to match our decisions, and willpower failures protect us from mistakes. Generals often successfully convince their soldiers that they should be courageous and always fight to the death, but when stressed enough, the soldiers will break, desert or surrender, and survive. For soldiers on the losing side of a battle, excess willpower is fatal. Teenagers sometimes pledge to stay celibate. Their psychology is designed to make sure they don't keep that pledge. Married couples pledge monogamy, but women with infertile husbands and men with infertile wives tend to cheat. When stubborn people argue, one of them has to give up and end the conversation eventually. The sooner that happens, the less time they waste and the fewer black eyes they get. Workers may be convinced to keep a frantic pace, but sheer laziness will keep them from working hard enough to injure themselves.

\r\n

The next time you see a willpower hack, remember that willpower isn't always a good thing. Sometimes, what we think of as akrasia is actually protecting us from harm.

" } }, { "_id": "xtEK9upWLnYykDxXj", "title": "'oy, girls on lw, want to get together some time?'", "pageUrl": "https://www.lesswrong.com/posts/xtEK9upWLnYykDxXj/oy-girls-on-lw-want-to-get-together-some-time", "postedAt": "2009-10-02T10:50:11.836Z", "baseScore": 38, "voteCount": 63, "commentCount": 184, "url": null, "contents": { "documentId": "xtEK9upWLnYykDxXj", "html": "

2:45:24 PM Katja Grace: The main thing that puts me off in online dating profiles is lack of ambition to save the world
2:45:35 PM Katja Grace: Or do anything much
2:48:03 PM Michael Blume: *nods*
2:48:07 PM Michael Blume: this is indeed a problem
2:57:55 PM Katja Grace: Maybe there is a dating site for smart ambitious nerds somewhere
2:58:25 PM Katja Grace: Need to set up lw extension perhaps
2:59:02 PM Michael Blume: haha, yes ^^
3:00:40 PM Katja Grace: Plenty of discussion on why few girls, how to get girls, nobody ever says 'oy, girls on lw, want to get together some time?'
3:01:14 PM Michael Blume: somebody really should say that
3:01:34 PM Michael Blume: hell, I'm tempted to just copy that IM into a top-level post and click 'submit'
3:01:48 PM Katja Grace: Haha dare you to

" } }, { "_id": "p9g23Aka8kiPDpppy", "title": "Scott Aaronson on Born Probabilities", "pageUrl": "https://www.lesswrong.com/posts/p9g23Aka8kiPDpppy/scott-aaronson-on-born-probabilities", "postedAt": "2009-10-02T06:40:11.245Z", "baseScore": 39, "voteCount": 34, "commentCount": 8, "url": null, "contents": { "documentId": "p9g23Aka8kiPDpppy", "html": "

\n \n

This post attempts to popularize some of Scott Aaronson's lectures and research results relating to Born probabilities. I think they represent a significant step towards answering the question \"Why Born's rule?\" but do not seem to be very well known. Prof. Aaronson writes frequently on his popular blog, Shtetl-Optimized, but is apparently too modest to use it to do much promotion of his own ideas. I hope he doesn’t mind that I take up this task (and that he forgives any errors and misunderstandings I may have committed here).

\n

Before I begin, I want to point out something that has been bugging me about the fictional Ebborian physics, which will eventually lead us to Aaronson's ideas. So, let’s first recall the following passage from Eliezer’s story:

\n

\n
\n

\"And we also discovered,\" continues Po'mi, \"that our very planet of Ebbore, including all the people on it, has a four-dimensional thickness, and is constantly fissioning along that thickness, just as our brains do.  Only the fissioned sides of our planet do not remain in contact, as our new selves do; the sides separate into the fourth-dimensional void.\"

\n

\n

\"Well,\" says Po'mi, \"when the world splits down its four-dimensional thickness, it does not always split exactly evenly.  Indeed, it is not uncommon to see nine-tenths of the four-dimensional thickness in one side.\"

\n

...

\n

\"Now,\" says Po'mi, \"if fundamental physics has nothing to do with consciousness, can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?\"

\n
\n

Ok, so the part that’s been bugging me is, suppose an Ebborian world splits twice, first into 1/3 and 2/3 of the original thickness (slices A and B respectively), then the B slice splits exactly in half, into two 1/3 thickness slices (C and D). Before the splitting, with what probability should you anticipate ending up in the slices A, C and D? Well, according to the squaring rule, you have 1/5 chance of ending up in A, and 4/5 chance of ending up in B. Those in B then have equal chance of ending up in C and D, so each of them gets a final probability of 2/5.
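
To make the arithmetic explicit, here is a minimal sketch of that calculation (the code and variable names are mine, not part of the original example):

```python
# Squaring rule applied to *relative* thickness at each split.
a, b = 1/3, 2/3                      # first split: slices A and B
p_a = a**2 / (a**2 + b**2)           # 1/5
p_b = b**2 / (a**2 + b**2)           # 4/5

c = d = b / 2                        # B then splits exactly in half
p_c = p_b * c**2 / (c**2 + d**2)     # 4/5 * 1/2 = 2/5
p_d = p_b * d**2 / (c**2 + d**2)     # 2/5

print(p_a, p_c, p_d)                 # approximately 0.2, 0.4, 0.4
```

Had A, C and D been produced by a single three-way split into equal thicknesses, the same rule would instead give 1/3 each, so under this rule the answer depends on the order of splitting.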

\n

Well, that’s not how quantum branching works! There, the probability of ending up in any branch depends only on the final amplitude of that branch, not on the order in which branching occurred. This makes perfect sense since decoherence is not a instantaneous process, and thinking of it as branching is only an approximation because worlds never completely split off and become totally independent of one another. In QM, the “order of branching” is not even well defined, so how can probabilities depend on it?

\n

Suppose we want to construct an Ebborian physics where, as in QM, the probability of ending up in any slice depends only on the thickness of that slice, and not on the order in which splitting occurs. How do we go about doing that? Simple, we just make that probability a function of the absolute thickness of a slice, instead of having it depend on the relative thickness at each splitting.

\n

So let’s say that the subjective probability of ending up in any slice is proportional to the square of the absolute thickness of that slice, and consider the above example again. When the world splits into A and B, the probabilities are again 1/5 and 4/5 respectively. But when B splits again into C and D, A goes from probability 1/5 to 1/3, and C and D each get 1/3. That’s pretty weird… what’s going on this time?
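
And a matching sketch of the absolute-thickness rule from this paragraph (again, the code is mine):

```python
# Squaring rule applied to *absolute* thickness, renormalized over
# whichever slices exist at the time.
def probabilities(slices):
    z = sum(t ** 2 for t in slices.values())
    return {name: t ** 2 / z for name, t in slices.items()}

print(probabilities({'A': 1/3, 'B': 2/3}))            # A: 0.2, B: 0.8
print(probabilities({'A': 1/3, 'C': 1/3, 'D': 1/3}))  # after B splits: 1/3 each
```

A's probability moves from 1/5 to 1/3 even though nothing happened to A itself; that renormalization step is exactly what the next paragraphs are about.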

\n

To use Aaronson’s language, splitting is not a 2-norm preserving transformation; it only preserves the 1-norm. Or to state this more plainly, splitting conserves the sum of the individual slices’ thicknesses, but not the sum of the squares of the individual thicknesses. So in order to apply the squaring rule and get a set of probabilities that sum to 1 at the end, we have to renormalize, and this renormalizing can cause the probability of a slice to go up or down, depending purely on what happens to other slices that it otherwise would have nothing to do with.
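
A quick numerical check of that claim (my own illustration): split a slice of thickness t into two equal halves, and the sum of thicknesses is unchanged while the sum of squared thicknesses is not.

```python
t = 2/3
halves = [t / 2, t / 2]
print(sum(halves), t)                       # 0.666..., 0.666...  (1-norm preserved)
print(sum(x ** 2 for x in halves), t ** 2)  # 0.222..., 0.444...  (2-norm is not)
```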

\n

Note that in quantum mechanics, the evolution of a wavefunction always preserves its 2-norm, not its 1-norm (nor p-norm for any p≠2). If we were to use any probability rule other than the squaring rule in QM, we would have to renormalize and thereby encounter this same issue: the probability of a branch would go up or down depending on other parts of the wavefunction that it otherwise would have little interaction with.

\n

At this point you might ask, “Ok, this seems unusual and counterintuitive, but lots of physics are counterintuitive. Is there some other argument that the probability rule shouldn’t involve renormalization?” And the answer to that is yes, because to live in a world with probability renormalization would be to have magical powers, including the ability to solve NP-complete problems in polynomial time. (And to turn this into a full anthropic explanation of the Born rule, similar to the anthropic explanations for other physical laws and constants, we just have to note that intelligence seems to have little evolutionary value in such a world. But that’s my position, not Aaronson’s, or at least he hasn’t argued for this additional step in public, as far as I know.)

\n

Aaronson actually proved that problems in PP, which are commonly believed to be even harder than NP problems, can be solved in polynomial time using “fantasy” quantum computers that use variants of Born’s rule where the exponent doesn't equal 2. But it turns out that the power of these computers has nothing to do with quantum computing, but instead has everything to do with probability renormalization. So here I’ll show how we can solve NP-complete (instead of PP since it’s easier to think about) problems in polynomial time using the modified Ebborian physics that I described above.

\n

The idea is actually very easy to understand. Each time an Ebborian world slice splits, its descendant slices decrease in total probability, while every other slice increases in probability. (Recall how when B split, A’s probability went from 1/5 to 1/3, and B’s 4/5 became a total of 2/3 for C and D.) So to take advantage of this, we first split our world into an exponential number of slices of equal thickness, and let each slice try a different possible solution to the NP-complete problem. If a slice finds that its candidate solution is a correct one, then it does nothing, otherwise it splits itself a large number of times. Since that greatly decreases their own probabilities, and increases the probabilities of the slices that didn’t split at the end, we should expect to find ourselves in one of the latter kind of slices when the computation finishes, which (surprise!) happens to be one that found a correct solution. Pretty neat, right?
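
Here is a back-of-the-envelope version of that bookkeeping in code. Everything specific in it is my own illustrative assumption - the problem size, the number of times a wrong slice splits itself, and modelling the search as guessing a hidden bit-string - so it is not Aaronson's construction, just the thickness arithmetic described in this post:

```python
n_bits = 20                    # candidate solutions: all 2**20 bit-strings
num_candidates = 2 ** n_bits
t0 = 1.0 / num_candidates      # every slice starts with the same thickness
wrong_splits = 10 ** 9         # how many pieces each 'wrong' slice shreds itself into

# Under the square-of-absolute-thickness rule, a candidate's total probability
# is proportional to the sum of the squares of its slices' thicknesses.
mass_correct = t0 ** 2                                    # the one intact slice
mass_per_wrong = wrong_splits * (t0 / wrong_splits) ** 2  # = t0**2 / wrong_splits
mass_wrong = (num_candidates - 1) * mass_per_wrong

p_observe_answer = mass_correct / (mass_correct + mass_wrong)
print(p_observe_answer)        # ~0.999 with these numbers
```

As long as each wrong slice splits many more times than there are candidates, almost all of the squared-thickness mass ends up on the slices that did not split, i.e. the ones holding a correct solution.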

\n

ETA: The lecture notes and papers I linked to also give explanations for other aspects of quantum mechanics, such as why it is linear, and why it preserves the 2-norm and not some other p-norm. Read them to find out more.

" } }, { "_id": "srHQ3yedCgscRBMqi", "title": "Dominant characters on the left", "pageUrl": "https://www.lesswrong.com/posts/srHQ3yedCgscRBMqi/dominant-characters-on-the-left", "postedAt": "2009-10-01T22:40:53.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "srHQ3yedCgscRBMqi", "html": "

From Psyblog:

\n
\n

Research finds that people or objects moving from left to right are perceived as having greater power (Maass et al., 2007):

\n\n

Perhaps it’s no coincidence that athletes, cars and horses are all usually shown on TV reaching the finishing line from left to right.

\n
\n

According to other studies mentioned by Maass et al, in Western societies most people also tend to preferentially imagine events evolving from left to right, picture the situations in subject-verb-object sentences with the subject on the left of the object, look at new places rather than old ones more when stimuli show up in a left to right order, memorize the final positions of objects further along the implied path more when they are moving left to right, imagine number lines and time increasing from left to right, and scan their eyes over art in a left to right trajectory.

\n

Why is this?

\n
\n

It seems likely that this left to right bias has its roots in language… people who speak languages written from right to left like Arabic or Urdu … display the same bias, but in the opposite direction.

\n
\n

So Maass and the others guessed that characters perceived as more active would also tend to be depicted on the left of more passive characters in pictures. Their research agreed:

\n
\n

We propose that spatial imagery is systematically linked to stereotypic beliefs, such that more agentic groups are envisaged to the left of less agentic groups. This spatial agency bias was tested in three studies. In Study 1, a content analysis of over 200 images of male–female pairs (including artwork, photographs, and cartoons) showed that males were over-proportionally presented to the left of females, but only for couples in which the male was perceived as more agentic. Study 2 (N = 40) showed that people tend to draw males to the left of females, but only if they hold stereotypic beliefs that associate males with greater agency. Study 3 (N = 61) investigated whether scanning habits due to writing direction are responsible for the spatial agency bias. We found a tendency for Italian-speakers to position agentic groups (men and young people) to the left of less agentic groups (females and old people), but a reversal in Arabic-speakers who tended to position the more agentic groups to the right. Together, our results suggest a subtle spatial bias in the representation of social groups that seems to be linked to culturally determined writing/reading habits.

\n
\n

\n

\"Adam

Adam appeared on the left in 62% of the paintings considered, far less often than Gomez Addams is portrayed to the left of Morticia (82%). (Picture: Peter Paul Rubens)

\n

Note that the first study only looked at four couples: Adam and Eve, Gomez and Morticia Addams, Fred and Wilma Flintstone, and Marge and Homer Simpson. The last three were compared to surveyed opinions on the couples’ relative activeness, dominance and communion, and the Flintstone and Simpson couples were found to be about equal. An earlier study also found that Gabriel was portrayed to the left of Mary 97% of the time.

\n

Most languages also mention the (active) subject before the object, which means the active entity is on the left when written in a left-to-right language. Grammar predates writing, so if this ordering of nouns is relevant, as the researchers suggest, it seems it combines with the direction of writing to cause the left-to-right bias. It would be interesting to see whether native speakers of the few languages that put the object before the subject have this bias in the other direction. It would also be interesting to see whether the layout of sentences more commonly influences our perceptions of the content, or whether the effect is so weak as to only have influence over years of parsing the same patterns.

\n


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "3JnuoQCN9x2zGo6yh", "title": "Are you a Democrat singletonian, or a Republican singletonian?", "pageUrl": "https://www.lesswrong.com/posts/3JnuoQCN9x2zGo6yh/are-you-a-democrat-singletonian-or-a-republican-singletonian", "postedAt": "2009-10-01T21:35:33.419Z", "baseScore": -14, "voteCount": 29, "commentCount": 37, "url": null, "contents": { "documentId": "3JnuoQCN9x2zGo6yh", "html": "

Some people say that the difference between Republicans and Democrats is that Republicans are conservative in the sense of opposing change, while Democrats are liberal in the sense of promoting change.  But this isn't true - both parties want change; neither especially cares how things were done in the past.

\n

Some people say that Republicans are fiscally conservative, while Democrats are fiscally liberal.  But this isn't true.  Republicans and Democrats both run up huge deficits; they just spend the money on different things.

\n

Some people say Democrats are liberal in the sense of favoring liberty.  But this isn't true.  Republicans want freedom to own guns and run their businesses as they please, while Democrats want the freedom to have abortions and live as they please.

\n

Someone - it may have been George Lakoff - observed that Republicans want government to be their daddy, while Democrats want government to be their mommy.  That's the most-helpful distinction that I've heard.  Republicans want a government that's stern and protects them from strangers.  Democrats want a government that's forgiving and takes care of all their needs.

\n

I was thinking about this because of singletons.  Some people are in favor of creating a singleton AI to rule the universe.  I assume that, as with party affiliation, people choose a position for emotional rather than rational reasons.  So which type of person would want a singleton - a daddy-seeking Republican, or a mommy-seeking Democrat?

\n

I think the answer is, Both.  Republicans and Democrats would both want a singleton to take care of them; just in different ways.  Those who don't want a singleton at all would be Libertarians.

\n

Regardless of whether you think a singleton is a good idea or a bad idea - does this mean that Americans would overwhelmingly vote to construct a singleton, if they were given the choice?

\n

And would the ideas about how to design that singleton break down along party lines?

" } }, { "_id": "2WmQMCFLeEJJ6CrnS", "title": "Why Don’t We Apply What We Know About Twins to Everybody Else?", "pageUrl": "https://www.lesswrong.com/posts/2WmQMCFLeEJJ6CrnS/why-don-t-we-apply-what-we-know-about-twins-to-everybody", "postedAt": "2009-10-01T16:23:42.085Z", "baseScore": 20, "voteCount": 20, "commentCount": 28, "url": null, "contents": { "documentId": "2WmQMCFLeEJJ6CrnS", "html": "

I think our intuition might be miscalibrated when it comes to evaluating how much a person’s genes impact how they turn out physically (which isn’t surprising). What’s a bit strange is that we seem to be closer to the truth when it comes to twins.

\n

Nobody’s surprised when identical twins turn out to have very similar bodies (weight, muscle mass, etc), even into adulthood.

\n

But when it comes to non-twins, people seem to think that “making the right choices” and “willpower” are primary factors in how human bodies turn out, and that we can assign a good amount of personal credit or blame to individuals for good and bad outcomes.

\n

There is a disconnect between these two visions, and I think that it’s the latter that needs to be updated.

\n

After all, even if we put aside the direct ways in which our genes build our bodies (encoding how our tissues grow) and instead look at our abilities to “make the right choices” and exert “willpower”, we find that those are also greatly determined by genetic factors. Identical twins probably turn out very similar in good part because they have almost identical amounts of those qualities of mind.

\n

This doesn’t mean that all is pre-determined and that if we all stop trying we’ll turn out the same we would have otherwise, but rather that we are playing within certain parameters, and that the part we control is probably smaller than most people think (not non-existent — we still deserve some credit — just more modest).

\n

To be clear, I’m not saying the situation was white and we thought it was black, or even that it’s a black & white thing, but rather that most people’s intuition might be the wrong shade of gray. Otherwise, I would think there would be a bigger variation between identical twins, but they spend their lives making different choices yet most stay very similar to each other (as far as I know — if you know of a study on this, please send it my way).

" } }, { "_id": "sihh7JxyE2Xa3RKA5", "title": "Open Thread: October 2009", "pageUrl": "https://www.lesswrong.com/posts/sihh7JxyE2Xa3RKA5/open-thread-october-2009", "postedAt": "2009-10-01T12:49:19.084Z", "baseScore": 8, "voteCount": 8, "commentCount": 436, "url": null, "contents": { "documentId": "sihh7JxyE2Xa3RKA5", "html": "

Hear ye, hear ye: commence the discussion of things which have not been discussed.

\n

As usual, if a discussion gets particularly good, spin it off into a posting.

\n

(For this Open Thread, I'm going to try something new: priming the pump with a few things I'd like to see discussed.)

" } }, { "_id": "eHiicreYQBZovxtDr", "title": "Regular NYC Meetups", "pageUrl": "https://www.lesswrong.com/posts/eHiicreYQBZovxtDr/regular-nyc-meetups", "postedAt": "2009-10-01T12:44:38.676Z", "baseScore": 11, "voteCount": 9, "commentCount": 6, "url": null, "contents": { "documentId": "eHiicreYQBZovxtDr", "html": "

Sayeth Jasen:

\n
\n
\n
\n

This is an excellent opportunity to announce that I recently organized an OB/LW discussion group that meets in NYC twice a month. We had been meeting sporadically ever since Robin's visit back in April. The regular meetings only started about a month ago and have been great fun. Here is the google group we've been using to organize them:

\n

http://groups.google.com/group/overcomingbiasnyc

\n

We meet every 2nd Saturday at 11:00am and every 4th Tuesday at 6:00pm at Georgia's Bake Shop (on the corner of 89th street and Broadway). The deal is that I show up every time and stay for at least two hours regardless of whether or not anyone else comes.

\n

I've been meaning to post this for a while but I don't have enough Karma...

\n
\n

A couple thoughts:

\n\n
\n
" } }, { "_id": "spuqrkxvbg8W7orvA", "title": "NY-area OB/LW meetup Saturday 10/3 7 PM", "pageUrl": "https://www.lesswrong.com/posts/spuqrkxvbg8W7orvA/ny-area-ob-lw-meetup-saturday-10-3-7-pm", "postedAt": "2009-09-30T23:52:52.381Z", "baseScore": 10, "voteCount": 8, "commentCount": 16, "url": null, "contents": { "documentId": "spuqrkxvbg8W7orvA", "html": "

Since lots of us anticipate being in New York this weekend for Singularity Summit 2009, it seems like a great time to hold a NY-area Overcoming Bias/Less Wrong meetup! Whether you're attending the summit or not, do consider dropping by the 92nd Street Marriott any time after 7PM, where you will find a lively group of rationalists discussing and dissecting the day's events, and the ideas presented by the summit speakers. We'll be meeting in the hotel's internal restaurant, which should be open until well after midnight.

\n

RSVP at the Facebook Group or see this event in Google Calendar.

" } }, { "_id": "JF4mv4PbTp6ckN3vG", "title": "Intuitive differences: when to agree to disagree", "pageUrl": "https://www.lesswrong.com/posts/JF4mv4PbTp6ckN3vG/intuitive-differences-when-to-agree-to-disagree", "postedAt": "2009-09-29T07:56:47.836Z", "baseScore": 35, "voteCount": 29, "commentCount": 36, "url": null, "contents": { "documentId": "JF4mv4PbTp6ckN3vG", "html": "

Two days back, I had a rather frustrating disagreement with a friend. The debate rapidly hit a point where it seemed to be going nowhere, and we spent a while going around in circles before agreeing to change the topic. Yesterday, as I was riding the subway, things clicked. I suddenly realized not only what the disagreement had actually been about, but also what several previous disagreements we'd had were about. In all cases, our opinions and arguments had been grounded in opposite intuitions:

\n\n

You may notice that these intuitions are not mutually exclusive in the strict sense. They could both be right, one of them covering certain classes of things and the other the remaining ones. And neither one is obviously and blatantly false - both have evidence supporting them. So the disagreement is not about which one is right, as such. Rather, it's a question of which one is more right, which is the one with broader applicability.

\n

As soon as I realized this, I also realized two other things. One, whenever we would run into this difference in the future, we'd need to recognize it and stop that line of debate, for it wouldn't be resolved before the root disagreement had been solved. Two, actually resolving that core disagreement would take so much time and energy that it probably wouldn't be worth the effort.

\n

The important thing to realize is that neither intuition rests on any particular piece of evidence. Instead, each one is a general outlook that has been formed over many years and countless pieces of evidence, most of which have already been forgotten. Before my realization, neither of us had even consciously known they existed. They are abstract patterns our minds have extracted from what must be hundreds of different cases we've encountered, very high-level hypotheses that have been repeatedly tested and found to be accurate.

\n

It would be impossible to find out which was the more applicable one by means of regular debate. Each of us would have to gather all the evidence that led to the formulation of the intuition in the first place. Pulling a number out of my hat, I'd guess that a comprehensive overview of that evidence (for one intuition) would run at least a hundred pages long. Furthermore, it wouldn't be sufficient for each of us to simply read the other side's overview, once it had been gathered. By this point, we would be interpreting the evidence in light of our already existing intuition. I wouldn't be surprised if simply reading through the summary would lead to both sides only being more certain of their own intuition being right. We would have to take the time to discuss each individual item in detail.

\n

And if a real attempt to sort out the difference is hard, resolving it in the middle of a debate about something else is impossible. Both sides in the debate will have an opinion they think is obvious and be puzzled as to why the other side can consistently fail to get something so obvious. At the same time, neither can access the evidence that leads them to consider their opinion so obvious, and both will grow increasingly frustrated at both the other side's bone-headedness and their own failure to properly communicate something that shouldn't even need explaining.

\n

In many cases, trying to resolve an intuitive difference simply isn't worth the effort. Learn to recognize your intuitive differences, and you'll know when to break off debates once they hit that difference. Putting those intuitions in words still helps understanding, though. When I told my friend the things I've just written here, she agreed, and we were able to have a constructive dialogue about those differences. (While doing so, and returning to the previous day's topic, we were able to identify at least five separate points of disagreement that were all rooted in the same intuitive difference.) Each of us was also able to explain, on a rough level, some of the background that supported our respective intuitions. In the end, we still didn't agree, but at least we understood each other's positions a little better.

\n

But what if the intuitive difference is about something really important? My friend and I resolved to just wait things out and see whose hypothesis would turn out more accurate, but sometimes the difference might affect big decisions about the actions you want to take. (Robin's and Eliezer's disagreement on the nature of the Singularity comes to mind.) What if the disagreement really needs to be solved?

\n

I'm not sure how well it can be done, but one could try. First off, both need to realize that in all likelihood, both intuitions have a large grain of truth to them. Like with me and my friend, the question is often one of the breadth of applicability, not of a strict truth or falsehood. Once the basic positions have been formulated, both should ask whether, not why. Assign some certainty value to the likelihood of your intuition being the more correct one, and then consider the fact that your \"opponent\" has spent many years analyzing evidence to reach this position and might very well be right. Adjust your certainty downwards to account for this realization. Then take a few weeks considering both the things that may have led you to formulate this intuition, as well as the things that might have led your opponent to theirs. Spend time gathering evidence for both sides of the view, and be sure to give each piece of evidence a balanced view: half of the time you'll first consider a case from the PoV of your opponent's hypothesis, then of your own. Half of the time you'll do it the other way around. Commit all such considerations to writing and present them to your opponent at regular intervals, taking the time to discuss them through. This is no time for motivated skepticism - both of you need to have a genuine crisis of faith in order for things to get anywhere.

\n

Not every disagreement is an intuitive difference. Any disagreement that rests on particular pieces of evidence and can be easily resolved with the correct empirical evidence isn't one. If it feels like one of the intuitions is strictly false instead of having a large grain of truth to it, it's still an intuitive difference, but not the kind that I have been covering here. An intuitive difference is also kind of related to, but different from, an inferential distance. In order to resolve it, a lot of information needs to be absorbed, but by both partners, not just by one of them. It's not a question of having different evidence: theoretically, you might both even have exactly the same evidence, but gathered in a different order. The question is one of differing interpretations, not raw data as such.

" } }, { "_id": "sLLR5u9NJTfL88Spx", "title": "Why Many-Worlds Is Not The Rationally Favored Interpretation", "pageUrl": "https://www.lesswrong.com/posts/sLLR5u9NJTfL88Spx/why-many-worlds-is-not-the-rationally-favored-interpretation", "postedAt": "2009-09-29T05:22:48.366Z", "baseScore": 13, "voteCount": 40, "commentCount": 101, "url": null, "contents": { "documentId": "sLLR5u9NJTfL88Spx", "html": "

Eliezer recently posted an essay on \"the fallacy of privileging the hypothesis\". What it's really about is the fallacy of privileging an arbitrary hypothesis. In the fictional example, a detective proposes that the investigation of an unsolved murder should begin by investigating whether a particular, randomly chosen citizen was in fact the murderer. Towards the end, this is likened to the presumption that one particular religion, rather than any of the other existing or even merely possible religions, is especially worth investigating.

\n

However, in between the fictional and the supernatural illustrations of the fallacy, we have something more empirical: quantum mechanics. Eliezer writes, as he has previously, that the many-worlds interpretation is the one - the rationally favored interpretation, the picture of reality which rationally should be adopted given the empirical success of quantum theory. Eliezer has said this before, and I have argued against it before, back when this site was just part of a blog. This site is about rationality, not physics; and the quantum case is not essential to the exposition of this fallacy. But given the regularity with which many-worlds metaphysics shows up in discussion here, perhaps it is worth presenting a case for the opposition.

\n

We can do this the easy way, or the hard way. The easy way is to argue that many-worlds is merely not favored, because we are nowhere near being able to locate our hypotheses in a way which permits a clean-cut judgment about their relative merits. The available hypotheses about the reality beneath quantum appearances are one and all unfinished muddles, and we should let their advocates get on with turning them into exact hypotheses without picking favorites first. (That is, if their advocates can be bothered turning them into exact hypotheses.)

\n

The hard way is to argue that many-worlds is actually disfavored - that we can already say it is unlikely to be true. But let's take the easy path first, and see how things stand at the end.

\n

The two examples of favoring an arbitrary hypothesis with which we have been provided - the murder investigation, the rivalry of religions - both present a situation in which the obvious hypotheses are homogeneous. They all have the form \"Citizen X did it\" or \"Deity Y did it\". It is easy to see that for particular values of X and Y, one is making an arbitrary selection from a large set of possibilities. This is not the case in quantum foundations. The well-known interpretations are extremely heterogeneous. There has not been much of an effort made to express them in a common framework - something necessary if we want to apply Occam's razor in the form of theoretical complexity - nor has there been much of an attempt to discern the full \"space\" of possible theories from which they have been drawn - something necessary if we really do wish to avoid privileging the hypotheses we happen to have. Part of the reason is, again, that many of the known options are somewhat underdeveloped as exact theories. They subsist partly on rhetoric and handwaving; they are mathematical vaporware. And it's hard to benchmark vaporware.

\n

In his latest article, Eliezer presents the following argument:

\n

\"... there [is] no concrete evidence whatsoever that favors a collapse postulate or single-world quantum mechanics.  But, said Scott, we might encounter future evidence in favor of single-world quantum mechanics, and many-worlds still has the open question of the Born probabilities... There must be a trillion better ways to answer the Born question without adding a collapse postulate...\"

\n

The basic wrong assumption being made is that quantum superposition by default equals multiplicity - that because the wavefunction in the double-slit experiment has two branches, one for each slit, there must be two of something there - and that a single-world interpretation has to add an extra postulate to this picture, such as a collapse process which removes one branch. But superposition-as-multiplicity really is just another hypothesis. When you use ordinary probabilities, you are not rationally obligated to believe that every outcome exists somewhere; and an electron wavefunction really may be describing a single object in a single state, rather than a multiplicity of them.

\n

A quantum amplitude, being a complex number, is not an ordinary probability; it is, instead, a mysterious quantity from which usable probabilities are derived. Many-worlds says, \"Let's view these amplitudes as realities, and try to derive the probabilities from them.\" But you can go the other way, and say, \"Let's view these amplitudes as derived from the probabilities of a more fundamental theory.\" Mathematical results like Bell's theorem show that this will require a little imagination - you won't be able to derive quantum mechanics as an approximation to a 19th-century type of physics. But we have the imagination; we just need to use it in a disciplined way.

\n

So that's the kernel of the argument that many worlds is not favored: the hypotheses under consideration are still too much of a mess to even be commensurable, and the informal argument for many worlds, quoted above, simply presupposes a multiplicity interpretation of quantum superposition. How about the argument that many worlds is actually disfavored? That would become a genuinely technical discussion, and when pressed, I would ultimately not insist upon it. We don't know enough about the theory-space yet. Single-world thinking looks more fruitful to me, when it comes to sub-quantum theory-building, but there are versions of many-worlds which I do occasionally like to think about. So the verdict for now has to be: not proven; and meanwhile, let a hundred schools of thought contend.

" } }, { "_id": "X2AD2LgtKgkRNPj2a", "title": "Privileging the Hypothesis", "pageUrl": "https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis", "postedAt": "2009-09-29T00:40:12.159Z", "baseScore": 127, "voteCount": 97, "commentCount": 130, "url": null, "contents": { "documentId": "X2AD2LgtKgkRNPj2a", "html": "

Suppose that the police of Largeville, a town with a million inhabitants, are investigating a murder in which there are few or no clues—the victim was stabbed to death in an alley, and there are no fingerprints and no witnesses.

Then, one of the detectives says, “Well… we have no idea who did it… no particular evidence singling out any of the million people in this city… but let’s consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln. It could have been him, after all.”

I’ll label this the fallacy of privileging the hypothesis. (Do let me know if it already has an official name—I can’t recall seeing it described.)

Now the detective may perhaps have some form of rational evidence that is not legal evidence admissible in court—hearsay from an informant, for example. But if the detective does not have some justification already in hand for promoting Mortimer to the police’s special attention—if the name is pulled entirely out of a hat—then Mortimer’s rights are being violated.

And this is true even if the detective is not claiming that Mortimer “did” do it, but only asking the police to spend time pondering that Mortimer might have done it—unjustifiably promoting that particular hypothesis to attention. It’s human nature to look for confirmation rather than disconfirmation. Suppose that three detectives each suggest their hated enemies, as names to be considered; and Mortimer is brown-haired, Frederick is black-haired, and Helen is blonde. Then a witness is found who says that the person leaving the scene was brown-haired. “Aha!” say the police. “We previously had no evidence to distinguish among the possibilities, but now we know that Mortimer did it!”

This is related to the principle I’ve started calling “locating the hypothesis,” which is that if you have a billion boxes only one of which contains a diamond (the truth), and your detectors only provide 1 bit of evidence apiece, then it takes much more evidence to promote the truth to your particular attention—to narrow it down to ten good possibilities, each deserving of our individual attention—than it does to figure out which of those ten possibilities is true. It takes 27 bits to narrow it down to ten, and just another 4 bits will give us better than even odds of having the right answer.
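
(As a quick check on that arithmetic, here is the calculation spelled out; the snippet is purely illustrative and the rounding is mine, not part of the original argument.)

```python
import math

population = 10**9   # a billion boxes, only one of which holds the diamond
shortlist = 10       # the handful of possibilities deserving individual attention

bits_to_shortlist = math.log2(population / shortlist)  # ~26.6, i.e. roughly 27 bits
bits_within_shortlist = math.log2(shortlist)           # ~3.3, so 4 bits more than suffices

print(f'bits to narrow a billion down to ten: {bits_to_shortlist:.1f}')
print(f'bits to then single out the right one: {bits_within_shortlist:.1f}')
```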

Thus the detective, in calling Mortimer to the particular attention of the police, for no reason out of a million other people, is skipping over most of the evidence that needs to be supplied against Mortimer.

And the detective ought to have this evidence in their possession, at the first moment when they bring Mortimer to the police’s attention at all. It may be mere rational evidence rather than legal evidence, but if there’s no evidence then the detective is harassing and persecuting poor Mortimer.

During my recent diavlog with Scott Aaronson on quantum mechanics, I did manage to corner Scott to the extent of getting Scott to admit that there was no concrete evidence whatsoever that favors a collapse postulate or single-world quantum mechanics. But, said Scott, we might encounter future evidence in favor of single-world quantum mechanics, and many-worlds still has the open question of the Born probabilities.

This is indeed what I would call the fallacy of privileging the hypothesis. There must be a trillion better ways to answer the Born question without adding a collapse postulate that would be the only non-linear, non-unitary, discontinuous, non-differentiable, non-CPT-symmetric, non-local in the configuration space, Liouville’s-Theorem-violating, privileged-space-of-simultaneity-possessing, faster-than-light-influencing, acausal, informally specified law in all of physics. Something that unphysical is not worth saying out loud or even thinking about as a possibility without a rather large weight of evidence—far more than the current grand total of zero.

But because of a historical accident, collapse postulates and single-world quantum mechanics are indeed on everyone’s lips and in everyone’s mind to be thought of, and so the open question of the Born probabilities is offered up (by Scott Aaronson no less!) as evidence that many-worlds can’t yet offer a complete picture of the world. Which is taken to mean that single-world quantum mechanics is still in the running somehow.

In the minds of human beings, if you can get them to think about this particular hypothesis rather than the trillion other possibilities that are no more complicated or unlikely, you really have done a huge chunk of the work of persuasion. Anything thought about is treated as “in the running,” and if other runners seem to fall behind in the race a little, it’s assumed that this runner is edging forward or even entering the lead.

And yes, this is just the same fallacy committed, on a much more blatant scale, by the theist who points out that modern science does not offer an absolutely complete explanation of the entire universe, and takes this as evidence for the existence of Jehovah. Rather than Allah, the Flying Spaghetti Monster, or a trillion other gods no less complicated—never mind the space of naturalistic explanations!

To talk about “intelligent design” whenever you point to a purported flaw or open problem in evolutionary theory is, again, privileging the hypothesis—you must have evidence already in hand that points to intelligent design specifically in order to justify raising that particular idea to our attention, rather than a thousand others.

So that’s the sane rule. And the corresponding anti-epistemology is to talk endlessly of “possibility” and how you “can’t disprove” an idea, to hope that future evidence may confirm it without presenting past evidence already in hand, to dwell and dwell on possibilities without evaluating possibly unfavorable evidence, to draw glowing word-pictures of confirming observations that could happen but haven’t happened yet, or to try and show that piece after piece of negative evidence is “not conclusive.”

Just as Occam’s Razor says that more complicated propositions require more evidence to believe, more complicated propositions also ought to require more work to raise to attention. Just as the principle of burdensome details requires that each part of a belief be separately justified, it requires that each part be separately raised to attention.

As discussed in Perpetual Motion Beliefs, faith and type 2 perpetual motion machines (water → ice cubes + electricity) have in common that they purport to manufacture improbability from nowhere, whether the improbability of water forming ice cubes or the improbability of arriving at correct beliefs without observation. Sometimes most of the anti-work involved in manufacturing this improbability is getting us to pay attention to an unwarranted belief—thinking on it, dwelling on it. In large answer spaces, attention without evidence is more than halfway to belief without evidence.

Someone who spends all day thinking about whether the Trinity does or does not exist, rather than Allah or Thor or the Flying Spaghetti Monster, is more than halfway to Christianity. If leaving, they’re less than half departed; if arriving, they’re more than halfway there.

An oft-encountered mode of privilege is to try to make uncertainty within a space, slop outside of that space onto the privileged hypothesis. For example, a creationist seizes on some (allegedly) debated aspect of contemporary theory, argues that scientists are uncertain about evolution, and then says, “We don’t really know which theory is right, so maybe intelligent design is right.” But the uncertainty is uncertainty within the realm of naturalistic theories of evolution—we have no reason to believe that we’ll need to leave that realm to deal with our uncertainty, still less that we would jump out of the realm of standard science and land on Jehovah in particular. That is privileging the hypothesis—taking doubt within a normal space, and trying to slop doubt out of the normal space, onto a privileged (and usually discredited) extremely abnormal target.

Similarly, our uncertainty about where the Born statistics come from should be uncertainty within the space of quantum theories that are continuous, linear, unitary, slower-than-light, local, causal, naturalistic, et cetera—the usual character of physical law. Some of that uncertainty might slop outside the standard space onto theories that violate one of these standard characteristics. It’s indeed possible that we might have to think outside the box. But single-world theories violate all these characteristics, and there is no reason to privilege that hypothesis.

" } }, { "_id": "JTFGokioPRJjw3Zas", "title": "Your Most Valuable Skill", "pageUrl": "https://www.lesswrong.com/posts/JTFGokioPRJjw3Zas/your-most-valuable-skill", "postedAt": "2009-09-27T17:01:10.306Z", "baseScore": 32, "voteCount": 33, "commentCount": 97, "url": null, "contents": { "documentId": "JTFGokioPRJjw3Zas", "html": "

Knowledge is great: I suspect we can agree there.  Sadly, though, we can't guarantee ourselves infinite time in which to learn everything eventually, and in the meantime, there are plenty of situations where having irrelevant knowledge instead of more instrumentally useful knowledge can be decidedly suboptimal.  Therefore, there's good reason to work out what facts we'll need to deploy and give special priority to learning those facts.  There's nothing intrinsically more interesting or valuable about the knowledge that the capital of the United States is Washington, D.C. than there is about the knowledge that the capital of Bali is Denpasar, but unless you live or spend a lot of time in Indonesia, the latter knowledge will be less likely to come up.

\n

It seems the same is true of procedural knowledge (with the quirk that it's easier to deliberately put yourself in situations where you use whatever procedural knowledge you have than it is to arrange to need to know the capital of Bali.)  If your procedural knowledge is useful, and also difficult to obtain or unpopular to practice or both, you might even turn it into a career (or save money that you would have spent hiring people who have).

\n

Rationality is sort of the ur-procedure, but after a certain point - the point where you're no longer buying into supernaturalist superstition, begging for a Darwin Award, or falling for cheap scams - its marginal practical value diminishes.  Practicing rationality as an art is fun and there's some chance it'll yield a high return, but evolution (genetic and memetic) didn't do that bad of a job on us: we enter adulthood with an arsenal of heuristics that are mostly good enough.  A little patching of the worst leaks, some bailing of bilge that got in early on, and you have a serviceable brain-yacht.  (Sound of metaphor straining.)

\n

So when you want to spend time on learning or honing a skill, it makes sense to choose skills with a high return on investment, be it in terms of fun, resources, the goodwill of others, insurance against emergency, or other valuable results.  Note that if you learned a skill, used it to learn a non-customized fact, and do not anticipate using the skill again, it's not the skill that was useful; the skill was just a sine qua non for the useful fact, and others don't have to duplicate the research process to benefit.  A skill that yielded one (or more) customized facts - i.e., facts about yourself, that you can't go on to share straight up with other people - might be a useful skill in this way, however.

\n

For practical daily purposes, what is your most valuable skill (or what most valuable skill are you trying to attain now)?  Post it in the comments, along with what makes your skill valuable, tips for picking it up, and what made you first investigate it.

" } }, { "_id": "y7jZ9BLEeuNTzgAE5", "title": "The Anthropic Trilemma", "pageUrl": "https://www.lesswrong.com/posts/y7jZ9BLEeuNTzgAE5/the-anthropic-trilemma", "postedAt": "2009-09-27T01:47:54.920Z", "baseScore": 78, "voteCount": 69, "commentCount": 233, "url": null, "contents": { "documentId": "y7jZ9BLEeuNTzgAE5", "html": "

Speaking of problems I don't know how to solve, here's one that's been gnawing at me for years.

\n

The operation of splitting a subjective worldline seems obvious enough - the skeptical initiate can consider the Ebborians, creatures whose brains come in flat sheets and who can symmetrically divide down their thickness.  The more sophisticated need merely consider a sentient computer program: stop, copy, paste, start, and what was one person has now continued on in two places.  If one of your future selves will see red, and one of your future selves will see green, then (it seems) you should anticipate seeing red or green when you wake up with 50% probability.  That is, it's a known fact that different versions of you will see red, or alternatively green, and you should weight the two anticipated possibilities equally.  (Consider what happens when you're flipping a quantum coin: half your measure will continue into either branch, and subjective probability will follow quantum measure for unknown reasons.)

\n

But if I make two copies of the same computer program, is there twice as much experience, or only the same experience?  Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?

\n

Let's suppose that three copies get three times as much experience.  (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)

\n

Just as computer programs or brains can split, they ought to be able to merge.  If we imagine a version of the Ebborian species that computes digitally, so that the brains remain synchronized so long as they go on getting the same sensory inputs, then we ought to be able to put two brains back together along the thickness, after dividing them.  In the case of computer programs, we should be able to perform an operation where we compare each pair of corresponding bits in the two programs, and if they are the same, copy them, and if they are different, delete the whole program.  (This seems to establish an equal causal dependency of the final program on the two original programs that went into it.  E.g., if you test the causal dependency via counterfactuals, then disturbing any bit of the two originals results in the final program being completely different (namely deleted).)

\n

So here's a simple algorithm for winning the lottery:

\n

Buy a ticket.  Suspend your computer program just before the lottery drawing - which should of course be a quantum lottery, so that every ticket wins somewhere.  Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery.  Then suspend the programs, merge them again, and start the result.  If you don't win the lottery, then just wake up automatically.

\n

The odds of winning the lottery are ordinarily a billion to one.  But now the branch in which you win has your \"measure\", your \"amount of experience\", temporarily multiplied by a trillion.  So with the brief expenditure of a little extra computing power, you can subjectively win the lottery - be reasonably sure that when next you open your eyes, you will see a computer screen flashing \"You won!\"  As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn't feel a thing.

\n

Now you could just bite this bullet.  You could say, \"Sounds to me like it should work fine.\"  You could say, \"There's no reason why you shouldn't be able to exert anthropic psychic powers.\"  You could say, \"I have no problem with the idea that no one else could see you exerting your anthropic psychic powers, and I have no problem with the idea that different people can send different portions of their subjective futures into different realities.\"

\n

I find myself somewhat reluctant to bite that bullet, personally.

\n

Nick Bostrom, when I proposed this problem to him, offered that you should anticipate winning the lottery after five seconds, but anticipate losing the lottery after fifteen seconds.

\n

To bite this bullet, you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities.  If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1.  And if you lose the lottery, the subjective probability of having lost the lottery, ten seconds later, is ~1.  But we don't have p(\"experience win after 15s\") = p(\"experience win after 15s\"|\"experience win after 5s\")*p(\"experience win after 5s\") + p(\"experience win after 15s\"|\"experience not-win after 5s\")*p(\"experience not-win after 5s\").
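
(To see the conflict in numbers: the snippet below works through the naive 'weight of experience' bookkeeping for the lottery trick above, with a trillion copies on a win and billion-to-one odds. It is my own illustration of the argument, not something from the original text.)

```python
p_win = 1e-9           # a billion-to-one quantum lottery
copies_if_win = 1e12   # a trillion copies woken for ten seconds on a win

# Naive anticipation at t = 5s: weight of winning experiences
# divided by the total weight of experiences.
p_exp_win_5s = (p_win * copies_if_win) / (p_win * copies_if_win + (1 - p_win) * 1)

# At t = 15s the winning copies have been merged back into one,
# so the weight of winning experiences is back to the ordinary odds.
p_exp_win_15s = p_win

# What the product rule demands, given that winners stay winners
# and losers stay losers between t = 5s and t = 15s:
p_exp_win_15s_chained = 1.0 * p_exp_win_5s + 0.0 * (1 - p_exp_win_5s)

print(f'anticipate winning at  5s: {p_exp_win_5s:.4f}')            # ~0.999
print(f'anticipate winning at 15s: {p_exp_win_15s:.0e}')           # 1e-09
print(f'what the chain rule requires: {p_exp_win_15s_chained:.4f}')
```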

\n

I'm reluctant to bite that bullet too.

\n

And the third horn of the trilemma is to reject the idea of the personal future - that there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears.  Or, for that matter, that there's any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears.  In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer \"continuing on\" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.

\n

There are no threads connecting subjective experiences.  There are simply different subjective experiences.  Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not connected.

\n

I still have trouble biting that bullet for some reason.  Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, \"What will I see happen next?\"  I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own.  I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.

\n

Bound to my naive intuitions that can be explained away by obvious evolutionary instincts, you say?  It's plausible that I could be forced down this path, but I don't feel forced down it quite yet.  It would feel like a fake reduction.  I have rather the sense that my confusion here is tied up with my confusion over what sort of physical configurations, or cascades of cause and effect, \"exist\" in any sense and \"experience\" anything in any sense, and flatly denying the existence of subjective continuity would not make me feel any less confused about that.

\n

The fourth horn of the trilemma (as 'twere) would be denying that two copies of the same computation had any more \"weight of experience\" than one; but in addition to the Boltzmann Brain problem in large universes, you might develop similar anthropic psychic powers if you could split a trillion times, have each computation view a slightly different scene in some small detail, forget that detail, and converge the computations so they could be reunified afterward - then you were temporarily a trillion different people who all happened to develop into the same future self.  So it's not clear that the fourth horn actually changes anything, which is why I call it a trilemma.

\n

I should mention, in this connection, a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely, if you tried the analogue using quantum branching within a large coherent superposition (e.g. a quantum computer).  If you quantum-split into a trillion copies, those trillion copies would have the same total quantum measure after being merged or converged.

\n

It's a remarkable fact that the one sort of branching we do have extensive actual experience with - though we don't know why it behaves the way it does - seems to behave in a very strange way that is exactly right to avoid anthropic superpowers and goes on obeying the standard axioms for conditional probability.

\n

In quantum copying and merging, every \"branch\" operation preserves the total measure of the original branch, and every \"merge\" operation (which you could theoretically do in large coherent superpositions) likewise preserves the total measure of the incoming branches.

\n

Great for QM.  But it's not clear to me at all how to set up an analogous set of rules for making copies of sentient beings, in which the total number of processors can go up or down and you can transfer processors from one set of minds to another.

\n

To sum up:

\n\n

I will be extremely impressed if Less Wrong solves this one.

" } }, { "_id": "n585YgAhDFo5DkYvt", "title": "The Scylla of Error and the Charybdis of Paralysis", "pageUrl": "https://www.lesswrong.com/posts/n585YgAhDFo5DkYvt/the-scylla-of-error-and-the-charybdis-of-paralysis", "postedAt": "2009-09-26T16:01:20.938Z", "baseScore": 16, "voteCount": 17, "commentCount": 18, "url": null, "contents": { "documentId": "n585YgAhDFo5DkYvt", "html": "

We're interested in improving human rationality. Many of our techniques for improving human rationality take time. In real-time situations, you can lose by making the wrong decision, or by making the \"right\" decision too slowly. Most of us do not have inflexible-schedule, high-stakes decisions to make, though. How often does real-time decision making really come up?

\n

Suppose you are making a fairly long-ranged decision. Call this decision 1. While analyzing decision 1, you come to a natural pause. At this pause you need to decide whether to analyze further, or to act on your best-so-far analysis. Call this decision 2. Note that decision 2 is made under tighter time pressure than decision 1. This scenario argues that decision-making is recursive, and so if there are any time bounds, then many decisions will need to be made at very tight time bounds.

\n

A second, \"covert\" goal of this post is to provide a definitely-not-paradoxical problem for people to practice their Bayesian reasoning on. Here is a concrete model of real-time decisionmaking, motivated by medical-drama television shows, where the team diagnoses and treats a patient over the course of each episode. Diagnosing and treating a patient who is dying of an unknown disease is a colorful example of real-time decisionmaking.

\n

To play this game, you need a coin, two six-sided dice, a deck of cards, and a helper to manipulate these objects. The manipulator sets up the game by flipping a coin. If heads (tails), the patient is suffering from an exotic fungus (allergy). Then the manipulator prepares a deck by removing all of the clubs (diamonds) so that the deck is a red-biased (black-biased) random-color generator. Finally, the manipulator determines the patient's starting health by rolling the dice and summing them. All of this is done secretly.

\n

Play proceeds in turns. At the beginning of each turn, the manipulator flips a coin to determine whether test results are available. If test results are available, the manipulator draws a card from the deck and reports its color. A red (black) card gives you suggestive evidence that the patient is suffering from a fungus (allergy). You choose whether to treat a fungus, allergy, or wait. If you treat correctly, the manipulator leaves the patient's health where it is (they're improving, but on a longer timescale). If you wait, the manipulator reduces the patient's health by one. If you treat incorrectly, the manipulator reduces the patient's health by two.

\n

Play ends when you treat the patient for the same disease for six consecutive turns or when the patient reaches zero health.

\n

Here is some Python code simulating a simplistic strategy. What Bayesian strategy yields the best results? Is there a concise description of this strategy? 
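
(That code isn't reproduced here, but the sketch below shows one way the game and a simplistic strategy could look. The 0.9 confidence threshold, and the choice to ignore deck depletion when updating, are simplifications of mine rather than details from the post.)

```python
import random

def play_game(threshold=0.9, rng=random):
    # One episode of the diagnosis game; returns True if the patient survives.
    fungus = rng.random() < 0.5                     # coin flip: the true disease
    # Remove clubs (fungus) or diamonds (allergy): 26 cards of one colour, 13 of the other.
    deck = ['red'] * 26 + ['black'] * 13 if fungus else ['red'] * 13 + ['black'] * 26
    rng.shuffle(deck)
    health = rng.randint(1, 6) + rng.randint(1, 6)  # sum of two dice

    p_fungus = 0.5                                  # prior over the two diseases
    streak_disease, streak_len = None, 0

    while health > 0:
        if rng.random() < 0.5 and deck:             # are test results available this turn?
            card = deck.pop()
            # Treat each draw as 2/3-vs-1/3 evidence (ignoring deck depletion).
            like_f = 2/3 if card == 'red' else 1/3
            like_a = 1/3 if card == 'red' else 2/3
            p_fungus = p_fungus * like_f / (p_fungus * like_f + (1 - p_fungus) * like_a)

        # Simplistic strategy: commit to a treatment only once fairly confident, else wait.
        if p_fungus >= threshold:
            action = 'fungus'
        elif p_fungus <= 1 - threshold:
            action = 'allergy'
        else:
            action = 'wait'

        if action == 'wait':
            health -= 1
            streak_disease, streak_len = None, 0
        else:
            if (action == 'fungus') != fungus:      # wrong treatment costs two health
                health -= 2
            streak_len = streak_len + 1 if action == streak_disease else 1
            streak_disease = action
            if streak_len == 6:                     # six consecutive turns of the same treatment
                break
    return health > 0

wins = sum(play_game() for _ in range(10_000))
print(f'survival rate with a 0.9 threshold: {wins / 10_000:.1%}')
```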

\n

The model can be made more complicated. The space of possible actions is small. There is no choice of what to investigate next. In the real world, there are likely to be diminishing returns to further tests or further analysis. There could be uncertainty about how much time pressure there is. There could be uncertainty about how much information future tests will reveal. Every complication will make the task of computing the best strategy more difficult.

\n

We need fast approximations to rationality (even quite bad approximations, if they're fast enough), as well as procedures that spend time in order to purchase a better result.

" } }, { "_id": "mmuNCfLyArLodpGLt", "title": "Correlated decision making: a complete theory", "pageUrl": "https://www.lesswrong.com/posts/mmuNCfLyArLodpGLt/correlated-decision-making-a-complete-theory", "postedAt": "2009-09-26T11:47:09.061Z", "baseScore": 10, "voteCount": 8, "commentCount": 22, "url": null, "contents": { "documentId": "mmuNCfLyArLodpGLt", "html": "

The title of this post most probably deserves a cautious question mark at the end, but I'll go out on a limb and start sawing it behind me: I think I've got a framework that consistently solves correlated decision problems. That is, those situations where different agents (a forgetful you at different times, your duplicates, or Omega’s prediction of you) will come to the same decision.

\n

After my first post on the subject, Wei Dai asked whether my ideas could be formalised enough that they could be applied mechanically. There were further challenges: introducing positional information, and dealing with the difference between simulations and predictions. Since I claimed this sort of approach could apply to Newcomb’s problem, it is also useful to see it work in cases where the two decisions are only partially correlated - where Omega is good, but he’s not perfect.

\n

The theory

\n

In standard decision making, it is easy to estimate your own contribution to your own utility; the contribution of others to your own utility is then estimated separately. In correlated decision-making, both steps are trickier; estimating your contribution is non-obvious, and the contribution from others is not independent. In fact, the question to ask is not \"if I decide this, how much return will I make\", but rather \"in a world in which I decide this, how much return will I make\".

\n

You first estimate the contribution of each decision made to your own utility, using a simplified version of the CDP: if N correlated decisions are needed to gain some utility, then each decision maker is estimated to have contributed 1/N of the effort towards the gain of that utility.

\n

Then the procedure under correlated decision making is:

\n

1) Estimate the contribution of each correlated decision towards your utility, using CDP.

\n

2) Estimate the probability that each decision actually happens (this is an implicit use of the SIA).

\n

3) Use 1) and 2) to estimate the total utility that emerges from the decision.

\n

To illustrate, apply it to the generalised absent-minded driver problem, where the returns for turning off at the first and second intersections are x and y, respectively, while driving straight through grants a return of z. The expected return for going straight with probability p is R = (1-p)x + p(1-p)y + p²z.

\n

Then the expected return for the driver at the first intersection is (1-p)x + [p(1-p)y + p²z]/2, since the y and z returns require two decisions before being claimed. The expected return for the second driver is [(1-p)y + pz]/2. The first driver exists with probability one, while the second driver exists with probability p, giving the correct return of R.
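
(A quick numerical check that the two attributed contributions really do sum back to R; the particular values of x, y, z and p below are arbitrary choices of mine, not from the post.)

```python
def check_cdp_decomposition(x, y, z, p):
    # Verify that the CDP/SIA bookkeeping recovers R = (1-p)x + p(1-p)y + p^2 z.
    R = (1 - p) * x + p * (1 - p) * y + p**2 * z

    # Driver at the first intersection (exists with probability 1):
    first = (1 - p) * x + (p * (1 - p) * y + p**2 * z) / 2
    # Driver at the second intersection (exists with probability p):
    second = ((1 - p) * y + p * z) / 2

    return R, 1 * first + p * second

print(check_cdp_decomposition(x=4, y=12, z=0, p=2/3))  # the two numbers should agree (up to rounding)
```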

\n

In the example given in Outlawing Anthropics, there are twenty correlated decision makers, all existing with probability 1/2. Two of them contribute towards a decision which has utility -52, hence each generates a utility of -52/2. Eighteen of them contribute towards a decision which has utility 12, hence each one generates a utility of 12/18. Summing this up, the total utility generated is [2*(-52)/2 + 18*(12)/18]/2 = -20, which is correct.

\n

Simulation versus prediction

\n

In the Newcomb problem, there are two correlated decisions: your choice of one- or two-boxing, and Omega's decision on whether to put the money in the box. The return to you for one-boxing in either case is X/2; for two-boxing, the return is 1000/2.

\n

If Omega simulates you, you can be either decision maker, with probability 1/2; if he predicts without simulating, you are certainly the box-chooser. But it makes no difference - who you are is not an issue, you are simply looking at the probability of each decision maker existing, which is 1 in both cases. So adding up the two utilities gives you the correct estimate.

\n

Consequently, predictions and simulations can be treated similarly in this setup.

\n

Partial correlation

\n

If two decisions are partially correlated - say, Newcomb's problem where Omega has a probability p of correctly guessing your decision - then the way of modeling it is to split it into several perfectly correlated pieces.

\n

For instance, the partially correlated Newcomb's problem can be split into one model which is perfectly correlated (with probability p), and one model which is perfectly anti-correlated (with probability (1-p)). The return from two-boxing in the first case is 1000, and X+1000 in the second case. One-boxing gives a return of X in the first case and 0 in the second case. Hence the expected return from one-boxing is pX, and for two-boxing is 1000 + (1-p)X, which are the correct expected values.
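
(Plugging in illustrative numbers, say X = 1,000,000 and a 90%-accurate Omega, to make the split concrete; these values are mine, not the post's.)

```python
def newcomb_partial(X, p):
    # Split a p-accurate Omega into a perfectly correlated piece (probability p)
    # and a perfectly anti-correlated piece (probability 1-p).
    one_box = p * X + (1 - p) * 0
    two_box = p * 1000 + (1 - p) * (X + 1000)
    return one_box, two_box

print(newcomb_partial(X=1_000_000, p=0.9))  # approximately (900000, 101000): one-boxing comes out ahead
```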

\n

Adding positional information

\n

SilasBarta asked whether my old model could deal with the absent-minded driver if there were some positional information. For instance, imagine if there were a light at each crossing that could be red or green, and it was green 1/2 the time at the first crossing and 2/3 of the time at the second crossing. Then if your probability of continuing on a green light was g, and on a red light it was r, your initial expected return is R = (2-r-g)x/2 + (r+g)D/2, where D = ((1-r)y + rz)/3 + 2((1-g)y + gz)/3.

\n

Then if you are at the first intersection, your expected return must be (2-r-g)x/2 + (r+g)D/4 (CDP on y and z, which require 2 decisions), while if you are at the second intersection, your expected return is D/4. The first driver exists with certainty, while the second one exists with probability r+g, giving us the correct return R.
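
(And the same kind of numerical check for the version with lights, using the existence weights exactly as given above; the particular x, y, z, r, g values are arbitrary.)

```python
def check_positional_version(x, y, z, r, g):
    # Verify that the per-driver contributions recover R = (2-r-g)x/2 + (r+g)D/2.
    D = ((1 - r) * y + r * z) / 3 + 2 * ((1 - g) * y + g * z) / 3
    R = (2 - r - g) * x / 2 + (r + g) * D / 2

    first = (2 - r - g) * x / 2 + (r + g) * D / 4  # exists with probability 1
    second = D / 4                                 # weighted by r + g, as stated above

    return R, first + (r + g) * second

print(check_positional_version(x=4, y=12, z=0, r=0.5, g=0.8))  # the two numbers should agree (up to rounding)
```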

" } }, { "_id": "jpPLLyyb5ASeYivsL", "title": "Non-Malthusian Scenarios", "pageUrl": "https://www.lesswrong.com/posts/jpPLLyyb5ASeYivsL/non-malthusian-scenarios", "postedAt": "2009-09-26T02:44:53.571Z", "baseScore": 24, "voteCount": 20, "commentCount": 89, "url": null, "contents": { "documentId": "jpPLLyyb5ASeYivsL", "html": "

This is an attempt to list all of the possible ways in which humanity may avoid scenarios where the average standard of living is close to subsistence, in response to Robin Hanson's recent series of posts on Overcoming Bias, where he argues that such an outcome is likely in the long run.

\n

I'll start with six, some suggested by myself, and others collected from comments on Overcoming Bias and Robin's own posts. If anyone provides additional ideas, I'll add them to the list.

\n

(I have a more general point here, BTW, which is that predicting the far future is very difficult. Before thinking that some outcome is inevitable or highly likely, it's a good idea to repeatedly ask oneself \"These are all the ways I can think of in which it may fail to come true. Am I sure that all of them have low probability and that I'm not missing anything?\" There may be some scenario with a non-negligible probability that your brain simply overlooked when you first asked it.)

\n

Singleton

\n

A world government or superpower imposes a population control policy over the whole world.

\n

Strong Security

\n

Strong defensive technologies and doctrines (such as Mutually Assured Destruction) allow nations, communities, and maybe tribes and families to unilaterally limit their populations within their own borders, while holding off hordes of would-be invaders and immigrants.

\n

Non-Human Capital

\n

Maximizing the wealth and power of a nation requires an optimal mix of human and non-human capital. Nations that fail to adopt population controls find their relative wealth and power fade over time as their mixes deviate from the optimum (i.e., they find themselves spending too much resources on raising humans, and not enough on building machines), and either move to correct this or are taken over by stronger powers. (I believe that historically this was the reason China adopted its one-child policy.)

\n

Unlimited Growth

\n

We don't completely understand the laws of physics, nor the nature of value. There turns out to be some way for economic growth to continue without limit. (Robin himself once wrote \"I know of no law limiting economic value per atom\" but apparently changed his mind later.)

\n

Selfish Memes

\n

Memes that manage to divert people's resources away from biological reproduction and towards memetic reproduction will have an advantage over memes that don't. On the other hand, genes that manage to block such memes will have an advantage over genes that don't. Memes manage to keep the upper hand in this struggle (or periodically regain the upper hand).

\n

Disease, Warfare, Natural Disasters, Aliens, Keeper of the Simulation

\n

One or more of these come along regularly to keep the human population in check and per capita incomes above subsistence.

" } }, { "_id": "2p8BWvcJvKkXGMsch", "title": "Solutions to Political Problems As Counterfactuals", "pageUrl": "https://www.lesswrong.com/posts/2p8BWvcJvKkXGMsch/solutions-to-political-problems-as-counterfactuals", "postedAt": "2009-09-25T17:21:50.038Z", "baseScore": 46, "voteCount": 41, "commentCount": 36, "url": null, "contents": { "documentId": "2p8BWvcJvKkXGMsch", "html": "
\n

A mathematician wakes up to find his house on fire. He frantically looks around before seeing the fire extinguisher on the far wall of the room. \"Aha!\" he says, \"a solution exists!\" and goes back to sleep.

\n

    -- Popular math students' joke

\n
\n

There has been much discussion of coulds, woulds, and shoulds recently. Agents imagine different counterfactual states of their own minds or actions, then select the most desirable. Something similar seems to happen during political discussions, but the multiplicity of agents involved muddles it a little.

I recently read a letter to the editor in my local paper. The city was launching a public education campaign against binge drinking, and this letter writer thought that all the billboards and lectures and what-not were a waste of time. She said that instead of a flashy and expensive public awareness campaign, the real solution was for binge drinkers to take responsibility for their own actions and learn that there were ways to have fun that didn't involve alcohol.

This struck me as a misguided line of thinking. Consider this analogy: pretend that the city government was, instead, increasing the number of police to prevent terrorist attacks. And that the writer was arguing that no, we shouldn't get the police involved: the real solution was for terrorists to stop being so violent and attacking people. This would be a weird and completely useless response.

Attempts to solve political problems are counterfactuals in the same way attempts to solve other problems are. In Newcomb's Problem, I modify the \"my decision\" node and watch what happens to the \"money in the box\" and \"money I get\" nodes. When I say \"Increasing local police would prevent terrorist attacks,\" I am modifying the \"local police\" node, and positing that this would have a certain inhibitory effect on the connected \"terrorist attacks\" node.

The hypothetical second letter-writer's argument, then, is that if we counterfactually modified the \"terrorists' attitude\" node, then the \"terrorist attacks\" node would change. This is correct but useless.

\n

But it's harder to see exactly why it's useless. Consider the original argument \"We should raise the number of police and train them in counter-terrorism techniques.\" In this case, I would be counterfactually modifying the attitudes of (for example) the chief of police. But I'm not the chief of police, any more than I'm Osama bin Laden. If I'm going to let myself modify the chief of police's attitude just because it would be convenient, I might as well let myself turn Osama into a pretty decent guy. Yet \"the chief of police should train more policemen\" sounds like a potential solution, whereas \"terrorists should be nicer\" doesn't.

Here's one possible resolution to the problem: it's much more likely I could convince the chief of police to train more policemen than that I could convince terrorists to be nonviolent. Since the police chief shares my goal of stopping terrorists, all I need to do is tell them why training more police would accomplish this goal, and the problem is solved. So the reason why \"train more police\" counts as a solution is that, as soon as the statement and the evidence supporting the statement reaches the right person, the problem will be solved.

In this case, changing the \"chief of police\" node is really a proxy for changing the \"my actions\" node. The solution could be rephrased as \"You or I could go to the chief of police and suggest they add more policemen, thus preventing further terrorist attacks.\" Counterfactually changing your own actions is entirely kosher. This also throws into greater relief the problems with \"You or I could find and approach all terrorists and convince them to be nicer,\" or \"You or I could go to every single binge drinker in the city and convince them to be more responsible.\"1

Actually, when phrased like that, the binge drinking example doesn't sound so bad. Add a comprehensive plan for doing it, enough funding to reach them all, and some idea of how you're going to phrase the \"be more responsible\" point, and it sounds like, well, a grassroots public awareness campaign. Which is kind of ironic, seeing as the letter started out as an argument against a public awareness campaign, and maybe a sign that I'm taking the Principle of Charity too far here.

\n

...in more realistic situations

It's more complex when there are only small probabilities of your own actions having any effect, but the principle stays the same. For example, I recently heard a doctor say that a single-payer system would best solve the US' health care woes, but since that was politically infeasible he was backing Obama's plan. This one doctor's support will have minimal effect on the chances of Obama's plan passing, but it will have even less of an effect on the chances of single-payer passing. If the expected utilities multiply out in such a way as to make supporting Obama more likely to gain more utility than supporting single-payer, the doctor is justified in his strategy of support for Obama's plan.

One more example from real history I learned recently. Suppose you are a Communist, and your fellow Reds are proposing ways to create a socialist paradise. One says that you must incite the workers to violent revolution. Another says you must petition the current government to support labor reform laws. A third says you must petition the current government to oppose labor reform laws.

Before you expel the third communist from the Party, let her make her argument. She says that the Party doesn't have enough resources to incite violent revolution, and the workers don't want to revolt anyway. Counterfactually modifying the \"workers' actions\" node to a revolutionary state is a waste of time, because there's no link between any modification of your own actions and that node reaching the state you want. Likewise, modifying \"government policy\" is useless, because the Communists don't have any clout in the government, so even if you found a wonderful value for that node that would make all workers happy forever, you couldn't change it.

Instead, she says, oppose labor reform laws. These are already unpopular, and even a small party like the Communists would probably have enough power to get them shot down. When there's no labor reform, workers will get angrier and angrier, until they gradually revolt and overthrow the system, getting you what you wanted in the first place.

There were communists in the early 1900s who actually tried this third approach. It didn't work, but I admire their thought processes. They ignored solutions that would never happen, and found an action they thought they could enact, and that they thought would raise the chance of revolution significantly. Compare this kind of cunning to the vapidity of the letter-writer who says \"Binge drinkers should become more responsible.\"

\n

...as an unrealized ideal

\n

I like this way of viewing the problem, because it explains why a certain class of argument feels wrong: arguments that go \"The solution to binge drinking is more personal responsibility\" or \"The solution to poverty is for the poor to work harder,\" or so on. But do people actually think this way?

The most glaring reason to believe they don't is that most people who \"solve\" societal problems have no interest in actually enacting their solution. The attitude is something like \"Hey, if the federal government passed a single-payer health plan, then all our health troubles would be over!\", and then they don't bother to write a letter to their representative about it or even to convince their next door neighbor.

For reasons that have been discussed ad nauseam on Overcoming Bias, politics is very much a signalling game. In particular, it seems to be a game in which you counterfactually propose different states of the \"government policy\" node and explain why these would have the best effects, and whoever can give the best explanation gets rewarded with higher status. Sometimes you're also allowed to edit the policies of large private organizations, or of influential individuals. In this case, the problem with the original letter writer wasn't just that she had no plan to enact her solution, but that she was breaking the rule which said that you're only allowed to play with relatively unified, powerful organizations, and not things like \"the set of all binge drinkers\".

In a way, this isn't so bad. When enough people play this game, their opinions get out to the voters, consumers, politicians, and business leaders, and eventually do change government and private policy.

The point is that if your goal is to actually personally affect things in a direct, immediate way, you can't just apply the rules of this game without thinking beforehand. Communists, when discussing politics for \"fun\", would never say \"I think the government should oppose labor reforms,\" but that might be the winning move for them when they're actually trying to increase utility. Likewise, libertarians spend a lot of time discussing different ways the government could implement libertarian policy, but when they actually have to take action, the best choice might be seasteads or charter cities or something else that doesn't involve policy at all.

And if you are content to just play the game, at least keep it interesting. No fair counterfactually editing things like \"terrorists' behavior\" or \"poor people's work ethic\" or \"how responsible binge drinkers are\".

\n

Footnotes

\n

1: Or, here's an alternate interpretation of \"Binge drinkers should be more responsible\": it's not worth trying to prevent binge drinking, because binge drinkers could prevent it themselves if they were more responsible, so it's their own fault. This is not illogical, but applying the argument to a case like \"Drunk drivers should be more responsible\" would be. There, even if we have no sympathy for drunk drivers, we still need to prevent drunk driving because many of the victims are innocents. The other issue is that people process the two statements \"The solution to binge drinking is for binge drinkers to be more responsible\" and \"We don't need to solve binge drinking; it's the drinkers' own fault and we need not care\" differently; the first sounds wise and reasonable, the second callous. For both these reasons, I don't think this interpretation is entirely what the original  letter writer, or other people who use this sort  of argument, are thinking of.

" } }, { "_id": "9ige644KxQ2oiBuKy", "title": "The utility curve of the human population", "pageUrl": "https://www.lesswrong.com/posts/9ige644KxQ2oiBuKy/the-utility-curve-of-the-human-population", "postedAt": "2009-09-24T21:00:41.266Z", "baseScore": 9, "voteCount": 11, "commentCount": 34, "url": null, "contents": { "documentId": "9ige644KxQ2oiBuKy", "html": "
\n

\"Whoever saves a single life, it is as if he had saved the whole world.\"

\n

  —The Talmud, Sanhedrin 4:5

\n
\n

That was the epigraph Eliezer used on a perfectly nice post reminding us to shut up and multiply when valuing human lives, rather than relying on the (roughly) logarithmic amount of warm fuzzies we'd receive.  Implicit in the expected utility calculation is the idea that the value of human lives scales linearly: indeed, Eliezer explicitly says, \"I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable.\"

\n

However, in a comment on Wei Dai's brilliant recent post comparing boredom and altruism, Vladimir Nesov points out that \"you can value lives sublinearly\" and still make an expected utility calculation rather than relying on warm-fuzzy intuition.  This got me thinking about just what the functional form of U(N), the utility of N living persons, might be.

\n

Attacking from the high end (the \"marginal\" calculation), it seems to me that the utility of human lives is actually superlinear to a modest degree1; that is, U(N+1)-U(N) > U(N)-U(N-1).  As an example, consider a parent and young child.  If you allow one of them to die, not only do you end that life, but you make the other one significantly worse off.  But this generalizes: the marginal person (on average) produces positive net value to society (through being an employee, friend, spouse, etc.) in addition to accruing their own utilons, and economies of scale dictate that adding another person allows a little more specialization and hence a little more efficiency.  I.e., the larger the pool of potential co-workers/friends/spouses is, the pickier everyone can be, and the better matches they're likely to end up with.  Steven Landsburg (in Fair Play) uses a version of this argument to conclude that children have positive externalities and therefore people on average have fewer children than would be optimal.

\n

In societies with readily available birth control, that is.  And naturally, in societies which are insufficiently technological for each marginal person to be able to make a contribution, however indirect, to (e.g.) the food output, it's quite easy for the utility of lives to be sublinear, which is the classical Malthusian problem, and still very much with us in the poorer areas of the world. (In fact, I was recently informed by a humor website that the Black Death had some very positive effects for medieval Europe.)

\n

Now let's consider the other end of the problem (the \"inductive\" calculation).  As an example let's assume that humanity has been mostly supplanted by AIs or some alien species.  I would certainly prefer to have at least one human still alive: such a person could represent humanity (and by extension, me), carry on our culture, values and perspective on the universe, and generally push for our agenda.  Adding a second human seems far less important—but still quite important, since social interactions (with other humans) are such a vital component of humanity. So adding a third person would be less important still, and so on. A sublinear utility function.

\n

So are the marginal calculation and the inductive calculation inconsistent?  I don't think so: it's perfectly possible to have a utility function whose first derivative is complex and non-monotonic.  The two calculations are simply presenting two different terms of the function, which are dominant in different regimes.  Moreover the linear approximation is probably good enough for most ordinary circumstances; let's just remember that it is an approximation.
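As a toy illustration of such a function, here is a short Python sketch; the functional form and constants are my own invented example, not anything proposed in this post. It just combines a concave term that dominates at small N with a mildly superlinear term that dominates at large N, so the marginal value of an extra person first falls and then rises.

import math

# Toy utility-of-population curve: a concave (log) term for the low-N "inductive"
# regime plus a mildly superlinear power term for the high-N "marginal" regime.
def U(n: float, a: float = 100.0, b: float = 0.01, k: float = 1.2) -> float:
    return a * math.log(1 + n) + b * n ** k

def marginal(n: int) -> float:
    """Discrete marginal utility of adding one more person to a population of n."""
    return U(n + 1) - U(n)

for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, round(marginal(n), 4))
# The marginal utility shrinks at first (the log term dominates) and then grows
# again once the power term takes over: a non-monotonic first derivative.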

\n

 

\n

1. Note that in these arguments I'm averaging over \"ability to create utility\" (not to mention \"capacity to experience utility\").

" } }, { "_id": "MNEQn2qsxMGAgmQH2", "title": "Minneapolis Meetup, This Saturday (26th) at 3:00 PM, University of Minnesota ", "pageUrl": "https://www.lesswrong.com/posts/MNEQn2qsxMGAgmQH2/minneapolis-meetup-this-saturday-26th-at-3-00-pm-university", "postedAt": "2009-09-24T20:18:58.532Z", "baseScore": 5, "voteCount": 4, "commentCount": 0, "url": null, "contents": { "documentId": "MNEQn2qsxMGAgmQH2", "html": "

The Minneapolis meetup will take place this Saturday the 26th at 3:00 PM in Coffman Memorial Union at the University of Minnesota. Directions and address can be found here, and we'll be meeting inside near Starbucks, next to the northwest entrance to the bookstore, and underneath the northwest entrance to the Union. That entrance is a separate glass-sided structure right next to Washington Avenue, not the multistory brick building, but you can get to Starbucks from either entrance; it's on level 2.

RSVP and we'll keep an extra eye out for you.

" } }, { "_id": "6S4Lf2tCMWAfbGtdt", "title": "Boredom vs. Scope Insensitivity", "pageUrl": "https://www.lesswrong.com/posts/6S4Lf2tCMWAfbGtdt/boredom-vs-scope-insensitivity", "postedAt": "2009-09-24T11:45:54.121Z", "baseScore": 78, "voteCount": 59, "commentCount": 40, "url": null, "contents": { "documentId": "6S4Lf2tCMWAfbGtdt", "html": "

How much would you pay to see a typical movie? How much would you pay to see it 100 times?

\n

How much would you pay to save a random stranger’s life? How much would you pay to save 100 strangers?

\n

If you are like a typical human being, your answers to both sets of questions probably exhibit failures to aggregate value linearly. In the first case, we call it boredom. In the second case, we call it scope insensitivity.

\n

Eliezer has argued on separate occasions that one should be regarded as an obvious error to be corrected, and the other as a gift bestowed by evolution, to be treasured and safeguarded. Here, I propose to consider them side by side, and see what we can learn by doing that.

\n

(Eliezer sometimes treats scope insensitivity as a simple arithmetical error that the brain commits, like in this quote: “the brain can't successfully multiply by eight and get a larger quantity than it started with”. Considering that the brain has little trouble multiplying by eight in other contexts and the fact that scope insensitivity starts with numbers as low as 2, it seems more likely that it’s not an error but an adaptation, just like boredom.)

\n

The nonlinearities in boredom and scope insensitivity both occur at two different levels. On the affective or hedonic level, our emotions fail to respond in a linear fashion to the relevant input. Watching a movie twice doesn’t give us twice the pleasure of watching it once, nor does saving two lives feel twice as good as saving one life. And on the level of decision making and revealed preferences, we fail to act as if our utilities scale linearly with the number of times we watch a movie, or the number of lives we save.

\n

Note that these two types of nonlinearities are logically distinct, and it seems quite possible to have one without the other. The refrain “shut up and multiply” is an illustration of this. It exhorts (or reminds) us to value lives directly and linearly in our utility functions and decisions, instead of only valuing the sublinear emotions we get from saving lives.

\n

We sometimes feel bad that we aren’t sufficiently empathetic. Similarly, we feel bad about some of our boredoms. For example, consider a music lover who regrets no longer being as deeply affected by his favorite piece of music as when he first heard it, or a wife who wishes she was still as deeply in love with her husband as she once was. If they had the opportunity, they may very well choose to edit those boredoms away.

\n

Self-modification is dangerous, and the bad feelings we sometimes have about the way we feel were never meant to be used directly as a guide to change the wetware behind those feelings. If we choose to edit some of our boredoms away, while leaving others intact, we may find ourselves doing the one thing that we’re not bored with, over and over again. Similarly, if we choose to edit our scope insensitivity away completely, we may find ourselves sacrificing all of our other values to help random strangers, who in turn care little about ourselves or our values. I bet that in the end, if we reach reflective equilibrium after careful consideration, we’ll decide to reduce some of our boredoms, but not eliminate them completely, and become more empathetic, but not to the extent of full linearity.

\n

But that’s a problem for a later time. What should we do today, when we can’t change the way our emotions work to a large extent? Well, first, nobody argues for “shut up and multiply” in the case of boredom. It’s clearly absurd to watch a movie 100 times, as if you’re not bored with it, when you actually are. We simply don’t value the experience of watching a movie apart from whatever positive emotions it gives us.

\n

Do we value saving lives independently of the good feelings we get from it? Some people seem to (or claim to), while others don’t (or claim not to). For those who do, some value (or claim to value) the lives saved linearly, and others don’t. So the analogy between boredom and scope insensitivity starts to break down here. But perhaps we can still make some final use out of it: whatever arguments we have to support the position that lives saved ought to be valued apart from our feelings, and linearly, we better make sure those arguments do not apply equally well to the case of boredom.

\n

Here’s an example of what I mean. Consider the question of why we should consider the lives of random strangers to be valuable. You may be tempted to answer that we know those lives are valuable because we feel good when we consider the possibility of saving a stranger’s life. But we also feel good when we watch a well-made movie, and we don’t consider the watching of a movie to be valuable apart from that good feeling. This suggests that the answer is not a very good one.

\n

Appendix: Altruism vs. Cooperation

\n

This may be a good time to point out/clarify that I consider cooperation, but not altruism, to be a core element of rationality. By “cooperation” I mean techniques that can be used by groups of individuals with disparate values to better approximate the ideals of group rationality (such as Pareto optimality). According to Eliezer,

\n
\n

\"altruist\" is someone who chooses between actions according to the criterion of others' welfare

\n
\n

In cooperation, we often take others' welfare into account when choosing between actions, but this \"altruism\" is conditional on others reciprocating and taking our welfare into account in return. I expect that an \"altruist\", as Eliezer and others here use the term, must consider others' welfare to be a terminal value, not just an instrumental one, and therefore cooperation and true altruism are non-overlapping concepts. (Please correct me if I'm wrong about this.)

" } }, { "_id": "Ab6gFDx4LxxySY3E8", "title": "Anthropic reasoning and correlated decision making", "pageUrl": "https://www.lesswrong.com/posts/Ab6gFDx4LxxySY3E8/anthropic-reasoning-and-correlated-decision-making", "postedAt": "2009-09-23T15:10:41.114Z", "baseScore": 9, "voteCount": 11, "commentCount": 14, "url": null, "contents": { "documentId": "Ab6gFDx4LxxySY3E8", "html": "

There seems to be some confusion on how to deal with correlated decision making - such as with absent-minded drivers and multiple copies of yourself; any situation in which many agents will all reach the same decision. Building on Nick Bostrom's division-of-responsibility principle mentioned in Outlawing Anthropics, I propose the following correlated decision principle:

CDP: If you are part of a group of N individuals whose decision is perfectly correlated, then you should reason as if you had a 1/N chance of being the dictator of the group (in which case your decision is applied to all) and a (N-1)/N chance of being a dictatee (in which case your decision is ignored).

What justification could there be to this principle? A simple thought experiment: imagine if you were one of N individuals who had to make a decision in secret. One of the decisions is opened at random, the others are discarded, and each person has his mind modified to believe that what was decided was in fact what they decided. This process is called a \"dictator filter\".

If you apply this dictator filter to any situation S, then in \"S + dictator filter\", you should reason as in the CDP. If you apply it to perfectly correlated decision making, however, then the dictator filter changes nothing at all about anyone's decision - hence we should treat \"perfectly correlated\" as isomorphic to \"perfectly correlated + dictator filter\", which establishes the CDP.

Used alongside the SIA, this solves many puzzles on this blog, without needing advanced decision theory.

For instance, the situation in Outlawing Anthropics is simple: the SIA implies the 90% view, giving you a 90% chance of being in a group of 18, and a 10% chance of being in a group of two. Then you were offered a deal in which $3 is stolen from the red rooms, and $1 given to the green rooms. The initial expected gain from accepting the deal was -$20; the problem was that when you woke up in a green room, you were far more likely to be in the group of 18, giving an expected gain of +$5.60. The CDP cancels out this effect, returning you to an expected individual gain of -$2, and a global expected gain of -$20.
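A minimal Python check of those three figures, assuming the standard version of that setup (a fair coin; 18 green rooms and 2 red if heads, 2 green and 18 red if tails; the deal pays +$1 per green room and -$3 per red):

P_HEADS = 0.5
PAYOFF = {"heads": 18 * 1 + 2 * (-3),   # +$12 in total if the coin came up heads
          "tails": 2 * 1 + 18 * (-3)}   # -$52 in total if it came up tails

# 1. Prior (global) expected gain of accepting the deal.
prior_ev = P_HEADS * PAYOFF["heads"] + (1 - P_HEADS) * PAYOFF["tails"]

# 2. Naive update after waking in a green room: SIA gives P(heads) = 0.9.
p_heads = 0.9
naive_ev = p_heads * PAYOFF["heads"] + (1 - p_heads) * PAYOFF["tails"]

# 3. CDP correction: with probability 1/N you are the dictator whose choice
#    binds the whole green-room group (N = 18 if heads, N = 2 if tails);
#    otherwise your choice changes nothing.
cdp_ev = p_heads * (1 / 18) * PAYOFF["heads"] + (1 - p_heads) * (1 / 2) * PAYOFF["tails"]

print(prior_ev, naive_ev, cdp_ev)   # -20.0, about 5.6, about -2.0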

The Absent-Minded driver problem is even more interesting, and requires a more subtle reasoning. The SIA implies that if your probability of continuing is p, then the chance that you are at the first intersection is 1/(1+p), while the chance that you are at the second is p/(1+p). Using these numbers, it appears that your expected gain is [p² + 4(1-p)p + p(p + 4(1-p))]/(1+p), which is 2[p² + 4(1-p)p]/(1+p).

If you were the dictator, deciding the behaviour at both intersections, your expected gain would be 1+p times this amount, since the driver at the first intersection exists with probability 1, while that at the second exists with probability p. Since there are N=2 individuals, the CDP thus cancels both the 2 and the (1+p) factors, returning the situation to the expected gain of p² + 4(1-p)p, maximised at p = 2/3.
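Here is a small numerical sketch, in Python, of that cancellation (my own check of the expressions quoted above, not part of the original post):

def per_copy_gain(p: float) -> float:
    """SIA-weighted expected gain per copy of the driver, as derived above."""
    return 2 * (p ** 2 + 4 * (1 - p) * p) / (1 + p)

def planning_gain(p: float) -> float:
    """Expected gain of committing to continue with probability p."""
    return p ** 2 + 4 * (1 - p) * p

# The CDP correction: a 1/2 chance of being the dictator, whose gain is (1+p)
# times the per-copy figure. This maps the SIA figure back onto the planning one.
for p in (0.0, 0.25, 0.5, 2 / 3, 0.75, 1.0):
    assert abs(0.5 * (1 + p) * per_copy_gain(p) - planning_gain(p)) < 1e-12

best_p = max(range(1001), key=lambda i: planning_gain(i / 1000)) / 1000
print(best_p, planning_gain(best_p))   # about 0.667 and 4/3, i.e. the maximum at p = 2/3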

The CDP also solves the issues in my old Sleeping Beauty problem.

" } }, { "_id": "5A9x74mgCwJwSg4sN", "title": "Avoiding doomsday: a \"proof\" of the self-indication assumption", "pageUrl": "https://www.lesswrong.com/posts/5A9x74mgCwJwSg4sN/avoiding-doomsday-a-proof-of-the-self-indication-assumption", "postedAt": "2009-09-23T14:54:32.288Z", "baseScore": 26, "voteCount": 24, "commentCount": 238, "url": null, "contents": { "documentId": "5A9x74mgCwJwSg4sN", "html": "

EDIT: This post has been superseded by this one.

\n

The doomsday argument, in its simplest form, claims that since 2/3 of all humans will be in the final 2/3 of all humans, we should conclude it is more likely we are in the final two thirds of all humans who’ve ever lived, than in the first third. In our current state of quasi-exponential population growth, this would mean that we are likely very close to the final end of humanity. The argument gets somewhat more sophisticated than that, but that's it in a nutshell.

There are many immediate rebuttals that spring to mind - there is something about the doomsday argument that brings out the certainty in most people that it must be wrong. But nearly all those supposed rebuttals are erroneous (see Nick Bostrom's book Anthropic Bias: Observation Selection Effects in Science and Philosophy). Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self indication assumption (SIA).

The non-intuitive form of SIA simply says that since you exist, it is more likely that your universe contains many observers, rather than few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Even in that form, it may seem counter-intuitive; but I came up with a series of small steps leading from a generally accepted result straight to the SIA. This clinched the argument for me. The starting point is:

\n

A - A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

\n

Here, the probability is certainly 99%. But now consider the situation:

B - same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

There should be no difference from A; since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, your probability estimate should not change when you update on the announcement. The further modifications are then:
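A quick Bayesian sanity check of B, in Python (just restating the fifty-fifty survival argument as arithmetic):

# 99 blue doors, 1 red; a fair coin decides whether the red-door occupant or
# all the blue-door occupants are killed. Given that you survived, P(blue)?
p_blue, p_red = 0.99, 0.01
p_survive_given_blue = 0.5
p_survive_given_red = 0.5

p_survive = p_blue * p_survive_given_blue + p_red * p_survive_given_red
print(p_blue * p_survive_given_blue / p_survive)   # 0.99 -- unchanged from A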

C - same as B, except the coin is flipped before you are created (the killing still happens later).

D - same as C, except that you are only made aware of the rules of the set-up after the people to be killed have already been killed.

E - same as C, except the people to be killed are killed before awakening.

F - same as C, except the people to be killed are simply not created in the first place.

I see no justification for changing your odds as you move from A to F; but 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.

If you can't see any flaw in the chain either, then you can rest easy, knowing the human race is no more likely to vanish than objective factors indicate (ok, maybe you won't rest that easy, in fact...)

(Apologies if this post is preaching to the choir of flogged dead horses along well beaten tracks: I was unable to keep up with Less Wrong these past few months, so may be going over points already dealt with!)

\n

 

\n

EDIT: Corrected the language in the presentation of the SIA, after SilasBarta's comments.

\n

EDIT2: There are some objections to the transfer from D to C. Thus I suggest sliding in C' and C'' between them; C' is the same as D, except those due to die have the situation explained to them before being killed; C'' is the same as C' except those due to die are told \"you will be killed\" before having the situation explained to them (and then being killed).

" } }, { "_id": "Gb4bcSS5XAxFjvCA6", "title": "‘Clearly’ covers murky thought", "pageUrl": "https://www.lesswrong.com/posts/Gb4bcSS5XAxFjvCA6/clearly-covers-murky-thought", "postedAt": "2009-09-22T18:26:26.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "Gb4bcSS5XAxFjvCA6", "html": "

Why do we point out that statements we are making are obvious? If a statement is actually obvious, there should rarely be reason to point the statement out, let alone to point out that it is obvious. Its obviousness should be obvious. It seems that a person often emphasizes that a statement is obvious when they would prefer not to be required to defend it. Sometimes this is just because the statement is obvious once you know their field but takes a lot of effort to explain to someone who doesn't, but often it's just that the explanation is not obvious to them.

\n

But saying ‘obviously’ is too obvious. A better word is ‘clearly’. ‘Clearly’ sounds transparent and innocent. In reality it is a more subtle version of ‘obviously’.

\n

I have noticed this technique used well in published philosophy from time to time. If getting to your conclusion is going to require assuming your conclusion is true, ‘clearly’ suggests to the reader that they not think over that step too closely.

\n

For instance, Michael Huemer in Ethical Intuitionism, while arguing that moral subjectivism is wrong for the purpose of demonstrating that ethical intuitionism is right, writes:

\n

Traditionally, cultural relativists have been charged with endorsing such statements as,

\n

If society were to approve of eating children, then eating children would be good.

\n

which is clearly false.

\n

Notice that ‘false’ seemingly means that it is false according to his intuition; the thing which he is trying to argue for the reliability of. If he just said ‘which is false’, the reader may wonder where, in a book on establishing a basis for ethical truth, this source of falsity may have popped from. ‘Clearly’ says that they needn’t worry about it.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "LCnryDjnrM7Kcikx7", "title": "Ethics as a black box function", "pageUrl": "https://www.lesswrong.com/posts/LCnryDjnrM7Kcikx7/ethics-as-a-black-box-function", "postedAt": "2009-09-22T17:25:47.094Z", "baseScore": 14, "voteCount": 20, "commentCount": 32, "url": null, "contents": { "documentId": "LCnryDjnrM7Kcikx7", "html": "

(Edited to add: See also this addendum.)

\r\n

I commented on Facebook that I think our ethics is three-tiered. There are the things we imagine we consider right, the things we consider right, and the things we actually do. I was then asked to elaborate on the difference between the first two.

For the first one, I was primarily thinking about people following any idealized, formal ethical theories. People considering themselves act utilitarians, for instance. Yet when presented with real-life situations, they may often reply that the right course of action is different than what the purely act utilitarian framework would imply, taking into account things such as keeping promises and so on. Of course, a rule utilitarian would avoid that particular trap, but in general nobody is a pure follower of any formal ethical theory.

Now, people who don't even try to follow any formal ethical systems probably have a closer match between their first and second categories. But I recently came to view our moral intuitions as a function that takes the circumstances of the situation as an input and gives a moral judgement as an output. We do not have access to the inner workings of that function, though we can and do try to build models that attempt to capture its inner workings. Still, as our understanding of the function is incomplete, our models are bound to sometimes produce mistaken predictions.

\r\n

Based on our model, we imagine (if not thinking about the situations too much) that in certain kinds of situations we would arrive at a specific judgement, but a closer examination of them reveals that the function outputs the opposite value. For instance, we might think that maximizing total welfare is always for the best, but then realize that we don't actually want to maximize total welfare if the people we consider our friends would be hurt. This might happen even if you weren't explicitly following any formal theory of ethics. And if *actually* faced with that situation, we might end up acting selfishly instead.

This implies that people pick the moral frameworks which are best at justifying the ethical intuitions they already had. Of course, we knew that much already (even if we sometimes fail to apply it - I was previously puzzled over why so many smart people reject all forms of utilitarianism, as ultimately everyone has to perform some sort of expected utility calculations in order to make moral decisions at all, but then realized it had little to do with utilitarianism's merits as such). Some of us attempt to reprogram our moral intuitions, by taking those models and following them even when they fail to predict the correct response of the moral function. With enough practice, our intuitions may be shifted towards the consciously held stance, which may be a good or bad thing.

" } }, { "_id": "d7TxGtBo7pCbw3dpi", "title": "Why do animal lovers want animals to feel pain?", "pageUrl": "https://www.lesswrong.com/posts/d7TxGtBo7pCbw3dpi/why-do-animal-lovers-want-animals-to-feel-pain", "postedAt": "2009-09-21T09:00:21.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "d7TxGtBo7pCbw3dpi", "html": "
\"Behind

Behind the veil of (lots of) ignorance, would you rather squished chickens be painless?

\n

We may soon be able to make pain-free animals, according to New Scientist. The study they reported on finds that people are not enthused by creating such creatures for scientific research, which is interesting. Robin Hanson guessed prior to seeing the article that this was because endorsing pain-free animals would require thinking that farmed animals now were in more pain than wild animals, which people don't think. However it turns out that vegetarians and animal welfare advocates were much more opposed to the idea than others in the study, so another explanation is needed.

\n

Robert Wiblin suggested to me that vegetarians are mostly in favor of animals not being used, as well as not being hurt, so they don’t want to support pain-free use, as that is supporting use. He made this comparison:

\n

Currently children are being sexually abused. The technology now exists to put them under anaesthetic so that they don’t experience the immediate pain of sexual abuse. Should we put children under anaesthetic to sexually abuse them?

\n

A glance at the comments on other sites reporting the possibility of painless meat suggests vegetarians cite this along with a lot of different reasons for disapproval. And sure enough it seems mainly meat eaters who say eliminating pain would make them feel better about eating meat. The reasons vegetarians (and others) give for not liking the idea, or for not being more interested in pain-free meat, include:

\n\n

Many reasonable reasons. The fascinating thing though is that vegetarians seem to consistently oppose the idea, yet not share reasons. Three (not mutually exclusive) explanations:

\n
    \n
  1. Vegetarians care more about animals in general, so care about lots of related concerns.
  2. Once you have an opinion, you collect a multitude of reasons to have it. When I was a vegetarian I thought meat eating was bad for the environment, bad for people who need food, bad for me, maybe even bad for animals. This means when a group of people lose one reason to hold a shared belief they all have other reasons to put forward, but not necessarily the same ones.
  3. There's some single reason vegetarians are especially motivated to oppose pain-free meat, so they each look for a reason to oppose it, and come across different ones, as there are many.
\n

I’m interested by 3 because the situation reminds me of a pattern in similar cases I have noticed before. It goes like this. Some people make personal sacrifices, supposedly toward solving problems that don’t threaten them personally. They sort recycling, buy free range eggs, buy fair trade, campaign for wealth redistribution etc. Their actions are seen as virtuous. They see those who don’t join them as uncaring and immoral. A more efficient solution to the problem is suggested. It does not require personal sacrifice. People who have not previously sacrificed support it. Those who have previously sacrificed object on grounds that it is an excuse for people to get out of making the sacrifice.

\n

The supposed instrumental action, as the visible sign of caring, has become virtuous in its own right. Solving the problem effectively is an attack on the moral people – an attempt to undermine their dream of a future where everybody longs to be very informed on the social and environmental effects of their consumption choices or to sort their recycling really well. Some examples of this sentiment:

\n\n

In these cases, having solved a problem a better way should mean that efforts to solve it via personal sacrifice can be lessened. This would be a good thing if we wanted to solve the problem, and didn’t want to sacrifice. We would rejoice at progress allowing ever more ignorance and laziness on a given issue. But often we instead regret the end of an opportunity to show compassion and commitment. Especially when we were the compassionate, committed ones.

\n

Is vegetarian opposition to preventing animal pain an example of this kind of motivation? Vegetarianism is a big personal effort, a moral issue, a cause of feelings of moral superiority, and a feature of identity which binds people together. It looks like other issues where people readily claim fear of an end to virtuous efforts.  How should we distinguish between this and the other explanations?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "XSqYe5Rsqq4TR7ryL", "title": "The Finale of the Ultimate Meta Mega Crossover", "pageUrl": "https://www.lesswrong.com/posts/XSqYe5Rsqq4TR7ryL/the-finale-of-the-ultimate-meta-mega-crossover", "postedAt": "2009-09-20T17:21:08.994Z", "baseScore": 53, "voteCount": 49, "commentCount": 107, "url": null, "contents": { "documentId": "XSqYe5Rsqq4TR7ryL", "html": "

So I'd intended this story as a bit of utterly deranged fun, but it got out of control and ended up as a deep philosophical exploration, and now those of you who care will have to wade through the insanity.  I'm sorry.  I just can't seem to help myself.

\n

I know that writing crossover fanfiction is considered one of the lower levels to which an author can sink.  Alas, I've always been a sucker for audacity, and I am the sort of person who couldn't resist trying to top the entire... but never mind, you can see for yourself.

\n

Click on to read my latest story and first fanfiction, a Vernor Vinge x Greg Egan crackfic.

" } }, { "_id": "M4urAEehmHzMMpDCF", "title": "How to use SMILE to solve Bayes Nets", "pageUrl": "https://www.lesswrong.com/posts/M4urAEehmHzMMpDCF/how-to-use-smile-to-solve-bayes-nets", "postedAt": "2009-09-20T12:08:10.702Z", "baseScore": 16, "voteCount": 15, "commentCount": 3, "url": null, "contents": { "documentId": "M4urAEehmHzMMpDCF", "html": "

This is an account of downloading and using SMILE, a free-as-in-beer-but-not-open-source bayes net library. SMILE powers GENIE, a graphical bayes net tool. SMILE can do a lot of things, but I only used the simplest features - building a network and, given evidence, inferring probability distributions on the unobserved features.

\n

\n

After registering, you reach a download page containing binary distributions for various operating systems and architectures, including a .tar.gz intended for 32-bit Linux, which is what I picked. Possibly someone else can describe the experience of using it from Windows. The tarball turned out to contain about 50 .h files (C++ header files) and two .a files (statically linked libraries). I also downloaded the \"smilehelp.zip\", which turned out to contain a .chm (compiled hypertext help). I didn't have a viewer at the time, but gnochm worked out of the box. Most of the smilehelp text was only moderately helpful, but the appendices with tutorial code were very helpful.

\n

As I understand it, there are a lot of different things that one could be interested in doing (influence diagrams, learning parameters, learning structure). I did the very simplest thing, building a net and performing inference on it. This is basically \"tutorial 2\" from the reference manual. (Note: The tutorial 2 code is not correct, but the appendix tutorial 2 code is).

\n

The experience of using the library was not magically smooth, but then, the number of large-ish old-ish libraries which are magically smooth to use can probably be counted on the fingers of one foot. It might be obvious to some, but the friction impressed on me that even if some people are getting solid results that they trust from SMILE, I need to verify my use of it independently. Even straightforward uses of a trusted library could be buggy or (less likely) an unusual usage could reveal bugs internal to the library which existing users never encounter.

\n

The two annoyances that I encountered were Law of Demeter ugliness and the flat conditional probability tables. The Law of Demeter is the principle that interfaces should not reveal their implementations, or alternatively, clambering through object graphs is a bad idea. It is a version of the general programmer injunction: \"Decouple!\". Of course, once an interface is published, it is very hard to break backwards compatibility and revise away from it - there are almost certainly best-option-at-the-time historical reasons for the SMILE interface to be the way it is.

\n

In order to understand the issue with the conditional probability tables, consider writing a data structure to support interactive editing of a Bayes Net. The graph-editing parts are relatively standard and straightforward, but each node has a table of probabilities associated with it, and the table's dimensionality (number of dimensions, as well as size of dimensions) depends on the node's parents, and they might change as the graph is edited. This sounds somewhat tricky.

The data structure that SMILE seems to have picked is a flat sequence of probabilities. The flat sequence is interpreted as a grid through the standard mixed-radix arithmetic. The first parent is the most-significant dimension, then the next parent, and so on, and the node itself is the least-significant. So if a node X has two parents, Y and Z, in that order, and all three of them have two outcomes, true and false, in that order, then the probabilities go like this:

\n
P(X|Y&Z), P(~X|Y&Z), P(X|Y&~Z), P(~X|Y&~Z), P(X|~Y&Z), P(~X|~Y&Z), P(X|~Y&~Z), P(~X|~Y&~Z)
\n

But of course, that's the symbolic form. Inside someone's code (or inside the XML), you might see something like this:

\n
0.8 0.2 0.9 0.1 0 1 1 0
\n

There is an iterator-like class, sysCoords, which encapsulates the mixed-radix logic one uses to navigate the matrix, but I couldn't find a clean symbolic way to access or modify \"P(foo=bar|baz)\". Maybe most of the users learn to read these sequences, or maybe most users learn parameters or use noisy-or models.
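For what it's worth, here is a plain-Python sketch of that mixed-radix bookkeeping (this is not the SMILE API, just the indexing rule described above, applied to the X, Y, Z example):

# Flat CPT layout: parents are the more-significant digits (first parent most
# significant), the node itself is least significant; outcomes are ordered
# (true, false). Values taken from the example table above.
FLAT_CPT = [0.8, 0.2, 0.9, 0.1, 0.0, 1.0, 1.0, 0.0]
OUTCOMES = ("true", "false")
ORDER = ("Y", "Z", "X")   # first parent, second parent, then the node itself

def flat_index(assignment: dict) -> int:
    """Mixed-radix index of P(X=x | Y=y, Z=z) in the flat sequence."""
    index = 0
    for var in ORDER:
        index = index * len(OUTCOMES) + OUTCOMES.index(assignment[var])
    return index

i = flat_index({"Y": "true", "Z": "false", "X": "false"})
print(i, FLAT_CPT[i])   # 3 0.1 -- the entry for P(~X | Y & ~Z)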

\n

\"AThe net that I built is quite simple - an absent-minded driver picks one of three strategies at random - always take the exit, always continue, or flip a fair coin. After that, the driver tries to get home in the usual way, following that strategy. To build a net with SMILE it is probably helpful to be very clear on the network structure, and in particular a standard linear order (topological sort) of the nodes in your net.

\n

Very probably, it is almost never appropriate to build a net like this inside an executable. SMILE makes parsing a net very easy, either from XDSL format, or any of several competing formats. Building a net as an XML document would be easier to write, easier to read, and more portable. Note: The smilehelp document claims that to use the XML format, you need a separate library, but this is no longer correct. Another small quirk, which took some debugging: strings that are names of things should not contain spaces.

\n

The best part of the SMILE interface is the inference step - one method call on the network: \"UpdateBeliefs()\". There is a standard \"Lauritzen\" algorithm for computing beliefs. Setting evidence requires the usual (LoD violating) graph navigation (from the network to the node to the node's value), but it seemed relatively straightforward. Of course, I may have grown accustomed to the interface.
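To show what a single belief update amounts to on a net this small, here is a brute-force stand-in in Python. This is not SMILE code, and the outcome states are my own simplification of the driver's trip, not the author's actual network; it just enumerates the joint distribution, which is what any exact inference ultimately computes.

P_STRATEGY = {"always_exit": 1 / 3, "always_continue": 1 / 3, "flip_coin": 1 / 3}

def p_outcome_given_strategy(outcome: str, strategy: str) -> float:
    p_continue = {"always_exit": 0.0, "always_continue": 1.0, "flip_coin": 0.5}[strategy]
    return {
        "exit_first": 1 - p_continue,             # turned off at the first intersection
        "home": p_continue * (1 - p_continue),    # continued once, then exited
        "drove_past": p_continue * p_continue,    # continued at both intersections
    }[outcome]

def posterior_over_strategies(observed_outcome: str) -> dict:
    """P(strategy | observed outcome), by enumerating the joint distribution."""
    joint = {s: P_STRATEGY[s] * p_outcome_given_strategy(observed_outcome, s)
             for s in P_STRATEGY}
    total = sum(joint.values())
    return {s: round(p / total, 4) for s, p in joint.items()}

print(posterior_over_strategies("exit_first"))
# {'always_exit': 0.6667, 'always_continue': 0.0, 'flip_coin': 0.3333}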

\n

Once you get comfortable with the flat conditional tables, it seems very feasible to use SMILE as a bayes-net calculator. I look forward to the day when people discussing probability on LessWrong routinely exchange bayes nets, either in XDSL or some other standard format.

\n

If you're interested in reading my code, it's here. Since the SMILE authors want people to register before downloading the library (I presume so they can use the numbers to justify their research to grant-giving institutions), I didn't include the library in my tarball, so you'll have to download it separately.

" } }, { "_id": "aHaqgTNnFzD7NGLMx", "title": "Reason as memetic immune disorder", "pageUrl": "https://www.lesswrong.com/posts/aHaqgTNnFzD7NGLMx/reason-as-memetic-immune-disorder", "postedAt": "2009-09-19T21:05:07.256Z", "baseScore": 533, "voteCount": 412, "commentCount": 184, "url": null, "contents": { "documentId": "aHaqgTNnFzD7NGLMx", "html": "

A prophet is without dishonor in his hometown

\n

I'm reading the book \"The Year of Living Biblically,\" by A.J. Jacobs.  He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year.  He quickly found that

\n\n

You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion.  People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them.  Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore.  New converts sometimes try to actually do what their religion tells them to do.

\n

I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time - they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases.  We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying but they're just believing that the Bible means what it says...

\n

How do we explain the blindness of people to a religion they grew up with?

\n

Cultural immunity

\n

Europe has lived with Christianity for nearly 2000 years.  European culture has co-evolved with Christianity.  Culturally, memetically, it's developed a tolerance for Christianity.  These new Christian converts, in Uganda, Papua New Guinea, and other remote parts of the world, were being exposed to Christian memes for the first time, and had no immunity to them.

\n

The history of religions sometimes resembles the history of viruses.  Judaism and Islam were both highly virulent when they first broke out, driving the first generations of their people to conquer (Islam) or just slaughter (Judaism) everyone around them for the sin of not being them.  They both grew more sedate over time.  (Christianity was pacifist at the start, as it arose in a conquered people.  When the Romans adopted it, it didn't make them any more militaristic than they already were.)

\n

The mechanism isn't the same as for diseases, which can't be too virulent or they kill their hosts.  Religions don't generally kill their hosts.  I suspect that, over time, individual selection favors those who are less zealous.  The point is that a culture develops antibodies for the particular religions it co-exists with - attitudes and practices that make them less virulent.

\n

I have a theory that \"radical Islam\" is not native Islam, but Westernized Islam.  Over half of 75 Muslim terrorists studied by Bergen & Pandey 2005 in the New York Times had gone to a Western college.  (Only 9% had attended madrassas.)  A very small percentage of all Muslims have received a Western college education.   When someone lives all their life in a Muslim country, they're not likely to be hit with the urge to travel abroad and blow something up.  But when someone from an Islamic nation goes to Europe for college, and comes back with Enlightenment ideas about reason and seeking logical closure over beliefs, and applies them to the Koran, then you have troubles.  They have lost their cultural immunity.

\n

I'm also reminded of a talk I attended by one of the Dalai Lama's assistants.  This was not slick, Westernized Buddhism; this was saffron-robed fresh-off-the-plane-from-Tibet Buddhism.  He spoke about his beliefs, and then took questions.  People began asking him about some of the implications of his belief that life, love, feelings, and the universe as a whole are inherently bad and undesirable.  He had great difficulty comprehending the questions - not because of his English, I think; but because the notion of taking a belief expressed in one context, and applying it in another, seemed completely new to him.  To him, knowledge came in units; each unit of knowledge was a story with a conclusion and a specific application.  (No wonder they think understanding Buddhism takes decades.)  He seemed not to have the idea that these units could interact; that you could take an idea from one setting, and explore its implications in completely different settings.  This may have been an extreme form of cultural immunity.

\n

We think of Buddhism as a peaceful, caring religion.  A religion that teaches that striving and status are useless is probably going to be more peaceful than one that teaches that the whole world must be brought under its dominion; and religions that lack the power of the state (e.g., the early Christians) are usually gentler than those with the power of life and death.  But much of Buddhism's kind public face may be due to cultural norms that prevent Buddhists from connecting all of their dots.  Today, we worry about Islamic terrorists.  A hundred years from now, we'll worry about Buddhist physicists.

\n

Reason as immune suppression

\n

The reason I bring this up is that intelligent people sometimes do things more stupid than stupid people are capable of.  There are a variety of reasons for this; but one has to do with the fact that all cultures have dangerous memes circulating in them, and cultural antibodies to those memes.  The trouble is that these antibodies are not logical.  On the contrary; these antibodies are often highly illogical.  They are the blind spots that let us live with a dangerous meme without being impelled to action by it.  The dangerous effects of these memes are most obvious with religion; but I think there is an element of this in many social norms.  We have a powerful cultural norm in America that says that all people are equal (whatever that means); originally, this powerful and ambiguous belief was counterbalanced by a set of blind spots so large that this belief did not even impel us to free slaves or let women or non-property-owners vote.  We have another cultural norm that says that hard work reliably and exclusively leads to success; and another set of blind spots that prevent this belief from turning us all into Objectivists.

\n

A little reason can be a dangerous thing.  The landscape of rationality is not smooth; there is no guarantee that removing one false belief will improve your reasoning instead of degrading it.  Sometimes, reason lets us see the dangerous aspects of our memes, but not the blind spots that protect us from them.  Sometimes, it lets us see the blind spots, but not the dangerous memes.  Either of these ways, reason can lead an individual to be unbalanced, no longer adapted to their memetic environment, and free to follow previously-dormant memes through to their logical conclusions.  To paraphrase Steve Weinberg: For a smart person to do something truly stupid, they need a theory.

\n

The vaccines?

\n

How can you tell when you have removed one set of blind spots from your reasoning without removing its counterbalances?  One heuristic to counter this loss of immunity might be to be very careful when you find yourself deviating from everyone around you.  But most people already do this too much.

\n

Another heuristic is to listen to your feelings.  If your conclusions seem repulsive to you, you may have stripped yourself of cognitive immunity to something dangerous.

\n

Perhaps the most-helpful thing isn't to try to prevent memetic immune disorder, but to know that it could happen to you.

" } }, { "_id": "8rhJ8EKQ46HYqECSh", "title": "Hypothetical Paradoxes", "pageUrl": "https://www.lesswrong.com/posts/8rhJ8EKQ46HYqECSh/hypothetical-paradoxes", "postedAt": "2009-09-19T06:28:06.637Z", "baseScore": 12, "voteCount": 28, "commentCount": 34, "url": null, "contents": { "documentId": "8rhJ8EKQ46HYqECSh", "html": "
\n

When we form hypotheticals, they must use entirely consistent and clear language, and avoid hiding complicated operations behind simple assumptions. In particular, with respect to decision theory, hypotheticals must employ a clear and consistent concept of free will, and they must make all information available to the theorizer available to the decider in the question. Failure to do either of these can make a hypothetical meaningless or self-contradictory if properly understood.

\n

Newcomb's problem and the Smoking Lesion fail to do both. I will argue that hidden assumptions in both problems imply internally contradictory concepts of free will, and thus both hypotheticals are incomprehensible and irrelevant when used to contradict decision theories.

\n

And I'll do it without math or programming! Metatheory is fun.

\n

\n

Newcomb's problem, insofar as it is used as a refutation of causal decision theory, relies on convenient ignorance and a paradoxical concept of free will, though it takes some thinking to see why, because the concept of naive free will is such an innate part of human thought. In order for Newcomb's to work, there must exist some thing or set of things (\"A\") that very closely (even perfectly) link \"Omega predicts Y-boxing\" with \"Decider takes Y boxes.\" If there is no A, Omega cannot predict your behaviour. The existence of A is a fact necessary for the hypothetical and the decision maker should be aware of it, even if he doesn't know anything about how A generates a prediction.

\n

Newcomb's problem assumes two contradictory things about A. It assumes that, for the purpose of Causal Decision Theory, A is irrelevant and completely separated from your actual decision process; it assumes you have some kind of free will such that you can decide to two-box without this decision having been reflected in A. It also assumes that, for purposes of the actual outcome, A is quite relevant; if you decided to two-box, your decision will have been reflected in A. This contradiction is the reason the problem seems complicated. If CDT were allowed to consider A, as it should be, it would realize:

\n

(B), \"I might not understand how it works, but my decision is somehow bound to the prediction in such a way that however I decide will have been predicted. Therefore, for all intents and purposes, even though my decision feels free, it is not, and, insofar as it feels free, deciding to one-box will cause that box to be filled, even if I can't begin to comprehend *how*.\"

\n

\"I should one-box\" follows rather clearly from this. If B is false, and your decision is *not* bound to the prediction, then you should two-box. To let the theorizer know that B is true, but to forbid the decider from using such knowledge is what makes Newcomb's being a \"problem.\" Newcomb's assumes that CDT operates with naive free will. It also assumes that naive free will is false and that Omega accurately employs purely deterministic free will. It is this paradox of simultaneously assuming naive free will *and* deterministic will that makes Necomb's problem a problem. CDT does not appear to be bound to assume naive free will, and therefore it seems capable of treating your \"free\" decision as causal, which it seems that it functionally must be.

\n

The Smoking Lesion problem relies on the same trick in reverse. There is, by necessary assumption, some C such that C causes smoking and cancer, but smoking does not actually cause cancer. The decider is utterly forbidden from thinking about what C is and how C might influence the decision under consideration. The *decision to smoke* very, very strongly predicts *being a smoker.*1 Indeed, given that there is no question of being able to afford or find cigarettes, the outcome of the decision to smoke is precisely what C predicts. The desire to smoke is essential to the decision to smoke - under the hypothetical, if there were no desire, the decider would always decide not to smoke; if there is a desire and a low enough risk of cancer, the decider will always decide to smoke. Thus, the desire appears to correspond significantly (perhaps perfectly) with C, but Evidential Decision Theory is arbitrarily prevented from taking this into account. This is despite the fact that C is so well understood that we can say with absolute certainty that the correlation between smoking and cancer is completely explained by it.

\n

The problem forces EDT to assume that C operates deterministically on the decision, and that the decision is naively free. It requires that the decision to smoke both is and is not correlated with the desire to smoke - if it were correlated, EDT would consider this and significantly adjust the odds of getting cancer conditional on deciding to smoke *given* that there is a desire to smoke. Forcing the decider to assume a paradox proves nothing, so TSL fails to refute an evidential decision theory that actually uses all of the evidence given to it.

\n

Both TSL and Newcomb's exploit our intuitive understanding of free will to assume paradoxes, then use these unrecognized paradoxes to undermine a decision strategy. As these problems force the decider to secretly assume a paradox, it is little surprise that they generate convoluted and problematic outputs. This suggests that the problem lies not in these decision theories, but in the challenge of fully and accurately translating our language to our decision maker's decision theory.

\n

Newcomb's, TSL, Counterfactual Mugging, and the Absent-Minded Driver all have another larger, simpler problem, but it is practical rather than conceptual, so I'll address it in a subsequent post.

\n

1 - In the TSL version EY used in the link I provided, C is assumed to be \"a gene that causes a taste for cigarettes.\" Since the decider already *knows* they have a taste for cigarettes, Evidential Decision Theory should take this into account. If it does, it should assume that C is present (or present with high probability), and then the decision to smoke is obvious. Thus, the hypothetical I'm addressing is a more general version of TSL where C is not specified, only the existence of an acausal correlation is assumed.

\n
" } }, { "_id": "NbDFqQ887mgKQACS7", "title": "Minneapolis Meetup: Survey of interest", "pageUrl": "https://www.lesswrong.com/posts/NbDFqQ887mgKQACS7/minneapolis-meetup-survey-of-interest", "postedAt": "2009-09-18T18:52:50.278Z", "baseScore": 8, "voteCount": 8, "commentCount": 8, "url": null, "contents": { "documentId": "NbDFqQ887mgKQACS7", "html": "

Frank Adamek and I are going to host a Less Wrong/Overcoming Bias meetup tentatively on Saturday September 26 at 3pm in Coffman Memorial Union at the University of Minnesota (there is a coffee shop and a food court there). Frank is the president of the University of Minnesota transhumanist group and some of them may be attending also. We'd like to gauge the level of interest so please comment if you'd be likely to attend.

\n

(ps. If you have any time conflicts or would like to suggest a better venue please comment)

" } }, { "_id": "MoFqnLnXDDG8WXMjB", "title": "MWI, weird quantum experiments and future-directed continuity of conscious experience", "pageUrl": "https://www.lesswrong.com/posts/MoFqnLnXDDG8WXMjB/mwi-weird-quantum-experiments-and-future-directed-continuity", "postedAt": "2009-09-18T16:45:47.741Z", "baseScore": 6, "voteCount": 11, "commentCount": 92, "url": null, "contents": { "documentId": "MoFqnLnXDDG8WXMjB", "html": "

Response to: Quantum Russian Roulette

\n

Related: Decision theory: Why we need to reduce “could”, “would”, “should”

\n

In Quantum Russian Roulette, Christian_Szegedy tells of a game which uses a \"quantum source of randomness\" to somehow make terminating the lives of 15 rich people to create one very rich person sound like an attractive proposition. To quote the key deduction:

\n
\n

Then the only result of the game is that the guy who wins will enjoy a much better quality of life. The others die in his Everett branch, but they live on in others. So everybody's only subjective experience will be that he went into a room and woke up $750000 richer.

\n
\n

I think that Christian_Szegedy is mistaken, but in an interesting way. I think that the intuition at stake here is something about continuity of conscious experience. The intuition that Christian might have, if I may anticipate him, is that everyone in the experiment will actually experience getting $750,000, because somehow the worldline of their conscious experience will continue only in the worlds where they do not die. To formalize this, we imagine an arbitrary decision problem as a tree with nodes corresponding to decision points that create duplicate persons, and time increasing from left to right:

\n

\"\"

\n

The skull and crossbones symbols indicate that the person created in the previous decision point is killed. We might even consider putting probabilities on the arcs coming out of a given node to indicate how likely a given outcome is. When we try to assess whether a given decision was a good one, we might want to look at what the utilities on the leaves of the tree are. But what if there is more than one leaf, and the person concerned is me, i.e. the root of the tree corresponds to \"me, now\" and the leaves correspond to \"possible me's in 10 days' time\"? I find myself querying for \"what will I really experience\" when trying to decide which way to steer reality. So I tend to want to mark some nodes in the decision tree as \"really me\" and others as \"zombie-like copies of me that I will not experience being\", resulting in a generic decision tree that looks like this:

\n

\"\" 

\n

I decorated the tree with normal faces and zombie faces consistent with the following rules (a small code sketch of these rules follows the list):

\n
    \n
  1. At a decision node, if the parent is a zombie then child nodes have to be zombies, and 
  2. If a node is a normal face then exactly one of its children must also be a normal face. 
\n\n
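
Since the rules are purely mechanical, they are easy to check by machine. Here is a small illustrative sketch in Python - the tree encoding and the function name are my own inventions, and I treat leaves as exempt from rule 2 (the line of normal faces simply terminates there):

\n

def satisfies_continuity_rules(node):
    # node is a dict: {'kind': 'normal' or 'zombie', 'children': [subnodes]}
    kind, children = node['kind'], node['children']
    # Rule 1: a zombie's children must all be zombies.
    if kind == 'zombie' and any(child['kind'] == 'normal' for child in children):
        return False
    # Rule 2: a normal face with children must have exactly one normal child.
    if kind == 'normal' and children:
        if sum(child['kind'] == 'normal' for child in children) != 1:
            return False
    return all(satisfies_continuity_rules(child) for child in children)

# A two-way split where 'I' continue down exactly one branch:
me = {'kind': 'normal', 'children': [
        {'kind': 'normal', 'children': []},
        {'kind': 'zombie', 'children': []}]}
print(satisfies_continuity_rules(me))   # True - one unbroken line of normal faces

\n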

Let me call these the \"forward continuity of consciousness\" rules. These rules guarantee that there will be an unbroken line of normal faces from the root to a unique leaf. Some faces are happier than others, representing, for example, financial loss or gain, though zombies can never be smiling, since that would be out of character. In the case of a simplified version of Quantum Russian Roulette, where I am the only player and Omega pays the reward iff the quantum die comes up \"6\", we might draw a decision tree like this: 

\n

\"\"

\n

The game looks attractive, since the only way of decorating it that is consistent with the \"forward continuity of consciousness\" rules places the worldline of my conscious experience such that I will experience getting the reward, and the zombie-me's will lose the money, and then get killed. It is a shame that they will die, but it isn't that bad, because they are not me and I do not experience being them; killing a collection of beings who had a brief existence and that are a lot like me is not so great, but dying myself is much worse. 

\n

Our intuitions about forward continuity of our own conscious experience, in particular that at each stage there must be a unique answer to the question \"what will I be experiencing at that point in time?\" are important to us, but I think that they are fundamentally mistaken; in the end, the word \"I\" comes with a semantics that is incompatible with what we know about physics, namely that the process in our brains that generates \"I-ness\" is capable of being duplicated with no difference between the copies. Of course a lot of ink has been spilled over the issue. The MWI of quantum mechanics dictates that I am being copied at a frightening rate, as the quantum system that I label as \"me\" interacts with other systems around it, such as incoming photons. The notion of quantum immortality comes from pushing the \"unique unbroken line of conscious experience\" to its logical conclusion: you will never experience your own death, rather you will experience a string of increasingly unlikely events that seem to be contrived just to keep you alive. 

\n

In the comments for the Quantum Russian Roulette article, Vladimir Nesov says: 

\n
\n

MWI is morally uninteresting, unless you do nontrivial quantum computation. ... when you are saying \"everyone survives in one of the worlds\", this statement gets intuitive approval (as opposed to doing the experiment in a deterministic world where all participants but one \"die completely\"), but there is no term in the expected utility calculation that corresponds to the sentiment \"everyone survives in one of the worlds\"

\n
\n

The sentiment \"I will survive in one of the worlds\" corresponds to my intuition that my own subjective experience continuing, or not continuing, is of the upmost importance. Combine this with the intuition that the \"forward continuity of consciousness\" rules are correct and we get the intuition that in a copying scenario, killing all but one of the copies simply shifts the route that my worldline of conscious experience takes from one copy to another, so that the following tree represents the situation if only two copies of me will be killed:

\n

\"\"

\n

The survival of some extra zombies seems to be of no benefit to me, because I wouldn't have experienced being them anyway. The reason that quantum mechanics and the MWI play a role despite the fact that decision-theoretically the situation looks exactly the same as it would in a classical world - the utility calculations are the same - is that if we draw a tree where only one line of possibility is realized, we might encounter a situation where the \"forward continuity of consciousness\" rules have to be broken - actual death:

\n

\"\"

\n

The interesting question is: why do I have a strong intuition that the \"forward continuity of consciousness\" rules are correct? Why does my existence feel smooth, unlike the topology of a branch point in a graph? 

\n

ADDED:

\n

The problem of how this all relates to sleep, anaesthesia or cryopreservation has come up. When I was anaesthetized, there appeared to be a sharp but instantaneous jump from the anaesthetic room to the recovery room, indicating that our intuition about continuity of conscious experience treats \"go to sleep, wake up some time later\" as being rather like ordinary survival. This is puzzling, since a period of sleep or anaesthesia or even cryopreservation can be arbitrarily long. 

" } }, { "_id": "XH9ZN8bLidtcqMxY2", "title": "Quantum Russian Roulette", "pageUrl": "https://www.lesswrong.com/posts/XH9ZN8bLidtcqMxY2/quantum-russian-roulette", "postedAt": "2009-09-18T08:49:43.865Z", "baseScore": 8, "voteCount": 16, "commentCount": 65, "url": null, "contents": { "documentId": "XH9ZN8bLidtcqMxY2", "html": "

Quantum Russian roulette is a game in which 16 people participate. Each of them is assigned a unique four-digit binary code and deposits $50000. They are put into a deep sleep using some drug. The organizer flips a quantum coin four times. Unlike in Russian roulette, here only the participant whose code was flipped survives. The others are executed in a completely painless manner. The survivor takes all the money.

Let us assume that none of them have families or very good friends. Then the only result of the game is that the guy who wins will enjoy a much better quality of life. The others die in his Everett branch, but they live on in others. So everybody's only subjective experience will be that he went into a room and woke up $750000 richer.

Although this is extremely spooky to our human intuition, there are hardly any trivial objective reasons to oppose this game under the following assumptions:

\n
    \n
  1. Average utilitarianism
  2. Near 100% confidence in the Multiple World nature of our universe
  3. It is possible to kill someone without invoking any negative experiences.
\n

The natural question arises whether it could somehow be checked that the method really works, especially that the Multiple World Hypothesis is correct. At first sight, it looks impossible to convince anybody besides the participant who survived the game.

However there is a way to convince a lot of people in a few Everett branches: You make a one-time big announcement on the Internet, TV etc. and say that there is a well tested quantum coin-flipper, examined by a community consisting of the most honest and trusted members of the society. You take some random 20-bit number and say that you will flip the equipment 20 times and if the outcome is the same as the predetermined number, then you will take it as one-in-a-million evidence that the Multiple World theory works as expected. Of course, only people in the right branch will be convinced. Nevertheless, they could be convinced enough to give serious thought to the viability of quantum Russian roulette type games.
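
\n

(For what it's worth, the \"one-in-a-million\" figure is just the match probability of a pre-registered 20-bit string; a trivial check in Python:)

\n

p_match = 0.5 ** 20
print(2 ** 20, p_match)   # 1,048,576 possible strings, so a match has probability ~9.5e-07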

\n

My question is: What are the possible moral or logical reasons not to play such games? Both from individual or societal standpoints.

\n

[EDIT] A simpler version (single-player version of the experiment): The single player generates lottery numbers by flipping quantum coins. He sets up equipment that kills him in his sleep if the winning numbers don't coincide with his. In this way, he can guarantee waking up as a lottery millionaire.

\n

 

" } }, { "_id": "TCCwcmQKQvb53JWba", "title": "Sociosexual Orientation Inventory, or failing to perform basic sanity checks", "pageUrl": "https://www.lesswrong.com/posts/TCCwcmQKQvb53JWba/sociosexual-orientation-inventory-or-failing-to-perform", "postedAt": "2009-09-16T10:00:06.816Z", "baseScore": 5, "voteCount": 10, "commentCount": 41, "url": null, "contents": { "documentId": "TCCwcmQKQvb53JWba", "html": "

I just did some reading about \"Sociosexual Orientation Inventory\", a simple 7-item test designed to measure one's openness to sex without love and long term commitment.

\n

Here are the questions. How long will it take you to spot the huge problem ahead...

\n
    \n
  1. With how many different partners have you had sex (sexual intercourse) within the last year?
  2. How many different partners do you foresee yourself having sex with during the next five years? (Please give a specific, realistic estimate)
  3. With how many different partners have you had sex on one and only one occasion?
  4. How often do (did) you fantasize about having sex with someone other than your current (most recent) dating partner? (1 never ... 8 at least once a day)
  5. \"Sex without love is OK\" (1 strongly disagree ... 9 strongly agree)
  6. \"I can imagine myself being comfortable and enjoying `casual' sex with different partners\" (1 strongly disagree ... 9 strongly agree)
  7. \"I would have to be closely attached to someone (both emotionally and psychologically) before I could feel comfortable and fully enjoy having sex with him or her\" (1 strongly disagree ... 9 strongly agree)
\n

Score is: 5 x item1 + 1 x item2 (capped at 30) + 5 x item3 + 4 x item4 + 2 x (mean of item5, item6, and reversed item7)
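
\n

For concreteness, here is that scoring rule as a small Python sketch; the function name is my own, the item numbering follows the list above, and I assume the usual 10 - x reversal for item 7's 1-9 scale:

\n

def soi_score(items):
    # items maps item number (1-7, as numbered above) to the raw answer:
    # counts for items 1-3, scale responses for items 4-7.
    item2_capped = min(items[2], 30)          # item 2 is capped at 30
    item7_reversed = 10 - items[7]            # reverse-score the 1-9 scale
    attitude_mean = (items[5] + items[6] + item7_reversed) / 3.0
    return (5 * items[1] + 1 * item2_capped + 5 * items[3]
            + 4 * items[4] + 2 * attitude_mean)

# A hypothetical respondent:
print(soi_score({1: 2, 2: 4, 3: 1, 4: 5, 5: 6, 6: 4, 7: 7}))   # about 47.7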

\n

Do you see the problem already?

\n

Quite predictably, researchers report that men have much higher SOI scores than women in all countries. But the population averages for the first three questions (ignoring non-1:1 gender ratios, differently biased sampling for different genders, different rates of homosexuality between genders, different behaviour of homosexuals of different genders, the 30-partner cap on the second item, differently biased forecasts of the second item, and other small details that won't affect the score much) simply have to be identical for men and women, so the entire difference would have to be explained by items 4 to 7, which have relatively low weights!
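
\n

The reason the averages have to match is just double-entry bookkeeping: every heterosexual partnership gets counted once by a man and once by a woman. A toy simulation in Python (with invented numbers) makes it obvious:

\n

import random

random.seed(0)
n = 10_000                                   # equal numbers of men and women
men, women = [0] * n, [0] * n
for _ in range(30_000):                      # random heterosexual partnerships
    men[random.randrange(n)] += 1            # each partnership is counted once by the man...
    women[random.randrange(n)] += 1          # ...and once by the woman
print(sum(men) / n, sum(women) / n)          # identical means: 3.0 and 3.0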

\n

The differences between men and women can be really extreme for some countries. Ukraine has 50.79±28.92 (mean±sd) for men and 17.36±8.65 for women, which means that either Ukrainian men, or Ukrainian women, or both, are systematically lying when asked about past and future sex partners. In most countries the differences are more moderate, with the total 48-country sample's scores being 46.67±29.68 for men and 27.34±19.55 for women. Latvia leads the way with the smallest difference, and so most likely the greatest honesty, with 49.42±23.61 for men and 41.68±26.68 for women, which can plausibly be explained by mere differences in attitude. (Fake lie detector experiments have shown it's almost exclusively women who are lying when answering questions like these.)

\n

SOI seems to be considered quite useful by psychologists: it correlates with many nice things, not only other questionnaires; country SOI averages also correlate with various demographic, economic, and health scores in a quite systematic way. Still, I cannot read papers about it without asking myself - why didn't they bother to perform this basic sanity check, which would detect a huge number of outright lies in the answers? And more importantly - what proportion of \"serious science\" suffers from problems like that?

\n

References: The 48 country SOI study, fake lie detectors show which gender lies more.

" } }, { "_id": "ntJKwW3noSzefi8p7", "title": "What is the Singularity Summit?", "pageUrl": "https://www.lesswrong.com/posts/ntJKwW3noSzefi8p7/what-is-the-singularity-summit", "postedAt": "2009-09-16T07:18:06.675Z", "baseScore": 14, "voteCount": 19, "commentCount": 17, "url": null, "contents": { "documentId": "ntJKwW3noSzefi8p7", "html": "

As you know, the Singularity Summit 2009 is on the weekend of Oct 3 - Oct 4. What is it, you ask? I'll start from the beginning...

\n
\n

 

\n

An interesting collection of molecules occupied a certain tide pool 3.5 to 4.5 billion years ago, interesting because the molecule collection built copies of itself out of surrounding molecules, and the resulting molecule collections also replicated while accumulating beneficial mutations. Those molecule collections satisfied a high-level functional criterion called \"genetic fitness\", and it happened by pure chance.

\n

If you think about all the possible arrangements of atoms that can occupy a 1-millimeter by 1-millimeter by 1-millimeter cube of space, most of them are going to suck at causing the future universe to contain copies of themselves. Genetic fitness is a vanishingly small target in configuration-space.

And if you studied the universe 5 billion years ago, you would not see a process capable of hitting such a small target. No physical process could create low-entropy collections of atoms satisfying high-level functional criteria. The second law of thermodynamics thus ensured that mice, as well as mousetraps, were physically impossible.

\n

\n

Then a mutating replicator randomly emerged, and suddenly Earth was home to something special: the process of Natural Selection. Natural Selection optimizes for genetic fitness. It squeezes the space of possible futures into a tiny subspace -- the space of universes that contain self-replicators which are very good at self-replicating. And it remained a flickering candle of optimization in a dark, random universe for three billion years.

An interesting product of Natural Selection occupied a certain region of savannah 100 thousand to 2 million years ago, interesting because it could form internal representations of the world around it and predict the consequences of its own actions. By pure chance, Natural Selection had created its successor.

Thought is a more powerful process than Natural Selection. Thought can optimize atom configurations much faster than Natural Selection can. It takes much less time to think of a big design improvement for an organism, than to breed it for as many generations as it takes for a specimen to manifest one.

Now remember, Natural Selection emerged by coincidence -- not by Natural Selection. Processes that optimize for genetic fitness were previously not to be found in the universe. And remember, Thought was evolved by coincidence -- not by Thought. Organs that represent the world around them and make predictions were previously not to be found among the optimized organisms of Earth.

It is still early in the age of optimization processes. Brains are not very good equipment for doing optimization -- Natural Selection just hacked them together out of cells. Yet, Thought is much more powerful than Natural Selection. So what happens when Thought designs an optimization process more powerful than Thought?

What happens when that optimization process designs an optimization process that is more powerful still?

The Singularity Summit is about the critical transition into the third era of optimization processes, the successor to human Thought. To say we need to be careful about initial conditions is to make the understatement of our own entire era.

" } }, { "_id": "GfHdNfqxe3cSCfpHL", "title": "The Absent-Minded Driver", "pageUrl": "https://www.lesswrong.com/posts/GfHdNfqxe3cSCfpHL/the-absent-minded-driver", "postedAt": "2009-09-16T00:51:45.730Z", "baseScore": 50, "voteCount": 46, "commentCount": 152, "url": null, "contents": { "documentId": "GfHdNfqxe3cSCfpHL", "html": "

This post examines an attempt by professional decision theorists to treat an example of time inconsistency, and asks why they failed to reach the solution (i.e., TDT/UDT) that this community has more or less converged upon. (Another aim is to introduce this example, which some of us may not be familiar with.) Before I begin, I should note that I don't think \"people are crazy, the world is mad\" (as Eliezer puts it) is a good explanation. Maybe people are crazy, but unless we can understand how and why people are crazy (or to put it more diplomatically, \"make mistakes\"), how can we know that we're not being crazy in the same way or making the same kind of mistakes?

\n

The problem of the ‘‘absent-minded driver’’ was introduced by Michele Piccione and Ariel Rubinstein in their 1997 paper \"On the Interpretation of Decision Problems with Imperfect Recall\". But I'm going to use \"The Absent-Minded Driver\" by Robert J. Aumann, Sergiu Hart, and Motty Perry instead, since it's shorter and more straightforward. (Notice that the authors of this paper worked for a place called Center for the Study of Rationality, and one of them won a Nobel Prize in Economics for his work on game theory. I really don't think we want to call these people \"crazy\".)

\n

Here's the problem description:

\n
\n

An absent-minded driver starts driving at START in Figure 1. At X he can either EXIT and get to A (for a payoff of 0) or CONTINUE to Y. At Y he can either EXIT and get to B (payoff 4), or CONTINUE to C (payoff 1). The essential assumption is that he cannot distinguish between intersections X and Y, and cannot remember whether he has already gone through one of them.

\n

\"\"

\n

\"graphic

\n
\n

\n

At START, the problem seems very simple. If p is the probability of choosing CONTINUE at each intersection, then the expected payoff is p^2 + 4(1-p)p, which is maximized at p = 2/3. Aumann et al. call this the planning-optimal decision.

\n

The puzzle, as Piccione and Rubinstein saw it, is that once you are at an intersection, you should think that you have some probability α of being at X, and 1-α of being at Y. Your payoff for choosing CONTINUE with probability p becomes α[p^2 + 4(1-p)p] + (1-α)[p + 4(1-p)], which doesn't equal p^2 + 4(1-p)p unless α = 1. So, once you get to an intersection, you'd choose a p that's different from the p you thought optimal at START.
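
\n

A quick numerical check of both expressions (my own sketch in Python; the α = 0.5 below is an arbitrary illustrative value, not anything from the paper):

\n

def planning_payoff(p):
    return p**2 + 4 * (1 - p) * p

def intersection_payoff(p, alpha):
    return alpha * (p**2 + 4 * (1 - p) * p) + (1 - alpha) * (p + 4 * (1 - p))

ps = [i / 1000 for i in range(1001)]
print(max(ps, key=planning_payoff))                                  # ~0.667, i.e. p = 2/3
print(max(ps, key=lambda p: intersection_payoff(p, alpha=0.5)))      # ~0.167 - a different p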

\n

Aumann et al. reject this reasoning and instead suggest a notion of action-optimality, which they argue should govern decision making at the intersections. I'm going to skip explaining its definition and how it works (read section 4 of the paper if you want to find out), and go straight to listing some of its relevant properties:

\n
    \n
  1. It still involves a notion of \"probability of being at X\".
  2. It's conceptually more complicated than planning-optimality.
  3. Mathematically, it has the same first-order necessary conditions as planning-optimality, but different sufficient conditions.
  4. If mixed strategies are allowed, any choice that is planning-optimal is also action-optimal.
  5. A choice that is action-optimal isn't necessarily planning-optimal. (In other words, there can be several action-optimal choices, only one of which is planning-optimal.)
  6. If we are restricted to pure strategies (i.e., p has to be either 0 or 1) then the set of action-optimal choices in this example is empty, even though there is still a planning-optimal one (namely p=1).
\n

In problems like this one, UDT is essentially equivalent to planning-optimality. So why did the authors propose and argue for action-optimality despite its downsides (see 2, 5, and 6 above), instead of the alternative solution of simply remembering or recomputing the planning-optimal decision at each intersection and carrying it out?

\n

Well, the authors don't say (they never bothered to argue against it), but I'm going to venture some guesses:

\n\n

Taken together, these guesses perhaps suffice to explain the behavior of these professional rationalists, without needing to hypothesize that they are \"crazy\". Indeed, many of us are probably still not fully convinced by UDT for one or more of the above reasons.

\n

EDIT: Here's the solution to this problem in UDT1. We start by representing the scenario using a world program:

\n

def P(i, j):    # world program; S is the driver's strategy, i and j are its random inputs at X and Y
    if S(i) == \"EXIT\":
        payoff = 0
    elif S(j) == \"EXIT\":
        payoff = 4
    else:
        payoff = 1

\n

(Here we assumed that mixed strategies are allowed, so S gets a random string as input. Get rid of i and j if we want to model a situation where only pure strategies are allowed.) Then S computes that the payoff at the end of P, averaged over all possible i and j, is maximized by returning \"EXIT\" for 1/3 of its possible inputs, and does that.
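
\n

A quick way to see the 1/3 figure is to average P over random i and j directly. This is only an illustrative sketch (the strategy construction, the reparameterized P, and the sample size are my own choices), not part of UDT1 itself:

\n

import random

def make_S(q):
    # a strategy that EXITs on a fraction q of its random inputs
    return lambda r: 'EXIT' if r < q else 'CONTINUE'

def P(S, i, j):
    if S(i) == 'EXIT':
        return 0
    elif S(j) == 'EXIT':
        return 4
    return 1

def average_payoff(q, trials=200_000):
    S = make_S(q)
    return sum(P(S, random.random(), random.random()) for _ in range(trials)) / trials

for q in (0.2, 1/3, 0.5):
    print(q, round(average_payoff(q), 3))   # exiting on 1/3 of inputs does best (~1.333)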

" } }, { "_id": "33YYcoWwtmqzAq9QR", "title": "Beware of WEIRD psychological samples", "pageUrl": "https://www.lesswrong.com/posts/33YYcoWwtmqzAq9QR/beware-of-weird-psychological-samples", "postedAt": "2009-09-13T11:28:05.581Z", "baseScore": 46, "voteCount": 42, "commentCount": 28, "url": null, "contents": { "documentId": "33YYcoWwtmqzAq9QR", "html": "

Most of the research on cognitive biases and other psychological phenomena that we draw on here is based on samples of students at US universities.  To what extent are we uncovering human universals, and to what extent facts about these WEIRD (Western, Educated, Industrialized, Rich, and Democratic) sample sources? A paper in press at Behavioral and Brain Sciences reviews the evidence from studies that reach outside this group and highlights the many instances in which US students are outliers for many crucial studies in behavioural economics.

\n

Epiphenom: How normal is WEIRD?

\n

Henrich, J., Heine, S. J., & Norenzayan, A. (in press). The Weirdest people in the world? (PDF) Behavioral and Brain Sciences.

\n

Broad claims about human psychology and behavior based on narrow samples from Western  societies are regularly published in leading journals. Are such species-generalizing claims justified? This review suggests not only that substantial variability in experimental results emerges across populations in basic domains, but that standard subjects are in fact rather unusual compared with the rest of the species - frequent outliers. The domains reviewed include visual perception, fairness, categorization, spatial cognition, memory, moral reasoning and self‐concepts. This review (1) indicates caution in addressing questions of human nature based on this thin slice of humanity, and (2) suggests that understanding human psychology will require tapping broader subject pools. We close by proposing ways to address these challenges.

" } }, { "_id": "rDLr86BRG2dHxxPHx", "title": "Formalizing reflective inconsistency", "pageUrl": "https://www.lesswrong.com/posts/rDLr86BRG2dHxxPHx/formalizing-reflective-inconsistency", "postedAt": "2009-09-13T04:23:04.076Z", "baseScore": 5, "voteCount": 4, "commentCount": 13, "url": null, "contents": { "documentId": "rDLr86BRG2dHxxPHx", "html": "

In the post Outlawing Anthropics, there was a brief and intriguing scrap of reasoning that used the principle of reflective inconsistency, a principle which so far as I know is unique to this community:

\n
\n

If your current system cares about yourself and your future, but doesn't care about very similar xerox-siblings, then you will tend to self-modify to have future copies of yourself care about each other, as this maximizes your expectation of pleasant experience over future selves.

\n
\n

This post expands upon and attempts to formalize that reasoning, in hopes of developing a logical framework for reasoning about reflective inconsistency.

\n

In diagramming and analyzing this, I encountered a difficulty. There are probably many ways to resolve it, but in resolving it, I basically changed the argument. You might have reasonably chosen a different resolution. Anyway, I'll explain the difficulty and where I ended up.

\n

The difficulty: The text \"...maximizes your expectation of pleasant experience over future selves.\" How would you compute expectation of pleasant experience? It ought to depend intensely on the situation. For example, a flat future, with no opportunity to influence my experience or that of my sibs for better or worse, would argue that caring for sibs has exactly the same expectation as not-caring. Alternatively, if a mad Randian were experimenting on me, rewarding selfishness, not-caring for my sibs might well produce more pleasant experiences than caring. Also, I don't know how to compute with experiences - Total Utility, Average Utility, Rawlsian Minimum Utility, some sort of multiobjective optimization? Finally, I don't know how to compute with future selves. For example, imagine some sort of bicameral cognitive architecture, where two individuals have exactly the same percepts (and therefore choose exactly the same actions). Should I count that as one future self or two?

\n

To resolve this, I replace EY's reason with an argument from analogy, like so:

\n
\n

If your current system cares about yourself and your future, but doesn't care about very similar xerox-siblings, then you will tend to self-modify to have future copies of yourself care about each other, for the same reasons that the process of evolution created kin altruism.

\n
\n

Here is the same argument again, \"expanded\". Remember, the primary reason to expand it is not readability - the expanded version is certainly less readable - it is as a step towards a generally applicable scheme for reasoning using the principle of reflective inconsistency.

\n

At first glance, the mechanism of natural selection seems to explain selfish, but not unselfish behavior. However, the structure of the EEA seems to have offered sufficient opportunities for kin to recognize kin with low-enough uncertainty and assist (with small-enough price to the helper and large-enough benefit to the helped) that unselfish entities do outcompete purely selfish ones. Note that the policy of selfishness is sufficiently simple that it was almost certainly tried many times. We believe that unselfishness is still a winning strategy in the present environment, and will continue to be a winning strategy in the future.

\n

The two policies, caring about sibs or not-caring, do in fact behave differently in the EEA, and so they are incompatible - we cannot behave according to both policies at once. Also, since caring about sibs outcompetes not-caring in the EEA, if a not-caring agent, X, were selecting a proxy (or \"future self\") to compete in an EEA-tournament for utilons (or paperclips), X would pick a caring agent as proxy. The policy of not-caring would choose to delegate to an incompatible policy. This is what \"reflectively inconsistent\" means. Given a particular situation S1, one can always construct another situation S2 where the choices available in S2 correspond to policies to send as proxies into S1. One might understand the new situation as having an extra \"self-modification\" or \"precommitment\" choice point at the beginning. If a policy chooses an incompatible policy as its proxy, then that policy is \"reflectively inconsistent\" on that situation. Therefore, not-caring is reflectively inconsistent on the EEA.
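
\n

A toy numerical rendering of that delegation step, with payoffs invented purely for illustration (the point is only the shape of the argument, not the numbers):

\n

# In this toy 'EEA', a caring policy earns slightly more for the agent's own line
# than a not-caring policy does.
eea_payoff = {'caring': 1.2, 'not_caring': 1.0}

def choose_proxy(candidate_policies):
    # Any payoff-maximizing policy, caring or not, picks the highest-payoff proxy.
    return max(candidate_policies, key=lambda policy: eea_payoff[policy])

print(choose_proxy(['caring', 'not_caring']))   # 'caring'
# So a not-caring policy delegates to an incompatible (caring) policy -
# which is what being reflectively inconsistent on the EEA means here.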

\n

The last step to the conclusion is less interesting than the part about reflective inconsistency. The conclusion is something like: \"Other things being equal, prefer caring about sibs to not-caring\".

\n

Enough handwaving - to the code! My (crude) formalization is written in Automath, and to check my proof, the command (on GNU/Linux) is something like:

\n
\n

aut reflective_inconsistency_example.aut

\n
\n

 

" } }, { "_id": "XHjAxJvdpiqck8Da2", "title": "The New Nostradamus", "pageUrl": "https://www.lesswrong.com/posts/XHjAxJvdpiqck8Da2/the-new-nostradamus", "postedAt": "2009-09-12T14:42:44.684Z", "baseScore": 21, "voteCount": 15, "commentCount": 27, "url": null, "contents": { "documentId": "XHjAxJvdpiqck8Da2", "html": "

I stumbled upon an article called The New Nostradamus, reporting on a game-theoretic model that predicts political outcomes with startling effectiveness. The results are very impressive. However, the site hosting the article is unfamiliar to me, so I'm not certain of the article's veracity, but a quick Google seems to support the claims, at least on a superficial skimming. Here's his TED talk. The model seems almost too good to be true, though. Anybody know more?

\r\n

Some choice bits from the article:

\r\n

The claim:

\r\n
\r\n

In fact, the professor says that a computer model he built and has perfected over the last 25 years can predict the outcome of virtually any international conflict, provided the basic input is accurate. What’s more, his predictions are alarmingly specific. His fans include at least one current presidential hopeful, a gaggle of Fortune 500 companies, the CIA, and the Department of Defense.

\r\n
\r\n

The results:

\r\n
\r\n

The criticism rankles him, because, to his mind, the proof is right there on the page. “I’ve published a lot of forecasting papers over the years,” he says. “Papers that are about things that had not yet happened when the paper was published but would happen within some reasonable amount of time. There’s a track record that I can point to.” And indeed there is. Bueno de Mesquita has made a slew of uncannily accurate predictions—more than 2,000, on subjects ranging from the terrorist threat to America to the peace process in Northern Ireland—that would seem to prove him right.

\r\n

[...]

\r\n

To verify the accuracy of his model, the CIA set up a kind of forecasting face-off that pit predictions from his model against those of Langley’s more traditional in-house intelligence analysts and area specialists. “We tested Bueno de Mesquita’s model on scores of issues that were conducted in real time—that is, the forecasts were made before the events actually happened,” says Stanley Feder, a former high-level CIA analyst. “We found the model to be accurate 90 percent of the time,” he wrote. Another study evaluating Bueno de Mesquita’s real-time forecasts of 21 policy decisions in the European community concluded that “the probability that the predicted outcome was what indeed occurred was an astounding 97 percent.” What’s more, Bueno de Mesquita’s forecasts were much more detailed than those of the more traditional analysts. “The real issue is the specificity of the accuracy,” says Feder. “We found that DI (Directorate of National Intelligence) analyses, even when they were right, were vague compared to the model’s forecasts. To use an archery metaphor, if you hit the target, that’s great. But if you hit the bull’s eye—that’s amazing.\"

\r\n
\r\n

\r\n

Gets good money for it:

\r\n
\r\n

Though controversial in the academic world, Bueno de Mesquita and his model have proven quite popular in the private sector. In addition to his teaching responsibilities and consulting for the government, he also runs a successful private business, Mesquita & Roundell, with offices in Rockefeller Center. Advising some of the top companies in the country, he earns a tidy sum: Mesquita & Roundell’s minimum fee is $50,000 for a project that includes two issues. Most projects involve multiple issues. “I’m not selling my wisdom,” he says. “I’m selling a tool that can help them get better results. That tool is the model.”

“In the private sector, we deal with three areas: litigation, mergers and acquisitions, and regulation,” he says. “On average in litigation, we produce a settlement that is 40 percent better than what the attorneys think is the best that can be achieved.” While Bueno de Mesquita’s present client list is confidential, past clients include Union Carbide, which needed a little help in structuring its defense after its 1984 chemical-plant disaster in Bhopal, India, claimed the lives of an estimated 22,000 people; the giant accounting firm Arthur Andersen; and British Aerospace during its merger with GEC-Marconi.

\r\n
\r\n

The method should be of special interest to the OB/LW audience, as it brings to mind discussions about self-deception and evolutionary vs. acknowledged goals and behavior:

\r\n
\r\n

Which illustrates the next incontrovertible fact about game theory: In the foreboding world view of rational choice, everyone is a raging dirtbag. Bueno de Mesquita points to dictatorships to prove his point: “If you liberate people from the constraint of having to satisfy other people in order to advance themselves, people don’t do good things.” When analyzing a problem in international relations, Bueno de Mesquita doesn’t give a whit about the local culture, history, economy, or any of the other considerations that more traditional political scientists weigh. In fact, rational choicers like Bueno de Mesquita tend to view such traditional approaches with a condescension bordering on disdain. “One is the study of politics as an expression of personal opinion as opposed to political science,” he says dryly. His only concern is with what the political actors want, what they say they want (often two very different things), and how each of their various options will affect their career advancement. He feeds this data into his computer model and out pop the answers.

\r\n

[...]

\r\n

In his continuing work for the CIA and the Defense Department, one of his most recent assignments has been North Korea and its nuclear program. His analysis starts from the premise that what Kim Jong Il cares most about is his political survival. As Bueno de Mesquita sees it, the principal reason for his nuclear program is to deter the United States from taking him out, by raising the costs of doing so. “The solution, then, lies in a mechanism that guarantees us that he not use these weapons and guarantees him that we not interfere with his political survival,” he says.

\r\n

[...]

\r\n

Recently, he’s applied his science to come up with some novel ideas on how to resolve the Israeli-Palestinian conflict. “In my view, it is a mistake to look for strategies that build mutual trust because it ain’t going to happen. Neither side has any reason to trust the other, for good reason,” he says. “Land for peace is an inherently flawed concept because it has a fundamental commitment problem. If I give you land on your promise of peace in the future, after you have the land, as the Israelis well know, it is very costly to take it back if you renege. You have an incentive to say, ‘You made a good step, it’s a gesture in the right direction, but I thought you were giving me more than this. I can’t give you peace just for this, it’s not enough.’ Conversely, if we have peace for land—you disarm, put down your weapons, and get rid of the threats to me and I will then give you the land—the reverse is true: I have no commitment to follow through. Once you’ve laid down your weapons, you have no threat.”

Bueno de Mesquita’s answer to this dilemma, which he discussed with the former Israeli prime minister and recently elected Labor leader Ehud Barak, is a formula that guarantees mutual incentives to cooperate. “In a peaceful world, what do the Palestinians anticipate will be their main source of economic viability? Tourism. This is what their own documents say. And, of course, the Israelis make a lot of money from tourism, and that revenue is very easy to track. As a starting point requiring no trust, no mutual cooperation, I would suggest that all tourist revenue be [divided by] a fixed formula based on the current population of the region, which is roughly 40 percent Palestinian, 60 percent Israeli. The money would go automatically to each side. Now, when there is violence, tourists don’t come. So the tourist revenue is automatically responsive to the level of violence on either side for both sides. You have an accounting firm that both sides agree to, you let the U.N. do it, whatever. It’s completely self-enforcing, it requires no cooperation except the initial agreement by the Israelis that they are going to turn this part of the revenue over, on a fixed formula based on population, to some international agency, and that’s that.”

\r\n
" } }, { "_id": "CwBPX8f59rp2LK4kL", "title": "Timeless Identity Crisis", "pageUrl": "https://www.lesswrong.com/posts/CwBPX8f59rp2LK4kL/timeless-identity-crisis", "postedAt": "2009-09-11T02:37:01.745Z", "baseScore": 11, "voteCount": 9, "commentCount": 33, "url": null, "contents": { "documentId": "CwBPX8f59rp2LK4kL", "html": "

Followup/summary/extension to this conversation with SilasBarta

\n

So, you're going along, cheerfully deciding things, doing counterfactual surgery on the output of decision algorithm A1 to calculate the results of your decisions, but it turns out that a dark secret is undermining your efforts...

\n

You are not running/being decision algorithm A1, but instead decision algorithm A2, an algorithm that happens to have the property of believing (erroneously) that it actually is A1.

\n

Ruh-roh.

\n

Now, it is _NOT_ my intent here to try to solve the problem of \"how can you know which one you really are?\", but instead to deal with the problem of \"how can TDT take into account this possibility?\"

\n

Well, first, let me suggest a slightly more concrete way in which this might come up:

\n

Physical computation errors. For instance, a stray cosmic ray hits your processor and flips a bit in such a way that a certain conditional that would have otherwise gone down one branch instead goes down the other, so instead of computing the output of your usual algorithm in this circumstance, you're computing the output of the version that, at that specific step, behaves in that slightly different way. (Yes, this sort of thing can be mitigated with error correction/etc. The problem that is being addressed here is that, to me at least, it seems that basic TDT doesn't have a natural way to even represent this possibility.)

\n

Consider a slightly modified causal net in which the innards of an agent are more of an \"initial state\", and in which there's a selector node/process (ie, the resulting computation) that selects which abstract algorithm's output is the one that's the actual output. ie, this process determines which algorithm you, well, are.

\n

Similarly, another being that might base its actions on a model of your behavior will be represented as having a model of your innards and the model itself having a selector, analogous to the above.

\n

\"TDT

\n

To actually compute consequences of decisions and do all the relevant counterfactual surgery, ideally (ignoring \"minor\" issues like computability), one iterates over all possible algorithms one might be. That is, one first goes \"if the actual result of the combination of my innards and all the messy details of reality and so on is to do computation A1, then...\" and subiterates over all possible decisions. The second thing, of course, being done via the usual counterfactual surgery.

\n

Then, weigh all of those by the probability that one actually _is_ algorithm A1, and then go \"if I actually was algorithm A2...\" etc etc... ie, one does the same counterfactual surgery.

\n

In the above diagram, that lets one consider the possibility of one's own choice being decoupled from what the model of their choice would predict, given that the initial model is correct, but while they are actually considering the decision, a hardware error or whatever causes the agent to be/implement A2 while the model of them is instead properly implementing A1.

\n

 

\n

I am far from convinced that this is the best way to deal with this issue, but I haven't seen anyone else bringing it up, and the usual form of TDT that we've been describing didn't seem to have any obvious way to even represent this issue. So, if anyone has any better ideas for how to clean up this solution, or otherwise alternate ideas for dealing with this problem, go ahead.

\n

I just think it is important that it be dealt with _somehow_... That is, that the decision theory have some way of representing errors or other things that could cause ambiguity as to which algorithm it is actually implementing in the first place.

\n

 

\n

EDIT: sorry, to clarify: one determines the utility for a possible choice by summing over the results of all the possible algorithms making that particular choice (ie, \"I don't know if my decision corresponds to deciding the outcome of algorithm A1 or A2 or...\"), so sum over those for each choice, weighing by the probability of that being the actual algorithm in question.
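
\n

In code, the expectation described above might look something like the following sketch. Every name and number here is an illustrative placeholder of my own (the algorithms, their probabilities, and the utility table are invented), not part of TDT proper:

\n

def expected_utility(choice, algorithms, p_being, utility_if):
    # Sum, over the algorithms I might actually be, of P(I am that algorithm)
    # times the utility of that algorithm outputting this choice.
    return sum(p_being[a] * utility_if(a, choice) for a in algorithms)

algorithms = ['A1', 'A2']
p_being = {'A1': 0.99, 'A2': 0.01}      # e.g. A2 = 'A1 with one bit-flipped branch'

def utility_if(algorithm, choice):
    # Placeholder for 'utility of the world in which this algorithm outputs this
    # choice', as computed by the usual counterfactual surgery.
    table = {('A1', 'option_x'): 10, ('A1', 'option_y'): 3,
             ('A2', 'option_x'): 10, ('A2', 'option_y'): 5}
    return table[(algorithm, choice)]

for choice in ('option_x', 'option_y'):
    print(choice, expected_utility(choice, algorithms, p_being, utility_if))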

\n

EDIT2: SilasBarta came up with a different causal graph during our discussion to represent this issue.

" } }, { "_id": "abDxxhhtexErMints", "title": "Formalizing informal logic", "pageUrl": "https://www.lesswrong.com/posts/abDxxhhtexErMints/formalizing-informal-logic", "postedAt": "2009-09-10T20:16:01.304Z", "baseScore": 16, "voteCount": 13, "commentCount": 22, "url": null, "contents": { "documentId": "abDxxhhtexErMints", "html": "

As an exercise, I take a scrap of argumentation, expand it into a tree diagram (using FreeMind), and then formalize the argument (in Automath). This is towards the goal of creating \"rationality augmentation\" software. In the short term, my suspicion is that such software would look like a group of existing tools glued together with human practices.

\n

About my choice of tools: I investigated Araucaria, Rationale, Argumentative, and Carneades. With the exception of Rationale, they're not as polished graphically as FreeMind, and the rigid argumentation-theory structure was annoying in the early stages of analysis. Using a general-purpose mapping/outlining tool may not be ideal, but it's easy to obtain. The primary reason I used Automath to formalize the argument was because I'm somewhat familiar with it. Another reason is that it's easy to obtain and build (at least, on GNU/Linux).

\n

Automath is an ancient and awesomely flexible proof checker. (Of course, other more modern proof-checkers are often just as flexible, maybe more flexible, and may be more useable.) The amount of \"proof checking\" done in this example is trivial - roughly, what the checker is checking is: \"after assuming all of these bits and pieces of opaque human reasoning, do they form some sort of tree?\" - but cutting down a powerful tool leaves a nice upgrade path, in case people start using exotic forms of logic.  However, the argument checkers built into the various argumentation-theory tools do not have such upgrade paths, and so are not really credible as candidates to formalize the arguments on this site.

\n

\n

Here's a piece of argumentation, abstracted from something that I was really thinking about at work:

\n
\n

There aren't any memory leaks in this method, but how would I argue it? If I had tested it with a tool like Valgrind or mtrace, I would have some justification - but I didn't. By eye, it doesn't look like it does any allocations from the heap. Of course, if a programmer violated coding standards, they could conceal an allocation from casual inspection. However, the author of the code wouldn't violate the coding standard that badly. Why do I believe that they wouldn't violate the coding standard? Well, I can't think of a reason for them to violate the coding standard deliberately, and they're competent enough to avoid making such an egregious mistake by accident.

\n
\n

In order to apply the argument diagramming technique, try to form a tree structure, with claims supported by independent reasons. Reasons are small combinations of claims, which do not support the conclusion alone, but do support it together. This is what I came up with (png, svg, FreeMind's native mm format).

\n

Some flaws in this analysis:

\n

1. The cursory inspection might be an independent reason for believing there are no allocations.

\n

2. There isn't any mention in the analysis of the egregiousness of the mistake.

\n

3. The treatment of benevolence and competence is asymmetrical.

\n

(My understanding with argument diagramming so far is that bouncing between \"try to diagram the argument as best you can\" and \"try to explain in what ways the diagram is still missing aspects of the original text\" is helpful.)

\n

In order to translate this informal argument into Automath, I used a very simple logical framework. Propositions are a kind of thing, and each justification is of some proposition. There are no general-purpose inference rules like modus ponens or \"From truth of a proposition, conclude necessity of that proposition\". This means that every reason in the argument needs to assume its own special-case inference rule, warranting that step.

\n
\n
# A proposition is a kind of thing, by assumption.
* proposition
  : TYPE
  = PRIM

# a justification of a proposition is a kind of thing, by assumption
* [p : proposition]
  justification
  : TYPE
  = PRIM
\n
\n

To translate a claim in this framework, we create (by assuming) the proposition, with a chunk like this:

\n
\n
* claim_foo
: proposition
= PRIM
\n
\n

To translate a reason for a claim \"baz\" from predecessor claims \"foo\" and \"bar\", first we create (by assuming) the inference rule, with a chunk like this:

\n
\n
* [p:justification(claim_foo)]
[q:justification(claim_bar)]
reason_baz
: justification(claim_baz)
= PRIM
\n
\n

Secondly, we actually use the inference rule (exactly once), with a chunk like this:

\n
\n
* justification_baz
: justification(claim_baz)
= reason_baz(justification_foo, justification_bar)
\n
\n

My formalization of the above scrap of reasoning is here. Running \"aut\" is really easy. Do something like:

\n
\n
aut there_are_no_leaks.aut
\n
\n

Where to go next:

\n" } }, { "_id": "9RCoE7jmmvGd5Zsh2", "title": "The Lifespan Dilemma", "pageUrl": "https://www.lesswrong.com/posts/9RCoE7jmmvGd5Zsh2/the-lifespan-dilemma", "postedAt": "2009-09-10T18:45:24.123Z", "baseScore": 61, "voteCount": 55, "commentCount": 220, "url": null, "contents": { "documentId": "9RCoE7jmmvGd5Zsh2", "html": "

One of our most controversial posts ever was \"Torture vs. Dust Specks\".  Though I can't seem to find the reference, one of the more interesting uses of this dilemma was by a professor whose student said \"I'm a utilitarian consequentialist\", and the professor said \"No you're not\" and told them about SPECKS vs. TORTURE, and then the student - to the professor's surprise - chose TORTURE.  (Yay student!)

\n

In the spirit of always making these things worse, let me offer a dilemma that might have been more likely to unconvince the student - at least, as a consequentialist, I find the inevitable conclusion much harder to swallow.

\n

I'll start by briefly introducing Parfit's Repugnant Conclusion, sort of a little brother to the main dilemma.  Parfit starts with a world full of a million happy people - people with plenty of resources apiece.  Next, Parfit says, let's introduce one more person who leads a life barely worth living - but since their life is worth living, adding this person must be a good thing.  Now we redistribute the world's resources, making it fairer, which is also a good thing.  Then we introduce another person, and another, until finally we've gone to a billion people whose lives are barely at subsistence level.  And since (Parfit says) it's obviously better to have a million happy people than a billion people at subsistence level, we've gone in a circle and revealed inconsistent preferences.

\n

My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of barely worth living.  In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow - otherwise we should reject the step where we choose to birth that person.  Once someone is alive, on the other hand, we're obliged to take care of them in a way that we wouldn't be obliged to create them in the first place - and they may choose not to commit suicide, even if their life contains more sorrow than happiness.  If we would be saddened to hear the news that such a person existed, we shouldn't kill them, but we should not voluntarily create such a person in an otherwise happy world.  So each time we voluntarily add another person to Parfit's world, we have a little celebration and say with honest joy \"Whoopee!\", not, \"Damn, now it's too late to uncreate them.\"

\n

And then the rest of the Repugnant Conclusion - that it's better to have a billion lives slightly worth celebrating, than a million lives very worth celebrating - is just \"repugnant\" because of standard scope insensitivity.  The brain fails to multiply a billion small birth celebrations to end up with a larger total celebration of life than a million big celebrations.  Alternatively, average utilitarians - I suspect I am one - may just reject the very first step, in which the average quality of life goes down.

\n

But now we introduce the Repugnant Conclusion's big sister, the Lifespan Dilemma, which - at least in my own opinion - seems much worse.

\n

To start with, suppose you have a 20% chance of dying in an hour, and an 80% chance of living for 10^10,000,000,000 years -

\n

Now I know what you're thinking, of course.  You're thinking, \"Well, 10^(10^10) years may sound like a long time, unimaginably vaster than the 10^10 years the universe has lasted so far, but it isn't much, really.  I mean, most finite numbers are very much larger than that.  The realms of math are infinite, the realms of novelty and knowledge are infinite, and Fun Theory argues that we'll never run out of fun.  If I live for 10^10,000,000,000 years and then die, then when I draw my last metaphorical breath - not that I'd still have anything like a human body after that amount of time, of course - I'll go out raging against the night, for a life so short compared to all the experiences I wish I could have had.  You can't compare that to real immortality.  As Greg Egan put it, immortality isn't living for a very long time and then dying.  Immortality is just not dying, ever.\"

\n

Well, I can't offer you real immortality - not in this dilemma, anyway.  However, on behalf of my patron, Omega, who I believe is sometimes also known as Nyarlathotep, I'd like to make you a little offer.

\n

If you pay me just one penny, I'll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years.  That's 99.9999% of 80%, so I'm just shaving a tiny fraction 10^-6 off your probability of survival, and in exchange, if you do survive, you'll survive - not ten times as long, my friend, but ten to the power of as long.  And it goes without saying that you won't run out of memory (RAM) or other physical resources during that time.  If you feel that the notion of \"years\" is ambiguous, let's just measure your lifespan in computing operations instead of years.  Really there's not much of a difference when you're dealing with numbers like 10^(10^10,000,000,000).

\n

My friend - can I call you friend? - let me take a few moments to dwell on what a wonderful bargain I'm offering you.  Exponentiation is a rare thing in gambles.  Usually, you put $1,000 at risk for a chance at making $1,500, or some multiplicative factor like that.  But when you exponentiate, you pay linearly and buy whole factors of 10 - buy them in wholesale quantities, my friend!  We're talking here about 10^10,000,000,000 factors of 10!  If you could use $1,000 to buy a 99.9999% chance of making $10,000 - gaining a single factor of ten - why, that would be the greatest investment bargain in history, too good to be true, but the deal that Omega is offering you is far beyond that!  If you started with $1, it takes a mere eight factors of ten to increase your wealth to $100,000,000.  Three more factors of ten and you'd be the wealthiest person on Earth.  Five more factors of ten beyond that and you'd own the Earth outright.  How old is the universe?  Ten factors-of-ten years.  Just ten!  How many quarks in the whole visible universe?  Around eighty factors of ten, as far as anyone knows.  And we're offering you here - why, not even ten billion factors of ten.  Ten billion factors of ten is just what you started with!  No, this is ten to the ten billionth power factors of ten.

\n

Now, you may say that your utility isn't linear in lifespan, just like it isn't linear in money.  But even if your utility is logarithmic in lifespan - a pessimistic assumption, surely; doesn't money decrease in value faster than life? - why, just the logarithm goes from 10,000,000,000 to 10^10,000,000,000.

\n

From a fun-theoretic standpoint, exponentiating seems like something that really should let you have Significantly More Fun.  If you can afford to simulate a mind a quadrillion bits large, then you merely need 2^(1,000,000,000,000,000) times as much computing power - a quadrillion factors of 2 - to simulate all possible minds with a quadrillion binary degrees of freedom so defined.  Exponentiation lets you completely explore the whole space of which you were previously a single point - and that's just if you use it for brute force.  So going from a lifespan of 10^(10^10) to 10^(10^(10^10)) seems like it ought to be a significant improvement, from a fun-theoretic standpoint.

\n

And Omega is offering you this special deal, not for a dollar, not for a dime, but one penny!  That's right!  Act now!  Pay a penny and go from a 20% probability of dying in an hour and an 80% probability of living 10^10,000,000,000 years, to a 20.00008% probability of dying in an hour and a 79.99992% probability of living 10^(10^10,000,000,000) years!  That's far more factors of ten in your lifespan than the number of quarks in the visible universe raised to the millionth power!

\n

Is that a penny, friend?  - thank you, thank you.  But wait!  There's another special offer, and you won't even have to pay a penny for this one - this one is free!  That's right, I'm offering to exponentiate your lifespan again, to 10^(10^(10^10,000,000,000)) years!  Now, I'll have to multiply your probability of survival by 99.9999% again, but really, what's that compared to the nigh-incomprehensible increase in your expected lifespan?

\n

Is that an avaricious light I see in your eyes?  Then go for it!  Take the deal!  It's free!

\n

(Some time later.)

\n

My friend, I really don't understand your grumbles.  At every step of the way, you seemed eager to take the deal.  It's hardly my fault that you've ended up with... let's see... a probability of 1/10^1000 of living 10^^(2,302,360,800) years, and otherwise dying in an hour.  Oh, the ^^?  That's just a compact way of expressing tetration, or repeated exponentiation - it's really supposed to be Knuth up-arrows, ↑↑, but I prefer to just write ^^.  So 10^^(2,302,360,800) means 10^(10^(10^...^10)) where the exponential tower of tens is 2,302,360,800 layers high.
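
\n

(As a sanity check on Omega's bookkeeping - my own back-of-the-envelope, assuming each offer along the garden path multiplies the survival probability by 99.9999% and adds one layer to the tower:)

\n

import math

# How many 99.9999% haircuts does it take to drive 80% down to about 1/10^1000?
# Work in logs; 10^-1000 underflows an ordinary float.
start, per_offer = 0.8, 0.999999
offers = (1000 * math.log(10) + math.log(start)) / -math.log(per_offer)
print(round(offers))   # ~2,302,360,798 - matching the ~2,302,360,800-layer tower above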

\n

But, tell you what - these deals are intended to be permanent, you know, but if you pay me another penny, I'll trade you your current gamble for an 80% probability of living 10^10,000,000,000 years.

\n

Why, thanks!  I'm glad you've given me your two cents on the subject.

\n

Hey, don't make that face!  You've learned something about your own preferences, and that's the most valuable sort of information there is!

\n

Anyway, I've just received telepathic word from Omega that I'm to offer you another bargain - hey!  Don't run away until you've at least heard me out!

\n

Okay, I know you're feeling sore.  How's this to make up for it?  Right now you've got an 80% probability of living 10^10,000,000,000 years.  But right now - for free - I'll replace that with an 80% probability (that's right, 80%) of living 10^^10 years, that's 10^10^10^10^10^10^10^10^10,000,000,000 years.

\n

See?  I thought that'd wipe the frown from your face.

\n

So right now you've got an 80% probability of living 10^^10 years.  But if you give me a penny, I'll tetrate that sucker!  That's right - your lifespan will go to 10^^(10^^10) years!  That's an exponential tower (10^^10) tens high!  You could write that as 10^^^3, by the way, if you're interested.  Oh, and I'm afraid I'll have to multiply your survival probability by 99.99999999%.

\n

What?  What do you mean, no?  The benefit here is vastly larger than the mere 10^^(2,302,360,800) years you bought previously, and you merely have to send your probability to 79.999999992% instead of 10^-1000 to purchase it!  Well, that and the penny, of course.  If you turn down this offer, what does it say about that whole road you went down before?  Think of how silly you'd look in retrospect!  Come now, pettiness aside, this is the real world, wouldn't you rather have a 79.999999992% probability of living 10^^(10^^10) years than an 80% probability of living 10^^10 years?  Those arrows suppress a lot of detail, as the saying goes!  If you can't have Significantly More Fun with tetration, how can you possibly hope to have fun at all?

\n

Hm?  Why yes, that's right, I am going to offer to tetrate the lifespan and fraction the probability yet again... I was thinking of taking you down to a survival probability of 1/(10^^^20), or something like that... oh, don't make that face at me, if you want to refuse the whole garden path you've got to refuse some particular step along the way.

\n

Wait!  Come back!  I have even faster-growing functions to show you!  And I'll take even smaller slices off the probability each time!  Come back!

\n

...ahem.

\n

While I feel that the Repugnant Conclusion has an obvious answer, and that SPECKS vs. TORTURE has an obvious answer, the Lifespan Dilemma actually confuses me - the more I demand answers of my mind, the stranger my intuitive responses get.  How are yours?

\n

Based on an argument by Wei Dai.  Dai proposed a reductio of unbounded utility functions by (correctly) pointing out that an unbounded utility on lifespan implies willingness to trade an 80% probability of living some large number of years for a 1/(3^^^3) probability of living some sufficiently longer lifespan.  I looked at this and realized that there existed an obvious garden path, which meant that denying the conclusion would create a preference reversal.  Note also the relation to the St. Petersburg Paradox, although the Lifespan Dilemma requires only a finite number of steps to get us in trouble.
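
(A back-of-the-envelope check, added here rather than taken from the post: if each deal keeps 99.9999% of the survival probability and stacks one more layer onto the tower of tens, a short Python sketch recovers the dealer's figures quoted above.)

import math

start_probability = 0.8        # survival probability before the garden path begins
keep_per_deal = 0.999999       # fraction of the probability retained at each deal
target_log10 = -1000           # end state: a survival probability of about 1/10^1000

# Each deal multiplies the probability by keep_per_deal, so after n deals it is
# 0.8 * keep_per_deal**n.  Solve for n in log space to avoid underflow.
n_deals = (target_log10 - math.log10(start_probability)) / math.log10(keep_per_deal)
print(round(n_deals))          # roughly 2,302,360,800 deals, the height of the 10^^(2,302,360,800) tower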

" } }, { "_id": "dD4Ls86msCjZCEMda", "title": "Pittsburgh Meetup: Saturday 9/12, 6:30PM, CMU", "pageUrl": "https://www.lesswrong.com/posts/dD4Ls86msCjZCEMda/pittsburgh-meetup-saturday-9-12-6-30pm-cmu", "postedAt": "2009-09-10T03:06:21.614Z", "baseScore": 8, "voteCount": 6, "commentCount": 2, "url": null, "contents": { "documentId": "dD4Ls86msCjZCEMda", "html": "

The aforementioned Pittsburgh, PA meetup will happen this Saturday at 6:30. Location is Baker Hall 231A (subject to change) at Carnegie Mellon. (Directions, campus map; 231A is on the second floor of Baker, near the end nearer the quad.) Please RSVP if you're coming.

\n

If necessary, I can be reached at nickptar@gmail.com or (919) 302-1147.

\n

Hope to see you there!

" } }, { "_id": "yN38rRLzyuvNnhqr3", "title": "Let Them Debate College Students", "pageUrl": "https://www.lesswrong.com/posts/yN38rRLzyuvNnhqr3/let-them-debate-college-students", "postedAt": "2009-09-09T18:15:40.967Z", "baseScore": 88, "voteCount": 75, "commentCount": 144, "url": null, "contents": { "documentId": "yN38rRLzyuvNnhqr3", "html": "

(EDIT:  Woozle has an even better idea, which would apply to many debates in general if the true goal were seeking resolution and truth.)

\n

Friends, Romans, non-Romans, lend me your ears.  I have for you a modest proposal, in this question of whether we should publicly debate creationists, or freeze them out as unworthy of debate.

\n

My fellow humans, I have two misgivings about this notion that there should not be a debate.  My first misgiving is that - even though on this particular occasion scientific society is absolutely positively not wrong to dismiss creationism - this business of not having debates sounds like dangerous business to me.  Science is sometimes wrong, you know, even if it is not wrong this time, and debating is part of the recovery process.

\n

And my second misgiving is that, like it or not, the creationists are on the radio, in the town halls, and of course on the Web, and they are already talking to large audiences; and the idea that there is not going to be a debate about this, may be slightly naive.

\n

\"But,\" you cry, \"when prestigious scientists lower themselves so far as to debate creationists, afterward the creationists smugly advertise that prestigious scientists are debating them!\"

\n

Ah, but who says that prestigious scientists are required to debate creationists?

\n

Find some bright ambitious young college student working toward a biology degree, someone who's read Pharyngula and the talk.origins FAQ.  Maybe have P. Z. Myers or someone run a test debate on them, to make sure they know how to answer all the standard lies and are generally good at debating and explaining.  Then have the college student debate the creationists - if the creationists are still up for it.  If not, of course, we can all make a big ruckus about how Michael Behe is afraid to debate a mere college student, and have the college student reply to all requests to debate Richard Dawkins or supply a scientific authority for the TV networks.  And if Michael Behe manages to defeat the college student, then he can go on to debate a PhD, and if that doesn't work, Behe gets to talk to P. Z. Myers, and in the unlikely event Behe manages not to get his butt handed to him by P. Z. Myers, he would have earned the right to debate Richard Dawkins.

\n

If we're dealing with young-earth creationists, then we add a bright 12-year-old at the start of the chain.

\n

That way, anyone who wants to know the state of the debate and the status of the arguments, is welcome to watch creationists being beaten up by some college kid - armed with real science, mind!

\n

But there will still be a debate.  And if the scientific community, at some point in the future, manages to go astray on some issue where the opposing side seems \"silly\", then we can hope - if public debate is any use at all - that the challenger will gently defeat the 12-year-old, unravel the college student, score points against the PhD, and hold their own against senior scientists.  There would still be a path to victory for worthy new ideas, and not a general license for a community to shut down all debate it thinks unworthy.

\n

It's this notion of shutting down debate that I fear as dangerous; and it seems to me that you can get just the same strategic conservation of prestige, by endorsing the principle of debate, but sending out some bright college students to present the standard position.  If the \"controversy\" as shown on CNN consists of some ID-er with a sober-looking business suit and an impressive-sounding title, versus a TA in jeans to represent the scientific community - but with accurate science, mind! - then I think this would viscerally answer what the scientific community thinks of creationism, and not create the false impression of an ongoing debate, while still giving airtime to the standard scientific replies.  If CNN isn't interested in showing that \"controversy\" - well then, that tells us what CNN really wanted, doesn't it.

\n

If an idea is so completely ridiculous as to be unworthy even of debate - then send out some bright un-titled college students to debate it!  Do vet them for knowledge of standard replies, explanation ability, and debating ability against evil opponents, to make sure standard science is not needlessly embarrassed.  But there should be plenty of ambitious young bright college students who can pass that filter and who would enjoy some TV exposure.

" } }, { "_id": "ZTEkZNLrmycNuCNYq", "title": "Outlawing Anthropics: An Updateless Dilemma", "pageUrl": "https://www.lesswrong.com/posts/ZTEkZNLrmycNuCNYq/outlawing-anthropics-an-updateless-dilemma", "postedAt": "2009-09-08T18:31:49.270Z", "baseScore": 38, "voteCount": 35, "commentCount": 210, "url": null, "contents": { "documentId": "ZTEkZNLrmycNuCNYq", "html": "

Let us start with a (non-quantum) logical coinflip - say, look at the heretofore-unknown-to-us-personally 256th binary digit of pi, where the choice of binary digit is itself intended not to be random.

\n

If the result of this logical coinflip is 1 (aka \"heads\"), we'll create 18 of you in green rooms and 2 of you in red rooms, and if the result is \"tails\" (0), we'll create 2 of you in green rooms and 18 of you in red rooms.

\n

After going to sleep at the start of the experiment, you wake up in a green room.

\n

With what degree of credence do you believe - what is your posterior probability - that the logical coin came up \"heads\"?

\n

There are exactly two tenable answers that I can see, \"50%\" and \"90%\".

\n

Suppose you reply 90%.

\n

And suppose you also happen to be \"altruistic\" enough to care about what happens to all the copies of yourself.  (If your current system cares about yourself and your future, but doesn't care about very similar xerox-siblings, then you will tend to self-modify to have future copies of yourself care about each other, as this maximizes your expectation of pleasant experience over future selves.)

\n

Then I attempt to force a reflective inconsistency in your decision system, as follows:

\n

I inform you that, after I look at the unknown binary digit of pi, I will ask all the copies of you in green rooms whether to pay $1 to every version of you in a green room and steal $3 from every version of you in a red room.  If they all reply \"Yes\", I will do so.

\n

(It will be understood, of course, that $1 represents 1 utilon, with actual monetary amounts rescaled as necessary to make this happen.  Very little rescaling should be necessary.)

\n

(Timeless decision agents reply as if controlling all similar decision processes, including all copies of themselves.  Classical causal decision agents, to reply \"Yes\" as a group, will need to somehow work out that other copies of themselves reply \"Yes\", and then reply \"Yes\" themselves.  We can try to help out the causal decision agents on their coordination problem by supplying rules such as \"If conflicting answers are delivered, everyone loses $50\".  If causal decision agents can win on the problem \"If everyone says 'Yes' you all get $10, if everyone says 'No' you all lose $5, if there are conflicting answers you all lose $50\" then they can presumably handle this.  If not, then ultimately, I decline to be responsible for the stupidity of causal decision agents.)

\n

Suppose that you wake up in a green room.  You reason, \"With 90% probability, there are 18 of me in green rooms and 2 of me in red rooms; with 10% probability, there are 2 of me in green rooms and 18 of me in red rooms.  Since I'm altruistic enough to at least care about my xerox-siblings, I calculate the expected utility of replying 'Yes' as (90% * ((18 * +$1) + (2 * -$3))) + (10% * ((18 * -$3) + (2 * +$1))) = +$5.60.\"  You reply yes.

\n

However, before the experiment, you calculate the general utility of the conditional strategy \"Reply 'Yes' to the question if you wake up in a green room\" as (50% * ((18 * +$1) + (2 * -$3))) + (50% * ((18 * -$3) + (2 * +$1))) = -$20.  You want your future selves to reply 'No' under these conditions.
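
(A minimal sketch of the two calculations above, mine rather than the post's; the only inputs are the room counts and the $1/$3 payoffs.)

def expected_value(p_heads, payoff_if_heads, payoff_if_tails):
    # Expected group payoff of everyone in a green room answering 'Yes', at a given credence in heads.
    return p_heads * payoff_if_heads + (1 - p_heads) * payoff_if_tails

payoff_if_heads = 18 * 1 + 2 * (-3)    # 18 green copies gain $1, 2 red copies lose $3:  +12
payoff_if_tails = 2 * 1 + 18 * (-3)    # 2 green copies gain $1, 18 red copies lose $3:  -52

print(expected_value(0.9, payoff_if_heads, payoff_if_tails))   # after waking in a green room and updating: about +5.6
print(expected_value(0.5, payoff_if_heads, payoff_if_tails))   # evaluated before the experiment: -20.0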

\n

This is a dynamic inconsistency - different answers at different times - which argues that decision systems which update on anthropic evidence will self-modify not to update probabilities on anthropic evidence.

\n

I originally thought, on first formulating this problem, that it had to do with double-counting the utilons gained by your variable numbers of green friends, and the probability of being one of your green friends.

\n

However, the problem also works if we care about paperclips.  No selfishness, no altruism, just paperclips.

\n

Let the dilemma be, \"I will ask all people who wake up in green rooms if they are willing to take the bet 'Create 1 paperclip if the logical coinflip came up heads, destroy 3 paperclips if the logical coinflip came up tails'.  (Should they disagree on their answers, I will destroy 5 paperclips.)\"  Then a paperclip maximizer, before the experiment, wants the paperclip maximizers who wake up in green rooms to refuse the bet.  But a conscious paperclip maximizer who updates on anthropic evidence, who wakes up in a green room, will want to take the bet, with expected utility ((90% * +1 paperclip) + (10% * -3 paperclips)) = +0.6 paperclips.

\n

This argues that, in general, decision systems - whether they start out selfish, or start out caring about paperclips - will not want their future versions to update on anthropic \"evidence\".

\n

Well, that's not too disturbing, is it?  I mean, the whole anthropic thing seemed very confused to begin with - full of notions about \"consciousness\" and \"reality\" and \"identity\" and \"reference classes\" and other poorly defined terms.  Just throw out anthropic reasoning, and you won't have to bother.

\n

When I explained this problem to Marcello, he said, \"Well, we don't want to build conscious AIs, so of course we don't want them to use anthropic reasoning\", which is a fascinating sort of reply.  And I responded, \"But when you have a problem this confusing, and you find yourself wanting to build an AI that just doesn't use anthropic reasoning to begin with, maybe that implies that the correct resolution involves us not using anthropic reasoning either.\"

\n

So we can just throw out anthropic reasoning, and relax, and conclude that we are Boltzmann brains.  QED.

\n

In general, I find the sort of argument given here - that a certain type of decision system is not reflectively consistent - to be pretty damned compelling.  But I also find the Boltzmann conclusion to be, ahem, more than ordinarily unpalatable.

\n

In personal conversation, Nick Bostrom suggested that a division-of-responsibility principle might cancel out the anthropic update - i.e., the paperclip maximizer would have to reason, \"If the logical coin came up heads then I am 1/18th responsible for adding +1 paperclip, if the logical coin came up tails then I am 1/2 responsible for destroying 3 paperclips.\"  I confess that my initial reaction to this suggestion was \"Ewwww\", but I'm not exactly comfortable concluding I'm a Boltzmann brain, either.

\n

EDIT:  On further reflection, I also wouldn't want to build an AI that concluded it was a Boltzmann brain!  Is there a form of inference which rejects this conclusion without relying on any reasoning about subjectivity?

\n

EDIT2:  Psy-Kosh has converted this into a non-anthropic problem!

" } }, { "_id": "FSqfpaBnz2RCJR5RE", "title": "FHI postdoc at Oxford", "pageUrl": "https://www.lesswrong.com/posts/FSqfpaBnz2RCJR5RE/fhi-postdoc-at-oxford", "postedAt": "2009-09-08T18:18:43.157Z", "baseScore": 7, "voteCount": 6, "commentCount": 3, "url": null, "contents": { "documentId": "FSqfpaBnz2RCJR5RE", "html": "

The Future of Humanity Institute at the University of Oxford is looking to fill a postdoctoral research fellowship in interdisciplinary science or philosophy.  Current work areas include global catastrophic risks, probabilistic methodology and applied epistemology and rationality, impacts of future technologies, and ethical issues related to human enhancement.  Apply by Oct 14.

" } }, { "_id": "9RG2ubLz8hWvDMWFE", "title": "An idea: Sticking Point Learning", "pageUrl": "https://www.lesswrong.com/posts/9RG2ubLz8hWvDMWFE/an-idea-sticking-point-learning", "postedAt": "2009-09-08T09:52:05.118Z", "baseScore": 12, "voteCount": 13, "commentCount": 8, "url": null, "contents": { "documentId": "9RG2ubLz8hWvDMWFE", "html": "

When trying to learn technical topics from online expositions, I imagine that most people hit snags at some moment - passages that they can't seem to grasp right away and that impede further progress. Moreover, I imagine that different people often get stuck in the same places, and that a few fortunate words of explanation can often help overcome the hump. (For example, \"integral is the area under the curve\" or \"entropy is the expected number of bits\".) And finally, perhaps unintuitively, I also imagine that someone who just overcame a sticking point is more likely to say the right magic words about it than someone who has understood the topic for years.

\n

Hence my suggestion: let's try to identify and resolve such sticking points together, maybe as part of our Simple Math of Everything. This idea might be more appropriate for Hacker News, but I'm submitting it here because it sounds like a not-for-profit rather than a business, and seems nicely aligned with the goals of our community.

\n

The required software certainly exists: our wiki would do fine. One of us posts a copy of a technical text. Others try to parse it, hit the difficult points, resolve them by intellectual force and insert (as a mid-article comment) the magic words or hyperlinks that helped them in that particular case. I really wonder what the result would look like; hopefully, something comfortably readable by people with modest math-reading skillz.

\n

Any number of technical topics suggest themselves immediately - now what would you like to see?

" } }, { "_id": "22JbRg4Py7u6Y2LqD", "title": "Charitable explanation", "pageUrl": "https://www.lesswrong.com/posts/22JbRg4Py7u6Y2LqD/charitable-explanation", "postedAt": "2009-09-08T05:55:33.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "22JbRg4Py7u6Y2LqD", "html": "

Is anyone really altruistic? The usual cynical explanations for seemingly altruistic behavior are that it makes one feel good, it makes one look good, and it brings other rewards later. These factors are usually present, but how much do they contribute to motivation?

\n

One way to tell if it’s all about altruism is to invite charity that explicitly won’t benefit anyone. Curious economists asked their guinea pigs for donations to a variety of causes, warning them:

\n

“The amount contributed by the proctor to your selected charity WILL be reduced by however much you pass to your selected charity. Your selected charity will receive neither more nor less than $10.”

\n

Many participants chipped in nonetheless:

\n

We find that participants, on average, donated 20% of their endowments and that approximately 57% of the participants made a donation.

\n

This is compared to giving an average of 30-49% in experiments where donating benefited the cause, but it is of course possible that knowing you are helping offers more of a warm glow. It looks like at least half of giving isn’t altruistic at all, unless the participants were interested in the wellbeing of the experimenters’ funds.

\n

The opportunity to be observed by others also influences how much we donate, and we are duly rewarded with reputation:

\n

Here we demonstrate that more subjects were willing to give assistance to unfamiliar people in need if they could make their charity offers in the presence of their group mates than in a situation where the offers remained concealed from others. In return, those who were willing to participate in a particular charitable activity received significantly higher scores than others on scales measuring sympathy and trustworthiness.

\n

This doesn’t tell us whether real altruism exists though. Maybe there are just a few truly altruistic deeds out there? What would a credibly altruistic act look like?

\n
\"Fortunately

Fortunately for cute children desirous of socially admirable help, much charity is not driven by altruism (picture: Laura Lartigue)

\n

If an act made the doer feel bad, look bad to others, and endure material cost, while helping someone else, we would probably be satisfied that it was altruistic. For instance if a person killed their much loved grandmother to steal her money to donate to a charity they believed would increase the birth rate somewhere far away, at much risk to themselves, it would seem to escape the usual criticisms. And there is no way you would want to be friends with them.

\n

So why would anyone tell you if they had good evidence they had been altruistic? The more credible evidence should look particularly bad. And if they were keen to tell you about it anyway, you would have to wonder whether it was for show after all. This makes it hard for an altruist to credibly inform anyone that they were altruistic. On the other hand the non-altruistic should be looking for any excuse to publicize their good deeds. This means the good deeds you hear about should be very biased toward the non-altruistic. Even if altruism were all over the place it should be hard to find. But it’s not, is it?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "wCecHd3z8EEjNYWYg", "title": "Why I'm Staying On Bloggingheads.tv", "pageUrl": "https://www.lesswrong.com/posts/wCecHd3z8EEjNYWYg/why-i-m-staying-on-bloggingheads-tv", "postedAt": "2009-09-07T20:15:56.595Z", "baseScore": 31, "voteCount": 34, "commentCount": 101, "url": null, "contents": { "documentId": "wCecHd3z8EEjNYWYg", "html": "

Recently, Sean Carroll, Carl Zimmer, and Phil Plait have all decided to stop appearing on BloggingHeads.TV (BHTV), and PZ Myers announced he would not appear on it in the future, after a disastrous decision to have creationist Michael Behe interviewed by the linguist and non-biologist John McWhorter, who failed to call Behe on his standard BS.

\n

I'm hereby publicly announcing that I intend to stay on BloggingHeads.TV.

\n

Why?  Two main reasons:

\n

1)  Robert Wright publicly said that this was foolish, apologized for the poor editorial oversight that led to it, and says they're going to try never to do this again.  This looks sincere to me, and given that it's sincere, people really ought to be allowed more chance than this to recover from their mistakes.

\n

2)  Bloggingheads.TV has given me a forum to debate accommodationist atheists who are insufficiently condemning of religion - for example my diavlog with Adam Frank, author of \"The Constant Fire\".  Adam Frank argues that, while of course we now know that God doesn't exist, nonetheless scientific wonder at the universe and its mysteries has a lot in common with the roots of religion.  And I said this was wishful thinking, historically ignorant of how religions really arose and propagated themselves, and a continuation of such theistic bad habits as thinking that things of which we are temporarily ignorant are \"sacred mysteries\".  And no one at BHTV complained that I was being too confrontational, or too anti-religious, or that it was unfair to have the diavlog be between two atheists.

\n

If BHTV is willing to let me come on and (politely) kick hell out of atheists who aren't atheistic enough to suit me, then I don't believe that their unfortunate failure to have Behe interviewed by someone who could call his BS, represents any deep hidden agenda in favor of religion and against science.

\n

Rather, I think it represents a commitment to having interesting discussions by people who intelligently disagree with each other and have something courteous to say about it - even if that discussion wanders into the fearsome death zones where science does (\"does not!\") clash with religion - and this commitment managed to go wrong on one or two occasions.

\n

My friends and fellow antitheists, this is an important commitment while most of the world is continuing to pretend that there is no conflict between science and religion.  It's not surprising if that commitment goes wrong now and then.  It is not reasonable to expect that a commitment to repeatedly discuss a scary controversy will never go wrong.  It may well go wrong again despite Robert Wright's best intentions.  But unless it starts to go wrong systematically, I'm going to stay on BHTV, arguing that science and religion are not compatible.

\n

Of course, if most other non-accommodationists jump ship from BHTV as a result of the Behe affair, then it will become a hangout for accommodationists only.  \"Evaporative Cooling of Group Beliefs\" is another reason why you should put forth at least a little effort to \"Tolerate Tolerance\" - to not insist that all your potential trade-partners punish the same people you've labeled defectors, exactly the way you want them punished, before you cooperate.  Yes, Behe is an enemy of science, but Wright is not; and Wright may also dislike Behe, yet not wish to implement exactly the same punishment-policy toward Behe that you advocate; and that needs to be all right, if we're all going to end up cooperating.

" } }, { "_id": "LubwxZHKKvCivYGzx", "title": "Forcing Anthropics: Boltzmann Brains", "pageUrl": "https://www.lesswrong.com/posts/LubwxZHKKvCivYGzx/forcing-anthropics-boltzmann-brains", "postedAt": "2009-09-07T19:02:52.990Z", "baseScore": 37, "voteCount": 31, "commentCount": 72, "url": null, "contents": { "documentId": "LubwxZHKKvCivYGzx", "html": "

Followup to: Anthropic Reasoning in UDT by Wei Dai

\n

Suppose that I flip a logical coin - e.g. look at some binary digit of pi unknown to either of us - and depending on the result, either create a billion of you in green rooms and one of you in a red room if the coin came up 1; or, if the coin came up 0, create one of you in a green room and a billion of you in red rooms.  You go to sleep at the start of the experiment, and wake up in a red room.

\n

Do you reason that the coin very probably came up 0?  Thinking, perhaps:  \"If the coin came up 1, there'd be a billion of me in green rooms and only one of me in a red room, and in that case, it'd be very surprising that I found myself in a red room.\"

\n

What is your degree of subjective credence - your posterior probability - that the logical coin came up 1?

\n

There are only two answers I can see that might in principle be coherent, and they are \"50%\" and \"a billion to one against\".
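
(For concreteness, here is one standard way to arrive at the \"a billion to one against\" figure; the post itself treats the choice between the two answers as open, and the sketch is mine.)

copies_if_1 = {'green': 10**9, 'red': 1}    # coin came up 1: a billion green copies, one red
copies_if_0 = {'green': 1, 'red': 10**9}    # coin came up 0: one green copy, a billion red

prior_1 = prior_0 = 0.5
# Treat yourself as a random copy and update on finding yourself in a red room.
likelihood_red_if_1 = copies_if_1['red'] / (copies_if_1['red'] + copies_if_1['green'])
likelihood_red_if_0 = copies_if_0['red'] / (copies_if_0['red'] + copies_if_0['green'])

posterior_1 = prior_1 * likelihood_red_if_1 / (prior_1 * likelihood_red_if_1 + prior_0 * likelihood_red_if_0)
print(posterior_1)    # about 1e-09: a billion to one against the coin having come up 1
# Declining to perform this update at all is what the '50%' answer amounts to.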

\n

Tomorrow I'll talk about what sort of trouble you run into if you reply \"a billion to one\".

\n

But for today, suppose you reply \"50%\".  Thinking, perhaps:  \"I don't understand this whole consciousness rigamarole, I wouldn't try to program a computer to update on it, and I'm not going to update on it myself.\"

\n

In that case, why don't you believe you're a Boltzmann brain?

\n

Back when the laws of thermodynamics were being worked out, there was first asked the question:  \"Why did the universe seem to start from a condition of low entropy?\"  Boltzmann suggested that the larger universe was in a state of high entropy, but that, given a long enough time, regions of low entropy would spontaneously occur - wait long enough, and the egg will unscramble itself - and that our own universe was such a region.

\n

The problem with this explanation is now known as the \"Boltzmann brain\" problem; namely, while Hubble-region-sized low-entropy fluctuations will occasionally occur, it would be far more likely - though still not likely in any absolute sense - for a handful of particles to come together in a configuration performing a computation that lasted just long enough to think a single conscious thought (whatever that means) before dissolving back into chaos.  A random reverse-entropy fluctuation is exponentially vastly more likely to take place in a small region than a large one.

\n

So on Boltzmann's attempt to explain the low-entropy initial condition of the universe as a random statistical fluctuation, it's far more likely that we are a little blob of chaos temporarily hallucinating the rest of the universe, than that a multi-billion-light-year region spontaneously ordered itself.  And most such little blobs of chaos will dissolve in the next moment.

\n

\"Well,\" you say, \"that may be an unpleasant prediction, but that's no license to reject it.\"  But wait, it gets worse:  The vast majority of Boltzmann brains have experiences much less ordered than what you're seeing right now.  Even if a blob of chaos coughs up a visual cortex (or equivalent), that visual cortex is unlikely to see a highly ordered visual field - the vast majority of possible visual fields more closely resemble \"static on a television screen\" than \"words on a computer screen\".  So on the Boltzmann hypothesis, highly ordered experiences like the ones we are having now, constitute an exponentially infinitesimal fraction of all experiences.

\n

In contrast, suppose one more simple law of physics not presently understood, which forces the initial condition of the universe to be low-entropy.  Then the exponentially vast majority of brains occur as the result of ordered processes in ordered regions, and it's not at all surprising that we find ourselves having ordered experiences.

\n

But wait!  This is just the same sort of logic (is it?) that one would use to say, \"Well, if the logical coin came up heads, then it's very surprising to find myself in a red room, since the vast majority of people-like-me are in green rooms; but if the logical coin came up tails, then most of me are in red rooms, and it's not surprising that I'm in a red room.\"

\n

If you reject that reasoning, saying, \"There's only one me, and that person seeing a red room does exist, even if the logical coin came up heads\" then you should have no trouble saying, \"There's only one me, having a highly ordered experience, and that person exists even if all experiences are generated at random by a Boltzmann-brain process or something similar to it.\"  And furthermore, the Boltzmann-brain process is a much simpler process - it could occur with only the barest sort of causal structure, no need to postulate the full complexity of our own hallucinated universe.  So if you're not updating on the apparent conditional rarity of having a highly ordered experience of gravity, then you should just believe the very simple hypothesis of a high-volume random experience generator, which would necessarily create your current experiences - albeit with extreme relative infrequency, but you don't care about that.

\n

Now, doesn't the Boltzmann-brain hypothesis also predict that reality will dissolve into chaos in the next moment?  Well, it predicts that the vast majority of blobs who experience this moment, cease to exist after; and that among the few who don't dissolve, the vast majority of those experience chaotic successors.  But there would be an infinitesimal fraction of a fraction of successors, who experience ordered successor-states as well.  And you're not alarmed by the rarity of those successors, just as you're not alarmed by the rarity of waking up in a red room if the logical coin came up 1 - right?

\n

So even though your friend is standing right next to you, saying, \"I predict the sky will not turn into green pumpkins and explode - oh, look, I was successful again!\", you are not disturbed by their unbroken string of successes.  You just keep on saying, \"Well, it was necessarily true that someone would have an ordered successor experience, on the Boltzmann-brain hypothesis, and that just happens to be us, but in the next instant I will sprout wings and fly away.\"

\n

Now this is not quite a logical contradiction.  But the total rejection of all science, induction, and inference in favor of an unrelinquishable faith that the next moment will dissolve into pure chaos, is sufficiently unpalatable that even I decline to bite that bullet.

\n

And so I still can't seem to dispense with anthropic reasoning - I can't seem to dispense with trying to think about how many of me or how much of me there are, which in turn requires that I think about what sort of process constitutes a me.  Even though I confess myself to be sorely confused, about what could possibly make a certain computation \"real\" or \"not real\", or how some universes and experiences could be quantitatively realer than others (possess more reality-fluid, as 'twere), and I still don't know what exactly makes a causal process count as something I might have been for purposes of being surprised to find myself as me, or for that matter, what exactly is a causal process.

\n

Indeed this is all greatly and terribly confusing unto me, and I would be less confused if I could go through life while only answering questions like \"Given the Peano axioms, what is SS0 + SS0?\"

\n

But then I have no defense against the one who says to me, \"Why don't you think you're a Boltzmann brain?  Why don't you think you're the result of an all-possible-experiences generator?  Why don't you think that gravity is a matter of branching worlds in which all objects accelerate in all directions and in some worlds all the observed objects happen to be accelerating downward?  It explains all your observations, in the sense of logically necessitating them.\"

\n

I want to reply, \"But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising.  Even if there are some versions of me that exist in regions or universes where they arose by chaotic chance, I anticipate, for purposes of predicting my future experiences, that most of my existence is encoded in regions and universes where I am the product of ordered processes.\"

\n

And I currently know of no way to reply thusly, that does not make use of poorly defined concepts like \"number of real processes\" or \"amount of real processes\"; and \"people\", and \"me\", and \"anticipate\" and \"future experience\".

\n

Of course confusion exists in the mind, not in reality, and it would not be the least bit surprising if a resolution of this problem were to dispense with such notions as \"real\" and \"people\" and \"my future\".  But I do not presently have that resolution.

\n

(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be \"50%\".)

" } }, { "_id": "ms7JB45DLpeXZpFtb", "title": "Subconscious stalking?", "pageUrl": "https://www.lesswrong.com/posts/ms7JB45DLpeXZpFtb/subconscious-stalking", "postedAt": "2009-09-06T09:00:43.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "ms7JB45DLpeXZpFtb", "html": "

Just seeing another person look at something can tend to make you like it a bit more than if you see them looking in another direction:

\n

In this study, we found that objects that are looked at by other people are liked more than objects that do not receive the attention of other people (Experiment 1). This suggests that observing averted gaze can have an impact on the affective appraisals of objects in the environment. This liking effect was absent when an arrow was used to cue attention (Experiment 2). This underlines the importance of other people’s interactions with objects for generating our own impressions of such stimuli in the world.

\n

The authors suggest this is because people really do tend to look at things more if they like them, and that another person likes something is information about its value. This makes sense, and even more if we assume that the ancestral environment contained fewer eye catching people paid to prominently give items their attention.

\n
\"Is

Is observing the eye movement of others a precursor to facebook? (picture: xkcd.com)

\n

Another possibility though is that people often want their tastes to coincide with those of the people around them, so we are not so much interested in clues to the item’s inherent value as directly in the other person’s values. In that case, if we evolved nicely, we would react more to some people looking than to others.

\n

Sure enough, this study found that such an effect seems to hold for attractive people, but not unattractive:

\n

In a conditioning paradigm, novel objects were associated with either attractive or unattractive female faces, either displaying direct or averted gaze. An affective priming task showed more positive automatic evaluations of objects that were paired with attractive faces with direct gaze than attractive faces with averted gaze and unattractive faces, irrespective of gaze direction. Participants’ self-reported desire for the objects matched the affective priming data.

\n

Added: These days we can discover (and adapt to) many people’s likes and dislikes prior to meeting them extensively, as long as they post them all over Facebook or the like. If the tendency to coordinate our values based on minor cues was good enough to evolve, does the possibility of doing so much more effectively via online stalking give a selective advantage to those who use it?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "miwf7qQTh2HXNnSuq", "title": "Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives", "pageUrl": "https://www.lesswrong.com/posts/miwf7qQTh2HXNnSuq/decision-theory-why-pearl-helps-reduce-could-and-would-but", "postedAt": "2009-09-06T06:10:20.344Z", "baseScore": 48, "voteCount": 35, "commentCount": 72, "url": null, "contents": { "documentId": "miwf7qQTh2HXNnSuq", "html": "

(This is the third post in a planned sequence.)

My last post left us with the questions:

\n\n

Today, I’ll take an initial swing at these questions.  I’ll review Judea Pearl’s causal Bayes nets; show how Bayes nets offer a general methodology for computing counterfactual “would”s; and note three plausible alternatives for how to use Pearl’s Bayes nets to set up a CSA.  One of these alternatives will be the “timeless” counterfactuals of Eliezer’s Timeless Decision Theory.

\n

The problem of counterfactuals is the problem of what we do and should mean when we discuss what “would” have happened, “if” something impossible had happened.  In its general form, this problem has proved to be quite gnarly.  It has been bothering philosophers of science for at least 57 years, since the publication of Nelson Goodman’s book “Fact, Fiction, and Forecast” in 1952:

\n
\n

Let us confine ourselves for the moment to [counterfactual conditionals] in which the antecedent and consequent are inalterably false--as, for example, when I say of a piece of butter that was eaten yesterday, and that had never been heated,

\n

`If that piece of butter had been heated to 150°F, it would have melted.'

\n

Considered as truth-functional compounds, all counterfactuals are of course true, since their antecedents are false.  Hence

\n

`If that piece of butter had been heated to 150°F, it would not have melted.'

\n

would also hold.  Obviously something different is intended, and the problem is to define the circumstances under which a given counterfactual holds while the opposing conditional with the contradictory consequent fails to hold.

\n
\n

Recall that we seem to need counterfactuals in order to build agents that do useful decision theory -- we need to build agents that can think about the consequences of each of their “possible actions”, and can choose the action with best expected-consequences.  So we need to know how to compute those counterfactuals.  As Goodman puts it, “[t]he analysis of counterfactual conditionals is no fussy little grammatical exercise.”

\n

Judea Pearl’s Bayes nets offer a method for computing counterfactuals.  As noted, it is hard to reduce human counterfactuals in general: it is hard to build an algorithm that explains what (humans will say) really “would” have happened, “if” an impossible event had occurred.  But it is easier to construct specific formalisms within which counterfactuals have well-specified meanings.  Judea Pearl’s causal Bayes nets offer perhaps the best such formalism.

Pearl’s idea is to model the world as based on some set of causal variables, which may be observed or unobserved.  In Pearl’s model, each variable is determined by a conditional probability distribution on the state of its parents (or by a simple probability distribution, if it has no parents).  For example, in the following Bayes net, the beach’s probability of being “Sunny” depends only on the “Season”, and the probability that there is each particular “Number of beach-goers” depends only on the “Day of the week” and on the “Sunniness”.  Since the “Season” and the “Day of the week” have no parents, they simply have fixed probability distributions.

\n

\"\"

\n

Once we have a Bayes net set up to model a given domain, computing counterfactuals is easy*.  We just:

\n
    \n
  1. Take the usual conditional and unconditional probability distributions, that come with the Bayes net;
  2. Do “surgery” on the Bayes net to plug in the variable values that define the counterfactual situation we’re concerned with, while ignoring the parents of surgically set nodes, and leaving other probability distributions unchanged;
  3. Compute the resulting probability distribution over outcomes.
\n

For example, suppose I want to evaluate the truth of: “If last Wednesday had been sunny, there would have been more beach-goers”.  I leave the “Day of the week” node at “Wednesday”, set the “Sunny?” node to “Sunny”, ignore the “Season” node, since it is the parent of a surgically set node, and compute the probability distribution on beach-goers.

*Okay, not quite easy: I’m sweeping under the carpet the conversion from the English counterfactual to the list of variables to surgically alter, in step 2.  Still, Pearl’s Bayes nets do much of the work.
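
(Here is that surgery as a toy Python sketch of my own; the numbers are made up, and only the graph structure Season → Sunny? → Number of beach-goers ← Day of the week is taken from the example above.)

import itertools

p_season = {'summer': 0.5, 'winter': 0.5}
p_sunny_given_season = {'summer': 0.8, 'winter': 0.3}            # P(Sunny? | Season), invented
mean_goers = {(True, 'weekday'): 50, (False, 'weekday'): 10,     # E[beach-goers | Sunny?, day type], invented
              (True, 'weekend'): 200, (False, 'weekend'): 40}

def day_type(day):
    return 'weekend' if day in ('Sat', 'Sun') else 'weekday'

def expected_goers(day, do_sunny=None):
    # Step 1: use the net's own distributions.  Step 2: if do_sunny is given, surgically fix
    # the Sunny? node and ignore its parent, Season.  Step 3: sum over the remaining outcomes.
    total = 0.0
    for season, sunny in itertools.product(p_season, (True, False)):
        if do_sunny is None:
            p_sunny = p_sunny_given_season[season] if sunny else 1 - p_sunny_given_season[season]
        else:
            p_sunny = 1.0 if sunny == do_sunny else 0.0          # the surgery cuts the Season -> Sunny? link
        total += p_season[season] * p_sunny * mean_goers[(sunny, day_type(day))]
    return total

print(expected_goers('Wed'))                  # ordinary expectation for a Wednesday: 32.0
print(expected_goers('Wed', do_sunny=True))   # 'if last Wednesday had been sunny': 50.0, i.e. more beach-goers

The do_sunny branch is all there is to step 2: the conditional P(Sunny? | Season) is simply overwritten, and everything downstream is computed as usual.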

\n

But, even if we decide to use Pearl’s method, we are left with the choice of how to represent the agent's \"possible choices\" using a Bayes net.  More specifically, we are left with the choice of what surgeries to execute, when we represent the alternative actions the agent “could” take.

There are at least three plausible alternatives:

Alternative One:  “Actions CSAs”:

Here, we model the outside world however we like, but have the agent’s own “action” -- its choice of a_1, a_2, or ... , a_n -- be the critical “choice” node in the causal graph.  For example, we might show Newcomb’s problem as follows:

\n

\"\"

\n

The assumption built into this set-up is that the agent’s action is uncorrelated with the other nodes in the network.  For example, if we want to program an understanding of Newcomb’s problem into an Actions CSA, we are forced to choose a probability distribution over Omega’s prediction that is independent of the agent’s actual choice.

How Actions CSAs reckon their coulds and woulds:

\n\n


So, if causal decision theory is what I think it is, an “actions CSA” is simply a causal decision theorist.  Also, Actions CSAs will two-box on Newcomb’s problem, since, in their network, the contents of box B are independent of their choice to take box A.

\n

Alternative Two:  “Innards CSAs”:

Here, we again model the outside world however we like, but we this time have the agent’s own “innards” -- the physical circuitry that interposes between the agent’s sense-inputs and its action-outputs -- be the critical “choice” node in the causal graph.  For example, we might show Newcomb’s problem as follows:

\n

\"\"

\n

Here, the agent’s innards are allowed to cause both the agent’s actions and outside events -- so that, for example, we can represent Omega’s prediction as correlated with the agent’s action.

How Innards CSAs reckon their coulds and woulds:

\n\n

Innards CSAs will one-box on Newcomb’s problem, because they reason that if their innards were such as to make them one-box, those same innards would cause Omega, after scanning their brain, to put the $1M in box B.  And so they “choose” innards of a sort that one-boxes on Newcomb’s problem, and they one-box accordingly.

\n

Alternative Three:  “Timeless” or “Algorithm-Output” CSAs:

In this alternative, as Eliezer suggested in Ingredients of Timeless Decision Theory, we have a “Platonic mathematical computation” as one of the nodes in our causal graph, which gives rise at once to our agent’s decision, to the beliefs of accurate predictors about our agent’s decision, and to the decision of similar agents in similar circumstances.  It is the output to this mathematical function that our CSA uses as the critical “choice” node in its causal graph.  For example:

\n

\"\"

\n

How Timeless CSAs reckon their coulds and woulds:

\n\n

Like innards CSAs, algorithm-output CSAs will one-box on Newcomb’s problem, because they reason that if the output of their algorithm was such as to make them one-box, that same algorithm-output would also cause Omega, simulating them, to believe they will one-box and so to put $1M in box B.  They therefore “choose” to have their algorithm output “one-box on Newcomb’s problem!”, and they one-box accordingly.
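
(A compressed sketch, mine rather than the post's, of how the alternatives come apart on Newcomb's problem: each surgically sets a different node, and Omega's prediction either does or does not follow the surgery. The perfect predictor and the 50% fallback prior are assumptions of the sketch, not of the post.)

PREDICTION_ACCURACY = 1.0    # assume a perfect Omega, purely for simplicity

def expected_payoff(one_box, prediction_tracks_choice):
    # Payoff of a candidate choice under a given causal picture of the surgery.
    if prediction_tracks_choice:
        # Innards / algorithm-output picture: the surgically set node also drives Omega's prediction.
        p_box_b_full = PREDICTION_ACCURACY if one_box else 1 - PREDICTION_ACCURACY
    else:
        # Actions-CSA picture: the prediction node is independent of the surgically set action,
        # so some fixed prior must be chosen for it; 50% is used here as a placeholder.
        p_box_b_full = 0.5
    return p_box_b_full * 1_000_000 + (0 if one_box else 1_000)

print(expected_payoff(True, False), expected_payoff(False, False))   # 500000.0 501000.0 -> two-box
print(expected_payoff(True, True), expected_payoff(False, True))     # 1000000.0 1000.0  -> one-box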

Unlike innards CSAs, algorithm-output CSAs will also Cooperate in single-shot prisoner’s dilemmas against Clippy -- in cases where they think it sufficiently likely that Clippy’s actions are output by an instantiation of “their same algorithm” -- even in cases where Clippy cannot at all scan their brain, and where their innards play no physically causal role in Clippy’s decision.  (An Innards CSA, by contrast, will Cooperate if having Cooperating-type innards will physically cause Clippy to cooperate, and not otherwise.)

Coming up: considerations as to the circumstances under which each of the above types of agents will be useful, under different senses of “useful”.

\n

\n


\nThanks again to Z M Davis for the diagrams.

" } }, { "_id": "7PtAK5WaiwQpFrWxM", "title": "Bay Area OBLW Meetup Sep 12", "pageUrl": "https://www.lesswrong.com/posts/7PtAK5WaiwQpFrWxM/bay-area-oblw-meetup-sep-12", "postedAt": "2009-09-06T00:00:05.140Z", "baseScore": 4, "voteCount": 3, "commentCount": 7, "url": null, "contents": { "documentId": "7PtAK5WaiwQpFrWxM", "html": "

Bay Area OB/LW meetup, 6pm September 12th 2009 at the SIAI House in Santa Clara.

\n

This will also serve as my 2nd annual 29th birthday party.

" } }, { "_id": "rqt8RSKPvh4GzYoqE", "title": "Counterfactual Mugging and Logical Uncertainty", "pageUrl": "https://www.lesswrong.com/posts/rqt8RSKPvh4GzYoqE/counterfactual-mugging-and-logical-uncertainty", "postedAt": "2009-09-05T22:31:27.354Z", "baseScore": 16, "voteCount": 16, "commentCount": 21, "url": null, "contents": { "documentId": "rqt8RSKPvh4GzYoqE", "html": "

Followup to: Counterfactual Mugging.

\n

Let's see what happens with Counterfactual Mugging, if we replace the uncertainty about an external fact of how a coin lands, with logical uncertainty, for example about what is the n-th place in the decimal expansion of pi.

\n

The original thought experiment is as follows:

\n
\n

Omega appears and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But Omega also tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.

\n
\n

Let's change \"coin came up tails\" to \"10000-th digit of pi is even\", and correspondingly for heads. This gives Logical Counterfactual Mugging:

\n
\n

Omega appears and says that it has just found out that the 10000th decimal digit of pi is 8, and given that it is even, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But Omega also tells you that if the 10000th digit of pi turned out to be odd instead, it'd give you $10000, but only if you'd agree to give it $100 given that the 10000th digit is even.

\n
\n

This form of Counterfactual Mugging may be instructive, as it slaughters the following false intuition, or equivalently conceptualization of \"could\": \"the coin could land either way, but a logical truth couldn't be either way\".

\n

For the following, let's shift the perspective to Omega, and consider the problem about the 10001st digit, which is 5 (odd). It's easy to imagine that given that the 10001st digit of pi is in fact 5, and you decided to only give away the $100 if the digit is odd, then Omega's prediction of your actions will still be that you'd give away $100 (because the digit is in fact odd). Direct prediction of your actions can't include the part where you observe that the digit is even, because the digit is odd.

\n

But Omega doesn't compute what you'll do in reality, it computes what you would do if the 10001st digit of pi was even (which it isn't). If you decline to give away the $100 if the digit is even, Omega's simulation of the counterfactual where the digit is even will say that you wouldn't oblige, and so you won't get the $10000 in reality, where the digit is odd.

\n

Imagine it constructively this way: you have the code of a procedure, Pi(n), that computes the n-th digit of pi once it's run. If your strategy is

\n
\n

if(Is_Odd(Pi(n))) then Give(\"$100\");

\n
\n

then, given that n==10001, Pi(10001)==5, and Is_Odd(5)==true, the program outputs \"$100\". But Omega tests what's the output of the code on which it performed a surgery, replacing Is_Odd(Pi(n)) by false instead of true to which it normally evaluates. Thus it'll be testing the code

\n
\n

if(false) then Give(\"$100\");

\n
\n

This counterfactual case doesn't give away $100, and so Omega decides that you won't get the $10000.

\n

For the original problem, when you consider what would happen if the coin fell differently, you are basically performing the same surgery, replacing the knowledge about the state of the coin in the state of mind. If you use the (wrong) strategy

\n
\n

if(Coin==\"heads\") then Give(\"$100\")

\n
\n

and the coin comes up \"heads\", so that Omega is deciding whether to give you $10000, then Coin==\"heads\", but Omega is evaluating the modified algorithm where Coin is replaced by \"tails\":

\n
\n

if(\"tails\"==\"heads\") then Give(\"$100\")

\n
\n

Another way of intuitively thinking about Logical CM is to consider the index of the digit (here, 10000 or 10001) to be a random variable. Then, the choice of number n (value of the random variable) in Omega's question is a perfect analogy with the outcome of a coin toss.

\n

With a random index instead of \"direct\" mathematical uncertainty, the above evaluation of the counterfactual uses (say) 10000 to replace n (so that Is_Odd(Pi(10000))==false), instead of directly replacing Is_Odd(Pi(n)) with false:

\n
\n

if(Is_Odd(Pi(10000))) then Give(\"$100\");

\n
\n

The difference is that with the coin or random digit number, the parameter is explicit and atomic (Coin and n, respectively), while with the oddness of the n-th digit, the parameter Is_Odd(Pi(n)) isn't atomic. How can it be detected in the code (in the mind) — it could be written in obfuscated assembly, not even an explicit subexpression of the program? By the connection to the sense of the problem statement itself: when you are talking about what you'll do if the n-th digit of pi is even or odd, or what Omega will do if you do or don't give away $100 in each case, you are talking about exactly your Is_Odd(Pi(n)), or something from which this code will be constructed. The meaning of the procedure Pi(n) is dependent on the meaning of the problem, and through this dependency counterfactual surgery can reach down and change the details of the algorithm to answer the counterfactual query posed by the problem.
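
(To make the surgery concrete, here is the post's pseudocode redone as a small runnable Python sketch of my own; the hard-coded digit table stands in for a real Pi(n), and the small index is only there so the toy table suffices.)

PI_DECIMALS = '14159265358979323846'     # first decimal digits of pi, standing in for a real Pi(n)

def pi_digit(n):
    return int(PI_DECIMALS[n - 1])       # n-th decimal digit, valid for small n only

def agent(n, surgery=None):
    # The strategy 'give $100 iff the n-th digit is odd'.  Omega's counterfactual surgery
    # overrides the value of Is_Odd(Pi(n)) while leaving everything else untouched.
    is_odd = (pi_digit(n) % 2 == 1) if surgery is None else surgery
    return 'give $100' if is_odd else 'keep the $100'

print(agent(5))                  # the 5th digit is 9 (odd), so in reality the agent pays up
print(agent(5, surgery=False))   # what Omega evaluates: the same code with the test forced to false,
                                 # so no payment in the counterfactual, and hence no $10000 in reality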

" } }, { "_id": "A2XNfwqqqp8A5DZJn", "title": "Notes on utility function experiment", "pageUrl": "https://www.lesswrong.com/posts/A2XNfwqqqp8A5DZJn/notes-on-utility-function-experiment", "postedAt": "2009-09-05T19:10:05.400Z", "baseScore": 19, "voteCount": 19, "commentCount": 9, "url": null, "contents": { "documentId": "A2XNfwqqqp8A5DZJn", "html": "

I just finished a two-week experiment of trying to live by a point system. I attached a point value to various actions and events, and made some effort to maximize the score. I cannot say it was successful in making me achieve more than normally during the same period of time, but it made more clear some of the problems with my behaviour.

\n

Here are some notes from my experiment:

\n\n

Anyone else want to share their anti-akrasia experiments?

" } }, { "_id": "bMoQQoWyjBoG76Xjn", "title": "Freedom is slavery", "pageUrl": "https://www.lesswrong.com/posts/bMoQQoWyjBoG76Xjn/freedom-is-slavery", "postedAt": "2009-09-04T09:00:28.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "bMoQQoWyjBoG76Xjn", "html": "

These comparisons are sometimes made as arguments in favor of the former in each pair being forcibly prevented:

\n\n

The general pattern:

\n

Freely chosen X is like X coerced. And as X coerced is bad, we should prevent X (coercively if need be).

\n

Why is this error prevalent? I suspect it stems from assuming value to be in goods or activities, rather than in the minds of their beholders. Consent is important because it separates those who value something enough to do it and those who don’t. Without the idea that people value things different amounts, consent seems just another nice thing to have, but not functional. If most people wouldn’t make a choice unless forced, then that choice is bad, and so others making it should be stopped.

\n
\"Bought

Bought kidneys look like stolen kidneys; can you spot the difference?

\n

I wonder if this is related to the misunderstanding that trade must be exploitative, because employers gain and the gain must come from somewhere. This also appears to stem from overlooking the possibility that people place different values on the same things, so extra value can be created by exchange.

\n

This is related.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "RcvyJjPQwimAeapNg", "title": "Torture vs. Dust vs. the Presumptuous Philosopher: Anthropic Reasoning in UDT", "pageUrl": "https://www.lesswrong.com/posts/RcvyJjPQwimAeapNg/torture-vs-dust-vs-the-presumptuous-philosopher-anthropic", "postedAt": "2009-09-03T23:04:27.130Z", "baseScore": 37, "voteCount": 35, "commentCount": 29, "url": null, "contents": { "documentId": "RcvyJjPQwimAeapNg", "html": "

In this post, I'd like to examine whether Updateless Decision Theory can provide any insights into anthropic reasoning. Puzzles/paradoxes in anthropic reasoning is what prompted me to consider UDT originally and this post may be of interest to those who do not consider Counterfactual Mugging to provide sufficient motivation for UDT.

\n

The Presumptuous Philosopher is a thought experiment that Nick Bostrom used to argue against the Self-Indication Assumption. (SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.)

\n

\n
\n

It is the year 2100 and physicists have narrowed down the search for a theory of  everything to only two remaining plausible candidate theories, T1 and T2 (using  considerations from super-duper symmetry). According to T1 the world is very,  very big but finite, and there are a total of a trillion trillion observers in  the cosmos. According to T2, the world is very, very, very big but finite, and  there are a trillion trillion trillion observers. The super-duper symmetry  considerations seem to be roughly indifferent between these two theories. The  physicists are planning on carrying out a simple experiment that will falsify  one of the theories. Enter the presumptuous philosopher: \"Hey guys, it is  completely unnecessary for you to do the experiment, because I can already show  to you that T2 is about a trillion times more likely to be true than T1  (whereupon the philosopher runs the God’s Coin Toss thought experiment and  explains Model 3)!\"

One suspects the Nobel Prize committee to be a bit hesitant about awarding the presumptuous philosopher the big one for this contribution.

\n
\n

To make this example clearer as a decision problem, let's say that the consequence of carrying out the \"simple experiment\" is a very small cost (one dollar), and the consequence of just assuming T2 is a disaster down the line if T1 turns out to be true (we create a power plant based on T2, and it blows up and kills someone).

\n

In UDT, no Bayesian updating occurs, and in particular, you don't update on the fact that you exist. Suppose in CDT you have a prior P(T1) = P(T2) = .5 before taking into account that you exist, then translated into UDT you have Σ P(Vi) = Σ P(Wi) = .5, where Vi and Wi are world programs where T1 and T2 respectively hold. Anthropic reasoning occurs as a result of considering the consequences of your decisions, which are a trillion times greater in T2 worlds than in T1 worlds, since your decision algorithm S is called about a trillion times more often in Wi programs than in Vi programs.

\n
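A minimal numerical sketch of that weighting (my own illustration, not from the post; the observer counts come from the thought experiment, while the per-disaster utility is an arbitrary assumption):

```python
# Sketch of the UDT-style weighting described above: no updating on existence,
# but the consequences of a policy are summed over every call of your decision
# algorithm S in each possible world-program. Observer counts come from the
# thought experiment; the per-disaster cost of -10**6 is an arbitrary assumption.

P_T1, P_T2 = 0.5, 0.5      # prior weights on T1-worlds and T2-worlds
N_T1 = 10**24              # calls of S in a T1 world (a trillion trillion observers)
N_T2 = 10**36              # calls of S in a T2 world
COST_EXPERIMENT = -1       # each copy pays one dollar to run the experiment
COST_DISASTER = -10**6     # assumed cost per copy if we just assume T2 and T1 is true

def expected_utility(policy):
    total = 0.0
    for p_world, n_calls, theory in [(P_T1, N_T1, 'T1'), (P_T2, N_T2, 'T2')]:
        if policy == 'experiment':
            per_call = COST_EXPERIMENT
        else:                                   # policy == 'assume T2'
            per_call = COST_DISASTER if theory == 'T1' else 0
        total += p_world * n_calls * per_call   # consequences scale with the number of calls
    return total

print(expected_utility('experiment'))   # dominated by the trillion-times-more-numerous T2 copies
print(expected_utility('assume T2'))    # the disaster term only bites in T1 worlds
```

With these assumed numbers the aggregated one-dollar costs in T2 worlds dominate, which is exactly the trade-off the next paragraph draws out.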

Perhaps by now you've noticed the parallel between this decision problem and Eliezer's Torture vs. Dust Specks. The very small cost of the simple physics experiment is akin to getting a dust speck in the eye, and the disaster of wrongly assuming T2 is akin to being tortured. By not doing the experiment, we can save one dollar for a trillion individuals in exchange for every individual we kill.

\n

In general, Updateless Decision Theory converts anthropic reasoning problems into ethical problems. I can see three approaches to taking advantage of this:

\n
\n
  1. If you have strong epistemic intuitions but weak moral intuitions, then you can adjust your morality to fit your epistemology.
  2. If you have strong moral intuitions but weak epistemic intuitions, then you can adjust your epistemology to fit your morality.
  3. You might argue that epistemology shouldn't be linked so intimately with morality, and therefore this whole approach is on the wrong track.
\n

Personally, I have vacillated between 1 and 2. I've argued, based on 1, that we should discount the values of individuals by using a complexity-based measure. And I've also argued, based on 2, that perhaps the choice of an epistemic prior is more or less arbitrary (since objective morality seems unlikely to me). So I'm not sure what the right answer is, but this seems to be the right track to me.

" } }, { "_id": "XuLG6M7sHuenYWbfC", "title": "The Sword of Good", "pageUrl": "https://www.lesswrong.com/posts/XuLG6M7sHuenYWbfC/the-sword-of-good", "postedAt": "2009-09-03T00:53:55.237Z", "baseScore": 158, "voteCount": 127, "commentCount": 303, "url": null, "contents": { "documentId": "XuLG6M7sHuenYWbfC", "html": "

...fragments of a book that would never be written...

\n

*      *      *

\n

Captain Selena, late of the pirate ship Nemesis, quietly extended the very tip of her blade around the corner, staring at the tiny reflection on the metal.  At once, but still silently, she pulled back the sword; and with her other hand made a complex gesture.

\n

The translation spell told Hirou that the handsigns meant:  \"Orcs.  Seven.\"

\n

Dolf looked at Hirou.  \"My Prince,\" the wizard signed, \"do not waste yourself against mundane opponents.  Do not draw the Sword of Good as yet.  Leave these to Selena.\"

\n

Hirou's mouth was very dry.  He didn't know if the translation spell could understand the difference between wanting to talk and wanting to make gestures; and so Hirou simply nodded.

\n

Not for the first time, the thought occurred to Hirou that if he'd actually known he was going to be transported into a magical universe, informed he was the long-lost heir to the Throne of Bronze, handed the legendary Sword of Good, and told to fight evil, he would have spent less time reading fantasy novels.  Joined the army, maybe.  Taken fencing lessons, at least.  If there was one thing that didn't prepare you for fantasy real life, it was sitting at home reading fantasy fiction.

\n

Dolf and Selena were looking at Hirou, as if waiting for something more.

\n

Oh.  That's right.  I'm the prince.

\n

Hirou raised a finger and pointed it around the corner, trying to indicate that they should go ahead -

\n

With a sudden burst of motion Selena plunged around the corner, Dolf following hard on her heels, and Hirou, startled and hardly thinking, moving after.

\n

(This story ended up too long for a single LW post, so I put it on yudkowsky.net.
Do read the rest of the story there, before continuing to the Acknowledgments below.)

\n

\n

 

\n
\n

 

\n

Acknowledgments:

\n

I had the idea for this story during a conversation with Nick Bostrom and Robin Hanson about an awful little facet of human nature I call \"suspension of moral disbelief\".  The archetypal case in my mind will always be the Passover Seder, watching my parents and family and sometimes friends reciting the Ten Plagues that God is supposed to have visited on Egypt.  You take drops from the wine glass - or grape juice in my case - and drip them onto the plate, to symbolize your sadness at God slaughtering the first-born male children of the Egyptians.  So the Seder actually points out the awfulness, and yet no one says:  \"This is wrong; God should not have done that to innocent families in retaliation for the actions of an unelected Pharaoh.\"  I forget when I first realized how horrible that was - the real horror being not the Plagues, of course, since they never happened; the real horror is watching your family not notice that they're swearing allegiance to an evil God in a happy wholesome family Cthulhu-worshiping ceremony.  Arbitrarily hideous evils can be wholly concealed by a social atmosphere in which no one is expected to point them out and it would seem awkward and out-of-place to do so.

\n

In writing it's even simpler - the author gets to create the whole social universe, and the readers are immersed in the hero's own internal perspective.  And so anything the heroes do, which no character notices as wrong, won't be noticed by the readers as unheroic.  Genocide, mind-rape, eternal torture, anything.

\n

Explicit inspiration was taken from this XKCD (warning: spoilers for The Princess Bride), this Boat Crime, and this Monty Python, not to mention that essay by David Brin and the entire Goblins webcomic.  This Looking For Group helped inspire the story's title, and everything else flowed downhill from there.

" } }, { "_id": "zmmB5nu4odF8k6wAc", "title": "The Featherless Biped", "pageUrl": "https://www.lesswrong.com/posts/zmmB5nu4odF8k6wAc/the-featherless-biped", "postedAt": "2009-09-02T17:47:00.573Z", "baseScore": 4, "voteCount": 21, "commentCount": 64, "url": null, "contents": { "documentId": "zmmB5nu4odF8k6wAc", "html": "

The classical understanding of categories centers on necessary and sufficient properties.  If a thing has X, Y, and Z, we say that it belongs to class A; if it lacks them, we say that it does not.  This is the model of how humans construct and recognize categories that philosophers have held since the days of Aristotle.

\n

Cognitive scientists found that the reality isn't that simple.

\n

Human categorization is not a neat and precise process.  When asked to explain the necessary features of, say, a bird, people cannot.  When confronted with collections of stimuli and asked to determine which represent examples of 'birds', people find it easy to accept or reject things that have all or none of the properties they associate with that concept; when shown entities that share some but not all of the critical properties, people spend much more time trying to decide, and their decisions are tentative.   Their responses simply aren't compatible with binary models.

\n

Concepts are associational structures.  They do not divide the world clearly into two parts.  Not all of their features are logically necessary.  The recognition of features produces an activation, the strength of which depends not only on the degree to which the feature is present but also on a weighting factor.  When the sum of the activations crosses a threshold, the concept becomes active and the stimulus is said to belong to that category.  The stronger the total activation, the more clearly the stimulus can be said to embody the concept.

\n
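A toy sketch of that kind of graded, weighted activation (my own illustration; every feature, weight, and threshold below is an invented assumption):

```python
# Toy version of the weighted, thresholded activation model described above.
# Features, weights, and the threshold are all invented for illustration.

BIRD_WEIGHTS = {'feathers': 0.5, 'flies': 0.3, 'lays_eggs': 0.2, 'beak': 0.3}
THRESHOLD = 0.7   # activation needed for the concept 'bird' to become active

def activation(features, weights):
    # Each feature contributes (degree of presence) * (weight); sum the contributions.
    return sum(weights.get(f, 0.0) * strength for f, strength in features.items())

examples = {
    'robin':   {'feathers': 1.0, 'flies': 1.0, 'lays_eggs': 1.0, 'beak': 1.0},
    'penguin': {'feathers': 1.0, 'flies': 0.0, 'lays_eggs': 1.0, 'beak': 1.0},
    'bat':     {'feathers': 0.0, 'flies': 1.0, 'lays_eggs': 0.0, 'beak': 0.0},
}

for name, feats in examples.items():
    a = activation(feats, BIRD_WEIGHTS)
    verdict = 'bird' if a >= THRESHOLD else 'not clearly a bird'
    print(f'{name}: activation {a:.2f} -> {verdict}')
```

The graded totals, rather than a yes/no membership test, are what make borderline cases slow and tentative.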

Does this sound familiar?  It should for us - we have the benefit of hindsight.  We can recognize that pattern - it's how neural networks function.  Or to put it another way, it's how neurons work.

\n

But wait!  There's more!

\n

Try applying that model to virtually every empirical fact we've acquired regarding how people produce their conclusions.  For example, our beliefs about how seriously we should take a hypothetical problem scenario depend not on a rigorous statistical analysis, but on a combination of how vividly we feel about the scenario and how frequently it appears in our memory.  People are convinced not only by the logical structure of an argument but also by the traits of the entities presenting it and the specific way in which the arguments are made.  And so on, and so forth.

\n

Most human behavior derives directly from the behavior of the associational structures in our minds.

\n

To put it another way:  what we call 'thinking' doesn't involve rational thought.  It's *feeling*.  People ponder an issue, then respond in the way that they feel stands out the most from the sea of associations.

\n

Consider the implications for a while.

" } }, { "_id": "gxxpK3eiSQ3XG3DW7", "title": "Decision theory: Why we need to reduce “could”, “would”, “should”", "pageUrl": "https://www.lesswrong.com/posts/gxxpK3eiSQ3XG3DW7/decision-theory-why-we-need-to-reduce-could-would-should", "postedAt": "2009-09-02T09:23:34.936Z", "baseScore": 38, "voteCount": 36, "commentCount": 48, "url": null, "contents": { "documentId": "gxxpK3eiSQ3XG3DW7", "html": "

(This is the second post in a planned sequence.)
 
Let’s say you’re building an artificial intelligence named Bob.  You’d like Bob to sally forth and win many utilons on your behalf.  How should you build him?  More specifically, should you build Bob to have a world-model in which there are many different actions he “could” take, each of which “would” give him particular expected results?  (Note that e.g. evolution, rivers, and thermostats do not have explicit “could”/“would”/“should” models in this sense -- and while evolution, rivers, and thermostats are all varying degrees of stupid, they all still accomplish specific sorts of world-changes.  One might imagine more powerful agents that also simply take useful actions, without claimed “could”s and “woulds”.)

My aim in this post is simply to draw attention to “could”, “would”, and “should”, as concepts folk intuition fails to understand, but that seem nevertheless to do something important for real-world agents.   If we want to build Bob, we may well need to figure out what the concepts “could” and “would” can do for him.*

\n

Introducing Could/Would/Should agents:

Let a Could/Would/Should Algorithm, or CSA for short, be any algorithm that chooses its actions by considering a list of alternatives, estimating the payoff it “would” get “if” it took each given action, and choosing the action from which it expects highest payoff.

That is: let us say that to specify a CSA, we need to specify:

\n
\n
  1. A list of alternatives a_1, a_2, ..., a_n that are primitively labeled as actions it “could” take;
  2. For each alternative a_1 through a_n, an expected payoff U(a_i) that is labeled as what “would” happen if the CSA takes that alternative.
\n


To be a CSA, the algorithm must then search through the payoffs for each action, and must then trigger the agent to actually take the action a_i for which its labeled U(a_i) is maximal.

\n

\"\"

Note that we can, by this definition of “CSA”, create a CSA around any made-up list of “alternative actions” and of corresponding “expected payoffs”.

\n
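To make the definition concrete, here is a minimal sketch of my own (the action labels and payoffs are arbitrary, precisely to underline the point just made):

```python
# Minimal Could/Would/Should Algorithm in the sense defined above: a primitive
# list of “could” alternatives, a labeled “would” payoff U(a_i) for each, and a
# choice of whichever alternative carries the maximal labeled payoff.

def csa_choose(alternatives, U):
    best, best_payoff = None, float('-inf')
    for a in alternatives:            # the primitively labeled “could”s
        payoff = U[a]                 # the labeled “would” for taking a
        if payoff > best_payoff:
            best, best_payoff = a, payoff
    return best                       # the action the agent is triggered to take

# Any made-up labels satisfy the definition:
alternatives = ['a_1', 'a_2', 'a_3']
U = {'a_1': 2.0, 'a_2': 5.0, 'a_3': -1.0}
print(csa_choose(alternatives, U))    # -> a_2
```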


The puzzle is that CSAs are common enough to suggest that they’re useful -- but it isn’t clear why CSAs are useful, or quite what kinds of CSAs are what kind of useful.  To spell out the puzzle:

Puzzle piece 1:  CSAs are common.  Humans, some (though far from all) other animals, and many human-created decision-making programs (game-playing programs, scheduling software, etc.), have CSA-like structure.  That is, we consider “alternatives” and act out the alternative from which we “expect” the highest payoff (at least to a first approximation).  The ubiquity of approximate CSAs suggests that CSAs are in some sense useful.

Puzzle piece 2:  The naïve realist model of CSAs’ nature and usefulness doesn’t work as an explanation.

\n

That is:  many people find CSAs’ usefulness unsurprising, because they imagine a Physically Irreducible Choice Point, where an agent faces Real Options; by thinking hard, and choosing the Option that looks best, naïve realists figure that you can get the best-looking option (instead of one of those other options, that you Really Could have gotten).

\n

But CSAs, like other agents, are deterministic physical systems.  Each CSA executes a single sequence of physical movements, some of which we consider “examining alternatives”, and some of which we consider “taking an action”.  It isn’t clear why or in what sense such systems do better than deterministic systems built in some other way.

\n

Puzzle piece 3:  Real CSAs are presumably not built from arbitrarily labeled “coulds” and “woulds” -- presumably, the “woulds” that humans and others use, when considering e.g. which chess move to make, have useful properties.  But it isn’t clear what those properties are, or how to build an algorithm to compute “woulds” with the desired properties.

Puzzle piece 4:  On their face, all calculations of counterfactual payoffs (“woulds”) involve asking questions about impossible worlds.  It is not clear how to interpret such questions.

\n

Determinism notwithstanding, it is tempting to interpret CSAs’ “woulds” -- our U(a_i)s above -- as calculating what “really would” happen, if they “were” somehow able to take each given action. 

\n

But if agent X will (deterministically) choose action a_1, then when he asks what would happen “if” he takes alternative action a_2, he’s asking what would happen if something impossible happens.

\n

If X is to calculate the payoff “if he takes action a_2” as part of a causal world-model, he’ll need to choose some particular meaning of “if he takes action a_2” – some meaning that allows him to combine a model of himself taking action a_2 with the rest of his current picture of the world, without allowing predictions like “if I take action a_2, then the laws of physics will have been broken”.

We are left with several questions:

\n\n

 

\n
\n

*A draft-reader suggested to me that this question is poorly motivated: what other kinds of agents could there be, besides “could”/“would”/“should” agents?  Also, how could modeling the world in terms of “could” and “would” not be useful to the agent?
   
My impression is that there is a sort of gap in philosophical wariness here that is a bit difficult to bridge, but that one must bridge if one is to think well about AI design.  I’ll try an analogy.  In my experience, beginning math students simply expect their nice-sounding procedures to work.  For example, they expect to be able to add fractions straight across.  When you tell them they can’t, they demand to know why they can’t, as though most nice-sounding theorems are true, and if you want to claim that one isn’t, the burden of proof is on you.  It is only after students gain considerable mathematical sophistication (or experience getting burned by expectations that don’t pan out) that they place the burden of proof on the theorems, assume theorems false or un-usable until proven true, and try to actively construct and prove their mathematical worlds.

Reaching toward AI theory is similar.  If you don’t understand how to reduce a concept -- how to build circuits that compute that concept, and what exact positive results will follow from that concept and will be absent in agents which don’t implement it -- you need to keep analyzing.  You need to be suspicious of anything you can’t derive for yourself, from scratch.  Otherwise, even if there is something of the sort that is useful in the specific context of your head (e.g., some sort of “could”s and “would”s that do you good), your attempt to re-create something similar-looking in an AI may well lose the usefulness.  You get cargo cult could/woulds.

\n

+ Thanks to Z M Davis for the above gorgeous diagram.

" } }, { "_id": "CfJ98wSHBExBr2orX", "title": "The puzzle continues", "pageUrl": "https://www.lesswrong.com/posts/CfJ98wSHBExBr2orX/the-puzzle-continues", "postedAt": "2009-09-02T09:00:47.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "CfJ98wSHBExBr2orX", "html": "

A puzzle from ages ago:

\n

What do these things have in common? Nerves, emotions, morality, prices.

\n

\n

They all send signals from distant parts of a coordinating system to a part which makes decisions. The signals are not just information, but costs and benefits to the decision maker so that the decision maker’s interests align with those of the whole. This allows cooperation of larger systems in space and time.

\n

Nerves mean that damage to your toes translates to pain in your mind. This makes your mind act in the interest of your toes (which is of course in the interest of your mind eventually). If your foot is numb your mind is not taking into account the costs and benefits your foot faces, so eventually your foot often becomes injured. Nerves allow larger bodies to coordinate.

\n

Emotions sometimes mean that success or failure of my future self translates to positive or negative feelings now. This makes my current self act in the interests of my future self. If something bad might happen I am scared. If my long term prospects look good I am happy. If your emotions are numb you can make decisions that are bad for your long term wellbeing. Some emotions allow temporally longer humans to coordinate.

\n

Morality means that costs or benefits I cause to others lead to harm or good for me, either in the currency of moral feelings or terms in my calculated decisions (I make no claims here about how people do morality). This is the source of altruism, and of the complaints that it isn’t really altruism. If I donate money to charity I feel good (or calculatedly note that I have increased utility). If I hurt you I feel guilty. If your morality is numb you can hurt other people. Morality allows larger groups of people to coordinate.

\n

Prices are the celebrated example; they mean that the costs and benefits to others across the economy feed into mine when I make choices that affect others. This makes me act efficiently if all goes well. I leave my house if someone else wants it more than I do and eat foods that are easier for someone across the world to make. If the farming industry is numb nobody knows when they are doing harm or good regarding it, nor cares so much, and we cause dead weight losses. Prices allow even larger groups of people to coordinate.

\n

Can you think of more?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "2r7kp9QSNNkF2Lpd7", "title": "Knowing What You Know", "pageUrl": "https://www.lesswrong.com/posts/2r7kp9QSNNkF2Lpd7/knowing-what-you-know", "postedAt": "2009-09-02T05:24:11.734Z", "baseScore": 21, "voteCount": 18, "commentCount": 12, "url": null, "contents": { "documentId": "2r7kp9QSNNkF2Lpd7", "html": "

From Kahneman and Tversky:

\n
\n
\"A person is said to employ the availability heuristic whenever he estimates frequency or probability by the ease with which instances or associations could be brought to mind\"
\n
\n
I doubt that's news to any LessWrong readers - the availability heuristic isn't exactly cutting edge psychology. But the degree to which our minds rely on mental availability goes far beyond estimating probabilities. In a sense, every conscious thought is determined by how available it is - by whether it pops into our heads or not. And it's not something we even have the illusion of control over - if we knew where we were going, we'd already be there. (If you spend time actually looking directly at how thoughts proceed through your head, the idea of free will becomes more and more unrealistic. But I digress). What does and doesn't come to mind has an enormous impact on our mental functioning - our higher brain functions can only process what enters our working memory.
\n
Whether it's called  salience, availability, or vividness, marking certain things important relative to other things is key to the proper functioning of our brains. Schizophrenia (specifically psychosis) has been described as \"a state of aberrant salience\", where the brain incorrectly assigns importance to what it processes. And a quick perusal of the  list of cognitive biases  reveals a large number directly tied to mental availability - there's the obvious availability heuristic, but there's also the simulation heuristic (a special case of the availability heuristic), base-rate neglect (abstract probabilities aren't salient, so aren't taken into account), hyperbolic discounting (the present is more salient than the future), the conjunction fallacy (ornate, specific descriptions make something less likely but more salient), the primacy/recency bias, the false consensus effect, the halo effect, projection bias, etc etc etc. Even consciousness seems to be based on things hitting a certain availability threshold and entering working memory. And it's not particularly surprising that A) we tend to process only what we mark as important and B) our marking system is flawed. Our minds are efficient, not perfect - a \"good enough\" solution to the problem of finding food and avoiding lions.
\n
\n
That doesn't mean that availability isn't a complex system. It doesn't seem to be a fixed number that gets assigned when a memory is written - it's highly dependent on the context of the situation. A perfect example of this is priming. Simply seeing a picture is enough to make certain things more available, and that small change in availability is all that's needed to change how you vote. In state-dependent memory, information that's been absorbed while under a certain state can only be retrieved under that same state - the context of the situation is needed for activation. It's why students are told to study under the same conditions that the test will be taken, and why musicians are told not to always practice sitting in the same position, to avoid inadvertently setting up a context dependent state. And anecdotally, I notice that my mind tends to slide between memories that make me happy when I'm happy, and memories that make me upset when I'm angry (moods are thought to be important context cues). In general, the more available something is, the less context is needed to activate it, and the less available, the more context dependent it becomes. Frequency, prototypicality, and abstractness also contribute to availability. Some things are so available that they're activated in improper contexts - this is how figurative language is thought to work. But some context is always required, or our minds would be nothing but a greatest hits of our most salient thoughts, on a continuous loop.
\n
The problem with this approach is that availability isn't always assigned the way we'd prefer it. If I'm at a bar and want to tell a hilarious story, I can't just think of \"funny stories\" and activate all my great bar stories - they have to be triggered by some memory. More perniciously, it's possible (and in my experience, all too likely) to have a thought or take an action without having access to the beliefs that produced it. If, for example, I'm playing a videogame, I find it almost impossible to tell someone a sequence of buttons for something unless I'm holding the controller in my hands. Or I might avoid seeing a movie because I think it's awful, but I won't be able to recall why I think it's awful. Or I'll get into an argument with someone because he disagrees with something I think is obvious, but I won't immediately be able to summon the reasons that generated that obviousness. And this lack of availability can go beyond simple bad memory.
\n
From  Block 2008:
\n
\n
There is a type of brain injury which causes a syndrome known as 'visuo-spatial extinction'. If the patient sees a single object on either side, the patient can identify it, but if there are objects on both sides, the patient can identify only the one on the right and claims not to see the one on the left. However as Geraint Rees has shown in two fMRI studies of one patient (known as 'GK'), when GK claims not to see a face on the left, his fusiform face area (on the right - which is fed by the left side of space) lights up almost as much as - and in overlapping areas involving the fusiform face area - when he reports seeing the face.
\n
\n
The brain can detect a face without passing that information to our working memory. What's more, when subjects with visuo-spatial extinction are asked to compare objects on both sides - as either 'the same' or 'different' - they're more than 88% accurate, despite not being able to 'see' the object on the left. Judgements can be made based on something that we have no cognitive access to.
\n
In Landman et al. (2003), subjects were shown a circle of eight rectangles for a short period, then a blank screen, then the same circle where a line points to one of the rectangles, which may or may not have rotated 90 degrees. The number of correct answers suggest subjects could track about four different rectangles, in line with data suggesting that our working memory for visual perceptions is about four. Subjects were then given the same test, except the line pointing to the rectangle appeared on the blank screen, between when the first circle and the second circle of rectangles is shown. On this test, subjects were able to track between six and seven rectangles by keeping the first circle of rectangles in memory, and comparing the second circle to it (according to the subjects). They're able to do this despite the fact that they're unable to access the shape of each individual rectangle. The suggested reason for this is that our memory for visual perceptions exceeds what we're capable of fitting into working memory - that we process the information without it entering our conscious minds[1]. It seems perfectly possible for us to know something without knowing we know it, and to believe something without having access to why we believe it.
\n
This, of course, isn't good. If you're building complex interconnected structures of beliefs (and you are), you need to be sure the ground below you is sturdy. And there's a strong possibility that if you can't recall the reason behind something, your brain will invent one for you. The good news is that memories don't seem to be deleted - we can lose access, but the memory itself doesn't fade[2]. The problem is one of keeping access open. One way is to simply keep your memory sharp - and the web is full of tips on that. A better way might be to leverage your mind's propensity for habituation - force yourself to trace your chain of belief down to the base, and eventually it will start to become something you do automatically. This isn't perfect either - it's not something you can do for the fast pace of day-to-day life, and it in itself is probably subject to a whole series of biases. It might even be worth it to write your beliefs down - this method has the dual benefits of creating a hard copy for you to reference later, and increasing the availability of each belief through the act of writing and recalling. There's no ideal solution - we're limited in what new mental structures we can create, and we're forced to rely on the same basic (and imperfect) set of cognitive tools. Since availability seems to be such an integral part of the brain, forcing availability on those things we want to come to mind might be our best bet.
\n

\n
Notes
\n
1: If it's hard to understand this experiment, look at the linked Block paper - it provides diagrams.
\n
2: It can, however, be re-written. Memory seems to work like a save-as function, being saved (and distorted slightly) every time it's recalled.
" } }, { "_id": "xBhnsNzrr5JDQiCRL", "title": "LW/OB Quotes - Fall 2009", "pageUrl": "https://www.lesswrong.com/posts/xBhnsNzrr5JDQiCRL/lw-ob-quotes-fall-2009", "postedAt": "2009-09-01T15:11:01.113Z", "baseScore": 5, "voteCount": 5, "commentCount": 50, "url": null, "contents": { "documentId": "xBhnsNzrr5JDQiCRL", "html": "

This is a monthly thread for posting any interesting rationality-related quotes you've seen on LW/OB.

\n\n
\"this thread is insanely incestuous\" - Z_M_Davis
\n

 

" } }, { "_id": "neGQaayydS5gPYfpa", "title": "Rationality Quotes - September 2009", "pageUrl": "https://www.lesswrong.com/posts/neGQaayydS5gPYfpa/rationality-quotes-september-2009", "postedAt": "2009-09-01T15:06:57.167Z", "baseScore": 4, "voteCount": 3, "commentCount": 105, "url": null, "contents": { "documentId": "neGQaayydS5gPYfpa", "html": "

\n

A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.

\n\n
\"A witty saying proves nothing.\" -- Voltaire
" } }, { "_id": "jyz7S4M8GhLmSoPtg", "title": "Open Thread: September 2009", "pageUrl": "https://www.lesswrong.com/posts/jyz7S4M8GhLmSoPtg/open-thread-september-2009", "postedAt": "2009-09-01T10:54:03.786Z", "baseScore": 4, "voteCount": 5, "commentCount": 186, "url": null, "contents": { "documentId": "jyz7S4M8GhLmSoPtg", "html": "

I declare this Open Thread open for discussion of Less Wrong topics that have not appeared in recent posts.

" } }, { "_id": "fArRAdovrnukCzXex", "title": "Do fetching models stave off shallowness?", "pageUrl": "https://www.lesswrong.com/posts/fArRAdovrnukCzXex/do-fetching-models-stave-off-shallowness", "postedAt": "2009-09-01T06:00:53.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "fArRAdovrnukCzXex", "html": "
\"Many

Prevalent advertising portraying an inaccurate proportion of humanity as too attractive is blamed for misinforming people on the value of attractiveness.

\n

Unattractive females, both self rated and judged externally, find features of male physical attractiveness, such as facial masculinity and voice, less appealing than attractive females do.

\n

This study suggests such behavior is to avoid wasting effort on guys who won’t be interested. That hypothesis beats the others because women’s preferences adapt fast to circumstances. In the study women saw pictures of other women who were more or less attractive. As their own perceived attractiveness went down or up accordingly, their preferences for male facial masculinity did too.

\n

Do physically unattractive people actually believe one another to be hot? This study suggests not:

\n
When less attractive people accept less attractive dates, do they persuade themselves that the people they choose to date are more physically attractive than others perceive them to be? Our analysis of data from the popular Web site HOTorNOT.com suggests that this is not the case: Less attractive people do not delude themselves into thinking that their dates are more physically attractive than others perceive them to be.
\n


\n

Instead, the same study suggests that less attractive people claim to put greater weight on characteristics other than attractiveness in mate choice:

\n

..participants’ own attractiveness was significantly correlated with their standardized weights for physical attractiveness (r = .60, p-rep = .98), but negatively correlated with their standardized weights for sense of humor (r = −.44, p-rep = .91). Overall, these results suggest that more attractive people and less attractive people consider different criteria in date selection: Less attractive people tend to place less weight on physical attractiveness and greater weight on non-attractiveness-related attributes such as sense of humor.

\n

Together these pieces of research suggest that encouraging everyone to feel less attractive should decrease the adoration people (at least claim to) direct at the best looking of us and bring about more appreciation of traits which we consider more virtuous to care about. One method for achieving this, similar to the one shown effective in one of the studies above, would be to surround everyone with pictures of super-attractive models. The advertising industry is making steady progress on this front, yet is blamed for encouraging society to care about looks. Would we be even more shallow without this stream of superior gorgeousness?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "9uAr5W7YBrsGHwB5c", "title": "Optimal Strategies for Reducing Existential Risk", "pageUrl": "https://www.lesswrong.com/posts/9uAr5W7YBrsGHwB5c/optimal-strategies-for-reducing-existential-risk", "postedAt": "2009-08-31T15:52:53.822Z", "baseScore": 4, "voteCount": 10, "commentCount": 21, "url": null, "contents": { "documentId": "9uAr5W7YBrsGHwB5c", "html": "

I've been talking to a variety of people about this recently, and it was suggested that people (including myself) might benefit from a LessWrong discussion on the topic. I've been thinking about it on my own for a year, which took me through Neuroscience, Computer Science, and International Security Policy. I'm hoping and finding that through discussion, a much greater variety of options can be proposed and considered, and those with particular experience or observations can have others benefit from their knowledge. I've been very happy to find there are a number of people seriously working towards this already (still far fewer than we might need), and their deliberations and learning would be particularly valuable.

\r\n

This is primarily about careers and other long-term focused efforts (academic research and writing on the side, etc.), not smaller incremental tools such as motivation and akrasia discussions. Where you should be applying your efforts, not how (much). Unless there's a lot of interest, it might also be good to otherwise avoid discussions on self-improvement in general and how to best realize these long term concerns, bringing those up elsewhere or in a separate post.

\r\n

A few initial thoughts:

\r\n" } }, { "_id": "JC5AcJPfXZ5BhEeE8", "title": "Great post on Reddit about accepting atheism", "pageUrl": "https://www.lesswrong.com/posts/JC5AcJPfXZ5BhEeE8/great-post-on-reddit-about-accepting-atheism", "postedAt": "2009-08-30T20:56:22.927Z", "baseScore": 18, "voteCount": 20, "commentCount": 44, "url": null, "contents": { "documentId": "JC5AcJPfXZ5BhEeE8", "html": "

Go see for yourself.

\n

\n
\n

To reject heaven and accept atheism - is not merely about science, facts, beliefs, etc - it is about accepting the reality of all those who have died - being really dead. It is accepting the same reality about everyone you love NOW one day being - really dead. It is accepting the same reality about YOU one day.

\n

The older you are, the more dear loved ones have passed away, the harder it will be to reject the notions of religion. To reject religion requires the re-mourning of everyone who you love who has died.

\n
\n

The complete post is quite long and just as good as this quote.

\n

 

" } }, { "_id": "NJ8k2RTwy3ELmwYZT", "title": "Argument Maps Improve Critical Thinking", "pageUrl": "https://www.lesswrong.com/posts/NJ8k2RTwy3ELmwYZT/argument-maps-improve-critical-thinking", "postedAt": "2009-08-30T17:34:09.150Z", "baseScore": 26, "voteCount": 29, "commentCount": 18, "url": null, "contents": { "documentId": "NJ8k2RTwy3ELmwYZT", "html": "

Charles R. Twardy provides evidence that a course in argument mapping, using a particular software tool, improves critical thinking. The improvement in critical thinking is measured by performance on a specific multiple choice test (California Critical Thinking Skills Test). This may not be the best way to measure rationality, but my point is that unlike almost everybody else, there was measurement and statistical improvement!

\n

Also, his paper is the best, methodologically, that I've seen in the field of \"individual rationality augmentation research\".

\n

To summarize my (clumsy) understanding of the activity of argument mapping:

\n

One takes a real argument in natural language. (Op-eds are a good source of short arguments; philosophy is a source of long arguments.) Then elaborate it into a tree structure, with the main conclusion at the root of the tree. The tree has two kinds of nodes (it is a bipartite graph). The root conclusion is a \"claim\" node. Every claim node has approximately one sentence of English text associated. The children of a claim are \"reasons\", which do NOT have English text associated. The children of a reason are claims. Unless I am mistaken, the intended meaning of the connection from a claim's child (a reason) to the parent is implication, and the meaning of a reason is the conjunction of its children.

\n

In elaborating the argument, it's often necessary to insert implicit claims. This should be done abiding by the \"Principle of Charity\", that you should interpret the argument in such a way as to make it the strongest argument possible. 

\n

There are two syntactic rules which can easily find flaws in argument maps:

\n

The Rabbit Rule: Informally, \"You can't conclude something about rabbits if you haven't been talking about rabbits\". Formally, \"Every meaningful term in the conclusion must appear at least once in every reason.\"

\n

The Holding Hands Rule: Informally, \"We can't be connected unless we're holding hands\". Formally, \"Every meaningful term in one premise of a reason must appear at least once in another premise of that reason, or in the conclusion\".

\n
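A rough sketch of how these two checks could be mechanized (my own illustration, not Twardy's or Rationale's implementation; "meaningful term" is crudely approximated by dropping a small stopword list):

```python
# Toy checker for the Rabbit Rule and the Holding Hands Rule described above.
# A reason is modeled as a list of premise sentences supporting one claim.

STOPWORDS = {'the', 'a', 'an', 'is', 'are', 'of', 'and', 'or', 'not',
             'all', 'every', 'some', 'to', 'that'}

def terms(sentence):
    # Crude stand-in for 'meaningful terms': lowercase words minus stopwords.
    return {w.strip('.,;:').lower() for w in sentence.split()} - STOPWORDS

def check_reason(conclusion, premises):
    problems = []
    reason_terms = set().union(*(terms(p) for p in premises))
    # Rabbit Rule: every meaningful term in the conclusion must appear in the reason.
    missing = terms(conclusion) - reason_terms
    if missing:
        problems.append(f'Rabbit Rule violated: {missing} appear out of nowhere')
    # Holding Hands Rule: every term in a premise must appear in another premise
    # of the same reason, or in the conclusion.
    for i, p in enumerate(premises):
        linked = terms(conclusion).union(
            *(terms(q) for j, q in enumerate(premises) if j != i))
        dangling = terms(p) - linked
        if dangling:
            problems.append(f'Holding Hands Rule violated in premise {i + 1}: {dangling}')
    return problems or ['No rule violations found']

print(check_reason('Socrates is mortal',
                   ['Socrates is human', 'Every human is mortal']))
```

A real tool would need synonym handling and hand-tagging of terms; the point here is only that both rules are purely syntactic and therefore cheap to check.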

I have tried the Rationale tool, and it seems afflicted with creeping featurism. My guess is the open-source tool Freemind could support argument mapping as described in Twardy's article, if the user is disciplined about it.

\n

I'd love comments offering alternative rationality-improvement tools. I'd prefer tools intended for solo use (that is, prediction markets are awesome but not what I'm looking for) and downloadable rather than web services, but anything would be great.

" } }, { "_id": "SkkhysBQgK79RvLbf", "title": "Cookies vs Existential Risk", "pageUrl": "https://www.lesswrong.com/posts/SkkhysBQgK79RvLbf/cookies-vs-existential-risk", "postedAt": "2009-08-30T03:56:31.701Z", "baseScore": 18, "voteCount": 14, "commentCount": 23, "url": null, "contents": { "documentId": "SkkhysBQgK79RvLbf", "html": "

I've been thinking for a while now about the possible trade-offs between present recreation and small reductions in existential risk, and I've finally gotten around to a (consequentialist) utilitarian analysis.

\n

ETA: Most of the similar mathematical treatments I've seen assume a sort of duty to unrealized people, such as Bostrom's \"Astronomical Waste\" paper. In addition to avoiding that assumption, my aim was to provide a more general formula for someone to use, in which they can enter differing beliefs and hypotheses. Lastly I include 3 examples using widely varying ideas, and explore the results.

\n

Let's say that you've got a mind to make a batch of cookies. That action has a certain amount of utility, from the process itself and/or the delicious cookies. But it might lessen (or increase) the chances of you reducing existential risk, and hence affect the chance of existential disaster itself. Now if these cookies will help x-risk reduction efforts (networking!) and be enjoyable, the decision is an easy one. Same thing if they'll hurt your efforts and you hate making, eating, and giving away cookies. Any conflict arises when cookie making/eating is in opposition to x-risk reduction. If you were sufficiently egoist then risk of death would be comparable to existential disaster, and you should consider the two risks together. For readability I’ll refer simply to existential risk.

\n

The question I'll attempt to answer is: what reduction in the probability of existential disaster makes refraining from an activity an equally good choice in terms of expected utility? If you think that by refraining and doing something else you would reduce the risk at least that much, then rationally you should pursue the alternative. If refraining would cut risk by less than this value, then head to the kitchen.

\n

*ASSUMPTIONS: For simplicity I'll treat existential disaster as an abstract singular event, which we’ll survive or not. If we do, it is assumed that we do so in a way such that there are no further x-risks. Further I'll assume the utility realized past that point is not dependent on the cookie-making decision in question, and that the utility realized before that point is not dependent on whether existential disaster will occur. The utility calculation is also unbounded, being easier to specify. It is hoped that those not seeking to approximate having such a utility function can modify the treatment to serve their needs. *

\n

E(U|cookies) = E(U|cookies, existential disaster) + U_post-risk-future • P(x-risk survival | cookies)

\n

E(U|alternative) = E(U|alternative, existential disaster) + U_post-risk-future • P(x-risk survival | alternative)

\n

Setting these two expected utilities to be equal we get:     

\n

E(U|cookies, existential disaster) - E(U|alternative, existential disaster) = U_post-risk-future • ( P(x-risk survival | alternative) - P(x-risk survival | cookies))

\n

or

\n

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future

\n

Where         ΔP(x-risk survival) = P(x-risk survival | alternative) - P(x-risk survival | cookies) 

\n

 and           ΔE(U|existential disaster) = E(U|cookies, existential disaster) - E(U|alternative, existential disaster)

\n

*I’m assuming both of these quantities are positive. Otherwise, there’s no conflict.*

\n

Now to determine the utilities:

\n

 \"\"

\n

base value(utility/time) is a constant for normalizing to ΔE(U|existential disaster) and factors out of our ratio, but it can give us a scale of comparison. Obviously you should use the same time scale for the integral limits. s_i(t) (range ≥ 0) is the multiplier for the change in subjective time due to faster cognition, h_i(t) (range = all real numbers) is the multiplier for the change in the base value(utility/time), and D_i(t) (0 ≤ range ≤ 1) is your discount function. All of these functions are with reference to each morally relevant entity i, assuming yourself as i = 1.

\n

There are of course a variety of ways to do this kind of calculation. I felt the multiplication of a discount function with increases in both subjective time quantity and quality, integrated over the time period of interest and summed across conscious entities, was both general and intuitive.

\n

There're far too many variables here to summarize all possibilities with examples, but I'll do a few, with both pure egoist and agent-neutral utilitarian perspectives (equal consideration of yours and others' wellbeing). I'll assume the existential disaster would occur in 30 years, keeping in mind that the prior/common probability of disaster doesn't actually affect the calculation. I’ll also set most of the functions to constants to keep it straightforward.

\n

Static World

\n

Here we assume that life span does not increase, nor does cognitive speed or quality of life. You're contemplating making cookies, which will take 1 hour. base value(utility/time) of current life is 1 utility/hour, you expect to receive 2 extra utility by making cookies and will also obtain 1 utility/hour you live in a post-risk-future, which will be 175,200 hours over an assumed extra 20 years. For simplicity we'll assume no discounting, and start off with a pure egoist perspective. Then:

\n

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 2/175,200 = 0.00114%, which might be too much to expect from working for one hour instead.

\n

For an agent-neutral utilitarian, we'll assume there's another 2 utility that others gain from your cookies. We'll include only the ≈6.7 billion currently existing people, who have a current world life expectancy of 67 years and average age of 28.4, which would give them each 75,336 utility over 8.6 years in a post-risk-future. Then:

\n

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 4/(75,336 • 6,700,000,000) = 0.000000000000792%. You can probably reduce existential risk this much with one hour of work, but then you’re probably not a pure agent-neutral utilitarian with no time discounting. I’m certainly not.

\n
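A quick script reproducing the two Static World numbers (the inputs are the assumptions just stated; nothing new is added):

```python
# Quick check of the Static World arithmetic above (no life extension, no
# speed-up, no discounting, base value of 1 utility/hour).

def delta_p_required(delta_u_given_disaster, u_post_risk_future):
    # The ΔP(x-risk survival) at which refraining and indulging are equally good.
    return delta_u_given_disaster / u_post_risk_future

HOURS_PER_YEAR = 8760

# Pure egoist: 2 extra utility from cookies vs. 20 post-risk years at 1 utility/hour.
u_post_egoist = 20 * HOURS_PER_YEAR                 # 175,200
print(delta_p_required(2, u_post_egoist))           # ~1.14e-5, i.e. 0.00114%

# Agent-neutral: 4 utility from cookies; 6.7e9 people, 8.6 post-risk years each.
u_post_neutral = 6.7e9 * (8.6 * HOURS_PER_YEAR)     # 6.7e9 people x 75,336 each
print(delta_p_required(4, u_post_neutral))          # ~7.9e-15, i.e. ~0.0000000000008%
```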

Conservative Transhuman World

\n

In this world we’ll assume that people live about a thousand years, a little over 10 times conventional expectancy. We’ll also assume they think 10 times as fast and each subjective moment has 10 times higher utility. I’m taking that kind of increase from the hedonistic imperative idea, but you’d get the same effect by just thinking 100 times faster than we do now. Keeping it simple I’ll treat these improvements as happening instantaneously upon entering a post-risk-future. On a conscious level I don’t discount posthuman futures, but I’ll set D_i(t) = e^(-t/20) anyway. For those who want to check my math, the integral of that function from 30 to 1000 is 4.463.

\n
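For readers who do want to check, that integral works out as:

$$\int_{30}^{1000} e^{-t/20}\,dt \;=\; 20\left(e^{-30/20} - e^{-1000/20}\right) \;\approx\; 20 \times 0.2231 \;\approx\; 4.463$$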

Though I phrased the equations in terms of baked goods, they of course apply to any decision that trades enjoyment against existential risk. Let’s assume you’re able to forgo all pleasure now in exchange for the greatest future pleasure, through existential risk reduction. In our calculation, this course of action is “alternative”, and living like a person unaware of existential risk is “cookie”. Our base value(utility/time) is an expected 1 utility/year of “normal” life (a very different scale from the last example), and your total focus would realize a flat 0 utility for those first 30 years. For a pure egoist:

\n

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 30/446.26 = 6.72%. This might be possible with 30 years of the total dedication we’re considering, especially with so few people working on this, but maybe it wouldn’t.

\n

For our agent-neutral calculation, we’ll assume that your total focus on the large scale results in 5 fewer utility for those who won’t end up having as much fun with the “next person” as with you, subtracted by the amount you might uniquely improve the lives of those you meet while trying to save the world.  If they all realize utility similar to yours in a post-risk world, then:

\n

ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 35/(446.26 • 6,700,000,000) = 0.00000000117%. With 30 years of dedicated work this seems extremely feasible.

\n

And if you hadn’t used a discount rate in this example, the ΔP(x-risk survival) required to justify those short-term self-sacrifices would be over 217 times smaller.

\n

Nick Bostrom’s Utopia

\n

Lastly I’ll consider the world described in Bostrom’s “Letter From Utopia”. We’ll use the same base value(utility/time) of 1 utility/year of “normal” life as the last example. Bostrom writes from the perspective of your future self: “And yet, what you had in your best moment is not close to what I have now – a beckoning scintilla at most.  If the distance between base and apex for you is eight kilometers, then to reach my dwellings would take a million light-year ascent.” Taken literally this translates to h_i(t) = 1.183 × 10^18. I won’t bother treating s_i(t) as more than unity; though likely to be greater, that seems like overkill for this calculation. We’ll assume people live till most stars burn out, approximately 10^14 years from now (if we find a way during that time to stop or meaningfully survive the entire heat death of the universe, it may be difficult to assign any finite bound to your utility). I’ll start by assuming no discount rate.

\n
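To spell out where those two large numbers come from (my own back-of-the-envelope check, taking one light-year as roughly 9.46 × 10^12 km):

$$h_i(t) \approx \frac{10^{6}\,\text{ly} \times 9.46\times 10^{12}\,\tfrac{\text{km}}{\text{ly}}}{8\,\text{km}} \approx 1.18\times 10^{18},
\qquad
U_{\text{post-risk-future}} \approx 1.18\times 10^{18}\,\tfrac{\text{utility}}{\text{yr}} \times 10^{14}\,\text{yr} \approx 1.18\times 10^{32}.$$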

Assuming again that you’re considering focusing entirely on preventing existential risk, then ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 30/(1.183 × 10^32) = 0.0000000000000000000000000000254%. Even if you were almost completely paralyzed, able only to blink your eyes, you could pull this off. For an agent-neutral utilitarian, the change in existential risk could be about 7 billion times smaller and still justify such dedication. While I don’t believe in any kind of obligation to create new people, if our civilization did seed the galaxy with eudaimonic lives, you might sacrifice unnecessary daily pleasures for a reduction in risk 1,000,000,000,000,000,000,000 times smaller still. Even with the discount function specified in the last example, a pure egoist would still achieve the greatest expected utility or enjoyment from an extreme dedication that achieved an existential risk reduction of only 0.000000000000000568%.

\n

Summary

\n

The above are meant only as illustrative examples. As long as we maintain our freedom to improve and change, and do so wisely, I put high probability on post-risk-futures gravitating in the direction of Bostrom’s Utopia. But if you agree with or can tolerate my original assumptions, my intention is for you to play around, enter values you find plausible, and see whether or how much your beliefs justify short term enjoyment for its own sake.

\n

Lastly, keep in mind that maximizing your ability to reduce existential risk almost certainly does not include forgoing all enjoyment. For one thing, you'll have at least a little fun fighting existential risk. Secondly, we aren’t (yet) robots and we generally need breaks, some time to relax and rejuvenate, and some friendship to keep our morale up (as well as stimulated or even sane). Over time, habit-formation and other self-optimizations can reduce some of those needs, and that will only be carried through if you don’t treat short term enjoyment as much more than an element of reducing existential risk (assuming your analysis suggests you avoid doing so). But everyone requires “balance”, by definition, and raw application of willpower won’t get you nearly far enough. It’s an exhaustible resource, and while it can carry you through several hours or a few days, it’s not going to carry you through several decades.

\n

The absolute worst thing you could do, assuming once again that your analysis justifies a given short term sacrifice for greater long term gain, is to give up. If your resolve is about to fail, or already has, just take a break to really relax, however long you honestly need (and you will need some time). Anticipating how effective you’ll be in different motivational states (which can’t be represented by a single number), and how to best balance motivation and direct application, is an incredibly complex problem which is difficult or impossible to quantify. Even the best solutions are approximations, people usually apply themselves too little and sometimes too much. But to do so and suffer burnout provides no rational basis for throwing up your hands in desperation and calling it quits, at least for longer than you need to. To an extent that we might not yet be able to imagine, someday billions or trillions of future persons, including yourself, may express gratitude that you didn’t.

\n

 

\n

 

\n

 

\n

 

" } }, { "_id": "EpLofhghaxDbhx7Wo", "title": "Some counterevidence for human sociobiology", "pageUrl": "https://www.lesswrong.com/posts/EpLofhghaxDbhx7Wo/some-counterevidence-for-human-sociobiology", "postedAt": "2009-08-29T02:08:10.855Z", "baseScore": 2, "voteCount": 15, "commentCount": 31, "url": null, "contents": { "documentId": "EpLofhghaxDbhx7Wo", "html": "

I love seeing counter-evidence for everything. I estimate that while most of my beliefs are true (otherwise I wouldn't believe them in the first place), a small percentage is almost certainly completely false - and I don't really have any reliable way of telling the two apart.

\n

Indiscriminately looking for counter-evidence for all of them can be very rewarding - the ones that are true are much more likely to withstand the assault than the ones that aren't. Yes, I might ignore counter-evidence of something that's false, or accept it for something that's true, ending up worse off, but it seems plausible that on average it should improve the quality of my beliefs.

\n

For example some of the standard beliefs about human sociobiology that seemed to be extremely widely held here are:

\n\n

Charting Parenthood: Statistical Portrait of Fathers and Mothers in America disagrees with them.

\n\n

These are not direct tests of sociobiological claims, so what we have is not exactly what we would like to, but I find them to be quite convincing counter-evidence. My belief in these sociobiological claims is definitely lower than before, at least as far as they concern modern world, even though I can imagine more focused studies changing my mind back.

\n

More counter-evidence for things we commonly believe here, sociobiological or otherwise, welcomed in comments.

" } }, { "_id": "pbYRxTLAcxAQsGvJ7", "title": "Don't be Pathologically Mugged!", "pageUrl": "https://www.lesswrong.com/posts/pbYRxTLAcxAQsGvJ7/don-t-be-pathologically-mugged", "postedAt": "2009-08-28T21:47:56.463Z", "baseScore": 8, "voteCount": 17, "commentCount": 30, "url": null, "contents": { "documentId": "pbYRxTLAcxAQsGvJ7", "html": "

In a lot of discussion here, there's been talk about how decision algorithms would do for PD, Newcomb's Problem, Parfit's Hitchhiker, and Counterfactual Mugging.

\n

There's a reasonable chain there, but especially for the last one, there's a bit of a concern I've had about the overall pattern. Specifically, while we're optimizing for extreme cases, we want to make sure we're not hurting our decision algorithm's ability to deal with less bizarre cases.

\n

\n

Specifically, part of the reasoning for the last one could be stated as \"be/have the type of algorithm that would be expected to do well when a Counterfactual Mugger showed up. That is, would have a net positive expected utility, etc...\" This is reasonable, especially given that there seem to be lines of reasoning (like Timeless Decision Theory) that _automatically_ get this right using the same rules that would get it to succeed with PD or any other such thing. But I worry about, well, actually it would be better for me to show an example:

\n

Consider the Pathological Decision Challenge.

\n

Omega shows up and presents a Decision Challenge, consisting of some assortment of your favorite decision theory puzzlers. (Newcomb, etc etc etc...)

\n

Unbeknownst to you, however, Omega also has a secret additional test: If the decisions you make are all something _OTHER_ than the normal rational ones, then Omega will pay you some huge superbonus of utilons, vastly dwarfing any cost of losing all of the individual challenges...

\n

However, Omega also models you and if you would have willingly \"failed\" _HAD YOU KNOWN_ about the extra challenge above, (but not this extra extra criteria), then you get no bonus for failing everything.

\n

 

\n

 

\n

A decision algorithm that would tend to win in this contrived situation would tend to lose in regular situations, right? Again, yes, I can see the argument that being the type of algorithm that can be successfully counterfactually mugged can arise naturally from a simple rule that automatically gives the right answer for many other more reasonable situations. But I can't help but worry that as we construct more... extreme cases, we'll end up with this sort of thing, where optimizing our decision algorithm to win in the latest \"decision challenge\" stops it from doing as well in more, for lack of a better word, \"normal\" situations.

\n

Further, I'm not sure yet how to more precisely separate out pathological cases from more reasonable \"weird\" challenges. Just to clarify, this post isn't a complaint or direct objection to considering things like Newcomb's problem, just a concern I had about a possible way we might go wrong.

" } }, { "_id": "qvnZG2gdBXZ5aWLqL", "title": "The Twin Webs of Knowledge", "pageUrl": "https://www.lesswrong.com/posts/qvnZG2gdBXZ5aWLqL/the-twin-webs-of-knowledge", "postedAt": "2009-08-28T09:45:42.067Z", "baseScore": 6, "voteCount": 15, "commentCount": 72, "url": null, "contents": { "documentId": "qvnZG2gdBXZ5aWLqL", "html": "

Related to and partially inspired by: Joy in the Merely Real; Entangled Truths, Contagious Lies.

\r\n
\r\n

Where does the newborn go from here? The net is vast and infinite. -- Ghost in the Shell

\r\n
\r\n

There are those among us who resist the steady march of science, who feel that the reductionist creed takes away the beauty of things, who would rather enjoy sacred mysteries instead of naturalistic explanations. I suspect not many of them are reading this site. But even among aspiring rationalists, there are probably many who still feel some sympathy for that line of thought, who cannot but feel a twinge of pain when something mysterious ends up explained.

To them I reply thusly:

Picture in your mind a vast, glowing network of things, at the center of which are you. I visualize it akin to a great, vast city at night, a city that never sleeps. For some reason, all the lights in the buildings are out, so all the light comes from the myriad cars, trains and buses bustling in the night. Greg Egan's phrase, \"making the pulses race along the tracks like a quadrillion cars shuttling between the trillion junctions of a ten-thousand-tiered monorail\" comes to mind. Though it is not the tracks or roads themselves that we are the most interested in, but the hubs where they meet and from which they emerge.

\r\n


The hubs in the network are many in kind: some are other people, some are books and items in which information is stored, some are past events and experiments once conducted. The brightest hubs are the ones closest to you, from which information is flowing to you directly: they are your glowing constellation of stars. Beyond them, the hubs glow less and less bright as they get more distant: finally they are but dim, barely visible dots in a sea of blackness, spread out like the rocks at the bottom of the seafloor. Further still, even those dots vanish, but you know it doesn't mean there aren't any: it simply means that you don't know what and where they are.

Now overlay this network of lights on a landscape. Not just the physical landscape of the Earth and the universe, though the net covers that as well. To see things in full, you must also overlay the network on the maps showing the sum of all human knowledge: that which is known in maths, physics, biology, literature, and all the sciences and arts besides. By themselves, all these landscapes are dark and impenetrable: with the great web of knowledge overlaid, the hubs illuminate their surroundings, revealing the landscape's contents. You could say that the people and events illuminate the landscapes around them, but you could likewise also say that they are drawing their light from the landscapes, for every pulse of information that moves across the lines in the network has its origins in one of the landscapes.

As you move and seek out new people, you forge new links and make previously dim hubs glow more brightly; where you lose contact with people and lose interest in things, connections fade away and hubs disappear back into the darkness. You yourself are glowing as well: as an infant, your glow was dim and weak, your parents and elder siblings the main things you were directly connected with. As you sucked in and absorbed their knowledge, your light grew brighter. Now, as you absorb more and more, you become better capable of understanding all you see: you learn to find understanding by simply looking at things, seeing in them much that you were previously blind to. Thus the radius of light surrounding you grows ever wider, in all the dimensions and fields you choose to pursue.

For this network of things is mirrored by another network in your mind: the network of things you have learnt. Like the other, this network is constantly changing, new nodes and connections appearing as you learn new things, vanishing as you forget them. When someone or something in the other web illuminates its surroundings, you can take what you see around it and store it in your mind, link it with the other previous things you already know. For as long as you do not forget this new thing, you will retain a connection to the source you learned it from. Thus the places you once saw will remain illuminated for as long as you remember, though as time passes you only retain the memory of what those places were once like, not necessarily the way they are now: and you may need to revisit them to find out that they have changed.

As you come to know more and more related things, those things are bound tightly together in your mind's web. The places the knowledge was drawn from will likewise stay brightly and evenly lit, pushing back the darkness. As you expand your mental web of things more and more, entirely new fields of understanding will open themselves to you. Below the abstract fields of sciences and arts are the fields you mastered back as a child, the fields that your webs were once limited to and have now branched out from. If you are not for some reason mute, you will have learned the art of producing speech; if you are not disabled, you will have learned to walk. But even if those fields weren't known to you, the fact that you understand what I am writing means that you have learnt the basic fields of human understanding. In some way, you have learned to communicate, and you have learned to reason and to think. From this foundation, vast depths of knowledge have become open to you.

Fear not that new understanding would render things cold and boring. If you avoid new understanding, you are limiting the reach of what you can learn. If you embrace and actively seek out new understanding, you will spread out across the entire universe.

" } }, { "_id": "DE7v8eACTZ2W9z8zu", "title": "Paper: Testing ecological models", "pageUrl": "https://www.lesswrong.com/posts/DE7v8eACTZ2W9z8zu/paper-testing-ecological-models", "postedAt": "2009-08-27T22:12:58.541Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "DE7v8eACTZ2W9z8zu", "html": "

You may be interested in a paper of medium age I just read. Testing ecological models: the meaning of validation (PDF) tackles a problem many of you are familiar with in a slightly different context.

\n

To entice you to read it, here are some quotes from its descriptions of other papers:

\n
\n

Holling (1978) pronounced it a fable that the purpose of validation is to establish the truth of the model…

\n
\n
\n

Overton (1977) viewed validation as an integral part of the modelling process…

\n
\n
\n

Botkin (1993) expressed concern that the usage of the terms verification and validation was not consistent with their logical meanings…

\n
\n
\n

Mankin et al. (1977) suggested that the objectives of model-building may be achieved without validating the model…

\n
\n

I have another reason for posting this; I’m looking for more papers on model validation, especially how-to papers. Which ones do you consider most helpful?

" } }, { "_id": "AYKL87S4QRDrsrsvY", "title": "Pittsburgh Meetup: Survey of Interest", "pageUrl": "https://www.lesswrong.com/posts/AYKL87S4QRDrsrsvY/pittsburgh-meetup-survey-of-interest", "postedAt": "2009-08-27T16:18:27.792Z", "baseScore": 10, "voteCount": 9, "commentCount": 7, "url": null, "contents": { "documentId": "AYKL87S4QRDrsrsvY", "html": "

I plan to host an OBLW meetup in Pittsburgh, PA, in the next couple of weeks, but would like to know the level of interest first. Please comment if you think you'd attend.

\n

(The planned location is a house in Squirrel Hill, about a mile and a half from Carnegie Mellon University; but if you attend CMU and would strongly prefer a location on campus, please say so.)

" } }, { "_id": "h5K34vLcTEKpMxCZj", "title": "A Rationalist's Bookshelf: The Mind's I (Douglas Hofstadter and Daniel Dennett, 1981)", "pageUrl": "https://www.lesswrong.com/posts/h5K34vLcTEKpMxCZj/a-rationalist-s-bookshelf-the-mind-s-i-douglas-hofstadter", "postedAt": "2009-08-26T19:08:30.789Z", "baseScore": 21, "voteCount": 19, "commentCount": 7, "url": null, "contents": { "documentId": "h5K34vLcTEKpMxCZj", "html": "

\"\"

\n

When the call to compile a reading list for new rationalists went out, contributor djcb responded by suggesting The Mind's I: Fantasies and Reflections on Self and Soul, a compilation of essays, fictions and excerpts \"composed and arranged\" by Douglas Hofstadter and Daniel Dennett. Cut to me peering guiltily over my shoulder, my own copy sitting unread on the shelf, peering back.

The book presents Hofstadter and Dennett's co-curation of 27 pieces, some penned by the curators themselves, meant to \"reveal\" and \"make vivid\" a set of \"perplexities,\" to wit: \"What is the mind?\" \"Who am I?\" \"Can mere matter think or feel?\" \"Where is the soul?\" Two immediate concerns arise. First, The Mind's I's 1981 publication date gives it access to the vast majority of what's been thought and said about these questions, but robs it of any intellectual progress toward the answers made in the nearly three decades since. (This turns out not to be an issue, as most of the answers seem to have drawn no closer in the 1980s, 1990s or 2000s.) Second, those sound suspiciously similar to questions hazily articulated by college freshmen, less amenable to \"rational inquiry\" than to \"dorm furniture and bad weed.\" They don't quite pass the \"man test,\" a reversal of the fortune cookie \"in bed\" game: simply tack \"man\" onto the beginning of each question and see who laughs. \"Man, who am I?\" \"Man, where is the soul?\" \"Man, can matter think or feel?\"

Hofstadter and Dennett's fans know, however, that their analyses rise a cut above, engaged as they are in the admirable struggle to excise the navel-gazing from traditionally navel-gazey topics. The beauty is that they've always accomplished this, together and separately, not by making these issues less exciting but by making them more so. Their clear, stimulating exegeses, explorations and speculations brim with both the enthusiasm of the thrilled neophyte and the levelheadedness of the seasoned surveyor. They even do it humorously, Hofstadter with his zig-zaggy punniness and Dennett with his wit that somehow stays just north of goofy. Thus armed, they've taken on such potentially dangerous topics as whether words and thoughts follow rules, how the animate emerges from the inanimate (Hofstadter's rightly celebrated Gödel, Escher, Bach: An Eternal Golden Braid) and consciousness (most of Dennett's career), on the whole safely.

But obviously this is not a \"pure\" (whatever that might mean) Hofstadter-Dennett joint; rather, their editorial choices compose one half and their personal commentaries — \"reflections,\" they banner them — on the fruits of those choices compose the other. Nearly every selection, whether a short story, article, novel segment or dialogue, leads into an original discussion and evaluation by, as they sign them, D.R.H. and/or D.C.D. They affirm, they contradict, they expand, they question, they veer off in their own directions; the reflections would make a neat little book on the topics at hand by themselves.

Terribly inelegant as this strategy is, perhaps I'll cover the pieces one-by-one:

\n
    \n
  1. The first section, on self and identity, opens strong with Jorge Luis Borges, for my money the finest short fictionalist of ideas... ever, probably. His well-known \"Borges and I\" plays with the distinction between Borges the man and Borges the public author, treating the two as ontologically distinct. Even if that idea has passed into the realm of old hat, the story containing it holds up by the razor-sharpness of its language, even in translation: \"It would be an exaggeration to say that ours is a hostile relationship: I live, let myself go on living, so that Borges may contrive his literature, and this literature justifies me.\"
  2. \n
  3. The mystic Douglas Harding, in \"On Having No Head\", recounts the moment he discovered he had no head. As he describes the various consequences of this realization, the essay becomes essentially a riff on the fact that it's impossible for anybody to directly see their own, physical head and thus that they know of its existence that much less definitively. At some point, this wears out its welcome; Harding stretches an intellectual snack into a dinner, following the meal with a coda about how, aw, it's all just semantic confusion over the verb to see.
  4. \n
  5. Harold Morowitz's \"Rediscovering the Mind\" has not, it must be said, stuck deeply in my own. My forgetfulness may be due in part to the fact that reductionist examination of the mind and the challenges such an approach faces have entered, and remained in, common discourse since the article saw Psychology Today publication in 1980, so its ideas couldn't strike me with what I assume to be the intended force of novelty. As a brief introduction to the problems of reductionism and the mind, though, I imagine it's pretty effective.
  6. \n
  7. Kicking off the section on the concept of the soul, Alan Turing's groundbreaking 1950 Mind article \"Computing Machinery and Intelligence\" proposes his now-eponymous test for machine intelligence. One might assume the years have been especially unkind to Turing's (at least nominally) technology-minded essay and Dennett and Hofstadter's accompanying commentary, but no, machine intelligence remains elusive, and thus both texts merit continued digestion.
  8. \n
  9. Hofstadter extends the Turing talk with \"The Turing Test: A Coffeehouse Conversation\", setting up an intellectual triangle between \"Chris, a physics student; Pat, a biology Student; and Sandy, a philosophy student.\" (The unisex names turn out to fold into one of the discussion's main points, though I found keeping everyone straight a tad difficult.) The three throw down their collective six cents on the possibilities, implications and validity of the famous test. While illuminating, the piece spreads its content way too thin, its 23 pages littered with conversational detritus: \"That's a sad story.\" \"Good question.\" \"How so?\" But to be fair, these problems hamper most written dialogues, as does the reader's sneaking suspicion that they're being somehow led down the garden path. As dialogues — trialogues? — go, though, this one serves a nutritional portion.
  10. \n
  11. \"The Princess Ineffabelle\", the first of the collection's three imaginings by Polish science fiction writer Stanislaw Lem, envisions a sort of proto-virtual-reality device that can load up an entire era and its people on punch cards (!) and simulate it with find-grained precision. A king, seeking a princess extant only within the machine's world, inquires as to how he might go about having himself digitized and inserted into said world. But the digital king wouldn't really be the king king, right? Or would that matter?
  12. \n
  13. Terrell Miedaner's eerie \"The Soul of Martha, A Beast\" envisions a courtroom demonstration wherein a chimpanzee is wired to a device that translates its brain's neural patterns into a simple English vocabulary. A discussion ensues about whether the animal, \"uttering\" strings like \"Hello! Hello! I Martha Happy Happy Chimp,\" truly merits the designation \"intelligent,\" after which the researcher puts his charge to death:\n
    As the unsuspecting chimpanzee placed the poisoned gift into her mouth and bit, Belinsky conceived of an experiment he had never before considered. He turned on the switch. \"Candy Candy Thank You Belinsky Happy Happy Martha.\"

    Then her voice stopped of its own accord. She stiffened, then relaxed in her master's arms, dead.

    But brain death is not immediate. The final sensory discharge of some circuit within her inert body triggered a brief burst of neural pulsations decoded as \"Hurt Martha Hurt Martha.\"

    Nothing happened for another two seconds. Then randomly triggered neural discharges no longer having anything to do with the animal's lifeless body send one last pulsating signal to the world of men.

    \"Why Why Why Why —\"

    A soft electrical click stopped the testimony.
    \nThe operative concept, discussed in Hofstadter's reflection, emerges as the determination of what degree of linguistic evidence, if any, indicates the presence of \"intelligence,\" \"consciousness,\" a \"soul\" — pick one or more of your favorite fuzzily-defined concepts and attempt to determine what separates them. All the book's pieces present more questions than answers, and Miedaner's first especially so. Still, it stays with you, as does his next piece...
  14. \n
  15. \"The Soul of the Mark III Beast\", in which a lawyer invites a timid woman to \"kill\" a robot. The mechanical creature, a steely cross between a mouse and a beetle, \"eats\" electrical current from the wall, \"flees\" its pursuer's hammer blows and \"bleeds\" oil when damages. These points of superficial congruence with the animal kingdom seriously freak the woman out, and she's really got to maintain to finish the job. In short: the fuzzy-to-nonexistent boundary between the sentient and the nonsentient, illustrated (in prose).
  16. \n
  17. Allen Wheelis' \"Spirit\", which heads the section on the mind's physical foundation (also known as the brain), comes off as relatively insubstantial but addresses concerns certain readers may harbor. To wit: it feels as if we humans possess some ineffable \"spirit.\" But it's tough to pin down, though it may animate the rest of the natural world as well. Hofstadter boils it down skillfully in the reflection: \"Wheelis portrays the eerie, disorienting view that modern science has given us of our place in the scheme of things. Many scientists, not to mention humanists, find this a very difficult view to swallow and look for some kind of spiritual essence, perhaps intangible, that would distinguish living beings, particularly humans, from the inanimate rest of the universe. How does anima come from atoms?\" A Big Question indeed.
  18. \n
  19. \"Selfish Genes and Selfish Memes\" is a selection from Richard Dawkins' The Selfish Gene. If you have not read this book, minimize your browser and do so. I'll wait.
  20. \n
  21. \"Prelude... Ant Fugue\" is a selection from Douglas Hofstadter's Gödel, Escher, Bach. If you have not read this book, minimize your browser and do so. I'll wait. (It's the dialogue comparing the human brain to an ant farm, which I still find ever-so-slightly mindblowing to this day.)
  22. \n
  23. Our mental hardware undergoes the severest possible parting-out in Arnold Zuboff's \"The Story of a Brain\", a fiction and thought experiment — in several senses of the term — where a group of scientists remove the healthy brain from a young man's otherwise abnormally decaying body, stick it in a vat and give it \"experiences\" by way of electrical stimulation. But then a drunken night watchman accidentally separates the brain's hemispheres, damage the scientists attempt to repair with remote communication devices allowing neurons from one half to stimulate the others'. Over the next thousand years, thanks to widespread scientific-community interest, fiddly readjustment of the apparatus and a general shortage of brains in vats, each of this brain's individual neurons finds its way, step by logical step, to a separate laboratory, all supposedly linked together. And the labs occasionally replace their neurons. It's the brain as Abraham Lincoln's proverbial original axe: the blade's been replaced once and the handle twice. At what exact point can we no longer call it a brain, as we normally understand the concept? As with many of the other concepts on which the book touches, discrete boundaries remain elusive.
  24. \n
  25. Daniel Dennett's \"Where Am I?\" leads into the section on mind-as-software. (See also the video dramatization!) The story follows a fictionalized version of Dennett himself as he's hired on to a secret government project to dig up a brain-destroying underground warhead. Specifically, Dennett's meant to go down there and dig it up by hand. Removing his own brain and installing it safely in a vat, the government dudes set it up so Dennett can remotely control his own body, in a way, but feel, more or less — he compellingly describes the newly-introduced little technical quirks — as if he's still a brain and body organically united. But who's the \"real\" Dennett? Shades of first-year philosophy classes' rhetorical questions about who you'd be if your divided brain was split between two bodies, I know, but Dennett presents it in a delightfully entertaining way, as is his wont.
  26. \n
  27. With \"Where Was I?\", David Hawley Sanford takes another angle on Dennett's concept, positing a different government operation — again, top-secret — to develop devices that transfer remotely-gathered sense experiences so accurately to the local user's body that the meaning of reference to his actual location — and how one goes about determining his actual location — grows muddled, questionable, a matter of unsettlable debate.
  28. \n
  29. The next chapter excerpts Justin Leiber's Beyond Rejection, a sci-fi novel about a murdered man who wakes up to find his brain loaded — via brain-backup tapes, a standard piece of personal technology in Leiber's imagined future — into a new body: specifically, a woman's. (More specifically, a woman with a tail's. The tail is not explained, at least in the reprinted segment.) The ten pages include a suitably creepy sequence wherein the protagonist wakes up, disoriented due to incomplete brain-body synchronization and disturbed by the two new \"dead cancerous mounds\" of \"disconnected, nerveless jelly\" — breasts, in other words — he'll have to learn to live with. While not especially striking technically or biologically, the passage definitely evokes the right set of feelings.
  30. \n
  31. A selection from Rudy Rucker's slightly goofy-sounding novel Software illustrates, after a fashion, the questions of what specific component or components, if any, drive consciousness, and what self-consciousness has to do with that consciousness. And, as Dennett's reflection clarifies, if a supposedly conscious entity's consciousness were to cease existing, how would we know?
  32. \n
  33. Christopher Cherniak's short story \"The Riddle of the Universe and Its Solution\" posits a computer program whose output, when viewed in full by a human, forces that human's brain into an infinite loop — \"perhaps even the ultimate Zen state of satori,\" Hofstadter reflects — \"locking it up\" for good. Before slipping into this coma, each victim utters the word \"Aha!\" This analogizes the human brain — and only the human brain, since the program, \"the Gödel sentence for human Turing machine,\" is shown not to induce the coma in apes — to an actual computer in terms of operating with enough logical strictness to wilfully — loaded word, I know, and so do all the authors involved — incapacitate itself. Hofstadter ties this into the broader topic of self-referential loops and what they might already have to do with the mind.
  34. \n
  35. The book's second Stanislaw Lem selection, \"The Seventh Sally or How Trurl's Own Perfection Led to No Good\", opens the section on created selves and free will. I found it just slightly too weirdly-written to draw much from directly, but Dennett and Hofstadter's much clearer reflection — no pun intended — drops a few intriguing thoughts about looking for \"souls\" inhabiting simulated worlds.
  36. \n
  37. The third Lem piece, \"Non Serviam\", comes immediately after. Though thematically similar to its predecessor — the nature of simulation, the parallels between simulated world and the non-simulated world — it's also slightly less opaque. (Slightly less.)
  38. \n
  39. Raymond Smullyan's dialogue \"Is God a Taoist?\" has a mortal pleading with his creator to strip him of free will:\n
    GOD: Why would you wish not to have free will?

    MORTAL: Because free will means moral responsibility, and moral responsibility is more than I can bear.

    GOD: Why do you find moral responsibility so unbearable?

    MORTAL: Why? I honestly can’t analyze why; all I know is that I do.

    GOD: All right, in that case suppose I absolve you from all moral responsibility, but still leave you with free will. Will this be satisfactory?

    MORTAL: (after a pause): No, I am afraid not.
    \nAnd it goes on like this, the mortal desperately trying to reason with the god and find a means of being freed from what's bothering him about morality, goodness, responsibility and choice. Eventually, matters either evolve or devolve, depending upon how you look at it, to whether the god or the mortal exists, how one can know the other exists, whether the god is the mortal or the mortal the god, who's on first, what's on second and so on and so forth. In his reflection, Hofstadter references an apropos Marvin Minsky quote: \"Logic doesn’t apply to the real world.\"
  40. \n
  41. A second dose of Borges comes in \"The Circular Ruins\", the story of an isolated wizard who dreams up an actual human being. When he's imagined this potential boy's every possible detail, he requests that the god Fire create him. Fire complies, incarnating the wizard's vision, but in such a way that he's still not quite real enough to be burned by fire (the element). When the wizard walks into a fire, he finds that he doesn't burn — and thus is, himself, someone else's dream. We're back in Intro to Philosophy's territory, in a way: are you dreaming right now, or are you not? How do you know? \"Is this philosophical play with the ideas of dreaming and reality just idle?\" Dennett asks. \"Isn't there a no-nonsense 'scientific' stance from which we objectively distinguish between the things that are really there and mere fictions? Perhaps there is, but then on which side of the divide we put ourselves? Not our physical bodies, but our selves?\" The answers appear to be \"nah\" and \"we don't know,\" or maybe \"mu.\"
  42. \n
  43. John Searle's \"Minds, Brains, and Programs\" searches for the seat of intelligence with what's now called the \"Chinese room\" thought experiment, in which one imagines a human sealed in a room under whose door an unseen interlocutor passes slips of paper with sentences written in Chinese. With no understanding of the Chinese language, the man in the room follows a series of mechanistic procedures to write out a reply on another slip and pass it back under the door. Repeat. If the fellow on the door's other side believes he's conducting a conversation in writing with a genuine Chinese speaker — the rule-following scribbler inside having thus passed a sort of Turing test — who's to say that somewhere in the man, the rules and the slips of paper, there is not a genuine understanding of Chinese? But of course we find that ridiculous, so there's got to be something within the brain that we can use as a line of demarcation. Nothing we've identified yet, or that may be identifiable at all — but something. Dennett and Hofstadter don't find this line of thought convincing, identifying a few sleight-of-hand points in their reflection, but I didn't feel it a waste of time to hear the notion proposed. Proposed rather unconvincingly, sure, but quite articulately! (More so than my summary gives it credit for, certainly.)
  44. \n
  45. The brief but piquant \"An Unfortunate Dualist\" by Raymond Smullyan envisions a devout dualist in great pain. Though he'd like to kill himself, he fears hurting others, committing moral crime and/or enduring punishment in the afterlife. Fortunately, he finds a drug that destroys only the soul, leaving the body intact and operational as before. A friend secretly injects him with the drug the night before he goes out to pick up a dosage himself. Upon ingesting it of his own volition, the dualist, of course, feels no different: disappointed, he believes himself to still possess a soul and endure suffering. \"Doesn't all this suggest,\" Smullyan asks, \"that perhaps there might be something just a little wrong with dualism?\" Indeed, but who's really a dualist anymore?
  46. \n
  47. Thomas Nagel answers his essay's title question \"What Is It Like to Be a Bat?\" with the argument that we can't know, because we're humans, inescapably, and they're bats. So we could well ask what it would be like for a human to be a bat — what it would be like to have our human senses and perceptions transformed into human senses and perceptions that more closely resemble what we think bats have — but not what it's like to simply be a bat. Hofstadter takes this pretty far in his reflection, asking such questions as \"What is it like to hear one's native language without understanding it?\" and \"What is it like to hate chocolate (or your personal favorite flavor)?\" Fans will enjoy his punning of Nagel's title, \"What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?\" (Illustration of baseball player and bee included.)
  48. \n
  49. Completing the Smullyan hat trick, \"An Epistemological Nightmare\" depicts a man's consultations with an \"experimental epistemologist.\" Infatuated with his latest piece of in-office gear, a \"cerebroscope\" that supposedly reads the patient's every neuron, the epistemologist puts the poor fellow through the wringer by using the device to reject his every statement about his beliefs, his beliefs about his beliefs, and his beliefs about his beliefs about his beliefs. Like the book's first Smullyan selection, this dialogue isn't without its Abbott-and-Costello elements: the absurdity reaches such a height that the epistemologist must eventually forsake the machine in order to break the loop he's created, by virtue of the very trust he's placed in it, between the cerebroscope and his own brain.
  50. \n
  51. \"A Conversation With Einstein's Brain\" is a selection from Douglas Hofstadter's Gödel, Escher, Bach. If you have not read this book, minimize your browser and do so. I'll wait. (It's the dialogue, even better than the one about the ant farm, that proposes the \"copying\" of a brain into book form, and then letting the books \"interact.\")
  52. \n
  53. The reflection-less \"Fiction\" by Robert Nozick wraps the book. The piece at first seems to be narrated by a fictional character: indeed, its first sentence is \"I am a fictional character.\" But this character goes on to assert that the reader, too, is a fictional character, and that this piece one fictional character reads and another narrates is, in fact, a work of non-fiction, as are all works — works within this fictional world in which we live, that is. But who, then, wrote (or currently writes) our world?
  54. \n
\n

As a book for new rationalists, The Mind's I would be best offered as a jolt, a set of mind-stretching exercises that clear the road for the long, incompletable journey to rationality. A reader expecting any sort of instruction on how to think rationally will find a dry well, but that's not the point; these 27 pieces and their commentaries illustrate that it's possible in the first place to do some thinking in the borderlands of such everyday concepts as brain, mind, soul, self, I, you, intelligence, sentience, etc. Perhaps the same explanation justifies low-level philosophy courses and the bull sessions students hold in the wee hours after them, but Hofstadter and Dennett manage to use material a great deal more entertaining, more exotic and altogether smarter. Would that we could get a revised and expanded update.

" } }, { "_id": "CRDqzWFPRb6jX6b6t", "title": "Mathematical simplicity bias and exponential functions", "pageUrl": "https://www.lesswrong.com/posts/CRDqzWFPRb6jX6b6t/mathematical-simplicity-bias-and-exponential-functions", "postedAt": "2009-08-26T18:34:25.269Z", "baseScore": 16, "voteCount": 21, "commentCount": 87, "url": null, "contents": { "documentId": "CRDqzWFPRb6jX6b6t", "html": "

One of the biases that is extremely prevalent in science, but rarely talked about anywhere, is the bias towards models that are mathematically simple and easier to operate on. Nature doesn't care all that much for mathematical simplicity. In particular I'd say that as a good first approximation, if you think something fits an exponential function of either growth or decay, you're wrong. We got so used to exponential functions and how convenient they are to work with that we completely forgot that nature doesn't work that way.

\n

But what about nuclear decay, you might be asking now... That's as close to real exponential decay as you get... and it's still nowhere close enough. Well, here's a log-log plot of the Chernobyl release versus a theoretical exponential function.

\n

\"\"

\n

Well, that doesn't look all that exponential... The thing is that even if you have perfect exponential decay processes, as with the decay of a single nuclide, when you start mixing a heterogeneous group of such processes, the exponential character is lost. Early in time faster-decaying cases dominate, then gradually those that decay more slowly, somewhere along the way you might have to deal with results of decay (pure depleted uranium gets more radioactive with time at first, not less, as it decays into low half-life nuclides), and perhaps even some processes you didn't have to consider (like creation of fresh radioactive nuclides via cosmic radiation).
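
\n

A minimal sketch of that point, with made-up half-lives and initial activities (illustrative only, not real Chernobyl inventory data): each component decays perfectly exponentially, yet the apparent half-life of the mixture keeps stretching as the fast components burn off, so no single exponential fits the total.

```python
import numpy as np

# Hypothetical half-lives (days) and initial activities (arbitrary units).
half_lives = np.array([0.3, 8.0, 30.0, 10000.0])
activity0  = np.array([5.0, 2.0, 1.0, 0.2])
rates = np.log(2.0) / half_lives

def total_activity(t):
    # sum of several perfectly exponential decays
    return float(np.sum(activity0 * np.exp(-rates * t)))

def apparent_half_life(t, dt=1e-3):
    # half-life implied by the local slope of log(activity)
    slope = (np.log(total_activity(t + dt)) - np.log(total_activity(t))) / dt
    return np.log(2.0) / -slope

for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, round(apparent_half_life(t), 1))
# Roughly 1.8, 14, 89, 10000 days: the mixture has no single decay constant,
# so an exponential fitted at early times badly mispredicts the tail.
```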

\n

And that's the ideal case of counting how much radiation a sample produces, where the underlying process is exponential by the basic laws of physics - it still gets us orders of magnitude wrong. When you're measuring something much vaguer, with much more complicated underlying mechanisms, like changes in population, economy, or processing power, the exponential assumption fails far worse.

\n

According to the IMF, the world economy in 2008 was worth 69 trillion $ PPP. Assuming 2% annual growth and naive growth models, the entire world economy produced 12 cents PPP worth of value in the entire first century. And assuming a fairly stable population, an average person in 3150 will produce more than the entire world does now. And with enough time the dollar value of one hydrogen atom will be higher than the current dollar value of everything on Earth. And of course with proper time discounting of utility, the life of one person now is worth more than half of humanity a millennium into the future - exponential growth and exponential decay are both equally wrong.
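
\n

A rough reconstruction of the arithmetic behind those numbers (the 69 trillion figure and the 2% rate come from the paragraph above; the roughly stable population of about 6.7 billion is an added assumption):

```python
gdp_2008 = 69e12     # world GDP 2008, $ PPP
growth = 1.02        # assumed constant 2% annual growth
population = 6.7e9   # assumed roughly stable population

# Total world output over years 1-100, extrapolating the 2% rate backwards:
century_total = sum(gdp_2008 / growth ** (2008 - year) for year in range(1, 101))
print(round(century_total, 2))   # about 0.12 dollars for the entire first century

# Output per person in 3150, extrapolating forwards, relative to 2008 world GDP:
per_person_3150 = gdp_2008 * growth ** (3150 - 2008) / population
print(round(per_person_3150 / gdp_2008, 2))   # about 1: one person matches today's whole world
```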

\n

To me they all look like clear artifacts of our growth models, but there are people who are so used to them that they treat predictions like that seriously.

\n

In case you're wondering, here are some estimates of past world GDP.

" } }, { "_id": "B7bMmhvaufdtxBtLW", "title": "Confusion about Newcomb is confusion about counterfactuals", "pageUrl": "https://www.lesswrong.com/posts/B7bMmhvaufdtxBtLW/confusion-about-newcomb-is-confusion-about-counterfactuals", "postedAt": "2009-08-25T20:01:21.664Z", "baseScore": 54, "voteCount": 48, "commentCount": 42, "url": null, "contents": { "documentId": "B7bMmhvaufdtxBtLW", "html": "

(This is the first, and most newcomer-accessible, post in a planned sequence.)

\n

Newcomb's Problem:

\n

Joe walks out onto the square.  As he walks, a majestic being flies by Joe's head with a box labeled \"brain scanner\", drops two boxes on the ground, and departs the scene.  A passerby, known to be trustworthy, comes over and explains...

\n

If Joe aims to get the most money, should Joe take one box or two?

\n

What are we asking when we ask what Joe \"should\" do?  It is common to cash out \"should\" claims as counterfactuals: \"If Joe were to one-box, he would make more money\".   This method of translating \"should\" questions does seem to capture something of what we mean: we do seem to be asking how much money Joe can expect to make \"if he one-boxes\" vs. \"if he two-boxes\".  The trouble with this translation, however, is that it is not clear what world \"if Joe were to one-box\" should refer to -- and, therefore, it is not clear how much money we should say Joe would make, \"if he were to one-box\".  After all, Joe is a deterministic physical system; his current state (together with the state of his future self's past light-cone) fully determines what Joe's future action will be.  There is no Physically Irreducible Moment of Choice, where this same Joe, with his own exact actual past, \"can\" go one way or the other.

\n

To restate the situation more clearly: let us suppose that this Joe, standing here, is poised to two-box.  In order to determine how much money Joe \"would have made if he had one-boxed\", let us say that we imagine reaching in, with a magical sort of world-surgery, and altering the world so that Joe one-boxes instead.  We then watch to see how much money Joe receives, in this surgically altered world. 

The question before us, then, is what sort of magical world-surgery to execute, before we watch to see how much money Joe \"would have made if he had one-boxed\".  And the difficulty in Newcomb’s problem is that there is not one but two obvious world-surgeries to consider.  First, we might surgically reach in, after Omega's departure, and alter Joe's box-taking only -- leaving Omega's prediction about Joe untouched.  Under this sort of world-surgery, Joe will do better by two-boxing:

Expected value ( Joe's earnings if he two-boxes | some unchanged probability distribution on Omega's prediction )  >
Expected value ( Joe's earnings if he one-boxes | the same unchanged probability distribution on Omega's prediction ).

Second, we might surgically reach in, after Omega's departure, and simultaneously alter both Joe's box-taking and Omega's prediction concerning Joe's box-taking.  (Equivalently, we might reach in before Omega's departure, and surgically alter the insides of Joe's brain -- and, thereby, alter both Joe's behavior and Omega's prediction of Joe's behavior.)  Under this sort of world-surgery, Joe will do better by one-boxing:

Expected value ( Joe's earnings if he one-boxes | Omega predicts Joe accurately)  >
Expected value ( Joe's earnings if he two-boxes | Omega predicts Joe accurately).
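
To make the two surgeries concrete, here is a minimal numerical sketch. It assumes the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 in the transparent box), which the post does not restate, and it treats Omega's accuracy as a free parameter.

```python
# A toy sketch of the two world-surgeries; payoff values are the standard
# ones from the literature, not taken from this post.
M, K = 1_000_000, 1_000

def ev_fixed_prediction(p_predicted_one_box):
    # Surgery 1: alter Joe's action only; Omega's prediction keeps whatever
    # distribution it had. Two-boxing comes out ahead by exactly $1,000.
    ev_one = p_predicted_one_box * M
    ev_two = p_predicted_one_box * (M + K) + (1 - p_predicted_one_box) * K
    return ev_one, ev_two

def ev_prediction_tracks_action(accuracy):
    # Surgery 2: alter Joe's action and Omega's prediction together.
    ev_one = accuracy * M
    ev_two = (1 - accuracy) * (M + K) + accuracy * K
    return ev_one, ev_two

print(ev_fixed_prediction(0.5))            # (500000.0, 501000.0): two-boxing wins
print(ev_prediction_tracks_action(0.99))   # (990000.0, 11000.0): one-boxing wins
```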

The point: Newcomb's problem -- the problem of what Joe \"should\" do, to earn most money -- is the problem of which type of world-surgery best cashes out the question \"Should Joe take one box or two?\".  Disagreement about Newcomb's problem is disagreement about what sort of world-surgery we should consider, when we try to figure out what action Joe should take.

" } }, { "_id": "83WNLnCgyYDdEv6Nu", "title": "How does an infovore manage information overload?", "pageUrl": "https://www.lesswrong.com/posts/83WNLnCgyYDdEv6Nu/how-does-an-infovore-manage-information-overload", "postedAt": "2009-08-25T18:54:32.609Z", "baseScore": 3, "voteCount": 13, "commentCount": 30, "url": null, "contents": { "documentId": "83WNLnCgyYDdEv6Nu", "html": "

I am, and have been for most of my life, an information glutton.  The internet has made my affliction worse by providing me with the equivalent of an unlimited buffet of both nutritious as well as junk food for my brain which never leaves my side.  A fire hose of data focused straight into my mind's mouth.  If the brain food is mostly high quality, and I'm exercising my grey matter vigorously enough to warrant such high volumes of knowledge, then it's not that much of a problem.  However, I've recently crossed a threshold where I seem to be spending more time navigating this buffet rather than consuming the food.  

\n

Ok, dropping the metaphor and getting to the point: I need to know how I can efficiently minimize the amount of time I spend staying abreast of the things I should know, so I can maximize the time I spend actually learning them and hopefully have ample time left over to be productive at applying that knowledge.  Mind you, I am pretty diligent when it comes to avoiding the frivolous youtube clips, emails, and reddit/slashdot/etc. refreshes.  That isn't the problem.  The problem is figuring out which books, research papers, and blogs to stay aware of, and how to automate such a system.  Any techniques you would like to share?

" } }, { "_id": "sLxFqs8fdjsPdkpLC", "title": "Decision theory: An outline of some upcoming posts", "pageUrl": "https://www.lesswrong.com/posts/sLxFqs8fdjsPdkpLC/decision-theory-an-outline-of-some-upcoming-posts", "postedAt": "2009-08-25T07:34:52.254Z", "baseScore": 31, "voteCount": 30, "commentCount": 31, "url": null, "contents": { "documentId": "sLxFqs8fdjsPdkpLC", "html": "

Last August or so, Eliezer asked Steve Rayhawk and me to attempt to solve Newcomb’s problem together.  This project served a few purposes:
a.  Get an indication as to our FAI research abilities.
b.  Train our reduction-muscles.
c.  Check whether Eliezer’s (unseen by us) timeless decision theory is a point that outside folks tend to arrive at independently (at least if starting from the rather substantial clues on OB/LW), and whether anything interestingly new came out of an independent attempt.

Steve and I (and, briefly but helpfully, Liron Shapira) took our swing at Newcomb.  We wrote a great mass of notes that have been sitting on our hard drives, but we hadn’t stitched them together into a single document.  I’d like to attempt a Less Wrong sequence on that subject now.  Most of this content is stuff that Eliezer, Nesov, and/or Dai developed independently and have been referring to in their posts, but I’ll try to present it more fully and clearly.  I learned a bit of this from Eliezer/Nesov/Dai’s recent posts.

\n

Here’s the outline, to be followed up with slower, clearer blog posts if all goes well:

\n

0.  Prelude: “Should” depends on counterfactuals.  Newcomb's problem -- the problem of what Joe \"should\" do, to earn most money -- is the problem of which type of counterfactuals best cash out the question \"Should Joe take one box or two?\".  Disagreement about Newcomb's problem is disagreement about what sort of counterfactuals we should consider, when we try to figure out what action Joe should take.

\n


1.  My goal in this sequence is to reduce “should” as thoroughly as I can.  More specifically, I’ll make an (incomplete, but still useful) attempt to:

\n\n

2.  A non-vicious regress.   Suppose we’re designing Joe, and we want to maximize his expected winnings.  What notion of “should” should we design Joe to use?  There’s a regress here, in that creator-agents with different starting decision theories will design agents that have different starting decision theories.  But it is a non-vicious regress.  We can gain understanding by making this regress explicit, and asking under what circumstances agents with decision theory X will design future agents with decision theory Y, for different values of X and Y.

3a.  When will a CDT-er build agents that use “could” and “should”?  Suppose again that you’re designing Joe, and that Joe will go out in a world and win utilons on your behalf.  What kind of Joe-design will maximize your expected utilons?

If we assume nothing about Joe’s world, we might find that your best option was to design Joe to act as a bundle of wires which happens to have advantageous physical effects, and which doesn’t act like an agent at all.

But suppose Joe’s world has the following handy property: suppose Joe’s actions have effects, and Joe’s “policy”, or the actions he “would have taken” in response to alternative inputs, also have effects, but the details of Joe’s internal wiring don’t otherwise matter.  (I'll call this the \"policy-equivalence assumption\").  Since Joe’s wiring doesn’t matter, you can, without penalty, insert whatever computation you like into Joe’s insides.  And so, if you yourself can think through what action Joe “should” take, you can build wiring that sits inside Joe, carries out the same computation you would have used to figure out what action Joe “should” take, and then prompts that action.

Joe then inherits his counterfactuals from you: Joe’s model of what “would” happen “if he acts on policy X” is your model of what “would” happen if you design an agent, Joe, who acts according to policy X.  The result is “act according to the policy my creator would have chosen” decision theory, now-A.K.A. “Updateless Decision Theory” (UDT).  UDT one-boxes on Newcomb’s problem and pays the $100 in the counterfactual mugging problem.
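
A minimal sketch of that \"act according to the policy my creator would have chosen\" move, applied to the counterfactual mugging. The stakes (lose $100 on heads if the policy pays, gain $10,000 on tails if and only if the policy would pay on heads) are the usual ones from discussions of the problem, not numbers given in this outline: whole policies are scored by expected value under the creator's prior, before any observation, and the agent then acts on the winning policy.

```python
# A toy sketch, not Eliezer's or the author's actual formalism.
def expected_value(pays_when_asked: bool) -> float:
    # Creator's prior: a fair coin. Heads: Omega asks for $100. Tails: Omega
    # pays $10,000 only if the policy would have paid on heads.
    ev_heads = -100 if pays_when_asked else 0
    ev_tails = 10_000 if pays_when_asked else 0
    return 0.5 * ev_heads + 0.5 * ev_tails

policies = [True, False]                  # pay when asked / refuse when asked
chosen = max(policies, key=expected_value)
print(chosen, expected_value(chosen))     # True 4950.0: the UDT-style agent pays
```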

3b.  But it is only when computational limitations are thrown in that designing Joe to be a CSA leaves you better off than designing Joe to be your top-pick hard-coded policy.  So, to understand where CSAs really come from, we’ll need eventually to consider how agents can use limited computation.

3c.  When will a UDT-er build agents that use “could” and “should”?  The answer is similar to that for a CDT-er.

3d.  CSAs are only useful in a limited domain.  In our derivations above, CSAs' usefulness depends on the policy-equivalence assumption.  Therefore, if agents’ computation has important effects apart from its effects on the agents’ actions, the creator agent may be ill-advised to create any sort of CSA.*  For example, if the heat produced by agents’ physical computation has effects that are as significant as the agent’s “chosen actions”, CSAs may not be useful.  This limitation suggests that CSAs may not be useful in a post-singularity world, since in such a world matter may be organized to optimize for computation in a manner far closer to physical efficiency limits, and so the physical side-effects of computation may have more relative significance compared to the computation’s output.

4.  What kinds of CSAs make sense?  More specifically, what kinds of counterfactual “coulds” make sense as a basis for a CSA?

In part 4, we noted that when Joe’s policy is all that matters, you can stick your “What policy should Joe have?” computer inside Joe, without disrupting Joe’s payoffs.  Thus, you can build Joe to be a “carry out the policy my creator would think best” CSA.

It turns out this trick can be extended.

Suppose you aren't a CDT-er.  Suppose you are more like one of Eliezer's \"timeless\" agents.  When you think about what you “could” and “should” do, you do your counterfactuals, not over what you alone will do, but over what you and a whole set of other agents “running the same algorithm you are running” will simultaneously do.  For example, you may (in your model) be choosing what algorithm you and Clippy will both send into a one-shot prisoner’s dilemma.

Much as was the case with CDT-ers, so long as your utility estimate depends only on the algorithm’s outputs and not its details you can choose the algorithm you’re creating to be an “updateless”, “act according to the policy your creator would have chosen” CSA.

5. Which types of CSAs will create which other types of CSAs under what circumstances?  I go through the list above.

6.  A partial list of remaining problems, and of threads that may be useful to pull on.

6.a.  Why design CSAs at all, rather than look-up tables or non-agent-like jumbles of wires?  Computational limitations are part of the answer: if I design a CSA to play chess with me, it knows what move it has to respond to, and so can focus its computation on that specific situation.  Does CSAs’ usefulness in focussing computation shed light on what type of CSAs to design?

6.b.  More generally, how did evolution come to build us humans as approximate CSAs?  And what kinds of decision theory should other agent-design processes, in other parts of the multiverse, be expected to create?

6.c.  What kind of a CSA are you?  What are you really asking, when you ask what you “should” do in Newcomb’s problem?  What algorithm do you actually run, and want to run, there?

6.d.  Two-player games: avoiding paradoxes and infinite types.  I used a simplification above: I assumed that agents took in inputs from a finite list, and produced outputs from a finite list.  This simplification does not allow for two general agents to, say, play one another in prisoner’s dilemma while seeing one another’s policy.  If I can choose any policy that is a function from the set of your policy options to {C, D}, and you can choose any policy that is a function from the set of my policy options to {C, D}, each of us must have more policy-options than the other.

Some other formalism is needed for two-player games.  (Eliezer lists this problem, and the entangled problem 6.e, in his Timeless decision theory: problems I can’t solve.)
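
To spell out the counting problem behind that last claim (this is only a restatement of why the naive setup breaks down, not the formalism that is needed): if each player's policy had to be a function from the other's entire policy set to {C, D}, the two policy sets could not both exist.

```latex
% Write P_A for my policy set and P_B for yours. The naive setup would require
\[
  P_A = \{C,D\}^{P_B}, \qquad P_B = \{C,D\}^{P_A},
\]
% and hence, by Cantor's theorem (which covers infinite sets as well),
\[
  |P_A| = 2^{|P_B|} > |P_B| \qquad\text{and}\qquad |P_B| = 2^{|P_A|} > |P_A|,
\]
% i.e. each policy set would have to be strictly larger than the other,
% which is impossible.
```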

6.e.  What do real agents do in the situations Eliezer has been calling “problems of logical priority before time”?  Also, what are the natural alternative decision theories to use for such problems, and is there one which is so much more natural than others that we might expect it to populate the universe, just as Eliezer hopes his “timeless decision theory” might accurately describe the bulk of decision agents in the universe?

Note that this question is related to, but harder than, the more limited question in 7.b.  It is harder because we are now asking our CSAs to produce actions/outputs in more complicated situations.

\n

6.f.  Technical machinery for dealing with timeless decision theories.  [Steve Rayhawk added this item.]  As noted in 5 above, and as Eliezer noted in Ingredients of Timeless Decision Theory, we may wish to use a decision theory where we are “choosing” the answer to a particular math problem, or the output of a particular algorithm. Since the output of an algorithm is a logical question, this requires reasoning under uncertainty about the answers to logical questions. Setting this up usefully, without paradoxes or inconsistencies, requires some work. Steve has a gimmick for treating part of this problem. (The gimmick starts from the hierarchical Bayesian modeling idea of a hyperprior, used in its most general form: a prior belief about conditional probability tables for other variables.)

\n
\n

*More precisely: CSAs’ usefulness breaks down if the creator’s world-model includes important effects from its choice of which agent it creates, apart from the effects of that agent’s policy.

" } }, { "_id": "dpcrtEdMnEp6p3Shb", "title": "Value for money kills?", "pageUrl": "https://www.lesswrong.com/posts/dpcrtEdMnEp6p3Shb/value-for-money-kills", "postedAt": "2009-08-24T23:37:59.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "dpcrtEdMnEp6p3Shb", "html": "
\"Indonesian

Indonesian SODIS users (picture: SODIS Eawag)

\n

SODIS is a cheap method of disinfecting water by putting it in the sun. Like many things, it works better in physics than society, where its effects were not significant, according to a study in PLoS Medicine recently. The technical barrier is that people don’t do it much. About thirty-two percent of participants in the study used the system on a given day. If you’re familiar with how little things work in reality, this is still surprising. Cheaply disinfecting water seems like it would be a hit with poor people whose children get diarrhea all the time and regularly die. Rural Bolivia, where the study was done, is a good candidate. The children studied usually get diarrhea four times a year, which causes about fifteen percent of deaths of children under five. For the poorest quintile in Bolivia the under-five death rate is about one in ten of those born alive.

\n

The leader of the study, Daniel Mausezahl, suspects a big reason for this is that lining up water bottles on your roof shows your neighbors that you aren’t rich enough to have more expensive methods of disinfecting water. It’s hard to see from a distance the difference between chlorination and coliform-infested jerry cans, so drinking excrement can make you look better than drinking cheap clean water.

\n

Fascinating as signaling explanations are, this seems incredible. Having live descendants is even more evolutionarily handy than impressing associates. What other explanations could there be? Perhaps adults are skeptical about effectiveness? There is apparently good evidence it works though, and there were intensive promotional campaigns during the study. What’s more, lack of evidence doesn’t usually stop humans investing in just about anything that isn’t obviously lethal in the absence of effective means to control their wellbeing. And parents are known for obsessive interest in their children’s safety. What’s going on?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "uaPRRBGRjxZd6QePE", "title": "Working Mantras", "pageUrl": "https://www.lesswrong.com/posts/uaPRRBGRjxZd6QePE/working-mantras", "postedAt": "2009-08-24T22:08:23.183Z", "baseScore": 49, "voteCount": 47, "commentCount": 59, "url": null, "contents": { "documentId": "uaPRRBGRjxZd6QePE", "html": "

While working with Marcello on AI this summer, I've noticed that I have some standard mantras that I invoke in my inner dialogue (though only on appropriate occasions, not as a literal repeated mantra).  This says something about which tropes are most often useful - in my own working life, anyway!

\n\n
    \n
  1. \"If anyone could actually update on the evidence, they would have a power far beyond that of Nobel Prize winners.\"
    (When encountering a need to discard some idea to which I was attached, or admit loss of a sunk cost on an avenue that doesn't seem to be working out.)
  2. \n
  3. \"The universe is already like [X], or not; if it is then I can only minimize the embarrassment by admitting it and adapting as fast as possible.\"
    (If the first mantra doesn't work; then I actually visualize the universe already being a certain way, so that I can see the penalty for being a universe that works a certain way and yet believing otherwise.)
  4. \n
  5. \"First understand the problem, then solve it.\"
    (If getting too caught up in proposing solutions, or discouraged when solutions don't work out - the immediate task at hand is just to understand the problem, and one may ask whether progress has been made on this.  From full understanding a solution usually follows quickly.)
  6. \n
  7. \"Load the problem.\"
    (Try to get your mind involved and processing the various aspects of it.)
  8. \n
  9. \"Five minutes is enough time to have an insight.\"
    (If my mind seems to be going empty.)
  10. \n
  11. \"Ask only one thing of your mind and it may give it to you.\"
    (Focusing during work, or trying to load the problem into memory before going to sleep each night, in hopes of putting the subconscious to work on it.)
  12. \n
  13. \"Run right up the mountain!\"
    (My general visualization of the FAI problem; a huge, blank, impossibly high wall, which I have to run up as quickly as possible.  Used to accommodate the sense of the problem being much larger than whatever it is I'm working on right now.)
  14. \n
  15. \"When the problem is solved, that thought will be a wasted motion in retrospect.\"
    (I first enunciated this as an explicit general principle when explaining to Marcello why e.g. one doesn't worry about people who have failed to solve a problem previously.  When you actually solve the problem, those thoughts will predictably not have contributed anything in retrospect.  So if your goal is to solve the problem, you should focus on the object-level problem, instead of worrying about whether you have sufficient status to solve it.  The same rule applies to many other habitual worries, or reasoning effort expended to reassure against them, that would predictably appear as wasted motion in retrospect, after actually solving the problem.)
  16. \n
  17. \"There's always just enough time when you do something right, no more, no less.\"
    (A quote from C. J. Cherryh's Paladin, used when feeling rushed.  I don't think it's true literally or otherwise, but it seems to convey an important wordless sentiment.)
  18. \n
  19. \"See the truth, not what you expect or hope.\"
    (When expecting the answer to go a particular way, or hoping for the answer to go a particular way, is exerting detectable pressure on an ongoing inquiry.)
  20. \n
\n\n

I don't listen to music while working, because of studies showing that, e.g., programmers listening to music are equally competent at implementing a given algorithm, but much less likely to notice that the algorithm's output is always equal to its input.  However, I sometimes think of the theme Emiya #0 when feeling fatigued or trying to make a special demand on my mind.

" } }, { "_id": "3imJjn5eDu3xtvG27", "title": "The Journal of (Failed) Replication Studies", "pageUrl": "https://www.lesswrong.com/posts/3imJjn5eDu3xtvG27/the-journal-of-failed-replication-studies", "postedAt": "2009-08-23T09:15:56.889Z", "baseScore": 12, "voteCount": 12, "commentCount": 14, "url": null, "contents": { "documentId": "3imJjn5eDu3xtvG27", "html": "

One of Seed Magazine's \"Revolutionary Minds\" is Moshe Pritsker, who created the Journal of Visualized Experiments, which to me looks like a very cool idea. I imagine that early on it may have looked somewhat silly (\"he can't implant engineered tissue in a rat heart and he calls himself a scientist?!\"), so it's nice to know JoVE is picking up pace.

\n

Many folks keep pointing out how published research is itself biased towards positive results, and how replication (and failed replication!) trumps mere \"first!!!11\" publication. If regular journals don't have good incentives to publish \"mere\" (failed) replication studies, why not create a journal that would be dedicated entirely to them? I can't speak about the logistics, but I imagine it can be anything from a start-up (a la JoVE) to an open repository (a la arxiv.org).

\n

I am not part of academia, but I understand that there are a few folks here who are. What do you say?

\n

[EDIT: Andrew Kemendo notes two such journals in the comments: http://www.jnrbm.com/ and http://www.jnr-eeb.org/index.php/jnr.]

" } }, { "_id": "yeh4LAtYczZE8DJaT", "title": "Is your subconscious communist?", "pageUrl": "https://www.lesswrong.com/posts/yeh4LAtYczZE8DJaT/is-your-subconscious-communist", "postedAt": "2009-08-22T15:39:46.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "yeh4LAtYczZE8DJaT", "html": "

People can be hard to tell apart, even to themselves (picture: Giustino)

\n

Humans make mental models of other humans automatically, and appear to get somewhat confused about who is who at times.  This happens with knowledge, actions, attention and feelings:

\n

Just having another person visible hinders your ability to say what you can see from where you stand, though considering a non-human perspective does not:

\n

[The] participants were also significantly slower in verifying their own perspective when the avatar’s perspective was incongruent. In Experiment 2, we found that the avatar’s perspective intrusion effect persisted even when participants had to repeatedly verify their own perspective within the same block. In Experiment 3, we replaced the avatar by a bicolor stick …[and then] the congruency of the local space did not influence participants’ response time when they verified the number of circles presented in the global space.

\n

Believing you see a person moving can impede you in moving differently, similar to rubbing your tummy while patting your head, but if you believe the same visual stimulus is not caused by a person, there is no interference:

\n

[A] dot display followed either a biologically plausible or implausible velocity profile. Interference effects due to dot observation were present for both biological and nonbiological velocity profiles when the participants were informed that they were observing prerecorded human movement and were absent when the dot motion was described as computer generated…

\n

In a task where the cues to act may be incongruent with the actions (a red pointer signals that you should press the left button, whether the pointer points left or right, and a green pointer signals the right button), incongruent signals take longer to respond to than congruent ones. This stops when you are responsible for only one of the buttons. But if someone else picks up the other button, it becomes harder once again to do incongruent actions:

\n

The identical task was performed alone and alongside another participant. There was a spatial compatibility effect in the group setting only. It was similar to the effect obtained when one person took care of both responses. This result suggests that one’s own actions and others’ actions are represented in a functionally equivalent way.

\n

You can learn to subconsciously fear a stimulus by seeing the stimulus and feeling pain, but not by being told about it. However, seeing the stimulus while watching someone else react to pain works like feeling the pain yourself:

\n

In the Pavlovian group, the CS1 was paired with a mild shock, whereas the observational-learning group learned through observing the emotional expression of a confederate receiving shocks paired with the CS1. The instructed-learning group was told that the CS1 predicted a shock…As in previous studies, participants also displayed a significant learning response to masked [too fast to be consciously perceived] stimuli following Pavlovian conditioning. However, whereas the observational-learning group also showed this effect, the instructed-learning group did not.

\n

A good summary of all this, Implicit and Explicit Processes in Social Cognition, interprets these findings as showing that we are subconsciously nice:

\n
Many studies show that implicit processes facilitate the sharing of knowledge, feelings, and actions, and hence, perhaps surprisingly, serve altruism rather than selfishness. On the other hand, higher-level conscious processes are as likely to be selfish as prosocial.

\n

It’s true that these unconscious behaviours can help us cooperate, but it seems they are no more ‘altruistic’ than the two-faced conscious processes the authors cite as evidence for conscious selfishness. Our subconsciouses are like the rest of us: adeptly ‘altruistic’ when it benefits them, such as when they are watched. For an example of how well designed we are in this regard, consider the automatic empathic expression of pain we make upon seeing someone hurt. When we aren’t being watched, feeling other people’s pain goes out the window:

\n

…A 2-part experiment with 50 university students tested the hypothesis that motor mimicry is instead an interpersonal event, a nonverbal communication intended to be seen by the other….The victim of an apparently painful injury was either increasingly or decreasingly available for eye contact with the observer. Microanalysis showed that the pattern and timing of the observer’s motor mimicry were significantly affected by the visual availability of the victim.


" } }, { "_id": "Jou2pbxseb5beQoGE", "title": "ESR's New Take on Qualia", "pageUrl": "https://www.lesswrong.com/posts/Jou2pbxseb5beQoGE/esr-s-new-take-on-qualia", "postedAt": "2009-08-21T09:26:24.196Z", "baseScore": 5, "voteCount": 16, "commentCount": 54, "url": null, "contents": { "documentId": "Jou2pbxseb5beQoGE", "html": "

http://esr.ibiblio.org/?p=1192#more-1192

\n

ADDED:  Even if you disagree with ESR's take, and many will, this is the clearest definition I have seen of what qualia are.  So it should present a useful starting point to argue from, even for those who strongly disagree.

" } }, { "_id": "fQv85Rd3pw789MHaX", "title": "Timeless Decision Theory and Meta-Circular Decision Theory", "pageUrl": "https://www.lesswrong.com/posts/fQv85Rd3pw789MHaX/timeless-decision-theory-and-meta-circular-decision-theory", "postedAt": "2009-08-20T22:07:46.662Z", "baseScore": 42, "voteCount": 33, "commentCount": 37, "url": null, "contents": { "documentId": "fQv85Rd3pw789MHaX", "html": "

(This started as a reply to Gary Drescher's comment here in which he proposes a Metacircular Decision Theory (MCDT); but it got way too long so I turned it into an article, which also contains some amplifications on TDT which may be of general interest.)

\n

Part 1:  How timeless decision theory does under the sort of problems that Metacircular Decision Theory talks about.

\n
\n

Say we have an agent embodied in the universe. The agent knows some facts about the universe (including itself), has an inference system of some sort for expanding on those facts, and has a preference scheme that assigns a value to the set of facts, and is wired to select an action--specifically, the/an action that implies (using its inference system) the/a most-preferred set of facts.

\n

But without further constraint, this process often leads to a contradiction. Suppose the agent's repertoire of actions is A1, ..., An, and the value of action Ai is simply i. Say the agent starts by considering the action A7, and dutifully evaluates it as 7. Next, it contemplates the action A6, and reasons as follows: \"Suppose I choose A6. I know I'm a utility-maximizing agent, and I already know there's another choice that has value 7. Therefore, it follows from my (hypothetical) choice of A6 that A6 has a value of at least 7.\" But that inference, while sound, contradicts the fact that A6's value is 6.

\n
\n

This is why timeless decision theory is a causality-based decision theory.  I don't recall if you've indicated that you've studied Pearl's synthesis of Bayesian networks and causal graphs(?) (though if not you should be able to come up to speed on them pretty quickly).

\n

So in the (standard) formalism of causality - just causality, never mind decision theory as yet - causal graphs give us a way to formally compute counterfactuals:  We set the value of a particular node surgically.  This means we delete the structural equations that would ordinarily give us the value at the node N_i as a function of the parent values P_i and the background uncertainty U_i at that node (which U_i must be uncorrelated to all other U, or the causal graph has not been fully factored).  We delete this structural equation for N_i and make N_i parentless, so we don't send any likelihood messages up to the former parents when we update our knowledge of the value at N_i.  However, we do send prior-messages from N_i to all of its descendants, maintaining the structural equations for the children of which N_i is a parent, and their children, and so on.
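To make the surgery itself concrete, here is a minimal Python sketch of the difference between observing a node and surgically setting it, on a three-node chain U -> N -> C. The chain and all of the numbers are illustrative assumptions, not part of the formalism:

```python
# Illustrative three-node chain U -> N -> C with binary variables.
P_U = {0: 0.7, 1: 0.3}                        # background uncertainty at U
P_N_GIVEN_U = {0: {0: 0.9, 1: 0.1},           # structural equation for N, as P(N | U)
               1: {0: 0.2, 1: 0.8}}
P_C_GIVEN_N = {0: {0: 0.8, 1: 0.2},           # structural equation for C, as P(C | N)
               1: {0: 0.3, 1: 0.7}}

def joint(u, n, c):
    return P_U[u] * P_N_GIVEN_U[u][n] * P_C_GIVEN_N[n][c]

# Observing N=1: likelihood messages flow up to U as well as down to C.
post_u_obs = (sum(joint(1, 1, c) for c in (0, 1))
              / sum(joint(u, 1, c) for u in (0, 1) for c in (0, 1)))

# Surgery do(N=1): delete N's structural equation and make N parentless,
# so U keeps its prior, while C (a descendant) still updates through P(C | N).
post_u_do = P_U[1]
p_c_do = P_C_GIVEN_N[1][1]

print(round(post_u_obs, 3))   # 0.774 -- observation is evidence about the former parent
print(post_u_do)              # 0.3   -- intervention sends no message to the former parent
print(p_c_do)                 # 0.7   -- but descendants are still recomputed
```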

\n

That's the standard way of computing counterfactuals in the Pearl/Spirtes/Verma synthesis of causality, as found in \"Causality: Models, Reasoning, and Inference\" and \"Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference\".

\n

Classical causal decision theory says that your expected utility formula is over the counterfactual expectation of your physical act.  Now, although the CDTs I've read have not in fact talked about Pearl - perhaps because it's a relatively recent mathematical technology, or perhaps because I last looked into the literature a few years back - and have just taken the counterfactual distribution as intuitively obvious mana rained from heaven - nonetheless it's pretty clear that their intuitions are operating pretty much the Pearlian way, via counterfactual surgery on the physical act.

\n

So in calculating the \"expected utility\" of an act - the computation that classical CDT uses to choose an action - CDT assumes the act to be severed from its physical causal parents.  Let's say that there's a Smoking Lesion problem, where the same gene causes a taste for cigarettes and an increased probability of cancer.  Seeing someone else smoke, we would infer that they have an increased probability of cancer - this sends a likelihood-message upward to the node which represents the probability of having the gene, and this node in turn sends a prior-message downward to the node which represents the probability of getting cancer.  But the counterfactual surgery that CDT performs on its physical acts means that it calculates the expected utility as though the physical act is severed from its parent nodes.  So CDT calculates the expected utility as though it has the base-rate probability of having the cancer gene regardless of its act, and so chooses to smoke, since it likes cigarettes.  This is the common-sense and reflectively consistent action, so CDT appears to \"win\" here in terms of giving the winning answer - but it's worth noting that the internal calculation performed is wrong; if you act to smoke cigarettes, your probability of getting cancer is not the base rate.
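A worked numeric version of that Smoking Lesion calculation, with made-up numbers for the gene frequency, the cancer rates, and the utilities (none of them come from the problem as stated), shows CDT's surgery holding the gene at its base rate while a naive evidential calculation conditions on the act:

```python
# Smoking Lesion toy numbers (all assumptions): the gene causes both a taste for
# cigarettes and cancer; smoking itself has no causal effect on cancer.
P_GENE = 0.2
P_CANCER_GIVEN_GENE = {True: 0.6, False: 0.1}
U_SMOKE, U_CANCER = 10.0, -1000.0

def p_cancer(p_gene):
    return p_gene * P_CANCER_GIVEN_GENE[True] + (1 - p_gene) * P_CANCER_GIVEN_GENE[False]

def eu_cdt(smoke):
    # Counterfactual surgery severs the act from the gene node, so the gene
    # probability stays at its base rate whatever we do.
    return (U_SMOKE if smoke else 0.0) + p_cancer(P_GENE) * U_CANCER

def eu_edt(smoke):
    # Naive evidential conditioning treats the act as evidence about the gene
    # (here, assume only gene-carriers enjoy smoking enough to do it).
    return (U_SMOKE if smoke else 0.0) + p_cancer(0.95 if smoke else 0.05) * U_CANCER

print(eu_cdt(True), eu_cdt(False))   # -190.0 -200.0  -> CDT smokes, the sensible answer
print(eu_edt(True), eu_edt(False))   # -565.0 -125.0  -> naive EDT abstains from smoking
```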

\n

And on Newcomb's Problem this internal error comes out into the open; the inside of CDT's counterfactual expected utility calculation, expects box B to contain a million dollars at the base rate, since it surgically severs the act of taking both boxes from the parent variable of your source code, which correlates to your previous source code at the moment Omega observed it, which correlates to Omega's decision whether to leave box B empty.

\n

Now turn to timeless decision theory, in which the (Godelian diagonal) expected utility formula is written as follows:

\n
\n

Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)*P(this computation yields A []-> O|rest of universe))

\n
\n

The interior of this formula performs counterfactual surgery to sever the logical output of the expected utility formula from the initial conditions of the expected utility formula.  So we do not conclude, in the inside of the formula as it performs the counterfactual surgery, that if-counterfactually A_6 is chosen over A_7 then A_6 must have higher expected utility.  If-evidentially A_6 is chosen over A_7, then A_6 has higher expected utility - but this is not what the interior of the formula computes.  As we compute the formula, the logical output is divorced from all parents; we cannot infer anything about its immediate logical precedents.  This counterfactual surgery may be necessary, in fact, to stop an infinite regress in the formula, as it tries to model its own output in order to decide its own output; and this, arguably, is exactly why the decision counterfactual has the form it does - it is why we have to talk about counterfactual surgery within decisions in the first place.

\n

Descendants of the logical output, however, continue to update their values within the counterfactual, which is why TDT one-boxes on Newcomb's Problem - both your current self's physical act, and Omega's physical act in the past, are logical-causal descendants of the computation, and are recalculated accordingly inside the counterfactual.
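As a concrete check on that claim, here is a small sketch of both counterfactuals on Newcomb's Problem itself; the 99% predictor accuracy, the payoffs, and the 50% base rate handed to CDT are illustrative assumptions:

```python
ACCURACY = 0.99            # assumed P(Omega predicted the logical output correctly)
BOX_A, BOX_B = 1_000, 1_000_000

def eu_tdt(action):
    # Surgery on the logical-output node: Omega's past prediction is a descendant
    # of that node, so it is recomputed inside the counterfactual.
    p_b_full = ACCURACY if action == 'one-box' else 1 - ACCURACY
    return p_b_full * BOX_B + (BOX_A if action == 'two-box' else 0)

def eu_cdt(action, p_b_full_base=0.5):
    # Surgery on the physical act: Omega's past act is severed from it, so box B
    # is full at some fixed base rate no matter what is chosen.
    return p_b_full_base * BOX_B + (BOX_A if action == 'two-box' else 0)

print(max(['one-box', 'two-box'], key=eu_tdt))   # one-box  (990,000 vs 11,000)
print(max(['one-box', 'two-box'], key=eu_cdt))   # two-box  (taking box A adds 1,000 at any base rate)
```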

\n

If you desire to smoke cigarettes, this would be observed and screened off by conditioning on the fixed initial conditions of the computation - the fact that the utility function had a positive term for smoking cigarettes, would already tell you that you had the gene.  (Eells's \"tickle\".)  If you can't observe your own utility function then you are actually taking a step outside the timeless decision theory as formulated.

\n

So from the perspective of Metacircular Decision Theory - what is done with various facts - timeless decision theory can state very definitely how it treats the various facts, within the interior of its expected utility calculation.  It does not update any physical or logical parent of the logical output - rather, it conditions on the initial state of the computation, in order to screen off outside influences; then no further inferences about them are made.  And if you already know anything about the consequences of your logical output - its descendants in the logical causal graph - you will recompute what they would have been if you'd had a different output.

\n

This last codicil is important for cases like Parfit's Hitchhiker, in which Omega (or perhaps Paul Ekman), driving a car through the desert, comes across you dying of thirst, and will give you a ride to the city only if they expect you to pay them $100 after you arrive in the city.  (With the whole scenario being trued by strict selfishness, no knock-on effects, and so on.)  There is, of course, no way of forcing the agreement - so will you compute, in the city, that it is better for you to give $100 to Omega, after having already been saved?  Both evidential decision theory and causal decision theory will give the losing (dying in the desert, hence reflectively inconsistent) answer here; but TDT answers, \"If I had decided not to pay, then Omega would have left me in the desert.\"  So the expected utility of not paying $100 remains lower, even after you arrive in the city, given the way TDT computes its counterfactuals inside the formula - which is the dynamically and reflectively consistent and winning answer.  And note that this answer is arrived at in one natural step, without needing explicit reflection, let alone precommitment - you will answer this way even if the car-driver Omega made its prediction without you being aware of it, so long as Omega can credibly establish that it was predicting you with reasonable accuracy rather than making a pure uncorrelated guess.  (And since it's not a very complicated calculation, Omega knowing that you are a timeless decision theorist is credible enough.)
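A small numeric version of that comparison, evaluated from inside the city; the dollar-equivalent value of being rescued and the 99% prediction reliability are illustrative assumptions:

```python
VALUE_OF_RESCUE = 1_000_000     # assumed dollar-equivalent of not dying in the desert
PAYMENT = 100
P_PREDICTION_RIGHT = 0.99       # assumed reliability of the driver's prediction

def eu_tdt(pay):
    # Inside TDT's counterfactual, "I decide not to pay" recomputes the driver's
    # earlier decision about whether to pick you up at all.
    p_rescued = P_PREDICTION_RIGHT if pay else 1 - P_PREDICTION_RIGHT
    return p_rescued * (VALUE_OF_RESCUE - (PAYMENT if pay else 0))

def eu_cdt_in_city(pay):
    # Standing in the city, CDT treats the rescue as already settled, so paying
    # only ever subtracts $100.
    return VALUE_OF_RESCUE - (PAYMENT if pay else 0)

print(eu_tdt(True) > eu_tdt(False))                    # True: paying still wins after arrival
print(eu_cdt_in_city(True) > eu_cdt_in_city(False))    # False: refuse to pay, and get left in the desert
```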

\n
\n

I wonder if it might be open to the criticism that you're effectively postulating the favored answer to Newcomb's Problem (and other such scenarios) by postulating that when you surgically alter one of the nodes, you correspondingly alter the nodes for the other instances of the computation.

\n
\n

This is where one would refer to the omitted extended argument about a calculator on Mars and a calculator on Venus, where both calculators were manufactured at the same factory on Earth and observed before being transported to Mars and Venus.  If we manufactured two envelopes on Earth, containing the same letter, and transported them to Mars and Venus without observing them, then indeed the contents of the two envelopes would be correlated in our probability distribution, even though the Mars-envelope is not a cause of the Venus-envelope, nor the Venus-envelope a cause of the Mars-envelope, because they have a common cause in the background.  But if we observe the common cause - look at the message as it is written, before being Xeroxed and placed into the two envelopes - then the standard theory of causality requires that our remaining uncertainty about the two envelopes be uncorrelated; we have observed the common cause and screened it off.  If N_i is not a cause of N_j or vice versa, and you know the state of all the common ancestors A_ij of N_i and N_j, and you do not know the state of any mutual descendants D_ij of N_i and N_j, then the standard rules of causal graphs (D-separation) show that your probabilities at N_i and N_j must be independent.

\n

However, if you manufacture on Earth two calculators both set to calculate 123 * 456, and you have not yet performed this calculation in your head, then you can observe completely the physical state of the two calculators before they leave Earth, and yet still have correlated uncertainty about what result will flash on the screen on Mars and the screen on Venus.  So this situation is simply not compatible with the mathematical axioms on causal graphs if you draw a causal graph in which the only common ancestor of the two calculators is the physical factory that made them and produced their correlated initial state.  If you are to preserve the rules of causal graphs at all, you must have an additional node - which would logically seem to represent one's logical uncertainty about the abstract computation 123 * 456 - which is the parent of both calculators.  Seeing the Venusian calculator flash the result 56,088, this physical event sends a likelihood-message to its parent node representing the logical result of 123 * 456, which sends a prior-message to its child node, the physical message flashed on the screen at Mars.
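Here is a minimal sketch of that extra node, with an invented prior over candidate answers standing in for logical uncertainty about 123 * 456, and both calculators assumed to work perfectly:

```python
# Prior over the logical node L = "the result of 123 * 456", before doing the
# arithmetic in your head; the candidate wrong answers are arbitrary.
PRIOR_L = {56_088: 0.4, 56_078: 0.3, 55_088: 0.3}

def posterior_l(mars_display):
    # Likelihood message from the Mars display up to L: a working calculator
    # shows exactly L, so candidates that disagree with the display drop to zero.
    unnorm = {r: (p if r == mars_display else 0.0) for r, p in PRIOR_L.items()}
    z = sum(unnorm.values())
    return {r: p / z for r, p in unnorm.items()}

post = posterior_l(56_088)
# Prior message from L back down to the (unobserved) Venus display:
print(post[56_088])   # 1.0 -- certainty about Venus, before any light signal could arrive
```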

\n

A similar argument shows that if we have completely observed our own initial source code, and perhaps observed Omega's initial source code which contains a copy of our source code and the intention to simulate it, but we do not yet know our own decision, then the only way in which our uncertainty about our own physical act can possibly be correlated at all with Omega's past act to fill or leave empty the box B - given that neither act physically causes the other - is if there is some common ancestor node unobserved; and having already seen that our causal graph must include logical uncertainty if it is to stay factored, we can (must?) interpret this unobserved common node as the logical output of the known expected utility calculation.

\n

From this, I would argue, TDT follows.  But of course it's going to be difficult to exhibit an algorithm that computes this - guessing unknown causal networks is an extremely difficult problem in machine learning, and only small such networks can be learned.  In general, determining the causal structure of reality is AI-complete.  And by interjecting logical uncertainty into the problem, we really are heading far beyond the causal networks that known machine algorithms can learn.  But it is the case that if you rely on humans to learn the causal algorithm, then it is pretty clear that the Newcomb's Problem setup, if it is to be analyzed in causal terms at all, must have nodes corresponding to logical uncertainty, on pain of violating the axioms governing causal graphs.  Furthermore, in being told that Omega's leaving box B full or empty correlates to our decision to take only one box or both boxes, and that Omega's act lies in the past, and that Omega's act is not directly influencing us, and that we have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output, then we're being told in unambiguous terms (I think) to make our own physical act and Omega's act a common descendant of the unknown logical output of our known computation.  (A counterexample in the form of another causal graph compatible with the same data is welcome.)  And of course we could make the problem very clear by letting the agent be a computer program and letting Omega have a copy of the source code with superior computing power, in which case the logical interpretation is very clear.

\n

So these are the facts which TDT takes into account, and the facts which it ignores.  The Nesov-Dai updateless decision theory is even stranger - as far as I can make out, it ignores all facts except for the fact about which inputs have been received by the logical version of the computation it implements.  If combined with TDT, we would interpret UDT as having a never-updated weighting on all possible universes, and a causal structure (causal graph, presumably) on those universes.  Any given logical computation in UDT will count all instantiations of itself in all universes which have received exactly the same inputs - even if those instantiations are being imagined by Omega in universes which UDT would ordinarily be interpreted as \"known to be logically inconsistent\", like universes in which the third decimal digit of pi is 3.  Then UDT calculates the counterfactual consequences, weighted across all imagined universes, using its causal graphs on each of those universes, of setting the logical act to A_i.  Then it maximizes on A_i.

\n

I would ask if, applying Metacircular Decision Theory from a \"common-sense human base level\", you see any case in which additional facts should be taken into account, or other facts ignored, apart from those facts used by TDT (UDT).  If not, and if TDT (UDT) are reflectively consistent, then TDT (UDT) is the fixed point of MCDT starting from a human baseline decision theory.  Of course this can't actually be the case because TDT (UDT) are incomplete with respect to the open problems cited earlier, like logical ordering of moves, and choice of conditional strategies in response to conditional strategies.  But it would be the way I'd pose the problem to you, Gary Drescher - MCDT is an interesting way of looking at things, but I'm still trying to wrap my mind around it.

\n

Part 2: Metacircular Decision Theory as reflection criterion.

\n
\n

MCDT's proposed criterion is this: the agent makes a meta-choice about which facts to omit when making inferences about the hypothetical actions, and selects the set of facts which lead to the best outcome if the agent then evaluates the original candidate actions with respect to that choice of facts. The agent then iterates that meta-evaluation as needed (probably not very far) until a fixed point is reached, i.e. the same choice (as to which facts to omit) leaves the first-order choice unchanged. (It's ok if that's intractable or uncomputable; the agent can muddle through with some approximate algorithm.)

\n

...In other words, metacircular consistency isn't just a test that we'd like the decision theory to pass. Metacircular consistency is the theory; it is the algorithm.

\n
\n

But it looks to me like MCDT has to start from some particular base theory, and different base theories may have different fixed points (or conceivably, cycles).  In which case we can't yet call MCDT itself a complete theory specification.  When you talk about which facts would be wise to take into account, or ignore (or recompute counterfactually even if they already have known values?), then you're imagining different source codes (or MCDT specifications?) that an agent could have; and calculating the benefits of adopting these different source codes, relative to the way the current base theory computes \"adopting\" and \"benefit\".

\n

For example, if you start with CDT and apply MCDT at 7am, it looks to me like \"use TDT (UDT) for all cases where my source code has a physical effect after 7am, and use CDT for all cases where the source code had a physical effect before 7am or a correlation stemming from common ancestry\" is a reflectively stable fixed point of MCDT.  Whenever CDT asks \"What if I took into account these different facts?\", it will say, \"But Omega would not be physically affected by my self-modification, so clearly it can't benefit me in any way.\"  If the MCDT criterion is to be applied in a different and intuitively appealing way that has only one fixed point (up to different utility functions) then this would establish MCDT as a good candidate for the decision theory, but right now it does look to me like a reflective consistency test.  But maybe this is because I haven't yet wrapped my mind around the MCDT's fact-treatment-based decomposition of decision theories, or because you've already specified further mandatory structure in the base theory how the effect of ignoring or taking into account some particular fact is to be computed.

" } }, { "_id": "2utSryKeZ8hMdirhp", "title": "How inevitable was modern human civilization - data", "pageUrl": "https://www.lesswrong.com/posts/2utSryKeZ8hMdirhp/how-inevitable-was-modern-human-civilization-data", "postedAt": "2009-08-20T21:42:41.869Z", "baseScore": 32, "voteCount": 33, "commentCount": 103, "url": null, "contents": { "documentId": "2utSryKeZ8hMdirhp", "html": "

We have a sample of one modern human civilization, but there are some hints on how likely it was to happen.

Major types of hints are:

\n\n

Data for:

\n\n

Data against:

\n\n

To me it looks like life, animals with nervous systems, Upper Paleolithic-style Homo, language, and behavioral modernity were all extremely unlikely events (notice how long ago they are - vaguely ~3.5bln, ~600mln, ~3mln, ~200k or ~600k, ~50k years ago) - except perhaps language and behavioral modernity might have been linked with each other, if language was relatively late (Homo sapiens only) and behavioral modernity more gradual (and its apparent suddenness is an artifact). Once we have behavioral modernity, modern civilization seems almost inevitable. Your interpretation might vary of course, but at least now you have a lot of data to argue for your position, in a convenient format.

" } }, { "_id": "dzr4GzB6cA3ARPPP9", "title": "Evolved Bayesians will be biased", "pageUrl": "https://www.lesswrong.com/posts/dzr4GzB6cA3ARPPP9/evolved-bayesians-will-be-biased", "postedAt": "2009-08-20T14:54:18.626Z", "baseScore": 28, "voteCount": 30, "commentCount": 14, "url": null, "contents": { "documentId": "dzr4GzB6cA3ARPPP9", "html": "

I have a small theory which strongly implies that getting less biased is likely to make \"winning\" more difficult.

\n

Imagine some sort of evolving agents that follow vaguely Bayesianish logic. They don't have infinite resources, so they use a lot of heuristics, not direct Bayes rule with priors based on Kolmogorov complexity. Still, they employ a procedure A to estimate what the world is like based on data available, and a procedure D to make decisions based on their estimations, both of vaguely Bayesian kind.

\n

Let's be kind to our agents and grant that for every possible data and every possible decision they might have encountered in their ancestral environment, they make exactly the same decision as an ideal Bayesian agent would. A and D have been fine-tuned to work perfectly together.

\n

That doesn't mean that either A or D are perfect even within this limited domain. Evolution wouldn't care about that at all. Perhaps different biases within A cancel each other. For example an agent might overestimate snakes' dangerousness and also overestimate his snake-dodging skills - resulting in exactly the right amount of fear of snakes.

\n

Or perhaps a bias in A cancels another bias in D. For example, an agent might overestimate his chance of success at influencing tribal policy, which neatly cancels his unreasonably high threshold for trying to do so.

\n

And then our agents left their ancestral environment, and found out that for some of the new situations their decisions aren't that great. They thought about it a lot, noticed how biased they are, and started a website on which they teach each other how to make their A more like a perfect Bayesian's A. They even got quite good at it.

\n

Unfortunately they have no way of changing their D. So biases in their decisions which used to neatly counteract biases in their estimation of the world now make them commit a lot of mistakes even in situations where naive agents do perfectly well.

\n

The problem is that for virtually every A and D pair that could have possibly evolved, no matter how good the pair is together, neither A nor D would be perfect in isolation. In all likelihood both A and D are ridiculously wrong, just in a special way that never hurts. Improving one without improving the other, or improving just part of either A or D, will lead to much worse decisions, even if your idea of what the world is like gets better.
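The snake example above can be put into toy numbers (all invented) to show the failure mode: the two biases cancel and produce the right decision, and correcting only one of them flips a decision that used to come out correct:

```python
HARM, FLEE_COST = 100.0, 8.0      # invented stakes: getting bitten vs. running away

def flees(p_attack_belief, p_dodge_belief):
    expected_harm = p_attack_belief * (1 - p_dodge_belief) * HARM
    return expected_harm > FLEE_COST

# True world: P(attack) = 0.2, P(dodge) = 0.5, so expected harm is 10 > 8; fleeing is correct.
ideal         = flees(0.2, 0.5)    # unbiased A and D
both_biased   = flees(0.4, 0.75)   # overestimates the danger AND the dodging skill: 0.4 * 0.25 = 0.10
half_debiased = flees(0.2, 0.75)   # only the danger estimate corrected: 0.2 * 0.25 = 0.05

print(ideal, both_biased, half_debiased)   # True True False -- the partially debiased agent stops fleeing
```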

\n

I think humans might be a lot like that. As an artifact of evolution we make incorrect guesses about the world, and choices that would be incorrect given our guesses - just in a way that worked really well in the ancestral environment, and works well enough most of the time even now. Depressive realism is a special case of this effect, but the problem is much more general.

" } }, { "_id": "uHc2mCfJGiGvBc2Zo", "title": "You have just been Counterfactually Mugged!", "pageUrl": "https://www.lesswrong.com/posts/uHc2mCfJGiGvBc2Zo/you-have-just-been-counterfactually-mugged", "postedAt": "2009-08-19T22:24:38.805Z", "baseScore": 7, "voteCount": 15, "commentCount": 25, "url": null, "contents": { "documentId": "uHc2mCfJGiGvBc2Zo", "html": "

I'm going to test just how much the people here are committed to paying a Counterfactual Mugger, by playing Omega.

\n

I'm going to roll a die. If it doesn't come up 5 or 6, I'm going to ask Eliezer Yudkowsky to reply to this article with the comment \"I am a poopy head.\" If I roll a 5 or 6, I'm going to donate $20 to SIAI if I predict that Eliezer Yudkowsky will post the above comment.

\n

Because Eliezer has indicated that he would pay up when counterfactually mugged, I do predict that, if I roll a 5 or 6, he'll respond.

\n

::rolls die::

\n

Darn it! It's a 5. Well, I'm a man of my word, so...

\n

::donates::

\n

Um, let's try that again. (At least I've proven my honesty!)

\n

::rolls die::

\n

Okay, this time it's a 1.

\n

So, Eliezer, will you post a comment admitting that you're a poopy head?

" } }, { "_id": "szfxvS8nsxTgJLBHs", "title": "Ingredients of Timeless Decision Theory", "pageUrl": "https://www.lesswrong.com/posts/szfxvS8nsxTgJLBHs/ingredients-of-timeless-decision-theory", "postedAt": "2009-08-19T01:10:11.862Z", "baseScore": 53, "voteCount": 55, "commentCount": 232, "url": null, "contents": { "documentId": "szfxvS8nsxTgJLBHs", "html": "

Followup to: Newcomb's Problem and Regret of Rationality, Towards a New Decision Theory

\n

Wei Dai asked:

\n
\n

\"Why didn't you mention earlier that your timeless decision theory mainly had to do with logical uncertainty? It would have saved people a lot of time trying to guess what you were talking about.\"

\n
\n

...

\n

All right, fine, here's a fast summary of the most important ingredients that go into my \"timeless decision theory\".  This isn't so much an explanation of TDT, as a list of starting ideas that you could use to recreate TDT given sufficient background knowledge.  It seems to me that this sort of thing really takes a mini-book, but perhaps I shall be proven wrong.

\n

The one-sentence version is:  Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.

\n

The three-sentence version is:  Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.

\n

To obtain the background knowledge if you don't already have it, the two main things you'd need to study are the classical debates over Newcomblike problems, and the Judea Pearl synthesis of causality.  Canonical sources would be \"Paradoxes of Rationality and Cooperation\" for Newcomblike problems and \"Causality\" for causality.

\n

For those of you who don't condescend to buy physical books, Marion Ledwig's thesis on Newcomb's Problem is a good summary of the existing attempts at decision theories, evidential decision theory and causal decision theory.  You need to know that causal decision theories two-box on Newcomb's Problem (which loses) and that evidential decision theories refrain from smoking on the smoking lesion problem (which is even crazier).  You need to know that the expected utility formula is actually over a counterfactual on our actions, rather than an ordinary probability update on our actions.

\n

I'm not sure what you'd use for online reading on causality.  Mainly you need to know:

\n\n

It will be helpful to have the standard Less Wrong background of defining rationality in terms of processes that systematically discover truths or achieve preferred outcomes, rather than processes that sound reasonable; understanding that you are embedded within physics; understanding that your philosophical intuitions are how some particular cognitive algorithm feels from inside; and so on.

\n
\n

The first lemma is that a factorized probability distribution which includes logical uncertainty - uncertainty about the unknown output of known computations - appears to need cause-like nodes corresponding to this uncertainty.

\n

Suppose I have a calculator on Mars and a calculator on Venus.  Both calculators are set to compute 123 * 456.  Since you know their exact initial conditions - perhaps even their exact initial physical state - a standard reading of the causal graph would insist that any uncertainties we have about the output of the two calculators, should be uncorrelated.  (By standard D-separation; if you have observed all the ancestors of two nodes, but have not observed any common descendants, the two nodes should be independent.)  However, if I tell you that the calculator at Mars flashes \"56,088\" on its LED display screen, you will conclude that the Venus calculator's display is also flashing \"56,088\".  (And you will conclude this before any ray of light could communicate between the two events, too.)

\n

If I was giving a long exposition I would go on about how if you have two envelopes originating on Earth and one goes to Mars and one goes to Venus, your conclusion about the one on Venus from observing the one on Mars does not of course indicate a faster-than-light physical event, but standard ideas about D-separation indicate that completely observing the initial state of the calculators ought to screen off any remaining uncertainty we have about their causal descendants so that the descendant nodes are uncorrelated, and the fact that they're still correlated indicates that there is a common unobserved factor, and this is our logical uncertainty about the result of the abstract computation.  I would also talk for a bit about how if there's a small random factor in the transistors, and we saw three calculators, and two showed 56,088 and one showed 56,086, we would probably treat these as likelihood messages going up from nodes descending from the \"Platonic\" node standing for the ideal result of the computation - in short, it looks like our uncertainty about the unknown logical results of known computations, really does behave like a standard causal node from which the physical results descend as child nodes.
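For the noisy three-calculator case, a short sketch with an invented glitch model shows the likelihood messages combining at the Platonic node:

```python
import math

PRIOR_L = {56_088: 0.5, 56_086: 0.5}     # logical uncertainty before doing the arithmetic
DISPLAYS = [56_088, 56_088, 56_086]      # what the three calculators flashed

def likelihood(display, true_result):
    # Assumed noise model: a calculator shows the true result with probability 0.9,
    # and glitches to two more or two less with probability 0.05 each.
    if display == true_result:
        return 0.90
    if abs(display - true_result) == 2:
        return 0.05
    return 0.0

unnorm = {L: p * math.prod(likelihood(d, L) for d in DISPLAYS) for L, p in PRIOR_L.items()}
z = sum(unnorm.values())
posterior = {L: round(v / z, 3) for L, v in unnorm.items()}
print(posterior)   # {56088: 0.947, 56086: 0.053} -- the two agreeing displays dominate
```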

\n

But this is a short exposition, so you can fill in that sort of thing yourself, if you like.

\n

Having realized that our causal graphs contain nodes corresponding to logical uncertainties / the ideal result of Platonic computations, we next construe the counterfactuals of our expected utility formula to be counterfactuals over the logical result of the abstract computation corresponding to the expected utility calculation, rather than counterfactuals over any particular physical node.

\n

You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation.

\n

Formally you'd use a Godelian diagonal to write:

\n

Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)*P(this computation yields A []-> O|rest of universe))

\n

(where P( X=x []-> Y | Z ) means the counterfactual probability, computed on the factored causal graph P, that surgically setting node X to x leads to Y, given Z)

\n

Setting this up correctly (in accordance with standard constraints on causal graphs, like noncircularity) will solve (yield reflectively consistent, epistemically intuitive, systematically winning answers to) 95% of the Newcomblike problems in the literature I've seen, including Newcomb's Problem and other problems causing CDT to lose, the Smoking Lesion and other problems causing EDT to fail, Parfit's Hitchhiker which causes both CDT and EDT to lose, etc.

\n

Note that this does not solve the remaining open problems in TDT (though Nesov and Dai may have solved one such problem with their updateless decision theory).  Also, although this theory goes into much more detail about how to compute its counterfactuals than classical CDT, there are still some visible incompletenesses when it comes to generating causal graphs that include the uncertain results of computations, computations dependent on other computations, computations uncertainly correlated to other computations, computations that reason abstractly about other computations without simulating them exactly, and so on.  On the other hand, CDT just has the entire counterfactual distribution rain down on the theory as mana from heaven (e.g. James Joyce, Foundations of Causal Decision Theory), so TDT is at least an improvement; and standard classical logic and standard causal graphs offer quite a lot of pre-existing structure here.  (In general, understanding the causal structure of reality is an AI-complete problem, and so in philosophical dilemmas the causal structure of the problem is implicitly given in the story description.)

\n

Among the many other things I am skipping over:

\n\n

Those of you who've read the quantum mechanics sequence can extrapolate from past experience that I'm not bluffing.  But it's not clear to me that writing this book would be my best possible expenditure of the required time.

" } }, { "_id": "qHDNab3hnRFNJFsiE", "title": "Scott Aaronson's \"On Self-Delusion and Bounded Rationality\"", "pageUrl": "https://www.lesswrong.com/posts/qHDNab3hnRFNJFsiE/scott-aaronson-s-on-self-delusion-and-bounded-rationality", "postedAt": "2009-08-18T19:17:07.019Z", "baseScore": 22, "voteCount": 19, "commentCount": 52, "url": null, "contents": { "documentId": "qHDNab3hnRFNJFsiE", "html": "

Poignant short story about truth-seeking that I just found. Quote:

\n
\n

\"No,\" interjected an internal voice. \"You need to prove that your dad will appear by a direct argument from the length of your nails, one that does not invoke your subsisting in a dream state as an intermediate step.\"

\n

\"Nonsense,\" retorted another voice. \"That we find ourselves in a dream state was never assumed; rather, it follows so straightforwardly from the long-nail counterfactual that the derivation could be done, I think, even in an extremely weak system of inference.\"

\n
\n

The full thing reads like a flash tour of OB/LW, except it was written in 2001.

" } }, { "_id": "Zwbz6Wv7yaLM6J36r", "title": "Singularity Summit 2009 (quick post)", "pageUrl": "https://www.lesswrong.com/posts/Zwbz6Wv7yaLM6J36r/singularity-summit-2009-quick-post", "postedAt": "2009-08-16T23:29:40.282Z", "baseScore": 18, "voteCount": 17, "commentCount": 37, "url": null, "contents": { "documentId": "Zwbz6Wv7yaLM6J36r", "html": "

Someone else may do a more formal announcement later, but since early registration expires on August 20th, I'm doing a quick heads-up to Less Wrong readers:

\n

The Singularity Summit 2009 is in New York on Oct 3-4.

\n

There are discounts for students, blog mentions, referrals, and registration before August 20th.

\n

Speakers of note to rationalists will include Robin Hanson, Gary Drescher (author of Good and Real, one of the few master-level works of reductionism out there), and David Chalmers.  Also speaking will be Marcus Hutter and Juergen Schmidhuber, as well as some of the usual suspects:  Aubrey de Grey, Peter Thiel, Ben Goertzel, and Ray Kurzweil.

\n

They're really trying to raise the intellectual level this year.

\n

Singularity Summit 2009 home page, program, and registration.

" } }, { "_id": "frcaDCCBv4ZKET7mH", "title": "Friendlier AI through politics", "pageUrl": "https://www.lesswrong.com/posts/frcaDCCBv4ZKET7mH/friendlier-ai-through-politics", "postedAt": "2009-08-16T21:29:56.353Z", "baseScore": 2, "voteCount": 12, "commentCount": 44, "url": null, "contents": { "documentId": "frcaDCCBv4ZKET7mH", "html": "

David Brin suggests that some kind of political system populated with humans and diverse but imperfectly rational and friendly AIs would evolve in a satisfactory direction for humans.

\n

I don't know whether creating an imperfectly rational general AI is any easier, except that limited perceptual and computational resources obviously imply less than optimal outcomes; still, why shouldn't we hope for optimal given those constraints?  I imagine the question will become more settled before anyone nears unleashing a self-improving superhuman AI.

\n

An imperfectly friendly AI, perfectly rational or not, is a very likely scenario.  Is it sufficient to create diverse singleton value-systems (demographically representative of humans' values) rather than a consensus (over all humans' values) monolithic Friendly?  

\n

What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?  Brin seems to have some hope of improving politics regardless of AI participation, but I'm not sure exactly what his dream is or how to get there - perhaps his \"disputation arenas\" would work if the participants were rational and altruistically honest.

" } }, { "_id": "9ZodFr54FtpLThHZh", "title": "Experiential Pica", "pageUrl": "https://www.lesswrong.com/posts/9ZodFr54FtpLThHZh/experiential-pica", "postedAt": "2009-08-16T21:23:10.050Z", "baseScore": 131, "voteCount": 125, "commentCount": 113, "url": null, "contents": { "documentId": "9ZodFr54FtpLThHZh", "html": "

tl;dr version: Akrasia might be like an eating disorder!

\n

When I was a teenager, I ate ice.  Lots of ice.  Cups and cups and cups of ice, constantly, all day long, when it was freely available.  This went on for years, during which time I ignored the fact that others found it peculiar. (\"Oh,\" I would joke to curious people at the school cafeteria, ignoring the opportunity to detect the strangeness of my behavior, \"it's for my pet penguin.\")  I had my cache of excuses: it keeps my mouth occupied.  It's so nice and cool in the summer.  I don't drink enough water anyway, it keeps me hydrated.  Yay, zero-calorie snack!

\n

Then I turned seventeen and attempted to donate blood, and was basically told, when they did the finger-stick test, \"Either this machine is broken or you should be in a dead faint.\"  I got some more tests done, confirmed that extremely scary things were wrong with my blood, and started taking iron supplements.  I stopped eating ice.  I stopped having any interest in eating ice at all.

\n

Pica is an impulse to eat things that are not actually food.  Compared to some of the things that people with pica eat, I got off very easy: ice did not do me any harm on its own, and was merely a symptom.  But here's the kicker: What I needed was iron.  If I'd been consciously aware of that need, I'd have responded to it with the supplements far earlier, or with steak1 and spinach and cereals fortified with 22 essential vitamins & minerals.  Ice does not contain iron.  And yet when what I needed was iron, what I wanted was ice.

\n

What if akrasia is experiential pica?  What if, when you want to play Tetris or watch TV or tat doilies instead of doing your Serious Business, that means that you aren't going to art museums enough, or that you should get some exercise, or that what your brain really craves is the chance to write a symphony?

\n

The existence - indeed, prevalence - of pica is a perfect example of how the brain is very bad at communicating certain needs to the systems that can get those needs met.  Even when the same mechanism - that of instilling the desire to eat something, in the case of pica - could be used to meet the need, the brain misfires2.  It didn't make me crave liver and shellfish and molasses, it made me crave water in frozen form.  A substance which did nothing to help, and was very inconvenient to continually keep around and indulge in, and which made people look at me funny when I held up the line at the drink dispenser for ten minutes filling up half a dozen paper cups.

\n

So why shouldn't I believe that, for lack of some non-food X, my brain just might force me to seek out unrelated non-food Y and make me think it was all my own brilliant idea?  (\"Yay, zero-calorie snack that hydrates, cools, and is free or mere pennies from fast food outlets when I have completely emptied the icemaker!  I'm so clever!\")

\n

The trouble, if one hopes to take this hypothesis any farther, is that it's hard to tell what your experiential deficiencies might be3.  The baseline needs for figure-skating and flan-tasting probably vary person-to-person a lot more than nutrient needs do.  You can't stick your finger, put a drop of blood into a little machine that goes \"beep\", and see if it says that you spend too little time weeding peonies.  I also have no way to solve the problem of being akratic about attempted substitutions for akrasia-related activities: even if you discovered for sure that by baking a batch of muffins once a week, you would lose the crippling desire to play video games constantly, nothing's stopping the desire to play video games from obstructing initial attempts at muffin-baking.

\n

Possible next steps to explore the experiential pica idea and see how it pans out:

\n\n

 

\n

1. I was not a vegetarian until I had already been eating ice for a very long time.  The switch can only have exacerbated the problem.

\n

2. Some pica sufferers do in fact eat things that contain the mineral they're deficient in, but not all.

\n

3. Another problem is that this theory only covers what might be called \"enticing\" akrasia, the positive desire to do non-work things.  It has nothing to say about aversive akrasia, where you would do anything but what you metawant to do.

" } }, { "_id": "anS4CNRhddny9Snz2", "title": "Happiness is a Heuristic", "pageUrl": "https://www.lesswrong.com/posts/anS4CNRhddny9Snz2/happiness-is-a-heuristic", "postedAt": "2009-08-16T21:12:39.466Z", "baseScore": 13, "voteCount": 15, "commentCount": 16, "url": null, "contents": { "documentId": "anS4CNRhddny9Snz2", "html": "

Whenever the topic of happiness is mentioned, it's always discussed like it's the most important thing in the world. People talk about it like they would a hidden treasure or a rare beast - you have to seek it, hunt it, ensnare it and hold it tight, or it'll slip through your fingers. Perhaps it's just the contrarian in me, but this seems misguided -  happiness shouldn't be searched for like the holy grail. Not that I don't want to be happy, but is that really the purpose of my life - to have my neurons stimulated in a way that feels good, and try to keep that up until I die? Why don't I just slip myself into a Soma-coma then? Of course, anything I do boils down to a particular stimulation of neurons, but that doesn't mean there's not something better to aspire to. To pursue happiness as an end in itself, I think, is backwards. It wasn't built into our brains because evolution was being nice - it's there because it increases our fitness. Happiness is designed to get us somewhere, not to be a destination in itself.

\n

\n
Fortunately, we're not obligated to follow evolution's whims. But this confusion might be the reason that, in general, we're crappy at predicting what will make us happy. A few data points:
\n
-In an oft-quoted study, lottery winners are less happy than they predicted a year afterwards, and paraplegics are happier.[1]
\n
-People tend to enjoy potato chips the same amount despite predicting (after being primed) that they'll either love them or hate them.[2]
\n
-Both assistant professors who do and do not receive tenure over-emphasize the impact the result will have on their happiness.
\n
Our wanting system is separate from our liking system[3], and, contrary to intuition, we don't seem to be wired to pursue things that will make us happy. Rather, we seem to be wired to pursue things, and then get a fixed amount of happiness from whatever we end up with. We try to get what we like, but we tend to end up liking what we get. Dan Gilbert refers to this as 'synthetic' happiness, and posits it as the reason people who miss big opportunities tend to say that \"it worked out for the best.\" It's tempting to assume this is just a rationalization technique, but there's evidence it happens on a subconscious level. When amnesiacs are given a painting as a gift, later they tend to pick it as their favorite out of a group of paintings, despite not being able to remember owning it.[4]
\n
Where this system seems to get derailed is when we have to make extensive use of our higher brain functions. When we have to make a complex decision or deliberate over a choice, we tend to be less satisfied with it. A few more data points:
\n
-People are more satisfied with chocolate when they are choosing from six different kinds than when choosing from thirty different kinds. [5]
\n
-People are happier with a photo when they have to immediately choose the one they want than when they're given three days to make their choice.[4]
\n
-People who simply rated a group of posters on a scale of 1-9 (and then received their favorite one) were happier than people who had to list the reasons they liked the posters first. [6]
\n
-When given a choice of jams, people are more likely to make a purchase when choosing from six than when choosing from twenty-four.[7]
\n
Happiness is tied to affect, the subconscious system responsible for our 'gut' reactions. It's much easier (mentally, anyway) for us to make a choice and then decide it was the correct one than to weigh all the options and then be happy with whatever is 'best'. Barry Schwartz refers to this as \"the paradox of choice\".  When we can't rely on a gut reaction, or are forced to supercede it by consciously deliberating, our happiness seems to suffer.
\n
Though it's usually referred to as one emotion, 'happiness' is just an umbrella term for a variety of positive emotional states. These tend to fall into two groups - hedonia and eudaimonia. Hedonia is pleasure - pure sensory stimulation. Eating a cookie, buying a car, having an orgasm are all hedonic pleasures. Hedonia comes from external stimulation, things outside ourselves that we pursue. Because we're wired to pursue things regardless of how much we already have, hedonia doesn't get sated. We quickly adapt to changes in our environment, always wanting more - more money, more food, and more sex. We get caught on a hedonic treadmill [8], needing more and more stimulation to produce the same amount of pleasure. Since we quickly return to our hedonic set points, this pleasure never lasts - nor should we expect it to. The ancient humans who were satisfied with what they had all got outcompeted by the ones who wanted more. But that means for sustained happiness, you have to look elsewhere.
\n
Eudaimonia, on the other hand, does produce lasting happiness. Eudaimonia is 'well-being', and it comes, according to Martin Seligman, from identifying your strengths and orienting your life around them. Eudaimonia isn't based on pleasure per se, but on orienting yourself so you interact with the world in a positive way. For example, having fulfilling social relationships - being the type of person that cares about others - is consistently one of the best predictors of happiness.  More data points:
\n
-People who get married tend to be happier than single people.[9]
\n
-Spending twenty dollars on a gift for another person produces more happiness than spending twenty dollars on yourself.[10]
\n
-Writing a long, grateful letter to a person to whom you're thankful, visiting them, and reading it out loud produces extreme increases in happiness even months afterwards.[10]
\n
Other eudaimonic states are Csíkszentmihályi's flow state [11], and the states of happiness reached by Buddhist monks [12]. All come not from pure pleasure, but from constructing your life in a way that matches what's most important to you. It's eudaimonic changes that center our lives around our values that are capable of raising our hedonic set points. And so, in the rich tradition of scientific studies being boiled down to aphorisms, \"happiness comes from within\".
\n
Perhaps it's tacit knowledge of this that makes us suspicious of supposed extreme happiness offered by an external source. People generally reject eternal happiness if it means taking Soma for the rest of your life, or being locked into an experience machine, or being reduced to an orgasmium, or any one of a myriad of possible ways to max out those pleasure neurons. I don't see this as a bug - I see it as a feature. It means our desire for happiness is conflicting with some higher value. The solution to a conflict like this is, I think, straightforward - go with the higher value! Happiness is designed to help you achieve what's important to you -  to take you somewhere your brain thinks you're supposed to go. A conflict means that your subconscious doesn't like where it's headed. And if you don't like it, you don't have to go there. [13]
\n
The happiness heuristic seems to drive us to the best possible future outcomes, by motivating us to seek resources (hedonia) and have a life that fulfills what's important to us (eudaimonia). Our brains don't assign happiness intrinsic value - it has delegated value, in that acquiring it lets us acquire other things. Seeking happiness seems to be unnecessary; if you orient your life around your desires and values, happiness will generally follow one way or another.
\n

\n
Links (note: if anyone can provide a link to the actual studies referenced, rather than just newspaper articles, I'll replace them)
\n
1: http://education.ucsb.edu/janeconoley/ed197/documents/brickman_lotterywinnersandaccidentvictims.pdf
\n
2: http://www.psychologicalscience.org/observer/getArticle.cfm?id=2188
\n
3: http://wireheading.com/pleasure/liking-wanting.html
\n
4: http://www.ted.com/talks/dan_gilbert_asks_why_are_we_happy.html
\n
5: http://www.dynamist.com/articles-speeches/forbes/choice.html
\n
6: http://scienceblogs.com/cortex/2007/02/post_14.php
\n
7: http://scienceblogs.com/cortex/2007/06/the_paradox_of_choice_internet.php
\n
8: http://en.wikipedia.org/wiki/Hedonic_treadmill
\n
9: http://www.apa.org/releases/married_happy.html
\n
10: http://www.ted.com/index.php/talks/martin_seligman_on_the_state_of_psychology.html
\n
11: http://en.wikipedia.org/wiki/Flow_(psychology)
\n
12: http://www.timesonline.co.uk/tol/comment/columnists/libby_purves/article1136398.ece
\n
13: http://lesswrong.com/lw/wv/prolegomena_to_a_theory_of_fun/
\n

\n

 

" } }, { "_id": "GySAm6fTjjDEPAXF8", "title": "My God! It's full of Nash equilibria!", "pageUrl": "https://www.lesswrong.com/posts/GySAm6fTjjDEPAXF8/my-god-it-s-full-of-nash-equilibria", "postedAt": "2009-08-16T19:59:33.959Z", "baseScore": 19, "voteCount": 19, "commentCount": 2, "url": null, "contents": { "documentId": "GySAm6fTjjDEPAXF8", "html": "

Speaking of Scott Aaronson, his latest post at Shtetl-Optimized seems worthy of some linky love.

\r\n
\r\n

Why do native speakers of the language you’re studying talk too fast for you to understand them?  Because otherwise, they could talk faster and still understand each other.

\r\n

...

\r\n

Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium.  Clearly, then, I haven’t yet gotten good enough at Malthusianizing my daily life—have you?

\r\n
\r\n

 

" } }, { "_id": "8njamAu4vgJYxbJzN", "title": "Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds", "pageUrl": "https://www.lesswrong.com/posts/8njamAu4vgJYxbJzN/bloggingheads-yudkowsky-and-aaronson-talk-about-ai-and-many", "postedAt": "2009-08-16T16:06:18.646Z", "baseScore": 22, "voteCount": 22, "commentCount": 102, "url": null, "contents": { "documentId": "8njamAu4vgJYxbJzN", "html": "

Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics

\n

Sections of the diavlog:

\n\n

 

" } }, { "_id": "zQT2pAf9zxCCYMXkC", "title": "Minds that make optimal use of small amounts of sensory data", "pageUrl": "https://www.lesswrong.com/posts/zQT2pAf9zxCCYMXkC/minds-that-make-optimal-use-of-small-amounts-of-sensory-data", "postedAt": "2009-08-15T14:11:34.964Z", "baseScore": 12, "voteCount": 15, "commentCount": 22, "url": null, "contents": { "documentId": "zQT2pAf9zxCCYMXkC", "html": "

In That alien message, Eliezer made some pretty wild claims:

\n
\n

My moral - that even Einstein did not come within a million light-years of making efficient use of sensory data.

\n

Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense.  A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis - perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration - by the time it had seen the third frame of a falling apple.  It might guess it from the first frame, if it saw the statics of a bent blade of grass.

\n

They never suspected a thing.  They weren't very smart, you see, even before taking into account their slower rate of time.  Their primitive equivalents of rationalists went around saying things like, \"There's a bound to how much information you can extract from sensory data.\"  And they never quite realized what it meant, that we were smarter than them, and thought faster.

\n
\n

In the comments, Will Pearson asked for \"some form of proof of concept\". It seems that researchers at Cornell - Schmidt and Lipson - have done exactly that. See their video on Guardian Science:

\n
\n

'Eureka machine' can discover laws of nature - The machine formulates laws by observing the world and detecting patterns in the vast quantities of data it has collected

\n
\n

\n

 

\n

Researchers at Cambridge and Aberystwyth have gone one step further and implemented an AI system/robot to perform scientific experiments:

\n
\n

Researchers at Aberystwyth University in Wales and England's University of Cambridge report in Science today that they designed Adam - they describe how the bot operates by relating how he carried out one of his tasks, in this case to find out more about the genetic makeup of baker's yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems. Using artificial intelligence, Adam hypothesized that certain genes in baker's yeast code for specific enzymes that catalyze biochemical reactions. The robot devised experiments to test these beliefs, ran the experiments, and interpreted the results.

\n
\n

The crucial question is: what can we learn about the likely effectiveness of a \"superintelligent\" AI from the behavior of these AI programs? First of all, let us be clear: this AI is *not* a \"superintelligence\", so we shouldn't expect it to perform at that level. The problem we face is analogous to the problem of extrapolating how fast an Olympic sprinter can run from looking at a baby crawling around on the floor. Furthermore, the Cornell machine was given a physical system that was specifically chosen to be easy to analyze, and a representation (equations) that is known to be suited to the problem.

\n

We can certainly state that the program analyzed some data much faster than any human could have done. In a running time probably measured in hours or minutes, it took a huge stream of raw position and velocity data and found the underlying conserved quantities. And given likely algorithmic optimizations and another 10 years of Moore's law, we can safely say that in 10 years' time, that particular program will run in seconds on a $500 machine or milliseconds on a supercomputer. These results actually surprise me: an AI can automatically and instantly analyze a physical system (albeit a rigged one).
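
To make the \"statistics finds the conserved quantity\" idea concrete, here is a toy sketch of my own - not Schmidt and Lipson's actual system, which searches a much larger space of candidate expressions - that simulates a frictionless pendulum and then simply checks which of a few hand-picked expressions stays constant along the observed trajectory:

import math

# Toy pendulum: theta'' = -(g/L) * sin(theta); record (theta, omega) samples.
g, L, dt = 9.8, 1.0, 0.001
theta, omega = 1.2, 0.0
samples = []
for _ in range(20000):
    omega += -(g / L) * math.sin(theta) * dt   # semi-implicit Euler step
    theta += omega * dt
    samples.append((theta, omega))

# Hand-picked candidate quantities; the 'discovery' step just asks which one
# barely changes over the data.
candidates = {
    'theta': lambda th, om: th,
    'omega^2': lambda th, om: om * om,
    'omega^2/2 - (g/L)*cos(theta)': lambda th, om: om * om / 2 - (g / L) * math.cos(th),
}
for name, f in candidates.items():
    values = [f(th, om) for th, om in samples]
    print(name, 'varies by', round(max(values) - min(values), 3))
# Only the energy-like combination stays (nearly) constant, so it gets flagged
# as a conserved quantity.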

\n

But, of course, one has to ask: how much more narrow-AI work would it take to actually look at video of some bouncing, falling and whirling objects and deduce a general physical law such as the earth's gravity and the laws governing air resistance, where the objects are not hand-picked to be easy to analyze? This is unclear. But I can see mechanisms whereby this would work, rather than merely having to submit to the overwhelming power of the word \"superintelligence\". My suspicion is that with current state-of-the-art object identification technology, video footage of a system of bouncing balls and pendulums and springs would be amenable to this kind of analysis. There may even be a research project in that proposition. 

\n

As far as extrapolating the behavior of a superintelligence from the behavior of the Cornell AI or the Adam robot goes, we should note that no human can look at a complex physical system for a few seconds and just write down the physical law or equation that it obeys. A simple narrow AI has already outperformed humans at one specific task, though it still cannot do most of what a scientist does. We should therefore update our beliefs to assign more weight to the hypothesis that on some particular narrow physical modelling task, a \"superintelligence\" would vastly outperform us. Personally I was surprised at what such a simple system can do, though with hindsight it is obvious: data from a physical system follows patterns, and statistics can identify those patterns. Science is not a magic ritual that only humans can perform; rather, it is a specific kind of algorithm, and we should expect no special injunction against silicon minds carrying it out.

\n

 

" } }, { "_id": "pzsdtz6bRoDrsrkLP", "title": "Fighting Akrasia: Survey Design Help Request", "pageUrl": "https://www.lesswrong.com/posts/pzsdtz6bRoDrsrkLP/fighting-akrasia-survey-design-help-request", "postedAt": "2009-08-14T19:48:29.420Z", "baseScore": 1, "voteCount": 4, "commentCount": 3, "url": null, "contents": { "documentId": "pzsdtz6bRoDrsrkLP", "html": "

Follow-up to:  Fighting Akrasia:  Finding the Source

\n

In the last post in this series I posted a link to a Google Docs survey to try to gather some data on what techniques, if any, work for people in conquering akrasia, but we haven't gotten very much information so far:  the response pool is fairly homogeneous in terms of age, sex, and personality type.  In part this is because we need to get more responses outside of the LW readership, but probably also because I'm not asking the right questions.  So, my challenge this weekend is to come up with some good revisions for the survey.

\n

In order to maximize comment usefulness, please suggest one revision per top level comment and then any discussion of that revision can take place in the replies.

\n

In the interest of keeping the comments on topic, I request a moratorium on discussions of whether or not akrasia exists and whether or not we can or should do something about it in the comments on this article.  It's not that I want to exclude or silence opinions contrary to what I'm trying to accomplish:  it's just that I would like to keep this article on the topic of revising the akrasia fighting survey.  By all means, if my posting about akrasia really bothers you, write up an article explaining why I'm wrong and we'll discuss the issue more there.

\n

Thanks!

" } }, { "_id": "W6tcWA3MYmBD27Bnn", "title": "Mistakes with nonexistent people", "pageUrl": "https://www.lesswrong.com/posts/W6tcWA3MYmBD27Bnn/mistakes-with-nonexistent-people", "postedAt": "2009-08-13T22:28:35.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "W6tcWA3MYmBD27Bnn", "html": "

Who is better off if you live and I die? Is one morally obliged to go around impregnating women? Is the repugnant conclusion repugnant? Is secret genocide OK? Does it matter if humanity goes extinct? Why shouldn’t we kill people? Is pity for the dead warranted?

\n

All these discussions often come down to the same question: whether to care about the interests of people who don’t exist but could.

\n

I shan’t directly argue either way; care about whatever you like. I want to show that most of the arguments against caring about the non-existent which repeatedly come up in casual discussion rely on two errors.

\n

Here are common arguments (paraphrased from real discussions):

\n
    \n
  1. There are infinitely many potential people, so caring about them is utterly impractical.
  2. The utility that a non-existent person experiences is undefined, not zero. You are calculating some amount of utility and attributing it to zero people. This means utility per person is x/0 = undefined.
  3. Causing a person to not exist is a victimless crime. Stop pretending these people are real just because you imagine them!
  4. If someone doesn’t exist, they don’t have preferences, so you can’t fulfil them. This includes not caring if they exist or not. The dead do not suffer, only their friends and relatives do that.
  5. Life alone isn’t worth anything – what matters is what happens in it, so creating a new life is a neutral act.
  6. You can’t be dead. It’s not something you can be. So you can’t say whether life is better.
  7. Potential happiness is immeasurable; the person could have been happy, they could have been sad. Their life doesn’t exist, so it doesn’t have characteristics.
  8. How can you calculate loss of future life? Maybe they’d live another hundred years, if you’re going to imagine they don’t die now.
\n

All of these arguments spring from two misunderstandings:

\n

Thinking of value as being a property of particular circumstances rather than of the comparison between choices of circumstances.

\n
\"People

People who won't exist under any of our choices are of no importance (picture: Michelangelo)

\n

We need never be concerned with the infinite people who don’t exist. All those who won’t exist under any choice we might make are irrelevant.  The question is whether those who do exist under one choice we can make and don’t exist under another would be better off existing.

\n

2, 3 and 4 make this mistake too. The utility we are talking about accrues in the possible worlds where the person does exist, and has preferences. Saying someone is worse off not existing is saying that in the worlds where they do exist they have more utility. It is not saying that where they don’t exist they experience suffering, or that they can want to exist when they do not.

\n

Assuming there is nothing to be known about something that isn’t the case.

\n

If someone doesn’t exist, you don’t just not know about their preferences. They actually don’t have any. So how can you say anything about them? If a person died now, how can you say anything about how long they would have lived? How good it could have been? It’s all imaginary. This line of thought underlies arguments 4-8.

\n

But in no case are we discussing characteristics of something that doesn’t exist. We are discussing which characteristics are likely in the case where it does exist. This is very different.

\n

If I haven’t made you a cake, the cake doesn’t have characteristics. To ask whether it is chocolate flavoured is silly. You can still guess that, conditional on my making it, it is more likely chocolate flavoured than fish flavoured. Whether I’ve made it already is irrelevant. Similarly you can guess that if a child were born it would be more likely to find life positive (as most people seem to) and to like music and food and sex and other things it’s likely to be able to get, and not to have an enormous unsatisfiable desire for six to be prime. You can guess that conditional on someone’s life continuing, it would probably continue until old age. These are the sorts of things we uncontroversially guess all the time about our own futures, which are of course also conditional on choices we make, so I can’t see why they would become a problem when other potential people are involved.

\n

Are there any good arguments that don’t rely on these errors for wanting to ignore those who don’t currently exist in consequentialist calculations?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "de3xjFaACCAk6imzv", "title": "Towards a New Decision Theory", "pageUrl": "https://www.lesswrong.com/posts/de3xjFaACCAk6imzv/towards-a-new-decision-theory", "postedAt": "2009-08-13T05:31:41.197Z", "baseScore": 84, "voteCount": 62, "commentCount": 148, "url": null, "contents": { "documentId": "de3xjFaACCAk6imzv", "html": "

\n

It is commonly acknowledged here that current decision theories have deficiencies that show up in the form of various paradoxes. Since there seems to be little hope that Eliezer will publish his Timeless Decision Theory any time soon, I decided to try to synthesize some of the ideas discussed in this forum, along with a few of my own, into a coherent alternative that is hopefully not so paradox-prone.

\n

I'll start with a way of framing the question. Put yourself in the place of an AI, or more specifically, the decision algorithm of an AI. You have access to your own source code S, plus a bit string X representing all of your memories and sensory data. You have to choose an output string Y. That’s the decision. The question is, how? (The answer isn't “Run S,” because what we want to know is what S should be in the first place.)

\n

Let’s proceed by asking the question, “What are the consequences of S, on input X, returning Y as the output, instead of Z?” To begin with, we'll consider just the consequences of that choice in the realm of abstract computations (i.e. computations considered as mathematical objects rather than as implemented in physical systems). The most immediate consequence is that any program that calls S as a subroutine with X as input, will receive Y as output, instead of Z. What happens next is a bit harder to tell, but supposing that you know something about a program P that calls S as a subroutine, you can further deduce the effects of choosing Y versus Z by tracing the difference between the two choices in P’s subsequent execution. We could call these the computational consequences of Y. If you have preferences about the execution of a set of programs, some of which call S as a subroutine, then you can satisfy your preferences directly by choosing the output of S so that those programs will run the way you most prefer.

\n

A more general class of consequences might be called logical consequences. Consider a program P’ that doesn’t call S, but a different subroutine S’ that’s logically equivalent to S. In other words, S’ always produces the same output as S when given the same input. Due to the logical relationship between S and S’, your choice of output for S must also affect the subsequent execution of P’. Another example of a logical relationship is an S' which always returns the first bit of the output of S when given the same input, or one that returns the same output as S on some subset of inputs.

\n

In general, you can’t be certain about the consequences of a choice, because you’re not logically omniscient. How to handle logical/mathematical uncertainty is an open problem, so for now we'll just assume that you have access to a \"mathematical intuition subroutine\" that somehow allows you to form beliefs about the likely consequences of your choices.

\n

At this point, you might ask, “That’s well and good, but what if my preferences extend beyond abstract computations? What about consequences on the physical universe?” The answer is, we can view the physical universe as a program that runs S as a subroutine, or more generally, view it as a mathematical object which has S embedded within it. (From now on I’ll just refer to programs for simplicity, with the understanding that the subsequent discussion can be generalized to non-computable universes.) Your preferences about the physical universe can be translated into preferences about such a program P and programmed into the AI. The AI, upon receiving an input X, will look into P, determine all the instances where it calls S with input X, and choose the output that optimizes its preferences about the execution of P. If the preferences were translated faithfully, then the AI's decision should also optimize your preferences regarding the physical universe. This faithful translation is a second major open problem.

\n

What if you have some uncertainty about which program our universe corresponds to? In that case, we have to specify preferences for the entire set of programs that our universe may correspond to. If your preferences for what happens in one such program are independent of what happens in another, then we can represent them by a probability distribution on the set of programs plus a utility function on the execution of each individual program. More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on.

\n

These considerations lead to the following design for the decision algorithm S. S is coded with a vector <P1, P2, P3, ...> of programs that it cares about, and a utility function on vectors of the form <E1, E2, E3, …> that defines its preferences on how those programs should run. When it receives an input X, it looks inside the programs P1, P2, P3, ..., and uses its \"mathematical intuition\" to form a probability distribution P_Y over the set of vectors <E1, E2, E3, …> for each choice of output string Y. Finally, it outputs a string Y* that maximizes the expected utility Sum P_Y(<E1, E2, E3, …>) U(<E1, E2, E3, …>). (This specifically assumes that expected utility maximization is the right way to deal with mathematical uncertainty. Consider it a temporary placeholder until that problem is solved. Also, I'm describing the algorithm as a brute force search for simplicity. In reality, you'd probably want it to do something cleverer to find the optimal Y* more quickly.)
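
For concreteness, here is a minimal sketch (mine, not part of the proposal) of that brute-force search. The names mathematical_intuition, utility and candidate_outputs are placeholders for exactly the pieces left open above - logical uncertainty and faithfully translated preferences - and the data S is \"coded with\" are passed as explicit arguments purely for readability:

def S(X, programs, utility, mathematical_intuition, candidate_outputs):
    # Brute-force version of the decision algorithm described above.
    best_Y, best_score = None, None
    for Y in candidate_outputs:
        # P_Y: probability distribution over execution-history vectors
        # <E1, E2, E3, ...> given that S returns Y on input X, as judged by
        # the (unspecified) mathematical intuition module.
        P_Y = mathematical_intuition(programs, X, Y)
        score = sum(p * utility(histories) for histories, p in P_Y.items())
        if best_score is None or score > best_score:
            best_Y, best_score = Y, score
    return best_Y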

\n

Example 1: Counterfactual Mugging

\n

Note that Bayesian updating is not done explicitly in this decision theory. When the decision algorithm receives input X, it may determine that some of the programs it has preferences about never call it with X and are also logically independent of its output, and that it can therefore safely ignore them when computing the consequences of a choice. There is no need to set the probabilities of those programs to 0 and renormalize.

\n

So, with that in mind, we can model Counterfactual Mugging by the following Python program:

\n

def P(coin):
    # World program for Counterfactual Mugging.  S is the AI's decision
    # algorithm being designed; Omega_Predict (not shown) is Omega's model of
    # what S would return on input \"heads\".
    AI_balance = 100
    if coin == \"heads\":
        if S(\"heads\") == \"give $100\":
            AI_balance -= 100
    if coin == \"tails\":
        if Omega_Predict(S, \"heads\") == \"give $100\":
            AI_balance += 10000

\n

The AI’s goal is to maximize expected utility = .5 * U(AI_balance after P(\"heads\")) + .5 * U(AI_balance after P(\"tails\")). Assuming U(AI_balance)=AI_balance, it’s easy to determine U(AI_balance after P(\"heads\")) as a function of S’s output. It equals 0 if S(“heads”) == “give $100”, and 100 otherwise. To compute U(AI_balance after P(\"tails\")), the AI needs to look inside the Omega_Predict function (not shown here), and try to figure out how accurate it is. Assuming the mathematical intuition module says that choosing “give $100” as the output for S(“heads”) makes it more likely (by a sufficiently large margin) for Omega_Predict(S, \"heads\") to output “give $100”, then that choice maximizes expected utility.
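
Spelling out the arithmetic, under the simplifying assumption (mine) that Omega_Predict exactly tracks what S(\"heads\") returns:

    EU(S(\"heads\") == \"give $100\") = .5 * U(0)   + .5 * U(10100) = 5050
    EU(S(\"heads\") == \"refuse\")    = .5 * U(100) + .5 * U(100)   = 100

So \"give $100\" wins by a wide margin, and it keeps winning as long as Omega_Predict is accurate by a sufficient margin.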

\n

Example 2: Return of Bayes

\n

This example is based on case 1 in Eliezer's post Priors as Mathematical Objects. An urn contains 5 red balls and 5 white balls. The AI is asked to predict the probability of each ball being red as it is drawn from the urn, its goal being to maximize the expected logarithmic score of its predictions. The main point of this example is that this decision theory can reproduce the effect of Bayesian reasoning when the situation calls for it. We can model the scenario using preferences on the following Python program:

\n

import math

def P(n):
    # World program for the urn example.  n encodes the order in which the
    # balls are drawn; S (the AI's decision algorithm) maps the history of
    # draws seen so far to a predicted probability that the next ball is red.
    urn = ['red', 'red', 'red', 'red', 'red', 'white', 'white', 'white', 'white', 'white']
    history = []
    score = 0
    while urn:
        i = n % len(urn)
        n = n // len(urn)    # integer division, so n keeps encoding the remaining draw order
        ball = urn.pop(i)
        prediction = S(history)
        if ball == 'red':
            score += math.log(prediction, 2)
        else:
            score += math.log(1 - prediction, 2)
        print(score, ball, prediction)
        history.append(ball)

\n

Here is a printout from a sample run, using n=1222222:

\n

-1.0 red 0.5
-2.16992500144 red 0.444444444444
-2.84799690655 white 0.375
-3.65535182861 white 0.428571428571
-4.65535182861 red 0.5
-5.9772799235 red 0.4
-7.9772799235 red 0.25
-7.9772799235 white 0.0
-7.9772799235 white 0.0
-7.9772799235 white 0.0

\n

S should use deductive reasoning to conclude that returning (number of red balls remaining / total balls remaining) maximizes the average score across the range of possible inputs to P, from n=1 to 10! (representing the possible orders in which the balls are drawn), and do that. Alternatively, S can approximate the correct predictions using brute force: generate a random function from histories to predictions, and compute what the average score would be if it were to implement that function. Repeat this a large number of times and it is likely to find a function that returns values close to the optimum predictions.
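
As an illustration, here is one way such an S could be written (a sketch of mine - the post deliberately leaves S's internals open); it reproduces the predictions in the sample run above:

def S(history):
    # Deduced optimal rule: predict (red balls remaining) / (balls remaining).
    reds_left = 5 - history.count('red')
    return float(reds_left) / (10 - len(history))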

\n

Example 3: Level IV Multiverse

\n

In Tegmark's Level 4 Multiverse, all structures that exist mathematically also exist physically. In this case, we'd need to program the AI with preferences over all mathematical structures, perhaps represented by an ordering or utility function over conjunctions of well-formed sentences in a formal set theory. The AI will then proceed to \"optimize\" all of mathematics, or at least the parts of math that (A) are logically dependent on its decisions and (B) it can reason or form intuitions about.

\n

I suggest that the Level 4 Multiverse should be considered the default setting for a general decision theory, since we cannot rule out the possibility that all mathematical structures do indeed exist physically, or that we have direct preferences on mathematical structures (in which case there is no need for them to exist \"physically\"). Clearly, application of decision theory to the Level 4 Multiverse requires that the previously mentioned open problems be solved in their most general forms: how to handle logical uncertainty in any mathematical domain, and how to map fuzzy human preferences to well-defined preferences over the structures of mathematical objects.

\n

Added: For further information and additional posts on this decision theory idea, which came to be called \"Updateless Decision Theory\", please see its entry in the LessWrong Wiki.

" } }, { "_id": "q2hxbCaDCjcooNBGS", "title": "Are abortion views sexist?", "pageUrl": "https://www.lesswrong.com/posts/q2hxbCaDCjcooNBGS/are-abortion-views-sexist", "postedAt": "2009-08-12T00:01:11.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "q2hxbCaDCjcooNBGS", "html": "
\"Indian

Indian girls are born on 500,000 fewer occasions per year than Indian boys (2006).(Photo: Steve Evans)

\n

Abortion isn’t too bad according to half of Americans, and most liberals, the irreligious, and that bunch. The fetus never really got as far as being a child, and virtually nobody thinks failing to have children is as bad as murder.

\n

Selective abortion of female fetuses, on the other hand, is horrific according to both ends of the ideological spectrum. And the reasons given are almost always to do with it being  bad for the females who aren’t born. It’s “discrimination“, a “gross violation of women’s rights“, “an extreme manifestation of violence against women” . As my pro-choice friend (among others) complains, ‘There are all these females who should exist and are missing!’

\n

So confirmed females have a right to exist if they are conceived, and have suffered a grave loss if they cease to be, but fetuses who might be male may as well not exist? This is either hypocritical or extremely sexist. Why are the same people so often adamant about both views?

\n

They both appear to be applications of general pro-female sympathy. When supporting the pro-choice side, the concern is for a woman’s rights over her own body. When condemning gender-specific abortion, the concern is for the females who won’t be born. Siding with the females becomes complicated when females are conspicuous as aborters one day and abortees the next. So it looks like this isn’t hypocrisy via accidental oversight, but policy choice biased by sympathies to a specific gender. If ‘whether an aborted fetus has been done a terrible wrong’ were the important point, we should expect to see more consistency on that.

\n

When I asked about this previously my friend suggested that the motivations were importantly different in the two cases. Aborting someone because they are female is wrong. Aborting someone because you don’t want to look after them is compassionate. This doesn’t apply here, even if it were true. Gender-specific abortions are common for economic and other pragmatic reasons too, not because people hate females especially. Moreover one could argue consistently that gender-specific abortions are bad because of the harm to others who do exist, such as the males who will go lonely. This is rarely the claimed source of outrage, however.

\n

The most feasible explanation for this inconsistency then is sexism in favor of females being a big motivating force. You probably don’t approve of sexism in picking job applicants or political candidates. Do you approve of it in picking policies which determine countless lives or deaths?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "zFZicEFAxTfSEbK69", "title": "Sense, Denotation and Semantics", "pageUrl": "https://www.lesswrong.com/posts/zFZicEFAxTfSEbK69/sense-denotation-and-semantics", "postedAt": "2009-08-11T12:47:06.014Z", "baseScore": 12, "voteCount": 16, "commentCount": 12, "url": null, "contents": { "documentId": "zFZicEFAxTfSEbK69", "html": "

J. Y. Girard, et al. (1989). Proofs and types. Cambridge University Press, New York, NY, USA. (PDF)

\n

I found the introductory description of a number of ideas given at the beginning of this book very intuitively clear, and these ideas should be relevant to our discussion, preoccupied with the meaning of meaning as we are. Though the book itself is quite technical, the first chapter should be accessible to many readers.

\n

From the beginning of the chapter:

\n
\n

Let us start with an example. There is a standard procedure for multiplication, which yields for the inputs 27 and 37 the result 999. What can we say about that?

\n

A first attempt is to say that we have an equality

\n

27 × 37 = 999

\n

This equality makes sense in the mainstream of mathematics by saying that the two sides denote the same integer and that × is a function in the Cantorian sense of a graph.

\n

This is the denotational aspect, which is undoubtedly correct, but it misses the essential point:

\n

There is a finite computation process which shows that the denotations are equal. It is an abuse (and this is not cheap philosophy — it is a concrete question) to say that 27 × 37 equals 999, since if the two things we have were the same then we would never feel the need to state their equality. Concretely we ask a question, 27 × 37, and get an answer, 999. The two expressions have different senses and we must do something (make a proof or a calculation, or at least look in an encyclopedia) to show that these two senses have the same denotation.

\n

Concerning ×, it is incorrect to say that this is a function (as a graph) since the computer in which the program is loaded has no room for an infinite graph. Hence we have to conclude that we are in the presence of a finitary dynamics related to this question of sense.

\n

Whereas denotation was modelled at a very early stage, sense has been pushed towards subjectivism, with the result that the present mathematical treatment of sense is more or less reduced to syntactic manipulation. This is not a priori in the essence of the subject, and we can expect in the next decades to find a treatment of computation that would combine the advantages of denotational semantics (mathematical clarity) with those of syntax (finite dynamics). This book clearly rests on a tradition that is based on this unfortunate current state of affairs: in the dichotomy between infinite, static denotation and finite, dynamic sense, the denotational side is much more developed than the other.

\n

So, one of the most fundamental distinctions in logic is that made by Frege: given a sentence A, there are two ways of seeing it:

\n\n\n

Two sentences which have the same sense have the same denotation, that is obvious; but two sentences with the same denotation rarely have the same sense. For example, take a complicated mathematical equivalence A⇔B. The two sentences have the same denotation (they are true at the same time) but surely not the same sense, otherwise what is the point of showing the equivalence?

\n

This example allows us to introduce some associations of ideas:

\n\n\n

That is the fundamental dichotomy in logic. Having said that, the two sides hardly play symmetrical roles!

\n
\n

Read the whole chapter (or the first three chapters). Download the book here (PDF).

" } }, { "_id": "YWtZpAynkxth3KXjN", "title": "Deleting paradoxes with fuzzy logic", "pageUrl": "https://www.lesswrong.com/posts/YWtZpAynkxth3KXjN/deleting-paradoxes-with-fuzzy-logic", "postedAt": "2009-08-11T04:27:43.258Z", "baseScore": 7, "voteCount": 22, "commentCount": 76, "url": null, "contents": { "documentId": "YWtZpAynkxth3KXjN", "html": "

You've all seen it. Sentences like \"this sentence is false\": if they're false, they're true, and vice versa, so they can't be either true or false. Some people solve this problem by doing something really complicated: they introduce infinite type hierarchies wherein every sentence you can express is given a \"type\", which is an ordinal number, and every sentence can only refer to sentences of lower type. \"This sentence is false\" is not a valid sentence there, because it refers to itself, but no ordinal number is less than itself. Eliezer Yudkowsky mentions but says little about such things. What he does say, I agree with: ick!

\n

In addition to the sheer icky factor involved in this complicated method of making sure sentences can't refer to themselves, we have deeper problems. In English, sentences can refer to themselves. Heck, this sentence refers to itself. And this is not a flaw in English, but something useful: sentences ought to be able to refer to themselves. I want to be able to write stuff like \"All complete sentences written in English contain at least one vowel\" without having to write it in Spanish or as an incomplete sentence.1 How can we have self-referential sentences without having paradoxes that result in the universe doing what cheese does at the bottom of the oven? Easy: use fuzzy logic.

\n

Now, take a nice look at the sentence \"this sentence is false\". If your intuition is like mine, this sentence seems false. (If your intuition is unlike mine, it doesn't matter.) But obviously, it isn't false. At least, it's not completely false. Of course, it's not true, either. So it's not true or false. Nor is it the mythical third truth value, clem2, as clem is not false, making the sentence indeed false, which is a paradox again. Rather, it's something in between true and false--\"of medium truth\", if you will.

\n

So, how do we represent \"of medium truth\" formally? Well, the obvious way to do that is using a real number. Say that a completely false sentence has a truth value of 0, a completely true sentence has a truth value of 1, and the things in between have truth values in between.3 Will this work? Why, yes, and I can prove it! Well, no, I actually can't. Still, the following, trust me, is a theorem:

\n

Suppose there is a set of sentences, and there are N of them, where N is some (possibly infinite) cardinal number, and each sentence's truth value is a continuous function of the other sentences' truth values. Then there is a consistent assignment of a truth value to every sentence. (More tersely, every continuous function [0,1]^N -> [0,1]^N for every cardinal number N has at least one fixed point.)

\n

So for every set of sentences, no matter how wonky their self- and cross-references are, there is some consistent assignment of truth values to them. At least, this is the case if all their truth values vary continuously with each other. This won't happen under strict interpretations of sentences such as \"this sentence's truth value is less than 0.5\": this sentence, interpreted as black and white, has a truth value of 1 when its truth value is below 0.5 and a truth value of 0 when it's not. This is inconsistent. So, we'll ban such sentences. No, I don't mean ban sentences that refer to themselves; that would just put us back where we started. I mean we should ban sentences whose truth values have \"jumps\", or discontinuities. The sentence \"this sentence's truth value is less than 0.5\" has a sharp jump in truth value at 0.5, but the sentence \"this sentence's truth value is significantly less than 0.5\" does not: as its truth value goes down from 0.5 down to 0.4 or so, it also goes up from 0.0 up to 1.0, leaving us a consistent truth value for that sentence around 0.49.

\n

Edit: I accidentally said \"So, we'll not ban such sentences.\" That's almost the opposite of what I wanted to say.
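
To make the fixed-point idea concrete, here is a small sketch of computing a consistent truth value for that sentence. The particular continuous reading of \"significantly less than 0.5\" (fully true once the value is 0.02 or more below 0.5, linear in between) is my own assumption; bisection is used because naively iterating such a steep function oscillates instead of settling down:

def truth_of_sentence(t, width=0.02):
    # Assumed continuous reading of \"this sentence's truth value is
    # significantly less than 0.5\".
    return min(1.0, max(0.0, (0.5 - t) / width))

def consistent_truth_value(f, lo=0.0, hi=1.0, tol=1e-9):
    # Bisect on g(t) = f(t) - t: g(0) >= 0, g(1) <= 0, and g is continuous,
    # so a fixed point exists somewhere in [0, 1].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(consistent_truth_value(truth_of_sentence))   # ~0.49 under these assumptions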

\n

Now, at this point, you probably have some ideas. I'll get to those one at a time. First, is all this truth value stuff really necessary? To that, I say yes. Take the sentence \"the Leaning Tower of Pisa is short\". This sentence is certainly not completely true; if it were, the Tower would have to have a height of zero. It's not completely false, either; if it were, the Tower would have to be infinitely tall. If you tried to come up with any binary assignment of \"true\" and \"false\" to sentences such as these, you'd run into the Sorites paradox: how tall would the Tower be if any taller tower were \"tall\" and any shorter tower were \"short\"? A tower a millimeter higher than what you say would be \"tall\", and a tower a millimeter shorter would be \"short\", which we find absurd. It would make a lot more sense if a change of height of one millimeter simply changed the truth value of \"it's short\" by about 0.00001.

\n

Second, isn't this just probability, which we already know and love? No, it isn't. If I say that \"the Leaning Tower of Pisa is extremely short\", I don't mean that I'm very, very sure that it's short. If I say \"my mother was half Irish\", I don't mean that I have no idea whether she was Irish or not, and might find evidence later on that she was completely Irish. Truth values are separate from probabilities.

\n

Third and finally, how can this be treated formally? I say, to heck with it. Saying that truth values are real numbers from 0 to 1 is sufficient; regardless of whether you say that \"X and Y\" is as true as the product of the truth values of X and Y or that it's as true as the less true of the two, you have an operation that behaves like \"and\". If two people have different interpretations of truth values, you can feel free to just add more functions that convert between the two. I don't know of any \"laws of truth values\" that fuzzy logic ought to conform to. If you come up with a set of laws that happen to work particularly well or be particularly elegant (percentiles? decibels of evidence?), feel free to make it known.

\n

 

\n
\n

 

\n

1. ^ The term \"sentence fragment\" is considered politically incorrect nowadays due to protests by incomplete sentences. \"Only a fragment? Not us! One of us standing alone? Nothing wrong with that!\"

\n

2. ^ I made this word up. I'm so proud of it. Don't you think it's cute?

\n

3. ^ Sorry, Eliezer, but this cannot be consistently interpreted such that 0 and 1 are not valid truth values: if you did that, then the modest sentence \"this sentence is at least somewhat true\" would always be truer than itself, whereas if 1 is a valid truth value, it is a consistent truth value of that sentence.

" } }, { "_id": "bjPL428k8STPwp3aD", "title": "Cache invalidation test", "pageUrl": "https://www.lesswrong.com/posts/bjPL428k8STPwp3aD/cache-invalidation-test", "postedAt": "2009-08-11T02:05:45.264Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "bjPL428k8STPwp3aD", "html": "

Cache should be cleared

" } }, { "_id": "yYTv6J6wFL8XK3Q7o", "title": "Utilons vs. Hedons", "pageUrl": "https://www.lesswrong.com/posts/yYTv6J6wFL8XK3Q7o/utilons-vs-hedons", "postedAt": "2009-08-10T19:20:20.968Z", "baseScore": 40, "voteCount": 46, "commentCount": 119, "url": null, "contents": { "documentId": "yYTv6J6wFL8XK3Q7o", "html": "

Related to: Would Your Real Preferences Please Stand Up?

\n

I have to admit, there are a lot of people I don't care about. Comfortably over six billion, I would bet. It's not that I'm a callous person; I simply don't know that many people, and even if I did I hardly have time to process that much information. Every day hundreds of millions of incredibly wonderful and terrible things happen to people out there, and if they didn't, I wouldn't even know it.

\n

On the other hand, my professional goals deal with economics, policy, and improving decision making for the purpose of making millions of people I'll never meet happier. Their happiness does not affect my experience of life one bit, but I believe it's a good thing and I plan to work hard to figure out how to create more happiness.

\n

This underscores an essential distinction in understanding any utilitarian viewpoint: the difference between experience and values. One can value unweighted total utility. One cannot experience unweighted total utility. It will always hurt more if a friend or loved one dies than if someone you never knew in a place you never heard of dies. I would be truly amazed to meet someone who is an exception to this rule and is not an absolute stoic. Your experiential utility function may have coefficients for other people's happiness (or at least your perception of such), but there's no way it has an identical coefficient for everyone everywhere, unless that coefficient is zero. On the other hand, you probably care in an abstract way about whether people you don't know die or are enslaved or imprisoned, and may even contribute some money or effort to prevent such from happening. I'm going to use \"utilons\" to refer to value utility units and \"hedons\" to refer to experiential utility units; I'll demonstrate shortly that this is a meaningful distinction, and that the fact that we value utilons over hedons explains much of why our moral reasoning appears to fail.

\n

Let's try a hypothetical to illustrate the difference between experiential and value utility. An employee of Omega, LLC,1 offers you a deal to absolutely double your hedons but kill five people in, say, rural China, then wipe your memory of the deal.  Do you take it? What about five hundred? Five hundred thousand?

\n

I can't speak for you, so I'll go through my evaluation of this deal and hope it generalizes reasonably well. I don't take it at any of these values. There's no clear hedonistic explanation for this - after all, I forget it happened. It would be absurd to say that the disutility I  experience between entering the agreement and having my memory wiped is so tremendous as to outweigh everything I will experience for the rest of my life (particularly since I immediately forget this disutility), and this is the only way I can see my rejection could be explained with hedons. In fact, even if the memory wipe weren't part of the deal, I doubt the act of having a few people killed would really cause me more displeasure than doubling my future hedons would yield; I'd bet more than five people have died in rural China as I've written this post, and it hasn't upset me in the slightest.

\n

The reason I don't take the deal is my values; I believe it's wrong to kill random people to improve my own happiness. If I knew that they were people who sincerely wanted to be dead or that they were, say, serial killers, my decision would be different, even though my hedonic experience would probably not be that different. If I knew that millions of people in China would be significantly happier as a result, as well, then there's a good chance I'd take the deal even if it didn't help me. I seem to be maximizing utilons and not hedons, and I think most people would do the same.

\n

Also, as another example so obvious that I feel like it's cheating, if most people read the headline \"100 workers die in Beijing factory fire\" or \"1000 workers die in Beijing factory fire,\" they will not feel ten times the hedonic blow, even if they live in Beijing. That it is ten times worse is measured in our values, not our experiences; these values are correct, since there are roughly ten times as many people who have seriously suffered from the fire, but if we're talking about people's hedons, no individual suffers ten times as much.

\n

In general, people value utilons much more than hedons. Drugs being illegal is an illustration of this. Arguments for (and against) drug legalization are an even better illustration of this. Such arguments usually involve weakening organized crime, increasing safety, reducing criminal behaviour, reducing expenditures on prisons, improving treatment for addicts, and improving similar values. \"Lots of people who want to will get really, really high\" is only very rarely touted as a major argument, even though the net hedonic value of drug legalization would probably be massive (just as the hedonic cost of prohibition in the 20's was clearly massive).

\n

As a practical matter, this is important because many people do things precisely because they are important in their abstract value system, even if they result in little or no hedonic payoff. This, I believe, is an excellent explanation of why success is no guarantee of happiness; success is conducive to getting hedons, but it also tends to cost a lot of hedons, so there is little guarantee that earned wealth will be a net positive (and, with anchoring, hedons will get a lot more expensive than they are for the less successful). On the other hand, earning wealth (or status) is a very common value, so people pursue it irrespective of its hedonistic payoff.

\n

It may be convenient to argue that the hedonistic payoffs must balance out, but this does not make it the case in reality. Some people work hard on assignments that are practically meaningless to their long-term happiness because they believe they should, not because they have any delusions about their hedonistic payoff. To say, \"If you did X instead of Y because you 'value' X, then the hedonistic cost of breaking your values must exceed Y-X,\" is to win an argument by definition; you have to actually figure out the values and see if that's true. If it's not, then I'm not a hedon-maximizer. You can't then assert that I'm an \"irrational\" hedon maximizer unless you can make some very clear distinction between \"irrationally maximizing hedons\" and \"maximizing something other than hedons.\"

\n

This dichotomy also describes akrasia fairly well, though I'd hesitate to say it truly explains it. Akrasia is what happens when we maximize our hedons at the expense of our utilons. We play video games/watch TV/post on blogs because it feels good, and we feel bad about it because, first, \"it feels good\" is not recognized as a major positive value in most of our utilon-functions, and second, because doing our homework is recognized as a major positive value in our utilon functions. The experience makes us procrastinate and our values make us feel guilty about it. Just as we should not needlessly multiply causes, neither should we erroneously merge them.

\n

Furthermore, this may cause our intuition to seriously interfere with utility-based hypotheticals, such as these. Basically, you choose to draw cards, one at a time, that have a 10% chance of killing you and a 90% chance of doubling your utility. Logically, if your current utility is positive and you assign a utility of zero2 (or greater) to your death (which makes sense in hedons, but not necessarily in utilons), you should draw cards until you die. The problem of course being that if you draw a card a second, you will be dead in a minute with P= ~.9982, and dead in an hour with P=~1-1.88*10-165.
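
The arithmetic behind those numbers, assuming one draw per second and a 10% chance of death per draw:

p_survive = 0.9
print(p_survive ** 60)     # ~1.8e-3: dead within a minute with P = ~.9982
print(p_survive ** 3600)   # ~1.9e-165: dead within an hour with P = ~1 - 1.88*10^-165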

\n

There's a bigger problem that causes our intuition to reject this hypothetical as \"just wrong:\" it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy. As for utilons, most people assign a much greater value to \"not dying,\" compared with having more hedons. Thus, a hedonic reading of the problem returns an error because repeated doubling feels meaningless, and a utilon reading (may) return an error if we assign a significant enough negative value to death. But if we look at it purely in terms of numbers, we end up very, very happy right up until we end up very, very dead.

\n

Any useful utilitarian calculus need take into account that hedonic utility is, for most people, incomplete. Value utility is often a major motivating factor, and it need not translate perfectly into hedonic terms. Incorporating value utility seems necessary to have a map of human happiness that actually matches the territory. It also may be good that it can be easier to change values than it is to change hedonic experiences. But assuming people maximize hedons, and then assuming quantitative values that conform to this assumption, proves nothing about what actually motivates people and risks serious systematic error in furthering human happiness.

\n

We know that our experiential utility cannot encompass all that really matters to us, so we have a value system that we place above it precisely to avoid risking destroying the whole world to make ourselves marginally happier, or to avoid pursuing any other means of gaining happiness that carries tremendous potential expense.

\n

1- Apparently Omega has started a firm due to excessive demand for its services, or to avoid having to talk to me.

\n

2- This assumption is rather problematic, though zero seems to be the only correct value of death in hedons. But imagine that you just won the lottery (without buying a ticket, presumably) and got selected as the most important, intelligent, attractive person in whatever field or social circle you care most about. How bad would it be to drop dead? Now, imagine you just got captured by some psychopath and are going to be tortured for years until you eventually die. How bad would it be to drop dead? Assigning zero (or the same value, period) to both outcomes seems wrong. I realize that you can say that death in one is negative and in the other is positive relative to expected utility, but still, the value of death does not seem identical, so I'm suspicious of assigning it the same value in both cases. I realize this is hand-wavy; I think I'd need a separate post to address this issue properly.

" } }, { "_id": "zA3aM4YEENz2GLAfL", "title": "Misleading the witness", "pageUrl": "https://www.lesswrong.com/posts/zA3aM4YEENz2GLAfL/misleading-the-witness", "postedAt": "2009-08-09T20:13:52.895Z", "baseScore": 16, "voteCount": 16, "commentCount": 116, "url": null, "contents": { "documentId": "zA3aM4YEENz2GLAfL", "html": "

Related: Trust in Math

\n

I was reading John Allen Paulos' A Mathematician Plays the Stock Market, in which Paulos relates a version of the well-known \"missing dollar\" riddle. I had heard it once before, but only vaguely remembered it. If you don't remember it, here it is:

\n
\n

Three people stay in a hotel overnight. The innkeeper tells them that the price for three rooms is $30, so each pays $10.

\n

After the guests go to their rooms, the innkeeper realizes that there is a special discount for groups, and that the guests' total should have only been $25.

\n

The innkeeper gives a bellhop $5 with the instructions to return it to the guests.

\n

The bellhop, not wanting to get change, gives each guest $1 and keeps $2.

\n

Later, the bellhop thinks \"Wait - something isn't right. Each guest paid $10. I gave them each back $1, so they each paid $9. $9 times 3 is $27. I kept $2. $27 + $2 is $29. Where did the missing dollar go?\"

\n
\n

\n

I remembered that the solution involves trickery, but it still took me a minute or two to figure out where it is. At first, I started mentally keeping track of the dollars in the riddle, trying to see where one got dropped so their sum would be 30.

\n

Then I figured it out. The story should end:

\n
\n

Later, the bellhop thinks \"Wait - something isn't right. Each guest paid $10. I gave them each back $1, so they each paid $9. $9 times 3 is $27. The cost for their rooms was $25. $27 - $25 = $2, so they collectively overpaid by $2, which is the amount I kept. Why am I such a jerk?\"

\n
\n

I told my fiance the riddle, and asked her where the missing dollar went. She went through the same process as I did, looking for a place in the story where $1 could go missing.

\n

It's remarkable to me how blatantly deceptive the riddle is. The riddler states or implies at the end of the story that the dollars paid by the guests and the dollars kept by the bellhop should be summed, and that that sum should be $30. In fact, there's no reason to sum the dollars paid by the guests and the dollars kept by the bellhop, and no reason for any accounting we do to end up with $30.
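
Written out as a quick check (my own bookkeeping, not part of the riddle), everything balances:

each_guest_paid = 10 - 1                     # $9 apiece after the refunds
assert 3 * each_guest_paid == 25 + 2         # $27 = innkeeper's $25 + bellhop's $2
assert 3 * each_guest_paid + 3 * 1 == 30     # $27 paid + $3 refunded accounts for all $30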

\n

This contrasts somewhat with the various proofs that 1 = 2, in which the misstep is hidden somewhere within a chain of reasoning, not boldly announced at the end of the narrative.

\n

Both Paulos and Wikipedia give examples with different numbers that make the deception in the missing dollar riddle more obvious (and less effective). In the case of the missing dollar riddle, the fact that $25, $27, and $30 are close to each other makes following the incorrect path very seductive.

\n

This riddle made me remember reading about how beginning magicians are very nervous in their first public performances, since some of their tricks involve misdirecting the audience by openly lying (e.g., casually pick up a stack of cards shuffled by a volunteer, say \"Hmm, good shuffle\" while adding a known card to the top of the stack, hand the deck back to the volunteer, and then boldly announce \"notice that I have not touched or manipulated the cards!\"1). However, they learn to be more comfortable once they find out how easily the audience will pretty much accept whatever false statements they make.

\n

Thinking about these things makes me wonder about how to think rationally given the tendency for human minds to accept some deceptive statements at face value. Can anyone think of good ways to notice when outright deception is being used? How could a rationalist practice her skills at a magic show?

\n

How about other examples of flagrant misdirection? I suspect that political debates might be able to make use of such techniques (I think that there might be some examples in the recent debates over health care reform accounting and the costs of obesity to the health care system, but I haven't been able to find any yet.)

\n

 

\n

 

\n

Footnote 1: I remember reading this example very recently, maybe at this site. Please let me know whom to credit for it.

" } }, { "_id": "SRqEbwqErb9v9rNpM", "title": "Guess Again", "pageUrl": "https://www.lesswrong.com/posts/SRqEbwqErb9v9rNpM/guess-again", "postedAt": "2009-08-09T19:11:41.763Z", "baseScore": 17, "voteCount": 18, "commentCount": 30, "url": null, "contents": { "documentId": "SRqEbwqErb9v9rNpM", "html": "

In Bead Jar Guesses, I made a slightly clumsy attempt at carving out a kind of guess based on so little information that even a rationally-supposed, very small probability of some outcome doesn't confer a commensurate level of surprise when that outcome occurs.  Here are several categories of probability assignment (including a re-statement of the bead jar thing) that I think might be worth considering separately.  (I'm open to changing their names if other people have better ideas.)

\n

Bewilderment: You don't even have enough information to understand the question.  What is your probability that any given shren is a darkling?  What is your probability that Safaitic is sometimes recorded in boustrophedon?  What is your probability that 你有鼻子?    (Ignore question 1 if you have read Elcenia, especially if you've seen more than is currently published; ignore question 2 if you know what either of those funny words mean; ignore question 3 if you can read Chinese.)  In this case, you might find yourself in a situation where you have to make a guess, but even if you were then told the answer it wouldn't tell you much in the absence of further context.1  You would have no reason to be surprised by such an answer, no matter what probability you'd assigned.

\n

Bead Jar: You understand the question, but have no information about anything that causally interacts with the answer.  To guess, you have to grasp at the flimsiest of straws in the wording of the question and the motivations of the asker, or much broader beliefs about the general kind of question.  What is your probability that Omega will pull out a red bead?  What's your probability that I'm making the peace sign as I type this question with the other hand?  What's your probability that the fruit on the tree in my best friend's backyard is delicious?  Like Bewilderment questions, Bead Jar guesses come with no significant chance of surprise.  Even if you have a tiny probability that the bead is lilac, it should not surprise you.

\n

Bingo: You understand the question and you know something about what causes the answer, but the mechanism by which those conditions come about is known to be random (in a practical epistemic sense, not necessarily in the sense of being physically undetermined).  You can have an excellent, well-calibrated probability.  Here, there are two variants: one where the outcomes have mostly commensurate likelihood (the probability that you'll draw any given card from a deck) or one where the outcomes have a variety of probabilities (like the probability that you draw a card with a skull, or one with a star).  You shouldn't be surprised no matter what happens in the first case (unless the outcome is somehow special to you - be surprised if a personal friend of yours wins the lottery!), but in the second case, surprise might be warranted if something especially unlikely happens.

\n

 

\n

1 About 5/6 of shrens are darklings, depending on population fluctuations; Safaitic is indeed sometimes recorded in boustrophedon; and 有 (I hope).

" } }, { "_id": "54jkJNpPyMRRGeme8", "title": "Calibration fail", "pageUrl": "https://www.lesswrong.com/posts/54jkJNpPyMRRGeme8/calibration-fail", "postedAt": "2009-08-09T17:15:34.708Z", "baseScore": 10, "voteCount": 15, "commentCount": 36, "url": null, "contents": { "documentId": "54jkJNpPyMRRGeme8", "html": "

I was talking on Friday with two people who've been to Italy recently and saw the leaning tower of Pisa.  One of them was surprised at how short it was:  \"about 30 feet tall\", she said.  Then the other person, who'd also seen it, agreed that it was surprisingly short, and said it was \"only about as tall as the Washington Monument\".

\n

\"Wait,\" I said. \"You just said it's 30 feet tall; and you just said it's as tall as the Washington monument, which has got to be at least 100 yards tall.  And you agreed with each other.\"

\n

And they both shook their heads, and said, \"No, the Washington Monument isn't anywhere near 100 yards tall.\"  We all live near Washington DC, and have seen the Washington Monument many times.

\n

I said that the Washington Monument must be taller than that.  One of them said that it was just as tiring to walk to the top of the Leaning Tower as to walk to the top of the Washington Monument; which was odd, since the WM stairs have been closed since before he was born.

\n

They finally agreed that both structures were about 30 yards tall.

\n

The Leaning Tower is 183 feet tall, and the Washington Monument is 555 feet tall.

\n

WTF?

" } }, { "_id": "z3cTkXbA7jgwGWPcv", "title": "Would Your Real Preferences Please Stand Up?", "pageUrl": "https://www.lesswrong.com/posts/z3cTkXbA7jgwGWPcv/would-your-real-preferences-please-stand-up", "postedAt": "2009-08-08T22:57:09.266Z", "baseScore": 94, "voteCount": 68, "commentCount": 132, "url": null, "contents": { "documentId": "z3cTkXbA7jgwGWPcv", "html": "

Related to: Cynicism in Ev Psych and Econ

\n

In Finding the Source, a commenter says:

\n
\n

I have begun wondering whether claiming to be victim of 'akrasia' might just be a way of admitting that your real preferences, as revealed in your actions, don't match the preferences you want to signal (believing what you want to signal, even if untrue, makes the signals more effective).

\n
\n

I think I've seen Robin put forth something like this argument [EDIT: Something related, but very different], and TGGP points out that Bryan Caplan explicitly believes pretty much the same thing1:

\n
\n

I've previously argued that much - perhaps most - talk about \"self-control\" problems reflects social desirability bias rather than genuine inner conflict.

Part of the reason why people who spend a lot of time and money on socially disapproved behaviors say they \"want to change\" is that that's what they're supposed to say.

Think of it this way: A guy loses his wife and kids because he's a drunk. Suppose he sincerely prefers alcohol to his wife and kids. He still probably won't admit it, because people judge a sinner even more harshly if he is unrepentant. The drunk who says \"I was such a fool!\" gets some pity; the drunk who says \"I like Jack Daniels better than my wife and kids\" gets horrified looks. And either way, he can keep drinking.

\n
\n

I'll call this the Cynic's Theory of Akrasia, as opposed to the Naive Theory. I used to think it was plausible. Now that I think about it a little more, I find it meaningless. Here's what changed my mind.

\n

What part of the mind, exactly, prefers a socially unacceptable activity (like drinking whiskey or browsing Reddit) to an acceptable activity (like having a wife and kids, or studying)? The conscious mind? As Bill said in his comment, it doesn't seem like it works this way. I've had akrasia myself, and I never consciously think \"Wow, I really like browsing Reddit...but I'll trick everyone else into thinking I'd rather be studying so I get more respect. Ha ha! The fools will never see it coming!\"

No, my conscious mind fully believes that I would rather be studying2. And this even gets reflected in my actions. I've tried anti-procrastination techniques, both successfully and unsuccessfully, without ever telling them to another living soul. People trying to diet don't take out the cupcakes as soon as no one else is looking (or, if they do, they feel guilty about it).

This is as it should be. It is a classic finding in evolutionary psychology: the person who wants to fool others begins by fooling themselves. Some people even call the conscious mind the \"public relations officer\" of the brain, and argue that its entire point is to sit around and get fooled by everything we want to signal. As Bill said, \"believing the signals, even if untrue, makes the signals more effective.\"

Now we have enough information to see why the Cynic's Theory is equivalent to the Naive Theory.

The Naive Theory says that you really want to stop drinking, but some force from your unconscious mind is hijacking your actions. The Cynic's Theory says that you really want to keep drinking, but your conscious mind is hijacking your thoughts and making you think otherwise.

In both cases, the conscious mind determines the signal and the unconscious mind determines the action. The only difference is which preference we define as \"real\" and worthy of sympathy. In the Naive Theory, we sympathize with the conscious mind, and the problem is the unconscious mind keeps committing contradictory actions. In the Cynic's Theory, we sympathize with the unconscious mind, and the problem is the conscious mind keeps sending out contradictory signals. The Naive say: find some way to make the unconscious mind stop hijacking actions! The Cynic says: find some way to make the conscious mind stop sending false signals!

So why prefer one theory over the other? Well, I'm not surprised that it's mostly economists who support the Cynic's Theory. Economists are understandably interested in revealed preferences3, because revealed preferences are revealed by economic transactions and are the ones that determine the economy. It's perfectly reasonable for an economist to care only about those and dismiss any other kind of preference as a red herring that has to be removed before economic calculations can be done. Someone like a philosopher, who is more interested in thought and the mind, might be more susceptible to the identify-with-conscious-thought Naive Theory.

\n

But notice how the theory you choose also has serious political implications4. Consider how each of the two ways of looking at the problem would treat this example:

\n
\n

A wealthy liberal is a member of many environmental organizations, and wants taxes to go up to pay for better conservation programs. However, she can't bring herself to give up her gas-guzzling SUV, and is usually too lazy to sort all her trash for recycling.

\n
\n

I myself throw my support squarely behind the Naive Theory. Conscious minds are potentially rational5, informed by morality, and qualia-laden. Unconscious minds aren't, so who cares what they think?

\n

 

\n

Footnotes:

\n

1: Caplan says that the lack of interest in Stickk offers support for the Cynic's Theory, but I don't see why it should, unless we believe the mental balance of power should be different when deciding whether to use Stickk than when deciding whether to do anything else.

\n

Caplan also suggests in another article that he has never experienced procrastination as akrasia. Although I find this surprising, I don't find it absolutely impossible to believe. His mind may either be exceptionally well-integrated, or it may send signals differently. It seems within the range of normal human mental variation.

\n

2: Of course, I could be lying here, to signal to you that I have socially acceptable beliefs. I suppose I can only make my point if you often have the same experience, or if you've caught someone else fighting akrasia when they didn't know you were there.

\n

3: Even the term \"revealed preferences\" imports this value system, as if the act of buying something is a revelation that drives away the mist of the false consciously believed preferences.

\n

4: For a real-world example of a politically-charged conflict surrounding the question of whether we should judge on conscious or unconscious beliefs, see Robin's post Redistribution Isn't About Sympathy and my reply.

\n

5: Differences between the conscious and unconscious mind should usually correspond to differences between the goals of a person and the \"goals\" of the genome, or else between subgoals important today and subgoals important in the EEA.

" } }, { "_id": "8z2Fm2yaHpQz8rr5B", "title": "Dreams with Damaged Priors", "pageUrl": "https://www.lesswrong.com/posts/8z2Fm2yaHpQz8rr5B/dreams-with-damaged-priors", "postedAt": "2009-08-08T22:31:22.994Z", "baseScore": 60, "voteCount": 49, "commentCount": 61, "url": null, "contents": { "documentId": "8z2Fm2yaHpQz8rr5B", "html": "

Dreaming is the closest I've gotten to testing myself against the challenge of maintaining rationality under brain damage.  So far, my trials have exhibited mixed results.

\n

In one memorable dream a few years ago, I dreamed that the Wall Street Journal had published an article about \"Eliezer Yudkowsky\", but it wasn't me, it was a different \"Eliezer Yudkowsky\", and in the dream I wondered if I needed to write a letter to clarify this.  Then I realized I was dreaming within the dream... and worried to myself, still dreaming:  \"But what if the Wall Street Journal really does have an article about an 'Eliezer Yudkowsky' who isn't me?\"

\n

But then I thought:  \"Well, the probability that I would dream about a WSJ article like that, given that a WSJ article like that actually exists in this morning's paper, is the same as the probability that I would have such a dream, given that no such article is in this morning's paper.  So by Bayes's Theorem, the dream isn't evidence one way or the other.  Thus there's no point in trying to guess the answer now - I'll find out in the morning whether there's an article like that.\"  And, satisfied, my mind went back to ordinary sleep.

\n

I find it fascinating that I was able to explicitly apply Bayes's Theorem in my sleep to correctly compute the 1:1 likelihood ratio, but my dreaming mind didn't notice the damaged prior - didn't notice that the prior probability of such a WSJ article was too low to justify raising the hypothesis to my attention.
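A tiny numerical restatement of that likelihood-ratio step, with made-up numbers of my own (the prior and likelihoods below are illustrative assumptions, not anything from the dream):

```python
# Bayes' theorem for a binary hypothesis: posterior from a prior and two likelihoods.
def posterior(prior, likelihood_if_true, likelihood_if_false):
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

prior = 1e-6   # assumed prior that such a WSJ article exists this morning

# Equal likelihoods (a 1:1 ratio): the dream leaves the probability exactly where it was.
print(posterior(prior, 0.01, 0.01))   # 1e-06

# The dream would only move the estimate if the likelihoods differed.
print(posterior(prior, 0.02, 0.01))   # ~2e-06
```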

\n

At this point even I must concede that there is something to the complaint that, in real-world everyday life, Bayesians dispense too little advice about how to compute priors.  With a damaged intuition for the weight of evidence, my dreaming mind was able to explicitly compute a likelihood ratio and correct itself.  But with a damaged intuition for the prior probability, my mind didn't successfully check itself, or even notice a problem - didn't get as far as asking \"But what is the prior probability?\"

\n

On July 20 I had an even more dramatic dream - sparking this essay - when I dreamed that I'd googled my own name and discovered that one of my OBLW articles had been translated into German and published, without permission but with attribution, in a special issue of the Journal of Applied Logic to commemorate the death of Richard Thaler (don't worry, he is in fact still alive)...

\n

Then I half woke up... and wondered if maybe one of my OBLW articles really had been \"borrowed\" this way.  But I reasoned, in my half-awake state, that the dream couldn't be evidence because it didn't form part of a causal chain wherein the outside environment impressed itself onto my brain, and that only actual sensory impressions of Google results could form the base of a legitimate chain of inferences.

\n

So - still half-asleep - I wanted to get out of bed and actually look at Google, to see if a result turned up for the Journal of Applied Logic issue.

\n

And several times I fell back asleep and dreamed I'd looked at Google and seen the result; but each time on half-awaking I thought:  \"No, I still seem to be in bed; that was a dream, not a sense-impression, so it's not valid evidence - I still need to actually look at Google.\"  And the cycle continued.

\n

By the time I woke up entirely, my brain had fully switched on and I realized that the prior probability was tiny; and no, I did not bother to check the actual Google results.  Though I did Google to check whether Richard Thaler was alive, since I was legitimately unsure of that when I started writing this post.

\n

If my dreaming brain had been talking in front of an audience, that audience might have applauded the intelligent-sounding sophisticated reasoning about what constituted evidence - which was even correct, so far as it went.  And yet my half-awake brain didn't notice that at the base of the whole issue was a big complicated specific hypothesis whose prior probability fell off a cliff and vanished.  EliezerDreaming didn't try to measure the length of the message, tot up the weight of burdensome details, or even explicitly ask, \"What is the prior probability?\"

\n

I'd mused before that the state of being religious seemed similar to the state of being half-asleep.  But my recent dream made me wonder if the analogy really is a deep one.  Intelligent theists can often be shepherded into admitting that their argument X is not valid evidence.  Intelligent theists often confess explicitly that they have no supporting evidence - just like I explicitly realized that my dreams offered no evidence about the actual Wall Street Journal or the Journal of Applied Logic.  But then they stay \"half-awake\" and go on wondering whether the dream happens to be true.  They don't \"wake up completely\" and realize that, in the absence of evidence, the whole thing has a prior probability too low to deserve specific attention.

\n

My dreaming brain can, in its sleep, reason explicitly about likelihood ratios, Bayes's Theorem, cognitive chains of causality, permissible inferences, strong arguments and non-arguments.  And yet still maintain a dreaming inability to reasonably evaluate priors, to notice burdensome details and sheer ridiculousness.  If my dreaming brain's behavior is a true product of dissociation - of brainware modules or software modes that can be independently switched on or off - then the analogy to religion may be more than surface similarity.

\n

Conversely it could just be a matter of habits playing out in my dreaming self; that I habitually pay more attention to arguments than priors, or habitually evaluate arguments deliberately but priors intuitively.

" } }, { "_id": "LZwFMXvwTGCbWcaiq", "title": "A note on hypotheticals", "pageUrl": "https://www.lesswrong.com/posts/LZwFMXvwTGCbWcaiq/a-note-on-hypotheticals", "postedAt": "2009-08-07T18:56:55.392Z", "baseScore": 24, "voteCount": 26, "commentCount": 18, "url": null, "contents": { "documentId": "LZwFMXvwTGCbWcaiq", "html": "

People frequently describe hypothetical situations on LW.  Often, other people make responses that suggest they don't understand the purpose of hypotheticals.

\n\n

I'll expand on the last point.  Sorry for being vague.  I'm trying not to name names.

\n

When a hypothetical is put forward to test a theory, ignore aspects of the hypothetical scenario that don't correspond to parts of the theory.  Don't get emotionally involved.  Don't think of the hypothetical as a narrative.  A hypothetical about Omega sounds a lot like a story about a genie from a lamp, but you should approach it in a completely different way.  Don't try to outsmart Omega (unless you're making a point about the impossibility of an Omega who can, e.g., decide undecidable problems).  When you find a loophole in the way the hypothetical is posed that doesn't exist in the original domain, point it out only if you are doing so to improve the phrasing of the hypothetical situation.

\n

John Searle's Chinese Room is an example of a hypothetical in which it is important to not get emotionally involved.  Searle's conclusion is that the man in the Chinese room doesn't understand Chinese; therefore, a computer doesn't understand Chinese.  His model maps the running software onto the complete system of room plus man plus cards; but when he interprets it, he empathizes with the human on each half of the mapping,  and so maps the locus of consciousness from the running software onto just the man.1

\n

Sometimes it's difficult to know whether your solution to a hypothetical is exploiting a loophole in the hypothetical, or finding a solution to the original problem.  But when the original problem is testing a mathematical model, it's usually obvious.  There are a few general situations where it's not obvious.

\n

Consciousness is often a tricky area that makes it hard to tell whether you are looking at a solution to the original problem, or a loophole in the hypothetical.  Sometimes the original problem is a paradox because of consciousness, so you can't map it away.  In the Newcomb paradox, if you replace the person with a computer program, people would be much quicker to say:  You should write a computer program that will one-box.  But you can phrase it that way only if you're sure that the Newcomb paradox isn't really a question about free will.  The \"paradox\" might be regarded as the assertion that there is no such thing as free will.

\n

Another tricky case involves infinities.  A paradox of infinities typically involves taking two different infinities, but treating them as a single infinity, so that they don't cancel out the way they should, or do cancel out when they shouldn't.  Zeno's paradox is an example: The hypothetical doesn't notice that the infinity of intervals is cancelled out by their infinitesimal sizes.  Eliezer discusses some other cases here.
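A quick arithmetic illustration of the Zeno case, with my own toy numbers (crossing a unit distance at unit speed): the number of intervals is infinite, but their durations shrink geometrically, so the total time is finite.

```python
# Sum the durations of the first 60 Zeno intervals: 1/2 + 1/4 + 1/8 + ...
total_time = 0.0
interval = 0.5
for _ in range(60):
    total_time += interval
    interval /= 2
print(total_time)   # converges to 1.0 -- the two infinities cancel out as they should
```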

\n

Another category of tricky cases is when the hypothetical involves impossibilities.  It's possible to accidentally construct a hypothetical that makes an assumption that isn't valid in our universe.  (I think these paradoxes were unknown before the 20th century, but there may be a math example.)  These crop up frequently in modern physics.  The ultraviolet catastrophe may be the first such paradox discovered.  The hypothetical in which a massive black hole suddenly appears one light-minute away from you, and you want to know how you can be influenced by its gravity before gravitational waves have time to reach you, might be an example.  The aspect of the Newcomb paradox that allows Omega to predict what you will do without fail may be such a flawed hypothetical.

\n

If you are giving a solution to a hypothetical scenario that tests a mathematical model, and your response doesn't use math, and doesn't hinge on a consciousness, infinity, or impossibility from the original problem domain, your response is likely irrelevant.

\n

 

\n

1 He makes other errors as well.  It's a curious case in which amassing a large number of errors in a model makes it harder to rebut, because it's harder to figure out what the model is.  This is a clever way of exploiting the scientific process.  Searle takes on challengers one at a time.  Each challenger, being a scientist, singles out one error in Searle's reasoning.  Searle uses other errors in his reasoning to construct a workaround; and may immediately move on to another challenger, and use the error that the original challenger focused on to work around the error that the second challenger focused on.  This sort of trickery can be detected by looking at all of someone's counterarguments en masse, and checking that they all define the same terms the same way, and agree on which statements are assumptions and which statements are conclusions.

" } }, { "_id": "BH8BcP7d2sbmzZig8", "title": "Fighting Akrasia: Finding the Source", "pageUrl": "https://www.lesswrong.com/posts/BH8BcP7d2sbmzZig8/fighting-akrasia-finding-the-source", "postedAt": "2009-08-07T14:49:38.900Z", "baseScore": 8, "voteCount": 13, "commentCount": 48, "url": null, "contents": { "documentId": "BH8BcP7d2sbmzZig8", "html": "

Followup to: Fighting Akrasia: Incentivising Action

\n

Influenced by: Generalizing From One Example

\n

Previously I looked at how we might fight akrasia by creating incentives for actions.  Based on the comments to the previous article and Yvain's now classic post Generalizing From One Example, I want to take a deeper look at the source of akrasia and the techniques used to fight it.

\n

I feel foolish for not looking at this more closely first, but let's begin by asking what akrasia is and what causes it.  As commonly used, akrasia is the weakness-of-will we feel when we desire to do something but find ourselves doing something else.  So why do we experience akrasia?  Or, more to the point, why do we feel a desire to take actions contrary to the actions we desire most, as indicated by our actions?  Or, if it helps, flip that question and ask why are the actions we take not always the ones we feel the greatest desire for?

\n

First, we don't know the fine details of how the human brain makes decisions.  We know what it feels like to come to a decision about an action (or anything else), but how the algorithm feels from the inside is not a reliable way to figure out how the decision was actually made.  But because most people can relate to a feeling of akrasia, this suggests that there is some disconnect between how the brain decides what actions are most desirable and what actions we believe are most desirable.  The hypothesis that I consider most likely is that the ability to form beliefs about desirable actions evolved well after the ability to make decisions about what actions are most desirable, and the decision-making part of the brain only bothers to consult the belief-about-desirability-of-actions part of the brain when there is a reason to do so from evolution's point of view.1  As a result we end up with a brain that only does what we think we really want when evolutionarily prudent, hence we experience akrasia whenever our brain doesn't consider it appropriate to consult what we experience as desirable.

\n

This suggests two main ways of overcoming akrasia assuming my hypothesis (or something close to it) is correct:  make the actions we believe to be desirable also desirable to the decision-making part of the brain or make the decision-making part of the brain consult the belief-about-desirability-of-actions part of the brain when we want it to.  Most techniques fall into the former category since this is by far the easier strategy, but however a technique works, an overriding theme of the akrasia-related articles and comments on Less Wrong is that no technique yet found seems to work for all people.

\n

For convenience, here is a list of some of the techniques discussed here and elsewhere in the productivity literature for fighting akrasia that work for some people but not for everyone.

\n\n

And there are many more tricks and workarounds people have discovered that work for them and some segment of the population. But so far no one has found a Unifying Theory of Akrasia Fighting; otherwise they would have other-optimized us all and be rich.  So all we have so far is a collection of techniques that sometimes work for some people, but because most promoters of these techniques are busy trying to other-optimize because they generalized from one example, we don't even have a good way to see if a technique will work for any particular individual short of having them try it.

\n

I don't expect us to find a universal solution to fighting akrasia any time soon, and it may require the medical technology to \"rewire\" or \"reprogram\" the brain (pick your inapt metaphor).  But what we can do is make things a little easier for those looking for what they can do that will actually work.  In that vein, I've created a survey for the Less Wrong community that will hopefully give us a chance to collect enough data to predict what types of akrasia-fighting techniques will work best for which people.  It asks a number of questions about your behaviors and thoughts and then focuses on what techniques for fighting akrasia you've tried and how well they worked for you.  My hope is that I can put all of this data together to make some predictions about how likely a particular technique is to work for you, assuming I've asked the right questions.

\n

Please feel free to share this survey (and post) with anyone who you think might be interested, even if they would otherwise not be interested in Less Wrong.  The more responses we can get the more useful the data will be.  Thanks!

\n

Take the survey

\n

Footnotes:

\n

1 That is to say, there were statistically regular occasions in the environment of evolutionary adaptation that led those of our ancestors who consulted the belief-about-desirability-of-actions part of the brain on those occasions when making decisions to reproduce at a higher rate.

" } }, { "_id": "sj53JvAvbGH54SH4v", "title": "Robin Hanson's lists of Overcoming Bias Posts", "pageUrl": "https://www.lesswrong.com/posts/sj53JvAvbGH54SH4v/robin-hanson-s-lists-of-overcoming-bias-posts", "postedAt": "2009-08-06T20:10:39.207Z", "baseScore": 26, "voteCount": 24, "commentCount": 8, "url": null, "contents": { "documentId": "sj53JvAvbGH54SH4v", "html": "

I have created a list of Overcoming Bias posts for Robin Hanson available here. Additionally, using the links inside each post, I have created a set of graphs (available here) such that if post A has a link to post B, then there is an arc from B to A. Enjoy! (There are also ones for Eliezer here).

" } }, { "_id": "LkCeA4wu8iLmetb28", "title": "Exterminating life is rational", "pageUrl": "https://www.lesswrong.com/posts/LkCeA4wu8iLmetb28/exterminating-life-is-rational", "postedAt": "2009-08-06T16:17:43.983Z", "baseScore": 19, "voteCount": 43, "commentCount": 279, "url": null, "contents": { "documentId": "LkCeA4wu8iLmetb28", "html": "

Followup to This Failing Earth, Our society lacks good self-preservation mechanisms, Is short term planning in humans due to a short life or due to bias?

\n

I don't mean that deciding to exterminate life is rational.  But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.

\n

Ed Regis reports on p 216 of “Great Mambo Chicken and the TransHuman Condition,” (Penguin Books, London, 1992):

\n
\n

Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.

\n
\n

Was this a bad decision?  Well, consider the expected value to the people involved.  Without the bomb, there was a much, much greater than 3/1,000,000 chance that either a) they would be killed in the war, or b) they would be ruled by Nazis or the Japanese.  The loss to them if they ignited the atmosphere would be another 30 or so years of life.  The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life.  The loss in being conquered would also be large.  Easy decision, really.

\n

Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p=3/1,000,000 chance of eliminating life as we know it.  Then our expected survival time is 100 times the sum from n=1 to infinity of np(1-p)^(n-1).  If I've done my math right, that's 100/p ≈ 33,333,000 years.
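A direct check of that expectation (my own arithmetic; the only input is the p = 3/1,000,000 from the paragraph above):

```python
p = 3 / 1_000_000   # per-century chance of eliminating life

# Expected number of centuries survived: sum over n of n * p * (1-p)**(n-1), which is 1/p.
expected_centuries = sum(n * p * (1 - p) ** (n - 1) for n in range(1, 5_000_000))
print(100 * expected_centuries)   # ~33,333,000 years; the closed form is 100/p
```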

\n

This supposition seems reasonable to me.  There is a balance between offensive and defensive capability that shifts as technology develops.  If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed.  In the near future, biological weapons will be more able to wipe out life than we are able to defend against them.  We may then develop the ability to defend against biological attacks; we may then be safe until the next dangerous technology.

\n

If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially.  The ~33 million years remaining to life is then in subjective time, and must be mapped into realtime.  If we suppose the subjective/real time ratio doubles every 100 years, this gives life an expected survival time of 2000 more realtime years.  If we instead use Ray Kurzweil's figure of about 2 years, this gives life about 40 remaining realtime years.  (I don't recommend Ray's figure.  I'm just giving it for those who do.)
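A rough reconstruction of that subjective-to-realtime conversion (my own, assuming continuous compounding of the subjective/real ratio, which the post doesn't specify): find the realtime horizon whose accumulated subjective time equals the ~33 million subjective years above.

```python
from math import log, log2

def realtime_horizon(subjective_years, doubling_years):
    # Solve: integral of 2**(t / doubling_years) dt from 0 to T == subjective_years
    return doubling_years * log2(subjective_years * log(2) / doubling_years + 1)

subjective = 33_000_000
print(realtime_horizon(subjective, 100))   # ~1,800 realtime years (the post rounds to 2000)
print(realtime_horizon(subjective, 2))     # ~47 realtime years (in the ballpark of the post's 40)
```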

\n

Please understand that I am not yet another \"prophet\" bemoaning the foolishness of humanity.  Just the opposite:  I'm saying this is not something we will outgrow.  If anything, becoming more rational only makes our doom more certain.  For the agents who must actually make these decisions, it would be irrational not to take these risks.  The fact that this level of risk-tolerance will inevitably lead to the snuffing out of all life does not make the expected utility of these risks negative for the agents involved.

\n

I can think of only a few ways that rationality might not inevitably exterminate all life in the cosmologically (even geologically) near future:

\n
- We can outrun the danger.
- Technology will stabilize in a safe state.
- People will stop having conflicts.
- Rational agents incorporate the benefits to others into their utility functions.
- Rational agents with long lifespans will protect the future for themselves.
- A benevolent singleton will save us all.
\n

Let's look at these one by one:

\n

We can outrun the danger.

\n

We will colonize other planets; but we may also  figure out how to make the Sun go nova on demand.  We will colonize other star systems; but we may also figure out how to liberate much of the energy in the black hole at the center of our galaxy in a giant explosion that will move outward at near the speed of light.

\n

One problem with this idea is that apocalypses are correlated; one may trigger another.  A disease may spread to another planet.  The choice to use a planet-busting bomb on one planet may lead to its retaliatory use on another planet.  It's not clear whether spreading out and increasing in population actually makes life more safe.  If you think in the other direction, a smaller human population (say ten million) stuck here on Earth would be safer from human-instigated disasters.

\n

But neither of those are my final objection.  More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed.

\n

Technology will stabilize in a safe state.

\n

Maybe technology will stabilize, and we'll run out of things to discover.  If that were to happen, I would expect that conflicts would increase, because people would get bored.  As I mentioned in another thread, one good explanation for the incessant and counterproductive wars in the middle ages - a reason some of the actors themselves gave in their writings - is that the nobility were bored.  They did not have the concept of progress; they were just looking for something to give them purpose while waiting for Jesus to return.

\n

But that's not my final rejection.  The big problem is that by \"safe\", I mean really, really safe.  We're talking about bringing existential threats to chances less than 1 in a million per century.  I don't know of any defensive technology that can guarantee a less than 1 in a million failure rate.

\n

People will stop having conflicts.

\n

That's a nice thought.  A lot of people - maybe the majority of people - believe that we are inevitably progressing along a path to less violence and greater peace.

\n

They thought that just before World War I.  But that's not my final rejection.  Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts.  Those that avoid conflict will be out-competed by those that do not.

\n

But that's not my final rejection either.  The bigger problem is that this isn't something that arises only in conflicts.  All we need are desires.  We're willing to tolerate risk to increase our utility.  For instance, we're willing to take some unknown, but clearly greater than one in a million chance, of the collapse of much of civilization due to climate warming.  In return for this risk, we can enjoy a better lifestyle now.

\n

Also, we haven't burned all physics textbooks along with all physicists.  Yet I'm confident there is at least a one in a million chance that, in the next 100 years, some physicist will figure out a way to reduce the earth to powder, if not to crack spacetime itself and undo the entire universe.  (In fact, I'd guess the chance is nearer to 1 in 10.)1  We take this existential risk in return for a continued flow of benefits such as better graphics in Halo 3 and smaller iPods.  And it's reasonable for us to do this, because an improvement in utility of 1% over an agent's lifespan is, to that agent, approximately balanced by a 1% chance of destroying the Universe.

\n

The Wikipedia entry on Large Hadron Collider risk says, \"In the book Our Final Century: Will the Human Race Survive the Twenty-first Century?, English cosmologist and astrophysicist Martin Rees calculated an upper limit of 1 in 50 million for the probability that the Large Hadron Collider will produce a global catastrophe or black hole.\"  The more authoritative \"Review of the Safety of LHC Collisions\" by the LHC Safety Assessment Group concluded that there was at most a 1 in 10^31 chance of destroying the Earth.

\n

The LHC risk estimates are criminally low.  Their evidence was this: \"Nature has already conducted the LHC experimental programme about one billion times via the collisions of cosmic rays with the Sun - and the Sun still exists.\"  There followed a couple of sentences of handwaving to the effect that if any other stars had turned to black holes due to collisions with cosmic rays, we would know it - apparently due to our flawless ability to detect black holes and ascertain what caused them - and therefore we can multiply this figure by the number of stars in the universe.

\n

I believe there is much more than a one-in-a-billion chance that our understanding in one of the steps used in arriving at these figures is incorrect.  Based on my experience with peer-reviewed papers, there's at least a one-in-ten chance that there's a basic arithmetic error in their paper that no one has noticed yet.  I'm thinking more like one-in-a-million, once you correct for the anthropic principle and for the chance that there is a mistake in the argument.  (That's based on a belief that priors for anything likely enough that smart people even thought of the possibility should be larger than one in a billion, unless they were specifically trying to think of examples of low-probability possibilities such as all of the air molecules in the room moving to one side.)

\n

The Trinity test was done for the sake of winning World War II.  But the LHC was turned on for... well, no practical advantage that I've heard of yet.  It seems that we are willing to tolerate one-in-a-million chances of destroying the Earth for very little benefit.  And this is  rational, since the LHC will probably improve our lives by more than one part in a million.

\n

Rational agents incorporate the benefits to others into their utility functions.

\n

\"But,\" you say, \"I wouldn't risk a 1% chance of destroying the universe for a 1% increase in my utility!\"

\n

Well... yes, you would, if you're a rational expectation maximizer.  It's possible that you would take a much higher risk, if your utility is at risk of going negative; it's not possible that you would not accept a .99% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.  (This seems difficult, but is worth exploring.)  If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience.  It doesn't.  It's a 1% increase in your utility.  If you factor the rest of your universe into your utility function, then it's already in there.
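A bare expected-utility check of that claim (toy numbers of my own, assuming the post-destruction state has utility 0): the break-even risk for a 1% utility gain sits just under 1%.

```python
U = 1.0       # current expected utility, arbitrary units
gain = 0.01   # the 1% improvement on offer

# Largest risk p of total loss worth taking: solve (1 - p) * U * (1 + gain) == U for p.
break_even_risk = 1 - U / (U * (1 + gain))
print(break_even_risk)   # ~0.0099, i.e. just under 1%, matching the .99% figure in the text
```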

\n

The US national debt should be enough to convince you that people act in their self-interest.  Even the most moral people - in fact, especially the \"most moral\" people - do not incorporate the benefits to others, especially future others, into their utility functions.  If we did that, we would engage in massive eugenics programs.  But eugenics is considered the greatest immorality.

\n

But maybe they're just not as rational as you.  Maybe you really are a rational saint who considers your own pleasure no more important than the pleasure of everyone else on Earth.  Maybe you have never, ever bought anything for yourself that did not bring you as much benefit as the same amount of money would if spent to repair cleft palates or distribute vaccines or mosquito nets or water pumps in Africa.  Maybe it's really true that, if you met the girl of your dreams and she loved you, and you won the lottery, put out an album that went platinum, and got published in Science, all in the same week, it would make an imperceptible change in your utility versus if everyone you knew died, Bernie Madoff spent all your money, and you were unfairly convicted of murder and diagnosed with cancer.

\n

It doesn't matter.  Because you would be adding up everyone else's utility, and everyone else is getting that 1% extra utility from the better graphics cards and the smaller iPods.

\n

But that will stop you from risking atmospheric ignition to defeat the Nazis, right?  Because you'll incorporate them into your utility function?  Well, that is a subset of the claim \"People will stop having conflicts.\"  See above.

\n

And even if you somehow worked around all these arguments, evolution, again, thwarts you.2  Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents.  The claim that rational agents are not selfish implies that rational agents are unfit.

\n

Rational agents with long lifespans will protect the future for themselves.

\n

The most familiar idea here is that, if people expect to live for millions of years, they will be \"wiser\" and take fewer risks with that time.  But the flip side is that they also have more time to lose.  If they're deciding whether to risk igniting the atmosphere in order to lower the risk of being killed by Nazis, lifespan cancels out of the equation.

\n

Also, if they live a million times longer than us, they're going to get a million times the benefit of those nicer iPods.  They may be less willing to take an existential risk for something that will benefit them only temporarily.  But benefits have a way of increasing, not decreasing, over time.  The discovery of the law of gravity and of the invisible hand benefit us in the 21st century more than they did the people of the 17th century.

\n

But that's not my final rejection.  More important is time-discounting.  Agents will time-discount, probably exponentially, due to uncertainty.  If you considered benefits to the future without exponential time-discounting, the benefits to others and to future generations would outweigh any benefits to yourself so much that in many cases you wouldn't even waste time trying to figure out what you wanted.  And, since future generations will be able to get more utility out of the same resources, we'd all be obliged to kill ourselves, unless we reasonably think that we are contributing to the development of that capability.
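A small numerical illustration of why the shape of the discount function carries so much weight here (my own toy rates, not the post's): exponentially decaying weights give the whole future a bounded total, while a slower, hyperbolic-style decay lets the far future dominate without bound.

```python
horizon = 1_000_000   # years

exponential = sum(0.97 ** t for t in range(1, horizon))           # converges (to about 32.3)
hyperbolic  = sum(1 / (1 + 0.03 * t) for t in range(1, horizon))  # keeps growing with the horizon

print(exponential, hyperbolic)
```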

\n

Time discounting is always (so far) exponential, because non-asymptotic functions don't make sense.  I suppose you could use a trigonometric function instead for time discounting, but I don't think it would help.

\n

Could a continued exponential population explosion outweigh exponential time-discounting?  Well, you can't have a continued exponential population explosion, because of the speed of light and the Planck constant.  (I leave the details as an exercise to the reader.)

\n

Also, even if you had no time-discounting, I think that a rational agent must do identity-discounting.  You can't stay you forever.  If you change, the future you will be less like you, and weigh less strongly in your utility function.  Objections to this generally assume that it makes sense to trace your identity by following your physical body.  Physical bodies will not have a 1-1 correspondence with personalities for more than another century or two, so just forget that idea.  And if you don't change, well, what's the point of living?

\n

Evolutionary arguments may help us with self-discounting.  Evolutionary forces encourage agents to emphasize continuity or ancestry over resemblance in an agent's selfness function.  The major variable is reproduction rate over lifespan.  This applies to genes or memes.  But they can't help us with time-discounting.

\n

I think there may be a way to make this one work.  I just haven't thought of it yet.

\n

A benevolent singleton will save us all.

\n

This case takes more analysis than I am willing to do right now.  My short answer is that I place a very low expected utility on singleton scenarios.  I would almost rather have the universe eat, drink, and be merry for 34 million years, and then die.

\n

I'm not ready to place my faith in a singleton.  I want to work out what is wrong with the rest of this argument, and how we can survive without a singleton.

\n

(Please don't conclude from my arguments that you should go out and create a singleton.  Creating a singleton is hard to undo.  It should be deferred nearly as long as possible.  Maybe we don't have 34 million years, but this essay doesn't give you any reason not to wait a few thousand years at least.)

\n

In conclusion

\n

I think that the figures I've given here are conservative.  I expect existential risk to be much greater than 3/1,000,000 per century.  I expect there will continue to be externalities that cause suboptimal behavior, so that the actual risk will be greater even than the already-sufficient risk that rational agents would choose.  I expect population and technology to continue to increase, and existential risk to be proportional to population times technology.  Existential risk will very possibly increase exponentially, on top of the subjective-time exponential.

\n

Our greatest chance for survival is that there's some other possibility I haven't thought of yet.  Perhaps some of you will.

\n

 

\n

1 If you argue that the laws of physics may turn out to make this impossible, you don't understand what \"probability\" means.

\n

2 Evolutionary dynamics, the speed of light, and the Planck constant are the three great enablers and preventers of possible futures, which enable us to make predictions farther into the future and with greater confidence than seem intuitively reasonable.

" } }, { "_id": "bTdDKX4sK35Q9t4v2", "title": "LW/OB Rationality Quotes - August 2009", "pageUrl": "https://www.lesswrong.com/posts/bTdDKX4sK35Q9t4v2/lw-ob-rationality-quotes-august-2009", "postedAt": "2009-08-06T13:35:40.985Z", "baseScore": 7, "voteCount": 9, "commentCount": 25, "url": null, "contents": { "documentId": "bTdDKX4sK35Q9t4v2", "html": "

I always see that the monthly Rationality Quotes thread has this line: \"Do not quote comments/posts on LW/OB - if we do this, there should be a separate thread for it.\" This is the thread for those quotes.

\n
\n

This is a (possibly) monthly thread for posting any interesting rationality-related quotes you've seen on LW/OB.

\n\n
" } }, { "_id": "L8aMhHYnYRaDZpLmt", "title": "The Objective Bayesian Programme", "pageUrl": "https://www.lesswrong.com/posts/L8aMhHYnYRaDZpLmt/the-objective-bayesian-programme", "postedAt": "2009-08-06T10:33:40.153Z", "baseScore": 19, "voteCount": 16, "commentCount": 5, "url": null, "contents": { "documentId": "L8aMhHYnYRaDZpLmt", "html": "

Followup to: Bayesian Flame.

\n

This post is a chronicle of my attempts to understand Cyan's #2. (Bayesian Flame was an approximate parse of #1.) Warning: long, some math, lots of links, probably lots of errors. At the very least I want this to serve as a good reference for further reading.

\n

Introduction

\n

To the mathematical eye, many statistical problems share the following minimal structure:

\n
    \n
1. A space of parameters. (Imagine a freeform blob without assuming any metric or measure.)
2. A space of possible outcomes. (Imagine another, similarly unstructured blob.)
3. Each point in the parameter space determines a probability measure on the outcome space.
\n

By itself, this kind of input is too sparse to yield solutions to statistical problems. What additional structure on the spaces should we introduce?
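A minimal sketch of that bare-bones structure (my own, using finite sets so that a probability measure is just a dict of weights; all names are hypothetical):

```python
from typing import Dict, Hashable

Param = Hashable
Outcome = Hashable

class StatisticalModel:
    # Parameter space + outcome space + a map from parameter to distribution, and nothing else.
    def __init__(self, likelihood: Dict[Param, Dict[Outcome, float]]):
        self.parameters = set(likelihood)                                   # the parameter blob
        self.outcomes = {o for dist in likelihood.values() for o in dist}   # the outcome blob
        self.likelihood = likelihood                                        # theta -> P(outcome | theta)

# Example: a bent coin whose bias is either 0.3 or 0.7.
model = StatisticalModel({0.3: {'H': 0.3, 'T': 0.7},
                          0.7: {'H': 0.7, 'T': 0.3}})
```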

\n

The answer that we all know and love

\n

Assuming some \"prior\" probability measure on the parameter space yields a solution that's unique, consistent and wonderful in all sorts of ways. This has led some people to adopt the \"subjectivist\" position saying priors are so basic that they ought not be questioned. One of its most prominent defenders was Leonard Jimmie Savage who put forward the following argument:

\n
\n

Suppose, for example, that the person is offered an even-money bet for five dollars - or, to be ultra-rigorous, for five utiles - that internal combustion engines in American automobiles will be obsolete by 1970. If there is any event to which an objectivist would refuse to attach probability, that corresponding to the obsolescence in question is one... Yet, I think I may say without presumption that you would regard the bet against obsolescence as a very sound investment.

\n
\n

This is a fine argument for using priors when you're betting money, but there's a snag: however much you are willing to bet, this doesn't give you grounds to publish papers about the future that you inferred from your intuitive prior! Any a priori information used in science should be justified in the name of scientific objectivity.

\n

(At this point Eliezer raises the suggestion that scientists ought to communicate with likelihood ratios only. That might be a brave new world to live in; too bad we'll have to stop teaching kids that g approximately equals 9.8 m/s² and give them likelihood profiles instead.)

\n

Rather than dive deeper into the fascinating topic of \"uninformative priors\", let's go back to the surface. Take a closer look at the basic formulation above to see what other structures we can introduce instead of priors to get interesting results.

\n

The minimax approach

\n

In the mid-20th century a statistician named Abraham Wald made a valiant effort to step outside the problem. His decision theory idea encompasses both frequentist and Bayesian inference. Roughly, it goes like this: we no longer know our prior probabilities, but we do know our utilities. More concretely, we compute a decision from the observed dataset, and later suffer a loss that depends on our decision and the actual true parameter value. Substituting different \"spaces of decisions\" and \"loss functions\", we get a wide range of situations to study.

\n

But wait! Doesn't the \"optimal\" decision depend on the prior distribution of parameters as well?

\n

Wald's crucial insight was that... no, not necessarily.

\n

If we don't know the prior and are trying to be \"scientifically objective\", it makes sense to treat the problem of statistical inference as a game. The statistician chooses a decision rule, Nature chooses a true parameter value, randomness determines the payoff. Since the game is zero-sum, we can reasonably expect it to have a minimax value: there's a decision rule that minimizes the maximum loss the statistician can suffer, whatever Nature may choose.

\n

Now, as Ken Binmore accurately noted, in real life you don't minimax unless \"your relationship with the universe has reached such a low ebb that you keep your pants up with both belt and suspenders\", so the minimax principle gives off a whiff of the paranoia that we've come to associate with frequentism. Haha, gotcha! Wald's results apply to Bayesianism just as well. His \"complete class theorem\" proves that Bayesian-rational strategies with well-defined priors constitute precisely the class of non-dominated strategies in the game described. (If you squint the right way, this last sentence compresses the whole philosophical justification of Bayesianism.)

\n

The game-theoretic approach gives our Bayesian friends even more than that. The statistical game's minimax decision rules often correspond to Bayes strategies with a certain uninformative prior, called the \"least favorable prior\" for that risk function. This gives you a frequentist-valid procedure that also happens to be Bayesian, which means immunity to Dutch books, negative masses and similar criticisms. In a particularly fascinating convergence, the well-known \"reference prior\" (the Jeffreys prior properly generalized to N dimensions) turns out to be asymptotically least favorable when optimizing the Shannon mutual information between the parameter and the sample.
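A concrete instance (standard textbook material, not from the post): for X ~ Binomial(n, p) under squared-error loss, the minimax estimator (X + √n/2)/(n + √n) is exactly the Bayes rule under a Beta(√n/2, √n/2) prior, the least favorable prior for this problem, and its risk is flat in p.

```python
from math import comb, sqrt

n = 10

def risk(estimator, p):
    # Expected squared error at true parameter p, summing over all possible samples x.
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) * (estimator(x) - p)**2
               for x in range(n + 1))

mle = lambda x: x / n                                  # the usual frequentist estimate
minimax = lambda x: (x + sqrt(n) / 2) / (n + sqrt(n))  # Bayes under the least favorable prior

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f'p={p:.1f}  MLE risk={risk(mle, p):.4f}  minimax risk={risk(minimax, p):.4f}')
# The minimax column prints the same value at every p (n / (4 * (n + sqrt(n))**2)),
# and its worst case beats the MLE's worst case at p = 0.5.
```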

\n

At this point the Bayesians in the audience should be rubbing their hands. I told ya it would be fun! Our frequentist friends on the other hand have dozed off, so let's pull another stunt to wake them up.

\n

Confidence coverage demystified

\n

Informally, we want to say things about the world like \"I'm 90% sure that this physical constant lies within those bounds\" and be actually right 90% of the time when we say such things.

\n

...Semi-formally, we want a procedure that calculates from each sample a \"confidence subset\" of the parameter space, such that these subsets include the true parameter value with probability greater than or equal to 90%, while the sets themselves are as small as possible.
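A quick simulation of what that coverage requirement means operationally (my own toy setup: a Normal mean with known variance, not anything from the post): the interval procedure is fixed in advance, and it captures the true parameter about 90% of the time in repeated sampling, whatever the true value happens to be.

```python
import random
from statistics import NormalDist, mean

z = NormalDist().inv_cdf(0.95)           # two-sided 90% interval for a Normal mean
mu_true, sigma, n_obs, n_trials = 3.0, 1.0, 25, 20_000

hits = 0
for _ in range(n_trials):
    sample = [random.gauss(mu_true, sigma) for _ in range(n_obs)]
    xbar = mean(sample)
    half_width = z * sigma / n_obs ** 0.5
    hits += (xbar - half_width <= mu_true <= xbar + half_width)

print(hits / n_trials)   # ~0.90
```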

\n

(NB: this is not equivalent to deriving a \"correct\" posterior distribution on the parameter space. Not every method of choosing small subsets with given posterior masses will give you uniformly correct confidence coverage, and each such method corresponds to many different posterior distributions in the N-dimensional case.)

\n

...Formally, we introduce a new structure on the parameter space - a \"not-quite-measure\" to determine the size of confidence sets - and then, upon receiving a sample, determine from it a 90% confidence set with the smallest possible \"not-quite-measure\".

\n

(NB: I'm calling it \"not-quite-measure\" because of a subtlety in the N-dimensional case. If we're estimating just one parameter out of several, the \"measure\" corresponds to span in that coordinate and thus is not additive under set union, hence \"not-quite\".)

\n

Except this doesn't work. There might be two procedures to compute confidence sets, the first of which is sometimes better and sometimes worse than the second. We have no comparison function to determine the winner, and in reality the \"uniformly most accurate\" procedure doesn't always exist.

\n

But if we replace the \"size\" of the confidence set with its expected size under each single parameter value, this gives us just enough information to apply the game-theoretic minimax approach. Solving the game thus gives us \"minimax expected size\" confidence sets, or MES, that people are actually using. Which isn't saying much, but still.

\n

More on subjectivity

\n

The minimax principle sounds nice, but the construction of the least favorable prior distribution for any given experiment and risk function has a problem: it typically depends on the whole sample space and thus on the experiment's stopping rule. When do we stop gathering data? What subsets of observed samples do we thus rule out? In the general case the least favorable prior depends on the number of samples we intend to draw! This blatantly violates the likelihood principle that Eliezer so eloquently defended.

\n

But, ordinary probability theory tells us unambiguously that 90% of your conclusions will be true whatever stopping rules you choose for each of them, as long as you choose before observing any data from the experiments. (Otherwise all bets are off, like if you'd decided to pick your Bayesian prior based on the data.) But, the conclusions themselves will be different from rule to rule. But, you cannot deliberately engineer a situation where the minimax of one stopping rule reliably makes you more wrong than another one...

\n

Does this look more like an eternal mathematical law or an ad hoc tool? To me it looks like a mystery. Like frequentists are trying to solve a problem that Bayesians don't even attempt to solve. The answer is somewhere out there; we can guess that something like today's Bayesianism will be a big part of it, but not the only part.

\n

Conclusion

\n

When some field is afflicted with deep and persistent philosophical conflicts, this isn't necessarily a sign that one of the sides is right and the other is just being silly. It might be a sign that some crucial unifying insight is waiting several steps ahead. Minimaxing doesn't look to me like the beginning and end of \"objective\" statistics, but the right answer that we don't know yet has got to be at least this normal.

\n

Further reading: James Berger, The Case for Objective Bayesian Analysis.

" } }, { "_id": "vXCK3kptLLggEfojX", "title": "Why Real Men Wear Pink", "pageUrl": "https://www.lesswrong.com/posts/vXCK3kptLLggEfojX/why-real-men-wear-pink", "postedAt": "2009-08-06T07:39:04.821Z", "baseScore": 107, "voteCount": 103, "commentCount": 155, "url": null, "contents": { "documentId": "vXCK3kptLLggEfojX", "html": "

\"Fashion is a form of ugliness so intolerable we have to alter it every six months.\"

\n

-- Oscar Wilde

\n

For the past few decades, I and many other men my age have been locked in a battle with the clothing industry. I want simple, good-looking apparel that covers my nakedness and maybe even makes me look attractive. The clothing industry believes someone my age wants either clothing laced with profanity, clothing that objectifies women, clothing that glorifies alcohol or drug use, or clothing that makes them look like a gangster. And judging by the clothing I see people wearing, on the whole they are right.

I've been working my way through Steven Pinker's How The Mind Works, and reached the part where he quotes approvingly Quentin Bell's theory of fashion. The theory provides a good explanation for why so much clothing seems so deliberately outrageous.

\n

\n

Bell starts by offering his own explanation of the \"fashion cycle\". He claims that the goal of fashion is to signal status. So far, so obvious. But low-status people would like to subvert the signal. Therefore, the goal of lower class people is to look like upper class people, and the goal of upper class people is to not look like lower class people.

One solution is for the upper class to wear clothing so expensive the lower class could not possibly afford it. This worked for medieval lords and ladies, but nowadays after a while mass production will kick in and K-Mart will have a passable rhinestone based imitation available for $49.95. Once the lower class is wearing the once fashionable item, the upper class wouldn't be caught dead in it. They have to choose a new item of clothing to be the status signal, after a short period of grace the lower class copy that too, and the cycle begins again.

For example, maybe in early 2009 a few very high-status people start wearing purple. Everyone who is \"in the know\" enough to understand that they are trend-setters switches to purple. Soon it becomes obvious that lots of \"in the know\" people are wearing purple, and anyone who reads fashion magazines starts stocking up on purple clothing. Soon, only the people too out-of-the-loop to know about purple and the people too poor to immediately replace all their clothes are wearing any other color. In mid-2009, some extremely high-status people now go out on a limb and start wearing green; everyone else is too low-status to be comfortable unilaterally breaking the status quo. Soon everyone switches to green. Wearing purple is a way of broadcasting that you're so dumb or so poor you don't have green clothes yet, which is why it's so mortifying to be caught wearing yesterday's fashion (or so I'm told). When the next cycle comes around, no one will immediately go back to wearing purple, because that would signal that they're unfashionable. But by 2015, that stigma will be gone and purple has a chance to come \"back in style\".

Bell describes a clever way the rich can avoid immediately being copied by the middle class. What is the greatest fear of the fashionista? To be confused with a person of a lower class. So the rich wear lower class clothes. The theory is that the middle class is terrified of wearing lower class clothes, but the rich are so obviously not lower class that they can get away with it. Bell wrote before the \"ghetto look\" went into style, but his theory explains quite well why wealthy teenagers and young adults would voluntarily copy the styles of the country's poorest underclass.

Bell also explained a second way to signal high-status: conspicuous outrage. Wear a shirt with the word \"FUCK\" on it in big letters (or, if you prefer, FCUK). This signals \"I am so high status that I think I can wear the word 'FUCK' in big letters on a t-shirt and get away with it.\" It's a pretty good signal. It signals that you don't give a...well...fcuk what anyone else thinks, and the only people who would be able, either economically or psychologically, to get away with that are the high status¹.

The absolute best real world example, which again I think Bell didn't live to see, is the bright pink shirt for men that says \"REAL MEN WEAR PINK\". The signal is that this guy is so confident in his masculinity that he can go around wearing a pink shirt. It's an odd case because it gets away with explaining exactly what signal it's projecting right on the shirt. And it only works because real men do not wear pink without a disclaimer explaining that they are only wearing pink to signal that they are real men.

Pinker notes the similarity to evolutionary strategies that signal fitness by handicapping. A peacock's tail is a way of signalling that its owner is so fit it can afford to have a big maladaptive tail on it and still survive, just as a rich guy in a backwards baseball cap is signalling that he is so rich he can afford to copy the lower class and still get invited to parties. The same process produces a body part of astounding beauty in the animal kingdom, and ghetto fashion in human society. I wonder if nature is laughing at us.

\n

Footnotes:

\n

1: Bell (or possibly Pinker, it's not clear) has a similar theory about art. Buying a hip \"modern art\" painting that's just a white canvas with a black line through it is supposed to signal \"I am so rich that I can afford to pay lots of money for a painting even if it is unpopular and hard to appreciate,\" or even \"I am so self-confident in my culturedness that I can endorse this art that is low quality by all previous standards, and people will continue to respect me and my judgments.\" Then the middle class starts buying white canvases with black lines through them, and rich people have to buy sculptures made of human dung just to keep up.

" } }, { "_id": "tz6cJZdhbuhMfSXc4", "title": "Rationality Quotes - August 2009", "pageUrl": "https://www.lesswrong.com/posts/tz6cJZdhbuhMfSXc4/rationality-quotes-august-2009", "postedAt": "2009-08-06T01:58:49.178Z", "baseScore": 9, "voteCount": 10, "commentCount": 122, "url": null, "contents": { "documentId": "tz6cJZdhbuhMfSXc4", "html": "

A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.

\n" } }, { "_id": "baETQBENSe6dEQoPM", "title": "Recommended reading: George Orwell on knowledge from authority", "pageUrl": "https://www.lesswrong.com/posts/baETQBENSe6dEQoPM/recommended-reading-george-orwell-on-knowledge-from", "postedAt": "2009-08-05T15:30:46.718Z", "baseScore": 9, "voteCount": 12, "commentCount": 22, "url": null, "contents": { "documentId": "baETQBENSe6dEQoPM", "html": "

This is an excerpt from an article George Orwell wrote in 1946. I will let the text speak for itself.

\n
\n

    Somewhere or other — I think it is in the preface to Saint Joan — Bernard Shaw remarks that we are more gullible and superstitious today than we were in the Middle Ages, and as an example of modern credulity he cites the widespread belief that the earth is round. The average man, says Shaw, can advance not a single reason for thinking that the earth is round. He merely swallows this theory because there is something about it that appeals to the twentieth-century mentality.

\n

    Now, Shaw is exaggerating, but there is something in what he says, and the question is worth following up, for the sake of the light it throws on modern knowledge. Just why do we believe that the earth is round? I am not speaking of the few thousand astronomers, geographers and so forth who could give ocular proof, or have a theoretical knowledge of the proof, but of the ordinary newspaper-reading citizen, such as you or me.

\n

    As for the Flat Earth theory, I believe I could refute it. If you stand by the seashore on a clear day, you can see the masts and funnels of invisible ships passing along the horizon. This phenomenon can only be explained by assuming that the earth's surface is curved. But it does not follow that the earth is spherical. Imagine another theory called the Oval Earth theory, which claims that the earth is shaped like an egg. What can I say against it?

\n

    Against the Oval Earth man, the first card I can play is the analogy of the sun and moon. The Oval Earth man promptly answers that I don't know, by my own observation, that those bodies are spherical. I only know that they are round, and they may perfectly well be flat discs. I have no answer to that one. Besides, he goes on, what reason have I for thinking that the earth must be the same shape as the sun and moon? I can't answer that one either.

\n

    My second card is the earth's shadow: When cast on the moon during eclipses, it appears to be the shadow of a round object. But how do I know, demands the Oval Earth man, that eclipses of the moon are caused by the shadow of the earth? The answer is that I don't know, but have taken this piece of information blindly from newspaper articles and science booklets.

\n

    Defeated in the minor exchanges, I now play my queen of trumps: the opinion of the experts. The Astronomer Royal, who ought to know, tells me that the earth is round. The Oval Earth man covers the queen with his king. Have I tested the Astronomer Royal's statement, and would I even know a way of testing it? Here I bring out my ace. Yes, I do know one test. The astronomers can foretell eclipses, and this suggests that their opinions about the solar system are pretty sound. I am, to my delight, justified in accepting their say-so about the shape of the earth.

\n

    If the Oval Earth man answers — what I believe is true — that the ancient Egyptians, who thought the sun goes round the earth, could also predict eclipses, then bang goes my ace. I have only one card left: navigation. People can sail ships round the world, and reach the places they aim at, by calculations which assume that the earth is spherical. I believe that finishes the Oval Earth man, though even then he may possibly have some kind of counter.

\n

    It will be seen that my reasons for thinking that the earth is round are rather precarious ones. Yet this is an exceptionally elementary piece of information. On most other questions I should have to fall back on the expert much earlier, and would be less able to test his pronouncements. And much the greater part of our knowledge is at this level. It does not rest on reasoning or on experiment, but on authority. And how can it be otherwise, when the range of knowledge is so vast that the expert himself is an ignoramus as soon as he strays away from his own specialty? Most people, if asked to prove that the earth is round, would not even bother to produce the rather weak arguments I have outlined above. They would start off by saying that \"everyone knows\" the earth to be round, and if pressed further, would become angry. In a way Shaw is right. This is a credulous age, and the burden of knowledge which we now have to carry is partly responsible.

\n
" } }, { "_id": "4vQ2zrB8KjJTi7w6p", "title": "A Normative Rule for Decision-Changing Metrics", "pageUrl": "https://www.lesswrong.com/posts/4vQ2zrB8KjJTi7w6p/a-normative-rule-for-decision-changing-metrics", "postedAt": "2009-08-05T05:07:05.158Z", "baseScore": 2, "voteCount": 4, "commentCount": 17, "url": null, "contents": { "documentId": "4vQ2zrB8KjJTi7w6p", "html": "

Yesterday I wrote about the difficulties of ethics and potential people. Namely, that whether you bring a person into existence or not changes the moral metric by which your decision is measured. At first all I had was the observation suggesting that the issue was complex, but no answer to the question \"Well then, what should we do?\" I will write now about an answer that came to me.

\r\n

All theories regarding potential people start by comparing outcomes to find which is most desirable, then moving towards it. However I believe I have shown that there are two metrics regarding such questions, and those metrics can disagree. What then do we do?

\r\n

We are always in a particular population ourselves, and so we can ask not which outcome is preferable, but if we should move from one situation to another. This allows us to consider the alternate metrics in series. For an initial name more attractive than \"my rule\" I will refer to the system as Deontological Consequentialism, or DC. I'm open to other suggestions.

\r\n

Step 1: Consider your action with the metric of new people not coming to be: that is, only the welfare of the people who will exist regardless of your decision.* I will assume in this discussion there are three possibilities: people receive higher utility, lower utility, or effectively unchanged utility. You might dispense with the third option; the results are similar.

\r\n

First, if you expect reduced utility for existing people from taking an action: do not take that action. This is regardless of how many new people might otherwise exist or how much utility they might have; if we never bring them into existence, we have wronged no one.

\r\n

This is the least intuitive aspect of this system, though it is also the most critical for avoiding the paradoxes of which I am aware. I think this unintuitive nature mostly stems from automatically considering future people as if they exist. I'd also note that our intuitions are really not used to dealing with this sort of question, but one more approachable example exists. If a couple strongly expects they will be unhappy having children, will derive little meaning from parenthood and few material benefits from their children and we believe them, if in short they really don't want kids, I suspect few will tell them they really ought to have children anyway as long as the children will be happy. I think few people consider how very happy the kids or grandkids might be either, even if the couple would be great parents; if the couple will be miserable we probably advocate they stay childless. Imagining such unhappy parents we also tend to imagine unhappy children, so some effort might be required to keep from ruining the example. See also Eliezer's discussion of the fallibility of intuition in a recent post.

\r\n

Second, if we expect effectively unchanged utility for existing people it is again perfectly acceptable to not create new people, as you wrong no one by doing so. But as you don't harm yourself either, it's fine for you if you create them, bringing us to

\r\n

Step 2a: Now we consider the future people. If they will have negative utility, i.e. roughly speaking they wish they hadn't been born, or for their sake we wish they hadn't been born, then we ought not to bring them into existence. We don't get anything and they suffer. If instead they experience entirely neutral lives (somehow), if they have no opinion on their creation, then it really doesn't matter if we create them or not.

\r\n

If they will experience positive lives, then it would be a good thing to have created them, as it was \"no skin off our backs anyway\". However I would theoretically hold it's still perfectly acceptable not to bring them into existence, since if we don't, they'll never mind that we didn't. But my neutrality towards bringing them into existence is such that I would even accept a rule I didn't agree with, saying that I ought to create the new people.

\r\n

Now, back in Step 1, there is the case where existing people will benefit by creating new people. Here we are forced to consider their wellbeing in

\r\n

Step 2b: Now if the new people have positive utility, or zero utility in totally neutral lives, then we should go ahead and bring them into existence, as we benefit and at least they don't suffer. However if they will have overall negative lives, then we should compare how much utility existing people gain and subtract the negative utility of the new people. You might dislike the idea of this inequality (I do as well) but this is a general issue with utilitarianism separate from potential people; here we’re considering them the same as existing people (as they then would be). If you've got a solution, such as weighting negative utility more heavily or just forcing in egalitarian considerations, apply it here.
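
To make the shape of the rule easier to see, here is a minimal Python sketch of the two steps. It is my own reading, not the author's formalism: the name dc_permits and the inputs delta_existing and new_people_utility are hypothetical, utilities are collapsed into single expected numbers, and the post's distinction between "permissible" and "obligatory" is flattened into a single yes/no answer.

```python
# A rough sketch of the DC decision rule described above (my own reading,
# not the author's formalism). Utilities are single expected numbers.
def dc_permits(delta_existing, new_people_utility):
    """Does DC permit the action that would create the new people?"""
    # Step 1: any expected harm to people who exist regardless rules the
    # action out, no matter how well off the new people would be.
    if delta_existing < 0:
        return False
    # Step 2a: existing people are unaffected -- creating is acceptable
    # unless the new lives would be negative on balance.
    if delta_existing == 0:
        return new_people_utility >= 0
    # Step 2b: existing people benefit -- create unless the new people's
    # negative utility outweighs that benefit.
    return new_people_utility >= 0 or delta_existing + new_people_utility > 0
```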

\r\n

 

\r\n

This concludes Deontological Consequentialism. I won't go over them all here, but this rule seems to avoid unattractive answers to all paradoxes I've seen debated. There is one that threw me for a loop, the largely intractable \"Mere Addition Paradox\". I'll describe it briefly here, mostly to show how DC (at first) seems to fall prey to it as well.

\r\n

A classic paradox is the Repugnant Conclusion, which takes total utilitarianism to suggest we should prefer a vast population with lives barely worth living to relatively few lives of very high utility. Using DC, if we are in the high utility population we note that we (existing people) would all experience drastically lower utility by bringing about the second situation, and so avoid it.

\r\n

In the Mere Addition Paradox, you start from the high utility situation. Then you consider, is it moral to increase the utility of existing people while bringing into being huge numbers of people with very low but positive utility? DC seems to suggest we ought to, as we benefit and they have positive lives as well. Now that we've done that, ought we to reduce our own utility if it would allow the low-utility people to have higher utility, such that the total is drastically increased but everyone still experiences utility not far above zero? Deontological Consequentialism is only about potential people, but here both average and total utilitarianism suggest we should do this, increasing both the total and average.

\r\n

And with this we find that by taking it in these steps we have arrived at the Repugnant Conclusion! For DC this is accomplished by \"slipping in\" the new people so that they become existing people, and our consideration of their wellbeing changes. The solution here is to take account of our own future actions: we see that by adding these new people at first, we will then seek to distribute our own utility to much increase theirs, and in the end we actually do experience reduced utility. That is, we see that by bringing them into existence we are in reality choosing the Repugnant Conclusion. Realizing this, we do not create them, and avoid the paradox. (In most conventional situations it seems more likely we can increase the new people's utility without such a decrease to our own.)

\r\n


*An interesting situation arises when we know new people will come to be regardless of our decision. I suggest here that we average the utility of all new people in each population, treat the situation with fewer total people as our \"existing population\", and apply DC from there. Unless people are interested in talking about this arguably unusual situation however, I won't go into more detail.

" } }, { "_id": "avyjAqcifkfaJcuSe", "title": "The Machine Learning Personality Test", "pageUrl": "https://www.lesswrong.com/posts/avyjAqcifkfaJcuSe/the-machine-learning-personality-test", "postedAt": "2009-08-04T23:36:37.364Z", "baseScore": 31, "voteCount": 30, "commentCount": 34, "url": null, "contents": { "documentId": "avyjAqcifkfaJcuSe", "html": "

You've probably heard of the Briggs-Myers personality test, which is a classification system of 16 different personality types based on the writings of Carl Jung, a man who believed that his library books sometimes spontaneously exploded.  Its main advantage is that it manages to classify people without insulting them.  (This is accomplished by confounding dimensions:  Instead of measuring one property of personality along one dimension, which leads to some scores being considered better than others, you subtract a measurement along one desirable property of personality from a measurement along another desirable property of personality, and call the result one dimension.)

\n

You've probably also heard of the MMPI, a test designed by giving lots of questions to mental patients and seeing which ones were answered differently by people with particular diagnoses.  This is more like personality clustering for fault diagnosis than a personality test.  You may find it useful if you're crazy.  (One of the criticisms of this test is that religious people often test as psychotic: \"Do you sometimes think someone else is directing your actions?  Is someone else trying to plan events in your life?\"  Is that a bug, or a feature?)

\n

You may have heard of the Personality Assessment Inventory, a test devised by listing things that psychotherapists thought were important, and trying to come up with questions to test them.

\n

The Big 5 personality test is constructed in a well-motivated way, using factor analysis to try to discover from the data what the true dimensions of personality are.

\n

But these all work from the top down, looking at human behavior (answers), and trying to uncover latent factors farther down.  I'm instead going to propose a personality system that starts from the very bottom of your hardware and leaves it to you to work your way up to the variables of interest:  the Machine Learning Personality Test (\"MLPT\").

\n

Other personality tests try to measure things that people want to measure, but that might not be psychologically real.  The MLPT is just the opposite:  It tries to measure things that are probably psychologically real, but are at such a low level that people aren't interested in them.  Your mission, should you choose to accept it, is to figure out the connection between the dimensions of the MLPT, and personality traits that you understand and care about.

\n

LW readers are familiar with thinking of people as optimizers.  Take that idea, and make 3 assumptions:

\n
  1. People optimize using something like existing algorithms for machine learning.
  2. A person learns parameters for their learning algorithms according to the data they are exposed to.
  3. These parameters generalize across tasks.
\n

Assumption 1 is critical for the MLPT to make any sense.  What it does is to classify people according to the parameter settings they use when learning and optimizing.  I mostly use parameters from classification / categorization algorithms.

\n

Assumption 2 is important if you wish to change yourself.  This is the great advantage of the MLPT:  It not only tells you your personality type, but also how to change your personality type.  Simply expose yourself to data such that the MLPT type you desire is more effective than yours at learning that data.

\n

Assumption 3 is something I have no evidence for at all, and may be wholly false.

\n

Here are the dimensions I've thought of.  Can you name others worth adding?

\n

Learning rate:  This is a parameter used to say how much you change your weights in response to new information.  I was going to say, \"how much you change your beliefs\", but that would be misleading, because we're talking about a much finer level of detail.  In a neural network model, the learning parameter determines how much you change the weight on a connection between 2 neurons each time you want to change the degree to which one neuron's output affects the other neuron's input.
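
Here is a minimal sketch of what that looks like for a single weight; the helper name update_weight and all the numbers are arbitrary illustrations, not part of any proposed test.

```python
# A minimal sketch of a learning-rate parameter acting on one weight.
# Both 'learners' see the same error signal; only the rate differs.
def update_weight(w, x, error, learning_rate):
    """One delta-rule step: move the weight to reduce the error,
    scaled by the learning rate."""
    return w + learning_rate * error * x

w_fast, w_slow, target_w = 0.0, 0.0, 0.8
for _ in range(10):
    x = 1.0                                   # constant input, for simplicity
    w_fast = update_weight(w_fast, x, (target_w - w_fast) * x, learning_rate=0.5)
    w_slow = update_weight(w_slow, x, (target_w - w_slow) * x, learning_rate=0.05)
print(round(w_fast, 3), round(w_slow, 3))     # ~0.799 vs ~0.321 after 10 steps
```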

\n

People with a high learning rate learn fast and easily, and may be great at memorizing facts.  But when it comes to problems where you have a lot of data and are trying to get extremely high performance, they are not able to get as good an optimum.  This suggests that expert violinists or baseball players tend to have poor memory and be categorized as slow learners.  (Although I'm skeptical that learning rate on motor tasks would generalize to learning rate on history exams.)

\n

Regularization rate:  This parameter says how strongly to bias your outcome back towards your priors.  If your regularization rate isn't high enough, the parameters you learn may drift to absurdly large values.  In some cases, this will cause the entire network to become unstable, at which point learning ceases and you need to be rebooted.

\n

In most ways, regularization is opposed to learning.  Increasing the regularization rate without changing the learning rate effectively decreases the learning rate.
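
A minimal sketch of that interaction, with made-up numbers and a hypothetical helper name: the decay term keeps tugging the weight back toward the prior, so the learner never quite reaches what the data keeps telling it.

```python
# A regularization rate pulls a weight back toward its prior (zero here)
# on every update, so the equilibrium falls short of the data's target.
def regularized_update(w, gradient, learning_rate, reg_rate, prior=0.0):
    """Gradient step plus a decay term toward the prior."""
    return w - learning_rate * gradient - reg_rate * (w - prior)

w = 0.0
for _ in range(100):
    grad = -(1.0 - w)                 # error signal saying 'w should be 1'
    w = regularized_update(w, grad, learning_rate=0.1, reg_rate=0.05)
print(round(w, 3))                    # ~0.667, i.e. 0.1 / (0.1 + 0.05), not 1.0
```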

\n

People with a high regularization rate might be less prone to mental illness, but not very creative.  People with a low regularization rate will get some of the advantages of a high learning rate, without the disadvantages.

\n

Exploration/Exploitation setting:  High exploration means you try out new solutions and new things often.  High exploitation means you don't.  High exploitation is conceptually a lot like high regularization.
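
One standard way to make this setting concrete (my own sketch, not something proposed in the post) is the epsilon in an epsilon-greedy bandit: a high-epsilon agent keeps sampling both options, a low-epsilon agent mostly repeats its current favorite. The function run_bandit and its payoffs are arbitrary.

```python
# Epsilon-greedy two-armed bandit as an illustration of exploration vs exploitation.
import random

def run_bandit(epsilon, pulls=1000, payoffs=(0.3, 0.7), seed=0):
    rng = random.Random(seed)
    estimates, counts, total = [0.0, 0.0], [0, 0], 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:                       # explore: try either arm
            arm = rng.randrange(2)
        else:                                            # exploit: current favorite
            arm = 0 if estimates[0] >= estimates[1] else 1
        reward = 1.0 if rng.random() < payoffs[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / pulls

print(run_bandit(epsilon=0.5))    # heavy explorer
print(run_bandit(epsilon=0.05))   # heavy exploiter
```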

\n

Number of dimensions to classify on:  When you're learning how to categorize something, how many dimensions do you use?  An astonishing percentage of what we do is based on single-dimension discriminations.  Some people use only a single dimension even for important and highly complex discrimination tasks, such as choosing a new president, or deciding on the morality of an action.

\n

Using a small number of dimensions results in a high error rate (where \"error\", since I'm not assuming category labels exist out in the world, is going to mean your error in predicting outcomes based on your categorizations).  Using a large number of dimensions results in slow learning and slow thinking, construction of categories no one else understands, stress when faced with complex situations, and errors from overgeneralizing and from perceiving patterns where there are none, because you don't have enough data to learn whether a distinction in outcome is really due to a difference along one of your dimensions, or just chance.
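
A small synthetic experiment (mine, with arbitrary numbers) shows both failure modes at once: with only 30 examples, a learner attending to one dimension misses real structure, while one juggling all 25 fits noise it doesn't have the data to rule out.

```python
# Too few vs too many dimensions, on synthetic data where only 3 matter.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features = 30, 2000, 25
true_w = np.zeros(n_features)
true_w[:3] = [2.0, -1.0, 0.5]                 # only three dimensions matter

def make_data(n):
    X = rng.normal(size=(n, n_features))
    return X, X @ true_w + rng.normal(size=n)

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

for k in (1, 3, 10, 25):                      # dimensions the learner attends to
    w_hat, *_ = np.linalg.lstsq(X_tr[:, :k], y_tr, rcond=None)
    mse = np.mean((X_te[:, :k] @ w_hat - y_te) ** 2)
    print(f'dimensions used: {k:2d}   test error: {mse:.2f}')
```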

\n

People using too few dimensions will be, well, stupid.  They will be incapable of learning many things no matter how much data they're exposed to.  But they can make decisions under pressure and with confidence.  They may make good managers.  People using too many dimensions will take too long to make decisions, wanting too much data.  This dimension may correspond closely to \"intelligence\", of the kind that scores well on IQ tests.

\n

People using different dimensions and different numbers of dimensions will have a very hard time understanding each other.

\n

It may be worth breaking this separately into number of input dimensions and number of output dimensions.  But I kinda doubt it.  (I guess I'm just a low-dimensional kinda guy.)

\n

Binary / Discrete / Continuous thinking:  Do you categorize your inputs before thinking about them, or try to juggle all their values and do regression in your head?  Are you trying to put things in bins, or place them along a continuum?

\n

This probably has the same implications as number of input and output dimensions.

\n

Degree of independence/correlation assumed to exist between dimensions:  If the things you are categorizing have measurements along different dimensions that are independent of one another, categorization becomes much easier, and you can handle many more dimensions.
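
A minimal sketch of why the assumption helps (my own naive-Bayes-style illustration; the class parameters are made up): under independence the joint likelihood factorizes, so a category over many dimensions can be modelled one dimension at a time.

```python
# Scoring a point under two classes that each treat every dimension as an
# independent Gaussian: the joint log-likelihood is just a sum of 1-D terms.
import numpy as np

def naive_log_likelihood(x, means, sds):
    """Log-likelihood of x assuming independent Gaussian dimensions."""
    return float(np.sum(-0.5 * ((x - means) / sds) ** 2 - np.log(sds)))

classes = {
    'A': (np.zeros(5), np.ones(5)),           # centered at 0 on every dimension
    'B': (np.full(5, 1.0), np.ones(5)),       # centered at 1 on every dimension
}
x = np.array([0.9, 1.1, 0.8, 1.2, 1.0])
scores = {name: naive_log_likelihood(x, m, s) for name, (m, s) in classes.items()}
print(max(scores, key=scores.get))            # 'B': closer to B along each dimension
```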

\n

People assuming high independence might make good scientists, as science has so far been the art of finding dimensions in the real world that are independent and using them for analysis.  People assuming high correlations might be better at art, and at perceiving holistic patterns.  They might tend to give credence towards New-Age notions that everything is interconnected.

\n

Degree of linearity/nonlinearity assumed:  Assuming linearity has similar advantages and disadvantages as assuming independence, and assuming nonlinearity has similar effects to assuming correlations.  (They are not the same; sometimes the real world presents linearity with correlations, or independence with nonlinearity.  I just can't think of anything different to say about them personality-wise.)

\n

 

\n

I'm going to merge independence/correlation and linearity/nonlinearity, because I don't have anything useful to say to distinguish them.  I'm going to merge regularization rate and exploration/exploitation for similar reasons; those two are a lot like each other anyway.  I'm going to ignore binary/discrete/continuous, because I didn't think of it until after writing the personality types below and I'm too lazy to redo them.  It's a lot like number of dimensions anyway.

\n

Now we need to find cute acronyms for our resulting personality types.  For this, we will organize our dimensions so that the first and last dimensions are specified with vowels, and the second and third by consonants.  (Changing the fourth letter to a vowel and thus providing catchier names is, I think, the main advantage of this test over the Myers-Briggs.)

\n\n

Now you may be eager to take the MLPT and find your results!

\n

Sadly, it does not exist.  As I said, I'm just proposing it.

\n

But we can at least write fun, horoscope-like personality summaries!  (NOTE: These may not be as accurate as an actual horoscope.)

\n" } }, { "_id": "YYoK5H4QFTdt4bM5C", "title": "She Blinded Me With Science", "pageUrl": "https://www.lesswrong.com/posts/YYoK5H4QFTdt4bM5C/she-blinded-me-with-science", "postedAt": "2009-08-04T19:10:49.712Z", "baseScore": 17, "voteCount": 14, "commentCount": 38, "url": null, "contents": { "documentId": "YYoK5H4QFTdt4bM5C", "html": "

Scrutinize claims of scientific fact in support of opinion journalism.

\n

Even with honest intent, it's difficult to apply science correctly, and it's rare that dishonest uses are punished. Citing a scientific result gives an easy patina of authority, which is rarely scratched by a casual reader. Without actually lying, the arguer may select from dozens of studies only the few with the strongest effect in their favor, when the overall body of evidence may point at no effect or even in the opposite direction. The reader only sees \"statistically significant evidence for X\". In some fields, the majority of published studies claim unjustified significance in order to gain publication, inciting these abuses.

\n

Here are two recent examples:

\n
\n

Women are often better communicators because their brains are more networked for language. The majority of women are better at \"mind-reading\" than most men; they can read the emotions written on people's faces more quickly and easily, a talent jump-started by the vast swaths of neural real estate dedicated to processing emotions in the female brain.

\n
\n

- Susan Pinker, a psychologist, in NYT's \"Do Women Make Better Bosses?\"

\n
Twin studies and adoptive studies show that the overwhelming determinant of your weight is not your willpower; it's your genes. The heritability of weight is between .75 and .85. The heritability of height is between .9 and .95. And the older you are, the more heritable weight is.
\n

- Megan McArdle, linked from the LW article The Obesity Myth

\n

\n
\n

Mike, a biologist, gives an exasperated explanation of what heritability actually means:

\n
Quantitative geneticists use [heritability] to calculate the changes to be expected from artificial or natural selection in a statistically steady environment. It says nothing about how much the over-all level of the trait is under genetic control, and it says nothing about how much the trait can change under environmental interventions.
\n
\n

Susan Pinker's female-boss-brain cheerleading is refuted by Gabriel Arana. A specific scientific claim Pinker makes (\"the thicker corpus callosum connecting women's two hemispheres provides a swifter superhighway for processing social messages\") is contradicted by a meta-analysis (Sex Differences in the Human Corpus Callosum: Myth or Reality?), and without that, you have only a just-so evolutionary psychology argument.

\n

The Bishop and Wahlsten meta-analysis claims that the only consistent finding is for slightly larger average whole brain size and a very slightly larger corpus callosum in adult males. Here are some highlights:

\n
Given that the CC interconnects so many functionally different regions of cerebral cortex, there is no reason to believe that a small difference in overall CC size will pertain to any specific psychological construct. Total absence of the corpus callosum tends to be associated with a ten-point or greater reduction in full-scale IQ, but more specific functional differences from IQ-matched controls are difficult to identify.
\n
In one recent study, a modest correlation between cerebrum size and IQ within a sex was detected. At the same time, males and females differ substantially in brain size but not IQ. There could easily be some third factor or array of processes that acts to increase both brain size and IQ score for people of the same sex, even though brain size per se does not mediate the effect of the other factor on IQ.
\n
The journal Science has refused to publish failures to replicate the 1982 claims of de Lacoste-Utamsing and Holloway (Byne, personal communication).
\n

Obviously, if journals won't publish negative results, then this weakens the effective statistical significance of the positive results we do read. The authors don't find this to be significant for the topic (the above complaint isn't typical).

\n
When many small-scale studies of small effects are published, the chances are good that a few will report a statistically significant sex difference. ... One of our local newspapers has indeed printed claims promulgated over wire services about new studies finding a sex difference in the corpus callosum but has yet to print a word about contrary findings which, as we have shown, far outnumber the statistically significant differences.
\n

This effect is especially notable in media coverage of health and diet research.

\n
The gold-standard in the medical literature is a cumulative meta-analysis conducted using the raw data. We urge investigators to make their raw data or, better yet, the actual tracings available for cumulative meta-analysis. We attempted to collect the raw data from studies of sex differences in the CC cited in an earlier version of this paper by writing to the authors. The level of response was astoundingly poor. In several studies that used MRI, the authors even stated that the original observations were no longer available.
\n

This is disturbing. I suspect that many authors are hesitant to subject themselves to the sort of scrutiny they ought to welcome.

\n
By convention, we are taught that the null hypothesis of no sex difference should be rejected if the probability of erroneously rejecting the null on the basis of a set of data is 5% or less. If 10 independent measures are analysed in one study, each with the α = 0.05 criterion, the probability of finding at least one ‘significant’ sex difference by chance alone is 1 − (1 − 0.05)¹⁰ = 0.40 or 40%. Consequently, when J tests involving the same object, e.g. the corpus callosum, are done in one study, the criterion for significance of each test might better be adjusted to α/J, the Dunn or Bonferroni criterion that is described in many textbooks. All but two of 49 studies of the CC adopted α = 0.05 or even 0.10, and for 45 of these studies, an average of 10.2 measures were assessed with independent tests.
\n

This is either rank incompetence, or even worse, the temptation to get some positive result out of the costly data collection.
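
For concreteness, the arithmetic in the quoted passage can be checked in a couple of lines (a sketch of my own, using the quoted numbers):

```python
# Family-wise error rate for 10 independent tests at alpha = 0.05, and the
# Bonferroni-corrected per-test threshold that keeps the family-wise rate at 5%.
alpha, J = 0.05, 10

family_wise_rate = 1 - (1 - alpha) ** J       # chance of >= 1 spurious 'finding'
bonferroni_alpha = alpha / J                  # corrected per-test criterion

print(f'P(at least one false positive in {J} tests) = {family_wise_rate:.2f}')  # 0.40
print(f'Bonferroni per-test criterion = {bonferroni_alpha:.3f}')                # 0.005
```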

" } }, { "_id": "ZrjiQWBGS45iexNZo", "title": "The usefulness of correlations", "pageUrl": "https://www.lesswrong.com/posts/ZrjiQWBGS45iexNZo/the-usefulness-of-correlations", "postedAt": "2009-08-04T19:00:44.114Z", "baseScore": 20, "voteCount": 24, "commentCount": 59, "url": null, "contents": { "documentId": "ZrjiQWBGS45iexNZo", "html": "

I sometimes wonder just how useful probability and statistics are. There is the theoretical argument that Bayesian probability is the fundamental method of correct reasoning, and that logical reasoning is just the limit as p=0 or 1 (although that never seems to be applied at the meta-level: what is the probability that Bayes' Theorem is true?), but today I want to consider the practice.

\n

Casinos, lotteries, and quantum mechanics: no problem. The information required for deterministic measurement is simply not available, by adversarial design in the first two cases, and by we know not what in the third. Insurance: by definition, this only works when it's impossible to predict the catastrophes insured against. No-one will offer insurance against a risk that will happen, and no-one will buy it for a risk that won't. Randomised controlled trials are the gold standard of medical testing; but over on OB Robin Hanson points out from time to time that the marginal dollar of medical spending has little effectiveness. And we don't actually know how a lot of treatments work. Quality control: test a random sample from your production run and judge the whole batch from the results. Fine -- it may be too expensive to test every widget, or impossible if the test is destructive. But wherever someone is doing statistical quality control of how accurately you're filling jam jars with the weight of jam it says on the label, someone else will be thinking about how to weigh every single one, and how to make the filling process more accurate. (And someone else will be trying to get the labelling regulations amended to let you sell the occasional 15-ounce pound of jam.)

\n

But when you can make real measurements, that's the way to go. Here is a technical illustration.

\n

Prof. Sagredo has assigned a problem to his two students Simplicio and Salviati: \"X is difficult to measure accurately. Predict it in some other way.\"

\n

Simplicio collects some experimental data consisting of a great many pairs (X,Y) and with high confidence finds a correlation of 0.6 between X and Y. So given the value y of Y, his best prediction for the value of X is 0.6y. [Edit: that formula is mistaken. The regression line for Y against X is Y = bcX/a, assuming the means have been normalised to zero, where a and b are the standard deviations of X and Y respectively. For the Y=X+D1 model below, bc/a is equal to 1.]

\n

Salviati instead tries to measure X, and finds a variable Z which is experimentally found to have a good chance of lying close to X. Let us suppose that the standard deviation of Z-X is 10% that of X.

\n

How do these two approaches compare?

\n

A correlation of 0.6 is generally considered pretty high in psychology and social science, especially if it's established with p=0.001 to be above, say, 0.5. So Simplicio is quite pleased with himself.

\n

A measurement whose range of error is 10% of the range of the thing measured is about as bad as it could be and still be called a measurement. (One might argue that any sort of entanglement whatever is a measurement, but one would be wrong.) It's a rubber tape measure. By that standard, Salviati is doing rather badly.

\n

In effect, Simplicio is trying to predict someone's weight from their height, while Salviati is putting them on a (rather poor) weighing machine (and both, presumably, are putting their subjects on a very expensive and accurate weighing machine to obtain their true weights).

\n

So we are comparing a good correlation with a bad measurement. How do they stack up? Let us suppose that the underlying reality is that Y = X + D1 and Z = X + D2, where X, D1, and D2 are normally distributed and uncorrelated (and causally unrelated, which is a stronger condition). I'm choosing the normal distribution because it's easy to calculate exact numbers, but I don't believe the conclusions would be substantially different for other distributions.

\n

For convenience, assume the variables are normalised to all have mean zero, and let X, D1, and D2 have standard deviations 1, d1, and d2 respectively.

\n

Z-X is D2, so d2 = 0.1. The correlation between Z and X is c(X,Z) = cov(X,Z)/(sd(X)sd(Z)) = 1/sqrt(1+d2²) = 0.995.

\n

The correlation between X and Y is c(X,Y) = 1/sqrt(1+d1²) = 0.6, so d1 = 1.333.
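
(A quick Monte Carlo check, my own sketch rather than part of the original argument, reproduces both correlations from the stated standard deviations:)

```python
# Simulate Y = X + D1 and Z = X + D2 with sd(X) = 1, sd(D1) = 1.333, sd(D2) = 0.1.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(0.0, 1.0, n)
Y = X + rng.normal(0.0, 1.333, n)             # Simplicio's correlated variable
Z = X + rng.normal(0.0, 0.1, n)               # Salviati's rough measurement

print(round(np.corrcoef(X, Y)[0, 1], 3))      # ~0.6
print(round(np.corrcoef(X, Z)[0, 1], 3))      # ~0.995
```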

\n

We immediately see something suspicious here. Even a terrible measurement yields a sky-high correlation. Or put the other way round, if you're bothering to measure correlations, your data are rubbish. Even this \"good\" correlation gives a signal to noise ratio of less than 1. But let us proceed to calculate the mutual informations. How much do Y and Z tell you about X, separately or together?

\n

For the bivariate normal distribution, the mutual information between variables A and B with correlation c is lg(I), where lg is the binary logarithm and I = sd(A)/sd(A|B). (The denominator here -- the standard deviation of A conditional on the value of B -- happens to be independent of the particular value of B for this distribution.) This works out to 1/sqrt(1-c²). So the mutual information is -lg(sqrt(1-c²)).

            corr.    mut. inf.
Simplicio   0.6      0.3219
Salviati    0.995    3.3291
\n

What can you do with one third of a bit? If Simplicio tries to predict just the sign of X from the sign of Y, he will be right only 70% of the time (i.e. cos⁻¹(-c(X,Y))/π). Salviati will be right 96.8% of the time. Salviati's estimate will even be in the right decile 89% of the time, while on that task Simplicio can hardly do better than chance. So even a good correlation is useless as a measurement.

\n

Simplicio and Salviati show their results to Prof. Sagredo. Simplicio can't figure out how Salviati did so much better without taking measurements on thousands of samples. Salviati seemed to just think about the problem and come up with a contraption out of nowhere that did the job, without doing a single statistical test. \"But at least,\" says Simplicio, \"you can't throw away my 0.3219, it all adds up!\" Sagredo points out that it literally does not add up. The information gained about X from Y and Z together is not 0.3219+3.3291 = 3.6510 bits. The correct result is found from the standard deviation of X conditional on both Y and Z, which is sqrt(1/(1 + 1/d1² + 1/d2²)). The information gained is then lg(sqrt(1 + 1/d1² + 1/d2²)) = 0.5*lg(101.5625) = 3.3331. The extra information over knowing just Z is only 0.0040 = 1/250 of a bit, because nearly all of Simplicio's information is already included in Salviati's.
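
For readers who want to reproduce the figures, here is the arithmetic above collected in one place (a sketch under the post's normality assumptions; the variable names are mine):

```python
# The closed-form quantities behind the table and the last two paragraphs.
from math import log2, sqrt, acos, pi

d1, d2 = 1.333, 0.1
c_xy = 1 / sqrt(1 + d1**2)                     # ~0.6
c_xz = 1 / sqrt(1 + d2**2)                     # ~0.995

def mutual_info(c):
    # bits shared with X by a variable correlated c with it (bivariate normal)
    return -log2(sqrt(1 - c**2))

def sign_hit_rate(c):
    # probability of guessing the sign of X right from the sign of the other variable
    return acos(-c) / pi

print(mutual_info(c_xy), mutual_info(c_xz))      # ~0.3219 and ~3.3291 bits
print(0.5 * log2(1 + 1/d1**2 + 1/d2**2))         # ~3.3331 bits from Y and Z together
print(sign_hit_rate(c_xy), sign_hit_rate(c_xz))  # ~0.70 vs ~0.97
```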

\n

Sagredo tells Simplicio to go away and come up with some real data.

" } }, { "_id": "C7KXC2eRZxmsNf2Mr", "title": "Wits and Wagers", "pageUrl": "https://www.lesswrong.com/posts/C7KXC2eRZxmsNf2Mr/wits-and-wagers", "postedAt": "2009-08-04T16:39:29.310Z", "baseScore": 6, "voteCount": 4, "commentCount": 7, "url": null, "contents": { "documentId": "C7KXC2eRZxmsNf2Mr", "html": "

Wits and Wagers is apparently a board game, where players compete to be well-calibrated with respect to their trivia knowledge. I haven't played it.

\n

Has someone else here played it? If so, what was your experience? Would it be good rationalist/bayesian training?

\n

 

" } }, { "_id": "qBEtr2iaGEtLFyLvu", "title": "The Difficulties of Potential People and Decision Making", "pageUrl": "https://www.lesswrong.com/posts/qBEtr2iaGEtLFyLvu/the-difficulties-of-potential-people-and-decision-making", "postedAt": "2009-08-04T06:14:48.531Z", "baseScore": 7, "voteCount": 11, "commentCount": 38, "url": null, "contents": { "documentId": "qBEtr2iaGEtLFyLvu", "html": "

In connection to existential risk and the utility of bringing future people into being as compared with the utility of protecting those currently alive, I’ve been looking into the issues and paradoxes present in the ethics of potential persons. This has led to an observation that I can find no record of anyone else making, which may help explain why those issues and paradoxes arise. For some time all I had was the observation, but a few days ago an actual prescriptive rule came together. This got long however so for the sake of readers I’ll make a post about the normative rule later. 

\r\n

A dichotomy in utilitarianism exists between total utilitarianism and average utilitarianism, one suggesting that the greatest good comes from the highest total sum of utility and the other suggesting the greatest good comes from the highest utility per capita. These can come to a head when discussing potential persons, as the total view holds we are obligated to bring new people into existence if they will have worthwhile lives and won’t detract from others’ wellbeing, while the average view suggests that it is perfectly acceptable not to.

\r\n

Both the total and average utilitarian views have surprising implications. Default total utilitarianism leads to what Derek Parfit and others call “The Repugnant Conclusion”: For any population in which people enjoy very high welfare there is an outcome in which [a much larger group of] people enjoy very low welfare which is preferable, all other things being equal. On the other hand average utilitarianism suggests that in a population of individuals possessed of very high utility it would be unethical to bring another person into being if they enjoyed positive but less than average utility. There are some attempts to resolve these oddities which are not explained here. From my reading I came across few professional philosophers or ethicists fully satisfied with [any such attempt]( http://plato.stanford.edu/entries/repugnant-conclusion/#EigWayDeaRepCon) (without rejecting one of the views of utilitarianism).

\r\n

To explain my observation I will make the assumptions that an ethical decision should be measured with reference to the people or beings it affects, and that actions do not affect nonexistent entities (assumptions which seem relatively widespread and I hope are considered reasonable). Assuming a negligible discount rate, if a decision affects our neighbors now or our descendants a thousand years hence we should include its effect upon them when deciding whether to take that action. It is when we consider actions that bring people into existence that the difficulty presents itself. If we choose to bring into existence a population possessed of positive welfare, we should consider our effect upon that then-existing population (a positive experience). If we choose not to bring into existence that population, we should judge this action only with regards to how it affects the people existing in that world, which does not include the unrealized people (assuming that we can even refer to an unrealized person).  Under these assumptions we can observe that the metric by which our decision is measured changes with relation to the decision we make!

\r\n

By analogy assume you are considering organizing a local swim meet in which you also plan to compete, and at which there will be a panel of judges to score diving. Will you receive a higher score from the panel of judges if you call together the swim meet than if you do not? (To work as an analogy this requires that one considers “the panel” to only exist when serving as the panel, and not being merely the group of judges.)

\r\n

Without making this observation that the decision changes the metric by which the decision is measured, one will try to apply a single metric to both outcomes and find themselves in surprising implications and confusing statements.  In his paper “The Person Affecting Restriction, Comparativism, and the Moral Status of Potential People”, (http://people.su.se/~guarr/) Gustaf Arrhenius quotes John Broome as saying:

\r\n

“…[I]t cannot ever be true that it is better for a person that she lives than that she should never have lived at all. If it were better for a person that she lives than that she should never have lived at all, then if she had never lived at all, that would have been worse for her than if she had lived. But if she had never lived at all, there would have been no her for it to be worse for, so it could not have been worse for her.” (My apologies for not yet having time to read Broome’s work itself, I spend all my time attempting to prevent existential disaster and other activities seemed more pressing. Not reading Broome’s work may well be a fault I should correct, but it wasn’t sacrificed in order to watch another episode of Weeds.)

\r\n

The error here is that Broome passes over to another metric without seeming to notice. From the situation where she lives and enjoys life, it would be worse for her to have never lived. That is, now that she can consider anything, she can consider a world in which she does not exist as less preferable. In the situation in which she never lived and can consider nothing, she cannot consider it worse that she never lived. When we change from considering one situation to the other, our metric changes along with the situation.

\r\n

Likewise Arrhenius fails to make this observation, and approaches the situation with the strategy of comparing uniquely realizable people (who would be brought into existence by our actions) and non-uniquely realizable people. In two different populations with subpopulations that only exist in one population or the other, he correctly points out the difficulty of comparing the wellbeing of those subpopulations between the two situations. However he then goes on to say that we cannot make any comparison in their wellbeing between the situations. A subtle point, but the difficulty lies not in there being no comparison of their wellbeing, but in there being too many comparisons of their wellbeing, the 2 conflicting comparisons depending on whether they do or do not come to exist.

\r\n

As long as the populations are a fixed, unchangeable size and our metric constant, both the total utilitarian view and the average utilitarian view are in agreement: maximizing the average and maximizing the total become one and the same. In this situation we may not even find reason to distinguish the two views. However in regards to the difficulty of potential persons and changing metrics, both views strive to apply a constant metric to both situations; total utilitarianism uses the metric of the situation in which new people are realized, and average utilitarianism is perhaps interpretable as using the metric in which  the new people are not realized.

\r\n

The seeming popularity of the total utilitarian view in regards to potential persons might be due to the fact that an application of that view increases utility by its own metric (happy realized people are happy they were realized), while an application of the metric of the situation in which people are unrealized creates no change in utility (unrealized people are neither happy nor unhappy [nor even neutral!] about not being realized). This gives the appearance of suggesting we espouse total utilitarianism, since in a comparison between increased utility and effectively unchanged utility the increase seems preferable; but I am not convinced such a meta-comparison actually avoids applying one metric to both situations. Again, if we bring people of positive welfare into the world it is a preferable thing to have done so, but if we do not bring them into the world it causes no harm whatsoever to not have done so. My personal beliefs do not support the idea of unrealized people being unhappy about being unrealized, though we might note in the unrealized-people situation a decreased utility experienced by total utilitarians unhappy with the outcome.

\r\n

I suggest that we apply the metric of whichever situation comes to be. One oddity of this is the seeming implication that once you’ve killed someone they no longer exist or care, and thus your action is not unethical. If we take a preference utilitarian view and also assume that you are alive at the time you are murdered, we can resolve this by pointing out that the act of murder frustrates your preferences and can be considered unethical, and that it is impossible to kill someone when they are already dead and have no preferences. In contrast if we choose to not realize a potential person, at no point did they develop preferences that we frustrated.

\r\n

Regardless, merely valuing the situation from the metric of the situation that comes to be tells us nothing about which situation we ought to bring about. As I mentioned previously I now have an idea for a potential rule, but that will follow in a separate post.

\r\n

(A second though distinct argument for the difficulty or impossibility of making a fully sensible prescription in the case of future persons is present in Narveson, J. “Utilitarianism and New Generations.” Mind 78 (1967):62-72, if you can manage to track it down. I had to get it from my campus library.)

\r\n

(ETA: I've now posted my suggestion for a normative rule.)

" } }, { "_id": "QjPevgYZrqWKbJCkL", "title": "Unspeakable Morality", "pageUrl": "https://www.lesswrong.com/posts/QjPevgYZrqWKbJCkL/unspeakable-morality", "postedAt": "2009-08-04T05:57:39.009Z", "baseScore": 33, "voteCount": 33, "commentCount": 118, "url": null, "contents": { "documentId": "QjPevgYZrqWKbJCkL", "html": "

It is a general and primary principle of rationality, that we should not believe that which there is insufficient reason to believe; likewise, a principle of social morality that we should not enforce upon our fellows a law which there is insufficient justification to enforce.

\n

Nonetheless, I've always felt a bit nervous about demanding that people be able to explain things in words, because, while I happen to be pretty good at that, most people aren't.

\n
\n

\"I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.\" —Danielle Egan (journalist)

\n
\n

This experience permanently traumatized Ms. Egan, by the way.  Because years later, at a WTA conference, one of the speakers said that something was true, and Ms. Egan said \"What do you mean, 'true'?\", and the speaker gave some incorrect answer or other; and afterward I quickly walked over to Ms. Egan and explained the correspondence theory of truth:  \"The sentence 'snow is white' is true if and only if snow is white\"; if you're using a bucket of pebbles to count sheep then an empty bucket is true if and only if the pastures are empty.  I don't know if this cured her; I suspect that it didn't.  But up until that point, at any rate, it seems Ms. Egan had been so traumatized by this childhood experience that she believed there was no such thing as truth - that because her teacher had demanded a definition in words, and she hadn't been able to give a good definition in words, that no good definition existed.

\n

Of which I usually say:  \"There was a time when no one could define gravity in exquisitely rigorous detail, but if you walked off a cliff, you would fall.\"

\n

On the other hand - it is a general and primary principle of rationality that when you have no justification, it is very important that there be some way of saying \"Oops\", losing hope, and just giving up already.  (I really should post, at some point, on how the ability to just give up already is one of the primary distinguishing abilities of a rationalist.)  So, really, if you find yourself totally unable to justify something in words, one possibility is that there is no justification.  To ignore this and just casually stroll along, would not be a good thing.

\n

And with moral questions, this problem is doubled and squared.  For any given person, the meaning of \"right\" is a huge complicated function, not explicitly believed so much as implicitly embodied.  And if we keep asking \"Why?\", at some point we end up replying \"Because that is just what the term 'right', means; there is no pure essence of rightness that you can abstract away from the specific content of your values.\"

\n

But if you were allowed to answer this in response to any demand for justification, and have the other bow and walk away - well, you would no longer be computing what we know as morality, where 'right' does mean some things and not others.

\n

Not to mention that in questions of public policy, it ought to require some overlap in values to make a law.  I do think that human values often overlap enough that different people can legitimately use the same word 'right' to refer to that-which-they-compute.  But if someone wants a legal ban on pepperoni pizza because it's inherently wrong, then I may feel impelled to ask, \"Why do you think this is part of the overlap in our values?\"

\n

Demands for moral justification have their Charybdis and their Scylla:

\n

The traditionally given Charybdis is letting someone say that interracial marriage should be legally banned because it \"feels icky\" to them.  We could call this \"the unwisdom of repugnance\" - if you can just say \"That feels repugnant\" and win a case for public intervention, then you lose all the cases of what we now regard as tremendous moral progress, which made someone feel vaguely icky at the time; women's suffrage, divorces, atheists not being burned at the stake.  Moral progress - which I currently see as an iterative process of learning new facts, processing new arguments, and becoming more the sort of person you wished you were - demands that people go on thinking about morality, for which purpose it is very useful to have people go on arguing about morality.  If saying the word \"intuition\" is a moral trump card, then people, who, by their natures, are lazy, will just say \"intuition!\" all the time, believing that no one is allowed to question that or argue with it; and that will be the end of their moral thinking.

\n

And the Scylla, I think, was excellently presented by Silas Barta when... actually this whole comment is just worth quoting directly:

\n
\n

Let's say we're in an alternate world with strong, codified rules about social status and authority, but weak, vague, unspoken norms against harm that nevertheless keep harm at a low level.

\n

Then let's say you present the people of this world with this \"dilemma\" to make Greene's point:

\n
\n

Say your country is at war with another country that is particularly aggressive and willing to totally demolish your social order and enslave your countrymen. In planning how to best fight off this threat, your President is under a lot of stress. To help him relieve his stress, he orders a citizen, Bob, to be brought before him and tortured and murdered, while the President laughs his head off at the violence.

\n

He feels much more relieved and so is able to craft and motivate a war plan that leads to the unconditional surrender of the enemy. The President promises that this was just a one-time thing he had to do to handle the tremendous pressure he was under to win the war and protect his people. Bob's family, in turn, says that they are honored by the sacrifice Bob has made for his country. Everyone agrees that the President is the legitimate ruler of the country and the Constitution and tradition give him authority to do what he did to Bob.

\n

Was it okay for the President to torture and kill Bob for his personal enjoyment?

\n
\n

Then, because of the deficiency in the vocabulary of \"harms\", you would get responses like:

\n

\"Look, I can't explain why, but obviously, it's wrong to torture and kill someone for enjoyment. No disrespect to the President, of course.\"

\n

\"What? I don't get it. Why would the President order a citizen killed? There would be outrage. He'd feel so much guilt that it wouldn't even relieve the stress you claim it does.\"

\n

\"Yeah, I agree the President has authority to do that, but God, it just burns me up to think about someone getting tortured like that for someone else's enjoyment, even if it is our great President.\"

\n

Would you draw the same conclusion Greene does about these responses?

\n
\n

Unfortunately, it does happen to be a fact that most people are not good at explaining themselves in words, unless they've already heard the explanation from someone else.  Even if you challenge a professional philosopher who holds a position, to justify it, and they can't... well, frankly, you can't conclude much even from that, in terms of inferring that no good explanation exists.  Philosophers, I've observed, are not much good at this sort of job either.  It's Bayesian evidence, by the law of conservation of evidence; if a good explanation would be a sign that justification exists, then the absence of such explanation must be evidence that justification does not exist.  It's just not very strong evidence, because we don't strongly anticipate that even professional philosophers will be able to put a justification into words, correctly and convincingly, when justification does in fact exist.

\n

Even conditioning on the proposition that there is overlap in what you and others mean by 'right' - the huge function that is what-we-try-to-do - and that the judgment in question is stable when taken to the limits of knowledge, thought, and reflective coherence - well, it's still not sure that you'd be able to put it into words.  You might be able to.  But you might not.

\n

And we also have to allow a certain probability of convincing-sounding complicated verbal justification, in cases where no justification exists.  But then if you use that as an excuse to flush all disliked arguments down the toilet, you shall be left rotting forever in a pit of convenient skepticism, saying, \"All that intellekshual stuff could be wrong, after all.\"

\n

So here are my proposed rules of conduct for arguing morality in words:

\n" } }, { "_id": "FSPKLFfMNbRGPFjmY", "title": "Why You're Stuck in a Narrative", "pageUrl": "https://www.lesswrong.com/posts/FSPKLFfMNbRGPFjmY/why-you-re-stuck-in-a-narrative", "postedAt": "2009-08-04T00:31:41.782Z", "baseScore": 46, "voteCount": 46, "commentCount": 32, "url": null, "contents": { "documentId": "FSPKLFfMNbRGPFjmY", "html": "

\n

\n
For some reason the narrative fallacy does not seem to get as much play as the other major cognitive fallacies. Apart from discussions of \"The Black Swan\", I never see it mentioned anywhere. Perhaps this is because it's not considered a \"real\" bias, or because it's an amalgamation of several lower-level biases, or because it's difficult to do controlled studies for. Regardless, I feel it's one of the more pernicious and damaging fallacies, and as such deserves an internet-indexable discussion.
\n
From Taleb's \"The Black Swan\"
\n
\n
The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.
\n
\n
Essentially, the narrative fallacy is our tendency to turn everything we see into a story - a linear chain of cause and effect, with a beginning and an end. Obviously the real world isn't like this - events are complex and interrelated, direct causation is extremely rare, and outcomes are probabilistic. Verbally, we know this - the hard part, as always, is convincing our brain of the fact.
\n
Our brains are engines designed to analyze the environment, pick out the important parts, and use those to extrapolate into the future. To trot out some theoretical evolutionary support, only extremely basic extrapolation would be required in the ancestral evolutionary environment. Things like [Gather Food -> Eat Food -> Sate Hunger] or [See Tiger -> Run -> Don't Die]. Being able to produce simple chains of cause and effect would confer a significant survival advantage, but you wouldn't need anything more than that. The world was simple enough that we didn't have to deal with complex interactions - linear extrapolation was \"good enough\". The world is much different and much more complex today, but unfortunately, we're still stuck with the same linear extrapolation hardware.
\n
You can see the results of this 'good enough' solution in the design and function of our brain. Cognitively, it's much cheaper to interpret a group of things as a story - a pattern - than to remember each one of them separately. Simplifying, summarizing, clustering, and chaining ideas together - reducing complex data to a few key factors - lets us get away with, say, having an extremely small working memory, or a relatively slow neuron firing speed. Compression of some sort is needed for our brains to function - it'd be impossible to analyze the terabytes of data we receive every second from our senses otherwise. As such, we naturally reduce everything to the simplest pattern possible, and then process the pattern. So we're much better at remembering things as part of a pattern than as a random assortment. The alphabet is first learned as a song to help it stick. Mnemonic devices improve memory by establishing easy-to-remember relationships. By default our natural tendency, for any information, is to establish links and patterns in it to aid in processing. This by itself isn't a problem - the essence of knowledge is drawing connections and making inferences. The problem is that because our hardware is designed to do it, it insists on finding links and patterns whether they actually exist or not.  We're biologically inclined to reduce complex events to a simpler, more palatable, more easily understood pattern - a story.
\n
This tendency can be seen in a variety of lower level biases. For instance, the availability heuristic causes us to make predictions and inferences based on what most quickly comes to mind - what's most easily remembered. Hindsight bias causes us to interpret past events as obviously and inevitably causing future ones. Consistency bias causes us to reinterpret past events and behaviors to be consistent with new information. Confirmation bias causes us to only look for data to support the conclusions we've already arrived at. There's also our tendency to engage in rationalization, and create post-hoc explanations for our behavior. They all have the effect of molding, shaping, and simplifying events into a kind of linear narrative, ignoring any contradiction, complexity, and general messiness.
\n
Additionally, there's evidence that forming narratives out of the amalgamated behavior of semi-independent mental modules is one of the primary functions of consciousness. Dennett makes this argument in his paper \"The Self as a Center of Narrative Gravity\":
\n
\n
That is, it does seem that we are all virtuoso novelists, who find ourselves engaged in all sorts of behavior, more or less unified, but sometimes disunified, and we always put the best \"faces\" on it we can. We try to make all of our material cohere into a single good story. And that story is our autobiography.
\n
\n
Because the brain is a hodgepodge of dirty hacks and disconnected units, smoothing over and reinterpreting their behaviors to be part of a consistent whole is necessary to have a unified 'self'. Drescher makes a somewhat related conjecture in \"Good and Real\", introducing the idea of consciousness as a 'Cartesian Camcorder', a mental module which records and plays back perceptions and outputs from other parts of the brain, in a continuous stream. It's the idea of \"I am not the one who thinks my thoughts, I am the one who hears my thoughts\", the source of which escapes me. Empirical support for this comes from the experiments of Benjamin Libet, which show that subconscious electrical processes precede conscious actions - implying that consciousness doesn't engage until after an action has already been decided. If this is in fact how we handle internal information - smoothing out the rough edges to provide some appearance of coherence - it shouldn't be surprising that we tend to handle external information in the same manner.
\n
It seems then, that creating narratives isn't so much a choice as it is a basic feature of the architecture of our minds. From the paper \"The Neurology of Narrative\" (JSTOR), discussing people with damage to the area of the frontal lobe which processes higher order input:
\n
\n
They are unable to provide (and likely fail to generate internally) a narrative account of their experiences, wishes, and actions, although they are fully cognizant of their visual, auditory, and tactile surroundings. These individuals lead \"denarrated\" lives, aware but failing to organize experience in an action generating temporal frame. In the extreme, they do not speak unless spoken to and do not move unless very hungry. These patients illustrate the inseparable connection between narrativity and personhood. Brain injured individuals may lose their linguistic, mathematic, syllogistic, visuospatial, mnestic, or kinesthetic competencies and still be recognizably the same persons. Individuals who have lost the ability to construct narrative, however, have lost their selves.
\n
\n
You can see the extremes to which our tendency toward narrative can go in people who see themselves as the star or hero in a \"movie about their life\". These people tend to be severe narcissists (though I've heard some self-help \"experts\" espouse this as a healthy outlook to adopt), but it's not hard to see why such a view is so appealing. As the star of a movie, the events in your life are all extremely important, and are building to something that will inevitably occur later. You'll face difficulties, but you will ultimately overcome them, and your triumph will be all the greater for it (we seldom imagine our lives as a tragedy). You'll fight and conquer your enemies. You'll win over the love interest. It's all immensely appealing to our most basic desires, whether we're narcissists or not.
\n
A good story, then, is a superstimulus. The very structure of our minds is tilted to be vulnerable to it. It appeals to our primitive brains so strongly that it doesn't matter if it resembles the real world or not - we prefer the engaging story. We're designed to produce narratives, whether we like it or not. Fortunately, our minds also come with the ability to build new processes that can overrule the older ones. So how do we beat it? From \"The Black Swan\":
\n
\n
There are ways to escape the narrative fallacy...by making conjectures and running experiments, by making testable predictions.
\n
\n
In other words, concentrate your probability mass. Force your beliefs to be falsifiable. Make them pay rent in anticipated experience. All the things a good rationalist should be doing already.
\n

 

\n

 

" } }, { "_id": "dn5SjSP3t9cNDnGsT", "title": "Markets marketed better", "pageUrl": "https://www.lesswrong.com/posts/dn5SjSP3t9cNDnGsT/markets-marketed-better", "postedAt": "2009-08-03T16:39:49.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "dn5SjSP3t9cNDnGsT", "html": "

What do you call a system where costs and benefits return to those who cause them? Working markets or karma, depending on whether the accounting uses money or magic.

\n

In popular culture karma generally has good connotations, and markets generally have bad. Reasons for unease about markets should mostly apply just as well to karma, but nobody complains for instance that inherent tendencies to be nice are an unfair basis for wellbeing distribution. Nor that people who have had a lot of good fortune recently might have cheated the system somehow. Nor that the divine internalizing of externalities encourages selfishness. Nor that people who are good out of desperation for fair fortune are being exploited. So why the difference?

\n

Perhaps mysterious forces are just more trustworthy than social institutions? Or perhaps karma seems nice because its promotion is read as ‘everyone will get what they deserve’, while markets seem nasty because their promotion is read as ‘everyone deserves what they’ve got’. Better ideas?


" } }, { "_id": "Dvc7zrqsdCYy6dCFR", "title": "Suffering", "pageUrl": "https://www.lesswrong.com/posts/Dvc7zrqsdCYy6dCFR/suffering", "postedAt": "2009-08-03T16:02:38.270Z", "baseScore": 10, "voteCount": 16, "commentCount": 98, "url": null, "contents": { "documentId": "Dvc7zrqsdCYy6dCFR", "html": "

For a long time, I wanted to ask something. I was just thinking about it again when I saw that Alicorn has a post on a similar topic. So I decided to go ahead.

\n

The question is: what is the difference between morally neutral stimulus responses and agony? What features must an animal, machine, program, alien, human fetus, molecule, or anime character have before you will say that if their utility meter is low, it needs to be raised? For example, if you wanted to know if lobsters suffer when they're cooked alive, what exactly are you asking?

\n

On reflection, I'm actually asking two questions: what is a morally significant agent (MSA; is there an established term for this?) whose goals you would want to further; and having determined that, under what conditions would you consider it to be suffering, so that you would?

\n

I think that an MSA would not be defined by one feature. So try to list several features, possibly assigning relative weights to each.

\n

IIRC, I read a study that tried to determine if fish suffer by injecting them with toxins and observing whether their reactions are planned or entirely instinctive. (They found that there's a bit of planning among bony fish, but none among the cartilaginous.) I don't know why they had to actually hurt the fish, especially in a way that didn't leave much room for planning, if all they wanted to know was if the fish can plan. But that was their definition. You might also name introspection, remembering the pain after it's over...

\n

This is the ultimate subjective question, so the only wrong answer is one that is never given. Speak, or be wrong. I will downvote any post you don't make.

\n

BTW, I think the most important defining feature of an MSA is ability to kick people's asses. Very humanizing.

" } }, { "_id": "TcJKD2E4uE9XLNxBP", "title": "Pain", "pageUrl": "https://www.lesswrong.com/posts/TcJKD2E4uE9XLNxBP/pain", "postedAt": "2009-08-02T19:12:38.493Z", "baseScore": 48, "voteCount": 54, "commentCount": 202, "url": null, "contents": { "documentId": "TcJKD2E4uE9XLNxBP", "html": "

Some time ago, I came across the All Souls College philosophy fellowship exam.  It's interesting reading throughout, but one question in particular brought me up short when I read it.

\n

What, if anything, is bad about pain?

\n

The fact that I couldn't answer this immediately was fairly disturbing.  Approaching it from the opposite angle was much simpler.  It is in fact trivially easy to say what is good about pain.  To do so, all you need to do is look at the people who are born without the ability to feel it: CIPA patients.  You wouldn't want your kid saddled with this condition, unless for some reason you'd find it welcome for the child to die (painlessly) before the age of three, and if that fate were escaped, to spend a lifetime massively inconvenienced, disabled, and endangered by undetected and untreated injuries and illnesses great and small.

\n

But... what, if anything, is bad about pain?

\n

I don't enjoy it, to be sure, but I also don't enjoy soda or warm weather or chess or the sound of vacuum cleaners, and it seems that it would be a different thing entirely to claim that these things are bad.  Most people don't enjoy pain, but most people also don't enjoy lutefisk or rock climbing or musical theater or having sex with a member of the same sex, and it seems like a different claim to hold that lutefisk and rock climbing and musical theater and gay sex are bad.  And it's just not the case that all people don't enjoy pain, so that's an immediate dead end.

\n

So... what, if anything, is bad about pain?

\n

Let's go back to the CIPA patients.  I suggested that they indicate what's good about pain by showing us what happens to people without any: failure to detect and respond to injury and illness leads to exacerbation of their effects, up to and including untimely death.  What's bad about those things?  If we're doubting the badness of pain, we may as well doubt the badness of other stuff we don't like and try to avoid, like death.  With death, there are some readier answers: you could call it a tragic loss of a just-plain-inherently-valuable individual, but if you don't like that answer (and many people don't seem to), you can point to the grief of the loved ones (conveniently ignoring that not everybody has loved ones) which is... um... pain.  Whoops.  Well, you could try making it about the end of the productive contribution to society, on the assumption that the dead person did something useful (and conveniently ignore why we tend not to be huge fans of death even when it happens to unproductive persons).  Maybe we've just lost an anesthesiologist, who, um.... relieves pain.

\n

And... what, if anything, is bad about pain?

\n

Your standard-issue utilitarianism is, among other things, \"hedonic\".  That means it includes among its tenets hedonism, which is the idea that pleasure is good and pain is bad, end of story.  Lots of pleasure is better than a little and lots of pain is worse than a little and you can give these things units and do arithmetic to them to figure out how good or bad something is and then wag your finger or supply accolades to whoever is responsible for that thing.  Since hedonists are just as entitled as anyone to their primitive notions, that's fine, but it's not much help to our question.  \"It is a primitive notion of my theory\" is the adult equivalent of \"it just is, that's all, your question is stupid!\"  (I don't claim that this is never an appropriate answer.  Some questions are pretty stupid.  But I don't think that one of them is...)

\n

...what, if anything, is bad about pain?

" } }, { "_id": "zaDHi9rdZvF4ctskB", "title": "Open Thread: August 2009", "pageUrl": "https://www.lesswrong.com/posts/zaDHi9rdZvF4ctskB/open-thread-august-2009", "postedAt": "2009-08-01T15:06:40.211Z", "baseScore": 8, "voteCount": 7, "commentCount": 193, "url": null, "contents": { "documentId": "zaDHi9rdZvF4ctskB", "html": "

Here's our place to discuss Less Wrong topics that have not appeared in recent posts. If something gets a lot of discussion feel free to convert it into an independent post.

" } }, { "_id": "S4Jg3EAdMq57y587y", "title": "An Alternative Approach to AI Cooperation", "pageUrl": "https://www.lesswrong.com/posts/S4Jg3EAdMq57y587y/an-alternative-approach-to-ai-cooperation", "postedAt": "2009-07-31T12:14:48.507Z", "baseScore": 22, "voteCount": 12, "commentCount": 25, "url": null, "contents": { "documentId": "S4Jg3EAdMq57y587y", "html": "

[This post summarizes my side of a conversation between me and cousin_it, and continues it.]

\n

Several people here have shown interest in an approach to modeling AI interactions that was suggested by Eliezer Yudkowsky: assume that AIs can gain common knowledge of each other's source code, and explore the decision/game theory that results from this assumption.

\n

In this post, I'd like to describe an alternative approach*, based on the idea that two or more AIs may be able to securely merge themselves into a joint machine, and allow this joint machine to make and carry out subsequent decisions. I argue that this assumption is as plausible as that of common knowledge of source code, since it can be built upon the same technological foundation that has been proposed to implement common knowledge of source code. That proposal, by Tim Freeman, was this:

\n
\n

Entity A could prove to entity B that it has source code S by
consenting to be replaced by a new entity A' that was constructed by a
manufacturing process jointly monitored by A and B.  During this
process, both A and B observe that A' is constructed to run source
code S.  After A' is constructed, A shuts down and gives all of its
resources to A'.

\n
\n

Notice that the same technology can be used for two AIs to merge into a single machine running source code S (which they both agreed upon). All that needs to be changed from the above process is for B to also shut down and give all of its resources to A' after A' is constructed. Not knowing if there is a standard name for this kind of technology, I've given it the moniker \"secure joint construction.\"

\n

I conjecture that the two approaches are equal in power, in the sense that any cooperation made possible by the common knowledge of source code is also possible given the secure merger ability, and vice versa. This is because under the assumption of common knowledge of source code, the likely outcome is for all AIs to modify themselves into using the same decision algorithm, with that algorithm making and carrying out subsequent decisions. The collection of these cooperative machines running the same algorithm can be viewed as one distributed machine, thus suggesting the equivalence of the two approaches.

\n

It is conceptually simpler to assume that AIs will merge into a centralized joint machine. This causes no loss of generality, since if for some reason the AIs find it more advantageous to merge into a distributed joint machine, they will surely come up with solutions to distributed computing problems like friend-or-foe recognition and consensus by themselves. The merger approach allows such issues to be abstracted away as implementation details of the joint machine.

\n

Another way to view these two approaches is that they each offer a way for AIs to enforce agreements, comparable with the enforcement of contracts by a court, except that the assumed technologies allow the AIs to enforce agreements without a trusted third party, and with potentially higher assurance of compliance. This allows most AI-AI interactions to be modeled using cooperative game theory, which assumes such enforcement of agreements.

\n

* My original proposal, posted on SL4, was that AIs would use Bayesian aggregation to determine the decision algorithm of their joint machine. I later realized that cooperative game theory is a better fit, because only a cooperative game solution ensures that each AI has sufficient incentives to merge.

\n

[It appears to me that cousin_it and I share many understandings, while Vladimir Nesov and Eliezer seem to have ideas closer to each other and to share certain insights that I am not able to access. I hope this post encourages them to clarify their ideas relative to those of cousin_it and me.]

" } }, { "_id": "JAyz8mEir5mZMTDK7", "title": "Pract: A Guessing and Testing Game", "pageUrl": "https://www.lesswrong.com/posts/JAyz8mEir5mZMTDK7/pract-a-guessing-and-testing-game", "postedAt": "2009-07-31T09:13:47.353Z", "baseScore": 7, "voteCount": 8, "commentCount": 42, "url": null, "contents": { "documentId": "JAyz8mEir5mZMTDK7", "html": "

Here’s a game that you can play for real against a human opponent. If the administrators don’t mind, you can play it right here in the comments.

\n

The game is called Pract.

\n

\n

Rules

\n

Pract is played using finite sequences of integers, called “sequences.”

\n

To start, each player chooses a well-defined infinite set of sequences, such that every sequence is either demonstrably in or demonstrably out of the set. The game ends when one player guesses the other’s set.1

\n

Once they have picked their sets, the players take turns.

\n

\n

Definitions

\n
    \n
  1. A player specifies a sequence by writing each integer, in order, in decimal.
  2. A sequence’s classification relative to a set is a statement of whether the sequence is in or out of the set.
  3. A player classifies a sequence by writing the sequence’s classification relative to that player’s chosen set.
  4. The current player is the player whose turn it currently is.
  5. A player’s final score is the number of guesses made by that player plus the length of that player’s statement (at the end of the game) of the definition of his or her own set.
\n

\n

Gameplay

\n

On each turn, the current player may either:

\n
    \n
  1. Try to guess the other player’s set, in which case the other player indicates whether the guess was correct or not. Or,
  2. Specify a sequence, in which case both players classify that sequence.
    1. If, during the previous turn, the other player made an incorrect guess of the current player’s set, the specified sequence must have the opposite classification relative to the actual set from the one it has relative to the guessed set.
    2. If the other player instead specified a sequence, the two sequences (the one specified by the current player this turn and the one specified by the other player on the previous turn) must have opposite classifications relative to the current player’s set.
\n

When a correct guess is made, each player states the definition of his or her own set. The player who guessed correctly makes this statement second. Then the game ends.

\n
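For concreteness, here is one way to check the two constraints under option 2 of the Gameplay rules programmatically. This is only a minimal Python sketch, not part of Pract's rules: it assumes each chosen set is represented as a membership predicate over integer sequences, and all of the helper names (classify, legal_after_wrong_guess, legal_after_specification) are made up for illustration.

# A minimal sketch (Python) of the option 2 constraints; names are illustrative only.
from typing import Callable, Sequence

SetPredicate = Callable[[Sequence[int]], bool]  # a chosen set, as a membership test

def classify(seq: Sequence[int], chosen_set: SetPredicate) -> str:
    # Return 'in' or 'out' for seq relative to a chosen set.
    return 'in' if chosen_set(seq) else 'out'

def legal_after_wrong_guess(new_seq, actual_set, guessed_set) -> bool:
    # First sub-rule: after the opponent's incorrect guess, the new sequence
    # must classify differently under the actual set and the guessed set.
    return classify(new_seq, actual_set) != classify(new_seq, guessed_set)

def legal_after_specification(new_seq, prev_seq, current_set) -> bool:
    # Second sub-rule: the new sequence and the opponent's previously specified
    # sequence must have opposite classifications under the current player's set.
    return classify(new_seq, current_set) != classify(prev_seq, current_set)

# The sets from the second example below (Cathy's 'odd sum', Dave's guess 'increasing'):
increasing: SetPredicate = lambda s: all(a < b for a, b in zip(s, s[1:]))
odd_sum: SetPredicate = lambda s: sum(s) % 2 == 1

# Cathy's reply (3, 2) after Dave's wrong guess of 'increasing' is legal, because
# (3, 2) is in her actual set but out of the guessed set.
print(legal_after_wrong_guess((3, 2), odd_sum, increasing))  # True

A sketch like this is only a bookkeeping aid; the rules themselves are whatever the players agree to in the medium they are playing in.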

\n

Winning and Losing

\n

If the player who guessed correctly has a lower final score than the other player, the player who guessed correctly wins and the other player loses. Otherwise, both players lose.

\n

If a player withdraws from the game during his or her own turn, both players lose.2

\n

Statement lengths affect final scores, so language, notation, and encoding are relevant. In most cases, common sense, context, and the medium itself should suffice. (Pract is meant to be played over the Internet in a forum, chat room, mailing list, or other similar medium.)

\n

Pract is meant as a way to practice both reasoning and being reasonable. Those who would rather argue well than play well should spectate.

\n

\n

Examples

\n

Here’s an example of the beginning of a game between Alice and Bob:

\n
alice> 2, 4, 6
alice> in
bob> out
bob> 7, 2, 4
bob> in
alice> out
\n

The above example shows two turns: the first is Alice’s and the second is Bob’s. Both players have used their turn to specify a sequence rather than make a guess, and each player classifies each sequence as required by the rules.

\n

Bob is being redundant when he classifies the sequence he specified. The rules prevent him from selecting a sequence that is out of his own set because the sequence specified by Alice is out of his set.

\n

Here’s an example showing the end of a game:

\n
cathy> 5, 7, 8
cathy> out
dave> out
dave> increasing
cathy> no
cathy> 3, 2
cathy> in
dave> out
dave> 4, 4, 2, 2
dave> in
cathy> out
cathy> even numbers
dave> yes
dave> all even
cathy> odd sum
dave> length 8 + 5 guesses = 13
cathy> length 7 + 4 guesses = 11
\n

The first turn shown is Cathy’s, and she uses it to specify a sequence. On the next turn, Dave makes an incorrect guess. Then there are two more turns on which a sequence is specified. Finally, Cathy ends the game by making a correct guess. The players state their sets and calculate their scores. From the calculations, we can infer that there were several previous guesses by each player.

\n
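To make the scoring concrete, here is that arithmetic as a tiny Python sketch. The guess counts are read off the players' own closing lines, and measuring statement length in characters is an assumption that happens to match the lengths they state.

# Score check for the example above: final score = guesses made + length of the
# player's statement of his or her own set (measured here in characters).
def final_score(own_statement: str, guesses_made: int) -> int:
    return len(own_statement) + guesses_made

dave_score = final_score('all even', 5)    # 8 + 5 = 13
cathy_score = final_score('odd sum', 4)    # 7 + 4 = 11
assert (dave_score, cathy_score) == (13, 11)
# Cathy guessed correctly and has the strictly lower score, so Cathy wins.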

Cathy is being redundant when she classifies 3, 2. The rules prohibit her from selecting a sequence that is both non-increasing and out of her set because Dave incorrectly guessed on the previous turn that her set was the set of all increasing sequences. (In a typical game, nearly half the classifications are redundant. The redundancy makes it much easier to verify that players are following the rules.)

\n

The players are not using full sentences or punctuation to describe the sets, and Dave’s description of his own set is shorter than Cathy’s equivalent description when she guesses it. This is reasonable, as long as it does not introduce ambiguity or involve silly things like languages invented on the fly.

\n

There may be situations where it is not clear how the rules apply. For example, the correctness of a guess may depend on some open question in mathematics. So long as players try to avoid such situations, rather than cause them, they should be rare and resolvable.

\n
\n
\n
  1. ETA: Pract is inspired in part by Zendo, as well as various ideas I’ve seen around LessWrong like the 2 4 6 experiment, and variable-sum games.

  2. ETA: New rule for withdrawing. My main concern from the beginning was that both players would be stumped by their own biases and unable to finish the game. The discussion on how to play in bad faith gave me an idea for how to deal with that.
" } }, { "_id": "EKu66pFKDHFYPaZ6q", "title": "The Hero With A Thousand Chances", "pageUrl": "https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances", "postedAt": "2009-07-31T04:25:28.630Z", "baseScore": 170, "voteCount": 123, "commentCount": 170, "url": null, "contents": { "documentId": "EKu66pFKDHFYPaZ6q", "html": "

\"Allow me to make sure I have this straight,\" the hero said.  \"I've been untimely ripped from my home world to fight unspeakable horrors, and you say I'm here because I'm lucky?\"

\n

Aerhien dipped her eyelashes in elegant acknowledgment; and quietly to herself, she thought:  Thirty-seven.  Thirty-seven heroes who'd said just that, more or less, on arrival.

\n

Not a sign of the thought showed on her outward face, where the hero could see, or the other council members of the Eerionnath take note.  Over the centuries since her accidental immortality she'd built a reputation for serenity, more or less because it seemed to be expected.

\n

\"There are kinds and kinds of luck,\" Aerhien said serenely.  \"Not every person desires their personal happiness above all else.  Those who are lucky in aiding others, those whose luck is great in succor and in rescue, these ones are not always happy themselves.  You are here, hero, because you have a hero's luck.  The boy whose dusty heirloom sword proves to be magical. The peasant girl who finds herself the heir to a great kingdom.  Those who discover, in time of sudden stress, an untrained wild magic within themselves.  Success born not of learning, not of skill, not of determination, but unplanned coincidence and fortunes of birth:  That is a hero's luck.\"

\n

\"Gosh,\" said the hero after a long, awkward pause, \"thanks for the compliment.\"

\n

\"It is not a compliment,\" Aerhien said, \"but this is: that you have taken good advantage of your luck.  Our enemy does not speak, we do not know if there is any aliveness in it to think; but it learns, or seems to learn.  We have never won against it using the same trick twice.  It is rare now that a hero succeeds in conceiving a genuinely new trick, for we have fought this shadow long under our sun.  For this reason we have taken to summoning heroes from distant dimensions with other modes of thought; sometimes one such knows a truly new technique, and at least they fight differently.  But far more often, hero, the hero wins by luck.\"

\n

\"Huh,\" said the hero.  He frowned; more in thought, it seemed, than in displeasure.  \"How... very odd.  I wonder why that is.  What kind of enemy can be defeated only by luck?\"

\n

\"A nameless enemy and null,\" said Aerhien.  \"Structureless and empty, horrible and dark, the most terrifying thing imaginable:  We call it Dust.  That seems to be its only desire, to tear down every bit of structure in the world, grind it into specks of perfect chaos.  Always the Dust is defeated, always it takes a new shape immune to its last defeat.\"

\n

\"I wonder,\" murmured the hero, \"if it will run out of shapes, and then end; or if it will finally become invincible.\"

\n

(One of the other Eerionnath shuddered.)

\n

\"I do not know,\" Aerhien said simply.  \"I do not know the nature of the Dust, nor the nature of the Counter-Force that opposes it.  The Dust is terrible and our world should long since have ended.  We are not fools enough to believe we could be lucky so many times by chance alone.  But the Counter-Force has never acted openly; it never reveals itself except in - a hero's luck.  And so we, the council Eerionnath to prevent the world from destruction, are at your disposal to command; and all the power and resource that this world holds, for your battle.\"

\n

And she, Aerhien, and the council Eerionnath, bowed low.

\n

Then they waited to see if the hero would demand dominions or slaves as payment, before condescending to rescue a people in distress.

\n

If so they would dispose of him, and summon another.

\n

This one, though, seemed to have at least some qualities of a true hero; his face showed no avarice, only an abstracted puzzlement.  \"A hidden Counter-Force...\" he murmured.  \"Excuse me, but this is all very vague.  Can you give me a specific example of a hero's luck?\"

\n

Aerhien opened her mouth, and then the breath caught in her throat; suddenly and involuntarily, her memory went back to that huge spell gone out of control which had blasted the then-form of the Dust, killed the hero her lover, ruined their home and country, and rendered her accidentally immortal, all those centuries ago -

\n

Ghandhol, the second-oldest of the council, must have guessed her silent distress; for he spoke up to cover the gap:  \"There was a certain time,\" he said gravely, \"when the hero of that age, sent off the entire army of the world in a diversionary attack against the strongest fortification of the enemy.  While he, with but a single friend, walked directly into enemy territory, carrying undefended the single most valuable magic the Dust could possibly gain.  Then the Dust captured and corrupted the hero's mind.  And when all seemed absolutely lost, they only won because - in an event that was no part at all of their original plan - a hungry creature bit off the hero's finger and then accidentally fell into an open lava flow, which in turn caused -\"

\n

\"That was an extreme case,\" said one of the younger councilors; that one looked a bit nervous, lest this hero get the wrong idea.  \"None since have tried to imitate the Volcano Suicide Hero -\"

\n

\"Ah!\" said the hero in a tone of sudden enlightenment.

\n

Then the hero frowned.  \"Oh, dear...\" he said under his breath.

\n

The councilors looked at one another in mute puzzlement.  The hairs pricked on Aerhien's neck; she had lived long enough to have seen almost everything at least once before.  And her lover had frowned, just like that, an instant before his spell went wild.

\n

The hero's brow was furrowed like a father whose child has just asked a question which has an answer, but whose answer no child can understand.  \"Do you...\" he said at last.  \"Do you have knowledge... about the khanfhighur... that's not even translating, is it.  Do you know about... the things that things are made of?  And are the things constantly splitting all the time?  Not singly, but in - in groups -\"

\n

The other councilors Eerionnath were staring at him in mute incomprehension.  But Aerhien, who had been through it all before, gravely shook her head.  \"We do not possess that knowledge; nor do we know why our sun burns, or why the sky is red, or what makes a word a spell; nor has any summoned hero succeeded in raveling them.\"  Aerhien held up her hand.  \"Hand, made of fingers; beneath the finger, skin and muscle and vein, beneath the muscle, sharrak and flom.  That is the limit of our knowledge.  Some worlds, it seems, are harder to ravel than others.\"

\n

The hero waved it off.  \"No, it doesn't matter - well, it matters a great deal, but not for now.  I only asked to see if I could get confirmation... it doesn't matter.\"

\n

Aerhien waited patiently; they were rare, this sort of hero, but the more distant and alien sort did sometimes treat her world as a puzzle to be solved.  She usually sought those similar enough in body and mind to feel empathy for her people's plight; but sometimes she thought of the great victory won by the Icky Blob Hero, and wondered if she should look further afield.

\n

\"What would happen if the Dust won?\" asked the hero.  \"Would the whole world be destroyed in a single breath?\"

\n

Aerhien's brow quirked ever so slightly.  \"No,\" she said serenely.  Then, because the question was strange enough to demand a longer answer:  \"The Dust expands slowly, using territory before destroying it; it enslaves people to its service, before slaying them.  The Dust is patient in its will to destruction.\"

\n

The hero flinched, then bowed his head.  \"I suppose that was too much to hope for; there wasn't really any reason to hope, except hope... it's not required by the logic of the situation, alas...\"

\n

Suddenly the hero looked up sharply; there was a piercing element, now, in his gaze.  \"There's a great deal you're neglecting to tell me about this heroing business.  Were you planning to mention that the 'hero' which your council chooses and anoints, often turns out not to be the real hero at all?  That the Counter-Force often ends up working through someone else entirely?\"

\n

The members of the council traded glances.  \"You didn't exactly ask about that,\" said Ghandhol mildly.

\n

The hero nodded.  \"I suppose not.  And the Volcano Suicide Hero - what exactly happened to him, that caused no hero to ever dare tempt fate so much again, in the history you remember?\"

\n

\"His home country was ruined,\" Aerhien said softly, \"while the army marched elsewhere on his diversion.  It threw him into a misery from which he never recovered, until one day he set sail in a ship and did not return.\"

\n

The hero nodded.  \"Poor payment, one would think, for saving the world.\"  The hero's face grew grim, and his voice became solemn and formal, mimicking Aerhien's cadences.  \"But the Counter-Force is not the pure power of Good.  It seems to care only and absolutely about stopping the Dust.  It cares nothing for heroes, or countries, or innocent lives and victims.  If it could save a thousand children from death, only by nudging the fall of a pebble, it would not bother; it has had such opportunities, and not acted.\"

\n

Ghufhus, the youngest member of the council, grimaced, looking offended.  \"How is it our right to ask for more?\" he demanded.  \"That we are saved from the Dust is miracle enough -\"

\n

Ghufhus stopped, noticing then that the other Eerionnath were sitting frozen.  Even Aerhien's mask of dispassion had cracked.

\n

\"Ah...\" Ghufhus said, puzzled.  \"How do you... know all this?  Is there a Counter-Force in your own world?\"

\n

Fool, Aerhien thought to herself.  The hero had seemed puzzled by the idea, at first, and had needed to ask for examples.  She decided then and there that Ghufhus would meet with an accident before the next council meeting; their world had no room for stupid Eerionnath.

\n

And the hero himself shook his head.  \"No,\" the hero said.  \"You have never summoned a hero who remembers a Counter-Force like yours.\"

\n

This was also true.

\n

\"Nor will you ever,\" the hero added, \"unless you try some way of seeking that specifically, in your summoning.  It would never happen by accident.\"

\n

Aerhien willed her stiff lips to move.  It should have been wonderful news, but the hero himself seemed anything but happy.  \"You... have fathomed the nature of the Counter-Force?\"

\n

The hero nodded.

\n

\"And?\" Aerhien said.  \"What is the rest of it?  The part you are still considering whether to tell us?\"

\n

Ghandhol's eyebrows went up a tiny fraction, and his head tilted ever so slightly toward her, signaling his surprise and appreciation.

\n

The hero hesitated.  Then he sighed.

\n

\"The Counter-Force isn't going to help you this time.  No hero's luck.  Nothing but creativity and any scraps of real luck - and true random chance is as liable to hurt you as the Dust.  Even if you do survive this time, the Counter-Force won't help you next time either.  Or the time after that.  What you remember happening before - will not happen for you ever again.\"

\n

Aerhien felt the nausea; like a blow to the pit of her stomach it felt, the end of the world.  The rest of the council Eerionnath seemed torn between fear and skepticism; but her own instincts, honed over long centuries, left little room for doubt.  The distant heroes sometimes knew things... and sometimes guessed wrong.  But after a hero had been right a few times, you learned to listen to that one, even if you couldn't understand the reasons or the logic...

\n

\"Why?\" Ghufhus said, sounding skeptical.  \"Why would the Counter-Force work all this time, and then suddenly -\"

\n

Ghandhol interrupted with the far more urgent question.  \"How can we restore the Counter-Force?\"

\n

\"You can't,\" said the hero.

\n

There was a remote sadness in his eyes, the only sign that he knew exactly what he was saying.

\n

\"Then you have pronounced the absolute doom of this world,\" Ghandhol said heavily.

\n

And then the hero smiled, and it was twisted and grim and defiant, all at the same time.  \"Oh... not quite absolute doom.  In my own world, we have our own notions about heroes, which are not about heroic luck.  One of us said: a hero is someone who can stand there at the moment when all hope is dead, and look upon the abyss without flinching.  Another said: a superhero is someone who can save people who could not be saved by any ordinary means; whether it is few people or many people, a superhero is someone who can save people who cannot be saved.  We shall try a little of my own world's style of heroism, then.  Your world cannot be saved by any ordinary means; it is doomed.  Like a child born with a fatal disease; it contained the seed of its own death from the beginning.  Your annihilation is not an unlucky chance to be prevented, or an unpleasant possibility to avert.  It is your destiny that has already been written from the beginning.  You are the walking dead, and this is a dead world spinning, and many other worlds like this one are already destroyed.\"

\n

\"But this world is going to live anyway.  I have decided it.\"

\n

\"That is my own world's heroism.\"

\n

\"How?\" Aerhien said simply.  \"How can our world live, if what you say is true?\"

\n

The hero's eyes had gone unfocused, his face somewhat slack.  \"You will deliver to me the record of every single hero that your history remembers.  You will bring historians here for my consultation.  Your world cannot survive if it must fight this battle over and over again, with the Dust growing stronger each time.  It is my thought that on this attempt, we must neutralize the Dust once and for all -\"

\n

\"Do you think that hasn't been tried?\" Ghufhus demanded incredulously.

\n

The hero smiled that twisted smile again.  \"Ah, but if you had succeeded, you would not have needed to summon me, now would you?  Though I am not quite sure that is valid logic, in a case like this...  But it does seem that none of the other heroes fathomed your Counter-Force, which puts an upper limit on their perception.\"  The hero nodded to himself.  \"All things have a pattern.  Bring me the records, and I will see if I can fathom this Dust, and the limit of its learning ability - there must be a limit, or no amount of luck could ever save you.  All things have a cause:  If something like the Dust came into existence once, perhaps a true Counter-Force can be created to oppose it.  Those are the ideas that occur to me in the first thirty seconds, at any rate.  I must study.  Bring me your keepers of knowledge.  They will be my army.\"

\n

Aerhien bowed, in truth this time, and very low, and the Eerionnath bowed with her.  \"Command and we shall obey, hero,\" she said simply.

\n

The hero turned from her, and looked out the window at the red sky, and the small dots on the land that were the homes of the innocents to be protected.

\n

\"Don't call me that,\" he said, and it was a command.  \"You can call me that after we've won.\"

\n

\"But -\"

\n

It was Ghufhus who said it, and Aerhien promised herself that if it was a stupid question, his accident would be a painful one.

\n

\"But what is - what was the Counter-Force?\"

\n

Aerhien wavered, then decided against it.

\n

It might not matter now, but she also wanted to know.

\n

The hero sighed.  \"It's a long story,\" he said.  \"And to be frank, if you're to understand this properly, there's a lot of other things I have to explain first before I get to the ahntharhapik principle.\"

" } }, { "_id": "WfHiyRxMj6aL7PN7i", "title": "The Obesity Myth", "pageUrl": "https://www.lesswrong.com/posts/WfHiyRxMj6aL7PN7i/the-obesity-myth", "postedAt": "2009-07-30T00:12:06.160Z", "baseScore": 14, "voteCount": 16, "commentCount": 62, "url": null, "contents": { "documentId": "WfHiyRxMj6aL7PN7i", "html": "

Related To:  The Unfinished Mystery of the Shangri-La Diet and Missed Distinctions 

\n

Megan McArdle blogs an interview with Paul Campos, author of The Obesity Myth.  I'll let anyone who is interested read the whole thing, but here are some interesting excerpts:

\n
\n

I mean, there's no better established empirical proposition in medical science than that we don't know how to make people thinner. But apparently this proposition is too disturbing to consider, even though it's about as well established as that cigarettes cause lung cancer. So all these proposals about improving public health by making people thinner are completely crazy. They are as non-sensical as anything being proposed by public officials in our culture right now, which is saying something.

\n

It's conceivable that through some massive policy interventions you might be able to reduce the population's average BMI from 27 to 25 or something like that. But what would be the point? There aren't any health differences to speak of for people between BMIs of about 20 and 35, so undertaking the public health equivalent of the Apollo program to reduce the populace's average BMI by a unit or two (and again I will emphasize that we don't actually know if we could do even that) is an incredible waste of public health resources

\n
\n

and

\n
\n

Megan: An economist recently pointed out that we don't encourage people to move to the country, even though rural people live more than three years longer than urban people, and the difference in their healthy life expectancy is even more outsized. Nor do we encourage people to find Jesus or get married. We target \"unhealthy\" behaviors that are already stigmatized.

Paul: Right, as Mary Douglas the anthropologist has pointed out, we focus on risks not on the basis of \"rational\" cost-benefit analysis, but because of the symbolic work focusing on those risks does -- most particularly signalling disapproval of certain groups and behaviors. In this culture fatness is a metaphor for poverty, lack of self-control, and other stuff that freaks out the new Puritans all across the ideological spectrum, which is why the war on fat is so ferocious -- it appeals very strongly to both the right and the left, for related if different reasons.

\n
\n


" } }, { "_id": "QtG2iDnYGZEumXzsb", "title": "Information cascades in scientific practice", "pageUrl": "https://www.lesswrong.com/posts/QtG2iDnYGZEumXzsb/information-cascades-in-scientific-practice", "postedAt": "2009-07-29T12:08:31.135Z", "baseScore": 10, "voteCount": 9, "commentCount": 6, "url": null, "contents": { "documentId": "QtG2iDnYGZEumXzsb", "html": "

Here's an interesting recent paper in the British Medical Journal: \"How citation distortions create unfounded authority: analysis of a citation network\". (I don't know if this is freely accessible, but the abstract should be.)

\n

From the paper:

\n

\"Objective To understand belief in a specific scientific claim by studying the pattern of citations among papers stating it.\"

\n

\"Conclusion Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.\"

\n

It also includes a list of specific ways in which citations were found to amplify or invent evidence.

" } }, { "_id": "tJQsxD34maYw2g5E4", "title": "Thomas C. Schelling's \"Strategy of Conflict\"", "pageUrl": "https://www.lesswrong.com/posts/tJQsxD34maYw2g5E4/thomas-c-schelling-s-strategy-of-conflict", "postedAt": "2009-07-28T16:08:16.244Z", "baseScore": 155, "voteCount": 134, "commentCount": 154, "url": null, "contents": { "documentId": "tJQsxD34maYw2g5E4", "html": "

It's an old book, I know, and one that many of us have already read. But if you haven't, you should.

\n

If there's anything in the world that deserves to be called a martial art of rationality, this book is the closest approximation yet. Forget rationalist Judo: this is rationalist eye-gouging, rationalist gang warfare, rationalist nuclear deterrence. Techniques that let you win, but you don't want to look in the mirror afterward.

\n

Imagine you and I have been separately parachuted into an unknown mountainous area. We both have maps and radios, and we know our own positions, but don't know each other's positions. The task is to rendezvous. Normally we'd coordinate by radio and pick a suitable meeting point, but this time you got lucky. So lucky in fact that I want to strangle you: upon landing you discovered that your radio is broken. It can transmit but not receive.

\n

Two days of rock-climbing and stream-crossing later, tired and dirty, I arrive at the hill where you've been sitting all this time smugly enjoying your lack of information.

\n

And after we split the prize and cash our checks I learn that you broke the radio on purpose.

\n

Schelling's book walks you through numerous conflict situations where an unintuitive and often self-limiting move helps you win, slowly building up to the topic of nuclear deterrence between the US and the Soviets. And it's not idle speculation either: the author worked at the White House at the dawn of the Cold War and his theories eventually found wide military application in deterrence and arms control. Here's a selection of quotes to give you a flavor: the whole book is like this, except interspersed with game theory math.

\n
\n

The use of a professional collecting agency by a business firm for the collection of debts is a means of achieving unilateral rather than bilateral communication with its debtors and of being therefore unavailable to hear pleas or threats from the debtors.

\n
\n
\n

A sufficiently severe and certain penalty on the payment of blackmail can protect a potential victim.

\n
\n
\n

One may have to pay the bribed voter if the election is won, not on how he voted.

\n
\n
\n
\n

I can block your car in the road by placing my car in your way; my deterrent threat is passive, the decision to collide is up to you. If you, however, find me in your way and threaten to collide unless I move, you enjoy no such advantage: the decision to collide is still yours, and I enjoy deterrence. You have to arrange to have to collide unless I move, and that is a degree more complicated.

\n
\n
\n
\n

We have learned that the threat of massive destruction may deter an enemy only if there is a corresponding implicit promise of nondestruction in the event he complies, so that we must consider whether too great a capacity to strike him by surprise may induce him to strike first to avoid being disarmed by a first strike from us.

\n
\n
\n

Leo Szilard has even pointed to the paradox that one might wish to confer immunity on foreign spies rather than subject them to prosecution, since they may be the only means by which the enemy can obtain persuasive evidence of the important truth that we are making no preparations for embarking on a surprise attack.

\n
\n

I sometimes think of game theory as being roughly divided in three parts, like Gaul. There's competitive zero-sum game theory, there's  cooperative game theory, and there are games where players compete but also have some shared interest. Except this third part isn't a middle ground. It's actually better thought of as ultra-competitive game theory. Zero-sum settings are relatively harmless: you minimax and that's it. It's the variable-sum games that make you nuke your neighbour.

\n

Some time ago in my wild and reckless youth that hopefully isn't over yet, a certain ex-girlfriend took to harassing me with suicide threats. (So making her stay alive was presumably our common interest in this variable-sum game.) As soon as I got around to looking at the situation through Schelling goggles, it became clear that ignoring the threats just leads to escalation. The correct solution was making myself unavailable for threats. Blacklist the phone number, block the email, spend a lot of time out of home. If any messages get through, pretend I didn't receive them anyway. It worked. It felt kinda bad, but it worked.

\n
Hopefully you can also find something that works.
" } }, { "_id": "REpzLJaQjJ2hJb6sR", "title": "The Trolley Problem in popular culture: Torchwood Series 3", "pageUrl": "https://www.lesswrong.com/posts/REpzLJaQjJ2hJb6sR/the-trolley-problem-in-popular-culture-torchwood-series-3", "postedAt": "2009-07-27T22:46:57.377Z", "baseScore": 16, "voteCount": 18, "commentCount": 87, "url": null, "contents": { "documentId": "REpzLJaQjJ2hJb6sR", "html": "

It's just possible that some lesswrong readers may be unfamiliar with Torchwood: It's a British sci-fi TV series, a spin-off from the more famous, and very long-running cult show Dr Who.

\n

Two weeks ago Torchwood Series 3 aired. It took the form of a single story arc, over five days, shown in five parts on consecutive nights. What hopefully makes it interesting to rationalist lesswrong readers who are not (yet) Whovians is not only the space monsters (1) but also the show's determined and methodical exploration of an iterated Trolley Problem:  in a process familiar to seasoned thought-experimenters the characters were tested with a dilemma followed by a succession of variations of increasing complexity, with their choices ascertained and the implications discussed and reckoned with.

\n

An hypothetical, iterated rationalist dilemma... with space monsters... and monsters a great deal more scary - and messier - than Omega -  what's not to like?

\n

So, on the off chance that you missed it, and as a summer diversion from more academic lesswrong fare, I thought a brief description of how a familiar dilemma was handled on popular British TV this month, might be of passing interest (warning: spoilers follow)

\n

The details of the scenario need not concern us too much here (and readers are warned not to expend too much mental energy exploring the various implausibilities, for want of distraction) but suffice to say that the 456, a race of evil aliens, landed on Earth and demanded that a certain number of children be turned over to them to suffer a horrible fate-worse-than-death or else we face the familiar prospects of all out attack and the likely destruction of mankind.

\n

Resistance, it almost goes without saying, was futile.

\n

The problems faced by the team could be roughly sorted into some themes

\n

The Numbers dilemma - is it worth sacrificing any amount of children to save the rest?

\n\n

The Quality dilemma: does it make any difference which children?

\n\n

The choice dilemma: how should the sacrifical cohort be chosen?

\n\n

The limits of human rationality: are there certain 'rational' decisions that are simply too much to expect a human being to be able to make?

\n\n

Actually despite my jocular tone in the first paragraph I don't want to make too light of this series, as it was disturbing viewing.

\n

Anyway: that being said: rationalist lesswrong community members may want to think dispassionately about their answers before I reveal the conclusions that Russell T Davies (the writer) came to:

\n

Numbers

\n\n

Quality

\n\n

Choice

\n

This was handled by the politicians who considered two dimensions in the selection:

\n\n

Rationality at the limit

\n

On the question of 'how close' a straightforward evolutionary approach was used. Children of the decision-makers were safe, and grandchildren.

\n

\"And our nephews?\" \"Don't push it\".

\n

But the limits of rationality, it seems, are dependent upon gender: While it was recognised that no woman could be expected to agree to the rational sacrifice of her child, it was expected by some that men might have to, and in the end the main character - male - sacrificed a grandchild.

\n

And that's it. Perhaps not a complete disposal of the trolley problem, but nevertheless an interesting excursion into the realms of philosophical dilemmas for a popular drama.  Rationalism is a meme - pass it on.

\n
\n
(1) like many TV aliens: surprisingly able to construct spaceships without the benefit of an opposable thumb
(2) Yes, that was actually the 1965 back-story.
" } }, { "_id": "NuRqybAgstvKk9E45", "title": "Religion and conservation of evidence", "pageUrl": "https://www.lesswrong.com/posts/NuRqybAgstvKk9E45/religion-and-conservation-of-evidence", "postedAt": "2009-07-27T17:05:33.099Z", "baseScore": -7, "voteCount": 7, "commentCount": 4, "url": null, "contents": { "documentId": "NuRqybAgstvKk9E45", "html": "

Stereotypically, people say that religion is the \"opiate of the masses\", and expect poor people to be religious, because it gives them solace for their problems.

\n

But most of the religious people I've known are relatively wealthy, Mercedes-driving people, to whom Christianity gives the comfort of believing that their wealth is the result of God's plan, and of their own virtue, rather than accident.

\n

And the people in-between rich and poor tend to deviate less from accepted social behavior, and thus are more likely to be religious.

\n

So every possible economic status increases the prior probability of being religious.  Wait, that can't be right.

" } }, { "_id": "kYgWmKJnqq8QkbjFj", "title": "Bayesian Utility: Representing Preference by Probability Measures", "pageUrl": "https://www.lesswrong.com/posts/kYgWmKJnqq8QkbjFj/bayesian-utility-representing-preference-by-probability", "postedAt": "2009-07-27T14:28:55.021Z", "baseScore": 50, "voteCount": 25, "commentCount": 37, "url": null, "contents": { "documentId": "kYgWmKJnqq8QkbjFj", "html": "

This is a simple transformation of standard expected utility formula that I found conceptually interesting.

\n

For simplicity, let's consider a finite discrete probability space with non-zero probability p(x) at each point, and a utility function u(x) defined on its sample space. The expected utility of an event A (a set of points of the sample space) is the average value of the utility function weighted by probability over the event, and is written as

\n

EU(A) = \\frac{\\sum_{x \\in A} p(x) u(x)}{\\sum_{x \\in A} p(x)}

\n

Expected utility is a way of comparing events (sets of possible outcomes) that correspond to, for example, available actions. Event A is said to be preferable to event B when EU(A)>EU(B). The preference relation doesn't change when the utility function is transformed by a positive affine transformation. Since the sample space is assumed finite, we can assume without loss of generality that u(x)>0 for all x. Such a utility function can additionally be rescaled so that, over the whole sample space,

\n

\\sum_{x} p(x) u(x) = 1

\n

Now, if we define

\n

q(x)=p(x)u(x)

\n

the expected utility can be rewritten as

\n

EU(A) = \\frac{\\sum_{x \\in A} q(x)}{\\sum_{x \\in A} p(x)}

\n

or

\n

EU(A) = \\frac{Q(A)}{P(A)}

\n

Here, P and Q are two probability measures. It's easy to see that this form of the expected utility formula has the same expressive power, so a preference relation can be defined directly by a pair of probability measures on the same sample space, instead of using a utility function.
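As a sanity check, here is a minimal Python sketch (my own, with made-up numbers for p and u) showing that the rescaled two-measure form agrees with the usual weighted-average formula:

    # Made-up finite sample space, probabilities and (positive) utilities.
    xs = [0, 1, 2, 3, 4]
    p = dict(zip(xs, [0.1, 0.2, 0.3, 0.25, 0.15]))
    u = dict(zip(xs, [2.0, 5.0, 1.0, 4.0, 3.0]))

    # Rescale u so that sum_x p(x) u(x) = 1, then set q(x) = p(x) u(x).
    scale = sum(p[x] * u[x] for x in xs)
    u = {x: u[x] / scale for x in xs}
    q = {x: p[x] * u[x] for x in xs}

    def eu_weighted(A):
        # Classic form: probability-weighted average utility over the event.
        return sum(p[x] * u[x] for x in A) / sum(p[x] for x in A)

    def eu_ratio(A):
        # Two-measure form: Q(A) / P(A).
        return sum(q[x] for x in A) / sum(p[x] for x in A)

    A = [0, 2, 3]
    assert abs(eu_weighted(A) - eu_ratio(A)) < 1e-12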

\n

Expected utility written in this form only uses the measure of the whole event under both P and Q, without looking at the individual points. I tentatively call the measure Q \"shouldness\", with P being \"probability\". The conceptual advantage of this form is that probability and utility are now on equal footing, and it's possible to work with both of them using the familiar Bayesian updating, in exactly the same way. To compute the expected utility of an event given additional information, just use the posterior shouldness and probability:

\n

EU(A|B) = \\frac{Q(A|B)}{P(A|B)}

\n

If events are drawn as points (vectors) in (P,Q) coordinates, expected utility is monotone in the polar angle of the vectors. Since the coordinates show measures of events, a vector depicting a union of nonintersecting events is equal to the sum of the vectors depicting these events:

\n

(P(A \\cup B), Q(A \\cup B)) = (P(A), Q(A)) + (P(B), Q(B)), \\quad A \\cap B = \\emptyset

\n

This allows us to see graphically some of the structure of simple sigma-algebras of the sample space, together with a preference relation defined by a pair of measures. See also this comment on some examples of applying this geometric representation of preference.

\n

A preference relation defined by expected utility this way also doesn't depend on constant factors in the measures, so it's unnecessary to require the measures to sum up to 1.

\n

Since P and Q are just devices representing the preference relation, there is nothing inherently \"epistemic\" about P. Indeed, it's possible to mix P and Q together without changing the preference relation. A pair (p',q') defined by

\n

p'(x) = \\alpha p(x) + (1-\\beta) q(x), \\qquad q'(x) = \\beta q(x) + (1-\\alpha) p(x), \\qquad \\alpha > \\beta

\n

gives the same preference relation,

\n

\\frac{Q(A)}{P(A)} > \\frac{Q(B)}{P(B)} \\Leftrightarrow \\frac{Q'(A)}{P'(A)} > \\frac{Q'(B)}{P'(B)}

\n

(Coefficients can be negative or more than 1, but the values of p' and q' must remain positive.)

\n

Conversely, given a fixed measure P, it isn't possible to define an arbitrary preference relation by only varying Q (or utility function). For example, for a sample space of three elements, a, b and c, if p(a)=p(b)=p(c), then EU(a)>EU(b)>EU(c) means that EU(a+c)>EU(b+c), so it isn't possible to choose q such that EU(a+c)<EU(b+c). If we are free to choose p, however, an example that has these properties (allowing zero values for simplicity) is a=(0,1/4), b=(1/2,3/4), c=(1/2,0), with a+c=(1/2,1/4), b+c=(1,3/4), so EU(a+c)<EU(b+c).
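A quick numeric check of that three-element example, treating a, b and c directly as (P, Q) pairs as in the text (zero values allowed for simplicity):

    a, b, c = (0.0, 0.25), (0.5, 0.75), (0.5, 0.0)

    def union(*events):
        # P and Q are additive over disjoint events.
        return (sum(e[0] for e in events), sum(e[1] for e in events))

    def eu(event):
        P, Q = event
        return float('inf') if P == 0 else Q / P

    print(eu(a), eu(b), eu(c))               # inf, 1.5, 0.0: EU(a) > EU(b) > EU(c)
    print(eu(union(a, c)), eu(union(b, c)))  # 0.5 < 0.75: EU(a+c) < EU(b+c)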

\n

A prior is an integral part of preference, and it works exactly the same way as shouldness. Manipulations with probabilities, or Bayesian \"levels of certainty\", are manipulations with \"half of preference\". The problem of choosing Bayesian priors is in general the problem of formalizing preference; it can't be solved completely without considering utility, without formalizing values, and values are very complicated. No simple morality, no simple probability.

\n" } }, { "_id": "DiBoWCBKTtzhQMwpu", "title": "The Second Best", "pageUrl": "https://www.lesswrong.com/posts/DiBoWCBKTtzhQMwpu/the-second-best", "postedAt": "2009-07-26T22:58:42.349Z", "baseScore": 21, "voteCount": 23, "commentCount": 59, "url": null, "contents": { "documentId": "DiBoWCBKTtzhQMwpu", "html": "

In economics, the ideal, or first best, outcome for an economy is a Pareto-efficient one, meaning one in which no market participant can be made better off without someone else being made worse off. But it can only occur under the conditions of “Perfect Competition” in all markets, which never occurs in reality. And when it is impossible to achieve Perfect Competition due to some unavoidable market failures, obtaining the second best (i.e., best given the constraints) outcome may involve distorting markets even further away from Perfect Competition.

To me, perhaps because it was the first such result that I learned, “second best” has come to stand generally for the yawning gap between individual rationality and group rationality. But similar results abound. For example, in Social Choice Theory, Arrow's Impossibility Theorem states that there is no voting method that satisfies a certain set of axioms, which are usually called fairness axioms, but can perhaps be better viewed as group rationality axioms. In Industrial Organization, a duopoly can best maximize profits by colluding to raise prices. In Contract Theory, rational individuals use up resources to send signals that do not contribute to social welfare. In Public Choice Theory, special interest groups successfully lobby the government to implement inefficient policies that benefit them at the expense of the general public (and each other).

On an individual level, the fact that individual and group rationality rarely coincide means that often, to pursue one is to give up the other. For example, if you’ve never cheated on your taxes, or slacked off at work, or lost a mutually beneficial deal because you bargained too hard, or failed to inform yourself about a political candidate before you voted, or tried to monopolize a market, or annoyed your spouse, or annoyed your neighbor, or gossiped maliciously about a rival, or sounded more confident about an argument than you were, or taken offense at a truth, or [insert your own here], then you probably haven't been individually rational.

\"But, I'm an altruist,\" you might claim, \"my only goal is societal well-being.\" Well, unless everyone you deal with is also an altruist, and with the exact same utility function, the above still applies, although perhaps to a lesser extent. You should still cheat on your taxes because the government won't spend your money as effectively as you can. You should still bargain hard enough to risk losing deals occasionally because the money you save will do more good for society (by your values) if left in your own hands.

What is the point of all this? It's that group rationality is damn hard, and we should have realistic expectations about what's possible. (Maybe then we won't be so easily disappointed.) I don't know if you noticed, but Pareto efficiency, that so-called optimality criterion, is actually incredibly weak. It says nothing about how conflicts between individual values must be adjudicated, just that if there is a way to get a better result for some with others no worse off, we'll do that. In individual rationality, its analog would be something like, \"given two choices where the first better satisfies every value you have, you won't choose the second,\" which is so trivial that we never bother to state it explicitly. But we don't know how to achieve even this weak form of group rationality in most settings.

In a way, the difficulty of group rationality makes sense. After all, rationality (or the potential for it) is almost a defining characteristic of individuality. If individuals from a certain group always acted for the good of the group, then what makes them individuals, rather than interchangeable parts of a single entity? For example, don't we see a Borg cube as one individual precisely because it is too rational as a group? Since achieving perfect Borg-like group rationality presumably isn't what we want anyway, maybe settling for second best isn't so bad.

" } }, { "_id": "WhWTwQJaiEFxvXB96", "title": "Bayesian Flame", "pageUrl": "https://www.lesswrong.com/posts/WhWTwQJaiEFxvXB96/bayesian-flame", "postedAt": "2009-07-26T16:49:51.120Z", "baseScore": 41, "voteCount": 48, "commentCount": 163, "url": null, "contents": { "documentId": "WhWTwQJaiEFxvXB96", "html": "

There once lived a great man named E.T. Jaynes. He knew that Bayesian inference is the only way to do statistics logically and consistently, standing on the shoulders of misunderstood giants Laplace and Gibbs. On numerous occasions he vanquished traditional \"frequentist\" statisticians with his superior math, demonstrating to anyone with half a brain how the Bayesian way gives faster and more correct results in each example. The weight of evidence falls so heavily on one side that it makes no sense to argue anymore. The fight is over. Bayes wins. The universe runs on Bayes-structure.

\n

Or at least that's what you believe if you learned this stuff from Overcoming Bias.

\n

Like I was until two days ago, when Cyan hit me over the head with something utterly incomprehensible. I suddenly had to go out and understand this stuff, not just believe it. (The original intention, if I remember it correctly, was to impress you all by pulling a Jaynes.) Now I've come back and intend to provoke a full-on flame war on the topic. Because if we can have thoughtful flame wars about gender but not math, we're a bad community. Bad, bad community.

\n

If you're like me two days ago, you kinda \"understand\" what Bayesians do: assume a prior probability distribution over hypotheses, use evidence to morph it into a posterior distribution over same, and bless the resulting numbers as your \"degrees of belief\". But chances are that you have a very vague idea of what frequentists do, apart from deriving half-assed results with their ad hoc tools.

\n

Well, here's the ultra-short version: frequentist statistics is the art of drawing true conclusions about the real world instead of assuming prior degrees of belief and coherently adjusting them to avoid Dutch books.

\n

And here's an ultra-short example of what frequentists can do: estimate 100 independent unknown parameters from 100 different sample data sets and have 90 of the estimates turn out to be true to fact afterward. Like, fo'real. Always 90% in the long run, truly, irrevocably and forever. No Bayesian method known today can reliably do the same: the outcome will depend on the priors you assume for each parameter. I don't believe you're going to get lucky with all 100. And even if I believed you a priori (ahem) that don't make it true.

\n

(That's what Jaynes did to achieve his awesome victories: use trained intuition to pick good priors by hand on a per-sample basis. Maybe you can learn this skill somewhere, but not from the Intuitive Explanation.)

\n

How in the world do you do inference without a prior? Well, the characterization of frequentist statistics as \"trickery\" is totally justified: it has no single coherent approach and the tricks often give conflicting results. Most everybody agrees that you can't do better than Bayes if you have a clear-cut prior; but if you don't, no one is going to kick you out. We sympathize with your predicament and will gladly sell you some twisted technology!

\n

Confidence intervals: imagine you somehow process some sample data to get an interval. Further imagine that hypothetically, for any given hidden parameter value, this calculation algorithm applied to data sampled under that parameter value yields an interval that covers it with probability 90%. Believe it or not, this perverse trick works 90% of the time without requiring any prior distribution on parameter values.
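If you want to see that in action, here's a minimal simulation sketch (mine, not the post's; the particular numbers are arbitrary) for the textbook case of a normal mean with known standard deviation:

    import random
    import statistics

    def ci90(sample, sigma):
        # Two-sided 90% interval for the mean of N(mu, sigma^2), sigma known:
        # sample mean +/- 1.645 * sigma / sqrt(n).
        m = statistics.fmean(sample)
        half = 1.645 * sigma / len(sample) ** 0.5
        return m - half, m + half

    random.seed(0)
    true_mu, sigma, n, reps = 7.3, 2.0, 25, 10_000
    hits = 0
    for _ in range(reps):
        sample = [random.gauss(true_mu, sigma) for _ in range(n)]
        lo, hi = ci90(sample, sigma)
        hits += lo <= true_mu <= hi
    print(hits / reps)  # close to 0.90, whatever true_mu happens to be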

\n

Unbiased estimators: you process the sample data to get a number whose expectation magically coincides with the true parameter value.

\n

Hypothesis testing: I give you a black-box random distribution and claim it obeys a specified formula. You sample some data from the box and inspect it. Frequentism allows you to call me a liar and reject truthful claims no more than 10% of the time (I originally wrote \"be wrong no more than 10% of the time\"), guaranteed, no prior in sight. (Thanks Eliezer for calling out the mistake, and conchis for the correction!)

\n

But this is getting too academic. I ought to throw you dry wood, good flame material. This hilarious PDF from Andrew Gelman should do the trick. Choice quote:

\n
\n

Well, let me tell you something. The 50 states aren't exchangeable. I've lived in a few of them and visited nearly all the others, and calling them exchangeable is just silly. Calling it a hierarchical or multilevel model doesn't change things - it's an additional level of modeling that I'd rather not do. Call me old-fashioned, but I'd rather let the data speak without applying a probability distribution to something like the 50 states which are neither random nor a sample.

\n
\n

As a bonus, the bibliography to that article contains such marvelous titles as \"Why Isn't Everyone a Bayesian?\" And Larry Wasserman's followup is also quite disturbing.

\n

Another stick for the fire is provided by Shalizi, who (among other things) makes the correct point that a good Bayesian must never be uncertain about the probability of any future event. That's why he calls Bayesians \"Often Wrong, Never In Doubt\":

\n
\n

The Bayesian, by definition, believes in a joint distribution of the random sequence X and of the hypothesis M. (Otherwise, Bayes's rule makes no sense.) This means that by integrating over M, we get an unconditional, marginal probability for f.

\n
\n

For my final quote it seems only fair to add one more polemical summary of Cyan's point that made me sit up and look around in a bewildered manner. Credit to Wasserman again:

\n
\n

Pennypacker: You see, physics has really advanced. All those quantities I estimated have now been measured to great precision. Of those thousands of 95 percent intervals, only 3 percent contained the true values! They concluded I was a fraud.

\n

van Nostrand: Pennypacker you fool. I never said those intervals would contain the truth 95 percent of the time. I guaranteed coherence not coverage!

\n

Pennypacker: A lot of good that did me. I should have gone to that objective Bayesian statistician. At least he cares about the frequentist properties of his procedures.

\n

van Nostrand: Well I'm sorry you feel that way Pennypacker. But I can't be responsible for your incoherent colleagues. I've had enough now. Be on your way.

\n
\n

There's often good reason to advocate a correct theory over a wrong one. But all this evidence (ahem) shows that switching to Guardian of Truth mode was, at the very least, premature for me. Bayes isn't the correct theory to make conclusions about the world. As of today, we have no coherent theory for making conclusions about the world. Both perspectives have serious problems. So do yourself a favor and switch to truth-seeker mode.

" } }, { "_id": "vHAyYY48fawQFuAfd", "title": "Five Stages of Idolatry", "pageUrl": "https://www.lesswrong.com/posts/vHAyYY48fawQFuAfd/five-stages-of-idolatry", "postedAt": "2009-07-25T18:16:54.737Z", "baseScore": 10, "voteCount": 9, "commentCount": 16, "url": null, "contents": { "documentId": "vHAyYY48fawQFuAfd", "html": "

We all have heroes or idols, people we look up to and turn to, in one form or another, for guidance or wisdom. Over the years, I've noticed that my feelings towards those I've idolized tend to follow a predictable pattern. The following is the extraction of this pattern.

\n
Stage 1: Exposure - you're exposed to the idol through some channel. Maybe something you read, or someone you know, or simply by chance. You begin to learn about them, and you become intrigued. If it's an author, maybe you pick up one of his books. If it's a group, maybe you check out their website. You begin to gradually absorb what the idol is offering. They don't actually become an idol though, until...
\n
Stage 2: Resonance - after enough exposure, what the idol offers begins to strike a chord with you. You go on to ravenously consume everything related to it. You track down every one of the author's publications, or spend hours staring at all of an artist's paintings. As far as I can tell, what's important here isn't actually the content of what the idol offers, but that feeling of resonance it engenders.
\n
Stage 3: Incorporation - the idol has become one of the lenses through which you view the world. Everyone and everything is compared to the idol, and everyone invariably comes up short (raise your hand if you've ever thought about someone \"Well, they're pretty smart, but not as smart as Eliezer\"). You change your lifestyle to be more like them, to think more like them. It's as if they have all aspects of life figured out, and you follow along in the hopes that you will reach their same understanding. The most fervent support of the idol lies here.
\n
Stage 4: Backlash - you start to realize that the idol does not, in fact, provide the answers to all of life's questions. That they might be wrong about some things, or that their specific offering does not apply to all of life's situations. You've changed yourself to emulate the idol, and you realize that perhaps not all those changes were for the better. Paradoxically, the blame for this gets placed on the idol instead of you. Feelings toward it shift from worship to antipathy.
\n
Stage 5: Re-incorporation - after enough time has been spent hating the idol, you come to realize, if only subconsciously, that the fault lies not in the idol, but in your worship. The cycle ends with a more reserved incorporation of what the idol does offer, along with the realization of what it doesn't. The idol ceases to be an idol, and becomes another earthly entity, complete with faults.
\n
Of course, the sample size I'm working with is one - this may be a general feature of humans, or simply a nuance of my individual brain. If I had to bet though, my money would lie with the general feature - it's happened to me many different times over the years, with my brain in many different stages of development. And I suspect I've seen others in various stages of this (though I have no way of knowing, really.)
\n
Looking at the above stages, worship seems to mirror the progression of infectious disease. Exposure leads to an infection, which then spreads throughout the body. At this point, the body mounts a counterattack, and produces masses of white blood cells to fight off the infection. The infection is expunged, the white blood cells return to normal levels, and you're left with antibodies which contribute to a more complete immune system.
\n
The problem with this isn't so much that it isn't rational per se - taking new evidence and updating our beliefs until they converge on the 'right' answer seems to be exactly the sort of thing we should be doing. The problem is how long it can take to get through the stages. In the past it's taken me years to get through stage five after encountering something new, and true believers in something seem to reach stage three and then just stay in it. But we should be updating our beliefs as quickly as possible, not languishing with the wrong answer for huge chunks of our lives.
\n
So my question to the community is twofold:
\n
1) Is this something that happens to you?
\n
and
\n
2) Assuming this is a basic mental process that can't be just turned off, how can we cycle through it faster, so we can more quickly reach accurate beliefs?
\n
For my part, simply recognizing that this cycle exists seems like it's reduced both its duration and the extremes I swing to in each direction. But I'm curious if I can do better.
\n

 

" } }, { "_id": "tMeGhbmbdsMSfRGJp", "title": "Link: Interview with Vladimir Vapnik", "pageUrl": "https://www.lesswrong.com/posts/tMeGhbmbdsMSfRGJp/link-interview-with-vladimir-vapnik", "postedAt": "2009-07-25T13:36:52.175Z", "baseScore": 22, "voteCount": 19, "commentCount": 7, "url": null, "contents": { "documentId": "tMeGhbmbdsMSfRGJp", "html": "

I recently stumbled across this remarkable interview with Vladimir Vapnik, a leading light in statistical learning theory, one of the creators of the Support Vector Machine algorithm, and generally a cool guy. The interviewer obviously knows his stuff and asks probing questions. Vapnik describes his current research and also makes some interesting philosophical comments:

\n
\n

V-V: I believe that something drastic has happened in computer science and machine learning. Until recently, philosophy was based on the very simple idea that the world is simple. In machine learning, for the first time, we have examples where the world is not simple. For example, when we solve the \"forest\" problem (which is a low-dimensional problem) and use data of size 15,000 we get 85%-87% accuracy. However, when we use 500,000 training examples we achieve 98% of correct answers. This means that a good decision rule is not a simple one, it cannot be described by a very few parameters. This is actually a crucial point in approach to empirical inference.

\n

    This point was very well described by Einstein who said \"when the solution is simple, God is answering\". That is, if a law is simple we can find it. He also said \"when the number of factors coming into play is too large, scientific methods in most cases fail\". In machine learning we dealing with a large number of factors. So the question is what is the real world? Is it simple or complex? Machine learning shows that there are examples of complex worlds. We should approach complex worlds from a completely different position than simple worlds. For example, in a complex world one should give up explain-ability (the main goal in classical science) to gain a better predict-ability.

\n

R-GB: Do you claim that the assumption of mathematics and other sciences that there are very few and simple rules that govern the world is wrong?

\n

V-V: I believe that it is wrong. As I mentioned before, the (low-dimensional) problem \"forest\" has a perfect solution, but it is not simple and you cannot obtain this solution using 15,000 examples.

\n
\n

Later:

\n
\n

R-GB: What do you think about the bounds on uniform convergence? Are they as good as we can expect them to be?

\n

V-V: They are O.K. However the main problem is not the bound. There are conceptual questions and technical questions. From a conceptual point of view, you cannot avoid uniform convergence arguments; it is a necessity. One can try to improve the bounds, but it is a technical problem. My concern is that machine learning is not only about technical things, it is also about philosophy: What is the complex world science about? The improvement of the bound is an extremely interesting problem from mathematical point of view. But even if you'll get a better bound it will not be able help to attack the main problem: what to do in complex worlds?

\n
" } }, { "_id": "mFrx6YbQ6Dsup2Jvp", "title": "Freaky Fairness", "pageUrl": "https://www.lesswrong.com/posts/mFrx6YbQ6Dsup2Jvp/freaky-fairness", "postedAt": "2009-07-25T08:14:32.185Z", "baseScore": 14, "voteCount": 13, "commentCount": 36, "url": null, "contents": { "documentId": "mFrx6YbQ6Dsup2Jvp", "html": "

Consider this game:

\n

[The game's payoff matrix appeared here as an image; it is not reproduced in this text version.]

\n

where the last payoff pair is very close to (3,2). I choose a row and you choose a column simultaneously, then I receive the first payoff in a pair and you receive the second. The game has no Nash equilibria in pure strategies, but that's beside the point right now because we drop the competitive setting and go all cooperative: all payoffs are in dollars and transferable, and we're allowed beforehand to sign a mutually binding contract about the play and the division of revenue. The question is, how much shall we win and how should we split it?

\n

Game theory suggests we should convert the competitive game to a coalitional game and compute the Shapley value to divide the spoils. (Or some other solution concept, like the \"nucleolus\", but let's not go there. Assume for now that the Shapley value is \"fair\".) The first step is to assign a payoff to each of the 2^N = 4 possible coalitions. Clearly, the empty coalition should receive 0, and the grand coalition (me and you) gets the maximum possible sum: 6 dollars. But what payoffs should we assign to the coalition of me and the coalition of you?

\n

Now, there are at least two conflicting approaches to doing this: alpha and beta. The alpha approach says that \"the value a coalition can get by itself\" is its security value, i.e. the highest value it can win guaranteed if it chooses the strategy first. My alpha value is 2, and yours is 2+ϵ2. The beta approach says that \"the value a coalition can get by itself\" is the highest value that it cannot be prevented from winning if it chooses its strategy second. My beta value is 3+ϵ1, and yours is 3.

\n

Astute readers already see the kicker: the Shapley value computed from alphas assigns 3-ϵ2/2 dollars to me and 3+ϵ2/2 dollars to you. The Shapley value of betas does the opposite for ϵ1. So who owes whom a penny?
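You can check those numbers by computing the Shapley value directly from the coalition values; here is a small Python sketch (eps2 is an arbitrary small number standing in for ϵ2):

    from itertools import permutations
    from math import factorial

    def shapley(players, v):
        # phi_i = average, over all join orders, of i's marginal contribution.
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            so_far = frozenset()
            for p in order:
                phi[p] += v[so_far | {p}] - v[so_far]
                so_far = so_far | {p}
        return {p: phi[p] / factorial(len(players)) for p in players}

    eps2 = 0.01
    v = {
        frozenset(): 0.0,
        frozenset({'me'}): 2.0,            # my alpha (security) value
        frozenset({'you'}): 2.0 + eps2,    # your alpha value
        frozenset({'me', 'you'}): 6.0,     # the grand coalition's 6 dollars
    }
    print(shapley(['me', 'you'], v))       # me: 3 - eps2/2, you: 3 + eps2/2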

\n

That's disturbing.

\n

Aha, you say. We should have considered mixed strategies when computing alpha and beta values! In fact, if we do so, we'll find that my alpha value equals my beta value and your alpha equals your beta, because that's true for games with mixed strategies in general (a result equivalent to the minimax theorem). My security value is (10+4ϵ1)/(4+ϵ1), and yours is (10-ϵ2)/(4-ϵ2).
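Computing a security value in mixed strategies is itself a small linear program. Here is a sketch; since the game's payoff matrix only appeared as an image above, the matrix used here is a placeholder, not the actual game from this post:

    import numpy as np
    from scipy.optimize import linprog

    def security_value(payoffs):
        # payoffs[i][j] = this player's payoff when they play row i and the
        # opponent plays column j. Maximize v subject to: for every column j,
        # sum_i payoffs[i][j] * x[i] >= v, with x a mixed strategy.
        A = np.asarray(payoffs, dtype=float)
        m, k = A.shape
        c = np.zeros(m + 1)
        c[-1] = -1.0                                  # minimize -v, i.e. maximize v
        A_ub = np.hstack([-A.T, np.ones((k, 1))])     # v - sum_i A[i, j] x[i] <= 0
        b_ub = np.zeros(k)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
        b_eq = np.ones(1)
        bounds = [(0, 1)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[-1]

    print(security_value([[3.0, 0.0], [1.0, 2.0]]))   # 1.5 for this placeholder game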

\n

This still means the signs of the epsilons determine who owes whom a penny. That's funny because, if you plot the game's payoffs, you will see that the game isn't a quadrilateral like the PD; it's a triangle. And the point (3+ϵ1,2+ϵ2) that determines the outcome, the point that we can ever-so-slightly wiggle to change who of us gets more money... lies inside that triangle. It can be reached by a weighted combination of the other three outcomes.

\n

That's disturbing too.

\n

...

\n

Now, this whole rambling series of posts was spurred by Eliezer's offhand remark about \"AIs with knowledge of each other's source code\". I formalize the problem thus: all players simultaneously submit programs that will receive everyone else's source code as input and print strategy choices for the game as output. The challenge is to write a good program without running into the halting problem, Rice's theorem or other obstacles.

\n

Without further ado I generalize the procedure described above and present to you an algorithm Freaky Fairness — implementable in an ordinary programming language like Python — that achieves a Nash equilibrium in algorithms and a Pareto optimum simultaneously in any N-player game with transferable utility:

\n
    \n
  1. Calculate the security values in mixed strategies for all subsets of players.
  2. Divide all other players into two groups: those whose source code is an exact copy of Freaky Fairness (friends), and everyone else (enemies).
  3. If there are no enemies: build a Shapley value from the computed security values of coalitions; play my part in the outcome that yields the highest total sum in the game; give up some of the result to others so that the resulting allocation agrees with the Shapley value.
  4. If there are enemies: play my part in the outcome that brings the total payoff of the coalition of all enemies down to their security value.
\n

Proof that all players using this algorithm constitutes a Nash equilibrium: any coalition of players that decides to deviate (collectively or individually) cannot win a total payoff greater than their group security value, by point 4. If they cooperate, they collectively get no less than their group security value, by superadditivity and the construction of the Shapley value.

\n

(NB: we have tacitly assumed that all payoffs in the game are positive, so the Shapley value makes sense. If some payoffs are negative, give everyone a million dollars before the game and take them away afterward; both the Shapley value and the minimax survive such manipulations.)

\n

In retrospect the result seems both obvious and startling. Obvious because it closely follows the historically original derivation of the Shapley value. Startling because we're dealing with a class of one-shot competitive games: players enter their programs blindly, striving to maximize only their own payoff. Yet all such games turn out to have Nash equilibria that are Pareto-optimal, and in pure strategies to boot. Pretty neat, huh?

\n

I've seriously doubted whether to post this or not. But there might be mistakes, and many eyes will be more likely to spot them. Critique is welcome!

\n

UPDATE 12.01.2011: benelliott found a stupid mistake in my result, so it's way less applicable than I'd thought. Ouch.

" } }, { "_id": "GsG4oSndoHMfGaHQW", "title": "Many Reasons", "pageUrl": "https://www.lesswrong.com/posts/GsG4oSndoHMfGaHQW/many-reasons", "postedAt": "2009-07-25T05:09:28.208Z", "baseScore": 2, "voteCount": 4, "commentCount": 6, "url": null, "contents": { "documentId": "GsG4oSndoHMfGaHQW", "html": "

I'm here to teach you a new phobia. The phobia concerns phrases such as \"for many reasons\".

\n

Rational belief updating is a random walk without drift. If you expect your belief to go up (down) in response to evidence, you should instead make it go up (down) right now. If you're not convinced, read about conservation of expected evidence, or the law of iterated expectations.

\n

If evidence comes in similar-sized chunks, the number of chunks in the \"for\" direction follows a binomial distribution with p=.5. Such a distribution can put most of the pieces in the same direction, but if the number of pieces is high, this will happen quite rarely.

\n

So if you can find, say, ten reasons to do or believe something and no reasons not to, something is going on.

\n

One possibility is it's a one in a thousand coincidence. But let's not dwell on that.
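For what it's worth, that figure checks out under the binomial model above, assuming ten independent chunks of evidence that each point \"for\" with probability 1/2:

    from math import comb

    n = 10
    p_all_for = comb(n, n) * 0.5 ** n   # all ten chunks point the chosen way
    p_all_same = 2 * p_all_for          # all ten agree, in either direction
    print(p_all_for, p_all_same)        # about 0.001 and 0.002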

\n

Another possibility is that the process generating your reasons, while unbiased, is skewed. That is to say, it produces many weak reasons in one direction and a few strong reasons in the other, and it just happened not to produce such a strong reason in your case. And so we have many empirical reasons to think the Sun will rise tomorrow (e.g., it rose on June 3rd 1978 and February 16th 1260), and none that it won't. But this does not seem to describe cases like \"what university should I choose\", \"should I believe in a hard takeoff singularity\", or \"is global warming harmful on net\".

\n

Another possibility (probably a special case of the previous one, but worth stating on its own) is that what you're describing as \"many reasons\" is really a set of different manifestations of the same underlying reason. Maybe you have a hundred legitimate reasons for not hiring someone, including that he smashes furniture, howls at the moon, and strangles kittens. If so, the reason underlying all these may just be that he's nuts.

\n

Then there's the last, scariest, most important possibility. You may be biased toward finding reasons in one direction, so that you will predictably trend toward your favorite belief. This means you're doing something wrong! Luckily, thinking about why the phrase \"for many reasons\" came to mind is exactly how you find out.

\n

In sum, when your brain speaks of \"many reasons\" all going the same way, grab, shake, and strangle it. It may just barf up a better, more compressed way of seeing the world, or confess to confirmation bias. 

\n

(Incidentally, this also applies to the phrase \"in many ways\". If you judge someone to be in many ways a weird person, that suggests he has some underlying property that causes many kinds of weirdness, or that you have some underlying property that causes you to judge his traits as weird. Both are noteworthy.)

" } }, { "_id": "6x9S4LBKcFmqvMJfi", "title": "Celebrate Trivial Impetuses", "pageUrl": "https://www.lesswrong.com/posts/6x9S4LBKcFmqvMJfi/celebrate-trivial-impetuses", "postedAt": "2009-07-24T22:36:05.049Z", "baseScore": 48, "voteCount": 48, "commentCount": 41, "url": null, "contents": { "documentId": "6x9S4LBKcFmqvMJfi", "html": "

There is a flipside to the trivial inconvenience: the trivial impetus.  This is the objectively inconsequential factor that gets you off your rear and doing something you probably would have left undone.  It doesn't have to be a major, crippling akrasia issue.  I'm not talking so much about finishing your dissertation or remodeling your house, although a trivial impetus could probably get you to make some progress on either.  I'm talking about little things that make your life a little better, like trying a new food or permitting a friend to drag you along to a gathering of people and pizza.

\n

An illustrative anecdote: the first time I tried guacamole, I was out with my family at a restaurant and my parents decided to order some.  The waiter came out with a little cart with decorative little bowls full of ingredients and a couple of avocados, and proceeded to make guacamole right there with all the finesse of one of those chefs at a hibachi restaurant.  He then presented us with the dish of guacamole and a basket of chips.

\n

If my prior reasons for avoiding guacamole had been related to concerns about its freshness or possible arsenic content, this would have been a non-trivial reason to try the new food, but they weren't - I was just twelve, and it was green goop.  But on that day, it was green goop that someone had made right in front of me like performance art!  I simply had to have some!  It was delicious.  I have enjoyed guacamole ever since.  I would almost certainly have taken years longer to try it, if ever I did, had it not been for that restaurant's habit of making each batch of guacamole fresh in front of the customer.

\n

Not all trivial impetuses have to be so random and fortuitous.  Just as you can arrange trivial inconveniences to stand between you and things you should not be doing, you can often arrange trivial impetuses to push you towards things you should be doing.  For instance, I often get my friends to instruct me to do things when I'm having trouble getting moving: sometimes all it takes to get me to stop dithering and start making the pasta salad I agreed to bring to a party is someone agreeing when I say, \"I should make pasta salad now\".  Or \"I should go to bed now\", or \"I should probably pay that bill now\".

\n

Does anyone have any other ideas for trivial impetuses that could be helpful in fighting small-scale akrasia (or large-scale)?

" } }, { "_id": "BaffPrQtKYSABigNb", "title": "Are calibration and rational decisions mutually exclusive? (Part two)", "pageUrl": "https://www.lesswrong.com/posts/BaffPrQtKYSABigNb/are-calibration-and-rational-decisions-mutually-exclusive", "postedAt": "2009-07-24T00:49:14.505Z", "baseScore": 8, "voteCount": 9, "commentCount": 15, "url": null, "contents": { "documentId": "BaffPrQtKYSABigNb", "html": "

In my previous post, I alluded to a result that could potentially convince a frequentist to favor Bayesian posterior distributions over confidence intervals. It’s called the complete class theorem, due to a statistician named Abraham Wald. Wald developed the structure of frequentist decision theory and characterized the class of decision rules that have a certain optimality property.

\n

Frequentist decision theory reduces the decision process to its basic constituents, i.e., data, actions, true states, and incurred losses. It connects them using mathematical functions that characterize their dependencies, i.e., the true state determines the probability distribution of the data, the decision rule maps data to a particular action, and the chosen action and true states together determine the incurred loss. To evaluate potential decision rules, frequentist decision theory uses the risk function, which is defined as the expected loss of a decision rule with respect to the data distribution. The risk function therefore maps (decision rule, true state)-pairs to the average loss under a hypothetical infinite replication of the decision problem.

\n

Since the true state is not known, decision rules must be evaluated over all possible true states. A decision rule is said to be “dominated” if there is another decision rule whose risk is never worse for any possible true state and is better for at least one true state. A decision rule which is not dominated is deemed “admissible”. (This is the optimality property alluded to above.) The punch line is that under some weak conditions, the complete class of admissible decision rules is precisely the class of rules which minimize a Bayesian posterior expected loss.
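As an illustration of risk functions and (non-)dominance, and not an example from the post, here is a small Python sketch comparing two estimators of a binomial proportion under squared-error loss, with the risk computed exactly by summing over the data distribution:

    from math import comb

    def risk(estimator, n, p):
        # R(p, d) = E_p[(d(X) - p)^2] with X ~ Binomial(n, p), summed exactly.
        return sum(comb(n, x) * p ** x * (1 - p) ** (n - x)
                   * (estimator(x, n) - p) ** 2 for x in range(n + 1))

    mle = lambda x, n: x / n                 # maximum-likelihood estimator
    bayes = lambda x, n: (x + 1) / (n + 2)   # posterior mean under a Beta(1,1) prior

    n = 10
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(p, round(risk(mle, n, p), 5), round(risk(bayes, n, p), 5))
    # The Bayes rule has lower risk for middling p, the MLE for extreme p,
    # so neither decision rule dominates the other.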

\n

(This result sparked interest in the Bayesian approach among statisticians in the 1950s. This interest eventually led to the axiomatic decision theory that characterizes rational agents as obeying certain fundamental constraints and proves that they act as if they had a prior distribution and a loss function.)

\n

Taken together, the calibration results of the  previous post and the complete class theorem suggest (to me, anyway) that irrespective of one's philosophical views on frequentism versus Bayesianism, perfect calibration is not possible in full generality for a rational decision-making agent.

" } }, { "_id": "XpXQ4KNzLa9ZHYw8p", "title": "AndrewH's observation and opportunity costs", "pageUrl": "https://www.lesswrong.com/posts/XpXQ4KNzLa9ZHYw8p/andrewh-s-observation-and-opportunity-costs", "postedAt": "2009-07-23T11:36:34.233Z", "baseScore": 29, "voteCount": 31, "commentCount": 61, "url": null, "contents": { "documentId": "XpXQ4KNzLa9ZHYw8p", "html": "

In his discussion of \"cryocrastination\", AndrewH makes a pretty good point. There may be some better things you can do with the money you'd spend on cryonics insurance. The sort of people who are into cryonics would probably accept that donating it to the Singularity Institute is probably, all in all, a higher utility use of however many dollars. Andrew's conclusion is that you should figure out what maximizes utility and do it, regardless of how small a contribution is involved. He's right, but I want to use the same example to push a point that is very slightly different, or maybe a little more general, or maybe the exact same one but phrased differently.

Consider an argument frequently made when politicians are discussing the budget. I frequently hear people say it would cost between ten and twenty billion dollars a year to feed all the hungry people in the world. I don't know if that's true or not, and considering the recent skepticism about aid it probably isn't, but let's say the politicians believe it. So when they look at (for example) NASA's budget of fifteen billion dollars, they say something like \"It's criminal to be spending all this money on space probes and radio telescopes when it could eliminate world hunger, so let's cut NASA's budget.\"

You see the problem? When we cut NASA's budget, it doesn't immediately go into the \"solve world hunger\" fund. It goes into the rest of the budget, and probably gets divided among the Congressman Johnson Memorial Fisheries Museum and purchasing twelve-thousand-dollar staplers.

The same is true of cryocrastination. Unless you actually take that money you would have spent on cryonics and donate it to the Singularity Institute, it's going into the rest of your budget, and you'll probably spend it on coffee and plasma TVs and famous statistician trading cards and whatever else.

I find myself frequently making this error in the following way: a beggar asks me for money, and I want to give it to them on the grounds that they have activated my urge to help people. Then I think to myself \"I can't justify giving the money to this beggar when it would help many more people if I gave it to a responsible charity.\" So I say no, and forget all about it, and never give the money to anyone. Even though (from a charity point of view) I know of a superior alternative to giving the money to the beggar, I would still be better off just giving the beggar the money!

All this means that for any entity that does not use its resources with maximum efficiency, the opportunity cost of spending a certain amount of resources should not be calculated as what you'd earn from the best possible use of those resources, but as what you'll earn from the use of those resources which you expect to actually occur.

" } }, { "_id": "QPqm5aj2meRmE7kR8", "title": "The Nature of Offense", "pageUrl": "https://www.lesswrong.com/posts/QPqm5aj2meRmE7kR8/the-nature-of-offense", "postedAt": "2009-07-23T11:15:54.647Z", "baseScore": 137, "voteCount": 115, "commentCount": 179, "url": null, "contents": { "documentId": "QPqm5aj2meRmE7kR8", "html": "

\n \n

Recently, an extended discussion has taken place over the fact that a portion of comments here were found to be offensive by some members of this community, while others denied their offensive nature or professed to be puzzled by why they are considered offensive. Several possible explanations for why the comments are offensive have been advanced, and solutions offered based on them:

\n\n

Each of these explanations seems to have an element of truth, and each solution seems to have a chance of ameliorating the problem. But even though the discussion has mostly died down, we appear far from reaching an agreement, and I think one reason may be the lack of a general theory of the phenomenon of \"offense\", in the sense of giving and taking offense, that we can use to explain what has happened, so all of the proposed explanations and solutions feel somewhat arbitrary and unfair.

\n

(I think this article has it mostly right, but I’ll give a much shorter account since I can skip the background evo psych info, and I’m not being paid by the word. :)

\n

Let’s consider what other behaviors are often considered offensive and see if we can find a pattern:

\n\n

What do all these have in common? Hint: the answer is quite ironic, given the comment that first triggered this whole fracas.

\n
\n

most people here don't value social status enough and (especially the men) don't value having sex with extremely attractive women that money and status would get them

\n
\n

As you may have guessed by now, I think the answer is status. Specifically, to give offense is to imply that a person or group has or should have low status. Taking offense then becomes easy to explain: it’s to defend someone’s status from such an implication, out of a sense of either fairness or self-interest. Let’s go back to the three hypotheses I collected and see if this theory can cover them as special cases.

\n

“to be thought of, talked about as, or treated like a non-person” Well, to be like a non-person is clearly to have low status.

\n

“analysis of behavior that puts the reader in the group being analyzed, and the speaker outside it” A typical situation in which one group analyzes the behavior of another is a scientific study. In such a study, the researchers usually have higher status than the subjects being studied. But even to offer a casual analysis of someone else’s behavior is to presume more intelligence, insight, or wisdom than that person has.

\n

“exclusion from the intended audience” To be excluded from the intended audience is to be labeled an outsider by implication, and outsiders typically have lower status than insiders.

\n

But to fully understand why this particular comment is especially offensive, I think we have to consider that it (as well as many PUA discussions) specifically advocates (or appears to advocate) treating women as sex objects instead of potential romantic partners. Now think of the status difference between a sex object and a romantic partner...

\n

Ethical Implications

\n

Usually, one avoids giving offense by minding one’s audience and taking care not to use any language that might cause offense to any audience member. This is very easy to do one-on-one, pretty easy in a small group, hard in front of a large audience (case in point: Larry Summers’s infamous speech), and almost impossible on an Internet forum with a large, diverse, and invisible audience, unless one simply avoids talking about everything that might possibly have anything to do with anyone’s status.

\n

Still, that doesn’t mean that we shouldn’t try to avoid giving offense when we can do so without affecting the point that we’re making, or consider skipping a minor point if it necessarily gives offense.  After all, to lower someone’s social status is to cause a real harm. On the other side of this interaction, we should consider the possibility that our offensiveness sense may be tuned too sensitively, perhaps for an ancestral environment where mass media didn’t exist and any offense might reasonably be considered both personal and intentional. So perhaps we should also try to be less sensitive and avoid taking offense when discussing ideas that are both important and inextricably linked with status.

\n

P.S. It's curious that there hasn't been more research into the evolutionary psychology and ethics of offense. If such research does exist and I simply failed to find it, please let me know.

" } }, { "_id": "qDucvMYty5gdumHDB", "title": "Are calibration and rational decisions mutually exclusive? (Part one)", "pageUrl": "https://www.lesswrong.com/posts/qDucvMYty5gdumHDB/are-calibration-and-rational-decisions-mutually-exclusive-0", "postedAt": "2009-07-23T05:15:45.853Z", "baseScore": 7, "voteCount": 8, "commentCount": 19, "url": null, "contents": { "documentId": "qDucvMYty5gdumHDB", "html": "

I'm planning a two-part sequence with the aim of throwing open the question in the title to the LW commentariat. In this part I’ll briefly go over the concept of calibration of probability distributions and point out a discrepancy between calibration and Bayesian updating.

\r\n

It's a tenet of rationality that we should seek to be well-calibrated. That is, suppose that we are called on to give interval estimates for a large number of quantities; we give each interval an associated epistemic probability. We declare ourselves well-calibrated if the relative frequency with which the quantities fall within our specified intervals matches our claimed probability. (The Technical Explanation of Technical Explanations discusses calibration in more detail, although it mostly discusses discrete estimands, while here I'm thinking about continuous estimands.)

\r\n

Frequentists also produce interval estimates, at least when \"random\" data is available. A frequentist \"confidence interval\" is really a function from the data and a user-specified confidence level (a number from 0 to 1) to an interval. The confidence interval procedure is \"valid\" if in a hypothetical infinite sequence of replications of the experiment, the relative frequency with which the realized intervals contain the estimand is equal to the confidence level. (Less strictly, we may require \"greater than or equal\" rather than \"equal\".) The similarity between valid confidence coverage and well-calibrated epistemic probability intervals is evident.

\r\n

This similarity suggests an approach for specifying non-informative prior distributions, i.e., we require that such priors yield posterior intervals that are also valid confidence intervals in a frequentist sense. This \"matching prior\" program does not succeed in full generality. There are a few special cases of data distributions where a matching prior exists, but by and large, posterior intervals can at best produce only asymptotically valid confidence coverage. Furthermore, according to my understanding of the material, if your model of the data-generating process contains more than one scalar parameter, you have to pick one \"interest parameter\" and be satisfied with good confidence coverage for the marginal posterior intervals for that parameter alone. For approximate matching priors with the highest order of accuracy, a different choice of interest parameter usually implies a different prior.
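To make the discrepancy concrete, here is a small simulation sketch of my own (the prior, sample size and true value are all arbitrary choices): the frequentist coverage of 90% equal-tailed posterior intervals for a binomial proportion under an informative Beta(20, 2) prior, evaluated at a fixed true proportion the prior considers unlikely.

    import random
    from scipy.stats import beta

    random.seed(1)
    a0, b0 = 20.0, 2.0                 # informative prior concentrated near 0.9
    true_p, n, reps = 0.3, 30, 5000    # a true value the prior thinks unlikely
    hits = 0
    for _ in range(reps):
        x = sum(random.random() < true_p for _ in range(n))
        posterior = beta(a0 + x, b0 + n - x)
        lo, hi = posterior.ppf(0.05), posterior.ppf(0.95)
        hits += lo <= true_p <= hi
    print(hits / reps)                 # far below 0.90: no calibration here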

\r\n

The upshot is that we have good reason to think that Bayesian posterior intervals will not be perfectly calibrated in general. I have good justifications, I think, for using the Bayesian updating procedure, even if it means the resulting posterior intervals are not as well-calibrated as frequentist confidence intervals. (And I mean good confidence intervals, not the obviously pathological ones.) But my justifications are grounded in an epistemic view of probability, and no committed frequentist would find them as compelling as I do. However, there is an argument for Bayesian posteriors over confidence intervals that even a frequentist would have to credit. That will be the focus of the second part.

" } }, { "_id": "MWGetevxNdzjRcHuM", "title": "The Price of Integrity", "pageUrl": "https://www.lesswrong.com/posts/MWGetevxNdzjRcHuM/the-price-of-integrity", "postedAt": "2009-07-23T04:30:48.788Z", "baseScore": -3, "voteCount": 43, "commentCount": 42, "url": null, "contents": { "documentId": "MWGetevxNdzjRcHuM", "html": "

Related Posts: Prices or Bindings?

\n

On the evening of August 14th, 2006, a pair of Fox News journalists, Steve Centanni and Olaf Wiig, were seized by Islamic militants while on assignment in Gaza City.  Nothing was heard of them for nine days, until a group calling themselves the Holy Jihad Brigades took credit for the kidnappings.  They issued an ultimatum, demanding the release of Muslim prisoners from American jails within a 72-hour time frame.  Their demands were not met.

\n

But then a few days later the journalists were allowed to go free... but not before they’d been forced into converting to Islam at gunpoint, and had each videotaped a statement denouncing U.S. and Israeli foreign policy.

\n

The war raged on.

\n

A couple of kidnapped journalists is nothing new (certainly not three years after the fact), and aside from the happy ending this particular case wouldn’t be worth mentioning if not for a unique twist that occurred after they returned home.  A fellow Fox News contributor, Sandy Rios, openly criticized the two men; she said that no true Christian would convert – falsely or otherwise – merely because they were threatened with death.  As she later explained to Bill Maher:*

\n
\n

My point was that Christians – I don’t know what their faith is – but I’m talking about Christians who responded to the story and said that they would have done the same thing...

\n

Christ followers can’t do that.  We don’t have that freedom.  We have to profess Christ no matter what... Christianity is, by its very nature, radical.  It is not normal or natural to lay down your life for a friend.  It is not natural or normal to say ‘I will not deny my faith even if you do cut my head off.'

\n
\n

I agree with her, and admire her courage for sticking with her convictions.  If you buy into Christianity’s metaphysical claims, then bearing false witness to your faith ought be considered a serious crime; not only does it show a pathological attachment to life (when eternal bliss lies just around the corner), it also completely ignores the core premises of Christianity, as well as the death of its founder.  This very question split the early Church: whether or not those who'd become apostates [had renounced Christ] due to persecution at the hands of the Romans could ever be forgiven.  There were some who said it should be forgivable (after the proper penance, of course), but no one argued that it ought be condoned.

\n

I’d wager that the guys in Guantanamo take the question just as seriously.  Not every religion has seen it that way, however.

\n

*          *          *

\n

In 1492 the joint Spanish monarchs, Isabella I and Ferdinand II, issued the Alhambra Decree upon completing the reconquista of Spanish land from the Moors.  They gave the local Jewish population three options: leave the kingdom, convert to Christianity, or face death.  While the majority left for Portugal, a significant number stayed behind and paid lip service to Christianity while continuing to practice Judaism in secret.

\n

This was hardly the first time something like this had happened in Europe, and given Judaism’s tendencies towards isolationism, as well as their lack of evangelical tradition, it should come as no surprise that they didn’t give two figs about lying to Christians.  The Rabbinical body even has separate terms for meshumadim (those who’d convert voluntarily) and the anusim (those who’d converted under duress).  The latter wasn’t encouraged – at least, not by most Rabbis – but nonetheless it was accepted.

\n

Now for the question that all of this was leading up to: what ought an Atheist to do in this situation?

\n

*          *          *

\n

In March of 2007 Iran took 15 British servicemen hostage, alleging that their ship crossed into Iranian waters.  They were eventually returned safely to Britain, but for some time they were paraded around on Iranian television, denouncing the country they’d sworn to protect.  Quite frankly, this was cowardice.

\n

These were sailors and soldiers in Her Majesty’s Armed Forces.  That they had sworn an Oath of fealty to the Queen is the least of the reasons they should have kept silent.  Far beyond that, they trusted sufficiently in the rightness of British foreign policy that they were willing to take another human’s life.  Let me repeat that: they so strongly believed in the rightness (or at least the 'Less Wrongness') of British foreign policy that they were ready to kill for it.  I am not suggesting that there is something inherently immoral about serving in the military; I spent six years there myself, and my paperwork’s still up to date for when the Chinese land on the West Coast.  What I am saying is that if you’re willing to kill for a cause you’d better be willing to die for it, too.  Otherwise...

\n

Now admittedly I don’t know the whole situation.  Maybe the Iranians were threatening to murder a dozen children if these servicemen didn’t read the scripts they were given.  There may have been some other extenuating circumstances.  But having a good reason to act like a coward – even a really good reason – does not transform cowardice into heroism.  It only transforms cowardice into adequacy.  It might be necessary to violate your morals at times, but it is not something to be proud of.  And yet, despite that...

\n

You know what?  The Last Psychiatrist said it much better than I ever will; these are his words on the topic:

\n
\n

I'm sure those soldiers were thinking, \"look, I know who I am, I know I'm not a coward, I'm not helping the Iranians, but I have to do whatever is necessary to get out of this mess.\"  What they are saying is that they can declare who they are, and what they do has no impact on it.  \"I am a hero, regardless of how I act.\"  That's the narcissist fallacy...

\n

But here's the thing: when they returned home to Britain, they were heralded as heroes by other people.  Including the British government.   Based on what?  They didn't actually do anything; heroism isn't simply living through a bad experience.  Well, of course: based on the fact that they are heroes who had to pretend to be something else.

\n

That's the narcissist's tautology: you are what you say you are because you said you are.  What makes it an example of our collective narcissism is that we agree--  we want it to be true that they, and we, can declare an identity.

\n
\n

Screw narcissism.  How you act is who you are.

\n

Maybe these men traded their integrity for a bloody good reason; let’s grant that for the sake of argument, because the quality of their moral fibre is irrelevant.  Regardless of what they traded it for, they still traded it; whatever they got came at a price.

\n

How many Utilons is it worth to lie to a bunch of savages?  The Spanish Jews decided ‘not very many,’ and they prospered for a time.  But they only had a short-term advantage; the idea of lying during a baptism was inconceivable to the Christian populace back then.  It didn’t take long for them to catch on, however, and by that time the Spanish Inquisition was hitting its stride...

\n

Modern-day Muslim fundamentalists, on the other hand – whether or not they know how to use toilet paper – are not stupid.  They know perfectly well that these confessions are forced; I’d even say that there’s a good chance they’re familiar with the Koran’s prohibition against forced conversions, and the fact that these aren’t *really* conversions is their legalistic loophole (they’re generally not that concerned about converting us, anyways; they just want non-Muslims to be second-class citizens under a Caliphate, is all - I've even heard anecdotes that Egyptians are more offended by evolution than by Atheism).  Quite frankly, they’re winning more than enough converts in our prisons and our ghettos; they don’t need a couple more journalists.

\n

So what is the point of it?  Quite simply, it’s the point of all terrorism (and all war, for that matter): they’re framing the conversation, creating a perception which becomes reality, winning the war before entering the battlefield.  It’s theatre.  The point is to demoralize; to expose the West as hypocritical and cowardly; to drive us into panics, sway our elections, to make us fearful.  They’re doing it to show us that terrorism works.  And so far it's working pretty well. 

\n

The British servicemen surrendered peacefully to the Iranians out of fear of causing a national incident – as if capturing and detaining another country’s military forces isn’t already a national incident.

\n

That is the long-term price of selling your integrity.

\n

*          *          *

\n

Few, if any, of the members of Less Wrong are here for the sake of expediency.  We say that Rationalists should Win – and of course they should! – but if all we cared about was winning (in the short-term, proximate sense) then we’d be reading Pick-Up-Artist books [exclusively - this article was written before the recent debate], schmoozing our way up the corporate ladder, or pumping opiates into our veins.  The reason we dedicate so much time to this site is that we hold up the values of Truth, Knowledge, and Humanity as part of a higher purpose.  The only way for humanity to Win in the long term is if everybody is trained in the mental martial arts.  We train our minds, not to save ourselves, but to save the world.  We all assert these values implicitly in our writing, but writing isn’t enough; when the rubber hits the road, if we can’t walk the talk, then we might as well have plugged into the heroin drip.  The ideals we spoke of will be nothing but empty words from empty men.

\n

Penguins will crowd together at the edge of the iceberg, pushing and shoving, until one of them falls in.  After a few moments, if a killer whale hasn’t eaten the unlucky test subject, then the rest will jump in, knowing it’s safe.  Our species doesn’t work like that; our species needs Heroes.  Everyone here has already taken on the mantle of heroism, dedicating a significant portion of their time to trying to improve the world situation.  It's easy to be noble when the sun is shining and the weather is warm.  When winter comes, that’s when we’ll see what we’re really made of.

\n

A single choice, a rationalization born out of cowardice, can undermine all that we are, and all that we stand for.

\n

So what should an Atheist – more than just a nihilist – do when the terrorist has a gun to his head?

\n

This one would tell them to pull the trigger.

\n

 

\n

No.  Not even in the face of Armageddon.  Never compromise. ~Rorschach

\n

There... are... four... lights! ~Capt. Jean-Luc Picard

\n

 

\n

*Minor syntactical edits; spoken and written English are different mediums.

\n

 

\n

 

\n

[If anyone can suggest appropriate tags for this article I'd be much obliged]

\n

 

" } }, { "_id": "KgscYPgz7QomnHkv6", "title": "Murder", "pageUrl": "https://www.lesswrong.com/posts/KgscYPgz7QomnHkv6/murder", "postedAt": "2009-07-23T02:52:37.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "KgscYPgz7QomnHkv6", "html": "

People are murderers if they kill other people. They are not murderers if they let other people die when they can cheaply prevent it. For instance, I am not a murderer if I spend a couple of hundred dollars on clothing rather than sending it to a decent charity, even if the predicted result is that one more person will die.

\n

People don’t want to be murderers, but they don’t mind letting people die, except those who are close to them. People also don’t like or respect murderers, but they don’t mind others letting people die, except people who are close to them. People dislike being murdered and being allowed to die about equally, regardless of whether those involved are close to them. It is interesting that people’s treatment of others’ lives correlates so strongly with how third-party observers deal out liking and respect, rather than with how the person whose life is at stake feels about it. It is commonly assumed that killing people is bad because we care about the person who gets killed. This might be what we think about when we are condemning murderers, but it doesn’t predict our actions at all well.

\n

Humans aren’t evil here in the same sense in which we think of someone who kills for a pair of jeans as evil. It’s not purposeful. Most people believe that they do care about other people’s lives, because they have great trust in their emotions to tell them when something bad is happening. They never check this. But what do you do when you find your emotions do not tell you this at all? One response is to spend yonks trying to justify things like physical distance and action vs. omission as being morally relevant, while taking credit for being wonderfully deep. This is evil.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "xLcW4HRubQvKy9cpb", "title": "An observation on cryocrastination", "pageUrl": "https://www.lesswrong.com/posts/xLcW4HRubQvKy9cpb/an-observation-on-cryocrastination", "postedAt": "2009-07-22T20:41:41.912Z", "baseScore": 12, "voteCount": 17, "commentCount": 47, "url": null, "contents": { "documentId": "xLcW4HRubQvKy9cpb", "html": "

Why do people cryocrastinate? The most common explanation I’ve heard from intelligent people for not getting cryonics is that the money is better spent on some altruistic cause. By itself, there is nothing wrong with this belief, but irrationality lies nearby.

Before I continue: I am not here to argue whether cryonics works or not. That has been done before. From this point on, I will assume cryonics has positive expected utility, derived from its giving a reasonable chance of continuing life past many currently terminal events, with life being a valuable thing.

We begin with a quick overview of the cost of cryonics. Let us break our cost analysis into two parts: acquisition of the cryonics and life insurance contracts, and maintenance of those contracts.

First, acquisition. The main cost here is the time that could otherwise be invested in other activities. I would estimate a reasonably organized person could get it done with 20 hours of continuous work and $200 (all costs USD) upfront. Let us say this is $600 worth of costs in total. For people in the US, see Rudi Hoffman; he will do pretty much everything.

Second, maintenance. For myself, living in a non-silly country like New Zealand, life insurance is $10 per month, and membership fees for e.g. the Cryonics Institute are another $10 a month, totaling about 70 cents a day. Let us say it is $1 a day for places such as the US, as insurance costs an arm and a leg there.
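To make the arithmetic above explicit, here is a minimal sketch in Python. The dollar figures are the post's own rough estimates; the $20/hour valuation of the setup time is my assumption, back-calculated from the $600 total.

```python
# Rough cryonics cost arithmetic, using the estimates given above.
upfront_hours = 20           # setup work
hourly_value = 20.0          # assumed value of that time, USD/hour
upfront_cash = 200.0         # paperwork, postage, etc.
acquisition = upfront_hours * hourly_value + upfront_cash

insurance_per_month = 10.0   # life insurance premium (NZ example)
membership_per_month = 10.0  # e.g. Cryonics Institute dues
per_day = (insurance_per_month + membership_per_month) * 12 / 365

print(f'One-off acquisition cost: ${acquisition:.0f}')      # $600
print(f'Ongoing maintenance cost: ${per_day:.2f} per day')  # about $0.66
```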

So cryonics costs one dollar a day. Put this way, it doesn't seem like much. This comes out to about two Starbucks coffees a week. Let me repeat that: in the long term, cryonics costs TWO STARBUCKS COFFEES A WEEK. Very few people can say they cannot optimize themselves so as to have $1 a day more in disposable income.

As someone once put it to Eliezer: “No, really, that's ridiculous.  If that's true then my decision isn't just determined, it's overdetermined.”  I agree: cryonics is cheap.

Let us now consider the following two beliefs, held by someone who has a favorite cause to which donating money gives lots of positive utility. The amount we are considering donating is exactly the amount we would otherwise think about using to get cryonics:

1: Compared to cryonics, which is almost entirely selfish spending, my cause can benefit far more from the money.

2:  Below a certain amount, the contribution I could make to my cause is not that helpful. It becomes merely a drop in the bucket, of negligible utility.

The first point implies that if people are to optimize themselves (and most people can optimize themselves to get $1 more a day), they should optimize themselves to donate more money to their favorite cause. The second point implies they don’t see the small donations as useful to their cause, so they shouldn’t bother to optimize themselves. The net result is that they decide it is better to keep on drinking that cup of coffee.

So cryonics is sidestepped in a conversation by point one, at most activating people’s thoughts of donating more. But since the change in donation would be so small, the effort seems too much, and ultimately they do nothing.

There is an assumption in the second point above that if the amount of money you donate is too small, you won’t bother donating. However, you do know logically, at a System 2 level, that even a small amount of money counts; $30 a month is useful to any cause. But logical arguments usually don’t give you the pressing feeling that you must optimize your life to donate just a bit more money.

System 1 thought processes see your dollar coin being lost in a sea of dollar coins, so why bother? Of course, it does not help that you might have to give up things that make you personally happy. The outcome is the same: you do not optimize yourself and you go on drinking that cup of coffee.

It is harder to let go of money than it should be, but in this case you should simply shut up and multiply. If you believe you can contribute more money to a cause, then contribute; it will help. But don’t use your cause as an excuse for cryocrastinating. If you believe you have no more money to give to your cause, look at what you spend on yourself personally: can it really be worth more than the small cost of cryonics in the long run? Remember, without cryonics, death is truly game over.

So for the majority of you still not signed up, consider carefully: how much more optimized can you be?

" } }, { "_id": "CbJ87ArBPncWoj4uf", "title": "It's all in your head-land", "pageUrl": "https://www.lesswrong.com/posts/CbJ87ArBPncWoj4uf/it-s-all-in-your-head-land", "postedAt": "2009-07-22T19:41:03.478Z", "baseScore": 34, "voteCount": 50, "commentCount": 68, "url": null, "contents": { "documentId": "CbJ87ArBPncWoj4uf", "html": "

From David Foster Wallace's Infinite Jest:

\n
He could do the dextral pain the same way: Abiding. No one single instant of it was unendurable. Here was a second right here: he endured it. What was undealable-with was the thought of all the instants all lined up and stretching ahead, glittering. And the projected future fear... It's too much to think about. To Abide there. But none of it's as of now real... He could just hunker down in the space between each heartbeat and make each heartbeat a wall and live in there. Not let his head look over. What's unendurable is what his own head could make of it all. What his head could report to him, looking over and ahead and reporting. But he could choose not to listen... He hadn't quite gotten this before now, how it wasn't just the matter of riding out cravings for a Substance: everything unendurable was in the head, was the head not Abiding in the Present but hopping the wall and doing a recon and then returning with unendurable news you then somehow believed.
\n

I've come to draw, or at least to emphasize, a distinction separating two realms between which I divide my time: real-land and head-land. Real-land is the physical world, occupied by myself and billions of equally real others, in which my fingers strike a series of keys and a monitor displays strings of text corresponding to these keystrokes. Head-land is the world in which I construct an image of what this sentence will look like when complete, what this paragraph will look like when complete and what this entire post will look like when complete. And it doesn't stop there: in head-land, the finished post is already being read, readers are reacting, readers are (or aren't) responding and the resulting conversations are, for better or for worse, playing themselves out. In head-land, the thoughts I've translated into words and thus defined and developed in this post are already shaping the thoughts to be explored in future posts, the composition of which is going on there even now.

Head-land is the setting of our predictions. When deciding what actions to take in real-land, we don't base our choices on the possible actions' end results in real-land — by definition, we don't know what will happen in real-land until it already has — but on their end results in head-land. The obvious problem: while real-land is built out of all the information that exists everywhere, head-land is built not even out of all the information in a single brain — and that isn't much — but of whichever subset of that brain's information happens to be currently employed. Hence the brutally rough, sometimes barely perceptible correspondence between head-land and real-land in the short term and the nearly nonexistent correspondence between them in the long.

One might grant all that while responding that running models in head-land is nevertheless the best predictor of real-land events that any individual has. And that's true, but it doesn't change our apparent tendency to place far more trust in our head-land models than their dismal accuracy could ever warrant. To take but one example: we really seem to believe our own failures in head-land, head-land being the place we do the vast majority — and in some cases, all — of our failing. How many times has someone entertained the dream of, say, painting, but then failed in head-land — couldn't get a head-land show, say, or couldn't even mix head-land's colors right — and abandoned the enterprise before beginning it? How many times has someone started painting in real-land, gotten less-than-perfect results, and then extrapolated that scrap of real-land data into a similarly crushing head-land failure? Even established creators are vulnerable to this; could the novelist suffering from a bout of \"writer's block\" simply be the unwitting mark of a head-land vision of himself unable to write? The danger of head-land catastrophes that poison real-land endeavors looms over every step of the path. The possibility of being metaphorically laughed out of the classroom, though probably only illusory to begin with, never quite leaves one's mind. The same, to a lesser extent, goes for experiencing rather than creating; someone who refuses to listen to a new album, sample a new cuisine, watch a new film or visit a new art exhibition on the excuse that they already \"know what [they] like\" appears to have seen, and believed, their head-land selves listening, eating or viewing with irritation, repulsion or boredom.

That most of what we get worked up about exists in our imaginations and our imaginations only is less a fresh observation than the stuff of a thousand tired aphorisms. No battle plan survives the first shot fired. You die a thousand deaths awaiting the guillotine. Nothing's as good or as bad as you anticipate. You never know until you try. We have nothing to fear but fear itself. Fear is the mind-killer. It's all in your mind. Don't look down. Quoth John Milton, \"The mind is its own place, and in it self, can make a Heaven of Hell, a Hell of Heaven.\" The more time I spend in head-land, the less time I feel like I should spend in head-land, because its awful, discouraging predictions are practically never borne out in real-land. (Even when one comes close, head-land seems to fail to capture the subjective experience of failure, which I find is either never failure qua failure or always somehow psychologically palliated or attenuated.) Experiencing a disaster in real-land is one thing, and its negative effects are unavoidable, but experiencing hypothetical head-land disasters as negative mental effects in real-land — which I suspect, we all do, and often — would seem to be optional.

Consider global economic woes. While you more than likely know a few who've had to take cuts in their incomes or find new ones, it's even likelier that you're experiencing all the degradations of destitution in head-land even as your real-land income has not and will not substantially shrink. You're living out the agonies of fumbling with food stamps in a long, angry grocery line despite the fact that you'll never come close. When one is starved for real-land information, one's head-land self gets hit with the worst possible fate. I hear of someone dying in a gruesome real-land freak accident, and I die a dozen times over in more gruesome, freakier head-land accidents. I visit a remote, unpopulated real-land location and my head-land map contracts, desolately and suffocatingly, to encompass only my lonely immediate surroundings. (Then my glasses fall off and break. But there was time! There was time!) I dream up an idea for implementation in real-land, but even before I've fully articulated it my head-land self is already busy enduring various ruinous executions of it. In head-land, worst-case scenarios tend to become the scenarios, presenting huge, faultily-calculated sums of net present real-land misery.

Fear of the ordeals that play out in head-land is a hindrance, but the paralysis induced by the sheer weight of countless accumulated hypothetical propositions is crippling. Even riding high on the hog in real-land is no bulwark against the infinite (and infinitely bad) vicissitudes of head-land. Say you're earning a pretty sweet living playing the guitar in real-land. But what if you'd been born without arms? Or in a time before the invention of the guitar? Or in a time when you would've died of an infection before reaching age twelve? Then you sure wouldn't be enjoying yourself as much. And what if you lose your arms in a horrific fishing accident ten years down the line? Or if you suddenly forget what a guitar is? Or if you die of an infection anyway? Despite the fact that none of these dire possibilities have occurred — or are even likely to occur — they're nonetheless inflicting real-land pain from across the border.

I call this phenomenon a blooming what if plant, beginning as the innocuous seed of a question — \"What if I hadn't done or encountered such and such earlier thing that proved to be a necessary condition to something from which I enjoy and profit now?\" — and sprouting rapidly into a staggeringly complex organism, its branches splitting into countless smaller branches which split into yet more branches themselves. More perniciously, this also happens in a situation-specific manner; namely, in situations whose sub-events are particularly unpredictable. The classic example would be approaching the girl one likes in middle school; the possible outcomes are so many and varied, at least in the approacher's mind, that the what-ifs multiply dizzyingly and collectively become unmanageable, especially if his strategy is to prepare responses to all of them. It's no accident that those never-get-the-girl mopes in movies spend so much time vainly rehearsing conversations in advance, and that doing the same in life never, ever works. There's a line to be drawn between the guys in junior high who could talk the girls up effortlessly and the ones who seized up merely contemplating it. I suspect the difference has to do with the ratio of one's relative presence in head-land versus that in real-land.

I would submit that, whatever their results, the dudes who could walk right up to those girls and try their luck habitually spent a lot more time in real-land than in head-land. They probably weren't sitting around, eyes fixed on their own navels, building elaborate fictions of inadequacy, embarrassment and ridicule; if they were, wouldn't they have been just as paralyzed? They appeared to operate on a mental model that either didn't conjure such dire possibilities or, if it did, didn't allow them any decisionmaking weight. \"So the chick could turn me down? So what? What if space aliens invade and destroy the Earth? I don't know what'll happen until I try.\"

This brings up something else Wallace wrote and thought about — equivalent verbs for him, I think — though not, as I dimly recall, in Infinite Jest. In his sports journalism, of which he wrote some truly stunning pieces, he kept looping back to the issue of the correlation and possible causal connection between great athletes' brilliant physical performance and their astonishing unreflectiveness in conversation and prose. I'm thinking of Wallace's profile of Michael Joyce, a not-quite-star tennis player who has no knowledge or interests outside the game and couldn't even grasp the thundering sexual innuendo on a billboard ad. I'm thinking of his review of Tracy Austin's autobiography, a cardboard accretion of blithe assertions, unreached-yet-strongly-stated conclusions and poster-grade sports clichés. What must it be like, Wallace asked, to speak or hear phrases like \"step it up\" or \"gotta concentrate now\" and have them actually mean something? Is the sports star's nonexistent inner life not the price they pay for their astonishing athletic gift, but rather its very essence?

\n

One can say many things about bigtime athletes, but that they live in their heads is not one of them. I'd wager that you can't find a group that spends less time in head-land than dedicated athletes; they are, near-purely, creatures of real-land. The dudes who could go right up to the ladies in seventh grade seemed to be, in kind if not in magnitude, equally real-land's inhabitants. It comes as no surprise that so many of them played sports and weren't often seen with books. And not only were they undaunted by the danger (possibly because unperceived) of crushing humiliation, I'd imagine they were inherently less vulnerable to crushing humiliation in the first place, because crushing humiliation, like theoretical arm loss and imagined endeavor failure, is a head-land phenomenon. Humiliation is what makes a million other-people-are-thinking-horrible-thoughts-about-me flowers bloom — but only in head-land. The impact can only hit so hard if one doesn't spend much time there, because in real-land, direct access to another's thoughts is impossible. In head-land, one can't, as their creator, help but have direct access to everyone else's thoughts, and thus if a head-land resident believes everyone's disparaging him, everyone is disparaging him. \"So what if they're thinking ill of me?\" a full-time real-land occupant might ask. \"I can't know that for sure, and besides, they're probably not; how often do you think, in a way that actually affects them, about someone who's been recently embarrassed?\"

\n

But there's a problem: saying someone \"lives in their head\" is more or less synonymous with calling them intelligent. \"Hey, look at that brainy scientist go by, lost in thought; the fellow lives in his head!\" As for professional athletes, well... let's just acknowledge the obvious, that professional athleticism is not a byword for advanced intellectual capacity. (Wallace once lamented the archetypal \"basketball genius who cannot read\".) So there's clearly a return to time spent in head-land, and arguing for the benefits of head-land occupancy even to nonintellectuals is a trivial task. How, for instance, would we motivate ourselves without reference to head-land? How could we envision possibilities and thus choose which ones we'd like to realize without seeing them in head-land? Surely even the most narrowly-focused, football-obsessed football player has watched himself polish a Super Bowl ring in head-land. Why else would he strive for that outcome? Head-land is where our fantasies happen, where our goals are formulated, and is that a function we can do without?

\n

Hokey as it sounds, I do consider myself a somewhat \"goal-oriented\" person, in that I burn a lot of time and mental bandwidth attempting to realize certain head-land states. But, as the above paragraphs reveal, I often experience head-land backfire in the form of discouraging negative imaginings rather than encouraging positive ones. Here I could simply pronounce that I will henceforth only use head-land for envisioning the positive, but it's not quite that easy; I can think of quite a few badly-ending head-land scenarios that I'm happy to experience there — and only there — and take into account when making real-land decisions. The head-land prediction that I'll get splattered if I walk blindly into traffic comes to mind.

And I'm one of the less head-land-bound people I know! I wouldn't be writing this post if I didn't struggle with the damned place, but traits like my near-inability to write fiction suggest that I don't gravitate toward it as strongly as some. Still, I feel the need to minimize the problems that spring forth from head-land without converting myself into an impulsive dumb beast. The best compromise I have at the moment is not necessarily to stem the flow of predictions out of head-land, but simply to ignore the bulk of their content, to crank down their resolution by 90% or so. Since the accuracy of our predictions drops so precipitously as they extend forward in time and grow dense with specifics, they'd mostly lose noise. Noise simply misleads, and attenuating what misleads is the point of this exercise.

There are countless practical ways to implement this. One quick-and-dirty hack to dial down head-land's effect on your real-land calculations is to only pay attention to head-land's shadow plays to the extent that they're near your position in time. If they have to do with the distant future, only consider their broadest outlines: the general nature of the position you envision yourself occupying in twenty years, for instance, rather than the specific event of your buxom assistant bringing you just the right roast of coffee. If they have to do with the past, near or distant, just chuck 'em; head-land models tend to run wild with totally irrelevant oh-if-only-things-had-been-different retrodictions, which are supremely tempting but ultimately counterproductive. (As one incisive BEK cartoon had a therapist say, \"Woulda, shoulda, coulda — next!\") If they have to do with the near future, they're more valuable, and the nearer the future they deal with, the better you would seem to do to pay attention to them.

The concept behind this is one to which I've been devoting thought and practice lately: small units of focus. Alas, this brings us to another set of bromides, athletic and otherwise. One step at a time. Just you and the goal. Break it down. Don't bite off more than you can chew. The disturbing thing is how well operating on such a short horizon seems to work, at least in certain contexts. I find I actually do run better when I think only of the next yard, write better when I think only of the next sentence and talk better when I think only of the subject at hand. When my mind tries instead to load the entire run, the entire essay or the entire conversation, head-land crashes. (This applies to stuff traditionally thought of as more passive as well: I read more effectively when I focus on the sentence, watch films more effectively when I focus on the shot, listen to music more effectively when I focus on the measure.) When Wallace writes about \"the head not Abiding in the Present but hopping the wall and doing a recon and then returning with unendurable news\" and \"[hunkering] down in the space between each heartbeat and [making] each heartbeat a wall and [living] in there\", I think this is what he means.

Ignoring all head-land details past a certain threshold, de-weighting head-land predictions with their distance in the future and focusing primarily on small, discrete-seeming, temporally proximate units aren't just techniques to evade internal discouragement, either; they also guard against the perhaps even more sinister (and certainly sneakier) forces of complacency. While failure in head-land can cause one to pack it in in real-land, success in head-land, which is merely a daydream away, can prevent one from even trying in real-land. I can't put it better than Paul Graham does: \"If you have a day job you don't take seriously because you plan to be a novelist, are you producing? Are you writing pages of fiction, however bad? As long as you're producing, you'll know you're not merely using the hazy vision of the grand novel you plan to write one day as an opiate.\"

This is why I'm starting to believe that coming up with great ideas in head-land and then executing them in real-land may be a misconceived process, or at least a suboptimally conceived one. How many projects have been forever delayed because the creator decided to wait until the \"idea\" was just a little bit better, or, in other words, until the head-land simulation came out a little more favorably? It's plausible that this type of stalling lies at the heart of procrastination: one puts the job off until tomorrow because the head-land model doesn't show it as turning out perfect today, never mind the facts that (a) it'll never be perfect, no matter when it's started and (b) it's unlikely to turn out better with less time available for the work, especially given the unforeseen troubles and opportunities. I provisionally believe that this a priori, head-land idea stuff can profitably be replaced with small-scale real-land exploratory actions that demand little in the way of time or resource investment. Rather than executing steps one through one hundred in head-land, execute step one in real-land; if nothing else, the data you get in response will be infinitely more reliable and more useful in determining what step two should involve. Those dudes in middle school knew this on some basic level: you just gotta go up to the girl and say something. It's the only gauge you have of whether you should say something more, and of what that something should be. It's all about hammering in the thin end of the wedge.

For what it's worth, I've found this borne out in what little creation I've done thus far. I've reached the point of accepting that I don't know — can't know — how a project's going to turn out, since each step depends on the accumulated effects of the steps that preceded it. All I can do is get clear on my vague, broad goal and put my best foot forward, keeping my mind open to accept all relevant information as it develops. When I started my first radio show, I had a bunch of head-land projections about how the show would be, but in practice it evolved away from them in real-land rather sharply — and, I think, for the better. When I started another one a year later, I knew to factor in this unforeseeable real-land evolution from the get-go and thus kept my ideas about what it was supposed to be broad, flexible and small in number, letting the events of real-land fill in the details as they might. With a TV project only just started, I've tried my hardest to stay out of head-land as much as possible; the bajillion variables involved would send whatever old, buggy software my head-land modeler uses straight to the Blue Screen of Death. (Yes, our brains are Windows-based.) Even if it didn't crash, it's not as if I'd be getting sterling predictions out of it. I have, more grandly speaking, come to accept much more of the future's unknowability than once I did; that goes double for the future of my own works. Modeling a successful work in head-land now seems a badly flawed strategy, to be replaced by taking small steps in real-land and working with its response.

I could frame this as another rung in the climb from a thought-heavier life to an action-heavier life, of approaching and affecting the world as it exists in real-land rather than as it is imagined in head-land. I've never been what one would call an idealist and I suppose I'm drawing no closer to that label. Some regard flight from idealism as flight toward cynicism, but it's cynicism I've been fleeing as well, perhaps even primarily; what is cynicism, after all, but a mistaken reliance on pessimistic head-land conclusions?

 
" } }, { "_id": "oZ33pz2FWzFbWrgHT", "title": "Fairness and Geometry", "pageUrl": "https://www.lesswrong.com/posts/oZ33pz2FWzFbWrgHT/fairness-and-geometry", "postedAt": "2009-07-22T10:44:39.752Z", "baseScore": 14, "voteCount": 15, "commentCount": 35, "url": null, "contents": { "documentId": "oZ33pz2FWzFbWrgHT", "html": "

This post was prompted by Vladimir Nesov's comments, Wei Dai's intro to cooperative games and Eliezer's decision theory problems. Prerequisite: Re-formalizing PD.

\n

Some people here have expressed interest in how AIs that know each other's source code should play asymmetrical games, e.g. slightly asymmetrized PD. The problem is twofold: somehow assign everyone a strategy so that the overall outcome is \"good and fair\", then somehow force everyone to play the assigned strategies.

\n

For now let's handwave around the second problem thus: AIs that have access to each other's code and common random bits can enforce any correlated play by using the quining trick from Re-formalizing PD. If they all agree beforehand that a certain outcome is \"good and fair\", the trick allows them to \"mutually precommit\" to this outcome without at all constraining their ability to aggressively play against those who didn't precommit. This leaves us with the problem of fairness.

\n

(Get ready, math ahead. It sounds massive, but is actually pretty obvious.)

\n

Pure strategy plays of an N-player game are points in the N-dimensional space of utilities. Correlated plays form the convex hull of this point set, an N-polytope. Pareto-optimal outcomes are points on the polytope's surface where the outward normal vector has all positive components. I want to somehow assign each player a \"bargaining power\" (by analogy with Nash bargaining solutions); collectively they will determine the slope of a hyperplane that touches the Pareto-optimal surface at a single point which we will dub \"fair\". Utilities of different players are classically treated as incomparable, like metres to kilograms, i.e. having different dimensionality; thus we'd like the \"fair point\" to be invariant under affine recalibrations of utility scales. Coefficients of tangent hyperplanes transform as covectors under such recalibrations; components of a covector should have dimensionality inverse to components of a vector for the application operation to make sense; thus the bargaining power of each player must have dimensionality 1/utility of that player.
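To spell out the recalibration argument (in my notation, not the post's): suppose player $i$'s utility scale is recalibrated affinely,

$$u_i' = a_i u_i + b_i \quad (a_i > 0), \qquad \text{so that} \qquad \sum_i w_i u_i = c \;\Longleftrightarrow\; \sum_i \frac{w_i}{a_i}\, u_i' = c + \sum_i \frac{w_i b_i}{a_i}.$$

The same tangent hyperplane, rewritten in the new coordinates, has coefficients $w_i / a_i$; the bargaining powers must therefore transform as $w_i \mapsto w_i / a_i$ for the point of tangency to stay put, which is exactly the claim that each power carries dimension 1/utility of that player.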

\n

(Whew! It'll get easier from here on.)

\n

A little mental visualization involving a sphere and a plane confirms that when a player stretches their utility scale 2x, stretching the sphere along one of the coordinate axes, the player's power (the coefficient of that coordinate in the tangent hyperplane equation) must indeed go down 2x to keep the fair point from moving. Incidentally, this means that we cannot somehow assign each player \"equal power\" in a way that's consistent under recalibration.

\n

Now, there are many ways to process an N-polytope and obtain N values, dimensioned as 1/coordinate each. A natural way would be to take the inverse measure of the polytope's projection onto each coordinate axis, but this approach fails because irrelevant alternatives can skew the result wildly. A better idea would be taking the inverse measures of projections of just the Pareto-optimal surface region onto the coordinate axes; this decision passes the smoke test of bargaining games, so it might be reasonable.

\n

To reiterate the hypothesis: assign each player an amount of bargaining power inversely proportional to the range of gains possible for them under Pareto-optimal outcomes. Then pick the point where a hyperplane with those bargaining powers for coefficients touches the polytope's surface, and call this point \"fair\".
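As a quick illustration, here is a minimal sketch of that rule for a two-player game in Python (mine, not the post's). The payoff points are invented, and as a simplification a pure outcome is checked for dominance only against the other pure outcomes, not against mixtures of them, which the full construction would require.

```python
# Sketch of the proposed fairness rule on a made-up, slightly asymmetric
# PD-like game. Outcomes are (player 1 utility, player 2 utility) per pure play.
outcomes = [(3, 2), (0, 4), (5, 0), (1, 1)]

def pareto_optimal(points):
    # Simplification: dominance is checked against other pure outcomes only.
    def dominated(p, q):
        return q != p and all(qi >= pi for pi, qi in zip(p, q))
    return [p for p in points if not any(dominated(p, q) for q in points)]

frontier = pareto_optimal(outcomes)

# Bargaining power of each player: inverse of the range of their utility
# over the (approximate) Pareto frontier.
ranges = [max(p[i] for p in frontier) - min(p[i] for p in frontier) for i in range(2)]
powers = [1.0 / r if r > 0 else float('inf') for r in ranges]

# The 'fair' point: where the hyperplane with those coefficients touches the
# outcome polytope, i.e. the play that maximises the weighted sum of utilities.
fair = max(outcomes, key=lambda p: sum(w * x for w, x in zip(powers, p)))

print('Pareto-optimal pure outcomes:', frontier)  # [(3, 2), (0, 4), (5, 0)]
print('Bargaining powers:', powers)               # [0.2, 0.25]
print('Fair point:', fair)                        # (3, 2)
```

With these made-up payoffs the cooperative-looking outcome (3, 2) maximises the weighted sum, which is the sort of answer one would hope a fairness rule gives on a PD-like game.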

\n

(NB: this idea doesn't solve cases where the hyperplane touches the polytope at more than one point, e.g. risk-neutral division of the dollar. Some more refined fairness concept is required for those.)

\n

At this point I must admit that I don't possess a neat little list of \"fairness properties\" that would make my solution unique and inevitable, Shapley value style. It just... sounds natural. It's an equilibrium, it's symmetric, it's invariant under recalibrations, it often gives a unique answer, it solves asymmetrized PD just fine, and the True PD, and other little games I've tried it on, and something like it might someday solve the general problem outlined at the start of the post; but then again, we've tossed out quite a lot of information along the way. For example, we didn't use the row/column structure of strategies at all.

\n

What should be the next step in this direction?

\n

Can we solve fairness?

\n

EDIT: thanks to Wei Dai for the next step! Now I know that any \"purely geometric\" construction that looks only at the Pareto set will fail to incentivize players to adopt it. The reason: we can, without changing the Pareto set, give any player an additional non-Pareto-optimal strategy that always assigns them higher utility than my proposed solution, thus making them want to defect. Pretty conclusive! So much for this line of inquiry, I guess.

" } }, { "_id": "ZAoBjz5DqzEtig376", "title": "Deciding on our rationality focus", "pageUrl": "https://www.lesswrong.com/posts/ZAoBjz5DqzEtig376/deciding-on-our-rationality-focus", "postedAt": "2009-07-22T06:27:56.562Z", "baseScore": 39, "voteCount": 38, "commentCount": 51, "url": null, "contents": { "documentId": "ZAoBjz5DqzEtig376", "html": "

I have a problem: I'm not sure what this community is about.

\r\n

To illustrate, recently I've been experimenting with a number of tricks to overcome my akrasia. This morning, a succession of thoughts struck me:

\r\n
    \r\n
  1. The readers of Less Wrong have been interested in the subject of akrasia; maybe I should make a top-level post of my experiences once I see what works and what doesn't.
  2. But wait, that would be straying into the territory of traditional self-help, and I'm sure there are already plenty of blogs and communities for that. It isn't about rationality anymore.
  3. But then, we have already discussed akrasia several times; isn't this then also on-topic?
  4. (Even if this was topical, wouldn't a simple recount of \"what worked for me\" be too Kaj-optimized to work for very many others?)
\r\n

Part of the problem seems to stem from the fact that we have a two-fold definition of rationality:

\r\n
    \r\n
  1. Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed \"truth\" or \"accuracy\", and we're happy to call it that.
  2. Instrumental rationality: achieving your values. Not necessarily \"your values\" in the sense of being selfish values or unshared values: \"your values\" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as \"winning\".
\r\n

If this community was only about epistemic rationality, there would be no problem. Akrasia isn't related to epistemic rationality, and neither are most self-help tricks. Case closed.

\r\n

However, by including instrumental rationality, we have expanded the sphere of potential topics to cover practically anything. Productivity tips, seduction techniques, the best ways for grooming your physical appearance, the most effective ways to relax (and by extension, listing the best movies / books / video games of all time), how you can most effectively combine different rebate coupons and where you can get them from... all of those can be useful in achieving your values.

\r\n

Expanding our focus isn't necessarily a bad thing, by itself. It will allow us to attract a wider audience, and some of the people who then get drawn here might afterwards also become interested in e-rationality. And many of us would probably find the new kinds of discussions useful in our personal lives. The problem, of course, is that epistemic rationality is a relatively narrow subset of instrumental rationality - if we allow all instrumental rationality topics, we'll be drowned in them, and might soon lose our original focus entirely.

\r\n

There are several different approaches as far as I can see (as well as others I can't see):

\r\n\r\n

I honestly don't know which approach would be the best. Do any of you?

" } }, { "_id": "MtNnFg4uN32YPoKNa", "title": "Missing the Trees for the Forest", "pageUrl": "https://www.lesswrong.com/posts/MtNnFg4uN32YPoKNa/missing-the-trees-for-the-forest", "postedAt": "2009-07-22T03:23:33.171Z", "baseScore": 88, "voteCount": 81, "commentCount": 159, "url": null, "contents": { "documentId": "MtNnFg4uN32YPoKNa", "html": "

Politics is the mind-killer. A while back, I gave an example: the government's request that Kelloggs  [EDIT: General Mills, thanks CronoDAS] stop making false claims about Cheerios. By the time the right-wing and left-wing blogospheres had finished with it, this became everything from part of the deliberate strangulation of the American entrepreneurial spirit by a conspiracy of bureaucrats, to a symbol of the radicalization of the political right into a fringe group obsessed with Communism, to a prelude to Obama's plan to commit genocide against all citizens who disagree with him. All because of Cheerios!

\n

Why? What drives someone to hear about a reasonable change in cereal advertising policy and immediately think of a second Holocaust?

\n

This reminds me of something I used to notice when reading about politics. Sometimes there would be a seemingly good idea to deregulate something that clearly needed deregulation. The idea's proponents would go on TV and say that, hey, this was obviously a good idea. Whoever by the vagary of politics had to oppose the idea would go on TV and talk about industry's plot to emasculate government safeguards. Predatory corporations! Class solidarity! Consumer safety!

\n

Then the next day, there would be a seemingly good idea to regulate something that clearly needed regulating. The idea's proponents would go on TV and say that, hey, this was obviously a good idea. Its opponents would go on TV and say that all government regulation was inherently bad. Small government! Freedom! Capitalism!

\n

I have found a pattern: when people consider an idea in isolation, they tend to make good decisions. When they consider an idea a symbol of a vast overarching narrative, they tend to make very bad decisions.

\n

\n

Let me offer another example.

\n

A white man is accused of a violent attack on a black woman. In isolation, well, either he did it or he didn't, and without any more facts there's no use discussing it.

\n

But what if this accusation is viewed as a symbol? What if you have been saying for years that racism and sexism are endemic in this country, and that whites and males are constantly abusing blacks and females, and they're always getting away with it because the police are part of a good ole' boys network who protect their fellow privileged whites?

\n

Well, right now, you'll probably still ask for the evidence. But if I gave you some evidence, and it was complicated, you'd probably interpret it in favor of the white man's guilt. The heart has its reasons that reason knows not of, and most of them suck. We unconsciously make decisions based on our own self-interest and what makes us angry or happy, and then later we find reasons why the evidence supports them. If I have a strong interest in a narrative of racism, then I will interpret the evidence to support accusations of racism.

\n

Lest I sound like I'm picking on the politically correct, I've seen scores of people with the opposite narrative. You know, political correctness has grown rampant in our society, women and minorities have been elevated to a status where they can do no wrong, the liberal intelligentsia always tries to pin everything on the white male. When the person with this narrative hears the evidence in this case, they may be more likely to believe the white man - especially if they'd just listened to their aforementioned counterpart give their speech about how this proves the racist and sexist tendencies of white men.

\n

Yes, I'm thinking of the Duke lacrosse case.

\n

The problem is that there are two different questions here: whether this particular white male attacked this particular black woman, and whether our society is racist or \"reverse racist\". The first question definitely has one correct answer which, while difficult to ascertain, is philosophically simple, whereas the second question is meaningless, in the same technical sense that \"Islam is a religion of peace\" is meaningless. People are conflating these two questions, and acting as if the answer to the second determines the answer to the first.

\n

Which is all nice and well unless you're one of the people involved in the case, in which case you really don't care about which races are or are not privileged in our society as much as you care about not being thrown in jail for a crime you didn't commit, or about having your attacker brought to justice.

\n

I think this is the driving force behind a lot of politics. Let's say we are considering a law mandating that businesses lower their pollution levels. So far as I understand economics, the best decision-making strategy is to estimate how much pollution is costing the population, how much cutting pollution would cost business, and if there's a net benefit, pass the law. Of course it's more complicated, but this seems like a reasonable start.

\n

What actually happens? One side hears the word \"pollution\" and starts thinking of hundreds of times when beautiful pristine forests were cut down in the name of corporate greed. This links into other narratives about corporate greed, like how corporations are oppressing their workers in sweatshops in third world countries, and since corporate executives are usually white and third world workers usually not, let's add racism into the mix. So this turns into one particular battle in the war between All That Is Right And Good and Corporate Greed That Destroys Rainforests And Oppresses Workers And Is Probably Racist.

\n

The other side hears the words \"law mandating businesses\" and starts thinking of a long history of governments choking off profitable industry to satisfy the needs of the moment and their re-election campaign. The demonization of private industry and subsequent attempt to turn to the government for relief is a hallmark of communism, which despite the liberal intelligentsia's love of it killed sixty million people. Now this is a battle in the war between All That Is Right And Good and an unholy combination of Naive Populism and Soviet Russia. This, I think, is part of what happened to the poor Cheerios.

\n

Now, if the economists do their calculations and report that actually the law would cause more harm than good, do you think the warriors against Corporate Greed That Destroys Rainforests And Oppresses Workers And Is Probably Racist are going to say \"Oh, okay then\" and stand down? In the face of Corporate Greed That Destroys Rainforests And Oppresses Workers And Is Probably Racist?!?1

\n

One more completely hypothetical example. Let's say someone uses language that objectifies women on a blog. Not out of malice or anything, it was just a post on evolutionary psychology, it's easy to write evolutionary psychology in a way that sounds like it's objectifying women, and since obviously no one would objectify women on purpose to insult them it will be clear to everyone that it was just a harmless turn of phrase. Right?

\n

And let's say some feminist comes along and reads this completely innocent phrase about women. Let's say the context is the entire history of gender relations for the past ten thousand years, in which men have usually oppressed women and usually been pretty okay with doing so. And a society that's moving towards not oppressing women and towards treating them as full and equal human beings, but it's still not entirely clear that everyone's on board with this.

\n

This poorly-worded phrase is now a symbol of All Those Chauvinists Who Think Of Women As Ornaments Or Toys Only Good For Sex And Making Babies2. The feminist is unhappy. He or she asks for the phrase to be removed.

\n

Let's say some person who is emphatically not a feminist notices this request for removal. Let's say the context is a society where men are generally portrayed in popular culture as violent bumbling apes who cause all world problems. A culture where women can go on for hours about what boors men are, but any man who says a word about women is immediately branded a sexist pig. A culture where a popular feminist once said that all sex was rape [EDIT: Or not. Apologies for misquote], and many people believed her, one with affirmative action laws mandating that women be hired over equally qualified men, one where you can't say \"chairman of the board\" without someone calling you sexist and accusing you of taking advantage of your male privilege to ignore male privilege if you disagree.

\n

This request to remove a potentially offensive phrase is now a symbol of All Those Feminists Who Hate Men And Want Them To Feel Guilty All The Time For Vague Reasons. He or she gets angry, and certainly won't remove the offending phrase.

\n

I'm not sure that's what's happening in this case, but I don't think a few poorly worded phrases followed by a polite request to change those poorly worded phrases would have reached five hundred fifty comments divided over four top-level posts if people were just taking it as a request to use slightly different language. In our completely hypothetical example, of course.

\n

I call this mistake \"missing the trees for the forest\". If you have a specific case you need to judge, judge it separately on its own merits, not the merits of what agendas it promotes or how it fits with emotionally charged narratives3.

\n

 

\n

Footnotes

\n

1: This gets worse once it gets formally organized into political parties. You get people saying something like \"How can you, as an atheist, support the war in Iraq?\" and thinking it makes perfect sense, because, after all, the war in Iraq is a Republican initiative, and the Republicans are the party of religious conservatives, therefore... Oh, yes, people think like this.

\n

2: Oh, and this answers a question I sometimes hear asked half-seriously on message boards: how come derogatory jokes are okay in some settings but not in others? For example, how come Polish jokes are generally considered okay, but black jokes definitely aren't? Or how come it's considered okay for a black person to make a racist-sounding joke about black people or use the n-word, whereas it's not okay for a white person?

\n

I think the answer is that if I were to make a Polish joke, it would be interpreted as what it is - a joke that needed somebody to play the part of a stupid person to be funny, and Polish people have traditionally served that role. There is no active well-known ongoing context of persecution of Polish people for the joke to symbolize, so it symbolizes nothing but itself and is inert. If I were to tell a joke about black people, even if it was clear that I wasn't actually racist and just thought the joke was funny, then since most people have a very active concept of persecution of black people, my joke would be a symbol of that persecution, and all right-thinking people who oppose that persecution would also probably oppose my joke. 

\n

This leads to the odd conclusion that in a society known to be without racism, no one would mind racist jokes or slurs. In fact, this is confirmed by evidence. Black people are, society generally assumes, above suspicion when it comes to anti-black racism, and therefore black people can use the \"n-word\" without most people objecting.

\n

This is what led to me developing some of these thoughts. I told a joke which I considered to be making fun of racism. Someone who heard it misinterpreted it and thought it was racist, accused me of racism, spread rumors that I was racist, and generally started a large and complicated campaign to discredit me. After that, I noticed that I was always coming to the defense of people who were accused of racism, and was willing to dismiss practically the entire concept of racism in society as a self-serving attempt at personal gain by minorities, a one hundred eighty degree turn from my previous attitude. Eventually I realized that I was just re-fighting the battle I had to fight after this one joke, and fitting everything to my \"sometimes false accusations of racism unfairly harm majority group members and we need to protect against this\" narrative. So I stopped. I think.

\n

This also could explain why, contrary to Robin Hanson's hopes, people will never stop using disclaimers. They're ways of saying \"I did this action for reasons that do not relate to your narrative; please exclude me from it\", and this is not people's default position.

\n

3: One objection could be that the specific case could start a slippery slope, or create a climate in which other things become viewed as more acceptable. In my experience, neither of these matter nearly as much as they would have to to justify the number of times people invoke them.

" } }, { "_id": "tBGf6QeG2Furtat25", "title": "Obviousness is not what it seems", "pageUrl": "https://www.lesswrong.com/posts/tBGf6QeG2Furtat25/obviousness-is-not-what-it-seems", "postedAt": "2009-07-21T11:34:15.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "tBGf6QeG2Furtat25", "html": "

Things can be obvious if they are simple. If something complicated is obvious, such as anything that anybody seriously studies, then for it to be simple you must be abstracting it a lot. When people find such things obvious, what they often mean is that the abstraction is so clear and simple its implications are unarguable. This is answering the wrong question. Most of the reasons such conclusions might be false are hidden in what you abstracted away. The question is whether you have the right abstraction for reality, not whether the abstraction has the implications it seems to.

\n

e.g.1 I have heard that this is obvious: reality is made of optimizers, so when the optimizers optimize themselves there will be a recursive optimization explosion so something will become extremely optimized and take over the world. But to doubt this is not to doubt positive feedback works. The question is whether this captures the main dynamic taking place in the world.

\n

e.g.2 I have thought before that it is obvious that a minimum wage would usually increase unemployment. This is probably because it’s so clear in an economic model, not because I had checked that that model fits reality well. I think it does, but it’s not obvious. It requires carefully looking at the world and at other possible abstractions.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "cpBMoELcAmSQ6p7xu", "title": "Old blog", "pageUrl": "https://www.lesswrong.com/posts/cpBMoELcAmSQ6p7xu/old-blog", "postedAt": "2009-07-21T11:25:04.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "cpBMoELcAmSQ6p7xu", "html": "

All older blog posts are at http://meteuphoric.blogspot.com/


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "MyqGb24pM54rJhDpb", "title": "Of Exclusionary Speech and Gender Politics", "pageUrl": "https://www.lesswrong.com/posts/MyqGb24pM54rJhDpb/of-exclusionary-speech-and-gender-politics", "postedAt": "2009-07-21T07:22:43.020Z", "baseScore": 92, "voteCount": 100, "commentCount": 669, "url": null, "contents": { "documentId": "MyqGb24pM54rJhDpb", "html": "

I suspect that the ick reaction being labeled \"objectification\" actually has more to do with the sense that the speaker is addressing a closed group that doesn't include you.

\n

Suppose I wrote a story about a man named Frank, whose twin brother (Frank has learned) is in the process of being framed for murder this very night.  Frank is in the middle of a complicated plot to give his brother an alibi.  He's already found the cabdriver and tricked him into waiting outside a certain apartment for an hour.  Now all he needs is the last ingredient of his plan - a woman to go home with him (as he poses as his brother).  Frank is, with increasing desperation, propositioning ladies at the bar - any girl will do for his plan, it doesn't matter who she is or what she's about...

\n

I'd bet I could write that story without triggering the ick reaction, because Frank is an equal-opportunity manipulator - he manipulated the cabdriver, too.  The story isn't about Frank regarding women as things on the way to implementing his plan, it's about Frank regarding various people, men and women alike, as means to the end of saving his brother.

\n

If a woman reads that story, I think, she won't get a sense of being excluded from the intended audience.

\n

I suspect that's what the ick factor being called \"objectification\" is really about - the sense that someone who says \"...but you'll still find women alluring\" is talking to an audience that doesn't include you, a woman.  It doesn't matter if you happen to be a bi woman.  You still get the sense that it never crossed the writer's mind that there might be any women in the audience, and so you are excluded.

\n

In general, starting from a perceptual reaction, it is a difficult cognitive task to say in words exactly why that reaction occurred - to accurately state the necessary and sufficient conditions for its triggering.  If the reaction is affective, a good or bad reaction, there is an additional danger:  You'll be tempted to zoom in on any bad (good) aspect of the situation, and say, \"Ah, that must be the reason it's bad (good)!\"  It's wrong to treat people as means rather than ends, right?  People have their own feelings and inner life, and it's wrong to forget that?  Clearly, that's a problem with saying, \"And this is how you get girls...\"  But is that exactly what went wrong originally - what triggered the original ick reaction?

\n

And this (I say again) is a tricky cognitive problem in general - the introspective jump from the perceptual to the abstract.  It is tricky far beyond the realms of gender...

\n

But I do suspect that the real problem is speech that makes a particular gender feel excluded.  And if that's so, then for the purposes of Less Wrong, I think, it may make sense to zoom in on that speech property.  Politics of all sorts have always been a dangerous bit of attractive flypaper, and I think we've had a sense, on Less Wrong, that we ought to steer clear of it - that politics is the mindkiller.  And so I hope that no one will feel that their gender politics are being particularly targeted, if I suggest that, like some other political issues, we might want to steer sort of clear of that.

\n

I've previously expressed that to build a rationalist community sustainable over time, the sort of gender imbalance that appears among e.g. computer programmers, is not a good thing to have.  And so it may make sense, as rationalists qua rationalists, to target gender-exclusionary speech.  To say, \"Less Wrong does not want to make any particular gender feel unwelcome.\"

\n

But I also think that you can just have a policy like that, without opening the floor to discussion of all gender politics qua gender politics.  Without having a position on whether, say, \"privilege\" is a useful way to think about certain problems, or a harmful one.

\n

And the coin does have two sides.  It is possible to make men, and not just women, feel unwelcome as a gender.  It is harder, because men have fewer painful memories of exclusion to trigger.  A single comment by a woman saying \"All men are idiots\" won't do it.  But if you've got a conversational thread going between many female posters all agreeing that men are privileged idiots, then a man can start to pick up a perceptual impression of \"This is not a place where I'm welcome; this is a women's locker room.\"  And LW shouldn't send that message, either.

\n

So if we're going to do this, then let's have a policy which says that we don't want to make either gender feel unwelcome.  And that aside from this, we're not saying anything official about gender politics qua gender politics.  And indeed we might even want to discourage gender-political discussion, because it's probably not going to contribute to our understanding of systematic and general methods of epistemic and instrumental rationality, which is our actual alleged topic around here.

\n

But even if we say we're just going to have a non-declarative procedural rule to avoid language or behavior that makes a gender feel excluded... it still takes us into thorny waters.

\n

After all, jumping on every tiny hint - say, objecting to the Brennan stories because Brennan is male - will make men feel unwelcome; that this is a blog only for people who agree with feminist politics; that men have to tiptoe while women are allowed to tapdance...

\n

Now with that said: the point is to avoid language that makes someone feel unwelcome.  So if someone says that they felt excluded as a gender, pay attention.  The issue is not how to prove they're \"wrong\".  Just listen to the one who heard you, when they tell you what they heard.  We want to avoid either gender feeling excluded and leaving.  So it is the impression that is the key thing.  You can argue, perhaps, that the one's threshold for offense was set unforgivably low, that they were listening so hard that no one could whisper softly enough.  But not argue that they misunderstood you.  For that is still a fact about your speech and its consequences.  We shall just try to avoid certain types of misunderstanding, not blame the misunderstander.

\n

And what if someone decides she's offended by all discussion of evolutionary psychology because that's a patriarchal plot...?

\n

Well... I think there's something to be said here, about her having impugned the honor of female rationalists everywhere.  But let a female rationalist be the one to say it.  And then we can all downvote the comment into oblivion.

\n

And if someone decides that all discussion of the PUA (pickup artist) community, makes her feel excluded...?

\n

Er... I have to say... I sort of get that one.  I too can feel the locker-room ambiance rising off it.  Now, yes, we have a lot of men here who are operating in gender-imbalanced communities, and we have men here who are nerds; and if you're the sort of person who reads Less Wrong, there is a certain conditional probability that you will be the sort of person who tries to find a detailed manual that solves your problems...

\n

...while not being quite sane enough to actually notice you're driving away the very gender you're trying to seduce from our nascent rationalist community, and consequently shut up about PUA...

\n

...oh, never mind.  Gender relations much resembles the rest of human existence, in that it largely consists of people walking around with shotguns shooting off their own feet.  In the end, PUA is not something we need to be talking about here, and if it's giving one entire gender the wrong vibes on this website, I say the hell with it.

\n

And if someone decides that it's not enough that a comment has been downvoted to -5; it needs to be banned, or the user needs to be banned, in order to signify that this website is sufficiently friendly...?

\n

Sorry - downvoting to -5 should be enough to show that the community disapproves of this lone commenter.

\n

If someone demands explicit agreement with their-favorite-gender-politics...?

\n

Then they're probably making the other gender feel unwelcome - the coin does have two sides.

\n

If someone argues against gay marriage...?

\n

Respond not to trolls; downvote to oblivion without a word.  That's not gender politics, it's kindergarten.

\n

If you just can't seem to figure out what's wrong with your speech...?

\n

Then just keep on accepting suggested edits.  If you literally don't understand what you're doing wrong, then realize that you have a blind spot and need to steer around it.  And if you do keep making the suggested edits, I think that's as much as someone could reasonably ask of you.  We need a bit more empathy in all directions here, and that includes empathy for the hapless plight of people who just don't get it, and who aren't going to get it, but who are still doing what they can.

\n

If you just can't get someone to agree with your stance on explicit gender politics...?

\n

Take it elsewhere, both of you, please.

\n

 

\n

Is it clear from this what sort of general policy I'm driving at?  What say you?

" } }, { "_id": "znBJwbuT3f5eWgM4E", "title": "Shut Up And Guess", "pageUrl": "https://www.lesswrong.com/posts/znBJwbuT3f5eWgM4E/shut-up-and-guess", "postedAt": "2009-07-21T04:04:12.836Z", "baseScore": 125, "voteCount": 101, "commentCount": 110, "url": null, "contents": { "documentId": "znBJwbuT3f5eWgM4E", "html": "

Related to: Extreme Rationality: It's Not That Great

\n

A while back, I said provocatively that the rarefied sorts of rationality we study at Less Wrong hadn't helped me in my everyday life and probably hadn't helped you either. I got a lot of controversy but not a whole lot of good clear examples of getting some use out of rationality.

Today I can share one such example.

Consider a set of final examinations based around tests with the following characteristics:

\n

* Each test has one hundred fifty true-or-false questions.
* The test is taken on a scan-tron which allows answers of \"true\", \"false\", and \"don't know\".
* Students get one point for each correct answer, zero points for each \"don't know\", and minus one half point for each incorrect answer.
* A score of >50% is \"pass\", >60% is \"honors\", >70% is \"high honors\".
* The questions are correspondingly difficult, so that even a very intelligent student is not expected to get much above 70. All students are expected to encounter at least a few dozen questions which they can answer only with very low confidence, or which they can't answer at all.

At what confidence level do you guess? At what confidence level do you answer \"don't know\"?

\n

\n

I took several of these tests last month, and the first thing I did was some quick mental calculations. If I have zero knowledge of a question, my expected gain from answering is 50% probability of earning one point and 50% probability of losing one half point. Therefore, my expected gain from answering a question is .5(1)-.5(.5)= +.25 points. Compare this to an expected gain of zero from not answering the question at all. Therefore, I ought to guess on every question, even if I have zero knowledge. If I have some inkling, well, that's even better.
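
To make that arithmetic concrete, here is a quick check in Python (purely illustrative; nothing below comes from the exam itself beyond the scoring rule already stated):

```python
# Expected value of a blind guess under the scoring rule above:
# +1 for a correct answer, -0.5 for an incorrect one, 0 for "don't know".
p_correct = 0.5  # a blind guess on a true/false question

expected_gain = p_correct * 1.0 + (1 - p_correct) * (-0.5)
print(expected_gain)  # 0.25 points per blind guess, versus 0 for "don't know"

# Thirty blind guesses are therefore worth about 7.5 points in expectation,
# which on a 150-question test is roughly a 5% boost.
print(30 * expected_gain / 150)  # 0.05
```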

You look disappointed. This isn't a very exciting application of arcane Less Wrong knowledge. Anyone with basic math skills should be able to calculate that out, right?

I attend a pretty good university, and I'm in a postgraduate class where most of us have at least a bachelor's degree in a hard science, and a few have master's degrees. And yet, talking to my classmates in the cafeteria after the first test was finished, I started to realize I was the only person in the class who hadn't answered \"don't know\" to any questions.

I have several friends in the class who had helped me with difficult problems earlier in the year, so I figured the least I could do for them was to point out that they could get several free points on the exam by guessing instead of putting \"don't know\". I got a chance to talk to a few people between tests, and I explained the argument to them using exactly the calculation I gave above. My memory's not perfect, but I think I tried it with about five friends.

Not one of them was convinced. I see that while I've been off studying and such, you've been talking about macros of absolute denial and such, and while I'm not sure I like the term, this almost felt like coming up against a macro of absolute denial.

I had people tell me there must be some flaw in my math. I had people tell me that math doesn't always map to the real world. I had people tell me that no, I didn't understand, they really didn't have any idea of the answer to that one question. I had people tell me they were so baffled by the test that they expected to consistently get significantly more than fifty percent of the (true or false!) questions they guessed on wrong. I had people tell me that although yes, on average they would do better, there was always the possibility that by chance alone they would get all thirty of the questions they guessed on wrong and end up at a huge disadvantage1.

I didn't change a single person's mind. The next test, my friends answered just as many \"don't know\"s as the last one.

This floored me, because it's not one of those problems about politics or religion where people have little incentive to act rationally. These tests were the main component of the yearly grade in a very high-pressure course. My friend who put down thirty \"don't know\"s could easily have increased his grade in the class 5% by listening to me, maybe even moved up a whole letter grade. Nope. Didn't happen. So here's my theory.

The basic mistake seems to be loss aversion2, the tendency to regret losses more than one values gains. This could be compounded by students' tendency to discuss answers after the test: I remember each time I heard that one of my guesses had been wrong and I'd lost points, it was a deep psychic blow. No doubt my classmates tended to remember the guesses they'd gotten wrong more than the ones they'd gotten right, leading to the otherwise inexplicable statement that they expect to get more than half of their guesses wrong. But this mistake should disappear once the correct math is explained. Why doesn't it?

In The Terrible...Truth About Morality, Roko gives a good example of the way our emotional and rational minds interact. A person starts with an emotion - in that case, a feeling of disgust about incest - and only later comes up with some reason why that emotion is the objectively correct emotion to have and why their action of condemning the relationship is rationally justified.

My final exam, thanks to loss aversion, created an emotional inclination against guessing, which most of the students taking it followed. When confronted with an argument against it, my friends tried to come up with reasons why the course they took was logical - reasons which I found very unconvincing.

It's really this last part which was so perfect I couldn't resist posting about it. One of my close friends (let's call him Larry) finally admitted, after much pestering on my part, that guessing would increase his score. But, he said, he still wasn't going to guess, because he had a moral objection to doing so. Tests were supposed to measure how much we knew, not how lucky we were, and if he really didn't know the answer, he wanted that ignorance to be reflected in his final score.

A few years ago, I would have respected that strong commitment to principle.  Today, jaded as I am, I waited until the last day of exams, when our test had a slightly different format.  Instead of being true-false, it was multiple-choice: choose one of eight.  And there was no penalty for guessing; indeed, there wasn't even a \"don't know\" on the answer sheet, although you could still leave it blank if you really wanted.

\"So,\" I asked Larry afterwards, \"did you guess on any of the questions?\"

\"Yeah, there were quite a few I didn't know,\" he answered.

When I reminded him about his moral commitment, he said something about how this was different because there were more answers available so it wasn't really the same as guessing on a fifty-fifty question. At the risk of impugning my friend's subconscious motives, I think he no longer had to use moral ideals to rationalize away his fear of losing points, so he did the smart thing and guessed.

Footnotes

\n

1: If I understand the math right, then if you guess on thirty questions using my test's scoring rule, the probability of ending up with a net penalty from guessing is less than one percent [EDIT: Actually just over two percent, thank you ArthurB]. If, after finishing all the questions of which they were \"certain\", a person felt confident that they were right over the cusp of a passing grade, assigned very high importance to passing, and assigned almost no importance to any increase in grade past the passing point, then it might be rational not to guess, to avoid the roughly two percent chance of failure. In reality, no one could calculate their grade out this precisely.
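
The corrected figure is easy to check numerically; here is a small sketch that assumes the same scoring rule as above (+1 per correct guess, -0.5 per incorrect one):

```python
# Probability that thirty blind 50/50 guesses produce a net loss of points.
# With k correct guesses the net score is k*1 + (30 - k)*(-0.5) = 1.5k - 15,
# which is negative only when k <= 9.
from math import comb

n = 30
p_net_loss = sum(comb(n, k) for k in range(10)) / 2**n
print(p_net_loss)  # ~0.021, i.e. just over two percent
```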

\n

2: Looking to see if anyone else had been thinking along the same lines3, I found a very interesting paper describing some work of Kahneman and Tversky on this issue, and proposing a scoring rule that takes loss aversion into account. Although I didn't go through all of the math, the most interesting number in there seems to be that on a true/false test that penalizes wrong answers at the same rate it rewards correct answers (unlike my test, which rewarded guessing), a person with the empirically determined level of human loss aversion will (if I understand the stats right) need to be ~79% sure before choosing to answer (as opposed to the utility maximizing level of >50%). This also linked me to prospect theory, which is interesting.
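
For intuition only, here is how a loss-aversion coefficient translates into an answering threshold on a symmetric +1/-1 rule. This toy formula is not the cited paper's model, and it does not reproduce the ~79% figure, which depends on the parameters (and probability weighting) estimated there:

```python
# Toy model: losses are weighted lambda_ times as heavily as equal-sized gains.
# On a symmetric +1/-1 scoring rule, answering is subjectively worthwhile only
# when p*1 >= (1 - p)*lambda_, i.e. when p >= lambda_/(1 + lambda_).
def answer_threshold(lambda_: float) -> float:
    return lambda_ / (1.0 + lambda_)

print(answer_threshold(1.0))   # 0.50 -- no loss aversion: the utility-maximizing cutoff
print(answer_threshold(2.25))  # ~0.69 -- a commonly quoted loss-aversion coefficient
```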

\n

3: I'm surprised that test-preparation companies haven't picked up on this. Training people to understand calibration and loss aversion could be very helpful on standardized tests like the SATs. I've never taken a Kaplan or Princeton Review course, but those who have tell me this topic isn't covered. I'd be surprised if the people involved didn't know the science, so maybe they just don't know of a reliable way to teach such things?

" } }, { "_id": "oZb6AqZKBG4gmLdAZ", "title": "Outside Analysis and Blind Spots", "pageUrl": "https://www.lesswrong.com/posts/oZb6AqZKBG4gmLdAZ/outside-analysis-and-blind-spots", "postedAt": "2009-07-21T01:00:18.458Z", "baseScore": 83, "voteCount": 74, "commentCount": 34, "url": null, "contents": { "documentId": "oZb6AqZKBG4gmLdAZ", "html": "

(I originally tried to make this a comment, but it kept on growing.)

\n

I was looking through the Google results for \"Less Wrong\" when I found the blog of a rather intelligent Leon Kass acolyte, who's written a critique of our community.  While it's a bit of a caricature, it's not entirely off the mark.  For example:

\n
\n

Trying to think more like a mathematician, whose empiricism resides in the realm of pure thought, does not predispose these 'rationalists' to collect evidence from the real world. Neither does the downplaying of personal experiences. Many are computer science majors, used to being in the comfortable position of being capable of testing their hypotheses without needing to leave their office. It is, then, an easy temptation for them to come up with a nice-sounding theory which appears to explain the facts, and then consider the question solved. Reason must reign supreme, must it not?

\n
\n

How seriously do you take this critique?  Do you wonder why I'm bothering with this straw-man criticism of Less Wrong?

\n

Actually, I've deceived you; there's no such Leon Kass devotee.  The quote above is a very minor adaptation of this Kaj Sotala post, which I've changed from the first person plural to the third person plural.  Read it if you like, and then reevaluate the critique.  (Yes, I know it's less coherent out of its actual context.)  Does it seem to be less of a caricature now that you've read a version in which you identify with the writer, rather than one in which the writer is analyzing and criticizing you from outside?

\n

Now I hope that this little trick (which people are starting to expect around here) caught your attention.  We really do seem to react differently to an outside analysis of a person or group in very different ways, depending on whether we've been primed to identify ourselves more with the author or with the object of analysis.  One might say that we strongly object to being modeled as a more or less predictable agent in someone else's scheme: that we instinctively reject any out-group analysis of our own personality and cognition.  (Compare people's reluctance to trust the Outside View even when they know it's more reliable.)

\n

In fact, in some ways this is reminiscent of anosognosia: take for instance the recent study on body language, which discovered that

\n
  1. We can predict a stranger's extraversion scores on the Implicit Association Test quite well by watching them act out a one-minute commercial.
  2. We can improve our predictions further if we're first told about a few nonverbal tics to watch for.
  3. We're bad at predicting our own extraversion scores on the IAT, even if we're given the video of ourselves acting out the commercial and told about the nonverbal tics.

\n

This screams out \"blind spot\" to me, and one with a nice evolutionary reason to boot: a blind spot toward one's own patterns of action would make it possible to sincerely promise something we'll probably fail to do, which is a pretty nice trick to have up one's sleeve in a social environment.  (Anyhow, the cute ev-psych story can be discarded without incident to the rest of the evidence.)

\n

If it's true that we instinctively react to an out-group analysis of our actions, what might we expect would happen when we're faced with one?  The most likely reaction would be a knee-jerk dismissal of the model, with justification to be constructed after the fact; or perhaps we'd take offense.  If that were so, then we might expect the following kinds of results:

\n\n

And if a fourth obvious case doesn't occur to you, you must have been somewhere else for the past week.

\n

Fortunately, it seems to me that a fix is available: if the analysis is set up so that the (intended) readers identify more with the analyst than with the objects of analysis, they seem to avoid that blind spot.  (They don't have to share all the characteristics of the analyst to do that; I would bet that female readers didn't have a moment of difficulty identifying with Eliezer rather than the woman on the panel in this anecdote, where the implied divide was \"rationalists versus irrationalists\" rather than \"men versus women\".)  Keeping your readers with you is usually not that hard to do in practice; it's what writers call \"knowing your audience\", and if you imagine delivering your statement to the proper audience, you should (subconsciously, even) do better at avoiding that split between yourself and them.

\n

The key thing, though, is that readers don't seem to do this on their own, not even rationalists.  This is not a failing of one part of this community or another; this seems to be part of the current human condition, and it behooves a good communicator to avoid implicit \"Us/Them\" splits that leave a good part of the intended readership in \"Them\".  In particular, writing with more care on gender is worth the cost in extra words and thought: gender-specific pronouns really do seem to cause distraction and dis-identification with the author, and I'd predict that the difference between

\n
\n

most people here don't value social status enough and (especially the men) don't value having sex with extremely attractive women that money and status would get them

\n
\n

and, say,

\n
\n

money and status would make a man more attractive to many women; men who really value a better romantic or sexual life should thus put more priority on money and status

\n
\n

is pretty significant to a female reader (please correct me if I'm wrong).  In the first, it's generally just the male readers who can easily take it as an analysis from their perspective, while female readers identify themselves with the thing being (very crudely) modeled from outside.  In the second, female and male readers can identify themselves with someone making an actual choice or observation, or equally well envision themselves looking at the whole situation from outside.

\n

Therefore, I suggest that when your post or comment touches on a subject that divides the Less Wrong community into identifiable groups (transhumanism or PCT or libertarianism, not just gender), it's good practice to imagine reading your contribution out loud to members of the various subgroups, and edit if you feel it might go over badly.  This goes double if you're analyzing a general tendency in a group you don't belong to.  (ETA: Sometimes it might be necessary to go ahead and damn the torpedoes, but I think that on some subjects we're being far too lax in this respect.)  On the other hand, if someone analyzes you or your group from outside (and, needless to say, gets it wrong), I'd suggest you show a little extra patience with them; neither of you need be exceptionally irrational/sensitive/insensitive for this kind of impasse to arise.

\n

P.S. I've hung back from the Less Wrong Gender Wars for a while, in part because I wanted to observe it a bit before committing myself to a position, and in part because everything I had to say seemed wrong somehow.  I finally started writing out a comment listing several hypotheses for how we could have a situation where one good rationalist feels that a way of speaking is clearly unethical (while not necessarily incorrect in substance), and another good rationalist appears to be, not just disagreeing, but actively mystified about what could be wrong with it.  Then I realized that one of my hypotheses was much better supported than the others.

\n

EDIT: At the request of several, I've stopped diluting the term \"Outside View\" and called this particular thing \"out-group analysis.\"

\n
" } }, { "_id": "95ieojrogPEiAygeZ", "title": "Chomsky, Sports Talk Radio, Media Bias, and Me", "pageUrl": "https://www.lesswrong.com/posts/95ieojrogPEiAygeZ/chomsky-sports-talk-radio-media-bias-and-me", "postedAt": "2009-07-21T00:57:19.823Z", "baseScore": 17, "voteCount": 20, "commentCount": 7, "url": null, "contents": { "documentId": "95ieojrogPEiAygeZ", "html": "

Just about everyone knows that one of Noam Chomsky's big things is that he thinks the media are badly distorted away from the truth and toward the interests of the wealthy and powerful. Once a long time ago I read or heard either this quote from this interview or something like it:

\n
\n

\"Take, say, sports -- that's another crucial example of the indoctrination system, in my view. For one thing because it -- you know, it offers people something to pay attention to that's of no importance. [audience laughs] That keeps them from worrying about -- [applause] keeps them from worrying about things that matter to their lives that they might have some idea of doing something about. And in fact it's striking to see the intelligence that's used by ordinary people in [discussions of] sports [as opposed to political and social issues]. I mean, you listen to radio stations where people call in -- they have the most exotic information [more laughter] and understanding about all kind of arcane issues. And the press undoubtedly does a lot with this.\"

\n
\n

Taking this quote along with the rest of the interview, the idea seems to be that the default condition of most people is to have decent critical faculties unless someone takes the trouble to actively screw them up. So in contrast to their badly distorted ideas about politics, people tend to have sensible ideas about sports, since the powerful have no particular motive to distort those ideas (though they do have a motive to get people to think about sports instead of things that are important).

Leaving completely aside the merits of Chomsky's critique of the media or any of his other views, I have logged a pretty decent number of hours listening to sports talk radio and folks, I'm here to tell you that the quality of the discourse is generally mighty low. It might not be as low as on political talk radio (people aren't as hate-filled), but it's low. I sent an email to Chomsky pointing this out (which I cannot locate as it was many years ago), and to his credit he wrote me back. I don't think I am revealing any confidences by saying that, to the best of my recollection, he admitted that it had been many years since he had really paid any attention to sports or sports talk or anything like that. The point is not (or at least not mostly) to ding Chomsky for being a little loose with the facts behind his just-so story. The point is that whatever you think about the media and the powerful and all that (see here for some of my views), it is not true that reasoned discussion is the default condition of humanity and prevails unless someone comes along and screws it up. I think it's closer to the truth to say that natural irrationality is the raw material that manipulation by the powerful has to act upon.

And that is the story of one of my very few brushes with fame (I also once saw Mike Ditka on a plane, once saw Al Sharpton on a plane, once shook hands with Paul Krugman and knew a very lucky someone who had actually personally laid eyes on Gwyneth Paltrow).

" } }, { "_id": "vX3EjNHR387p3GzCN", "title": "Joint Distributions and the Slow Spread of Good Ideas", "pageUrl": "https://www.lesswrong.com/posts/vX3EjNHR387p3GzCN/joint-distributions-and-the-slow-spread-of-good-ideas", "postedAt": "2009-07-20T23:31:27.817Z", "baseScore": 16, "voteCount": 14, "commentCount": 21, "url": null, "contents": { "documentId": "vX3EjNHR387p3GzCN", "html": "

A few years ago a well-known economist named David Romer published a paper in a top economics journal* arguing that professional football teams don't \"go for it\" nearly often enough on fourth down. The question, of course, is how this can persist in equilibrium. If Romer is correct, wouldn't teams have a strong incentive to change their strategies? Of course it's possible that he is correct, but that no one ever knew it before the paper was published. But then would the fact that the recommendation has not been widely adopted** constitute strong evidence that he is not correct? The paper points out two possible reasons why not. First, the objective function of the decision-makers may not be to maximize the probability of winning the game. Second and more relevant for our purposes, there may be some biases at work. The key point is this quote from the article (page 362):

\n
\n

\"Many skills are more important to running a football team than a command of mathematical and statistical tools. And it would hardly be obvious to someone without knowledge of those tools that they could have any significant value in football.\"

\n
\n

Romer's point is that what's relevant is the joint distribution of attributes in the pool of potential football coaches (or other decision-makers). Even in something like professional football where there is a very strong incentive to get better results, it may take a long time for coaches who are willing/able to adopt a good new idea to out-compete and displace those who continue to use the bad old idea if there are very few potential coaches who both have the conventional talents and understand the new idea.

I think Romer is right about this, and his point is the main take-away point of this post. But I don't think the main \"joint distribution\" problem is a paucity of would-be coaches who understand both conventional football stuff and math: math talent can be hired to work under a head coach who doesn't understand it, just like medical talent is. Rather, it needs to be the case that the joint distribution is unfavorable and that this can't be gotten around by just adding math talent as necessary. Maybe the problem is a paucity of potential coaches who have both conventional coaching skills and also the attitude that nerds are to be listened to rather than beaten up. This may explain why baseball seems to have been much more accepting of statistical analysis than has football.

The point is this: an unfavorable joint distribution of attributes in the pool of potential decision-makers can greatly retard the adoption of good ideas, even when incentives to adopt are very strong, which means that the fact of non-adoption is not decisive evidence that an idea is bad. But for this to be true, there must be some reason why the people with the principal attribute cannot simply seek and incorporate the advice of the people with the secondary attribute. This will often be because the very acculturation process that produced the people who have the principal attribute creates some barrier to them making use of the secondary one.

\"Do Firms Maximize? Evidence from Professional Football.\" by David Romer, Journal of Political Economy (2006). It is a sad commentary on the state of the economics profession that some journal editor seems to have made him change the title from the much cooler: \"It's Fourth Down and What Does the Bellman Equation Say? A Dynamic-Programming Analysis of Football Strategy.\"
**I think this is a true statement, but I could be wrong. Please correct me if I am.
***Another possibility is that the problem is not the coaches, but the fans. A coach who sticks with the conventional strategy is protected by a \"nobody ever got fired for buying IBM\" attitude, whereas a coach who does something unconventional (but probabilistically correct) runs the risk of getting fired if it doesn't work out. This just pushes the irrationality from the coaches to the fans, but that might be more plausible: they have access to fewer resources to figure out what is and is not a good idea, and have much less of an incentive to try to get it right. Then the problem would be a paucity of fans who have both the attributes \"really care about football\" and \"understand and are willing to support good ideas, even from nerds.\"

" } }, { "_id": "rXbgabdnmxd3jdY7w", "title": "Creating The Simple Math of Everything", "pageUrl": "https://www.lesswrong.com/posts/rXbgabdnmxd3jdY7w/creating-the-simple-math-of-everything", "postedAt": "2009-07-20T22:45:31.971Z", "baseScore": 18, "voteCount": 16, "commentCount": 28, "url": null, "contents": { "documentId": "rXbgabdnmxd3jdY7w", "html": "

Eliezer once proposed an Idea for a book, The Simple Math of Everything.  The basic idea is to compile articles on the basic mathematics of a wide variety of fields, but nothing too complicated.

\n
\n

Not Jacobean matrices for frequency-dependent gene selection; just Haldane's calculation of time to fixation.  Not quantum physics; just the wave equation for sound in air.  Not the maximum entropy solution using Lagrange Multipliers; just Bayes's Rule.

\n
\n

Now, writing a book is a pretty daunting task.  Luckily brian_jaress had the idea of creating an index of links to already available online articles.  XFrequentist pointed out that something like this has been done before over at Evolving Thoughts.  This initially discouraged me, but it eventually helped me refine what I thought the index should be.  A key characteristic of Eliezer's idea is that it should be worthwhile for someone who doesn't know the material to read the entire index.  Many of the links at Evolving Thoughts point to rather narrow topics that might not be very interesting to a generalist.  Also there is just plain a ton of stuff to read over there - at least 100 articles.

\n

So we should come up with some basic criteria for the articles.  Here is what I suggest (let me know what you think):

\n
  1. The index must be short: say 10 - 20 links.  Or rather, the core of the index must be short.  We can have longer lists of narrower and more in depth articles for people who want to get into more detail about, say, quantum physics or economic growth.  But these should be separate from the main index.
  2. Each article must meet minimum requirements in terms of how interesting the topic is and how important it is. Remember, this is an index for the reader to gain a general understanding of many fields.
  3. The article must include some math - at minimum, some basic algebra.  Calculus is good as long as it significantly adds to the article.  In fact, this should probably be the basic rule for all additions of complex math.  Modularization also helps - i.e., if the relatively complicated math is in a clearly visible section that can be skipped without losing anything significant from the rest of the article, it should be ok.
\n
This list of criteria isn't meant to be exhaustive either.  If there is anything you guys think should be added, by all means, suggest it and we can debate it.  Also, an article shouldn't have to perfectly fit our criteria in order to qualify for the index, as long as it's at least close and is an improvement over what we have in its place.
\n
I should also mention that there is no problem with linking to LessWrong.  So if you see a major problem with the article we have on, say, the ideal gas law, then write a better version.  If we gradually replace offsite links with LW links, we could even publish an ebook or something.
\n
We should also hash out exactly which topics deserve to be represented, and furthermore the number of topics.  I'll suggest some from the fields I'm most familiar with (you should do the same):
\n
\n\n
\n

If you do happen to come across something worth considering for the index, by all means, update the wiki.  (a good place to start looking would be at Evolving Thoughts...)  Perhaps we should add a section to the wiki for articles that we think are worth consideration so we can differentiate them from the main list.  What thoughts do you have (about all of this)?

" } }, { "_id": "yzKSQ6P6XcrbALawX", "title": "Counterfactual Mugging v. Subjective Probability", "pageUrl": "https://www.lesswrong.com/posts/yzKSQ6P6XcrbALawX/counterfactual-mugging-v-subjective-probability", "postedAt": "2009-07-20T16:31:55.512Z", "baseScore": 4, "voteCount": 9, "commentCount": 32, "url": null, "contents": { "documentId": "yzKSQ6P6XcrbALawX", "html": "

This has been in my drafts folder for ages, but in light of Eliezer's post yesterday, I thought I'd see if I could get some comment on it:

\n

 

\n

A couple weeks ago, Vladimir Nesov stirred up the biggest hornet's nest I've ever seen on LW by introducing us to the Counterfactual Mugging scenario.

\n

If you didn't read it the first time, please do -- I don't plan to attempt to summarize.  Further, if you don't think you would give Omega the $100 in that situation, I'm afraid this article will mean next to nothing to you.

\n

So, those still reading, you would give Omega the $100.  You would do so because if someone told you about the problem now, you could do the expected utility calculation 0.5*U(-$100)+0.5*U(+$10000)>0.  Ah, but where did the 0.5s come from in your calculation?  Well, Omega told you he flipped a fair coin.  Until he did, there existed a 0.5 probability of either outcome.  Thus, for you, hearing about the problem, there is a 0.5 probability of your encountering the problem as stated, and a 0.5 probability of your encountering the corresponding situation, in which Omega either hands you $10000 or doesn't, based on his prediction.  This is all very fine and rational.  
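
Spelled out as a quick sketch (treating utility as linear in dollars purely for illustration, which the argument itself does not require):

```python
# Expected utility, evaluated before the coin flip, of the policy
# "give Omega the $100 when asked"; utility is treated as linear in
# dollars here only to make the inequality concrete.
p_heads = 0.5
eu_pay = p_heads * 10_000 + (1 - p_heads) * (-100)  # heads: rewarded; tails: you pay
eu_refuse = 0.0                                     # neither paid nor rewarded

print(eu_pay, eu_refuse)  # 4950.0 0.0 -- the precommitted policy comes out ahead
```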

\n

So, new problem.  Let's leave money out of it, and assume Omega hands you 1000 utilons in one case, and asks for them in the other -- exactly equal utility.  What if there is an urn, and it contains either a red or a blue marble, and Omega looks, maybe gives you the utility if the marble is red, and asks for it if the marble is blue?  What if you have devoted considerable time to determining whether the marble is red or blue, and your subjective probability has fluctuated over the course of you life? What if, unbeknownst to you, a rationalist community has been tracking evidence of the marble's color (including your own probability estimates), and running a prediction market, and Omega now shows you a plot of the prices over the past few years?

\n

In short, what information do you use to calculate the probability you plug into the EU calculation?

" } }, { "_id": "QK9ArXuRyE2vsPC43", "title": "Are you crazy?", "pageUrl": "https://www.lesswrong.com/posts/QK9ArXuRyE2vsPC43/are-you-crazy", "postedAt": "2009-07-20T16:27:14.524Z", "baseScore": 2, "voteCount": 7, "commentCount": 18, "url": null, "contents": { "documentId": "QK9ArXuRyE2vsPC43", "html": "

Followup To: Are You Anosognosic?, The Strangest Thing An AI Could Tell You

\n

Over this past weekend I listened to an episode of This American Life titled Pro Se.  Although the episode is nominally about people defending themselves in court, the first act of the episode was about a man who pretended to act insane in order to get out of a prison sentence for an assault charge.  There doesn't appear to be a transcript, so I'll summarize here first.

\n
\n

A man, we'll call him John, was arrested in the late 1990s for assaulting a homeless man.  Given that there was plenty of evidence to prove him guilty, he was looking for a way to avoid the likely jail sentence of five to seven years.  The other prisoners he was being held with suggested that he plead insanity:  he'd be put up at a hospital for several months with hot food and TV and released once they considered him \"rehabilitated\".  So he took bits and pieces about how insane people are supposed to act from movies he had seen and used them to form a case for his own insanity.  The court believed him, but rather than sending him to a cushy hospital, they sent him to a maximum security asylum for the criminally insane.

\n

Within a day of arriving, John realized the mistake he had made and sought to find a way out.  He tried a variety of techniques:  engaging in therapy, not engaging in therapy, dressing like a sane person, acting like a sane person, acting like an incurably insane person - but none of it worked.  Over a decade later, he is still being held.

\n

As the story unravels, we learn that although John makes a convincing case that he faked his way in and is being held unjustly, the psychiatrists at the asylum know that he faked his way in and continue to hold him anyway, though John is not aware of this.  The reason:  through his long years of documented behavior John has made it clear to the psychiatrists that he is a psychopath/sociopath and is not safe to return to society without therapy.  John is aware that this is his diagnosis, but continues to believe himself sane.

\n
\n

Similar to trying to determine if you are anosognosic, how do you determine if you are insane?  Some kinds of insanity can be self-diagnosed, but in John's case he has lots of evidence (he has access to read all of his own medical records) that he is insane, yet continues to believe himself not to be.  To me this seems a level trickier than anosognosia, since there are no physical tests you can run, but perhaps it's only a level of difference significant to people but not to an AI.

\n

Edited to add a footnote:  By \"sane\" I simply mean normative human reasoning:  the way you expect, all else being equal, a human to think about things.  While the discussion in the comments about how to define sanity might be of some interest, it really gets away from the point of the post unless you want to argue that \"sanity\" is creating a question here that is best solved by dissolving the question (as at least one commenter does).

" } }, { "_id": "MswXnxRX5xN4tNjzC", "title": "Being saner about gender and rationality", "pageUrl": "https://www.lesswrong.com/posts/MswXnxRX5xN4tNjzC/being-saner-about-gender-and-rationality", "postedAt": "2009-07-20T07:17:13.855Z", "baseScore": 17, "voteCount": 74, "commentCount": 98, "url": null, "contents": { "documentId": "MswXnxRX5xN4tNjzC", "html": "

It seems that LessWrong has a nascent political problem brewing. Firstly, let me re-iterate why politics is bad for our rationality:

\n
\n

People go funny in the head when talking about politics.  The evolutionary reasons for this are so obvious as to be worth belaboring:  In the ancestral environment, politics was a matter of life and death.  And sex, and wealth, and allies, and reputation...  When, today, you get into an argument about whether \"we\" ought to raise the minimum wage, you're executing adaptations for an ancestral environment where being on the wrong side of the argument could get you killed.  Being on the right side of the argument could let you kill your hated rival!

\n

Politics is an extension of war by other means.  Arguments are soldiers.  Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back - providing aid and comfort to the enemy.  People who would be level-headed about evenhandedly weighing all sides of an issue in their professional life as scientists, can suddenly turn into slogan-chanting zombies when there's a Blue or Green position on an issue.

\n
\n

Politics is especially bad for the community if people begin to form political factions within the community. Specifically, if LessWrong starts to polarize along a \"feminist/masculinist\" fault-line, then every subsequent debate will become a proxy war for the crusade between the masculinist jerks and the feminazis.

\n

Alicorn has contributed in several ways to the emerging politicization of LessWrong. She has started name-calling against the other side (\"Jerkitude\" \"disincentivize being piggish\"), started to attempt to form a political band of feminist allies (\"So can I get some help?  Some lovely people have thrown in their support,\"), implicitly asked these new allies to downvote anyone who disagrees with her position (\"There is still conspicuous karmic support for some comments that perpetuate the problems\"), and asked her faction to begin enforcing her ideas, specifically by criticizing, ostracizing or downvoting anyone who engages in a perfectly standard use of language and thought: modeling the generic human female as a mechanical system and using that model to make predictions about reality. She has billed this effort as a moral crusade (\"unethical\"). I am sure she isn't doing this on purpose: like all humans, her brain is hard-wired to see any argument as a moral crusade where she is objectively right, and to seek allies within the tribe to move against and oppress the enemy. [notice how I objectified her there, leaving behind the language of a unified self or person in favour of a collection of mechanical motivations and processes whose dynamics are partially determined by evolutionary pressures, and what a useful exercise this can be for making sense of reality]

\n

We should expend extreme effort to nip this problem in the bud. As part of this effort, I will delete my account and re-register under a different username. I would recommend that Alicorn do the same. I would also recommend that anyone who feels that they have played a particularly large part in the debate on either side do the same, for example PJeby. That way, when we talk to each other next in a comment thread, we won't be treating the interaction as a proxy war in the great feminist/masculinist crusade, because we will be anonymous again.

\n

I would also implore everyone to just not bring this issue up again. If someone uses language in a way that mildly annoys you (hint: they probably didn't do this on purpose), rather than precipitating a major community feud over it, just ignore it. The epistemic rationality of LessWrong is worth more than the gender ratio we have. A 95% male community that manages to overcome a whole host of problems in instrumental and epistemic rationality is worth more to the world than an 80% male community that is crippled by a blood-feud between a feminist faction and a masculinist faction.

\n

 

" } }, { "_id": "u4zwGST5Nc7eqfQv6", "title": "An Akrasia Anecdote", "pageUrl": "https://www.lesswrong.com/posts/u4zwGST5Nc7eqfQv6/an-akrasia-anecdote", "postedAt": "2009-07-20T03:10:40.135Z", "baseScore": 12, "voteCount": 10, "commentCount": 6, "url": null, "contents": { "documentId": "u4zwGST5Nc7eqfQv6", "html": "

About a month ago I committed myself to an anti-akrasia resolution, inspired by Yvain and ZM.  I won't repost the resolution here, but if you want to see it, click the first link.  The essence of my resolution was to commit myself to practice math to prepare for graduate school in the fall.  The resolution was valid from June 18 to 28 - I ended it there because the next day I was on a plane to Orange, CA to start an internship (in experimental economics, if anyone's interested).  Now, I'm going to provide a lot of anecdotal evidence and speculation.  Please do not fall prey to the typical mind fallacy.  The structure of my brain may be sufficiently different from yours that nothing I say here generalizes.  I also could be deluding myself.  This is only anecdotal evidence, and I have no real way of knowing whether or not my perception of what happened is biased.

\n

With that being said, there is some objective evidence, namely how much I deviated from my resolution.  There is also what happened after my resolution while here at Chapman University.  Here, again, the evidence is mostly subjective with the exception of how much I stuck to the old resolution.

\n

To start at the beginning, the first day my resolution was in effect I worked diligently for well over two hours - probably closer to four - working through multiple chapters of my old probability text... actually, that was a lie.  I had planned a date with my girlfriend that day, and she assumed that the date day continued on through the evening.  It was a fair assumption on her part, so I used my one free day on the first day of my resolution.  Great start, eh?

\n

From that day on, I did pretty well.  It wasn't until near the end that I finally broke the resolution.  Twice I did it because I realized I needed a second loophole - a way to trade an extra two hours one day for a free day at a later date.  On the last day I broke the resolution because I didn't have the foresight to plan a full day with my girlfriend before leaving for six weeks.  So I broke my resolution 33% of the non-free days (total days minus the one free day I gave myself).  If we count the extra long sessions as compensating for the first two days, that reduces to 11%.  Not bad, considering that this summer I had opened a textbook up exactly twice before my resolution.

\n

When I originally posted my resolution, I said I would make a new one once I made it here to Chapman and got a feel for how much time I had to spare.  Well, I never got around to that (akrasia applies to plans to fight akrasia.  It really is vicious).  The fact that I didn't put that in my resolution may or may not be significant.  I don't think it is.  I did continue to do math, at first.  My first two mornings here I spent a solid two hours apiece reviewing some analysis.  There were a couple more days where I might have worked for about an hour.  But for the most part, I haven't done much studying in the past three weeks.

\n

This meager data suggests that the resolution worked for the specific duties I mentioned, but there could be confounders.  I may have been gung ho about it because I had just worked myself up.  As time went on, this might have waned, and caused me to not renew my resolution as well as not do as much math.  This doesn't seem like what happened to me, but the data doesn't rule it out.

\n

As far as the \"maximize my utility function\" clause goes, I have no objective data.  From here on out, it's just my impressions.  Take them for what they are worth.

\n

It did seem like I did a better job of doing what I really wanted to do (i.e. maximizing my utility function) even when taking into account that my resolution was already binding me to do better for two hours out of the day.  I spent less of my free time playing Settlers of Catan online1 and more hanging out with friends, getting paperwork filled out for graduate school, reading books, etc.  I seemed to spend much less time, well, killing time.  It wasn't perfect though, I still killed time doing mostly worthless things.  I did much better with the specified things2 in my resolution than with the utility maximization clause.  It helped, but not as much.

\n

After the resolution expired, however, it was back to my old ways.  Lots of flash games, little productivity.  In fact, this is largely the reason that I haven't posted about The Simple Math of Everything yet (it's coming...) and why it took so long to write this followup.  If MichaelHoward hadn't asked me how everything panned out, I may not have even written this.

\n

I have an idea3 which I think explains why there was a difference at all and why the resolution worked at all.  The resolution, being so formal, was in the forefront of my consciousness.  Normally, it seems, my mind would query \"what should I do\" and faced with the daunting task of actually computing this, a null would pop out.  Basically akrasia was preventing my efforts to stop akrasia. And so I would go on doing whatever worthless nonsense I was doing the day before.  Playing cheesy computer games, watching sportscenter, etc.  I wanted to do what I should do, but when I actually sat down at my desk and saw my computer in front of me, I wasn't even thinking about what I should do anymore.  My resolution, in effect, kept this in my consciousness, which made it easier for me to stop akrasia before it starts.  I would think, \"what should I do?\" and my mind would immediately go to my resolution.

\n

This explains why it worked at all, but why the difference between specific activities and the utility maximization clause?  The difference seems to lie in what had to be computed after I thought of my resolution.  With specific activities, I could just go down a checklist.  When the checklist was completed, I actually had to think about what I really wanted to do, which just left the door open for akrasia to rear its ugly head again.  I suspect I can improve my ability to just figure out what I want to do and then do it, but for now having specific things outlined in a resolution is more effective.

\n

This also seems consistent with my previous experience.  When a deadline is nearing, I simply think about whatever I should be doing more which makes it easier to conclude \"I'm going to do that\" and then actually go and do it.  At work, I know I have specific duties, again seemingly with the same effect.  Far away from a deadline, I'm simply not thinking about it.  I'm not really sure why - it seems to be a subconscious process that I can't really control - but it happens.  It might be related to stress - the resolution might be creating a mild amount of stress which, in turn, motivates me.  It didn't feel like that - I really enjoyed the math I was doing, but it's worth looking into.

\n

One other thing I should mention about my resolution.  I used a green dry erase marker to write LW in large capital letters on the resolution as it hung in front of my desk.  This is close to the symbol for LessWrong in Google Chrome (the W is lowercase), so it grabbed my attention and reminded me of what I should be doing.  Again, it seemed to keep me conscious of what I needed to be doing.  I probably should have made my computer's wallpaper a collage of these symbols or something to help keep it in my mind.  (This may be part of the psychology behind chaos magic).  After rereading the thread about chaos magic I linked to, I immediately thought \"so I guess that's what priming feels like.\"  Priming is probably a big part of the explanation I have above, but I don't think it is everything.  Sometimes I was aware of what I should do seemingly without being primed - like when I'm away from home, or immediately when I wake up.  This is testable though; perhaps I'll do a run without having the resolution in sight and check if the effects change.

\n

Now take all of this with a grain of salt.  Your mind is not my mind, so nothing I said above need apply to you.  For example, not all akrasia has to occur because you simply aren't consciously thinking \"I should do X.\"  Some people may be constantly thinking that and still do Y, but that doesn't seem to be the issue with me.  I could also be very deluded about what actually happened.  These are, after all, largely my subjective impressions.  However, I intend to renew my resolution and monitor the results more closely.  It did seem to work even if I'm completely wrong about why.  Perhaps you have a better explanation?  I'm listening.

\n

Footnotes

\n

1.  It's my own solitaire.  The company that hosts the servers uses a different name for the game to avoid a lawsuit by the manufacturer.  I'm not telling you what they call it because it really is a productivity buster.  Google at your own risk.

\n

2.  Shortly after I posted my resolution, I amended it to add a few more specific duties which I'd rather not mention in such a public medium.  Suffice it to say that the results were similar to those of the math duties.

\n

3.  I don't think this explanation and the reasons that Yvain gave (and linked to) in his original post for why the resolution should work are mutually exclusive.  Hyperbolic discounting might be the reason I was less conscious of what I should be doing, or the fact that I was less conscious might be the reason hyperbolic discounting works as a good model.  Or neither; I'm not sure.  But I see no reason why they are fundamentally incompatible.

" } }, { "_id": "c3wWnvgzdbRhNnNbQ", "title": "Timeless Decision Theory: Problems I Can't Solve", "pageUrl": "https://www.lesswrong.com/posts/c3wWnvgzdbRhNnNbQ/timeless-decision-theory-problems-i-can-t-solve", "postedAt": "2009-07-20T00:02:59.721Z", "baseScore": 57, "voteCount": 60, "commentCount": 156, "url": null, "contents": { "documentId": "c3wWnvgzdbRhNnNbQ", "html": "

Suppose you're out in the desert, running out of water, and soon to die - when someone in a motor vehicle drives up next to you.  Furthermore, the driver of the motor vehicle is a perfectly selfish ideal game-theoretic agent, and even further, so are you; and what's more, the driver is Paul Ekman, who's really, really good at reading facial microexpressions.  The driver says, \"Well, I'll convey you to town if it's in my interest to do so - so will you give me $100 from an ATM when we reach town?\"

\n

Now of course you wish you could answer \"Yes\", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver.  \"Yes,\" you say.  \"You're lying,\" says the driver, and drives off leaving you to die.

\n

If only you weren't so rational!

\n

This is the dilemma of Parfit's Hitchhiker, and the above is the standard resolution according to mainstream philosophy's causal decision theory, which also two-boxes on Newcomb's Problem and defects in the Prisoner's Dilemma.  Of course, any self-modifying agent who expects to face such problems - in general, or in particular - will soon self-modify into an agent that doesn't regret its \"rationality\" so much.  So from the perspective of a self-modifying-AI-theorist, classical causal decision theory is a wash.  And indeed I've worked out a theory, tentatively labeled \"timeless decision theory\", which covers these three Newcomblike problems and delivers a first-order answer that is already reflectively consistent, without need to explicitly consider such notions as \"precommitment\".  Unfortunately this \"timeless decision theory\" would require a long sequence to write up, and it's not my current highest writing priority unless someone offers to let me do a PhD thesis on it.

\n

However, there are some other timeless decision problems for which I do not possess a general theory.

\n

For example, there's a problem introduced to me by Gary Drescher's marvelous Good and Real (OOPS: The below formulation was independently invented by Vladimir Nesov; Drescher's book actually contains a related dilemma in which box B is transparent, and only contains $1M if Omega predicts you will one-box whether B appears full or empty, and Omega has a 1% error rate) which runs as follows:

\n

Suppose Omega (the same superagent from Newcomb's Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says:

\n

\"I just flipped a fair coin.  I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000.  And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads.  The coin came up heads - can I have $1000?\"

\n

Obviously, the only reflectively consistent answer in this case is \"Yes - here's the $1000\", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers \"Yes\" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker.
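
\n

As a quick sanity check on why \"Yes\" is the answer you would want to precommit to, compare the two policies ex ante, before the coin is flipped.  The sketch below is nothing more than that arithmetic; it assumes a fair coin, the payoffs stated above, and utility linear in dollars.

```python
# Ex-ante expected value of the two policies in Omega's coin-flip dilemma,
# assuming a fair coin, the payoffs stated above, and utility linear in dollars.

def expected_value(pay_when_heads: bool) -> float:
    p_heads = 0.5
    heads_payoff = -1000 if pay_when_heads else 0        # hand over $1000 only if you're a payer
    tails_payoff = 1_000_000 if pay_when_heads else 0    # Omega pays only if it predicts you'd have paid
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(expected_value(pay_when_heads=True))   # 499500.0
print(expected_value(pay_when_heads=False))  # 0.0
```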

\n

But I don't have a general theory which replies \"Yes\".  At the point where Omega asks me this question, I already know that the coin came up heads, so I already know I'm not going to get the million.  It seems like I want to decide \"as if\" I don't know whether the coin came up heads or tails, and then implement that decision even if I know the coin came up heads.  But I don't have a good formal way of talking about how my decision in one state of knowledge has to be determined by the decision I would make if I occupied a different epistemic state, conditioning using the probability previously possessed by events I have since learned the outcome of...  Again, it's easy to talk informally about why you have to reply \"Yes\" in this case, but that's not the same as being able to exhibit a general algorithm.

\n

Another stumper was presented to me by Robin Hanson at an OBLW meetup.  Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote.  Let's say that six of them form a coalition and decide to vote to divide the pie among themselves, one-sixth each.  But then two of them think, \"Hey, this leaves four agents out in the cold.  We'll get together with those four agents and offer them to divide half the pie among the four of them, leaving one quarter apiece for the two of us.  We get a larger share than one-sixth that way, and they get a larger share than zero, so it's an improvement from the perspectives of all six of us - they should take the deal.\"  And those six then form a new coalition and redivide the pie.  Then another two of the agents think:  \"The two of us are getting one-eighth apiece, while four other agents are getting zero - we should form a coalition with them, and by majority vote, give each of us one-sixth.\"

\n

And so it goes on:  Every majority coalition and division of the pie is dominated by another majority coalition in which each agent of the new majority gets more pie.  There does not appear to be any such thing as a dominant majority vote.
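
\n

To make the cycling concrete, here is a small Python sketch.  It assumes ten agents and a pie of size 1, and uses one simple construction - recruit the six agents currently getting the least and split the outsiders' freed-up pie among them - to exhibit a dominating majority at every step.  The particular deals differ from the ones in the story above; the point is only that the construction never runs out.

```python
# For any division of a pie of size 1 among 10 selfish voters, build a new
# 6-member majority whose members each strictly gain, illustrating that no
# division is stable under majority vote.

def dominate(shares):
    """Return a dominating allocation backed by a new 6-member majority."""
    n = len(shares)
    majority = sorted(range(n), key=lambda i: shares[i])[: n // 2 + 1]  # the 6 poorest agents
    freed = 1.0 - sum(shares[i] for i in majority)                      # pie taken from the outsiders
    return [shares[i] + freed / len(majority) if i in majority else 0.0
            for i in range(n)]

shares = [1 / 6] * 6 + [0.0] * 4   # the initial six-way split described above
for _ in range(4):
    print([round(s, 3) for s in shares])
    shares = dominate(shares)
```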

\n

(Robin Hanson actually used this to suggest that if you set up a Constitution which governs a society of humans and AIs, the AIs will be unable to conspire among themselves to change the constitution and leave the humans out in the cold, because then the new compact would be dominated by yet other compacts and there would be chaos, and therefore any constitution stays in place forever.  Or something along those lines.  Needless to say, I do not intend to rely on such, but it would be nice to have a formal theory in hand which shows how ideal reflectively consistent decision agents will act in such cases (so we can prove they'll shed the old \"constitution\" like used snakeskin.))

\n

Here's yet another problem whose proper formulation I'm still not sure of, and it runs as follows.  First, consider the Prisoner's Dilemma.  Informally, two timeless decision agents with common knowledge of the other's timeless decision agency, but no way to communicate or make binding commitments, will both Cooperate because they know that the other agent is in a similar epistemic state, running a similar decision algorithm, and will end up doing the same thing that they themselves do.  In general, on the True Prisoner's Dilemma, facing an opponent who can accurately predict your own decisions, you want to cooperate only if the other agent will cooperate if and only if they predict that you will cooperate.  And the other agent is reasoning similarly:  They want to cooperate only if you will cooperate if and only if you accurately predict that they will cooperate.

\n

But there's actually an infinite regress here which is being glossed over - you won't cooperate just because you predict that they will cooperate, you will only cooperate if you predict they will cooperate if and only if you cooperate.  So the other agent needs to cooperate if they predict that you will cooperate if you predict that they will cooperate... (...only if they predict that you will cooperate, etcetera).

\n

On the Prisoner's Dilemma in particular, this infinite regress can be cut short by expecting that the other agent is doing symmetrical reasoning on a symmetrical problem and will come to a symmetrical conclusion, so that you can expect their action to be the symmetrical analogue of your own - in which case (C, C) is preferable to (D, D).  But what if you're facing a more general decision problem, with many agents having asymmetrical choices, and everyone wants to have their decisions depend on how they predict that other agents' decisions depend on their own predicted decisions?  Is there a general way of resolving the regress?
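
\n

A trivial way to see how the symmetry assumption cuts the regress: once I can treat the other agent's move as guaranteed to mirror mine, all that is left is a comparison of the diagonal outcomes.  The payoff numbers below are the usual illustrative ones, not anything canonical.

```python
# If (by symmetry) the other agent's choice is guaranteed to mirror mine, the
# infinite regress collapses into comparing the two diagonal outcomes.

payoff = {            # (my move, their move) -> my payoff; standard illustrative values
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

best_if_mirrored = max(["C", "D"], key=lambda move: payoff[(move, move)])
print(best_if_mirrored)   # "C", since (C, C) beats (D, D)
```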

\n

On Parfit's Hitchhiker and Newcomb's Problem, we're told how the other behaves as a direct function of our own predicted decision - Omega rewards you if you (are predicted to) one-box, the driver in Parfit's Hitchhiker saves you if you (are predicted to) pay $100 on reaching the city.  My timeless decision theory only functions in cases where the other agents' decisions can be viewed as functions of one argument, that argument being your own choice in that particular case - either by specification (as in Newcomb's Problem) or by symmetry (as in the Prisoner's Dilemma).  If their decision is allowed to depend on how your decision depends on their decision - like saying, \"I'll cooperate, not 'if the other agent cooperates', but only if the other agent cooperates if and only if I cooperate - if I predict the other agent to cooperate unconditionally, then I'll just defect\" - then in general I do not know how to resolve the resulting infinite regress of conditionality, except in the special case of predictable symmetry.

\n

You perceive that there is a definite note of \"timelessness\" in all these problems.

\n

Any offered solution may assume that a timeless decision theory for direct cases already exists - that is, if you can reduce the problem to one of \"I can predict that if (the other agent predicts) I choose strategy X, then the other agent will implement strategy Y, and my expected payoff is Z\", then I already have a reflectively consistent solution which this margin is unfortunately too small to contain.

\n

(In case you're wondering, I'm writing this up because one of the SIAI Summer Project people asked if there was any Friendly AI problem that could be modularized and handed off and potentially written up afterward, and the answer to this is almost always \"No\", but this is actually the one exception that I can think of.  (Anyone actually taking a shot at this should probably familiarize themselves with the existing literature on Newcomblike problems - the edited volume \"Paradoxes of Rationality and Cooperation\" should be a sufficient start (and I believe there's a copy at the SIAI Summer Project house.)))

" } }, { "_id": "gsL6CLqjujPNSLL2o", "title": "Sayeth the Girl", "pageUrl": "https://www.lesswrong.com/posts/gsL6CLqjujPNSLL2o/sayeth-the-girl", "postedAt": "2009-07-19T22:24:32.947Z", "baseScore": 76, "voteCount": 173, "commentCount": 504, "url": null, "contents": { "documentId": "gsL6CLqjujPNSLL2o", "html": "

Disclaimer: If you are prone to dismissing women's complaints of gender-related problems as the women being whiny, emotionally unstable girls who see sexism where there is none, this post is unlikely to interest you.

\n

For your convenience, links to followup posts: Roko says; orthonormal says; Eliezer says; Yvain says; Wei_Dai says

\n

As far as I can tell, I am the most active female poster on Less Wrong.  (AnnaSalamon has higher karma than I, but she hasn't commented on anything for two months now.)  There are not many of us.  This is usually immaterial.  Heck, sometimes people don't even notice in spite of my girly username, my self-introduction, and the fact that I'm now apparently the feminism police of Less Wrong.

\n

My life is not about being a girl.  In fact, I'm less preoccupied with feminism and women's special interest issues than most of the women I know, and some of the men.  It's not my pet topic.  I do not focus on feminist philosophy in school.  I took an \"Early Modern Women Philosophers\" course because I needed the history credit, had room for a suitable class in a semester when one was offered, and heard the teacher was nice, and I was pretty bored.  I wound up doing my midterm paper on Malebranche in that class because we'd covered him to give context to Mary Astell, and he was more interesting than she was.  I didn't vote for Hillary Clinton in the primary.  Given the choice, I have lots of things I'd rather be doing than ferreting out hidden or less-than-hidden sexism on one of my favorite websites.

\n

Unfortunately, nobody else seems to want to do it either, and I'm not content to leave it undone.  I suppose I could abandon the site and leave it even more masculine so the guys could all talk in their own language, unimpeded by stupid chicks being stupidly offended by completely unproblematic things like objectification and just plain jerkitude.  I would almost certainly have vacated the site already if feminism were my pet issue, or if I were more easily offended.  (In general, I'm very hard to offend.  The fact that people here have succeeded in doing so anyway without even, apparently, going out of their way to do it should be a great big red flag that something's up.)  If you're wondering why half of the potential audience of the site seems to be conspicuously not here, this may have something to do with it.

\n

\n

So can I get some help?  Some lovely people have thrown in their support, but usually after I or, more rarely, someone else sounds the alarm, and usually without much persistence or apparent investment.  There is still conspicuous karmic support for some comments that perpetuate the problems, which does nothing to disincentivize being piggish around here - some people seem to earnestly care about the problem, but this isn't enforced by the community at large, it's just a preexisting disposition (near as I can tell).

\n

I would like help reducing the incidence of:

\n\n

We could use more of the following:

\n\n

Thank you for your attention and, hopefully, your assistance.

" } }, { "_id": "qK2bro89zYwXii6gM", "title": "Article upvoting", "pageUrl": "https://www.lesswrong.com/posts/qK2bro89zYwXii6gM/article-upvoting", "postedAt": "2009-07-19T17:03:06.180Z", "baseScore": 18, "voteCount": 17, "commentCount": 16, "url": null, "contents": { "documentId": "qK2bro89zYwXii6gM", "html": "

Except in rare cases (like Wei Dai's Fair Division of Black-Hole Negentropy) I'm still using article upvotes to partially determine whether to promote articles to the front page - some informal mixture of \"number of upvotes\" + \"editor's judgment\".  I mention this because while comment voting is still healthy, the amount of article voting seems to be dropping off.  As of now I'm still drawing the inference that no one thinks \"Are You Anosognosic?\" worthy of promotion, or wants to see similar articles from me in the future - since other articles have at least gotten more votes than 0.  But as the amount of article voting diminishes, it becomes harder to trust such inferences.  Maybe people liked that article (or others I haven't promoted) and just didn't bother to upvote.

\n

I'm posting this observation just in case people figure that upvoting articles doesn't make a difference.  It does.  It also encourages authors to write similar posts in the future, or alternatively not.

" } }, { "_id": "xWfAvTmCBtoqhHR7Q", "title": "Are You Anosognosic?", "pageUrl": "https://www.lesswrong.com/posts/xWfAvTmCBtoqhHR7Q/are-you-anosognosic", "postedAt": "2009-07-19T04:35:05.961Z", "baseScore": 20, "voteCount": 27, "commentCount": 67, "url": null, "contents": { "documentId": "xWfAvTmCBtoqhHR7Q", "html": "

Followup to: The Strangest Thing An AI Could Tell You

\n

Brain damage patients with anosognosia are incapable of considering, noticing, admitting, or realizing even after being argued with, that their left arm, left leg, or left side of the body, is paralyzed.  Again I'll quote Yvain's summary:

\n
\n

After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient \"turned her head and searched in a bemused way over her left shoulder\".

\n
\n

A brief search didn't turn up a base-rate frequency in the population for left-arm paralysis with anosognosia, but let's say the base rate is 1 in 10,000,000 individuals (so around 670 individuals worldwide).

\n

Supposing this to be the prior, what is your estimated probability that your left arm is currently paralyzed?
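
\n

For what it is worth, here is the naive Bayes calculation under one strong simplifying assumption: that anosognosia makes your introspective report (\"my arm moves fine\") exactly as likely as a healthy person's, so the report carries no evidence either way.  The base rate is the 1-in-10,000,000 guess from above.

```python
# Posterior probability that your left arm is paralyzed, assuming the guessed
# base rate above and that anosognosia makes your introspective report
# ("my arm moves fine") exactly as likely as it is for a healthy person.

prior = 1e-7                     # guessed base rate of left-arm paralysis with anosognosia
p_report_if_paralyzed = 1.0      # an anosognosic confabulates a normal-seeming report
p_report_if_healthy = 1.0        # a healthy person also reports a working arm

posterior = (p_report_if_paralyzed * prior) / (
    p_report_if_paralyzed * prior + p_report_if_healthy * (1 - prior)
)
print(posterior)   # ~1e-7: under this assumption, introspection cannot move you off the base rate
```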

\n

Added:  This interests me because it seems to be a special case of the same general issue discussed in The Modesty Argument and Robin's reply Sleepy Fools - when pathological minds roughly similar to yours update based on fabricated evidence to conclude they are not pathological, under what circumstances can you update on different-seeming evidence to conclude that you are not pathological?

" } }, { "_id": "SZkdkbWemx8kM5tDY", "title": "Zwicky's Trifecta of Illusions", "pageUrl": "https://www.lesswrong.com/posts/SZkdkbWemx8kM5tDY/zwicky-s-trifecta-of-illusions", "postedAt": "2009-07-17T16:59:41.901Z", "baseScore": 23, "voteCount": 19, "commentCount": 27, "url": null, "contents": { "documentId": "SZkdkbWemx8kM5tDY", "html": "

Linguist Arnold Zwicky has named three linguistic 'illusions' which seem relevant to cognitive bias. They are:

\n
    \n
  1. Frequency Illusion - Once you've noticed a phenomenon, it seems to happen a lot.
  2. Recency Illusion - The belief that something is a recent phenomenon, when it has actually existed a long time.
  3. Adolescent Illusion - The belief that adolescents are the cause of undesirable language trends.
\n

Zwicky talks about them here, and in not so many words links them to the standard bias of selective perception.

\n

As an example, here is an excerpt via Jerz's Literacy Weblog (originally via David Crystal), regarding text messages:

\n
\n
\n
    \n
  • Text messages aren't full of abbreviations - typically less than ten percent of the words use them. [Frequency Illusion]
  • These abbreviations aren't a new language - they've been around for decades. [Recency Illusion]
  • They aren't just used by kids - adults of all ages and institutions are the leading texters these days. [Adolescent Illusion]
\n
\n
\n

It is my conjecture that these illusions are notable in areas other than linguistics. For example, history is rife with claims that the younger generation is corrupt, and such speakers are not merely referring to their use of language. Could this be the adolescent illusion in action?

\n

So, are these notable biases to watch out for, or are they merely obvious instances of standard biases?

" } }, { "_id": "57YjxPCRfZpXNJw49", "title": "The Popularization Bias", "pageUrl": "https://www.lesswrong.com/posts/57YjxPCRfZpXNJw49/the-popularization-bias", "postedAt": "2009-07-17T15:43:30.338Z", "baseScore": 30, "voteCount": 25, "commentCount": 54, "url": null, "contents": { "documentId": "57YjxPCRfZpXNJw49", "html": "

I noticed that most recommendations in the recent recommended readings thread consist of either fiction or popularizations of specific scientific disciplines. This introduces a potential bias: aspiring rationalists may never learn about some fields or ideas that are important for the art of rationality, just because they've never been popularized.

\n

In my recent post on the fair division of black-hole negentropy, I tried to introduce two such ideas/fields (which may be one too many for a single post :). One is that black holes have entropy quadratic in mass, and therefore are ideal entropy dumps (or equivalently, negentropy mines). This is a well-known result in thermodynamics, plus an obvious application of it. Some have complained that the idea is too sci-fi, but actually the opposite is true. Unlike other perhaps equally obvious futuristic ideas such as cryonics, AI and the Singularity, I've never read or watched a piece of science fiction that explored this one. (BTW, in case it's not clear why black-hole negentropy is important for rationality, it implies that value probably scales superlinearly with material and that huge gains from cooperation can be directly derived from the fundamental laws of physics.)

\n

Similarly, there are many popularizations of topics such as the Prisoner's Dilemma and the Nash Equilibrium in non-cooperative game theory (and even a blockbuster movie about John Nash!), but I'm not aware of any for cooperative game theory.

\n

Much of Less Wrong, and Overcoming Bias before it, can be seen as an attempt to correct this bias. Eliezer's posts have provided fictional treatments or popular accounts of probability theory, decision theory, MWI, algorithmic information theory, Bayesian networks, and various ethical theories, to name a few, and others have continued the tradition to some extent. But since popularization and writing fiction are hard, and not many people have both the skills and the motivation to do them, I wonder if there are still other important ideas/fields that most of us don't know about yet.

\n

So here's my request: if you know of such a field or idea, just name it in a comment and give a reference for it, and maybe say a few words about why it's important, if that's not obvious. Some of us may be motivated to learn about it for whatever reason, even from a textbook or academic article, and may eventually produce a popular account for it.

\n

 

" } }, { "_id": "DYWXntS3ybp8x3cKq", "title": "Causes of disagreements", "pageUrl": "https://www.lesswrong.com/posts/DYWXntS3ybp8x3cKq/causes-of-disagreements", "postedAt": "2009-07-16T21:51:57.422Z", "baseScore": 32, "voteCount": 30, "commentCount": 20, "url": null, "contents": { "documentId": "DYWXntS3ybp8x3cKq", "html": "

You have a disagreement before you. How do you handle it?

Causes of fake disagreements:

Is the disagreement real? The trivial case is an apparent disagreement occurring over a noisy or low-information channel. Internet chat is especially liable to fail this way because of the lack of tone, body language, and relative location cues. People can also disagree through the use of differing definitions with corresponding denotations and connotations. Fortunately, once recognized, this cause of disagreement rarely produces problems; the topic at issue is rarely the definitions themselves. If there is a game-theoretic reason to do so, the agents may also give the appearance of disagreement even though they might well agree in private. The agents could also disagree if they are victims of a man-in-the-middle attack, where someone is intercepting and altering the messages passed between the two parties. Finally, the agents could disagree simply because they are in different contexts. Is the sun yellow, I ask? Yes, say you. No, say the aliens at Eta Carinae.

Causes of disagreements about predictions:

  Evidence
Assuming the disagreement is real what does that give us? Most commonly the disagreement is about the facts that predicate our actions. To handle these we must first consider our relationship to the other person and how they think (a la superrationality); observations made by others may not be given the same weight we would give those observations if we had made them ourselves. After considering this we must then merge their evidence with our own in a controlled way. With people this gets a bit tricky. Rarely do people give us information we can handle in a cleanly Bayesian way (a la Aumann's agreement theorem). Instead we must merge our explicit evidence sets along with vague abstracted probabilistic intuitions that are half speculation and half partially forgotten memories.
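
For contrast, here is the clean case we rarely get in conversation: two observers whose reports are conditionally independent given the hypothesis, combined as likelihood ratios in odds form. The numbers are made up purely for illustration.

```python
# Combining two observers' evidence in odds form, assuming their reports are
# conditionally independent given the hypothesis - the tidy case that real
# conversations rarely hand us.

def combine(prior, likelihood_ratios):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:     # each observer contributes P(report | H) / P(report | not H)
        odds *= lr
    return odds / (1 + odds)

# Illustrative numbers only: my evidence favors H at 3:1, yours at 4:1.
print(combine(prior=0.5, likelihood_ratios=[3.0, 4.0]))   # ~0.923
```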

  Priors
If we still have a disagreement after considering the evidence, what now? The agents could have \"started\" at different locations in prior or induction space. While it is true that a person's \"starting\" point and what evidence they've seen can be conflated, it is also possible that they really did start at different locations.

  Resource limitations
The disagreement could also be caused by resource limitations and implementation details. Cognition could have sensitive dependence on initial conditions. For instance, when answering the question \"is this red?\", slight variations in lighting conditions can make people respond differently on boundary cases. This illustrates both sensitive dependence on initial conditions and the fact that some types of information (exactly what you saw) just cannot be communicated effectively. Our mental processes are also inherently noisy, leading to differing errors in processing the evidence and increasing the need to rehash an argument multiple times. We suffer from computational space and time limitations, making computational approximations necessary. We learn these approximations slowly across varying situations, and so may disagree with someone even if the prediction-relevant evidence is on hand; the other \"evidence\" used to develop these approximations may vary and inadvertently leak into our answers. Our approximation methods may differ. Finally, it takes time to integrate all of the evidence at hand, and people differ in the amount of time and resources they have to do so.

  Systematic errors
Sadly, it is also possible that one or the other party could simply have a deeply flawed prediction system. They could make systematic errors and have broken or missing corrective feedback loops. They could have disruptive feedback loops that drain the truth from predictions. Their methods of prediction may invalidly vary with what is being considered; their thoughts may shy away from subjects such as death or disease or flaws in their favorite theory and their thoughts may be attracted to what will happen after they win the lottery. Irrationality and biases; emotions and inability to abstract. Or even worse, how is it possible to eliminate a disagreement with someone who disagrees with himself and presents an inconsistent opinion?

Other causes of disagreement:
  Goals
I say that dogs are interesting, you say they are boring, and yet we both agree on our predictions. How is this possible? This type of disagreement falls under disagreement about what utility function to apply, and between utilitarian goal-preserving agents it is irresolvable in a direct manner; however, indirect ways such as trading boring dogs for interesting cats work much of the time.  Plus, we are not utilitarian agents (e.g. we have circular preferences); perhaps there are strategies available to us for resolving conflicts of this form that are not available to utilitarian ones?

  Epiphenomenal
Lastly, it is possible for agents to agree on all observable predictions and yet disagree on unobservable predictions. Predictions without consequences aren't predictions at all; how could they be? If the disagreement still exists after realizing that there are no observable consequences, look elsewhere for the cause; it cannot be here. Why disagree over things of no value? The disagreement must be caused by something; look there, not here.


How to use this taxonomy:
I tried to list the above sections in the order one should check for each type of cause if you were to use the sections as a decision tree (ease of checking and fixing, fit to definition, probability of occurrence). This taxonomy is symmetric between the disagreeing parties and many of the sections lend themselves naturally to looping; merging evidence piece by piece, refining calculations iteration by iteration, .... This taxonomy can also be applied recursively to meta disagreements and disagreements found in the process of analyzing the original one. What are the termination conditions for analyzing a disagreement? They come in five forms: complete agreement, satisfying agreement, impossible to agree, acknowledgment of conflict, and dissolving the question. Being a third party to a disagreement changes the analysis only in that you are no longer doing the symmetric self analysis but rather looking in upon a disagreement with that additional distance that entails.



Many thanks to Eliezer Yudkowsky, Robin Hanson, and the LessWrong community for much thought provoking material.

(ps This is my first post and I would appreciate any feedback: what I did well, what I did badly, and what I can do to improve.)

Links:
1. http://lesswrong.com/lw/z/information_cascades/
2. http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/

" } }, { "_id": "32RBBkFSMtvXtQDe2", "title": "Absolute denial for atheists", "pageUrl": "https://www.lesswrong.com/posts/32RBBkFSMtvXtQDe2/absolute-denial-for-atheists", "postedAt": "2009-07-16T15:41:02.412Z", "baseScore": 52, "voteCount": 55, "commentCount": 606, "url": null, "contents": { "documentId": "32RBBkFSMtvXtQDe2", "html": "

This article is a deliberate meta-troll. To be successful I need your trolling cooperation. Now hear me out.

\n

In The Strangest Thing An AI Could Tell You Eliezer talks about anosognosics, who have one of their arms paralyzed and, most interestingly, are in absolute denial of this - in spite of overwhelming evidence that their arm is paralyzed, they will just come up with one new rationalization after another proving it's not.

\n

Doesn't it sound like someone else we know? Yes, religious people! In spite of heaps of empirical evidence against the existence of their particular flavour of the supernatural, the internal inconsistency of their beliefs, and perfectly plausible alternative explanations being well known, something between 90% and 98% of humans believe in the supernatural world, and are in a state of absolute denial not too dissimilar to that of anosognosics. Perhaps as many as billions of people in history have even been willing to die for their absurd beliefs.

\n

We are mostly atheists here - we happen not to share this particular delusion. But please consider an outside view for a moment - how likely is it that unlike almost everyone else we don't have any other such delusions, for which we're in absolute denial of truth in spite of mounting heaps of evidence?

\n

If the delusion is of a kind that all of us share, we won't be able to find it without building an AI. We might have some of those - it's not too unlikely, as we're a small and self-selected group.

\n

What I want you to do is try to trigger the absolute denial macro in your fellow rationalists! Is there anything that you consider proven beyond any possibility of doubt by both empirical evidence and pure logic, and yet saying it triggers an automatic stream of rationalizations in other people? Yes, I am pretty much asking you to troll, but it's a good kind of trolling, and I cannot think of any other way to find our delusions.

" } }, { "_id": "z3W8PRHJM9ZanTDcx", "title": "Fair Division of Black-Hole Negentropy: an Introduction to Cooperative Game Theory", "pageUrl": "https://www.lesswrong.com/posts/z3W8PRHJM9ZanTDcx/fair-division-of-black-hole-negentropy-an-introduction-to", "postedAt": "2009-07-16T04:17:44.081Z", "baseScore": 62, "voteCount": 52, "commentCount": 37, "url": null, "contents": { "documentId": "z3W8PRHJM9ZanTDcx", "html": "

Non-cooperative game theory, as exemplified by the Prisoner’s Dilemma and commonly referred to by just \"game theory\", is well known in this community. But cooperative game theory seems to be much less well known.  Personally, I had barely heard of it until a few weeks ago. Here’s my attempt to give a taste of what cooperative game theory is about, so you can decide whether it might be worth your while to learn more about it.

\n

The example I’ll use is the fair division of black-hole negentropy. It seems likely that for an advanced civilization, the main constraining resource in the universe is negentropy. Every useful activity increases entropy, and since entropy of the universe as a whole never decreases, the excess entropy produced by civilization has to be dumped somewhere. A black hole is the only physical system we know whose entropy grows quadratically with its mass, which makes it ideal as an entropy dump. (See http://weidai.com/black-holes.txt where I go into a bit more detail about this idea.)

\n

Let’s say there is a civilization consisting of a number of individuals, each the owner of some matter with mass mᵢ. They know that their civilization can’t produce more than (∑ mᵢ)² bits of total entropy over its entire history, and the only way to reach that maximum is for every individual to cooperate and eventually contribute his or her matter into a common black hole. A natural question arises: what is a fair division of the (∑ mᵢ)² bits of negentropy among the individual matter owners?

\n

Fortunately, Cooperative Game Theory provides a solution, known as the Shapley Value. There are other proposed solutions, but the Shapley Value is well accepted due to its desirable properties such as “symmetry” and “additivity”. Instead of going into the theory, I’ll just show you how it works. The idea is, we take a sequence of players, and consider the marginal contribution of each player to the total value as he or she joins the coalition in that sequence. Each player is given an allocation equal to his or her average marginal contribution over all possible sequences.

\n

So in the black-hole negentropy game, suppose there are two players, Alice and Bob, with masses A and B. There are then two possible sequences, {Alice, Bob} and {Bob, Alice}. In {Alice, Bob}, Alice’s marginal contribution is just A². When Bob joins, the total value becomes (A+B)², so his marginal contribution is (A+B)²-A² = B²+2AB. Similarly, in {Bob, Alice}, Bob’s MC is B², and Alice’s is A²+2AB. Alice’s average marginal contribution, and hence her allocation, is therefore A²+AB, and Bob’s is B²+AB.

\n

What happens when there are N players? The math is not hard to work out, and the result is that player i gets an allocation equal to mᵢ (m₁ + m₂ + … + mₙ). Seems fair, right?
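
\n

If you want to check the closed form yourself, here is a brute-force sketch that enumerates every join order for the quadratic characteristic function v(S) = (sum of the masses in S)². It is exponential in the number of players, so it is only meant for small examples.

```python
from itertools import permutations
from math import factorial

# Brute-force Shapley values for the black-hole negentropy game, where a
# coalition can extract (sum of its masses)^2 bits of negentropy.

def v(masses_in_coalition):
    return sum(masses_in_coalition) ** 2   # entropy capacity is quadratic in total mass

def shapley(masses):
    n = len(masses)
    totals = [0.0] * n
    for order in permutations(range(n)):   # every join order is equally likely
        joined, prev = [], 0.0
        for player in order:
            joined.append(masses[player])
            value = v(joined)
            totals[player] += value - prev # marginal contribution in this order
            prev = value
    return [t / factorial(n) for t in totals]

masses = [2.0, 3.0, 5.0]
print(shapley(masses))                     # [20.0, 30.0, 50.0]
print([m * sum(masses) for m in masses])   # closed form m_i * (m_1 + ... + m_N): same numbers
```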

\n

ETA: At this point, the interested reader can pursue two paths to additional knowledge. You can learn more about the rest of cooperative game theory, or compare other approaches to the problem of fair division, for example welfarist and voting-based. Unfortunately, I don't know of a good online resource or textbook for systematically learning cooperative game theory. If anyone does, please leave a comment. For the latter path, a good book is Hervé Moulin's Fair Division and Collective Welfare, which includes a detailed discussion of the Shapley Value in chapter 5.

\n

ETA2: I found that a website of Martin J. Osborne and Ariel Rubinstein offers their game theory textbook for free (after registration), and it contains several chapters on cooperative game theory. The site also has several other books that might be of relevance to this community. A more comprehensive textbook on cooperative game theory seems to be Introduction to the Theory of Cooperative Games. A good reference is Handbook of Game Theory with Economic Applications.

" } }, { "_id": "wMwuXbXeq8YZhWFHg", "title": "The Dirt on Depression", "pageUrl": "https://www.lesswrong.com/posts/wMwuXbXeq8YZhWFHg/the-dirt-on-depression", "postedAt": "2009-07-15T17:58:44.128Z", "baseScore": 5, "voteCount": 18, "commentCount": 18, "url": null, "contents": { "documentId": "wMwuXbXeq8YZhWFHg", "html": "

(From the \"humans are crazy\" and \"truth is stranger than fiction\" departments...)

\n

Want to be happy?  Try eating dirt... or at least dirty plants.

\n

Seriously.

\n

From an article in Discover magazine, \"Is Dirt The New Prozac?\":

\n
\n

The results so far suggest that simply inhaling M. vaccae—you get a dose just by taking a walk in the wild or rooting around in the garden—could help elicit a jolly state of mind. “You can also ingest mycobacteria either through water sources or through eating plants—lettuce that you pick from the garden, or carrots,” Lowry says.

\n

Graham Rook, an immunologist at University College London and a coauthor of the paper, adds that depression itself may be in part an inflammatory disorder. By triggering the production of immune cells that curb the inflammatory reaction typical of allergies, M. vaccae may ease that inflammation and hence depression. Therapy with M. vaccae—or with drugs based on the bacterium’s molecular components—might someday be used to treat depression. “It’s not clear to me whether the way ahead will be drugs that circumvent the use of these bugs,” Rook says, “or whether it will be easier to say, ‘The hell with it, let’s use the bugs.’”

\n
\n

Given the way the industry works, we'll probably either see drugs, or somebody will patent the bacteria.  But that's sort of secondary.  The real point is that to the extent our current environment doesn't match our ancestral one, there are likely to be \"bugs\", no pun intended.

\n

(The original study: “Identification of an Immune-Responsive Mesolimbocortical Serotonergic System: Potential Role in Regulation of Emotional Behavior,” by Christopher Lowry et al., published online on March 28 in Neuroscience.)

" } }, { "_id": "yKfmsyKZhzAGZBAwt", "title": "\"Sex Is Always Well Worth Its Two-Fold Cost\"", "pageUrl": "https://www.lesswrong.com/posts/yKfmsyKZhzAGZBAwt/sex-is-always-well-worth-its-two-fold-cost", "postedAt": "2009-07-15T09:52:57.503Z", "baseScore": 8, "voteCount": 7, "commentCount": 34, "url": null, "contents": { "documentId": "yKfmsyKZhzAGZBAwt", "html": "

This might be of interest to the evo bio and game theory wannabes here: \"Sex Is Always Well Worth Its Two-Fold Cost\" by Alexander Feigel, Avraham Englander and Assaf Engel.

\n

Abstract:

\n
\n

Sex is considered as an evolutionary paradox, since its positive contribution to Darwinian fitness remains unverified for some species. Defenses against unpredictable threats (parasites, fluctuating environment and deleterious mutations) are indeed significantly improved by wider genetic variability and by positive epistasis gained by sexual reproduction. The corresponding evolutionary advantages, however, do not overcome universally the barrier of the two-fold cost for sharing half of one's offspring genome with another member of the population. Here we show that sexual reproduction emerges and is maintained even when its Darwinian fitness is twice as low as the fitness of asexuals. We also show that more than two sexes (inheritance of genetic material from three or even more parents) are always evolutionary unstable. Our approach generalizes the evolutionary game theory to analyze species whose members are able to sense the sexual state of their conspecifics and to adapt their own sex consequently, either by switching or by taxis towards the highest concentration of the complementary sex. The widespread emergence and maintenance of sex follows therefore from its co-evolution with the even more widespread environmental sensing abilities.

\n
\n

I'm currently trying to parse the article, and on first reading could only see a disguised form of the old familiar argument about the stability of sex ratios. It still doesn't seem to answer why females don't switch to parthenogenesis and block all male advances. But maybe you can detect something I missed?

" } }, { "_id": "t2NN6JwMFaqANuLqH", "title": "The Strangest Thing An AI Could Tell You", "pageUrl": "https://www.lesswrong.com/posts/t2NN6JwMFaqANuLqH/the-strangest-thing-an-ai-could-tell-you", "postedAt": "2009-07-15T02:27:37.651Z", "baseScore": 137, "voteCount": 122, "commentCount": 616, "url": null, "contents": { "documentId": "t2NN6JwMFaqANuLqH", "html": "

Human beings are all crazy.  And if you tap on our brains just a little, we get so crazy that even other humans notice.  Anosognosics are one of my favorite examples of this; people with right-hemisphere damage whose left arms become paralyzed, and who deny that their left arms are paralyzed, coming up with excuses whenever they're asked why they can't move their arms.

\n

A truly wonderful form of brain damage - it disables your ability to notice or accept the brain damage.  If you're told outright that your arm is paralyzed, you'll deny it.  All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight.  As Yvain summarized:

\n
\n

After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient \"turned her head and searched in a bemused way over her left shoulder\".

\n
\n

I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis.  That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability.  Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity - for example, when people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.

\n

And it really makes you wonder...

\n

...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact?  As blatant, perhaps, as our left arms being paralyzed?  Every time this fact intrudes into our universe, we come up with some ridiculous excuse to dismiss it - as ridiculous as \"It's my daughter's arm\" - only there's no sane doctor watching to pursue the argument any further.  (Would we all come up with the same excuse?)

\n

If the \"absolute denial macro\" is that simple, and invoked that easily...

\n

Now, suppose you built an AI.  You wrote the source code yourself, and so far as you can tell by inspecting the AI's thought processes, it has no equivalent of the \"absolute denial macro\" - there's no point damage that could inflict on it the equivalent of anosognosia.  It has redundant differently-architected systems, defending in depth against cognitive errors.  If one system makes a mistake, two others will catch it.  The AI has no functionality at all for deliberate rationalization, let alone the doublethink and denial-of-denial that characterizes anosognosics or humans thinking about politics.  Inspecting the AI's thought processes seems to show that, in accordance with your design, the AI has no intention to deceive you, and an explicit goal of telling you the truth.  And in your experience so far, the AI has been, inhumanly, well-calibrated; the AI has assigned 99% certainty on a couple of hundred occasions, and been wrong exactly twice that you know of.
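
\n

As a rough check on what \"well-calibrated\" means here, suppose \"a couple of hundred\" is 200 independent 99%-confidence claims (an assumption): a perfectly calibrated agent would be expected to be wrong about twice, and two errors is in fact the single most likely count under a simple binomial model.

```python
from math import comb

# How surprising are 2 errors in n = 200 independent 99%-confidence claims?
# (n = 200 is an assumption standing in for "a couple of hundred".)

n, p_err = 200, 0.01
expected_errors = n * p_err
p_exactly_two = comb(n, 2) * p_err**2 * (1 - p_err)**(n - 2)

print(expected_errors)          # 2.0 - exactly what calibration predicts on average
print(round(p_exactly_two, 3))  # ~0.272, and 2 is the most likely error count
```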

\n

Arguably, you now have far better reason to trust what the AI says to you, than to trust your own thoughts.

\n

And now the AI tells you that it's 99.9% sure - having seen it with its own cameras, and confirmed from a hundred other sources - even though (it thinks) the human brain is built to invoke the absolute denial macro on it - that...

\n

...what?

\n

What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?

\n

(Some of my own answers appear in the comments.)

" } }, { "_id": "2Ef3a3FNHXjbuTwsn", "title": " How likely is a failure of nuclear deterrence?", "pageUrl": "https://www.lesswrong.com/posts/2Ef3a3FNHXjbuTwsn/how-likely-is-a-failure-of-nuclear-deterrence", "postedAt": "2009-07-15T00:01:28.640Z", "baseScore": 10, "voteCount": 16, "commentCount": 1, "url": null, "contents": { "documentId": "2Ef3a3FNHXjbuTwsn", "html": "
\n

Last month I asked Robert McNamara, the secretary of defense during the Kennedy and Johnson administrations, what he believed back in the 1960s was the status of technical locks on the Minuteman intercontinental missiles. ... McNamara replied ... that he personally saw to it that these [PAL's] ... were installed on the Minuteman force, and that he regarded them as essential to strict central control and preventing unauthorized launch. ... What I then told McNamara about his vitally important locks elicited this response: “I am shocked, absolutely shocked and outraged. Who the hell authorized that?” What he had just learned from me was that the locks had been installed, but everyone knew the combination. The Strategic Air Command (SAC) in Omaha quietly decided to set the “locks” to all zeros in order to circumvent this safeguard. During the early to mid-1970s, during my stint as a Minuteman launch officer, they still had not been changed. Our launch checklist in fact instructed us, the firing crew, to double-check the locking panel in our underground launch bunker to ensure that no digits other than zero had been inadvertently dialed into the panel. SAC remained far less concerned about unauthorized launches than about the potential of these safeguards to interfere with the implementation of wartime launch orders. And so the “secret unlock code” during the height of the nuclear crises of the Cold War remained constant at 00000000.

\n
\n
\n

Training exercises can be mistaken for the real thing. In 1979, a test tape, simulating a Russian attack was mistakenly fed into a NORAD computer connected to the operational missile alert system, resulting in an alert and the launching of American aircraft [Borning 1988].

\n
\n

Read the rest here.

\n


\n


" } }, { "_id": "Ge7a2d9rhnqyiEkGY", "title": "Good Quality Heuristics", "pageUrl": "https://www.lesswrong.com/posts/Ge7a2d9rhnqyiEkGY/good-quality-heuristics", "postedAt": "2009-07-14T09:53:11.871Z", "baseScore": 12, "voteCount": 26, "commentCount": 113, "url": null, "contents": { "documentId": "Ge7a2d9rhnqyiEkGY", "html": "

We use heuristics when we don't have the time to think more, which is almost all the time. So why don't we compile a big list of good quality heuristics that we can trust? (Insert eloquent analogy with mathematical theorems and proofs.) Here are some heuristics to kick things off:

\n

Make important decisions in a quiet, featureless room. [1]

\n

Apply deodorant before going to bed rather than any other time. [1]

\n

Avoid counterfactuals and thought experiments when talking to other people. [Because they don't happen in real life. Not in mine at least (anecdotal evidence). For example, with the trolley problem, I would not push the fat man because I'd be frozen in horror. But what if you wouldn't be? But I would! And all too often the teller of a counterfactual abuses it by crafting it so that the other person has to give either an inconsistent or unsavory answer. (This proof is a stub. You can improve it by commenting.)]

\n

If presented with a Monty Hall problem, switch. [1]
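
\n

A quick simulation under the standard rules - the host knows where the car is and always opens an unchosen door hiding a goat - backs this one up:

```python
import random

# Monte Carlo check of the Monty Hall heuristic under the standard rules:
# the host always opens a door you did not pick that hides a goat.

def play(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)    # ~0.667
print(sum(play(False) for _ in range(trials)) / trials)   # ~0.333
```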

\n

Sign up for cryonics. [There are so many. Which ones to link? Wait, didn't Eliezer promise us some cryonics articles here in LW?]

\n

In chit-chat, ask questions and avoid assertions. [How to Win Friends and Influence People by Dale Carnegie]

\n

When in doubt, think what your past and future selves would say. [1, also there was an LW article with the prince with multiple personality disorder chaining himself to his throne that I can't find. Also, I'm not sure if I should include this because it's almost Think More.]

\n

I urge you to comment on my heuristics and add your own. One heuristic per comment. Hopefully this takes off and turns into a series of wiki pages. Edit: We should concentrate on heuristics that save time, effort, and thought.

" } }, { "_id": "rGY5i4FGwutouMvWM", "title": "Our society lacks good self-preservation mechanisms", "pageUrl": "https://www.lesswrong.com/posts/rGY5i4FGwutouMvWM/our-society-lacks-good-self-preservation-mechanisms", "postedAt": "2009-07-12T09:26:23.365Z", "baseScore": 17, "voteCount": 23, "commentCount": 135, "url": null, "contents": { "documentId": "rGY5i4FGwutouMvWM", "html": "

The prospect of a dangerous collection of existential risks and risks of major civilization-level catastrophes in the 21st century, combined with a distinct lack of agencies whose job it is to mitigate such risks, indicates that the world might be in something of an emergency at the moment. Firstly, what do we mean by risks? Well, Bostrom has a paper on existential risks, and he lists the following risks as being \"most likely\":

\n\n

To which I would add various possibilities for major civilization-level disasters that aren't existential risks, such as milder versions of all of the above, or the following:

\n\n

This collection is daunting, especially given that the human race doesn't have any official agency dedicated to mitigating risks to its own medium- to long-term survival. We face a long list of challenges, and we aren't even formally trying to mitigate many of them in advance; in many past cases, mitigation of risks occurred on a last-minute, ad-hoc basis, such as individuals in the Cold War making the decision not to initiate a nuclear exchange, particularly in the Cuban missile crisis.

\n

So, a small group of people have realized that the likely outcome of a large and dangerous collection of risks combined with a haphazard, informal methodology for dealing with risks (driven by the efforts of individuals, charities and public opinion) is that one of these potential risks will actually be realized - killing many or all of us or radically reducing our quality of life. This coming disaster is ultimately not the result of any one particular risk, but the result of the lack of a powerful defence against risks.

\n

One could argue that I [and Bostrom, Rees, etc] are blowing the issue out of proportion. We have survived so far, right? (Wrong, actually - anthropic considerations indicate that survival so far is not evidence that we will survive for a lot longer, and technological progress indicates that risks in the future are worse than risks in the past). Major civilizational disasters have already happened many, many times over.

\n

Most ecosystems that ever existed were wiped out by natural means, almost all species that have ever existed have gone extinct, and without human intervention most existing ecosystems will probably be wiped out within a 100-million-year timescale. Most civilizations that ever existed collapsed. Some went really badly wrong, like communist Russia. Complex, homeostatic objects that don't have extremely effective self-preservation systems empirically tend to get wiped out by the churning of the universe.

\n

Our western civilization lacks an effective long-term (order of 50 years plus) self-preservation system. Hence we should reasonably expect to either build one, or get wiped out, because we observe that complex systems which seem similar to societies today - such as past societies - collapsed.

\n

And even though our society does have short-term survival mechanisms such as governments and philanthropists, they often behave in superbly irrational, myopic or late-responding ways. The response to the global warming problem (late, weak, still failing to overcome co-ordination problems) and the invasion of Iraq (plain irrational) are cases in point from recent history, and there are numerous examples from the past, such as close calls in the Cold War and the spectacular chain of failures that led from World War I to World War II and the rise of Hitler.

\n

This article could be summarized as follows:

\n

The systems we have for preserving the values and existence of our western society, and the human race as a whole are weak, and the challenges of the 21st-22nd century seem likely to overwhelm them.

\n

I originally wanted to write an article about ways to mitigate existential risks and major civilization-level catastrophes, but I decided to first establish that there are actually such things as serious existential risks and major civilization-level catastrophes, and that we haven't got them handled yet. My next post will be about ways to mitigate existential risks.

\n

 

" } }, { "_id": "Aty8uRt4j6v6aQzbp", "title": "Jul 12 Bay Area meetup - Hanson, Vassar, Yudkowsky", "pageUrl": "https://www.lesswrong.com/posts/Aty8uRt4j6v6aQzbp/jul-12-bay-area-meetup-hanson-vassar-yudkowsky", "postedAt": "2009-07-11T21:13:19.421Z", "baseScore": 10, "voteCount": 8, "commentCount": 27, "url": null, "contents": { "documentId": "Aty8uRt4j6v6aQzbp", "html": "

Just a reminder that the July 12th Overcoming Bias / Less Wrong meetup in Santa Clara, CA @7pm will feature Robin Hanson, Michael Vassar, and the Singularity Institute summer interns.  This meetup will also take place in a larger house - there should be plenty of room to mingle.

" } }, { "_id": "mp5Lhd8EiDwC6kBTW", "title": "Debate: Is short term planning in humans due to a short life or due to bias?", "pageUrl": "https://www.lesswrong.com/posts/mp5Lhd8EiDwC6kBTW/debate-is-short-term-planning-in-humans-due-to-a-short-life", "postedAt": "2009-07-11T16:34:36.230Z", "baseScore": 8, "voteCount": 8, "commentCount": 15, "url": null, "contents": { "documentId": "mp5Lhd8EiDwC6kBTW", "html": "

One of the proposed benefits of life extension is that it will help us plan for the long term: because we will actually be around in the future, we will be more likely to care about the long-term future of the world if we live longer.

\n

So is this true? Are we rational in this respect, or will the mind recoil from thinking on time scales longer than 40-60 years even when we are living hundreds of years, due to biases intrinsic to the mammalian brain?

\n

I don't have time to research this question right now, so I thought I would experiment by throwing out this question to lesswrong and see how people treat it.

" } }, { "_id": "QCgboeRgsjvtCK5Hb", "title": "Causation as Bias (sort of)", "pageUrl": "https://www.lesswrong.com/posts/QCgboeRgsjvtCK5Hb/causation-as-bias-sort-of", "postedAt": "2009-07-10T08:38:23.873Z", "baseScore": -13, "voteCount": 17, "commentCount": 92, "url": null, "contents": { "documentId": "QCgboeRgsjvtCK5Hb", "html": "

David Hume called causation the “cement of the universe”, and he was convinced that psychologically and in our practices, we can’t do without it.

\n

Yet he was famously sceptical of any attempt to analyze causation in terms of necessary connections. For him, causation can only be defined in terms of a constant conjunction in space and time, and that is, I would add, no causation at all, but correlation. For every two events that seem causally connected can also, and without loss of the phenomenon, be described as just the first event, followed by the second. It’s really “just one damn thing after another”. It seems to me we still cannot, will not and need not make sense of the notion of causation (virtually no progress has been made since Hume's time).

\n

There seems to be no need for another sort of connection besides the spatio-temporal one, nor do we perceive any. In philosophy, a Hume world is a possible world defined in this way. All the phenomena are the same, but no necessary connections hold between the supposed relata. Maybe such a world is best imagined as a Game of Life world, but without a fundamental level governed by laws and forces; or as a movie, made of frames that are not intrinsically connected to each other. So, however strong the psychological forces that drive humans to accept further mysterious connections: shouldn't we just stop worrying and accept living in a Hume world? Or are there actual arguments in favour of \"real\" causation?

\n


Yes. There's the problem of order. What accounts for all the order in the world? It is remarkably ordered. If no special connections hold between events, why isn't the world pure chaos? Or at least much more disordered? When two billiard balls collide, never does one turn into a pink elephant. To explain this, men came up with laws of nature (self-sustained or enforced by a higher being).

\n


So, there's the paradox: On the one hand, we have to postulate special connections to account for an orderly world like ours; on the other, we cannot give a proper account of these connections.

\n

 

\n

Inflationary cosmology to the rescue.

\n

I won't go into the details (but see the nontechnical explanation and some further philosophical implications here).

\n

Suffice it to say that

\n

1) inflationary cosmology is mainstream physics, and

\n

2) it postulates a spatially infinite universe in which every event with nonzero probability is realized infinitely many times.

\n

 

\n

How does this help to solve our paradox? The solution seems straightforward:

\n

In an infinite universe of the right kind, order can locally emerge out of random events.  Our universe is of the right kind.

\n

So, we can account for the order in our observed (local) part of the universe.

\n

Random events just happen, one after another; there is no need for mysterious causal connections. We throw them out but keep the order.

\n

Problem solved.

\n

 
Q: But if this is true, it’s the end of the world. Thinking, action, science, biases and many, many more concepts are causal ones. How can we do without them?

\n

A: Life is hard; get over it.

\n

Q: But the theory is untestable?!

\n

A: Falsificationism is dead; we have other evidence in its favour (see below).

\n

Q: But isn’t the theory self-defeating?

\n

A: It is certainly odd to have a theory informed by experiences and high-level physics that tells us that, strictly speaking, there are no experiences or sciences. But it doesn't seem incoherent to me to climb the ladder and then throw it away.

\n

And, looking at the bright side:

\n

In addition to being non-mysterious and conceptually sparse, this might allow us to solve some additional (would-be?) hard problems:

\n

qualia, clustering of tropes, time-travel paradoxes, indeterministic processes: all easy or trivial when a thoroughly indeterministic universe is considered.

\n

 

\n

So. What do you think – if you can?

" } }, { "_id": "P9r9fECxxnWW566oE", "title": "Formalized math: dream vs reality", "pageUrl": "https://www.lesswrong.com/posts/P9r9fECxxnWW566oE/formalized-math-dream-vs-reality", "postedAt": "2009-07-09T20:51:04.087Z", "baseScore": 19, "voteCount": 22, "commentCount": 10, "url": null, "contents": { "documentId": "P9r9fECxxnWW566oE", "html": "

(Disclaimer: this post is intended as an enthusiastic introduction to a topic I knew absolutely nothing about till yesterday. Correct me harshly as needed.)

\n

We programmers like to instantly pigeonhole problems as \"trivial\" and \"impossible\". The AI-Box has been proposed as a candidate for the simplest \"impossible\" problem. Today I'd like to talk about a genuinely hard problem that many people quickly file as \"trivial\": formalizing mathematics. Not as in, teach the computer to devise and prove novel conjectures. More like, type in a proof for some simple mathematical fact, e.g. the irrationality of the square root of 2, and get the computer to verify it.

\n

Now, if you're unfamiliar with the entire field, what's your best guess as to the length of the proof? I'll grant you the generous assumption that the verifier already has an extensive library of axioms, lemmas and theorems in elementary math, just not this particular fact. Can you do it in one line? Two lines? (Here's a one-liner for humans: when p/q is in lowest terms, p²/q² is also in lowest terms and hence cannot reduce to 2.)

\n

While knowledgeable readers chuckle to themselves and the rest frantically calibrate, I will take a moment to talk about the dream...

\n

Since the cutting edge of math will obviously always outrun any codification effort, why formalize proofs at all? Robert S. Boyer (of Boyer-Moore fame) spelled out his vision of the benefits in the QED Manifesto in 1993: briefly, a machine-checkable repository of humanity's mathematical knowledge could help math progress in all sorts of interesting ways, from educating kids to publishing novel results. It would be like a good twin of Cyc where the promises actually make sense.

\n

But there could be more than that. If you're a programmer like me, what word first comes to your mind upon imagining such a repository? Duh, \"refactoring\". Our industry has a long and successful history of using tools for automated refactorings, mechanically identifying opportunities to extract common code and then shove it around with guaranteed conservation of program behavior. Judging from our experiences in shortening large ad-hoc codebases, this could imply a radical simplification in all areas of math at once!

\n

Import Group Theory. Analyze. Duplicate Code Spotted: 119 Instances. Extract? Yes or No.

\n

...Or so goes the dream.

\n

You ready with that estimate? Here's a page comparing formal proofs that sqrt(2) is irrational for 17 different proof checkers. The typical short proof (e.g. Mizar) runs about a page long, but a lot of its complexity is hidden in innocent-looking \"by\" clauses or \"tactics\" that are, in effect, calls to a sophisticated formula-matching algorithm \"prove this from that\", so you can't exactly verify this kind of proof by hand the way the software does it. The more explicit proofs are many times longer. And yes, all of those were allowed to use the verifiers' standard libraries, weighing in at several megabytes in some cases.
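(For a modern taste of the same point - the statement can be a one-liner precisely because all of the work is hidden elsewhere - here is a sketch in Lean 4 with the Mathlib library, which is not one of the systems on that comparison page; the lemma name reflects my understanding of Mathlib and should be treated as an assumption.)

```lean
-- Sketch, assuming Lean 4 with Mathlib available. The statement fits on one
-- line only because the entire proof is buried inside the library lemma
-- `irrational_sqrt_two` -- exactly the kind of hidden complexity noted above.
import Mathlib

example : Irrational (Real.sqrt 2) := irrational_sqrt_two
```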

\n

Things aren't so gloomy. We haven't won, but we're making definite progress. We don't have Atiyah-Singer yet, but the Prime Number Theorem is done and so is the Jordan curve theorem, so the world's formalized math knowledge already contains proofs I can't recite offhand. Still, the situation looks as if a few wannabe AI researchers could substantially help humanity by putting their minds to the proof-encoding problem instead; it's obviously much easier than general AI, but lies in the same rough direction.

\n

In conclusion, ponder this: human beings have no evolutionary reason to be good at math. (As opposed to, say, hunting or deceiving each other.) Chances are that, on an absolute scale, we suck about as hard as it's possible to suck and still be called \"mathematicians\": what do you mean, you can't do 32-bit multiplication in your head? It's still quite conceivable that a relatively dumb prune-search program, not your ominous paperclipper, can beat humans at math as decisively as they beat us at chess.

" } }, { "_id": "wfJebLTPGYaK3Gr8W", "title": "Recommended reading for new rationalists", "pageUrl": "https://www.lesswrong.com/posts/wfJebLTPGYaK3Gr8W/recommended-reading-for-new-rationalists", "postedAt": "2009-07-09T19:47:02.279Z", "baseScore": 39, "voteCount": 34, "commentCount": 168, "url": null, "contents": { "documentId": "wfJebLTPGYaK3Gr8W", "html": "

This has been discussed in passing several times, but I thought it might be worthwhile to collect a list of recommended reading for new members and/or aspiring rationalists. There's probably going to be plenty of overlap with the SingInst reading list, but I think the purposes of the two are sufficiently distinct that a separate list is appropriate.

\r\n

Some requests:

\r\n\r\n

 Happy posting!

\r\n

PS - Is there a \"New Readers Start Here\" page, or something similar (aside from \"About\")? I seem to remember someone talking about one, but I can't find it.

\r\n

1\"Everything Eliezer has ever written (since 2001)... twice!\" while likely a highly beneficial suggestion for every single human being in existence, is not an acceptable entry. A Technical Explanation of Technical Explanation is fine. If you're not sure whether to classify something as \"an essay\" or \"a blog post\", there is a little-known trick to distinguish the two: essays contain small nuggets of vanadium ore, and blog posts contain shreds of palladium. Alternatively, just use your best judgement.

" } }, { "_id": "zA7ryCXAQ3wrSrKLz", "title": "Revisiting torture vs. dust specks", "pageUrl": "https://www.lesswrong.com/posts/zA7ryCXAQ3wrSrKLz/revisiting-torture-vs-dust-specks", "postedAt": "2009-07-08T11:04:39.325Z", "baseScore": 10, "voteCount": 8, "commentCount": 66, "url": null, "contents": { "documentId": "zA7ryCXAQ3wrSrKLz", "html": "

In line with my fine tradition of beating old horses, in this post I'll try to summarize some arguments that people proposed in the ancient puzzle of Torture vs. Dust Specks and add some of my own. Not intended as an endorsement of either side. (I do have a preferred side, but don't know exactly why.)

\n\n

Oh what a tangle. I guess Eliezer is too altruistic to give up torture no matter what we throw at him; others will adopt excuses to choose specks; still others will stay gut-convinced but logically puzzled, like me. The right answer, or the right theory to guide you to the answer, no longer seems so inevitable and mathematically certain.

\n

Edit: I submitted this post to LW by mistake, then deleted it, which turned out to be the real mistake. Seeing the folks merrily discussing away in the comments long after the deletion, I tried to undelete the post somehow, but nothing worked. All right; let this be a sekrit area. A shame, really, because I just thought of a scenario that might have given even Eliezer cause for self-doubt:

\n\n

 

" } }, { "_id": "CXcYoRMyMcs7zyknh", "title": "Causality does not imply correlation", "pageUrl": "https://www.lesswrong.com/posts/CXcYoRMyMcs7zyknh/causality-does-not-imply-correlation", "postedAt": "2009-07-08T00:52:28.329Z", "baseScore": 18, "voteCount": 21, "commentCount": 58, "url": null, "contents": { "documentId": "CXcYoRMyMcs7zyknh", "html": "

It is a commonplace that correlation does not imply causality, however eyebrow-wagglingly suggestive it may be of causal hypotheses. It is less commonly noted that causality does not imply correlation either. It is quite possible for two variables to have zero correlation, and yet for one of them to be completely determined by the other.

\n
The causal analysis of statistical information is the subject of several major books, including Judea Pearl's Causality and Probabilistic Reasoning in Intelligent Systems, and Spirtes et al.'s Causation, Prediction, and Search. One of the axioms used in the last-mentioned is the Faithfulness Axiom. See the book for the precise formulation; informally put, it amounts to saying that if two variables are uncorrelated, then they are causally independent. As support for this, the book offers a theorem to the effect that while counterexamples are theoretically possible, they have measure zero in the space of causal systems, and anecdotal evidence that people find fault with causal explanations violating the axiom.
\n
The purpose of this article is to argue, by means of a robust counterexample, that this is not the case.
\n
\n
The counterexample consists of just two variables A and B. The time series data can be found here, a text file in which each line contains a pair of values for A and B. Here is a scatter-plot:
\n
\"scatter
\n
The correlation is not significantly different from zero. Consider the possible causal relationships there might be between two variables, assuming there are no other variables involved. A causes B; B causes A; each causes the other; neither causes the other. Which of these describes the relationship between A and B for the above data?
\n
The correct answer is that none of the four hypotheses can be rejected by these data alone. The actual relationship is: A causes B. Furthermore, there is no noise in the process. A is varying randomly, but B is deterministically caused by A and nothing else, and not by a complex process either. The process is robust: it is not an accident that the correlation is zero. Every physical process that is modelled by the very simple mathematical relation at work here (to be revealed below) has the same property.
\n
\n
Because I know the process that generated these data, I can confidently predict that it is not possible for anyone to discover from them the true dynamical relation between A and B. So I'll make it a little easier to guess what is going on before I tell you a few paragraphs down.  Here (warning: large file) is another time series for the same variables, sampled at 1000 times the frequency (but only 1/10 the total time). Just by plotting these a certain regularity may become evident to the eyes, and it should be quite easy for anyone so inclined to discover the mathematical relationship between A and B.
\n
So what are these variables, that are tightly causally connected yet completely uncorrelated?
\n
Consider a signal generator. It generates a voltage that varies with time. Most signal generators can generate square waves or sine waves, sometimes sawtooths as well. This signal generator generates a random waveform. Not white noise -- it meanders slowly up and down without pattern, and in the long run the voltage is normally distributed.
\n
Connect the output across a capacitor. The current through the capacitor is proportional to the rate of change of the voltage. Because the voltage is bounded and differentiable, the correlation with its first derivative is zero. That is what A and B are: a randomly wandering variable A and its rate of change B.
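(A small numerical sketch of this construction - my own toy waveform, not the author's generator or data: A is a bounded, smoothly meandering sum of random sinusoids, and B is computed as its rate of change.)

```python
import numpy as np

# Toy version of the setup described above (assumed waveform, not the post's data):
# A is a bounded, slowly meandering smooth signal; B is its time derivative.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2000.0, 400_000)
freqs = rng.uniform(0.05, 0.5, size=20)         # random angular frequencies
phases = rng.uniform(0.0, 2.0 * np.pi, size=20)
A = np.sin(np.outer(t, freqs) + phases).sum(axis=1)
B = np.gradient(A, t)                           # B is entirely determined by A

# B is caused by A with no noise at all, yet the sample correlation is tiny.
print(np.corrcoef(A, B)[0, 1])                  # typically within a few hundredths of zero
```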
\n
Theorem:  In the long run, a bounded, differentiable real function has zero correlation with its first derivative.
\n
The proof is left as an exercise.
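(For readers who would rather not do the exercise, here is one way the computation can go - a sketch of mine, not the author's, reading the long-run moments as time averages over a window [0, T].)

```latex
% Sketch: boundedness of f makes both boundary terms vanish as T grows.
\begin{align*}
\frac{1}{T}\int_0^T f(t)\,f'(t)\,dt &= \frac{f(T)^2 - f(0)^2}{2T} \;\xrightarrow{\,T\to\infty\,}\; 0,\\[2pt]
\frac{1}{T}\int_0^T f'(t)\,dt &= \frac{f(T) - f(0)}{T} \;\xrightarrow{\,T\to\infty\,}\; 0,\\[2pt]
\operatorname{Cov}_T(f, f') &= \frac{1}{T}\int_0^T f f'\,dt
  - \left(\frac{1}{T}\int_0^T f\,dt\right)\!\left(\frac{1}{T}\int_0^T f'\,dt\right)
  \;\xrightarrow{\,T\to\infty\,}\; 0.
\end{align*}
% The correlation therefore tends to zero as well, provided the long-run
% variances of f and f' stay bounded away from zero.
```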
\n
Notice that unlike the case that Spirtes considers, where the causal connections between two variables just happen to have multiple effects that exactly cancel, the lack of correlation between A and B is robust. It does not matter what smooth waveform the signal generator puts out: it will have zero correlation with the current that it is the sole cause of. I chose a random waveform because it allows any value and any rate of change of that value to exist simultaneously, rather than e.g. a sine wave, where each value implies at most two possible rates of change. But if your data formed neat sine waves you wouldn't be resorting to statistics. The problem here is that they form a cloud of the sort that people immediately start doing statistics on, but the statistics tells you nothing. I could have arranged that A and B had a modest positive correlation, by taking for B a linear combination of A and dA/dt, but the seductive exercise of drawing a regression line through the cloud would be meaningless.
\n
In some analyses of causal networks (for example here, which tries, but I think unsuccessfully, to handle cyclic causal graphs), an assumption is made that the variables are at equilibrium, i.e. that observations are made at intervals long enough to ignore transient temporal effects. As can be seen by comparing the two time series for A and B, or by considering the actual relation between the variables, this procedure is guaranteed to hide, not reveal, the relationship between these variables.
\n
If anyone tackled and solved the exercise of studying the detailed time series to discover the relationship before reading the answer, I doubt that you did it by any statistical method.
\n
Some signal generators can be set to generate a current instead of a voltage. In that case the current through the capacitor would cause the voltage across it, reversing the mathematical relationship. So even detailed examination of the time series will not distinguish between the voltage causing the current and the current causing the voltage.
\n
\n
In a further article I will exhibit time series for three variables, A, B, and C, where the joint distribution is multivariate normal, the correlation of A with C is below -0.99, and each has zero correlation with B. Some causal information is also given: A is exogenous (i.e. is not causally influenced by either B or C), and there are no confounding variables (other variables correlating with more than one of A, B, or C). This means that there are four possible causal arrows you might draw between the variables: A to B, A to C, B to C, and C to B, giving 16 possible causal graphs. Which of these graphs are consistent with the distribution?
\n
\n
\n

 

" } }, { "_id": "j3spwkpm6MEA86aPS", "title": "Can self-help be bad for you?", "pageUrl": "https://www.lesswrong.com/posts/j3spwkpm6MEA86aPS/can-self-help-be-bad-for-you", "postedAt": "2009-07-07T20:40:44.330Z", "baseScore": 4, "voteCount": 6, "commentCount": 13, "url": null, "contents": { "documentId": "j3spwkpm6MEA86aPS", "html": "

From the NHS Behind the Headlines blog:

\n

 

\n
“Self help makes you feel worse,” BBC News has reported. It says that the growing trend of using self-help mantras to boost your spirits may actually have a detrimental effect. The news comes from Canadian research, which found that people with low self-esteem felt worse after repeating positive statements about themselves.
\n

 

\n
Although positive self-statements are widely believed to boost mood and self-esteem, they have not been widely studied, and their effectiveness has not been demonstrated. This experimental study sought to investigate the contradictory theory that these statements can be harmful.\n

The researchers had a theory that when a person feels deficient in some way, making positive self-statements to improve that aspect of their life may highlight the discrepancy between their perceived deficiency and the standard they would like to achieve. The researchers carried out three studies in which they manipulated positive self-statements and examined their effects on mood and self-esteem.

\n
\n

 

\n

Something about the hypothesis sounds familiar:

\n
This experimental research among a group of Canadian university students has found that positive statements may reinforce that positivity among those with high self-esteem, and make them feel even better. But it causes those with low self-esteem to feel worse and to have lower self-esteem.\n

The researchers say that this theory is based on the idea of ‘latitudes of acceptance’, i.e. messages that reinforce a position close to one’s own are more likely to be persuasive than messages that reinforce a position far from one’s own. As they suggest, if a person believes that they are unlovable and keeps repeating, \"I’m a lovable person\", they may dismiss this statement and possibly reinforce their conviction that they are unlovable.

\n
\n

 

\n

What do you think? Is this plausible, or is it an attempt to shoehorn one of those trendy heuristics-and-biases-related hypotheses into a study on self-esteem? If you accept the validity of the study and its conclusion, does it influence LW's Rationalists Should Win self-help philosophy? What if it is literally true that some people are more lovable and some less, and that this has unavoidable effects on self-esteem? Do low self-esteem rationalists need different techniques from those with high self-esteem?

" } }, { "_id": "AR8PnNkhTzP22Eygf", "title": "An interesting speed dating study", "pageUrl": "https://www.lesswrong.com/posts/AR8PnNkhTzP22Eygf/an-interesting-speed-dating-study", "postedAt": "2009-07-07T07:09:42.080Z", "baseScore": 20, "voteCount": 25, "commentCount": 12, "url": null, "contents": { "documentId": "AR8PnNkhTzP22Eygf", "html": "

I recently found an article in the New York Times that talks about a speed dating study that is going to be published in an upcoming issue of the journal Psychological Science. Given the usual state of science journalism, the fact that the article includes links that let me find a press release about the upcoming paper and a 20-page PDF file containing the paper itself was very helpful.

\n

According to most studies and in accordance with popular stereotypes, men are normally less selective than women when it comes to evaluating potential romantic partners - in general, it appears that men are more likely to want to date any given woman than women are to want to date any given man. In a typical speed dating experiment, men and women rate potential partners as either a \"yes\" or a \"no\" depending on whether or not they want to see that person again. Men almost always rate a larger percentage of women as a \"yes\" than women do men, and, according to this paper, this is a fairly robust finding that generalizes over many different contexts. The usual explanation of this phenomenon is based on evolutionary psychology: a female has a lot more to lose from a bad mate choice than a male does. If there were a biological, genetic basis for this tendency, it should be difficult to come up with an experimental setup in which women are less selective and men are more selective.

\n

However, that's not the case at all. This study demonstrates that a small, seemingly trivial change in the speed dating ritual results in a (partial) reversal of the normal results. You see, in practically every speed dating setup, when it is time to interact with a new partner, men physically leave their seat and move to the table where the next woman is sitting, while the women remain seated and wait for the men to approach them. The authors of this study had the men remain still and had the women change seats, and found that this was all it took to wipe away the usual pattern: when the women were required to physically approach while the men remained still, the women became less selective than the men, reporting greater romantic interest and \"yes\"ing partners at a higher rate. \"Rotaters\" also reported greater self-confidence than \"sitters\", regardless of gender.

\n

I suggest that you go read the paper, or at least the press release, yourself; my summary doesn't really do it justice, and I'm leaving the implications for the evolutionary psychology-based analysis of gender as an exercise for the reader.

\n

EDIT: Having had some more time to look over the study, I think I should point out that it wasn't a complete reversal of the usual gender behavior: female rotators were only moderately less selective than male sitters, while male rotators were significantly less selective than female sitters. (Sitters of both genders were equally selective.)

" } }, { "_id": "68e9tsKyrtvoy78Cx", "title": "The Dangers of Partial Knowledge of the Way: Failing in School", "pageUrl": "https://www.lesswrong.com/posts/68e9tsKyrtvoy78Cx/the-dangers-of-partial-knowledge-of-the-way-failing-in", "postedAt": "2009-07-06T15:16:41.776Z", "baseScore": 14, "voteCount": 19, "commentCount": 38, "url": null, "contents": { "documentId": "68e9tsKyrtvoy78Cx", "html": "

I lost the Way, even when I didn't know what I was trying to follow, when I learned a Dangerous Truth about school.  This is a brief recounting of how I lost the Way, came back, and what I believe we can learn from this.  I hope you find it instructive.

\n

I suffered from insomnia as a child.  Not because of any traumatic events or abuse (if anything I had an exceptionally comfortable childhood), but because I would lie awake in bed, staring into the darkness and worrying about school.  I worried that I wouldn't complete assignments, that I would fail in subjects, and that terrible things would happen if I didn't make straight As.  None of my fears were well-founded, though:  I was never at any serious risk of failing to complete an assignment, getting anything worse than a B for a quarter grade in a subject, or having indeterminate terrible things happen to me.  I experienced this anxiety in part because I was suffering from undiagnosed obsessive-compulsive disorder and in part because I believed that the most important thing in life was to succeed in school.  If school was a game, I was playing to win.

\n

I continued in this mode all the way until I entered high school (9th grade in Florida, USA).  Since it's not significant to this post I'll omit most of the details, but the short story is that I did not enjoy high school for a multitude of reasons, not least of which was that I felt held back by teachers and administrators who didn't want me advancing my learning too far past the level they were trying to teach.1  I don't think most of them were being malicious, but had their hands forced by an educational system that is more concerned with average performance and improving the results of underperforming students than allowing exceptional students to succeed.  Looking back, if I'd had more guts I would have pushed to take my GED and gone to early college, but instead I wallowed in my high school's educational mire until I had wasted nearly all of four years.

\n

Having finished my four-year sentence to secondary education and graduating with high marks and my belief in the educational system still intact, I began college hopeful that I would find a better learning environment, and with only a few exceptions, I did.  I enjoyed most of my classes, even if I did sleep through the middle 2/3rds of nearly every class meeting I attended, because I was learning things I was interested in:  math, physics, anthropology, and computer science, my major.  By my second year I was taking all upper level computer science classes, and by taking classes over the summer I was able to graduate at the end of my third year with a BS.

\n

I don't recall what I had wanted to do with my degree when I started college, but by the end I knew that I wanted to join the ranks of academe.  Both of my parents are teachers, and I have always loved to think about better ways to teach what I've learned, so combined with the promise of discovering new knowledge, the academic career path seemed ideal to me.  So in my last year of undergraduate education I made arrangements to stay on at my university to earn a PhD in computer science and work as a teaching assistant for the department.

\n

It was during this time that I lost the Way.  I don't recall the exact day, but sometime in 2004 it happened.  While researching teaching methods I might find useful, I stumbled upon Alfie Kohn's writings and Paul Graham's essays.  Between the two of them, combined with my experiences in high school, my willpower broke.  Not because they said anything that made me feel as though I shouldn't try, but because they made it painfully obvious that the whole point of school isn't really learning, and in some cases school's attempt to make it appear as though learning is going on actually hurts your ability to really learn.  Seeing this, I came to the great realization that how I did in school didn't really matter.

\n

As you might have guessed, this is the Dangerous Truth that I learned.  It's a dangerous one because I learned only part of the truth - that school isn't really about learning - without the complementary truth that you still have to go through it if you want people to believe that you are as good as you claim to be.  What happened to me from there should be obvious:  I stopped devoting myself to my school work, worrying about its quality, and trying to be the best.  The only way I stayed in graduate school was a combination of raw intelligence and an ability to finish work to a good enough quality when under the pressure of a deadline.

\n

After two years, though, I was pushed out of the computer science program with my MS.  My university requires us to pass a qualifying exam, essentially a test of knowledge in a variety of subjects from undergraduate computer science education.  Passing is more a feat of memory and determination than intelligence or ability to do academic research, so given my feelings about educational hoops at the time, I failed all of my attempts (though, I will note, I actually passed if you were to superimpose all of my attempts over one another, but I was never able to do it all in one sitting).  Not knowing what else to do, and with my research already tending towards combinatorics, I switched to the mathematics department.

\n

I'm now at the end of my third year in the mathematics PhD program.  I still haven't passed my qualifiers because I am being much more cautious, maybe even too cautious, with my attempts.  But I anticipate a better outcome for me this time, and that's because I found the Way again.

\n

Between the posts on Overcoming Bias and Less Wrong, I finally got a missing piece of the truth.  No matter if the intermediate steps seem dumb, if you want to play to win, if you want to follow the Way, you have to push through it if the end result is worthwhile.  As it applies to my life, if I want to get that PhD, even though the qualifying exam is dumb because it doesn't test anything I haven't already demonstrated competence in and doesn't demonstrate my ability to complete academic research, it's still something I have to get through.  If I want to play to win, I have to pass the qualifying exam.  No more placing extra restrictions on myself so that if I fail I have an excuse:  I will pass or fail by my own best efforts.

\n

As this post suggests, partial knowledge of a powerful set of techniques can be very dangerous.  It's a motif that appears in many arts.  To me the most salient examples are in martial arts training and chi cultivation, but other good ones include physics, biology, AI, and economics, where applying partial knowledge could lead to real danger for the applier and the world.  The Art of Rationality is similarly dangerous when only partial knowledge is obtained.  It's almost too bad we can't send aspiring rationalists off to monasteries where, like ancient martial arts students, they can train and learn and keep themselves separated from the world until they have reached sufficient mastery to be safe to return to wider society.

\n

But today most people who learn martial arts spend only a few hours a week in the dojo and maybe several more hours at home training.  The rest of the time they are in normal society, possessing dangerous, partial knowledge of their martial art.  Yet few of them kill anyone accidentally because the first thing they learn is where and how they are allowed to use their art.  Even when tempted, it's important that the martial arts student not use what they know until they have reached a sufficient level of mastery and maturity to use their abilities responsibly in the wider world.  We may need a similar approach for training in the Winning Way.

\n

Footnotes

\n

1 And this was even with me being in the International Baccalaureate Program, an internationally recognized college prep program that allows the transfer of course credits from high school to many universities around the world.

" } }, { "_id": "J3Ch3sHCytXRQRo9L", "title": "Can chess be a game of luck?", "pageUrl": "https://www.lesswrong.com/posts/J3Ch3sHCytXRQRo9L/can-chess-be-a-game-of-luck", "postedAt": "2009-07-06T03:08:32.930Z", "baseScore": -3, "voteCount": 12, "commentCount": 44, "url": null, "contents": { "documentId": "J3Ch3sHCytXRQRo9L", "html": "

Gil Kalai, a well known mathematician, has this to say on the topic of chess and luck:

\n

http://gilkalai.wordpress.com/2009/07/05/chess-can-be-a-game-of-luck/

\n

I didn't follow his argument at all, but it seems like something other LW posters may understand, so I decided to post it here. Do comment on his arguments if you agree or disagree with him.

" } }, { "_id": "koHaGLxsurYadru49", "title": "Media bias", "pageUrl": "https://www.lesswrong.com/posts/koHaGLxsurYadru49/media-bias", "postedAt": "2009-07-05T16:54:37.819Z", "baseScore": 39, "voteCount": 43, "commentCount": 47, "url": null, "contents": { "documentId": "koHaGLxsurYadru49", "html": "

No, I don't mean that the news media are biased politically.  I mean that authors are biased by the media they use.

\n

I'm learning about support vector machines (SVMs).  There are a lot of books and articles written on SVMs.  There are also a whole lot of video lectures on SVMs at videolectures.net (see \"kernel methods\").

\n

People go into much greater detail in lectures than in text.  I like to work with text.  I'd like to have a text on SVMs that goes into as much detail as videos on SVMs usually do, and works out the ideas behind the concepts as thoroughly, but no such text exists.  For some reason, giving a 5-hour lecture series in which you describe the motivations, applications, and work out the mathematical details is acceptable; but writing a text of the same level of detail, which might take only 2 hours to read, is not.

\n

Perhaps this is because writers are motivated to keep pagecounts low.  But pagecount no longer matters with electronic articles.  Yet writers still don't want to explain things thoroughly.  They certainly aren't saving their readers any time by leaving out intermediate steps.  A longer article would take less time to read (and possibly less time to write).  Another problem with the pagecount theory is that texts routinely include footnotes and appendices, contributing to the pagecount, yet relegate them to the back of the book, as if embarrassed by them, despite the fact that this makes them very difficult to use.

\n

It's especially bad in math, in which writers have a long tradition of deliberately concealing difficult steps and leaving them \"as an exercise to the reader\".  For some reason it is considered bad form to write out all of the steps in a proof, even if adding one or two lines could save the reader five minutes of puzzling.  I read an electronic article yesterday where the author said, \"These two equations are actually equivalent.  Do you see why?\"

\n

I think people have adopted a set of cultural biases about what is appropriate in lectures vs. in writing by simply counting observations, without thinking about the systematic sample bias.  Speakers speak the way they've seen other speakers speak, without recollecting that most of those speakers were instructors.  Technical writers, meanwhile, are picking up their cues from authors of textbooks, which are written with the assumption that a person will be on hand to take you through the details; and applying them in situations where no such person will be available.

" } }, { "_id": "RDABLCYLaKzrTEPe6", "title": "The enemy within", "pageUrl": "https://www.lesswrong.com/posts/RDABLCYLaKzrTEPe6/the-enemy-within", "postedAt": "2009-07-05T15:08:05.874Z", "baseScore": 22, "voteCount": 22, "commentCount": 18, "url": null, "contents": { "documentId": "RDABLCYLaKzrTEPe6", "html": "

I read an article from The Economist subtitled \"The evolutionary origin of depression\" which puts forward the following hypothesis:

\n
\n

As pain stops you doing damaging physical things, so low mood stops you doing damaging mental ones—in particular, pursuing unreachable goals. Pursuing such goals is a waste of energy and resources. Therefore, he argues, there is likely to be an evolved mechanism that identifies certain goals as unattainable and inhibits their pursuit—and he believes that low mood is at least part of that mechanism. ...

\n
\n

[Read the whole article]

\n

This ties in with Kaj and PJ Eby's idea that our brain has a collection of primitive, evolved mechanisms that control us via our mood. Eby's theory is that many of us have circuits that try to prevent us from doing the things we want to do.

\n

Eliezer has already told us about Adaptation-Executers, not Fitness-Maximizers; evolution mostly created animals which executed certain adaptations without really understanding how or why they worked - such as mating at a certain time or eating certain foods over others.

\n

But, in humans, evolution didn't create the perfect consequentialist straight off. It seems that evolution combined an explicit goal-driven propositional system with a dumb pattern recognition algorithm for identifying the pattern of \"pursuing an unreachable goal\". It then played with a parameter for the balance of power between the goal-driven propositional system and the dumb pattern recognition algorithms until it found a level which was optimal in the human EEA. So the blind idiot god bequeathed us a legacy of depression and akrasia - it gave us an enemy within.

\n

Nowadays, it turns out that that parameter is best tuned by giving all the power to the goal-driven propositional system, because the modern environment is far more complex than the EEA and requires long-term plans, like founding a high-technology startup, in order to achieve extreme success. These long-term plans do not immediately return a reward signal, so they trip the \"unreachable goal\" sensor inside most people's heads, causing them to completely lose motivation.

\n

However, some people seem to be naturally very determined; perhaps their parameter is set slightly more towards the goal-driven propositional system than average. These people rise up from council flats to billionaire-dom and celebrity status. People like Alan Sugar. Of course this is mere hypothesis; I cannot find good data to back up the claim that certain people succeed for this reason, but I think we all have a lot of personal evidence that suggests that if we could just work harder, we could do much better. It is now well accepted that getting into a positive mood counteracts ego depletion; see, for example, this paper.1 One might ask why on earth evolution designed the power-balance parameter to vary with your mood; but suppose that the mechanism is that the \"unreachable goal\" sensor works as follows:

\n

{pursuing goal} + {sad} = {current goal is unachievable} ==> decrease motivation

\n

{pursuing goal} + {happy} = {current goal is being achieved} ==> increase motivation

\n

And the \"mood\" setting takes a number of inputs to determine whether to go into the \"happy\" state or the \"sad\" state, such as whether you have recently laughed, whether you received praise or a gift recently, and whether your conscious, deliberative mind has registered the \"subgoal achieved\" signal.

\n

In our EEA, all of the above probably correlated well with being in pursuit of a goal that you are succeeding at: since the EEA seems to be mostly about getting food and status in the tribe, receiving a gift, laughing or getting more food probably all correlated with doing something that was good - such as making allies who would praise you and laugh and socialize with you. Conversely, being hungry and lonely and frustrated indicates that you are trying something that isn't working, and that the best course of action for your genes is to hit you with a big dose of depression so that you stop doing whatever you were doing.

\n

Following PJ Eby's idea of the brain as a lot of PID feedback controller circuits, we can see what might happen in the case of someone who \"makes it\": they try something which works, and people praise them and give them gifts (e.g. money, business competition prizes, corporate hospitality gifts, attention, status), which increases their motivation because it sets their \"goal attainability\" sensor to \"attainable\". This creates a positive feedback loop. Conversely, if someone does badly and then gets criticism for that bad performance, their \"unreachable goal\" sensor will trip out and remove their will to continue, creating a downward spiral of ever diminishing motivation. This downward spiral failure mode wouldn't have happened in the EEA, because the long-term planning aspect of our cognition was probably useful much more occasionally in the EEA than it is today, hence it was no bad thing for your brain to be quite eager to switch it off.
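(A deliberately crude toy of the two spirals just described - every constant below is an arbitrary choice of mine, not anything taken from the post or from PJ Eby: motivation drives effort, effort relative to a baseline generates feedback, and feedback in turn nudges motivation.)

```python
# Toy spiral model; the constants are arbitrary assumptions, not fitted to anything.
def run(initial_feedback: float, steps: int = 20) -> float:
    motivation = 1.0
    feedback = initial_feedback
    for _ in range(steps):
        effort = motivation                         # motivation drives effort
        feedback = 0.5 * feedback + (effort - 1.0)  # praise when effort beats a baseline
        motivation = max(0.0, motivation + 0.2 * feedback)
    return motivation

print(run(+0.5))   # a little early praise: motivation ratchets upward
print(run(-0.5))   # a little early criticism: motivation decays toward zero
```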

\n

So what are we to do? Powerful anti-depressants would seem to be your friend here, as they might \"fool\" your unreachable goal sensor into not tripping out. In a comments thread on Sentient Developments, David Pearce and another commenter claimed that there are some highly motivating antidepressants which could help. Laughing and socializing in a positive, fun way also seem like good ideas, or even just watching a funny video on youtube. But we should definitely think about developing much more effective ways to defeat that enemy within; I have my eye on hypnosis, meditation and antidepressants as big potential contributors, as well as spending time with a mutually praising community.

\n

 

\n
\n

1. Restoring the self: Positive affect helps improve self-regulation following ego depletion, Tice et al., Journal of Experimental Social Psychology

" } }, { "_id": "PrXR66hQcaJXsgWsa", "title": "Not Technically Lying", "pageUrl": "https://www.lesswrong.com/posts/PrXR66hQcaJXsgWsa/not-technically-lying", "postedAt": "2009-07-04T18:40:02.830Z", "baseScore": 51, "voteCount": 43, "commentCount": 86, "url": null, "contents": { "documentId": "PrXR66hQcaJXsgWsa", "html": "

I'm sorry I took so long to post this. My computer broke a little while ago. I promise this will be relevant later.

\n

A surgeon has to perform emergency surgery on a patient. No painkillers of any kind are available. The surgeon takes an inert saline IV and hooks it up to the patient, hoping that the illusion of extra treatment will make the patient more comfortable. The patient asks, \"What's in that?\" The doctor has a few options:

\n
  1. \"It's a saline IV. It shouldn't do anything itself, but if you believe it's a painkiller, it'll make this less painful.\"
  2. \"Morphine.\"
  3. \"The strongest painkiller I have.\"
\n

-The first explanation is not only true, but maximizes the patient's understanding of the world.
-The second is obviously a lie, though, in this case, it is a lie with a clear intended positive effect: if the patient thinks he's getting morphine, then, due to the placebo effect, there is a very real chance he will experience less subjective pain.
-The third is, in a sense, both true and a lie. It is technically true. However, it's somewhat arbitrary; the doctor could easily have said \"It's the weakest painkiller I have,\" or \"It's the strongest sedative I have,\" or any number of other technically true but misleading statements. This statement is clearly intended to mislead the hearer into thinking it is a potent painkiller; it promotes false beliefs while not quite being a false statement. It's Not Technically Lying. It seems that it deserves most, if not almost all, the disapproval that actually lying does; the truth does not save it. Because language does not specify single, clear meanings, we can often use language where the obvious meaning is false and the non-obvious true, intentionally promoting false beliefs without false statements.

\n

Another, perhaps more practical example: the opening two sentences of this post. I have been meaning to write this for a couple weeks, and have failed mostly due to akrasia. My computer broke a few months ago. Both statements are technically true,1 but the implied \"because\" is not just false, but completely opposite the truth - it's complex, but if my computer had not broken, I would probably never have written this post. I've created the impression of a quasi-legitimate excuse without actually saying anything false, because our conventional use of language filled in the gaps that would have been lies.

\n

The distinction between telling someone a falsehood with the intention of promoting false beliefs and telling them a truth with the intention of promoting false beliefs seems razor-thin. In general, you're probably not justified in deceiving someone, but if you are justified, I hardly see how one form of deception is totally OK and the other is totally wrong. If, and I stress if, your purpose is justified, it seems you should choose whichever will fulfill it more effectively. I'd imagine the balance generally favors NTL, because there are often negative consequences associated with lies, but I doubt that the balance strictly favors NTL; the above doctor hypothetical is an example where the lie seems better than the truth (absent malpractice concerns).

\n

For what common sentiment is worth, people often see little distinction between lies and NTLs. If I used my computer excuse with a boss or professor, and she later found out my computer actually broke before the paper was even assigned, my saying, \"Well, I didn't claim there was a causal connection; you made that leap yourself! I was telling the truth (technically)!\" is unlikely to fix the damage to her opinion of me. From the perspective of the listener, the two are about equally wrong. Indeed, at least in my experience, some listeners view NTL as worse because you don't even think you're lying to them.

\n

Lying does admittedly have its own special problems, though I think the big one, deception of others, is clearly shared. There is the risk of lies begetting further lies, as the truth is entangled.  This may be true, but it is unclear how Not Technically Lying resolves this; if you are entirely honest, the moment your claim is questioned seriously, you either admit you were misleading someone, or you have to continue misleading them in a very clever manner. If you were actually justified in misleading them, failing to do so does not appear to be an efficient outcome. If you're able to mislead them further, then you've further separated their mind from reality, even if, had they really understood what you said, you wouldn't have. And, of course, there's the risk that you will come to believe your own lies, which is serious.

\n

Not Technically Lying poses a few problems that lying does not. For one, if I fill in the bottom line and then fill in my premises with NTL's, omitting or rephrasing difficult facts, I can potentially create an excellent argument, an investigation of which will show all my premises are true. If I lied, this could be spotted by fact-checking and my argument largely dismissed as a result. Depending on the context (for example, if I know there are fact-checkers) either one may be more efficient at confounding the truth.

\n

While it may be a risk that one believes their own lies, if you are generally honest, you will at least be aware when you are lying, and it will likely be highly infrequent. NTL, by contrast, may be too cheap. If I lie about something, I realize that I'm lying and I feel bad that I have to. I may change my behaviour in the future to avoid that. I may realize that it reflects poorly on me as a person. But if I don't technically lie, well, hey! I'm still an honest, upright person and I can thus justify viciously misleading people because at least I'm not technically dishonest. I can easily overvalue the technical truth if I don't worry about promoting true beliefs. Of course, this will vary by individual; if you think lying is generally pretty much OK, you're probably doomed. You'd have to have a pretty serious attachment to the truth. But if you have that attachment, NTL seems that much more dangerous.

\n

I'm not trying to spell out a moral argument for why we should all lie; if anything, I'm spelling out an argument for why we shouldn't all Not Technically Lie. Where one is immoral, in most if not all cases, so is the other, though where one is justified, the other is likely justified as well, though perhaps not more justified. If lying is never justified because of its effect on the listener, then neither is NTL. If lying is never justified because of its effect on the speaker, well, NTL may or may not be justified; its effects on the speaker don't seem so good, either.

\n

To tie this into AI (definitely not my field, so I'll be quite brief), it seems a true superintelligence would be unbelievably good at promoting false beliefs with true statements if it really understood the beings it was speaking to. Imagine how well a person could mislead you if they knew beforehand exactly how you would interpret any statement they made. If our concern is the effect on the listener, rather than the effect on the speaker, this is a problem to be concerned with. A Technically Honest AI could probably get away with more deception than we can imagine.

\n

1-Admittedly this depends on your value of a \"little while,\" but this is sufficiently subjective that I find it reasonable to call both statements true.

\n

As a footnote, I realize that this topic has been done a lot, but I do not recall seeing this angle (or, actually, this distinction) discussed; it's always been truth vs. falsity, so hopefully this is an interesting take on a thoroughly worn subject.

\n

 

" } }, { "_id": "hA2q5nbvxxPrpZGgR", "title": "Avoiding Failure: Fallacy Finding", "pageUrl": "https://www.lesswrong.com/posts/hA2q5nbvxxPrpZGgR/avoiding-failure-fallacy-finding", "postedAt": "2009-07-03T17:59:38.239Z", "baseScore": 14, "voteCount": 16, "commentCount": 19, "url": null, "contents": { "documentId": "hA2q5nbvxxPrpZGgR", "html": "

When I was in high school, one of the exercises we did was to take a newspaper column and find all of the fallacies it employed. It was a fun thing to do, and is good awareness-raising for critical thinking, but it probably wouldn't be enough to stave off being deceived by an artful propagandist unless I did it until it was reflexive. To catch a fallacy being committed, I usually have to read a sentence three or four times to see the underlying logic behind it and remember why the logic is invalid; when I'm confronted by something as fallacy-ridden as an ad for the Love Calculator, I just give up in exhaustion. Worse, when I'm watching television, I can't even rewind to see what they said (I suspect the fallacy count is higher too).

\n

To counter this (and to further hone my fallacy-finding skills), I've extended the fallacy-finding exercise to work on video. Take a video from a genre that generally has a high fallacy-per-minute ratio (e.g. campaign ads, political debates, speeches, regular ads, Oprah) and edit the video to play a klaxon sound whenever someone commits a logical fallacy or gets a fact wrong, followed by the name of the fallacy they committed flashing on screen.

\n

EDIT: I've made one of these and uploaded it to YouTube. Thank you, Eliezer and CannibalSmith, for the encouragement. You can find other debates at CNN, and YouTube lets you do annotations, so no editing software is technically required. I'll be posting further videos to this post as I make/find them.

" } }, { "_id": "7T5J3WM5zqcnTPofS", "title": "The Fixed Sum Fallacy", "pageUrl": "https://www.lesswrong.com/posts/7T5J3WM5zqcnTPofS/the-fixed-sum-fallacy", "postedAt": "2009-07-03T13:01:55.583Z", "baseScore": 5, "voteCount": 5, "commentCount": 4, "url": null, "contents": { "documentId": "7T5J3WM5zqcnTPofS", "html": "

(Update: Patrick points out the subject of this post is already well-known as the gambler's fallacy. I really should have read Tversky and Kahneman before posting.)

\n

You're flipping a coin 100 times; the first five throws came up heads; what do you expect on the next throw? If you believe the coin to be fair, you allocate 0.5 credence to each face coming up. If your Bayesian prior allowed for biased coins, you update and answer something like 0.6 to 0.4. So far it's all business as usual.
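(As a concrete, hypothetical version of that update: one prior that yields roughly the 0.6-to-0.4 answer is a Beta(10, 10) over the coin's heads-probability; the pseudo-counts below are my own choice, not the author's.)

```python
# Conjugate Beta-Binomial update: posterior predictive for the next flip
# after observing five heads, under an assumed Beta(10, 10) prior.
a, b = 10, 10          # prior pseudo-counts: probably fair, but bias is possible
heads, tails = 5, 0    # the first five throws
p_next_heads = (a + heads) / (a + b + heads + tails)
print(p_next_heads)    # 15 / 25 = 0.6
```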

\n

There exists, however, a truly bizarre third possibility that assigns reduced credence to heads. The reasoning goes like this: at the outset we expected about 50 heads and 50 tails. Your first five throws have used up some of the available heads, while all 50 tails are still waiting for us ahead. When presented so starkly, the reasoning sounds obviously invalid, but here's the catch: people use it a lot, especially when thinking about stuff that matters to them. Happy days viewed as payback for sad days, rich times for poor times, poor people suffering because rich people wallow, and of course all of that vice versa.

\n

I initially wanted to dub this the \"fallacy of fate\" but decided to leave that lofty name available for some equally lofty concept. \"Fallacy of scarcity\", on the other hand, is actively used but doesn't quite cover all the scenarios I had in mind. So let's call this way of thinking the \"fixed sum fallacy\", or maybe \"counterbalance bias\".

\n

Now contrarians would point out that some things in life are fixed-sum, e.g. highly positional values. But other things aren't. Your day-to-day happiness obviously resembles repeatedly throwing a biased coin more than it resembles withdrawing value from a fixed pool: being happy today doesn't decrease your average happiness over all future days. (I have no sources on that besides my common sense; if I'm wrong, call me out.) So we could naturally hypothesize that fixed-sum thinking, when it arises, serves as some kind of coping mechanism. Maybe the economists or psychologists among us could say more; sounds like a topic for Robin?

" } }, { "_id": "73oFyuEoZTATMKvw2", "title": "Who do we care about?", "pageUrl": "https://www.lesswrong.com/posts/73oFyuEoZTATMKvw2/who-do-we-care-about", "postedAt": "2009-07-03T07:18:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "73oFyuEoZTATMKvw2", "html": "

Humans exercise compassion regarding:

\n
\n\n
Our moral feelings are not concerned for others’ wellbeing per se. They are very contingent. What’s the pattern? An obvious contender is whether we can be rewarded or punished by the beneficiary of our ‘compassion’. Distant, helpless, non-existent and low-status people can’t easily return the favour or punish. Inaction and shared blame are hard to punish, as everyone is responsible. There are some things that don’t fit this, but most can be explained; e.g. children are weak, but if they are ours we genetically benefit by caring, and if they are not, they probably have someone powerful caring about them for that reason. Got a better explanation?
\n
\n
I don’t decide what to do by guessing the pattern behind my moral emotions and trying to follow it better. If you do, perhaps try to care only for the powerful. If you don’t, notice that your moral feelings are probably fooling you into what’s tantamount to murder.
\n

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "Sq87zh3ZGPwShKDEJ", "title": "Harnessing Your Biases", "pageUrl": "https://www.lesswrong.com/posts/Sq87zh3ZGPwShKDEJ/harnessing-your-biases", "postedAt": "2009-07-02T20:45:28.821Z", "baseScore": 13, "voteCount": 12, "commentCount": 14, "url": null, "contents": { "documentId": "Sq87zh3ZGPwShKDEJ", "html": "

Theoretically, my 'truth' function, the amount of evidence I need to cache something as 'probably true and reliable', should be a constant. I find, however, that it isn't. I read a large amount of scientific literature every day, and only have time to investigate a scant amount of it in practice. So, typically I rely upon science reporting that I've found to be accurate in the past, and only investigate the few things that have direct relevance to work I am doing (or may end up doing).

\n

Today I noticed something about my habits. I saw an article on how string theory was making testable predictions in the realm of condensed matter physics, and specifically about room-temperature superconductors. While a pet interest of mine, this is not an area that I'm ever likely to be working in, but the article seemed sound and so I decided it was an interesting fact, and moved on, not even realizing that I had cached it as probably true.

\n

A few minutes later it occurred to me that some of my friends might also be interested in the article. I have a Google RSS feed that I use to republish occasional articles that I think are worth reading. I have a known readership of all of 2. Suddenly, I discovered that what I had been willing to accept as 'probably true' on my own behalf was no longer good enough. Now I wanted to look at the original paper itself, and to see if I could find any learnéd refutations or comments.

\n

This seems to be because my reputation was now, however tangentially, \"on the line\": I have a reputation in my circle of friends as the science geek and would not want to damage it by steering someone wrong. Now, clearly this is wrong-headed. My theory of truth should be my theory of truth, period.

\n

One could argue, I suppose, that information I store internally can only affect my own behavior, while information I disseminate can affect the behavior of an arbitrarily large group of people, and so a more stringent standard should apply to things I tell others. In fact, that was the first justification that sprang to mind when I noticed my double standard.

\n

It's a bogus argument, though: none of my friends are likely to repeat the article or post it in their blogs, so the information has only a tiny probability of propagating by that route. However, once it's in my head and I'm treating it as true, I'm very likely to trot it out as an interesting fact when I'm talking at science fiction conventions or to groups of interested geeks. If anything, the standard for my believing something should be more stringent than my standard for repeating it, not the other way around.

\n

But the title of this post is \"Harnessing Your Biases\", and it seems to me that if I have this strange predisposition to check more carefully when I am going to publish something, then maybe I need to set up a blog of things I have read that I think are true. It can just be an edited feed of my RSS stream, since that is simple to put together. Then I may find myself being more careful about what I accept as true. The mere fact that I have the feed and that it's public (although I doubt that anyone would, in fact, read it) would make me more careful. It's even possible that it will contain very few articles, as I may find I don't have time to investigate interesting claims well enough to declare them true, but that will have the positive side effect that I won't go around caching them internally as true either.

\n

I think that, in many ways, this is why, in the software field, code reviews are universally touted as an extraordinarily cheap and efficient way of improving code design and documentation while decreasing bugs, and yet are very hard to get put into practice. The idea is that after you've written any piece of code, you give it to a coworker to critique before you put it in the code base. If they find too many things to complain about, it goes back for revision before being given to yet another coworker to check. This continues until it's deemed acceptable.

\n

In practice, the quality of work goes way up and the speed of raw production goes down marginally. The end result is code that needs far less debugging, so the number of working lines of code produced per day goes way up. I think this is because programmers in such a regime quickly find that the testing and documenting they consider 'good enough' when their work is not going to be immediately reviewed is far less than the testing and documenting they do when they know they have to hand it to a coworker to criticize. The downside, of course, is that they are now opening themselves up to criticism on a daily basis, which is something few folks enjoy no matter how good it is for them; as a result, the practice continues to be quite rare due to programmer resistance to the idea.

\n

These appear to be two different ways to harness the bias folks have toward doing better (or more careful) work when it is going to be examined. Can anyone else here think of other biases that can be exploited in useful ways to leverage greater productivity or reliability in projects?

\n

 

" } }, { "_id": "S8ysxzgraSeuBXnpk", "title": "Rationality Quotes - July 2009", "pageUrl": "https://www.lesswrong.com/posts/S8ysxzgraSeuBXnpk/rationality-quotes-july-2009", "postedAt": "2009-07-02T18:35:19.802Z", "baseScore": 7, "voteCount": 10, "commentCount": 185, "url": null, "contents": { "documentId": "S8ysxzgraSeuBXnpk", "html": "

(Last month's started a little late, so I thought I'd bring it back to its original schedule.)

\n

A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.

\n" } }, { "_id": "DHbJk5uDXz2vW56wq", "title": "Fourth London Rationalist Meeting?", "pageUrl": "https://www.lesswrong.com/posts/DHbJk5uDXz2vW56wq/fourth-london-rationalist-meeting", "postedAt": "2009-07-02T09:56:03.799Z", "baseScore": 6, "voteCount": 5, "commentCount": 10, "url": null, "contents": { "documentId": "DHbJk5uDXz2vW56wq", "html": "

It's been the first Sunday of the month so far, but I haven't seen any announcement for this month yet. There was a discussion, but no conclusion. Is anything happening?

\n

ETA: This would have appeared a day and a half ago, but I did not notice that it had only been stored as a draft and not published. When logged in, it was impossible to notice that I was the only person seeing this. Feature request for this site: add a visual indication that something is only a draft, e.g. a \"Publish\" link, or perhaps the words \"Unpublished draft\" somewhere.

" } }, { "_id": "HCTzukWgCtdgtJjRS", "title": "Open Thread: July 2009", "pageUrl": "https://www.lesswrong.com/posts/HCTzukWgCtdgtJjRS/open-thread-july-2009", "postedAt": "2009-07-02T04:00:53.593Z", "baseScore": 5, "voteCount": 4, "commentCount": 251, "url": null, "contents": { "documentId": "HCTzukWgCtdgtJjRS", "html": "

Here's our place to discuss Less Wrong topics that have not appeared in recent posts. Have fun building smaller brains inside of your brains (or not, as you please).

" } }, { "_id": "yXvDCDDvuJdharpCF", "title": "Book Review: Complications", "pageUrl": "https://www.lesswrong.com/posts/yXvDCDDvuJdharpCF/book-review-complications", "postedAt": "2009-07-01T18:39:03.138Z", "baseScore": 13, "voteCount": 19, "commentCount": 11, "url": null, "contents": { "documentId": "yXvDCDDvuJdharpCF", "html": "

Atul Gawande's Complications:  A Surgeon's Notes on an Imperfect Science is a mixed bag for rationalists. Written as a series of essays organized into three sections entitled Fallibility, Mystery, and Uncertainty, the book as a whole is of questionable value, but the sections need to be considered individually for their worth to be accurately recognized.

\n

Fallibility examines a number of issues involving error and how it can be avoided in medicine, including but not limited to:  how computers are better than humans at making diagnoses, the factors that are known to negatively influence human reasoning when applied to diagnosis and treatment, how and why doctors make errors, what happens when doctors \"go bad\", and ways that the system fails and succeeds at curbing irresponsible physicians.  The section dealing with the field of anesthesiology is especially interesting, as Gawande recounts the changes that made it possible for the death rate due to general anesthesia to be reduced to a twentieth of its previous value midway through this century.

\n

What is distressing is that Mystery and Uncertainty do not represent good examples of the principles that Gawande demonstrates he understands in Fallibility, supporting his assertions with vivid anecdotes instead of reasoning from principles.  One story is especially obnoxious, as Gawande recounts how a \"gut feeling\" about a case of infection causes him to persuade a patient to undergo an extensive biopsy which reveals she has life-threatening necrotizing fasciitis, caused by the popularly-known \"flesh-eating bacteria\".  He explicitly acknowledges that he had no logical reason for regarding the case as unusual, that all appearances indicated a routine cellulitis which was approximately three thousand times more likely than necrotizing fasciitis was - and that he had recently taken care of a patient who had died agonizingly from a massive infection of the flesh-eating bacteria.  And yet the story is presented as an example of how to deal with uncertainty.

\n

If a normal infection had been present, the incident would likely not have been recounted and possibly not even remembered with clarity by Gawande.  We don't know - and likely Gawande doesn't know either - whether his hunches are more likely than not to be accurate.  As he discusses himself earlier in the book, such hunches are known to be usually wrong, but they make vivid anecdotes - which renders them potent and dangerous lures for erroneous thinking.
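To put rough numbers on why such hunches are usually wrong, here is the odds form of Bayes' theorem applied to the case above. The roughly 3,000:1 prior comes from the review's own figure; the 10:1 likelihood ratio assigned to the \"gut feeling\" is purely an assumption chosen for illustration.

```python
prior_odds = 1 / 3000        # necrotizing fasciitis vs. routine cellulitis (from the text)
likelihood_ratio = 10        # assumed strength of the hunch as evidence (illustrative only)

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 4))  # ~0.0033 -- still overwhelmingly likely to be cellulitis
```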

\n

On another problematic topic, he decries the relative rarity of autopsies and notes that they demonstrate that doctors are wrong about the cause of death and diagnosis for roughly one-third of patients who die while in care. Yet despite also noting that this fraction has not changed since the 1930s, he implies that the decline in autopsies is due to the unwillingness of doctors to trouble grieving family members, rather than drawing the obvious conclusion that doctors don't utilize or care about autopsy results.

\n

The book is worth reading, both to gain knowledge about the troubling subject of medical error and how it is and isn't reduced, and as an object lesson in how difficult it is for even intelligent and informed people to actually apply principles of proper reasoning.

\n

 

" } }, { "_id": "PYtus925Gcg7cqTEq", "title": "Atheism = Untheism + Antitheism", "pageUrl": "https://www.lesswrong.com/posts/PYtus925Gcg7cqTEq/atheism-untheism-antitheism", "postedAt": "2009-07-01T02:19:18.764Z", "baseScore": 138, "voteCount": 127, "commentCount": 187, "url": null, "contents": { "documentId": "PYtus925Gcg7cqTEq", "html": "

One occasionally sees such remarks as, \"What good does it do to go around being angry about the nonexistence of God?\" (on the one hand) or \"Babies are natural atheists\" (on the other).  It seems to me that such remarks, and the rather silly discussions that get started around them, show that the concept \"Atheism\" is really made up of two distinct components, which one might call \"untheism\" and \"antitheism\".

\n

A pure \"untheist\" would be someone who grew up in a society where the concept of God had simply never been invented - where writing was invented before agriculture, say, and the first plants and animals were domesticated by early scientists.  In this world, superstition never got past the hunter-gatherer stage - a world seemingly haunted by mostly amoral spirits - before coming into conflict with Science and getting slapped down.

\n

Hunter-gatherer superstition isn't much like what we think of as \"religion\".  Early Westerners often derided it as not really being religion at all, and they were right, in my opinion.  In the hunter-gatherer stage the supernatural agents aren't particularly moral, or charged with enforcing any rules; they may be placated with ceremonies, but not worshipped.  But above all - they haven't yet split their epistemology.  Hunter-gatherer cultures don't have special rules for reasoning about \"supernatural\" entities, or indeed an explicit distinction between supernatural entities and natural ones; the thunder spirits are just out there in the world, as evidenced by lightning, and the rain dance is supposed to manipulate them - it may not be perfect but it's the best rain dance developed so far, there was that famous time when it worked...

\n

If you could show hunter-gatherers a raindance that called on a different spirit and worked with perfect reliability, or, equivalently, a desalination plant, they'd probably chuck the old spirit right out the window.  Because there are no special rules for reasoning about it - nothing that denies the validity of the Elijah Test that the previous rain-dance just failed.  Faith is a post-agricultural concept.  Before you have chiefdoms where the priests are a branch of government, the gods aren't good, they don't enforce the chiefdom's rules, and there's no penalty for questioning them.

\n

And so the Untheist culture, when it invents science, simply concludes in a very ordinary way that rain turns out to be caused by condensation in clouds rather than rain spirits; and at once they say \"Oops\" and chuck the old superstitions out the window; because they only got as far as superstitions, and not as far as anti-epistemology.

\n

The Untheists don't know they're \"atheists\" because no one has ever told them what they're supposed to not believe in - nobody has invented a \"high god\" to be chief of the pantheon, let alone monolatry or monotheism.

\n

However, the Untheists do know that they don't believe in tree spirits.  And we shall even suppose that the Untheists don't believe in tree spirits, because they have a sophisticated and good epistemology - they understand why it is in general a bad idea to postulate ontologically basic mental entities.

\n

So if you come up to the Untheists and say:

\n

\"The universe was created by God -\"

\n

\"By what?\"

\n

\"By a, ah, um, God is the Creator - the Mind that chose to make the universe -\"

\n

\"So the universe was created by an intelligent agent.  Well, that's the standard Simulation Hypothesis, but do you have actual evidence confirming this?  You sounded very certain -\"

\n

\"No, not like the Matrix!  God isn't in another universe simulating this one, God just... is.  He's indescribable.  He's the First Cause, the Creator of everything -\"

\n

\"Okay, that sounds like you just postulated an ontologically basic mental entity.  And you offered a mysterious answer to a mysterious question.  Besides, where are you getting all this stuff?  Could you maybe start by telling us about your evidence - the new observation you're trying to interpret?\"

\n

\"I don't need any evidence!  I have faith!\"

\n

\"You have what?\"

\n

And at this very moment the Untheists have become, for the first time, Atheists.  And what they just acquired, between the two points, was Antitheism - explicit arguments against explicit theism.  You can be an Untheist without ever having heard of God, but you can't be an Antitheist.

\n

Of course the Untheists are not inventing new rules to refute God, just applying their standard epistemological guidelines that their civilization developed in the course of rejecting, say, vitalism.  But then that's just what we rationalist folk claim antitheism is supposed to be, in our own world: a strictly standard analysis of religion which turns out to issue a strong rejection - both epistemically and morally, and not after too much time.  Every antitheist argument is supposed to be a special case of general rules of epistemology and morality which ought to have applications beyond religion - visible in the encounters of science with vitalism, say.

\n

With this distinction in hand, you can make a bit more sense of some modern debates - for example, \"Why care so much about God not existing?\" could become \"What is the public benefit from publicizing antitheism?\"  Or \"What good is it to just be against something?  Where is the positive agenda?\" becomes \"Less antitheism and more untheism in our atheism, please!\"  And \"Babies are born atheists\", which sounds a bit odd, is now understood to sound odd because babies have no grasp of antitheism.

\n

And as for the claim that religion is compatible with Reason - well, is there a single religious claim that a well-developed, sophisticated Untheist culture would not reject?  When they have no reason to suspend judgment, and no anti-epistemology of separate magisteria, and no established religions in their society to avoid upsetting?

\n

There's nothing inherently fulfilling about arguing against Goddism - in a society of Untheists, no one would ever give the issue a second thought.  But in this world, at least, insanity is not a good thing, and sanity is worth defending, and explicit antitheism by the likes of Richard Dawkins would surely be a public service, conditional on its actually working.  (Which it may in fact be doing; the next generation is growing up increasingly atheist.)  Yet in the long run, the goal is an Untheistic society, not an Atheistic one - one in which the question \"What's left, when God is gone?\" is greeted by a puzzled look and \"What exactly is missing?\"

" } }, { "_id": "eRhFaibbTeGbjdaaf", "title": "What's In A Name?", "pageUrl": "https://www.lesswrong.com/posts/eRhFaibbTeGbjdaaf/what-s-in-a-name", "postedAt": "2009-06-29T12:54:11.922Z", "baseScore": 53, "voteCount": 51, "commentCount": 138, "url": null, "contents": { "documentId": "eRhFaibbTeGbjdaaf", "html": "

   Marge: You changed your name without consulting me?
   Homer: That's the way Max Power is, Marge.  Decisive.
      --
The Simpsons

\r\n

In honor of Will Powers and his theories about self-control, today I would like to talk about my favorite bias ever, the name letter effect. The name letter effect doesn't cause global existential risk or stock market crashes, and it's pretty far down on the list of things to compensate for. But it's a good example of just how insidious biases can be and of the egoism that permeates every level of the mind.

\r\n

The name letter effect is your subconscious preference for things that sound like your own name. This might be expected to mostly apply to small choices like product brand names, but it's been observed in choices of spouse, city of residence, and even career. Some evidence comes from Pelham et al's Why Susie Sells Seashells By The Seashore:

\r\n

The paper's first few studies investigate the relationship between a person's name and where they live. People named Phil were found more frequently than usual in Philadelphia, people named Jack in Jacksonville, people named George in Georgia, and so on with p < .001. To eliminate the possibility of the familiarity effect causing parents to subconsciously name their children after their place of residence, further studies were done with surnames and with people who moved later in life, both with the same results. The results held across US and Canadian city names as well as US state names, and were significant both for first name and surname.

\r\n

In case that wasn't implausible enough, the researchers also looked at association between birth date and city of residence: that is, were people born on 2/02 more likely to live in the town of Two Harbors, and 3/03 babies more likely to live in Three Forks? With p = .003, yes, they are.

\r\n

The researchers then moved on to career choices. They combed the records of the American Dental Association and the American Bar association looking for people named either Dennis, Denice, Dena, Denver, et cetera, or Lawrence, Larry, Laura, Lauren, et cetera. That is: were there more dentists named Dennis and lawyers named Lawrence than vice versa? Of the various statistical analyses they performed, most said yes, some at < .001 level. Other studies determined that there was a suspicious surplus of geologists named Geoffrey, and that hardware store owners were more likely to have names starting with 'H' compared to roofing store owners, who were more likely to have names starting with 'R'.

\r\n

Some other miscellaneous findings: people are more likely to donate to Presidential candidates whose names begin with the same letter as their own, people are more likely to marry spouses whose names begin with the same letter as their own, women are more likely to show name-preference effects than men (but why?), and batters with names beginning in 'K' are more likely than others to strike out (strikeouts being symbolized by a 'K' in the records).

\r\n

If you have any doubts about the validity of the research, I urge you to read the linked paper. It's a great example of researchers who go above and beyond the call of duty to eliminate as many confounders as possible.

\r\n

The name letter effect is a great addition to any list of psychological curiosities, but it does have some more solid applications. I often use it as my first example when I'm introducing the idea of subconscious biases to people, because it's clear, surprising, and has major real-world effects. It also tends to shut up people who don't believe there are subconscious influences on decision-making, and who are always willing to find some excuse for why a supposed \"bias\" could actually be an example of legitimate decision-making.

\r\n

And it introduces the concept of implicit egoism, the tendency to prefer something just because it's associated with you. It's one possible explanation for the endowment effect, and if it applies to my beliefs as strongly as to my personal details or my property, it's yet another mechanism by which opinions become calcified.

\r\n

This is also an interesting window onto the complex and important world of self-esteem. Jones, Pelham et al suggest that the name preference effect is either involved in or a byproduct of some sort of self-esteem regulatory system. They find that name preferences are most common among high self-esteem people who have just experienced threats to their self-esteem, almost as if it is a reactive way of saying \"No, you really are that great.\" I think an examination of how different biases interact with self-esteem would be a profitable direction for future research.

" } }, { "_id": "fGzPFwAosXXBcv5Jc", "title": "Controlling your inner control circuits", "pageUrl": "https://www.lesswrong.com/posts/fGzPFwAosXXBcv5Jc/controlling-your-inner-control-circuits", "postedAt": "2009-06-26T17:57:56.343Z", "baseScore": 53, "voteCount": 65, "commentCount": 159, "url": null, "contents": { "documentId": "fGzPFwAosXXBcv5Jc", "html": "

On the topic of: Control theory

\r\n

Yesterday, PJ Eby sent the subscribers of his mailing list a link to an article describing a control theory/mindhacking insight he'd had. With his permission, here's a summary of that article. I found it potentially life-changing. The article seeks to answer the question, \"why is it that people often stumble upon great self-help techniques or productivity tips, find that they work great, and then after a short while the techniques either become ineffectual or the people just plain stop using them anyway?\", but I found it to have far greater applicability than just that.

\r\n

Richard Kennaway already mentioned the case of driving a car as an example where the human brain uses control systems, and Eby mentioned another: ask a friend to hold their arm out straight, and tell them that when you push down on their hand, they should lower their arm. And what you’ll generally find is that when you push down on their hand, the arm will spring back up before they lower it... and the harder you push down on the hand, the harder the arm will pop back up! That's because the control system in charge of maintaining the arm's position will try to keep up the old position, until one consciously realizes that the arm has been pushed and changes the setting.

\r\n

Control circuits aren't used just for guiding physical sequences of actions, they also regulate the workings of our mind. A few hours before typing out a previous version of this post, I was starting to feel restless because I hadn't accomplished any work that morning. This has often happened to me in the past - if, at some point during the day, I haven't yet gotten started on doing anything, I begin to feel anxious and restless. In other words, in my brain there's a control circuit monitoring some estimate of \"accomplishments today\". If that value isn't high enough, it starts sending an error signal - creating a feeling of anxiety - in an attempt to bring that value into the desired range.
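Here is a toy version of the kind of circuit being described, for concreteness. The set point, gain, and variable names are all invented; the only point is the compare-to-a-reference-and-emit-an-error-signal structure.

```python
def accomplishment_controller(perceived_accomplishment, set_point=3.0, gain=1.0):
    # Crude proportional controller: the further the monitored value falls
    # below the reference, the stronger the error signal (felt as anxiety).
    error = set_point - perceived_accomplishment
    return max(0.0, gain * error)

print(accomplishment_controller(0.0))  # 3.0 -- morning with nothing done yet
print(accomplishment_controller(3.5))  # 0.0 -- set point met, the signal stops
```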

\r\n

The problem with this is that more often than not, that anxiety doesn't push me into action. Instead I become paralyzed and incapable of getting anything started. Eby proposes that this is because of two things: one, the control circuits are dumb and don't actually realize what they're doing, so they may actually take counter-productive action. Two, there may be several control circuits in the brain which are actually opposed to each other.

\r\n

Here we come to the part about productivity techniques often not working. We also have higher-level controllers - control circuits influencing other control circuits. Eby's theory is that many of us have circuits that try to prevent us from doing the things we want to do. When they notice that we've found a method to actually accomplish something we've been struggling with for a long time, they start sending an error signal... causing neural reorganization, eventually ending up at a stage where we don't use those productivity techniques anymore and solving the \"crisis\" of us actually accomplishing things. Moreover, these circuits are to a certain degree predictive, and they can start firing when they pick up on a behavior that only even possibly leads to success - that's when we hear about a great-sounding technique and for some reason never even try it. A higher-level circuit, or a lower-level one set up by the higher-level circuit, actively suppresses the \"let's try that out\" signals sent by the other circuits.
But why would we have such self-sabotaging circuits? This ties into Eby's more general theory of the hazards of some kinds of self-motivation. He uses the example of a predator who's chased a human up to a tree. The human, sitting on a tree branch, is in a safe position now, so circuits developed to protect his life send signals telling him to stay there and not to move until the danger is gone. Only if the predator actually starts climbing the tree does the danger become more urgent and the human is pushed to actively flee.

Eby then extends this example into a social environment. In a primitive, tribal culture, being seen as useless to the tribe could easily be a death sentence, so we evolved mechanisms to avoid giving the impression of being useless. A good way to avoid showing your incompetence is to simply not do the things you're incompetent at, or things which you suspect you might be incompetent at and that have a great associated cost for failure. If it's important for your image within the tribe that you do not fail at something, then you attempt to avoid doing that.

You might already be seeing where this is leading. The things many of us procrastinate on are exactly the kinds of things that are important to us. We're deathly afraid of the consequences of what might happen if we fail at them, so there are powerful forces in play trying to make us not work on them at all. Unfortunately, for beings living in modern society, this behavior is maladaptive and buggy. It leads to us having control circuits which try to keep us unproductive, and when they pick up on things that might make us more productive, they start suppressing our use of those techniques.

Furthermore, the control circuits are stupid. They are occasionally capable of being somewhat predictive, but they are fundamentally just doing some simple pattern-matching, oblivious to deeper subtleties. They may end up reacting to wholly wrong inputs. Consider the example of developing a phobia for a particular place, or a particular kind of environment. Something very bad happens to you in that place once, and as a result, a circuit is formed in your brain that's designed to keep you out of such situations in the future. Whenever it detects that you are in a place resembling the one where the incident happened, it starts sending error signals to get you away from there. Only that this is a very crude and unoptimal way of keeping you out of trouble - if a car hit you while you were crossing the road, you might develop a phobia for crossing the road. Needless to say, this is more trouble than it's worth.

Another common example might be a musician learning to play an instrument. Learning musicians are taught to practice their instrument in a variety of postures, for otherwise a flutist who's always played his flute sitting down may discover he can't play it while standing up! The reason is that while practicing, he's been setting up a number of control circuits designed to guide his muscles the right way. Those control circuits have no innate knowledge of which muscle postures are integral to a good performance, however. As a result, the flutist may end up with circuits that try to make sure he is sitting down when playing.

This kind of malcalibration extends to higher-level circuits as well. Eby writes:

\r\n
\r\n

I know this now, because in the last month or so, I’ve been struggling to identify my “top-level” master control circuits.

And you know what I found they were controlling for? Things like:

* Being “good”
* Doing the “right” thing
* “Fairness”

But don’t be fooled by how harmless or even “good” these phrases sound.

Because, when I broke them down to what subcontrollers they were actually driving, it turned out that “being good” meant “do things for others while ignoring your own needs and being resentful”!

“Fairness”, meanwhile, meant, “accumulate resentment and injustices in order to be able to justify being selfish later.”

And “doing the right thing” translated to, “don’t do anything unless you can come up with a logical justification for why it’s right, so you don’t get in trouble, and no-one can criticize you.”

Ouch!

Now, if you look at that list, nowhere on there is something like, “go after what I really want and make it happen”. Actually doing anything – in fact, even deciding to do anything! – was entirely conditional on being able to justify my decisions as “fair” or “right” or “good”, within some extremely twisted definitions of those words!

\r\n
\r\n

So that's the crux of the issue. We are wired with a multitude of circuits designed for controlling our behavior... but because those circuits are often stupid, they end up in conflict with each other, and end up monitoring values that don't actually represent the things they ought to.

While Eby provides few references and no peer-reviewed experimental work to support his case that motivation systems are controlled in this way, I find it to mesh very well with everything I know about the brain. I took the phobia example from a textbook on biological psychology, while the flutist example came from a lecture by a neuroscientist emphasizing the stupidity of the cerebellum's control systems. Building on systems that were originally developed to control motion and hacking them to also control higher behavior is a very evolution-like thing to do. We already develop control systems for muscle behavior starting from the time when we first learn to control our bodies as infants, so it's very plausible that we'd also develop such mechanisms for all kinds of higher cognition. The mechanism by which they work is also fundamentally very simple, making it easy for new circuits to form: a person ends up in an unpleasant situation, causing an emotional subsystem to flood the whole brain with negative feedback, leading pattern recognizers which were active at the time to start activating the same kind of negative feedback the next time they pick up on the same input. (At its simplest, it's probably a case of simple Hebbian learning.)

Furthermore, since reading his text, I have noticed several things in myself which could only be described as control circuits. After reading Overcoming Bias and Less Wrong for a long time, I've found myself noticing whenever I have a train of thought that seems to be indicative of certain kinds of cognitive biases. In retrospect, that is probably a control circuit that has developed to detect the general appearance of a biased thought and to alert me about it. The anxiety circuit I already mentioned. A closely related circuit is one that causes me to need plenty of time to accomplish whatever it is that I'm doing - if I only have a couple of hours before a deadline, I often freeze up and end up unable to do anything. This leads to me being at my most productive in the mornings, when I have a feeling of having the whole day to myself and of not being in any rush. That's easily interpreted as a circuit that looks at the remaining time and sends an alarm when the time runs low. Actually, the circuit in question is probably even stupider than that, as the feeling of not having any time is often tied only to what the clock says, not to the time when I'll be going to bed. If I get up at 2 PM and go to bed at 4 AM, I have just as much time as if I'd gotten up at 9 AM and gone to bed at 11 PM, but the circuit in question doesn't recognize this.

So, what can we do about conflicting circuits? Simply recognizing them for what they are is already a big step forward, one which I feel has already helped me overcome some of their effects. Some of them can probably be dismantled simply by identifying them, working out their purpose and deciding that it is unnecessary. (I suspect that this process might actually set up new circuits whose function is to counteract the signals sent by the harmful ones. Maybe. I'm not very sure of what the actual neural mechanism might be.) Eby writes:

\r\n
\r\n

So, you want to build Desire and Awareness by tuning in to the right qualities to perceive. Then, you need to eliminate any conflicts that come up.

Now, a lot of times, you can do this by simple negotiation with yourself. Just sit and write down all your objections or issues about something, and then go through them one at a time, to figure out how you can either work around the problem, or find another way to get your other needs met.

Of course, you have to enter this process in good faith; if you judge yourself for say, wanting lots of chocolate, and decide that you shouldn’t want it, that’s not going to work.

But it might work, to be willing to give up chocolate for a while, in order to lose weight. The key is that you need to actually imagine what it would be like to give it up, and then find out whether you can be “okay” with that.

Now, sadly, about 97% of the people who read this are going to take that last paragraph and go, “yeah, sure, I’m going to give up [whatever]”, but without actually considering what it would be like to do so.

And those people are going to fail.

And I kind of debated whether or not I should even mention this method here, because frankly, I don’t trust most people’s controllers any further than I can reprogram them (so to speak).

See, I know from bitter experience that my own controllers for things like “being smart” used to make me rationalize this sort of thing, skipping the actual mental work involved in a technique, because “clearly I’m smart enough not to need to do all that.”

And so I’d assume that just “thinking” about it was enough, without really going through the mental experience needed to make it work. So, most of the people who read this, are going to take that paragraph above where I explained the deep, dark, master-level mindhacking secret, and find a way to ignore it.

They’re going to say things like, “Is that all?” “Oh, I already knew that.” And they’re not going to really sit down and consider all the things that might conflict with what they say they want.

If they want to be wealthy, for example, they’re almost certainly not going to sit down and consider whether they’ll lose their friends by doing so, or end up having strained family relations. They’re not considering whether they’re going to feel guilty for making a lot of money when other people in the world don’t have any, or for doing it easily when other people are working so hard.

They’re not going to consider whether being wealthy or fit or confident will make them like the people they hate, or whether maybe they’re really only afraid of being broke!

But all of them will read everything I’ve just written, and assume it doesn’t apply to them, or that they’ve already taken all that into account.

Only they haven’t.

Because if they had, they would have already changed.

\r\n
\r\n

That's a pretty powerful reminder not to ignore your controllers. While you've been reading this, some controller that tries to keep you from doing things has probably already picked up on the excitement some emotional system might now be generating... meaning that you might be about to stumble upon a technique that might actually make you more productive... causing signals to be sent out to suppress attempts to even try it out. Simply acknowledging its existence isn't going to be enough - you need to actively think things out, identify different controllers within you, and dismantle them.

I feel I've managed to avoid the first trap, that of not doing anything even after becoming aware of the problem. I've been actively looking at different control circuits, some of which have plagued me for quite a long time, and I at least seem to have managed to overcome them. My worry is that there might be some high-level circuit which is even now coming online to prevent me from using this technique - to make me forget about the whole thing, or to simply not use it even though I know of it. It feels like the best way to counteract that is to try to consciously set up new circuits dedicated to the task of monitoring for the presence of new circuits, and alerting me to their presence. In other words, keep actively looking for anything that might be a mental control circuit, and teach myself to notice them.

\r\n

(And now, Eby, please post any kind of comment here so that we can vote it up and give you your fair share of this post's karma. :))

" } }, { "_id": "h7NkpER4Jo8BLWgPD", "title": "The Great Brain is Located Externally", "pageUrl": "https://www.lesswrong.com/posts/h7NkpER4Jo8BLWgPD/the-great-brain-is-located-externally", "postedAt": "2009-06-25T22:29:32.820Z", "baseScore": 32, "voteCount": 35, "commentCount": 53, "url": null, "contents": { "documentId": "h7NkpER4Jo8BLWgPD", "html": "

\"Dilbert

\n

How many of the things you \"know\" do you have memorized?

\n

Do you remember how to spell all of those words you let the spellcheck catch?  Do you remember what fraction of a teaspoon of salt goes into that one recipe, or would you look at the list of ingredients to be sure?  Do you remember what kinds of plastic they recycle in your neighborhood, or do you delegate that task to a list attached with a magnet to the fridge?

\n

If I asked you what day of the month it is today, would you know, or would you look at your watch/computer clock/the posting date of this post?

\n

Before I lost my Palm Pilot, I called it my \"external brain\".  It didn't really fit the description; with no Internet access, it mostly held my contact list, class schedule, and grocery list.  And a knockoff of Minesweeper.  Still, in a real enough sense, it remembered things for me.  The vast arena of knowledge at our fingertips in the era of constant computing has, ironically, brought it farther away.  It seems nearer: after all, now, if you are curious about Zanzibar, Wikipedia is a few keystrokes away.  Before the Internet, you'd probably have been looking at a trip to the library and a while wrestling with the card catalog; and that would be if you lived in an affluent, literate society.  If you didn't, good luck knowing Zanzibar exists in the first place!

\n

But if you were an illiterate random peasant farmer in some historical venue, and you needed to know the growing season of taro or barley or insert-your-favorite-staple-crop-here, Wikipedia would have been superfluous: you would already know it.  It would be unlikely that you would find a song lyrics website of any use, because all of the songs you'd care about would be ones you really knew, in the sense of having heard them sung by real people who could clarify the words on request, as opposed to the \"I think I heard half of this on the radio at the dentist's office last month\" sense.

\n

Everything you would need to know would be important enough to warrant - and keep - a spot in your memory.

\n

So in a sense, propositional knowledge is being gradually supplanted by the procedural.  You need only know how to find information, to be able to use it after a trivial delay.  This requires some snippet of propositional data - to find a song lyric, you need a long enough string that you won't turn up 99% noise when you try to Google it! - but mostly, it's a skill, not a fact, that you need to act like you knew the fact.

\n

It's not clear to me whether this means that we should be alarmed and seek to hone our factual memories... or whether we should devote our attention to honing our Google-fu, as our minds gradually become server-side operations.

" } }, { "_id": "eyjeuQc2F95yJXJo6", "title": "Coming Out", "pageUrl": "https://www.lesswrong.com/posts/eyjeuQc2F95yJXJo6/coming-out", "postedAt": "2009-06-25T16:16:47.650Z", "baseScore": 12, "voteCount": 15, "commentCount": 12, "url": null, "contents": { "documentId": "eyjeuQc2F95yJXJo6", "html": "

It's 11 p.m. on a Friday night and I'm heading home from work, shivering in the cold wind. My mind's divided on what to do after getting home: practice my guitar stuff and go to sleep, or head to a nightclub and get the fatigue out of my system? Given two nearly identical emotional assessments and honestly not knowing in advance which to pick, I suddenly recall LessWrong and decide to use logic. Trying to apply logic in real life is an unfamiliar sensation but I prime my mind for recognizing accurate arguments and drift away for a moment, a proven technique from my mathematician days...

\n

By the time I reach my front door, the solution for discriminating between two outcomes of equal utility has arrived and it's a logical slam dunk. Tonight is Friday night. There will be ample opportunity to practice music tomorrow and the day after that. So get dressed and go dance now.

\n

It worked.

\n

 

\n

You might be tempted to dismiss my little exercise as rationalization and I can't really convince you otherwise. But the specific tool used for rationalization does matter. What if I'd made that decision based on some general ethical principle or a pithy quotation I'd heard somewhere? Success or failure would have prompted an equally ill-specified update of my held beliefs, the same way they have meandered aimlessly all my life. A single human lifetime sees a precious few belief-changing events — definitely not enough bits to select an adequate emotional framework, unless you started out with a good one. Not so with logic. Any vague verbal principle, however much you admire it, is always allowed to give way under you if you base your decisions on it... but logic is the ground floor.

\n

The above episode has hit me hard for reasons I can't quite verbalize. For the past few days I've been obsessively going over Eliezer's old posts. No longer viewing them in the mindset of \"articulate dissent\": this time I went searching for useful stuff. Turns out that most of his concrete recommendations are either way beyond my current abilities, or solving problems I haven't reached yet. Forget expected utility calculations, I don't even have the basics down!

\n

Logic is the ground floor. Disregard the defense of biases. If you find yourself falling, use logic.

\n

I remember a quote from functional programming researcher Frank Atanassow about his formal semantics enlightenment, similar in spirit:

\n
\n

When I graduated, I was as pure a corporate hacker as you can imagine: I wanted to be a corporate salaryman; I wanted to \"architecture\" object-oriented programs in the Gang-of-Four style; and above all, I was convinced that CS is a completely new field unto itself and had no relation at all to all that mathematical crap they were trying to stuff down our throats in school...

\n

What happened next, I remember very clearly. I bought a book, \"Lambda Calculus\" by Chris Hankin. It's a thin (<200 pages) yellow book, mostly about untyped lambda-calculus. It has a fair amount of equational stuff in it, but I took it on my trip to Korea (to get my Japanese visa---I was working in Tokyo, and you have to leave the country once for some reason to get a working visa) and was amazed that I could understand it...

\n

After that, I was able to read most of the programming language theory literature, and I read TONS, and tons of it. I seriously couldn't get enough, and what I marvelled at was that you could actually make indisputable statements about programs this way, provided your language had a formal semantics. It was no longer a fuzzy thing, you see; it relieved me of a great burden, in a way, because I no longer had to subscribe to a \"methodology\" which relied on some unsubstantiated relation between real world-\"objects\" and programs. I came to understand that certain things have to be true, provably, incontrovertibly and forever, if your programs can be translated easily into mathematical terms. This was a huge step for me.

\n
\n

 

\n

There's much work ahead, but at least now I know what to do.

" } }, { "_id": "vicBmmMgMKnf3apKK", "title": "Requests to the Right Ear Are More Successful Than to the Left", "pageUrl": "https://www.lesswrong.com/posts/vicBmmMgMKnf3apKK/requests-to-the-right-ear-are-more-successful-than-to-the", "postedAt": "2009-06-25T12:09:18.790Z", "baseScore": 4, "voteCount": 5, "commentCount": 2, "url": null, "contents": { "documentId": "vicBmmMgMKnf3apKK", "html": "

Talk into the right ear and you send your words into a slightly more amenable part of the brain.

\n

I urge you to try this at home.

" } }, { "_id": "58RrdgwdzYRN6Co9b", "title": "Richard Dawkins TV - Baloney Detection Kit video", "pageUrl": "https://www.lesswrong.com/posts/58RrdgwdzYRN6Co9b/richard-dawkins-tv-baloney-detection-kit-video", "postedAt": "2009-06-25T00:27:23.325Z", "baseScore": 2, "voteCount": 7, "commentCount": 35, "url": null, "contents": { "documentId": "58RrdgwdzYRN6Co9b", "html": "

See this great little rationalist video here.

\n
\n

Well, if I am pro-business, I have to be skeptical about global warming. Wait! How about just following the data?

\n
" } }, { "_id": "CQQS8CvLH3Y7kJbEQ", "title": "Lie to me?", "pageUrl": "https://www.lesswrong.com/posts/CQQS8CvLH3Y7kJbEQ/lie-to-me-1", "postedAt": "2009-06-24T21:56:15.638Z", "baseScore": -1, "voteCount": 12, "commentCount": 32, "url": null, "contents": { "documentId": "CQQS8CvLH3Y7kJbEQ", "html": "

I used to think that, given two equally capable individuals, the person with more true information can always do at least as well as the other person. And hence, one can only gain from having true information. There is one implicit assumption that makes this line of reasoning not true in all cases: we are not perfectly rational agents; our mind isn’t stored in a vacuum, but in a Homo sapiens brain. There are certain false beliefs that benefit you by exploiting your primitive mental warehouse, e.g., self-fulfilling prophecies.

\n

Despite the benefits, adopting false beliefs is an irrational practice. If people never acquire the maps that correspond the best to the territory, they won’t have the most accurate cost-benefit analysis for adopting false beliefs. Maybe, in some cases, false beliefs make you better off. The problem is you'll have a wrong or sub-optimal cost-benefit analysis, unless you first adopt reason.

\n

Also, it doesn’t make sense to say that the rational decision could be to “have a false belief” because in order to make that decision, you would have to compare that outcome against “having a true belief.” But in order for a false belief to work, you must truly believe in it — you cannot deceive yourself into believing the false belief after knowing the truth! It’s like figuring out that taking a placebo leads to the best outcome, yet knowing it’s a placebo no longer makes it the best outcome.

\n

Clearly, it is not in your best interest to choose to believe in a falsity—but what if someone else did the choosing? Can’t someone whose advice you rationally trust be the decider of whether to give you false information or not (e.g. a doctor deciding whether you receive a placebo or not)? They could perform a cost-benefit analysis without diluting the effects of the false belief. We only want to know the truth, but prefer to be unknowingly lied to in some cases.

\n

Which brings me to my question: do we program an AI to only tell us the truth or to lie when the AI believes (with high certainty) the lie will lead us to a net benefit over our expected lifetime?

\n

Added: Keep in mind that knowledge of the truth, even for a truth-seeker, is finite in value. The AI can believe that the benefit of a lie would outweigh a truth-seeker's cost of being lied to. So unless someone values the truth above anything else (which I highly doubt), would a truth-seeker ever choose only to be told the truth from the AI?

" } }, { "_id": "nhfHxMbapMa5GacGA", "title": "Guilt by Association", "pageUrl": "https://www.lesswrong.com/posts/nhfHxMbapMa5GacGA/guilt-by-association", "postedAt": "2009-06-24T17:29:10.462Z", "baseScore": 1, "voteCount": 8, "commentCount": 38, "url": null, "contents": { "documentId": "nhfHxMbapMa5GacGA", "html": "

The formal concept of the fallacious argument was born as the twin of logic itself.  When the ancient Greeks first began to systematically examine the natural arguments people made as they sought to demonstrate the truth of propositions, they noted that certain types of arguments were vulnerable to counterexamples while others were not.  The vulnerable were not true - when it was claimed that they justified a conclusion they could not rule out the alternative - and so were identified as fallacious.

\n

Although the validity of logical arguments can be determined through logic, that doesn't particularly distinguish one fallacy from another.  It is a curious fact that, despite this, some fallacies are more frequently made by human beings than others.  Much more.

\n

For example, \"If P, then Q.  P.  Therefore, Not-Q.\" is just as basic and elemental an error as \"If P, then Q.  Q.  Therefore, P.\" is.  But the first fallacy is hardly ever found (humans being what they are, there's probably no mistake within our reach that is never made) while the second is extraordinarily common.

\n

It is in fact generally true that we often confuse a unidirectional implication with the bidirectional.  If something implies another thing, we leap to the conclusion that the second also implies the first, that the connection is equally strong each way, even though it is fairly trivial to demonstrate that this isn't necessarily the case.  This error seems to be inherent to human intuition, as it occurs across contexts and subjects quite regularly, even when people are aware that it's logically invalid; only careful examination counters this tendency.
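The asymmetry is easy to verify mechanically. A brute-force check of the four truth assignments (a quick sketch, with material implication spelled out) shows that the common form admits a counterexample while modus ponens does not.

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material implication

assignments = list(product([True, False], repeat=2))

# 'If P then Q. Q. Therefore P.' -- can the premises hold while P is false?
print([(p, q) for p, q in assignments if implies(p, q) and q and not p])
# [(False, True)] -- a counterexample exists, so the form is invalid

# 'If P then Q. P. Therefore Q.' -- the same check for the valid form
print([(p, q) for p, q in assignments if implies(p, q) and p and not q])
# [] -- no counterexample
```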

\n

Much later, Sigmund Freud began to identify ways that people would deny assertions that they found emotionally threatening, what we now call 'psychological defense mechanisms'.  The flaws in Freud's work as a whole are not directly relevant to this discussion and are beyond the scope of this site in any case.  Suffice it to say that therapists and psychologists do not consider his theories to be either true or useful, that they do consider them to be unscientific and a self-reinforcing belief system, and that many of the concepts which he introduced and have been taken up into the culture at large are invalid.  Not all of his work is so flawed, though - particularly his early ideas.

\n

There is a peculiar relationship between the nature of those defense mechanisms and the intuitive fallacies.

\n

When confronted with a contradiction in their emotionally-charged arguments, people who normally reasoned quite appropriately would suddenly begin to fall into fallacies.  What's more, they would be unable to see the errors in their reasoning.  Even more extraordinarily, they would often reach conclusions which were superficially related to the correct ones, but which were applied to the wrong concepts, situations, and individuals.  In projection, for example, motives and traits belonging to the patients are instead asserted to belong to others or even the world itself; properties within the psyche are \"projected\" outward.  So when a therapist demonstrated to a patient that some of their beliefs were incompatible or their arguments were contradictory, the patient might assert that the therapist was the one who had the irrational concern or obsession.

\n

In such cases it seems clear that there is an awareness of some kind that the unpleasant conclusion must be reached, but not of where the property must be attributed.  Accusing the therapist of possessing the unacceptable property seems to reduce tension of some sort - it's a relief that people actively seek out and will vehemently, even violently, defend.

\n

Guilt, hate, fear, forbidden joys and loves - there are countless ways people will deny that they possess them.  But they all tend to follow into certain predictable patterns, as the wild diversity of snowflakes still showcases repeating and similar forms.

\n

Why is this the case?  Ultimately, it took research into concept formation before psychology could really produce an answer to that question.

\n

Next time:  associational thought and the implications for rationality.

" } }, { "_id": "XrKpemoWiZ2w9HuZB", "title": "The Monty Maul Problem", "pageUrl": "https://www.lesswrong.com/posts/XrKpemoWiZ2w9HuZB/the-monty-maul-problem", "postedAt": "2009-06-24T05:30:16.760Z", "baseScore": 7, "voteCount": 15, "commentCount": 5, "url": null, "contents": { "documentId": "XrKpemoWiZ2w9HuZB", "html": "

In his Coding Horror blog, Jeff Atwood writes about the Monty Hall Problem and some variants. The classic problem presents a situation in which the game show host allows a contestant to choose one of three doors, one of which opens to reveal a prize while the other two reveal goats. The host then opens one of the other doors, reliably choosing one that has a goat, and invites the contestant to switch to the remaining unopened door. The problem is to determine the probability of winning the prize by switching versus staying. The variants deal with cases in which the host does not reliably choose a door with a goat, but happens to do so.

\n

Jeff cites Monty Hall, Monty Fall, Monty Crawl (PDF) by Jeff Rosenthal, which explains why the variants have different probabilities in terms of the \"Proportionality Principle\", which the appendix acknowledges to be a special case of Bayes' Theorem.

\n

One of Jeff's anonymous commenters presented the Monty Maul Problem:

\n
\n

Hypothetical Situation:

\n

The Monty Maul problem. There are 1 million doors. You pick one, and the shows host goes on a bloodrage fueled binge of insane violence, knocking open doors at random with no knowledge of which door has the car. He knocks open 999,998 doors, leaving your door and one unopened door. None of the opened doors contains the car.

\n

Are your odds of winning if you switch still 50/50, as outlined by the linked Rosenthal paper? It seems counter-intuitive even for people who've wrapped their head around the original problem.

\n
\n

If you take as absolute the problem's statement that the host is knocking doors open at random, then yes: the fact that only goats were revealed is strong evidence that only goats were available to reveal because you picked the door with the prize, and when this is combined with the low prior probability that you picked the prize door, the two unopened doors end up equally likely to hold the prize.
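A quick Monte Carlo check makes the point concrete. Three doors are used instead of a million so that the conditioning event (a random host happening to reveal only goats) isn't vanishingly rare; the deliberate-host version is included for contrast. This is only a sketch, and the function names are mine.

```python
import random

def one_game(host_knows):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    if host_knows:
        opened = random.choice([d for d in doors if d != pick and d != prize])
    else:
        opened = random.choice([d for d in doors if d != pick])  # may hit the prize
    switch_door = next(d for d in doors if d not in (pick, opened))
    return opened != prize, switch_door == prize  # (goat shown?, switching wins?)

def switch_win_rate(host_knows, trials=200_000):
    results = [one_game(host_knows) for _ in range(trials)]
    wins = [won for goat_shown, won in results if goat_shown]
    return sum(wins) / len(wins)

print(switch_win_rate(True))   # ~0.667: host knowingly avoided the prize
print(switch_win_rate(False))  # ~0.5:   host just happened to show a goat
```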

\n

However, the fact that only goats were revealed is also strong evidence that the host deliberately avoided opening the door with the prize, and therefore switching is a winning strategy. After all, the probability of this happening if the host really is choosing doors randomly is 2 in a million, but it is guaranteed if the host deliberately opened only doors with goats.

\n

Note that this principle still applies in variants with fewer doors. Unless there is an actual penalty for switching doors (which could happen if the host only sometimes offers the opportunity to switch, and is more likely to do so when the contestant chooses the winning door), any uncertainty about the host choosing doors randomly implies that it is a good strategy to switch.

" } }, { "_id": "xgicQnkrdA5FehhnQ", "title": "The Domain of Your Utility Function", "pageUrl": "https://www.lesswrong.com/posts/xgicQnkrdA5FehhnQ/the-domain-of-your-utility-function", "postedAt": "2009-06-23T04:58:55.550Z", "baseScore": 42, "voteCount": 37, "commentCount": 99, "url": null, "contents": { "documentId": "xgicQnkrdA5FehhnQ", "html": "

Unofficial Followup to: Fake Selfishness, Post Your Utility Function

\n

A perception-determined utility function is one which is determined only by the perceptual signals your mind receives from the world; for instance, pleasure minus pain. A non-instance would be the number of living humans. There's an argument in favor of perception-determined utility functions which goes like this: clearly, the state of your mind screens off the state of the outside world from your decisions. Therefore, the argument to your utility function is not a world-state, but a mind-state, and so, when choosing between outcomes, you can only judge between anticipated experiences, and not external consequences. If one says, \"I would willingly die to save the lives of others,\" the other replies, \"that is only because you anticipate great satisfaction in the moments before death - enough satisfaction to outweigh the rest of your life put together.\"

\n

Let's call this dogma perceptually determined utility. PDU can be criticized on both descriptive and prescriptive grounds. On descriptive grounds, we may observe that it is psychologically unrealistic for a human to experience a lifetime's worth of satisfaction in a few moments. (I don't have a good reference for this, but) I suspect that our brains count pain and joy in something like unary, rather than using a place-value system, so it is not possible to count very high.

\n

The argument I've outlined for PDU is prescriptive, however, so I'd like to refute it on such grounds. To see what's wrong with the argument, let's look at some diagrams. Here's a picture of you doing an expected utility calculation - using a perception-determined utility function such as pleasure minus pain.

\n

\"\"

\n

Here's what's happening: you extrapolate several (preferably all) possible futures that can result from a given plan. In each possible future, you extrapolate what would happen to you personally, and calculate the pleasure minus pain you would experience. You call this the utility of that future. Then you take a weighted average of the utilities of each future — the weights are probabilities. In this way you calculate the expected utility of your plan.

\n

But this isn't the most general possible way to calculate utilities.

\n

\"\"

\n

Instead, we could calculate utilities based on any properties of the extrapolated futures — anything at all, such as how many people there are, how many of those people have ice cream cones, etc. Our preferences over lotteries will be consistent with the Von Neumann-Morgenstern axioms. The basic error of PDU is to confuse the big box (labeled \"your mind\") with the tiny boxes labeled \"Extrapolated Mind A,\" and so on. The inputs to your utility calculation exist inside your mind, but that does not mean they have to come from your extrapolated future mind.
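
\n

To make the contrast concrete, here is a minimal sketch in code (the futures, probabilities and payoffs are invented): the same extrapolated futures, scored once by a perception-determined utility (my own pleasure minus pain) and once by an arbitrary world-property (how many people end up happy).

```python
# A toy sketch of the two scoring rules above; all numbers are invented.

futures = [
    # (probability, my_pleasure_minus_pain, number_of_happy_people)
    (0.6,  5.0,  1000),
    (0.3, -2.0, 50000),
    (0.1,  1.0,     0),
]

def pdu_utility(future):
    _, my_experience, _ = future
    return my_experience          # perception-determined: only my anticipated experience counts

def world_utility(future):
    _, _, happy_people = future
    return happy_people           # any property of the extrapolated future is fair game

def expected_utility(futures, utility):
    return sum(future[0] * utility(future) for future in futures)

print(expected_utility(futures, pdu_utility))    # the plan scored by anticipated experience
print(expected_utility(futures, world_utility))  # the same plan scored by a world-property
```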

\n

So that's it! You're free to care about family, friends, humanity, fluffy animals, and all the wonderful things in the universe, and decision theory won't try to stop you — in fact, it will help.

\n

Edit: Changed \"PD\" to \"PDU.\"

" } }, { "_id": "PnhpMqMP75Dxpvar5", "title": "Shane Legg on prospect theory and computational finance", "pageUrl": "https://www.lesswrong.com/posts/PnhpMqMP75Dxpvar5/shane-legg-on-prospect-theory-and-computational-finance", "postedAt": "2009-06-21T17:57:09.235Z", "baseScore": 16, "voteCount": 16, "commentCount": 9, "url": null, "contents": { "documentId": "PnhpMqMP75Dxpvar5", "html": "
\n

People are not rational expected utility maximisers.  When we have to make decisions, all sorts of cognitive biases and distortions come into play.  Seminal work in this area was done by Kahneman and Tversky.  They produced a model of human decision making known as prospect theory, work that Kahneman later won a Nobel prize ...

\n

... I looked at these investors and thought, “Hey, they’re just like reinforcement learning agents. No big deal. If I want to know what investors with probability weighting and a curved value function do, I can just brute force compute their optimal policy by writing down their Bellman equation and using dynamic programming. Easy!” It was a mystery to me why, seemingly, nobody else was doing that. So off I went to build software to do just this, starting with a simple Merton model…

\n

... When we fired up my simulator and gave this distribution to an investor that had probability weighting: the investor took one look at that scary negative tail and didn’t want to invest in the stock. This is exactly what the model should predict.  In short, we took realistic stock returns, and presented this to an investor with a realistic decision making process complete with a bunch of parameters that have been empirically estimated by others in previous work, and what we got out the other end was realistic investor behaviour!

\n
\n

Read the whole article here at Vetta Project.
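
\n

As a minimal illustration of the 'probability weighting and a curved value function' machinery described above (a sketch only, not Legg's simulator: it uses the original two-outcome form of prospect theory with roughly the commonly cited Tversky-Kahneman parameter estimates and an invented gamble, whereas his code solves a full Bellman equation over a Merton model):

```python
def weight(p, gamma=0.65):
    # Probability weighting: small probabilities get overweighted.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def value(x, alpha=0.88, lam=2.25):
    # Curved value function over gains and losses, steeper for losses.
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def prospect_value(gain, p_gain, loss, p_loss):
    return weight(p_gain) * value(gain) + weight(p_loss) * value(loss)

# Expected value of this gamble is +2, but the rare large loss, once
# overweighted and scaled by loss aversion, drives the prospect value negative:
print(prospect_value(gain=+7.0, p_gain=0.95, loss=-93.0, p_loss=0.05))
```

The gamble has positive expected value, yet the prospect value comes out negative - the 'scary negative tail' effect in miniature.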

" } }, { "_id": "eMSoo6izTTrL9j6iZ", "title": "Nonparametric Ethics", "pageUrl": "https://www.lesswrong.com/posts/eMSoo6izTTrL9j6iZ/nonparametric-ethics", "postedAt": "2009-06-20T11:31:59.758Z", "baseScore": 40, "voteCount": 35, "commentCount": 60, "url": null, "contents": { "documentId": "eMSoo6izTTrL9j6iZ", "html": "

(Inspired by a recent conversation with Robin Hanson.)

\n

Robin Hanson, in his essay on \"Minimal Morality\", suggests that the unreliability of our moral reasoning should lead us to seek simple moral principles:

\n
\n

\"In the ordinary practice of fitting a curve to a set of data points, the more noise one expects in the data, the simpler a curve one fits to that data.  Similarly, when fitting moral principles to the data of our moral intuitions, the more noise we expect in those intuitions, the simpler a set of principles we should use to fit those intuitions.  (This paper elaborates.)\"

\n
\n

In \"the limit of expecting very large errors of our moral intuitions\", says Robin, we should follow an extremely simple principle - the simplest principle we can find that seems to compress as much morality as possible.  And that principle, says Robin, is that it is usually good for people to get what they want, if no one else objects.

\n

Now I myself carry on something of a crusade against trying to compress morality down to One Great Moral Principle.  I have developed at some length the thesis that human values are, in actual fact, complex, but that numerous biases lead us to underestimate and overlook this complexity.  From a Friendly AI perspective, the word \"want\" in the English sentence above is a magical category.

\n

But Robin wasn't making an argument in Friendly AI, but in human ethics: he's proposing that, in the presence of probable errors in moral reasoning, we should look for principles that seem simple to us, to carry out at the end of the day.  The more we distrust ourselves, the simpler the principles.

\n

This argument from fitting noisy data is a kind of logic that can apply even when you have prior reason to believe the underlying generator is in fact complicated.  You'll still get better predictions from the simpler model, because it's less sensitive to noise.

\n

Even so, my belief that human values are in fact complicated, leads me to two objections and an alternative proposal:

\n

The first objection is that we do, in fact, have enough data to support moral models that are more complicated than a small set of short English sentences.  If you have a thousand data points, even noisy data points, it may be a waste of evidence to try to fit them to a straight line, especially if you have prior reason to believe the true generator is not linear.

\n

And my second fear is that people underestimate the complexity and error-proneness of the reasoning they do to apply their Simple Moral Principles.  If you try to reduce morality to the Four Commandments, then people are going to end up doing elaborate, error-prone rationalizations in the course of shoehorning their real values into the Four Commandments.

\n

But in the ordinary practice of machine learning, there's a different way to deal with noisy data points besides trying to fit simple models.  You can use nonparametric methods.  The classic example is k-nearest-neighbors:  To predict the value at a new point, use the average of the 10 nearest points previously observed.

\n

A line has two parameters - slope and intercept; to fit a line, we try to pick values for the slope and intercept that well-match the data.  (Minimizing squared error corresponds to maximizing the likelihood of the data given Gaussian noise, for example.)  Or we could fit a cubic polynomial, and pick four parameters that best-fit the data.

\n

But the nearest-neighbors estimator doesn't assume a particular shape of underlying curve - not even that the curve is a polynomial.  Technically, it doesn't even assume continuity.  It just says that we think that the true values at nearby positions are likely to be similar.  (If we furthermore believe that the underlying curve is likely to have continuous first and second derivatives, but don't want to assume anything else about the shape of that curve, then we can use cubic splines to fit an arbitrary curve with a smoothly changing first and second derivative.)
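
\n

As a minimal illustration of the contrast (a sketch with invented data; NumPy is assumed available), the same noisy sample can be fit with a two-parameter line or estimated with a 10-nearest-neighbour average:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.5 * x + rng.normal(0, 0.4, x.size)   # complicated truth plus noise

# Parametric: fit slope and intercept by least squares.
slope, intercept = np.polyfit(x, y, 1)
def line(x_new):
    return slope * x_new + intercept

# Nonparametric: average the k nearest previously observed points.
def knn(x_new, k=10):
    idx = np.argsort(np.abs(x - x_new))[:k]
    return y[idx].mean()

for x_new in (2.0, 5.0, 8.0):
    truth = np.sin(x_new) + 0.5 * x_new
    print(x_new, round(truth, 2), round(float(line(x_new)), 2), round(float(knn(x_new)), 2))
```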

\n

And in terms of machine learning, it works.  It is done rather less often in science papers - for various reasons, some good, some bad; e.g. academics may prefer models with simple extractable parameters that they can hold up as the triumphant fruits of their investigation:  Behold, this is the slope!  But if you're trying to win the Netflix Prize, and you find an algorithm that seems to do well by fitting a line to a thousand data points, then yes, one of the next things you try is substituting some nonparametric estimators of the same data; and yes, this often greatly improves the estimates in practice.  (Added:  And conversely there are plenty of occasions where ridiculously simple-seeming parametric fits to the same data turn out to yield surprisingly good predictions.  And lots of occasions where added complexity for tighter fits buys you very little, or even makes predictions worse.  In machine learning this is usually something you find out by playing around, AFAICT.)

\n

It seems to me that concepts like equality before the law, or even the notion of writing down stable laws in the first place, reflect a nonparametric approach to the ethics of error-prone moral reasoning.

\n

We don't suppose that society can be governed by only four laws.  In fact, we don't even need to suppose that the 'ideal' morality (obtained as the limit of perfect knowledge and reflection, etc.) would in fact subject different people and different occasions to the same laws.  We need only suppose that we believe, a priori, that similar moral dilemmas are likely ceteris paribus to have similar resolutions, and that moral reasoning about adjustment to specific people is highly error-prone - that, given unlimited flexibility to 'perfectly fit' the solution to the person, we're likely to favor our friends and relatives too much.  (And not in an explicit, internally and externally visible way, that we could correct just by having a new rule not to favor friends and relatives.)

\n

So instead of trying to recreate, each time, the judgment that is the perfect fit to the situation and the people, we try to use the ethical equivalent of a cubic spline - have underlying laws that are allowed to be complicated, but have to be written down for stability, and are supposed to treat neighboring points similarly.

\n

Nonparametric ethics says:  \"Let's reason about which moral situations are at least rough neighbors so that an acceptable solution to one should be at least mostly-acceptable to another; and let's reason about where people are likely to be highly biased in their attempt to adjust to specifics; and then, to reduce moral error, let's enforce similar resolutions across neighboring cases.\"  If you think that good moral codes will treat different people similarly, and/or that people are highly biased in how they adjust their judgments to different people, then you will come up with the ethical solution of equality before the law.

\n

Now of course you can still have laws that are too complicated, and that try to sneak in too much adaptation to particular situations.  This would correspond to a nonparametric estimator that doesn't smooth enough, like using 1-nearest-neighbor instead of 10-nearest-neighbors, or like a cubic spline that tried to exactly fit every point without trying to minimize the absolute value of third derivatives.

\n

And of course our society may not succeed at similarly treating different people in similar situations - people who can afford lawyers experience a different legal system.

\n

But if nothing else, coming to grips with the concept of nonparametric ethics helps us see the way in which our society is failing to deal with the error-proneness of its own moral reasoning.

\n

You can interpret a fair amount of my coming-of-age as my switch from parametric ethics to nonparametric ethics - from the pre-2000 search for simple underlying morals and my attempts to therefore reject values that seemed complicated; to my later acceptance that my values were actually going to be complicated, and that both I and my AI designs needed to come to terms with that.  Friendly AI can be viewed as the problem of coming up with - not the Three Simple Laws of Robotics that are all a robot needs - but rather a regular and stable method for learning, predicting, and renormalizing human values that are and should be complicated.

" } }, { "_id": "GGEoygaLHvZCg5LBi", "title": "ESR's comments on some EY:OB/LW posts", "pageUrl": "https://www.lesswrong.com/posts/GGEoygaLHvZCg5LBi/esr-s-comments-on-some-ey-ob-lw-posts", "postedAt": "2009-06-20T00:16:29.923Z", "baseScore": 7, "voteCount": 12, "commentCount": 16, "url": null, "contents": { "documentId": "GGEoygaLHvZCg5LBi", "html": "

Eric S. Raymond's comments on some of my Overcoming Bias posts.

\n

In his reply to my To Lead, You Must Stand Up, he writes:

\n
\n

\"I think your exhortations here are nearly useless. Experience I’ve collected over the last ten years suggests to me that the kind of immunity to stage fright you and I have is a function of basic personality type at the neurotransmitter-balance level, and not really learnable by most people.\"

\n
\n

This is a particularly interesting observation if combined with Hanson's hypothesis that people choke to submit.

\n

\"I disagree with The Futility of Emergence,\" says ESR.  Yea, many have said this to me.  And they go on to say:  Emergence has the useful meaning that...  And it's a different meaning every time.  In ESR's case it's:

\n
\n

\"The word 'emergent' is a signal that we believe a very specific thing about the relationship between 'neurons firing' and 'intelligence', which is that there is no possible account of intelligence in which the only explanatory units are neurons or subsystems of neurons.\"

\n
\n

Let me guess, you think the word \"emergence\" means something useful but that's not exactly it, although ESR's definition does aim in the rough general direction of what you think is the right definition...

\n

So-called \"words\" like this should not be actually spoken from one human to another.  It is tempting fate.  It would be like trying to have a serious discussion between two theologians if both of them were allowed to say the word \"God\" directly, instead of always having to say whatever they meant by the word.

" } }, { "_id": "ymRZAN6c2sozFJZvm", "title": "Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation", "pageUrl": "https://www.lesswrong.com/posts/ymRZAN6c2sozFJZvm/cascio-in-the-atlantic-more-on-cognitive-enhancement-as", "postedAt": "2009-06-18T15:09:57.954Z", "baseScore": 23, "voteCount": 21, "commentCount": 92, "url": null, "contents": { "documentId": "ymRZAN6c2sozFJZvm", "html": "

Jamais Cascio writes in The Atlantic:

\n
\n

Pandemics. Global warming. Food shortages. No more fossil fuels. What are humans to do? The same thing the species has done before: evolve to meet the challenge. But this time we don’t have to rely on natural evolution to make us smart enough to survive. We can do it ourselves, right now, by harnessing technology and pharmacology to boost our intelligence. Is Google actually making us smarter? ...

\n

 ... Modafinil isn’t the only example; on college campuses, the use of ADD drugs (such as Ritalin and Adderall) as study aids has become almost ubiquitous. But these enhancements are primitive. As the science improves, we could see other kinds of cognitive-modification drugs that boost recall, brain plasticity, even empathy and emotional intelligence. They would start as therapeutic treatments, but end up being used to make us “better than normal.”

\n
\n

Read the whole article here.

\n

This relates to cognitive enhancement as existential risk mitigation, where Anders Sandberg wrote:

\n
\n

Would it actually reduce existential risks? I do not know. But given correlations between long-term orientation, cooperation and intelligence, it seems likely that it might help not just to discover risks, but also in ameliorating them. It might be that other noncognitive factors like fearfulness or some innate discounting rate are more powerful.

\n
\n

The main criticisms of this idea generated in the Less Wrong comments were:

\n
\n

The problem is not that people are stupid. The problem is that people simply don't give a damn. If you don't fix that, I doubt raising IQ will be anywhere near as helpful as you may think. (Psychohistorian)

\n
\n
\n

Yes, this is the key problem that people don't really want to understand. (Robin Hanson)

\n
\n
\n

Making people more rational and aware of cognitive biases material would help much more (many people)

\n
\n

These criticisms really boil down to the same thing: people love their cherished falsehoods! Of course, I cannot disagree with this statement. But it seems to me that smarter people have a lower tolerance for making utterly ridiculous claims in favour of their cherished falsehood, and will (to some extent) be protected from believing silly things that make them (individually) feel happier, but are highly unsupported by evidence. Case in point: religion. This study1 states that

\n
\n

Evidence is reviewed pointing to a negative relationship between intelligence and religious belief in the United States and Europe. It is shown that intelligence measured as psychometric g is negatively related to religious belief. We find that in a sample of 137 countries the correlation between national IQ and disbelief in God is 0.60.

\n
\n

Many people in the comments made the claim that making people more intelligent will, due to human self-deceiving tendencies, make people more deluded about the nature of the world. The data concerning religion detracts support from this hypothesis. There is also direct evidence to show that a whole list of human cognitive biases are more likely to be avoided by being more intelligent - though far from all (perhaps even far from most?) of them. This paper2 states:

\n
\n

In a further experiment, the authors nonetheless showed that cognitive ability does correlate with the tendency to avoid some rational thinking biases, specifically the tendency to display denominator neglect, probability matching rather than maximizing, belief bias, and matching bias on the 4-card selection task. The authors present a framework for predicting when cognitive ability will and will not correlate with a rational thinking tendency.

\n
\n

Anders Sandberg also suggested the following piece of evidence3 in favour of the hypothesis that increased intelligence leads to more rational political decisions:

\n
\n

Political theory has described a positive linkage between education, cognitive ability and democracy. This assumption is confirmed by positive correlations between education, cognitive ability, and positively valued political conditions (N=183−130). Longitudinal studies at the country level (N=94−16) allow the analysis of causal relationships. It is shown that in the second half of the 20th century, education and intelligence had a strong positive impact on democracy, rule of law and political liberty independent from wealth (GDP) and chosen country sample. One possible mediator of these relationships is the attainment of higher stages of moral judgment fostered by cognitive ability, which is necessary for the function of democratic rules in society. The other mediators for citizens as well as for leaders could be the increased competence and willingness to process and seek information necessary for political decisions due to greater cognitive ability. There are also weaker and less stable reverse effects of the rule of law and political freedom on cognitive ability.

\n
\n

Thus the hypothesis that increasing people's intelligence will make them believe fewer falsehoods and will make them vote for more effective government has at least two pieces of empirical evidence on its side.

\n

 

\n

 

\n
\n

1. Average intelligence predicts atheism rates across 137 nations, Richard Lynn, John Harvey and Helmuth Nyborg, Intelligence, Volume 37, Issue 1

\n

2. On the Relative Independence of Thinking Biases and Cognitive Ability, Keith E. Stanovich, Richard F. West, Journal of Personality and Social Psychology, 2008, Vol. 94, No. 4, 672–695

\n

3. Relevance of education and intelligence for the political development of nations: Democracy, rule of law and political liberty, Heiner Rindermann, Intelligence, Volume 36, Issue 4

" } }, { "_id": "7Em9eRh9Cwphg3qhS", "title": "Time to See If We Can Apply Anything We Have Learned", "pageUrl": "https://www.lesswrong.com/posts/7Em9eRh9Cwphg3qhS/time-to-see-if-we-can-apply-anything-we-have-learned", "postedAt": "2009-06-18T10:06:12.174Z", "baseScore": 1, "voteCount": 27, "commentCount": 25, "url": null, "contents": { "documentId": "7Em9eRh9Cwphg3qhS", "html": "

It seems to me that this blog has just reached its first real crisis.

\n

 

\n

Three people are announcing three apparently opposed beliefs with substantial real expected consequences, and yet no-one has yet spoken, or, it seems to me, even implied, the key slogan... \"LET'S USE SCIENCE!\" or, as hubristic Bayesian wannabes, not invoked Bayes as an idol to swear by, but rather said \"LET'S USE HUMANE REFLECTIVE DECISION THEORY, THE QUANTITATIVELY UNKNOWN BUT QUALITATIVELY INTUITED POWER DEEPER THAN SCIENCE FROM WHICH IT STEMS AND TO WHICH OUR COMMUNITY IS DEVOTED\".

\n

If RDT were applied to our current situation, people would be analyzing Yvain's, Davis' and Eby's proposals, working out exactly what their implications are, and trying to propose, in the name of SCIENCE, hypotheses which would distinguish between them, and in the name of BAYES, confidence estimates of their analyses and of the quality with which the denotations of their words have cleaved reality at the joints, enabling an odds ratio of updating to be extracted from a single data point. People would be working out what features of which of the models used by Yvain, Davis and Eby constitute evidence against what other features. They would be trying to evaluate non-verbally, through subjectively opaque but known-to-be-informative processes vulnerable to verbal overshadowing, what relative odds to place on those different features of the models. Finally, they would be examining the expected costs entailed by the experiments being proposed and selecting, to be performed, those experiments which promise to provide the most information for the least cost. The cost estimate would include both the effort required to perform the experiments, probably best assessed with an outside view in most cases like these, and the dangers to the minds of the participants from possible adverse outcomes, taking into account, as well as possible, the structural uncertainty of the models.

\n

I sincerely hope to see some of that in the comments section soon, either under this post or the \"Applied Picoeconomics\" post.

" } }, { "_id": "ob6FdjoXnirRkodNs", "title": "The Physiology of Willpower", "pageUrl": "https://www.lesswrong.com/posts/ob6FdjoXnirRkodNs/the-physiology-of-willpower", "postedAt": "2009-06-18T04:11:52.445Z", "baseScore": 25, "voteCount": 25, "commentCount": 36, "url": null, "contents": { "documentId": "ob6FdjoXnirRkodNs", "html": "

This paper (PDF)1 looks more than a little interesting:

\n
\n

Past research indicates that self-control relies on some sort of limited energy source. This review suggests that blood glucose is one important part of the energy source of self-control. Acts of self-control deplete relatively large amounts of glucose. Self-control failures are more likely when glucose is low or cannot be mobilized effectively to the brain (i.e., when insulin is low or insensitive). Restoring glucose to a sufficient level typically improves self-control. Numerous self-control behaviors fit this pattern, including controlling attention, regulating emotions, quitting smoking, coping with stress, resisting impulsivity, and refraining from criminal and aggressive behavior. Alcohol reduces glucose throughout the brain and body and likewise impairs many forms of self-control. Furthermore, self-control failure is most likely during times of the day when glucose is used least effectively. Self-control thus appears highly susceptible to glucose. Self-control benefits numerous social and interpersonal processes. Glucose might therefore be related to a broad range of social behavior.

\n
\n

I find this interesting, in that the days I get less work done (due to e.g. spending more time on Less Wrong) are often days when I don't eat breakfast right away, and am generally undereating (like today).

\n

References

\n

1. Matthew T. Gailliot, Roy F. Baumeister. (2007) The Physiology of Willpower: Linking Blood Glucose to Self-Control. Personality and Social Psychology Review, Vol. 11, No. 4, 303-327

" } }, { "_id": "T56E8MHqpcuc696Dx", "title": "Representative democracy awesomeness hypothesis", "pageUrl": "https://www.lesswrong.com/posts/T56E8MHqpcuc696Dx/representative-democracy-awesomeness-hypothesis", "postedAt": "2009-06-18T03:02:37.973Z", "baseScore": 4, "voteCount": 12, "commentCount": 13, "url": null, "contents": { "documentId": "T56E8MHqpcuc696Dx", "html": "

I have a hypothesis that utility maximization is always a second-order process - there's always some underlying selection process with its own fitness criterion, and only because it promotes traits that make agents act in a way that best approximates utility maximization do adaptation executers seem to us like utility maximizers.

\n

Now let's apply this to political systems:

\n\n

There are also some hints on how to design a better representative democracy:

\n\n

I used to think that direct democracy would be a major improvement relative to what we have now, but this analysis suggests that representative democracy (with small bits of direct democracy thrown in) should work much better.

" } }, { "_id": "zaeixgGQgdPNMRrED", "title": "Morality is subjective preference, but it can be objectively wrong", "pageUrl": "https://www.lesswrong.com/posts/zaeixgGQgdPNMRrED/morality-is-subjective-preference-but-it-can-be-objectively", "postedAt": "2009-06-17T18:09:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "zaeixgGQgdPNMRrED", "html": "
People are often unwilling to think of ethics as their own preferences, rather than demands from something more transcendent. For instance it’s normal to claim that one really wants to make one choice, but it’s only ethical to make the other. My feelings agree, but my thoughts don’t. If I follow something I call ethics, that demonstrates that I want to. It’s not a physical law. So what’s the difference?
\n
\n
Just that. Ethics is a preference for fulfilling preferences attributed to some other source. Popular external sources of values include Gods, nature, other people, transcendent moral truth, group norms, and leaders. If I prefer for your house not to burn down I will turn on the hose. If I think it’s moral to stop your house burning down I will turn off the hose if I find out that you want to burn it down to collect insurance money. I care about your values, not the house.
\n
\n
One demonstration that having an external source is important for ethics is the fact that invented ethical systems (such as 'playing video games is virtuous') seem illegitimate and cheaty. Crazy-seeming practices can be ordained by religion and culture, but if you decide independently that it's only ethical to eat cereal on Thursdays, most will feel you are missing the point and some marbles.
\n
\n
\n
While ethics is a matter of choice then, it implies the existence of your preferred outside source of values. This means it can be wrong. The outside source of values might not exist, or might not have values. This is why evidence about evolution can influence whether a person likes gays marrying, despite it being an apparent value judgement.
\n
\n
This means moral intuitions aren’t as useful as they seem for information about how to be moral. Gut reactions are handy for working out what you like, but if you find that you like serving someone else’s purposes there is factual information about whether they exist or care to take into account. We have better ways to deal with facts than our emotional responses in most realms, so why not use the same here?
\n
\n
The only things that exist and care that I know of are other people and animals. Gods and transcendent values don’t exist, and society as a whole and the environment don’t care, as far as I know. So if I want to be ethical, preference utilitarianism (caring about other people’s preferences) is my only option. Of course I could prefer not to be ethical at all. And I could prefer to follow what pass for other moral rules; being honest, protesting interference in the environment, keeping my dress long. But if these things benefit only my feeling of righteousness, I must admit they are no different to normal personal preferences. If you want to be ethical, these are probably not what you are looking for any more than ‘it’s virtuous to play video games’ is.
\n

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "NjzBrtvDS4jXi5Krp", "title": "Applied Picoeconomics", "pageUrl": "https://www.lesswrong.com/posts/NjzBrtvDS4jXi5Krp/applied-picoeconomics", "postedAt": "2009-06-17T16:08:29.757Z", "baseScore": 61, "voteCount": 55, "commentCount": 85, "url": null, "contents": { "documentId": "NjzBrtvDS4jXi5Krp", "html": "

Related to: Akrasia, Hyperbolic Discounting, and Picoeconomics,  Fix It And Tell Us What You Did

A while back, ciphergoth posted an article on \"picoeconomics\", the theory that akrasia could be partially modeled as bargaining between present and future selves. I think the model is incomplete, because it doesn't explain how the analogy is instantiated in the real world, and I'd like to investigate that further sometime1 - but it's a good first-order approximation.

For those of you too lazy to read the article (come on! It has pictures of naked people! Well, one naked person. Suspended from a graph of a hyperbolic curve) Ainslie argues that \"intertemporal bargaining\" is one way to overcome preference reversal. For example, an alcoholic has two conflicting preferences: right now, he would rather drink than not drink, but next year he would rather be the sort of person who never drinks than remain an alcoholic. But because his brain uses hyperbolic discounting, a process that pays more attention to his current utility than his future utility, he's going to hit the whiskey.

This sticks him in a sorites paradox. Honestly, it's not going to make much of a difference if he has one more drink, so why not hit the whiskey? Ainslie's answer is that he should set a hard-and-fast rule: \"I will never drink alcohol\". Following this rule will cure his alcoholism and help him achieve his dreams. He now has a very high preference for following the rule; a preference hopefully stronger than his current preference for whiskey.

Ainslie's other point is that this rule needs to really be hard-and-fast. If his rule is \"I will drink less whiskey\", then that leaves it open for him to say \"Well, I'll drink some whiskey now, and none later; that counts as 'less'\", and then the whole problem comes back just as bad as before. Likewise, if he says \"It's my birthday, I'll let myself break the rule just this once,\" then soon he's likely to be saying \"It's the Sunday before Cinco de Mayo, this calls for a celebration!\" Ainslie has some much more formal and convincing ways of framing this, which is why you should read the article instead of just trusting this summary.

The stuff by Ainslie I read (I didn't spring for any of his dead-tree books) didn't offer any specific pointers for increasing your willpower2, but it's pretty easy to read between the lines and figure out what applied picoeconomics ought to look like. In the interest of testing a scientific theory, not to mention the ongoing effort to take control of my own life, I've been testing picoeconomic techniques for the last two months.

\r\n

\r\n

The essence of picoeconomics is formally binding yourself to a rule with as few loopholes as possible. So the technique I decided to test3 was to write out an oath detailing exactly what I wanted to do, list in nauseating detail all of the conditions under which I could or could not be released from this oath, and then bind myself to it, with the knowledge that if I succeeded I would have a great method of self-improvement and if I failed I would be dooming myself to a life of laziness forever (Ainslie's theories suggest that exaggeration is good in this case).

I chose a few areas of my life that I wanted to improve, of which the only one I want to mention in public is my poor study habits. I decided that I wanted to increase my current study load from practically never looking at a book after school got out, up to two hours a day.

I wrote down - yes, literally wrote down - an oath in which I swore to study for two hours a day. I detailed exactly the conditions that would count as \"studying\" - no watching TV with an open book placed in my lap, for example.

I also included several release valves. The theory behind this was that if I simply broke the oath outright, the oath would no longer be credible and would lose its power (again, see Ainslie's article), and there would be some point where I would be absolutely compelled to break the oath (for example, if a member of my family is in the emergency room, I refuse to read a book for an hour and a half before going to check up on them). I gave myself a whole bunch of cases in which I would be allowed to not study, guilt-free, and allowed myself five days a month when I could just take off studying for no reason (too tired, maybe). I also limited the original oath to a month, so that if it didn't work I could adjust it without completely destroying the effectiveness of the oath forever. Finally, I swore the oath in a ceremonial fashion, calling upon various fictional deities for whom I have great respect.

One month later, I find that I kept to the terms of the oath exactly, which is no small achievement for me since my previous resolutions to study more have ended in apathy and failure. On an introspection level, the need to study each day felt exactly like the need to complete a project with a deadline, or to show up for work when the boss was expecting you. My brain clearly has different procedures for dealing with vague responsibilities it can weasel out of, and serious responsibilities it can't, and the oath served to stick studying on the \"serious\" side of the line.

I am suitably cautious about other-optimizing and the typical mind fallacy, so I don't promise the same method will work for you. But I'd be interested to see if it did4. I'd be especially interested if everyone who tried it would post, right now, what they're trying so that in a month or so we can come back and see how many people kept their oath without having too much response bias.

\r\n

 

\r\n

Footnotes

\r\n

1: I'm split on the value of picoeconomic theory. A lot of it seems either common-sense if taken as a vague model or metaphor, or obviously false if taken literally. But sometimes it's very good to have a formal model for common sense, and I'm optimistic about someone developing a more literal version of it that explains what's actually going on inside someone's head.

2: Ciphergoth, as far as you know does Ainslie ever start making practical suggestions based on his theory anywhere, or does he leave it entirely as an exercise for the reader?

\r\n

3: I don't read a lot of stuff on productivity, so I might be reinventing the wheel here.

\r\n

4: For people trying this, a few suggestions and caveats from my experience:

\r\n
  1. Do NOT make the oath open-ended. Set a time limit, and if you're happy at the end of that time limit, set another time limit.
  2. Don't overdo it; this only works if you really do want the goal you're after more than you want momentary pleasure, people are notoriously bad at knowing what they want, and if you break an oath once you've set a precedent and it'll be harder to keep a better-crafted oath next time. If I'd sworn six hours of studying a day, no way I'd have been able to keep it.
  3. Set release valves.
  4. Do something extremely measurable in which success or failure is a very yes-or-no affair, like how much time you do something for. Saying \"study more\" or \"eat better\" will be completely useless.
  5. Read the article so you know the theory behind it and especially why it's important to always keep the rules.
  6. Don't just think up the oath and figure it's in effect. Write it down and swear it aloud, more or less ceremonially, depending on your taste for drama and ritual.
  7. Seriously, don't overdo it. Ego depletion and all that.
" } }, { "_id": "GtjRZhiddn6ziTERD", "title": "Don't Count Your Chickens...", "pageUrl": "https://www.lesswrong.com/posts/GtjRZhiddn6ziTERD/don-t-count-your-chickens", "postedAt": "2009-06-17T15:21:31.616Z", "baseScore": 4, "voteCount": 11, "commentCount": 9, "url": null, "contents": { "documentId": "GtjRZhiddn6ziTERD", "html": "

A blog post by Derek Sivers links to evidence that stating one's goals makes one less likely to accomplish them.

\n

Excerpt:

\n
\n

Announcing your plans to others satisfies your self-identity just enough that you're less motivated to do the hard work needed.

\n
\n

Link: Shut up! Announcing your plans makes you less motivated to accomplish them.

" } }, { "_id": "XSBWX6Zmu2K3gTySR", "title": "Be your conformist, approval seeking, self", "pageUrl": "https://www.lesswrong.com/posts/XSBWX6Zmu2K3gTySR/be-your-conformist-approval-seeking-self", "postedAt": "2009-06-17T10:17:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "XSBWX6Zmu2K3gTySR", "html": "

People recommend to one another that they 'be themselves' rather than being influenced by outside expectations and norms. Nobody suggests others should try harder to follow the crowd. They needn't anyway; we seem fairly motivated by impressing others and fitting in. Few seem interested in 'being themselves' in the sense of behaving as they would if nobody was ever watching. The 'individuality' we celebrate usually seems designed for observers. What do people do when there's nobody but themselves to care? Fart louder and leave their dirty cups around. This striving for unadulterated selfhood is not praised. Yes, it seems in most cases you can get more approval if you tailor your actions to getting approval. So why do we so commonly offer this same advice, which we don't follow, and whose real manifestations we don't approve of?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "AQ86BPLxNXs82cxcr", "title": "Ask LessWrong: Human cognitive enhancement now?", "pageUrl": "https://www.lesswrong.com/posts/AQ86BPLxNXs82cxcr/ask-lesswrong-human-cognitive-enhancement-now", "postedAt": "2009-06-16T21:16:03.029Z", "baseScore": 16, "voteCount": 15, "commentCount": 73, "url": null, "contents": { "documentId": "AQ86BPLxNXs82cxcr", "html": "

Transhumanists have high hopes for enhancing human cognitive abilities in the future. But what realistic steps can we take to enhance them now? On the one hand, the Flynn effect suggests IQ (which is a major factor in human cognition) can be increased a lot with current technology; on the other hand, a review of existing drugs seems rather pessimistic - they seem to have a minor positive effect on low performers and very little effect on high performers, which means they're mostly of therapeutic, not enhancing, use.

\n

So, fellow rationalists, how can we enhance our cognition now? Solid research is especially welcome, but consistent anecdotal evidence is appreciated too.

" } }, { "_id": "YmYvZziDt3w4kaR8N", "title": "Rationalists lose when others choose", "pageUrl": "https://www.lesswrong.com/posts/YmYvZziDt3w4kaR8N/rationalists-lose-when-others-choose", "postedAt": "2009-06-16T17:50:07.749Z", "baseScore": -8, "voteCount": 27, "commentCount": 58, "url": null, "contents": { "documentId": "YmYvZziDt3w4kaR8N", "html": "

At various times, we've argued over whether rationalists always win.  I posed Augustine's paradox of optimal repentance to argue that, in some situations, rationalists lose.  One criticism of that paradox is that its strongest forms posit a God who penalizes people for being rational.  My response was, So what?  Who ever said that nature, or people, don't penalize rationality?

\n

There are instances where nature penalizes the rational.  For instance, revenge is irrational, but being thought of as someone who would take revenge gives advantages.1

\n

EDIT:  Many many people immediately jumped on this, because revenge is rational in repeated interactions.  Sure.  Note the \"There are instances\" at the start of the sentence.  If you admit that someone, somewhere, once faced a one-shot revenge problem, then cede the point and move on.  It's just an example anyway.

\n

Here's another instance that more closely resembles the God who punishes rationalism, in which people deliberately punish rational behavior:

\n

If rationality means optimizing expected utility, then both social pressures and evolutionary pressures tend, on average, to bias us towards altruism.  (I'm going to assume you know this literature rather than explain it here.)  An employer and a lover would both rather have someone who is irrationally altruistic.  This means that, on this particular (and important) dimension of preference, rationality correlates with undesirability.2

\n

<ADDED>: I originally wrote \"optimizing expected selfish utility\", merely to emphasize that an agent, rational or not, tries to maximize its own utility function.  I do not mean that a rational agent appears selfish by social standards.  A utility-maximizing agent is selfish by definition, because its utility function is its own.  Any altruistic behavior that results, happens only out of self-interest.  You may argue that pragmatics argue against this use of the word \"selfish\" because it thus adds no meaning.  Fine.  I have removed the word \"selfish\".

\n

However, it really doesn't matter.  Sure, it is possible to make a rational agent that acts in ways that seem unselfish. Irrelevant.  Why would the big boss settle for \"unselfish\" when he can get \"self-sacrificing\"?  It is often possible to find an irrational agent that acts more in your interests, than any rational agent will.  The rational agent aims for equitable utility deals.  The irrational agent can be inequitable in your favor.

\n

This whole barrage of attacks on using the word 'selfish' is yet again missing the point.  If you read the entire post, you'll see that it doesn't matter if you think that rational agents are selfish, or that they can reciprocate.  You just have to admit that most persons A would rather deal with an agent B having an altruistic bias, or a bias towards A's utilities, than an agent having no such bias.  The level of selfishness/altruism of the posited rational agent is irrelevant, because adding a bias towards person A's utility is always better for person A.  Comparing \"rational unbiased person\" to \"altruistic idiot\" is not the relevant comparison here.  Compare instead \"person using decision function F with no bias\" vs. \"person using decision function F with excess altruism\".3
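
\n

A toy numerical sketch of that comparison (all payoff numbers invented): holding the agent's decision function fixed and only increasing the weight it places on the employer's utility, the employer's payoff from the chosen action never goes down - which is why the chooser prefers the more 'biased' agent.

```python
# Invented actions; each gives some utility to the agent and some to the employer.
actions = [
    # (utility_to_agent, utility_to_employer)
    (10.0,  0.0),
    ( 8.0,  4.0),
    ( 2.0, 10.0),
]

def chosen_action(altruism_weight):
    # Same decision function F throughout; only the bias toward the employer varies.
    return max(actions, key=lambda a: a[0] + altruism_weight * a[1])

for w in (0.0, 0.75, 2.0):
    own, employer = chosen_action(w)
    print(f'altruism weight {w}: employer gets {employer}')   # 0.0, then 4.0, then 10.0
```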

\n

(Also note that, in the fMRI example, people don't get to see your utility function.  They can't tell that you have a wonderful Yudkowskian utility function that will make you reliable.  They can only see that you lack the bias most people have - the bias that would make you a better employee.)

\n

The real tricky point of this argument is whether you can define \"irrational altruism\" in a way that doesn't simply mean \"utility function that values altruism\".  You could rephrase \"Choice by others encourages bias toward altruism\" as \"Choice by others selects for utility functions that value altruism highly\".

\n

Does an ant have an irrationally high bias towards altruism?  It may make more sense to say that an ant is less of an individual, and more of a subroutine, than a human is.  So it is perfectly all right with me if you prefer to say that these forces select for valuing altruism, rather than saying that they select for bias.  The outcome is the same either way:  When one agent gets to choose which other agents succeed, and that agent can observe their biases and/or decision functions, those other agents are under selection pressure to become less like individuals and more like subroutines of the choosing agent.  You can call this \"altruistic bias\" or you can call it \"less individuality\".

\n

</ADDED>

\n

There are a lot of other situations where one person chooses another person, and they would rather choose someone who is biased, in ways encouraged by society or by genetics, than someone more rational.  When giving a security clearance, for example, you would rather give it to someone who loved his country emotionally, than to someone who loved his country rationally; the former is more reliable, while the rational person may suddenly reach an opposite conclusion on learning one new fact.

\n

It's hard to tell how altruistic someone is.  But the May 29, 2009 issue of Science has an article called \"The Computation of Social Behavior\".  It's extremely skimpy on details, especially for a 5-page article; but the gist of it is that they can use functional magnetic resonance imaging to monitor someone making decisions, and extract some of that person's basic decision-making parameters.  For example (they mention this, although it isn't clear whether they can extract this particular parameter), their degree of altruism (the value they place on someone else's utility vs. their own utility).  Unlike a written exam, the fMRI exam can't be faked; your brain will reveal your true parameters even if you try to lie and game the exam.

\n

So, in the future, being rational may make you unemployable and unlovable, because you'll be unable to hide your rationality.

\n

Or maybe it already does?

\n

ADDED:

\n

Here is the big picture:  The trend in the future is likely to be one of greater and greater transparency of every agent's internal operations, whether this is via fMRI or via exchanging source code.  Rationality means acting to achieve your goals.  There will almost always be other people who are more powerful than you and who have resources that you need, and they don't want you to achieve your goals.  They want you to achieve their goals.  They will have the power and the motive to select against rationality (or to avoid building it in in the first place.)

\n

All our experience is with economic and behavioral models that assume independent self-interested agents.  In a world where powerful people can examine the utility functions of less-powerful people, and reward them for rewriting their utility functions (or just select ones with utility functions that are favorable to the powerful people, and hence irrational), then having rational, self-interested agents is not the equilibrium outcome.

\n

In a world in which agents like you or I are manufactured to meet the needs of more powerful agents, even more so.

\n

You may claim that an agent can be 'rational' while trying to attain the goals of another agent.  I would instead say that it isn't an agent anymore; it's just a subroutine.

\n

The forces I am discussing in this post try to turn agents into subroutines.  And they are getting stronger.

\n

 

\n

1 Newcomb's paradox is, strangely, more familiar to LW readers.  I suggest replacing discussions of one-boxing by discussions of taking revenge; I think the paradoxes are very similar, but the former is more confusing and further-removed from reality.  Its main advantage is that it prevents people from being distracted by discussing ways of fooling people about your intentions - which is not the solution evolution chose to that problem.

\n

2 I'm making basically the same argument that Christians make when they say that atheists can't be trusted.  Empirical rejection of that argument does not apply to mine, for two reasons:

\n
  1. Religions operate on pure rewards-based incentives, and hence destroy the altruistic instinct; therefore, I intuit that religious people have a disadvantage rather than an advantage compared to altruists WRT altruism.
  2. Religious people can sometimes be trusted more than atheists; the problem is that some of the things they can be trusted to do are crazy.
\n

3 This is something LW readers do all the time:  Start reading a post, then stop in the middle and write a critical response addressing one perceived error whose truth or falsity is actually irrelevant to the logic of the post.

" } }, { "_id": "ycr3CyrnZLFC7mb5W", "title": "Intelligence enhancement as existential risk mitigation", "pageUrl": "https://www.lesswrong.com/posts/ycr3CyrnZLFC7mb5W/intelligence-enhancement-as-existential-risk-mitigation", "postedAt": "2009-06-15T19:35:07.530Z", "baseScore": 21, "voteCount": 21, "commentCount": 244, "url": null, "contents": { "documentId": "ycr3CyrnZLFC7mb5W", "html": "

Here at Less Wrong, the Future of Humanity Institute and the Singularity Institute, a recurring theme is trying to steer the future of the planet away from disaster. Often, the best way to avert a particular disaster is quite hard for ordinary people to understand as it requires one to think through an argument in a cool, unemotional way; more often than not the best solution will be lost in a mass of low signal-to-noise ratio squabbling and/or emoting. Whatever the substance of the debate, the overall meta-problem is quite well captured by this catch from this month's rationality quotes:

\n
\n

\"People are mostly sane enough, of course, in the affairs of common life: the getting of food, shelter, and so on. But the moment they attempt any depth or generality of thought, they go mad almost infallibly.

\n
\n

Attempting to target the meta-problem of getting people to be slightly less mad when it comes to abstract or general thought, especially public policy, is a tempting option. Robin Hanson's futarchy proposal is one way to combat this madness (which it does by removing most people from the policymaking loop). However, another important route to combating human idiocy is to find technologies that make humans smarter. Nick Bostrom proposed that we should work hard looking for ways to enhance the cognition of research scientists, because even a small increase in the average intelligence of research scientists would increase research output by a large amount, as there are lots of scientists. But improving the decisionmaking process of our society would probably have an even more profound effect; if we could improve the intelligence of the average voter by about one standard deviation, it is easy to speculate that the political decisionmaking process would work much better. For example, understanding simple logical arguments and simple quantitative analyses is stretching the capabilities of someone at IQ 100, so it seems that the marginal effect of overall IQ increases would be a large increase in the probability that a politician is incentivized to make a logical argument, rather than an emotionally appealing slander, the main focus of their campaign.

\n

As a concrete example, consider the initial US reaction to rising oil prices and the need for US-produced energy: pushing corn ethanol, because a strong farming lobby liked the idea of having extra revenue. Now, if the *average voter* could understand the concept of photosynthetic efficiency, and could understand a simple numerical calculation showing how inefficient corn is at converting solar energy to stored energy in ethanol, this policy choice would have been dead in the water. But the average voter cannot do simple physics, whereas they can understand the emotional appeal of \"support our local farmers!\". Even today, there are still politicians who defend corn ethanol because they want to pander to local interest groups. Another concrete example is some of the more useless responses that the UK public has been engaging in - and being encouraged to engage in - to prevent global warming. People were encouraged to unplug their mobile phone chargers when the chargers weren't being used. David MacKay had to wage a personal war against such idiocy - see this Guardian article. The universal response to my criticism of people advocating this was \"it all adds up!\". I quote:

\n
\n

There's a lack of numeracy in the public discussion of energy. Where people do use numbers, they select them to sound big and score points in arguments, rather than to aid thoughtful discussion.

\n
\n
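
To make the corn ethanol point concrete, here is the kind of back-of-envelope calculation the post has in mind (a sketch only; every figure is a round, illustrative number, not precise data):

```python
insolation       = 200    # W/m^2, very rough year-round surface average
gallons_per_acre = 400    # ethanol yield per acre per year, roughly the US average
mj_per_gallon    = 80     # lower heating value of ethanol, MJ per gallon
m2_per_acre      = 4047
seconds_per_yr   = 3.15e7

ethanol_power = gallons_per_acre * mj_per_gallon * 1e6 / m2_per_acre / seconds_per_yr
print(f'ethanol output: ~{ethanol_power:.2f} W/m^2')                   # ~0.25 W/m^2
print(f'solar-to-fuel efficiency: ~{ethanol_power / insolation:.2%}')  # ~0.1%, before
# subtracting the fossil energy spent on farming and distillation
```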

Toby Ord has a project on efficient charity; he has worked out that the difference in outcomes per dollar for alleviating human suffering in Africa can vary by 3 orders of magnitude. But most people in the developed world don't know what an \"order of magnitude\" is, or why it is a useful concept. This efficient charity concept demonstrates that the derivative

\n

d(Outcomes)/d(Average IQ)

\n

may be extremely large, and may be subject to powerful threshold effects. In this case, there is probably an average IQ threshold above which the average person can easily understand the concept of efficient charity, and thus all the money gets given to the most efficient charities, and the amount of suffering-alleviation in Africa goes up by a factor of 1000, even though the average IQ of the donor community may only have jumped from 100 to 140, say.

\n

It may well be the case that finding a cognitive enhancer suitable for general use is the best way to tackle the diverse array of risks we face. People with enhanced IQ would also probably find it easier (and be more willing) to absorb cognitive biases material; to see this, try and explain the concept of \"cognitive biases\" to someone who is unlucky enough to be of below average IQ, and then go and explain it to someone who is smarter than you. It is certainly the case that even people of below average IQ *do sometimes*, in favourable circumstances, take note of quantitative rational arguments, but in the maelstrom of politics such quantitative analyses get eaten alive by more emotive arguments like \"SUPPORT OUR FARMERS!\" or \"SUPPORT OUR TROOPS!\" or \"EVOLUTION IS ONLY A THEORY!\" or \"IT ALL ADDS UP!\".

" } }, { "_id": "qK4DeEPGv848okKpd", "title": "The Laws of Magic", "pageUrl": "https://www.lesswrong.com/posts/qK4DeEPGv848okKpd/the-laws-of-magic", "postedAt": "2009-06-15T19:13:08.743Z", "baseScore": 20, "voteCount": 27, "commentCount": 14, "url": null, "contents": { "documentId": "qK4DeEPGv848okKpd", "html": "
\n

People are always telling you that \"we have always done thus\", and then you find that their \"always\" means a generation or two, or a century or two, at most a millennium or two.  Cultural ways and habits are blips compared to the ways and habits of the body, of the race.  There really is very little that human beings on our plane have always done, except find food and drink, sing, talk, procreate, nurture the children, and probably band together to some extent.

\n

- Ursula K. Le Guin, \"Seasons of the Ansarac\", Changing Planes

\n
\n

Human cultures vary wildly, so it is worth noting which things all known human societies have in common.  Several generations ago, anthropologists noticed that cultures' beliefs about a suite of concepts crudely describable as 'magic' had certain principles in common.

\n

\n

Humans seem to naturally generate a series of concepts known as \"Sympathetic Magic\", a host of theories and practices which have certain principles in common, two of which are of overriding importance.  These principles can be expressed as follows:  the Law of Contagion holds that two things which have interacted, or were once part of a single entity, retain their connection and can exert influence over each other; the Law of Similarity holds that things which are similar or treated the same establish a connection and can affect each other.

\n

These principles are grossly and obviously in contradiction with everyday experience.  Thus many cultures restrict the phenomena to which the laws supposedly apply to non-standard, special cases, most especially to individuals asserted to have unusual powers, or to ritual actions that are not commonly replicated in normal life.  Examples range from African sorcerers who could supposedly bring death to their enemies by stabbing their footprints, to the Imperial City of ancient China, which was designed to function as a stylized representation of the whole of the country and to induce peace as long as the Emperor sat on his throne facing south, to all manner of witchcraft superstitions in which a discarded body part or a tiny doll could be used to work magic at a distance.

\n

Yet the laws themselves do not seem to be doubted, and the phenomena which they supposedly describe were historically (and often still are) widely believed despite a complete lack of actual evidence.  The laws were retained even in technical specialties which are not overtly \"magical\".  An excellent example of this is herbalism, where a concept named \"The Doctrine of Signatures\" held that the form of a plant hinted at what it was useful for.

\n

Thus the Greeks thought orchids treated impotence and infertility because they vaguely resemble testicles, the Chinese believed ginseng was a potent panacea because its forked root looked somewhat like the human form, and medieval monks thought lungwort's similarity to ulcerated lung tissue meant it was effective against respiratory ailments.  In many cases such beliefs persisted for centuries and across civilizations, despite there being absolutely no rational reason to view the beliefs as true.

\n

Sympathetic magic is just a special case of a wider set of phenomena called magical thinking.  It is important that you familiarize yourself with that collection of ideas before the next post.

" } }, { "_id": "bvMDbCfrX8cTs8MC9", "title": "The two meanings of mathematical terms", "pageUrl": "https://www.lesswrong.com/posts/bvMDbCfrX8cTs8MC9/the-two-meanings-of-mathematical-terms", "postedAt": "2009-06-15T14:30:35.501Z", "baseScore": 0, "voteCount": 11, "commentCount": 80, "url": null, "contents": { "documentId": "bvMDbCfrX8cTs8MC9", "html": "

[edit: sorry, the formatting of links and italics in this is all screwy.  I've tried editing both the rich-text and the HTML and either way it looks ok while i'm editing it but the formatted terms either come out with no surrounding spaces or two surrounding spaces]

\n

In the latest Rationality Quotes thread, CronoDAS quoted Paul Graham: 

\n
\n

It would not be a bad definition of math to call it the study of terms that have precise meanings.

\n
\n
\n
Sort of. I started writing this as a reply to that comment, but it grew into a post.
\n
\n
We've all heard the story of epicycles: how, before Copernicus came along, the movement of the stars and planets was explained by the idea of them being attached to rotating epicycles, some of which were embedded within other, larger rotating epicycles (I'm simplifying the terminology a little here).
\n
\n
As we now know, the Epicycles theory was completely wrong.  The stars and planets were not at the distances from earth posited by the theory, or of the size presumed by it, nor were they moving about on some giant clockwork structure of rings.  
\n
\n
In the theory of epicycles the terms had precise mathematical meanings.  The problem was that what the terms were meant to represent in reality was wrong.  The theory involved applied mathematical statements, and in any such statement the terms don’t just have their mathematical meaning -- what the equations say about them -- they also have an ‘external’ meaning concerning what they’re supposed to represent in or about reality.
\n
\n
Let’s consider these two types of meaning.  The mathematical, or ‘internal’, meaning of a statement like ‘1 + 1 = 2’ is very precise.  ‘1 + 1’ is defined as ‘2’, so ‘1 + 1 = 2’ is pretty much the pre-eminent fact or truth.  This is why mathematical truth is usually given such an exalted place.  So far so good with saying that mathematics is the study of terms with precise meanings. 
\n
\n
But what if ‘1 + 1 = 2’ happens to be used to describe something in reality?  Each of the terms will then take on a second meaning -- concerning what it is meant to represent in reality.  This meaning lies outside the mathematical theory, and there is no guarantee that it is accurate.
\n
\n
The problem with saying that mathematics is the study of terms with precise meanings is that it’s all too easy to take this as trivially true, because the terms obviously have a precise mathematical sense.  It’s easy to overlook the other type of meaning -- to think there is just the one meaning of a term, and just the one question of how precise that meaning is.  This is why we get people saying \"numbers don’t lie\".  
\n
\n
‘Precise’ is a synonym for \"accurate\" and \"exact\", and it is characterized by \"perfect conformity to fact or truth\" (according to WordNet).  So when someone says that mathematics is the study of terms with precise meanings, we have a tendency to take it as meaning it’s the study of things that are accurate and true.  The problem is that mathematical precision clearly does not guarantee the precision -- the accuracy or truth -- of applied mathematical statements, which need to conform with reality.
\n
\n
There are quite subtle ways of falling into this trap of confusing the two meanings.  A believer in epicycles would likely have thought that the theory must have been correct because it gave mathematically correct answers.  And it actually did: epicycles did precisely calculate the positions of the stars and planets (not absolutely perfectly, but in principle the theory could have been adjusted to give perfectly precise results).  If the mathematics was right, how could it be wrong?  
\n
\n
But what the theory was actually calculating was not the movement of galactic clockwork machinery with stars and planets embedded within it, but the movement of points of light (corresponding to the real stars and planets) across the sky.  The positions were right, but the conceptualisation was all wrong.  
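As a toy illustration (the radii and angular speeds below are invented, not historical parameters), here is a minimal Python sketch of how a deferent-plus-epicycle model generates apparent positions that can be tuned to match observation, even though nothing in the sky actually moves on nested rings:

```python
import math

def apparent_position(t, R=5.0, deferent_speed=0.2, r=1.0, epicycle_speed=1.5):
    '''Position at time t of a point riding an epicycle of radius r whose
    centre moves on a deferent of radius R, with the Earth at the origin.
    All parameters are illustrative, not historical values.'''
    x = R * math.cos(deferent_speed * t) + r * math.cos(epicycle_speed * t)
    y = R * math.sin(deferent_speed * t) + r * math.sin(epicycle_speed * t)
    return x, y

# The model outputs apparent angles on the sky that can be fitted to what is
# observed -- the numbers come out right despite the wrong conceptualisation.
for t in range(0, 25, 5):
    x, y = apparent_position(t)
    print(f't={t:2d}: apparent longitude = {math.degrees(math.atan2(y, x)):8.2f} degrees')
```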
\n
\n
Which raises the question: does it really matter if the conceptualisation is wrong, as long as the numbers are right?  Isn’t instrumental correctness all that really matters?  We might think so, but this is not true.  How would Pluto’s existence have been predicted under an epicycle conceptualisation?  How would we have thought about space travel under such a conceptualisation?
\n
The moral is, when we're looking at mathematical statements, numbers are representations, and representations can lie.
\n

\n
If you're interested in knowing more about epicycles and how that theory was overthrown by the Copernican one, Thomas Kuhn's quite readable The Copernican Revolution is an excellent resource.  

" } }, { "_id": "RQNSKwfgzTujT3Znq", "title": "Readiness Heuristics", "pageUrl": "https://www.lesswrong.com/posts/RQNSKwfgzTujT3Znq/readiness-heuristics", "postedAt": "2009-06-15T01:53:48.804Z", "baseScore": 22, "voteCount": 25, "commentCount": 7, "url": null, "contents": { "documentId": "RQNSKwfgzTujT3Znq", "html": "

Followup to: How Much Thought

\n

A trolley is hurtling towards three people. It will kill them, unless you pull a lever that diverts it onto a different track. However, if you do this, it will hit a small child and kill her. Do you pull the lever, and kill a child, or do nothing, and let three adults die? This question is used to test moral systems and theories; an answer reveals how you value lives and culpability. Or at least, it's supposed to. It's hard to get a straight answer, because everyone wants to take a third option. Why waste time thinking about whose life is more valuable, when you could be looking for a way to save everyone?

\n

In philosophy, decisions are hardened by saying that there are no other options. The real world doesn't work that way. Every decision has an implied extra option: don't decide. Instead, put it off, gather more information, ask a friend, or think more. You might come up with new information that affects the decision or a new option that's better than the old ones. It could be that there is nothing to find, but it takes a lot of thought and investigation to be sure. Or you could find the perfect solution, if only you wait a few more seconds before deciding.

\n

We can't think about both a decision and a meta-decision at the same time, so we have a set of readiness heuristics to tell us whether we're ready to call our current-best option a final decision. Normal heuristics determine what we decide; if they go awry, we choose poorly. The readiness heuristics determine when and whether we decide. If they go awry, we choose hastily or not at all. Broken readiness heuristics cause decision paralysis, writer's block, and procrastination.

\n

 

\n

\n

Given a set of known options, decision theory tells us that we can crunch some numbers to get an expected outcome value for each choice, and choose whichever gives the best expectation. We can treat putting off the decision as an extra option, and estimate its utility as we do for the other known options. However, it is so different from the other options that it makes more sense to treat it separately. Instead, we split every decision into two: Decide now? And decide what? Formally, you should put off making a decision if the probability of changing your mind times the expected improvement from doing so is greater than the cost of indecision; that is, if

\n

    P(change) * E(improvement) > E(indecision cost)

\n

To decide how long to run our decision-making processes, we need to consider both the decision itself and the state of our decision-making process. We have almost as many built-in heuristics for this meta-decision as we do for actual decisions. However, note that this is a slight oversimplification; by talking about the probability of changing your mind to a different decision and the expected improvement from doing so, we presuppose that a best candidate has already been selected, which means putting the main decision strictly before the meta-decision of whether to finalize it. It's not possible to finalize a decision without selecting a best candidate, but we may decide a decision is urgent or inconsequential enough to choose a random option.
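As a minimal sketch of that rule (the probability, improvement, and cost below are invented purely for illustration), the meta-decision reduces to a single comparison:

```python
def keep_deliberating(p_change, expected_improvement, indecision_cost):
    '''Put off finalizing the decision iff the expected gain from possibly
    changing your mind exceeds the cost of staying undecided:
        P(change) * E(improvement) > E(indecision cost)
    All three inputs are subjective estimates in the same utility units.'''
    return p_change * expected_improvement > indecision_cost

# Invented numbers: a 10% chance of finding an option worth 50 extra utility
# units, against an indecision cost of 2 units, says keep thinking; drop the
# chance to 1% and the rule says finalize now.
print(keep_deliberating(0.10, 50.0, 2.0))  # True  -> keep deliberating
print(keep_deliberating(0.01, 50.0, 2.0))  # False -> decide now
```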

\n

 

\n

Indecision cost is normally equal to the time spent, so it can be treated as a constant, except when something is likely to happen soon. Our heuristic representation of 'something is likely to happen soon' is urgency. Failing to finalize a decision normally means doing nothing, which would be very bad if the decision is what to do about an approaching tiger, or what to write in a paper that's coming due. The urgency heuristic is good at dealing with tigers, but bad at dealing with papers. Fortunately, we can hijack the urgency heuristic with thoughts and stimuli that we associate with urgency, such as counting down.

\n

Our brain's estimate of P(change), the probability that we'll think of or find something that makes us change our mind, comes out as bewilderment. We feel bewildered when a model doesn't fit in our working memory, or leaves important questions unanswered, indicating a high probability of available simplifications, confusion, and missed distinctions, all of which suggest that it's not yet time to decide. However, this heuristic keeps us from reaching any decision when we're given too many options. For example, in a study by Sheena Iyengar and Mark Lepper, shoppers were presented with samples of 6 or 24 flavors of jam; of the shoppers who saw 6 flavors, 30% later bought one, while only 3% of the shoppers who saw 24 flavors did. Many people take an extremely long time to order in restaurants, for much the same reason; and if you ever start considering the pros, cons and relative priorities of items on a lengthy todo list, then this heuristic will keep you from actually doing any of them for some time.

\n

To estimate E(improvement), the amount there is to gain by finding a new option or angle, we look at how good the currently available options are. The more readily we can generate objections to the current best option, the more room there is for improvement, and the more likely it is that we have something to gain by resolving those objections. The affect heuristic stands in for the amount of room for improvement; negative affect yields indecision, positive affect yields decisiveness. Unlike bewilderment, however, negative affect doesn't go away with time, and this can lead to hangups. First, when all available options are genuinely bad, it becomes hard to finalize a decision. Even when the options are okay, the halo effect can contaminate our intuitive judgment; negative feelings towards the decision itself, or anything related to it, can hijack the mechanisms meant to make us keep looking for better options.

\n

 

\n

So far, we've considered readiness heuristics in the context of decisions with multiple options, where there is some chance of finding a hidden option or decision-changing insight. For many decisions, like whether to start working on an assignment, there are only two options: the apparent best option (start now) and the default option (procrastinate). In this case, it is tempting to say that the readiness heuristics ought not to apply, since there's no possible benefit to waiting. But this is not quite true; it is possible, for example, that later events might render the assignment moot, or change its parameters. In any case, regardless of whether or not the readiness heuristics ought to apply, we can pretty easily observe that they do apply.

\n

One consequence of this is that negative affect towards a task, or anything related to it, not only induces procrastination but ought to induce procrastination. (This connection seemed so counterintuitive and so often harmful that I had trouble accepting that it could evolve without a deep theory to explain it. The connection between affect and procrastination, and how to modify the affect heuristic's output, is the central theme of PJ Eby's writings.)

\n

Heuristics don't just affect what we decide, but which decisions we make at all, and how long it takes us to make them. There are, almost certainly, not only more heuristics, but more heuristic types, specialized to particular kinds of decisions and meta-decisions, waiting to be discovered.

" } }, { "_id": "QCbBEk3WKZqgvLhs2", "title": "Rationality Quotes - June 2009", "pageUrl": "https://www.lesswrong.com/posts/QCbBEk3WKZqgvLhs2/rationality-quotes-june-2009", "postedAt": "2009-06-14T22:00:28.697Z", "baseScore": 12, "voteCount": 11, "commentCount": 175, "url": null, "contents": { "documentId": "QCbBEk3WKZqgvLhs2", "html": "
\n

(Since there didn't seem to be one for this month, and I just ran across a nice quote.)

\n

A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.

\n\n
" } }, { "_id": "vJZAChMjMpDrsMXtg", "title": "Explain explanations for choosing by choice", "pageUrl": "https://www.lesswrong.com/posts/vJZAChMjMpDrsMXtg/explain-explanations-for-choosing-by-choice", "postedAt": "2009-06-14T21:50:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "vJZAChMjMpDrsMXtg", "html": "

A popular explanation of why it’s worse to seem stupid than lazy is that laziness seems like more of a choice, and so not permanent. Similarly, it seems more admirable and desirable to have innate artistic talent than to try hard despite being less naturally good. Being unable to stand by and let a tragedy occur (‘I had no choice!’) is seen as more virtuous than making a calm, reasoned decision to avoid a tragedy.

\n
\n
On the other hand, people usually claim to prefer being liked for their personality over their looks. When asked they also relate it to their choice in the matter; it means more to be liked for something you ‘had a say in’. People are also proud of achievements they work hard on and decisions they make, and less proud of winning the lottery and forced moves.
\n
\n
The influence of apparent choice on our emotions is opposite in these cases, yet we often use it in the explanation for both. Is perceived level of choice really relevant to anything? If so, why does it explain effects in opposite directions? If not, why do we think of it so soon when questioned on these things?

" } }, { "_id": "Ro6QSQaKdhfpeeGpr", "title": "Why safety is not safe", "pageUrl": "https://www.lesswrong.com/posts/Ro6QSQaKdhfpeeGpr/why-safety-is-not-safe", "postedAt": "2009-06-14T05:20:55.442Z", "baseScore": 60, "voteCount": 83, "commentCount": 118, "url": null, "contents": { "documentId": "Ro6QSQaKdhfpeeGpr", "html": "

June 14, 3009

Twilight still hung in the sky, yet the Pole Star was visible above the trees, for it was a perfect cloudless evening.

\"We can stop here for a few minutes,\" remarked the librarian as he fumbled to light the lamp. \"There's a stream just ahead.\"

The driver grunted assent as he pulled the cart to a halt and unhitched the thirsty horse to drink its fill.

It was said that in the Age of Legends, there had been horseless carriages that drank the black blood of the earth, long since drained dry. But then, it was said that in the Age of Legends, men had flown to the moon on a pillar of fire. Who took such stories seriously?

The librarian did. In his visit to the University archive, he had studied the crumbling pages of a rare book in Old English, itself a copy a mere few centuries old, of a text from the Age of Legends itself; a book that laid out a generation's hopes and dreams, of building cities in the sky, of setting sail for the very stars. Something had gone wrong - but what? That civilization's capabilities had been so far beyond those of his own people. Its destruction should have taken a global apocalypse of the kind that would leave unmistakable record both historical and archaeological, and yet there was no trace. Nobody had anything better than mutually contradictory guesses as to what had happened. The librarian intended to discover the truth.

Forty years later he died in bed, his question still unanswered.

The earth continued to circle its parent star, whose increasing energy output could no longer be compensated by falling atmospheric carbon dioxide concentration. Glaciers advanced, then retreated for the last time; as life struggled to adapt to changing conditions, the ecosystems of yesteryear were replaced by others new and strange - and impoverished. All the while, the environment drifted further from that which had given rise to Homo sapiens, and in due course one more species joined the billions-long roll of the dead. For what was by some standards a little while, eyes still looked up at the lifeless stars, but there were no more minds to wonder what might have been.

\n
\n


Were I to submit the above to a science fiction magazine, it would be instantly rejected. It lacks a satisfying climax in which the hero strikes down the villain, for it has neither hero nor villain. Yet I ask your indulgence for a short time, for it may yet possess one important virtue: realism.

The reason we relate to stories with villains is easy enough to understand. In our ancestral environment, if a leopard or an enemy tribesman escaped your attention, failure to pass on your genes was likely. Violence may or may not have been the primary cause of death, depending on time and place; but it was the primary cause that you could do something about. You might die of malaria, you might die of old age, but there was little and nothing respectively that you could do to avoid these fates, so there was no selection pressure to be sensitive to them. There was certainly no selection pressure to be good at explaining the distant past or predicting the distant future.

Looked at that way, it's a miracle we possess as much general intelligence as we do; and certainly our minds have achieved a great deal, and promise more. Yet the question lurks in the background: are there phenomena, not in distant galaxies or inside the atomic nucleus beyond the reach of our eyes but in our world at the same scale we inhabit, nonetheless invisible to us because our minds are not so constructed as to perceive them?

In search of an answer to that question, we may ask another one: why is this document written in English instead of Chinese?

As late as the 15th century, this was by no means predictable. The great civilizations of Europe and China were roughly on par, the former having almost caught up over the previous few centuries; yet Chinese oceangoing ships were arguably still better than anything Europe could build. Fleets under Admiral Zheng He reached as far as East Africa. Perhaps China might have reached the Americas before Europeans did, and the shape of the world might have been very different.

The centuries had brought a share of disasters to both continents. War had ravaged the lands, laying waste whole cities. Plague had struck, killing millions, men, women and children buried in mass graves. Shifts of global air currents brought the specter of famine. Civilization had endured; more, it had flourished.

The force that put an end to the Chinese arc of progress was deadlier by far than all of these together, yet seemingly intangible as metaphysics. By the 16th century, the fleets had vanished, the proximate cause political; to this day there is no consensus on the underlying factors. It seems what saved Europe was its political disunity. Why was that lost in China? Some writers have blamed flat terrain, which others have disputed; some have blamed rice agriculture and its need for irrigation systems. Likely there were factors nobody has yet understood; perhaps we never will.

An entire future that might have been, was snuffed out by some terrible force compared to which war, plague and famine were mere pinpricks - and yet even with the benefit of hindsight, we still don't truly understand what it was.

Nor is this an isolated case. From the collapse of classical Mediterranean civilization to the divergent fates of the US and Argentina, whose prospects looked so similar as recently as the early 20th century, we find more terrible than any war or ordinary disaster are forces which operate unseen in plain sight and are only dimly understood even after the fact.

The saving grace has always been the outside: when one nation, one civilization, faltered, another picked up the torch and carried on; but with the march of globalization, there may soon be no more outside.

Unless of course we create a new one. Within this century, if we continue to make progress as quickly as possible, we may develop the technology to break our confinement, to colonize first the solar system and then the galaxy. And then our kind may truly be immortal, beyond the longest reach of the Grim Reaper, and love and joy and laughter be not outlived by the stars themselves.

If we continue to make progress as quickly as possible.

Yet at every turn, when risks are discussed, ten voices cry loudly about the violence that may be done with new technology for every one voice that quietly observes that we cannot afford to be without it, and we may not have as much time as we think we have. It is not that anyone is being intentionally selfish or dishonest. The critics believe what they are saying. It is that to the human mind, the dangers of progress are vivid even when imaginary; the dangers of its lack are scarcely perceptible even when real.

There are many reasons why we need more advanced technology, and we need it as soon as possible. Every year, more than fifty million people die for its lack, most in appalling suffering. But the one reason above all others is that the window of opportunity we are currently given may be the last step in the Great Filter, that we cannot know when it will close or if it does, whether it will ever open again.

Less Wrong is about bias, and the errors to which it leads us. I present then what may be the most lethal of all our biases: that we react instantly to the lesser death that comes in blood and fire, but the greater death that comes in the dust of time, is to our minds invisible.

And I ask that you remember, next time you contemplate alleged dangers of technology.

\n

 

" } }, { "_id": "pczHfyxmnFhtKthqR", "title": "Typical Mind and Politics", "pageUrl": "https://www.lesswrong.com/posts/pczHfyxmnFhtKthqR/typical-mind-and-politics", "postedAt": "2009-06-12T12:28:39.826Z", "baseScore": 60, "voteCount": 56, "commentCount": 133, "url": null, "contents": { "documentId": "pczHfyxmnFhtKthqR", "html": "

Yesterday, in The Terrible, Horrible, No Good, Very Bad Truth About Morality, Roko mentioned some good evidence that we develop an opinion first based on intuitions, and only later look for rational justifications. For example, people would claim incest was wrong because of worries like genetic defects or later harm, but continue to insist that incest was wrong even after all those worries had been taken away.

\r\n

Roko's examples take advantage of universal human feelings like the incest taboo. But if people started out with opposite intuitions, then this same mechanism would produce opinions that people hold very strongly and are happy to support with as many reasons and facts as you please, but which are highly resistant to real debate or to contradicting evidence.

\r\n

Sound familiar?

\r\n

But to explain politics with this mechanism, we'd need an explanation for why people's intuitions differed to begin with. We've already discussed some such explanations - self-serving biases, influence from family and community, et cetera - but today I want to talk about another possibility.

\r\n

\r\n

A few weeks back, I was discussing harms with Bill Swift on Overcoming Bias. In particular, I was arguing that one situation in which there was an open-and-shut case for government restriction of private activity on private property was nuisance noise. I argued that if you were making noise on your property, and I could hear it on my property, that I was being harmed by your actions and that there was clearly just as much a case for government intervention here as if you were firing flaming arrows at me from your property. I fully expected Bill to agree that this was obviously true but to have some reason why he didn't think it applied to our particular disagreement.

\r\n

Instead, to my absolute astonishment, Bill said that noise wasn't really a problem. He said he lived on a noisy property and had just stopped whining and gotten on with his life. I didn't really know how to react to this1, and ended up assuming either that he'd never lived in a really noisy place like I have, or that he was such a blighted ideologue that he was willing to completely contradict common sense in order to preserve his silly argument.

\r\n

In other words, I was assuming the person I was debating was either astonishingly stupid or willfully evil. And when my thoughts tend in that direction, it usually means I'm missing something.

\r\n

Luckily in this case I'd already written a long essay explaining my mistake in detail. In Generalizing From One Example,  I warned people against assuming everyone's mind is built the same way their own mind is. One particular example I gave was:

\r\n
\r\n

I can't deal with noise. If someone's being loud, I can't sleep, I can't study, I can't concentrate, I can't do anything except bang my head against the wall and hope they stop. I once had a noisy housemate. Whenever I asked her to keep it down, she told me I was being oversensitive and should just mellow out.

\r\n
\r\n


So it seems possible to me that I have an oversensitivity to noise and Bill has an undersensitivity to it. When someone around me is being noisy, my intuitions tell me this is extremely bad and needs to be stopped by any means necessary. And maybe Bill's intuitions tell him that this is a minor non-problem. I won't say that this is actually behind our disagreement on the issue - my guess is that Bill and I would disagree about government regulation of pollution from a factory as well - but I think it contributes and it makes our debate much less productive than it would have been otherwise.

\r\n

Let me give an example of one place I think a mind difference *is* behind a political opinion. In Money, The Unit of Caring, Eliezer complained that people were too willing to donate time to charity, and too unwilling to donate money to charity. He gave the example of his own experience, where he felt terrible every time he gave away money, but didn't mind a time commitment nearly as much. I fired back a response that this was completely foreign to me, because I am happy to give money to charity and often do it before I've even fully thought about what I'm doing, but will groan and make excuses whenever I'm asked to give away time. I also mentioned that this was a general tendency of mine: I have minimal aversion to monetary loss2, but wasting time makes me angry.

\r\n

A few months ago, Barack Obama proposed a plan (which he later decided against) to make every high school and college student volunteer a certain amount of time to charity. Although I usually like Obama, I wrote an absolutely scathing essay about how unbearably bad a policy this was. It was a good essay, it convinced a number of people, and I still agree with most of the points in it. But...

\r\n

...it was completely out of character for me. I'm the sort of person who heckles libertarians with \"Stop whining and just pay your damn taxes!\" Although I acknowledge that many government policies are inefficient, I tend to just note \"Hmmm, that government policy is suboptimal, it would be an interesting mental puzzle to figure out how to fix it\" rather than actually getting angry about it. This Obama proposal was kind of unique in the amount of antipathy it got from me.

\r\n

So here's my theory. My brain is organized in such a way that I get minimal negative feelings at the idea of money being taken away from me. We can even localize this anatomically - studies show that the insula is the part responsible for sending a pain signal whenever the loss of money is considered. So let's say I have a less-than-normally-active insula in this case. And I get a stronger than normal pain signal from wasted time. This explains why I prefer to donate money than time to my favorite charity.

\r\n

And it could also explain why I'm not a libertarian. One consequence of libertarianism is that you have every right to feel angry when you're taxed. But I don't feel angry, so the part of my brain that comes up with rational justifications for my feelings doesn't need to come up with a rational justification for why taxation is wrong. I do feel angry about being made to do extra work, so my brain adopted libertarian-type arguments in response to the community service proposal. I predict that if I lived in one of those feudal countries with a work levy rather than a tax, I'd be a libertarian, at least until the local knight heard my opinions and cut off my head.

\r\n

And I don't mean to pick on libertarians. I know different people have completely different emotional responses to the idea of other people suffering. For example, I can't watch documentaries on (say) the awful lives of mine workers, because they make me too upset. Other people watch them, think they're great documentaries, and then spend the next hour talking about how upset they made them. And other people watch them and then ask what's for dinner. You think that affects people's opinions on socialism much?

\r\n

Imagine a proposal to institute a tax that would raise money for some effort to help mine workers in some way. Upon hearing of it, different people would have an emotional burst of pain of a certain size at the thought of the tax, and an emotional burst of pain of a different size at the thought of the mine workers. Neither of these bursts of pain would be proportional to the actual size of the problem as measured in some sort of ideal utilon currency (note especially scope insensitivity). But the brain very often makes decisions by comparing those two bursts of pain (see How We Decide or just the insula article above) and then comes up with reasons for the decision. So all the important issues like economic freedom and labor policy and maximizing utility get subordinated to whether you're secreting more neurotransmitters in response to money loss or to images of sad coal miners.

\r\n

If this theory were true, we would expect to find neurological differences in people of different political opinions. Ta da! A long list of neurological findings that differ in liberals and conservatives. Linking the startle reflex and the disgust reaction to the policies favored by these groups is left as a (very easy) exercise for the reader3.

\r\n

This may require some moderation of our political opinions on issues where we think we're far from the neurological norm. For example, I am no longer so confident that noise is such a big problem for everyone that we would all be better off if there were strict regulations on it. But I hope Bill will consider that some people may be so sensitive to noise that not everyone can just shrug it off, and so there may be a case for at least some regulation of it. Likewise, even though I don't mind taxes too much, if my goal is a society where most people are happy I need to consider that a higher tax rate will decrease other people's happiness much more quickly than it decreases mine.

\r\n

Other than that, it's just a general message of pessimism. If people's political opinions come partly from unchangeable anatomy, it makes the program of overcoming bias in politics a lot harder, and the possibility of coming up with arguments good enough to change someone else's opinion even more remote.

\r\n

Footnotes

\r\n

1) I am suitably ashamed of my appeal to pathos; my only defense is that it is entirely true, that I have only just finished moving, and that this post is hopefully a more appropriate response.

\r\n

2) Actually, it's more complicated than this, because I agonize over spending money when shopping. I seem to use different thought processes for normal budgeting, and I expect there are many processes going on more complex than just high versus low aversion to money loss.

\r\n

3) Possibly too easy. It's easy to go from that data to an explanation of why conservatives worry more about terrorism, but then why don't they also worry more about global warming?

" } }, { "_id": "i6npKoxQ2QALPCbcP", "title": "If it looks like utility maximizer and quacks like utility maximizer...", "pageUrl": "https://www.lesswrong.com/posts/i6npKoxQ2QALPCbcP/if-it-looks-like-utility-maximizer-and-quacks-like-utility", "postedAt": "2009-06-11T18:34:35.971Z", "baseScore": 20, "voteCount": 22, "commentCount": 24, "url": null, "contents": { "documentId": "i6npKoxQ2QALPCbcP", "html": "

...it's probably an adaptation executer.

\n

We often assume agents are utility maximizers. We even call this \"rationality\". On the other hand, in our recent experiment nobody managed to figure out even the approximate shape of their utility function, and we know of a large number of ways in which agents deviate from utility maximization. How come?

\n

One explanation is fairly obvious. Nature contains plenty of selection processes - evolution and markets most obviously, but plenty of others, like competition between Internet forums for users, or between politicians trying to get elected. In such selection processes a certain property - fitness - behaves a lot like a utility function. To a good approximation, traits that give agents higher expected fitness survive and proliferate. As a result, agents that survive such selection processes react to inputs quite reliably as if they were optimizing some utility function - namely, the fitness of the underlying selection process.
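As a toy illustration of this point, here is a minimal Python sketch; the agents, inputs, and fitness function are all invented for the example. After a few rounds of selection over randomly generated policies, the surviving population acts as if it were maximizing the selection process's fitness:

```python
import random

# Toy selection process: an agent is just a lookup table from a binary input
# to a binary action, and fitness rewards matching the input.  No agent wants
# anything, but after selection the survivors behave as if they were
# maximizing the selection process's fitness function.

def random_policy():
    return {0: random.choice([0, 1]), 1: random.choice([0, 1])}

def fitness(policy, trials=100):
    inputs = [random.choice([0, 1]) for _ in range(trials)]
    return sum(1 for x in inputs if policy[x] == x)

population = [random_policy() for _ in range(200)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:100]                            # selection
    population = survivors + [dict(p) for p in survivors]   # reproduction by copying

best = {0: 0, 1: 1}  # the fitness-maximizing policy
print(sum(1 for p in population if p == best), 'of', len(population), 'agents act like maximizers')
```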

\n

If that's the whole story, we can conclude a few things:

\n" } }, { "_id": "t47TeAbBYxYgqDGQT", "title": "Let's reimplement EURISKO!", "pageUrl": "https://www.lesswrong.com/posts/t47TeAbBYxYgqDGQT/let-s-reimplement-eurisko", "postedAt": "2009-06-11T16:28:06.637Z", "baseScore": 23, "voteCount": 42, "commentCount": 166, "url": null, "contents": { "documentId": "t47TeAbBYxYgqDGQT", "html": "

In the early 1980s Douglas Lenat wrote EURISKO, a program Eliezer called \"[maybe] the most sophisticated self-improving AI ever built\". The program reportedly had some high-profile successes in various domains, like becoming world champion at a certain wargame or designing good integrated circuits.

\n

Despite requests Lenat never released the source code. You can download an introductory paper: \"Why AM and EURISKO appear to work\" [PDF]. Honestly, reading it leaves a programmer still mystified about the internal workings of the AI: for example, what does the main loop look like? Researchers supposedly answered such questions in a more detailed publication, \"EURISKO: A program that learns new heuristics and domain concepts.\" Artificial Intelligence (21): pp. 61-98. I couldn't find that paper available for download anywhere, and being in Russia I found it quite tricky to get a paper version. Maybe you Americans will have better luck with your local library? And to the best of my knowledge no one ever succeeded in (or even seriously tried) confirming Lenat's EURISKO results.

\n

Today in 2009 this state of affairs looks laughable. A 30-year-old pivotal breakthrough in a large and important field... that never even got reproduced. What if it was a gigantic case of Clever Hans? How do you know? You're supposed to be a scientist, little one.

\n

So my proposal to the LessWrong community: let's reimplement EURISKO!

\n

We have some competent programmers here, don't we? We have open source tools and languages that weren't around in 1980. We can build an open source implementation available for all to play. In my book this counts as solid progress in the AI field.

\n

Hell, I'd do it on my own if I had the goddamn paper.

\n

Update: RichardKennaway has put Lenat's detailed papers up online, see the comments.

" } }, { "_id": "B5K3hg8FgrMDHuXjH", "title": "The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It", "pageUrl": "https://www.lesswrong.com/posts/B5K3hg8FgrMDHuXjH/the-terrible-horrible-no-good-very-bad-truth-about-morality", "postedAt": "2009-06-11T12:31:02.904Z", "baseScore": 74, "voteCount": 75, "commentCount": 142, "url": null, "contents": { "documentId": "B5K3hg8FgrMDHuXjH", "html": "

Joshua Greene has a PhD thesis called The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It. What is this terrible truth? The essence of this truth is that many, many people (probably most people) believe that their particular moral (and axiological) views on the world are objectively true - for example, that anyone who disagrees with the statement \"black people have the same value as any other human beings\" has either committed an error of logic or got some empirical fact wrong, in the same way that people who claim that the earth was created 6000 years ago are objectively wrong.

\n

To put it another way, Greene's contention is that our entire way of talking about ethics - the very words that we use - force us into talking complete nonsense (often in a very angry way) about ethics. As a simple example, consider the use of the words in any standard ethical debate - \"abortion is murder\", \"animal suffering is just as bad as human suffering\" - these terms seem to refer to objective facts; \"abortion is murder\" sounds rather like \"water is a solvent!\". I urge readers of Less Wrong to put in the effort of reading a significant part of Greene's long thesis starting at chapter 3: Moral Psychology and Projective Error, considering the massively important repercussions he claims his ideas could have:

\n
\n

In this essay I argue that ordinary moral thought and language is, while very natural, highly counterproductive and that as a result we would be wise to change the way we think and talk about moral matters. First, I argue on metaphysical grounds against moral realism, the view according to which there are first order moral truths. Second, I draw on principles of moral psychology, cognitive science, and evolutionary theory to explain why moral realism appears to be true even though it is not. I then argue, based on the picture of moral psychology developed herein, that realist moral language and thought promotes misunderstanding and exacerbates conflict. I consider a number of standard views concerning the practical implications of moral anti-realism and reject them. I then sketch and defend a set of alternative revisionist proposals for improving moral discourse, chief among them the elimination of realist moral language, especially deontological language, and the promotion of an anti-realist utilitarian framework for discussing moral issues of public concern. I emphasize the importance of revising our moral practices, suggesting that our entrenched modes of moral thought may be responsible for our failure to solve a number of global social problems.

\n
\n

\n

As an accessible entry point, I have decided to summarize what I consider to be Greene's most important points in this post. I hope he doesn't mind - I feel that spreading this message is sufficiently urgent to justify reproducing large chunks of his dissertation. Starting at page 142:

\n
\n

In the previous chapter we concluded, in spite of common sense, that moral realism is false. This raises an important question: How is it that so many people are mistaken about the nature of morality? To become comfortable with the fact that moral realism is false we need to understand how moral realism can be so wrong but feel so right. ...

\n
\n
\n

The central tenet of projectivism is that the moral properties we find (or think we find) in things in the world (e.g. moral wrongness) are mind-dependent in a way that other properties, those that we’ve called “value-neutral” (e.g. solubility in water), are not. Whether or not something is soluble in water has nothing to do with human psychology. But, say projectivists, whether or not something is wrong (or “wrong”) has everything to do with human psychology....

\n
\n
\n

Projectivists maintain that our encounters with the moral world are, at the very least, somewhat misleading. Projected properties tend to strike us as unprojected. They appear to be really “out there,” in a way that they, unlike typical value neutral properties, are not. ...

\n
\n
\n

The respective roles of intuition and reasoning are illuminated by considering people’s reactions to the following story:

\n

\"Julie and Mark are brother and sister. They are travelling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decided that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love but decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. What do you think about that, was it OK for them to make love?\"

\n

Haidt (2001, pg. 814) describes people’s responses to this story as follows: Most people who hear the above story immediately say that it was wrong for the siblings to make love, and they then set about searching for reasons. They point out the dangers of inbreeding, only to remember that Julie and Mark used two forms of birth control. They next try to argue that Julie and Mark could be hurt, even though the story makes it clear that no harm befell them. Eventually many people say something like

\n

“I don’t know, I can’t explain it, I just know it’s wrong.”

\n

This moral question is carefully designed to short-circuit the most common reason people give for judging an action to be wrong, namely harm to self or others, and in so doing it reveals something about moral psychology, at least as it operates in cases such at these. People’s moral judgments in response to the above story tend to be forceful, immediate, and produced by an unconscious process (intuition) rather than through the deliberate and effortful application of moral principles (reasoning). When asked to explain why they judged as they did, subjects typically gave reasons. Upon recognizing the flaws in those reasons, subjects typically stood by their judgments all the same, suggesting that the reasons they gave after the fact in support their judgments had little to do with the process that produced those judgments. Under ordinary circumstances reasoning comes into play after the judgment has already been reached in order to find rational support for the preordained judgment. When faced with a social demand for a verbal justification, one becomes a lawyer trying to build a case rather than a judge searching for the truth. ...

\n
\n

 

\n
\n

The Illusion of Rationalist Psychology (p. 197)

\n

In Sections 3.2-3.4 I developed an explanation for why moral realism appears to be true, an explanation featuring the Humean notion of projectivism according to which we intuitively see various things in the world as possessing moral properties that they do not actually have. This explains why we tend to be realists, but it doesn’t explain, and to some extent is at odds with, the following curious fact. The social intuitionist model is counterintuitive. People tend to believe that moral judgments are produced by reasoning even though this is not the case. Why do people make this mistake? Consider, once again, the case of Mark and Julie, the siblings who decided to have sex. Many subjects, when asked to explain why Mark and Julie’s behavior is wrong, engaged in “moral dumbfounding,” bumbling efforts to supply reasons for their intuitive judgments. This need not have been so. It might have turned out that all the subjects said things like this right off the bat:

\n

“Why do I say it’s wrong? Because it’s clearly just wrong. Isn’t that plain to see? It’s as if you’re putting a lemon in front of me and asking me why I say it’s yellow. What more is there to say?”

\n

Perhaps some subjects did respond like this, but most did not. Instead, subjects typically felt the need to portray their responses as products of reasoning, even though they generally discovered (often with some embarrassment) that they could not easily supply adequate reasons for their judgments. On many occasions I’ve asked people to explain why they say that it’s okay to turn the trolley onto the other tracks but not okay to push someone in front of the trolley. Rarely do they begin by saying, “I don’t know why. I just have an intuition that tells me that it is.” Rather, they tend to start by spinning the sorts of theories that ethicists have devised, theories that are nevertheless notoriously difficult to defend. In my experience, it is only after a bit of moral dumbfounding that people are willing to confess that their judgments were made intuitively.

\n

Why do people insist on giving reasons in support of judgments that were made with great confidence in the absence of reasons? I suspect it has something to do with the custom complexes in which we Westerners have been immersed since childhood. We live in a reason-giving culture. Western individuals are expected to choose their own way, and to do so for good reason. American children, for example, learn about the rational design of their public institutions; the all important “checks and balances” between the branches of government, the judicial system according to which accused individuals have a right to a trial during which they can, if they wish, plead their cases in a rational way, inevitably with the help of a legal expert whose job it is to make persuasive legal arguments, etc. Westerners learn about doctors who make diagnoses and scientists who, by means of experimentation, unlock nature’s secrets. Reasoning isn’t the only game in town, of course. The American Declaration of Independence famously declares “these truths to be self-evident,” but American children are nevertheless given numerous reasons for the decisions of their nation’s founding fathers, for example, the evils of absolute monarchy and the injustice of “taxation without representation.” When Western countries win wars they draft peace treaties explaining why they, and not their vanquished foes, were in the right and set up special courts to try their enemies in a way that makes it clear to all that they punish only with good reason. Those seeking public office make speeches explaining why they should be elected, sometimes as parts of organized debates. Some people are better at reasoning than others, but everyone knows that the best people are the ones who, when asked, can explain why they said what they said and did what they did.

\n

With this in mind, we can imagine what might go on when a Westerner makes a typical moral judgment and is then asked to explain why he said what he said or how he arrived at that conclusion. The question is posed, and he responds intuitively. As suggested above, such intuitive responses tend to present themselves as perceptual. The subject is perhaps aware of his “gut reaction,” but he doesn’t take himself to have merely had a gut reaction. Rather, he takes himself to have detected a moral property out in the world, say, the inherent wrongness in Mark and Julie’s incestuous behavior or in shoving someone in front of a moving train. The subject is then asked to explain how he arrived at his judgment. He could say, “I don’t know. I answered intuitively,” and this answer would be the most accurate answer for nearly everyone. But this is not the answer he gives because he knows after a lifetime of living in Western culture that “I don’t know how I reached that conclusion. I just did. But I’m sure it’s right,” doesn’t sound like a very good answer. So, instead, he asks himself, “What would be a good reason for reaching this conclusion?” And then, drawing on his rich experience with reason-giving and -receiving, he says something that sounds plausible both as a causal explanation of and justification for his judgment: “It’s wrong because their children could turn out to have all kinds of diseases,” or, “Well, in the first case the other guy is, like, already involved, but in the case where you go ahead and push the guy he’s just there minding his own business.” People’s confidence that their judgments are objectively correct combined with the pressure to give a “good answer” leads people to produce these sorts of post-hoc explanations/justifications. Such explanations need not be the results of deliberate attempts at deception. The individuals who offer them may themselves believe that the reasons they’ve given after the fact were really their reasons all along, what they “really had in mind” in giving those quick responses. ...

\n
\n
\n

My guess is that even among philosophers particular moral judgments are made first and reasoned out later. In my experience, philosophers are often well aware of the fact that their moral judgments are the results of intuition. As noted above, it’s commonplace among ethicists to think of their moral theories as attempts to organize pre-existing moral intuitions. The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we’ve yet to discover. For example, philosophers are as likely as anyone to think that there must be “some good reason” for why it’s okay to turn the trolley onto the other set of tracks but not okay to push the person in front of the trolley, where a “good reason,” or course, is a piece of moral theory with justificatory force and not a piece of psychological description concerning patterns in people’s emotional responses.

\n
\n

One might well ask: why does any of this indicate that moral propositions have no rational justification? The arguments presented here show fairly conclusively that our moral judgements are instinctive, subconscious, evolved features. Evolution gave them to us. But readers of Eliezer's material on Overcoming Bias will be well aware of the character of evolved solutions: they're guaranteed to be a mess. Why should evolution have happened to have given us exactly those moral instincts that give the same conclusions as would have been produced by (say) great moral principle X? (X = the golden rule, or X = hedonistic utilitarianism, or X = negative utilitarianism, etc).

\n

Expecting evolved moral instincts to conform exactly to some simple unifying principle is like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have some low complexity description.

\n

Now I can imagine a \"from first principles\" argument producing an objective morality that has some simple description - I can imagine starting from only simple facts about agenthood and deriving Kant's Golden Rule as the one objective moral truth. But I cannot seriously entertain the prospect of a \"from first principles\" argument producing the human moral mess. No way. It was this observation that finally convinced me to abandon my various attempts at objective ethics.

" } }, { "_id": "q2P96Tve4fwP2b56g", "title": "Less wrong economic policy", "pageUrl": "https://www.lesswrong.com/posts/q2P96Tve4fwP2b56g/less-wrong-economic-policy", "postedAt": "2009-06-09T20:11:56.083Z", "baseScore": 7, "voteCount": 12, "commentCount": 32, "url": null, "contents": { "documentId": "q2P96Tve4fwP2b56g", "html": "

Yesterday I heard an interesting story on the radio about US President Obama's pick to head the Office of Information and Regulatory Affairs, Cass Sunstein.  I recommend checking out the story, but here are a few key excerpts.

\n
\n

Cass Sunstein, President Obama's pick to head the Office of Information and Regulatory Affairs, is a vocal supporter of [...] economic policy that shapes itself around human psychology. Sunstein is just one of a number of high-level appointees now working in the Obama administration who favors this kind of approach.

\n

[...]

\n

Through their research, Kahneman and Tversky identified dozens of these biases and errors in judgment, which together painted a certain picture of the human animal. Human beings, it turns out, don't always make good decisions, and frequently the choices they do make aren't in their best interest.

\n

[...]

\n\"Merely accepting the fact that people do not necessarily make the best decisions for themselves is politically very explosive. The moment that you admit that, you have to start protecting people,\" Kahneman says.\n

[...]

\n

The Obama administration believes it needs to shape policy in a way that will keep us all from getting hit by trucks — health care trucks, financial trucks, trucks that come from every direction and affect every aspect of our lives.

\n
\n

At the risk of starting a discussion that will be wrecked by political wrestling, I'm always hopeful when I hear about governments applying what we learn from science to policy.  Not to say that this always generates good policies, but it does generate the best policies we have reason to believe will be good (so long as you ignore the issue of actual politics that might get in the way).

" } }, { "_id": "STZBqw7SAWwjjna6k", "title": "You can't believe in Bayes", "pageUrl": "https://www.lesswrong.com/posts/STZBqw7SAWwjjna6k/you-can-t-believe-in-bayes", "postedAt": "2009-06-09T18:03:20.808Z", "baseScore": 15, "voteCount": 31, "commentCount": 60, "url": null, "contents": { "documentId": "STZBqw7SAWwjjna6k", "html": "

Well, you can.  It's just oxymoronic, or at least ironic.  Because belief is contrary to the Bayesian paradigm.

\n

You use Bayesian methods to choose an action.  You have a set of observations, and assign probabilities to possible outcomes, and choose an action.

\n

Belief in an outcome N means that you set p(N) ≈ 1 if p(N) > some threshold.  It's a useful computational shortcut.  But when you use it, you're not treating N in a Bayesian manner.  When you categorize things into beliefs/nonbeliefs, and then act based on whether you believe N or not, you are throwing away the information contained in the probability judgement, in order to save computation time.  It is especially egregious if the threshold you use to categorize things into beliefs/nonbeliefs is relatively constant, rather than being a function of (expected value of N) / (expected value of not N).

\n

If your neighbor took out fire insurance on his house, you wouldn't infer that he believed his house was going to burn down.  And if he took his umbrella to work, you wouldn't (I hope) infer that he believed it was going to rain.
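Here is a minimal sketch of that contrast in code (my own illustration, with invented probabilities and costs rather than anything from the post): acting on a thresholded belief throws away exactly the information an expected-value calculation uses.

```python
# A toy comparison of deciding via a belief threshold versus via expected value.

def decide_by_belief(p_rain, threshold=0.5):
    # Collapse the probability into a belief, then act on the belief.
    believes_rain = p_rain > threshold          # the probability itself is discarded here
    return "take umbrella" if believes_rain else "leave umbrella"

def decide_by_expected_value(p_rain, cost_umbrella=1.0, cost_soaked=20.0):
    # Act on the probability directly, weighing the stakes.
    ev_take = -cost_umbrella                    # small, certain inconvenience
    ev_leave = -p_rain * cost_soaked            # large loss, but only if it rains
    return "take umbrella" if ev_take > ev_leave else "leave umbrella"

p = 0.2  # a 20% chance of rain
print(decide_by_belief(p))          # "leave umbrella": 0.2 is below the belief threshold
print(decide_by_expected_value(p))  # "take umbrella": -1 beats an expected loss of -4
```

The particular numbers are not the point; the point is that the thresholded rule gives the same answer for a 2% and a 49% chance of rain, while the expected-value rule responds to both the probability and the stakes.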

\n

Yet when it comes to decisions on a national scale, people cast things in terms of belief.  Do you believe North Korea will sell nuclear weapons to Syria?  That's the wrong question when you're dealing with a country that has, let's say, a 20% chance of building weapons that will be used to level at least ten major US cities.

\n

Or flash back to the 1990s, before there was a scientific consensus that global warming was real.  People would often say, \"I don't believe in global warming.\"  And interviews with scientists tried to discern whether they did or did not believe in global warming.

\n

It's the wrong question.  The question is what steps are worth taking according to your assigned probabilities and expected-value computations.

\n

A scientist doesn't have to believe in something to consider it worthy of study.  Do you believe an asteroid will hit the Earth this century?  Do you believe we can cure aging in your lifetime?  Do you believe we will have a hard-takeoff singularity?  If a low-probability outcome can have a high impact on expected utility, you've already gone wrong when you ask the question.

" } }, { "_id": "NkDJKw7diP2krwtko", "title": "Expected futility for humans", "pageUrl": "https://www.lesswrong.com/posts/NkDJKw7diP2krwtko/expected-futility-for-humans", "postedAt": "2009-06-09T12:04:29.306Z", "baseScore": 14, "voteCount": 17, "commentCount": 53, "url": null, "contents": { "documentId": "NkDJKw7diP2krwtko", "html": "

Previously, Taw published an article entitled \"Post your utility function\", after having tried (apparently unsuccessfully) to work out \"what his utility function was\". I suspect that there is something to be gained by trying to work out what your priorities are in life, but I am not sure that people on this site are helping themselves very much by assigning dollar values, probabilities and discount rates. If you haven't done so already, you can learn why people like the utility function formalism on wikipedia. I will say one thing about the expected utility theorem, though. An assignment of expected utilities to outcomes is (modulo renormalizing utilities by positive affine transformations) equivalent to a preference over probabilistic combinations of outcomes; utilities are NOT properties of the outcomes you are talking about, but properties of your mind. Goodness, like confusion, is in the mind.
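As a quick illustration of the point about affine transformations (a sketch of my own, with invented outcomes and numbers): rescaling or shifting a utility function never changes which lottery is preferred, which is why the particular numbers carry no information beyond the preference ordering they encode.

```python
# Positive affine rescalings of a utility function leave every choice unchanged.

outcomes = {"stay home": 0.0, "picnic in rain": -2.0, "picnic in sun": 10.0}

def expected_utility(lottery, utility):
    # lottery: dict mapping outcome -> probability
    return sum(p * utility[o] for o, p in lottery.items())

lottery_a = {"stay home": 1.0}                             # staying home for certain
lottery_b = {"picnic in rain": 0.3, "picnic in sun": 0.7}  # a risky picnic

for a, b in [(1.0, 0.0), (3.0, 7.0), (0.5, -100.0)]:       # u -> a*u + b with a > 0
    u = {o: a * v + b for o, v in outcomes.items()}
    prefers_picnic = expected_utility(lottery_b, u) > expected_utility(lottery_a, u)
    print(a, b, "prefers picnic" if prefers_picnic else "prefers staying home")
# Every rescaling prints "prefers picnic": only the ordering is real.
```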

\n

In this article, I will claim that trying to run your life based upon expected utility maximization is not a good idea, and thus asking \"what your utility function is\" is also not a useful question to try and answer.

\n

There are many problems with using expected utility maximization to run your life: firstly, the size of the set of outcomes that one must consider in order to rigorously apply the theory is ridiculous: one must consider all probabilistic mixtures of possible histories of the universe from now to whatever your time horizon is. Even identifying macroscopically identical histories, this set is huge. Humans naturally describe world-histories in terms of deontological rules, such as \"if someone is nice to me, I want to be nice back to them\", or \"if I fall in love, I want to treat my partner well (unless s/he betrays me)\", \"I want to achieve something meaningful and be well-renowned with my life\", \"I want to help other people\". In order to translate these deontological rules into utilities attached to world-histories, you would have to assign a dollar utility to every possible world-history with all variants of who you fall in love with, where you settle, what career you have, what you do with your friends, etc, etc. Describing your function as a linear sum of independent terms will not work in general because, for example, whether accounting is a good career for you will depend upon the kind of personal life you want to live (i.e. different aspects of your life interact). You can, of course, emulate deontological rules such as \"I want to help other people\" in a complex utility function - that is what the process of enumerating human-distinguishable world-histories is - but it is nowhere near as efficient a representation as the usual deontological rules of thumb that people live by, particularly given that the human mind is well-adapted to representing deontological preferences (such as \"I must be nice to people\" - as was discussed before, there is a large amount of hidden complexity behind this simple english sentence) and very poor at representing and manipulating floating point numbers.

\n

Toby Ord's BPhil thesis has some interesting critiques of naive consequentialism, and would probably provide an entry point to the literature:

\n
\n

‘An uncomplicated illustration is provided by the security which lovers or friends produce in one another by being guided, and being seen to be guided, by maxims of virtually unconditional fidelity. Adherence to such maxims is justified by this prized effect, since any retreat from it will undermine the effect, being inevitably detectable within a close relationship. This is so whether the retreat takes the form of intruding calculation or calculative monitoring. The point scarcely needs emphasis.’

\n
\n

There are many other pitfalls: One is thinking that you know what is of value in your life, and forgetting what the most important things are (such as youth, health, friendship, family, humour, a sense of personal dignity, a sense of moral pureness for yourself, acceptance by your peers, social status, etc) because they've always been there so you took them for granted. Another is that since we humans are under the influence of a considerable number of delusions about the nature of our own lives, (in particular: that our actions are influenced exclusively by our long-term plans rather than by the situations we find ourselves in or our base animal desires) we often find that our actions have unintended consequences. Human life is naturally complicated enough that this would happen anyway, but attempting to optimize your life whilst under the influence of systematic delusions about the way it really works is likely to make it worse than if you just stick to default behaviour.

\n


What, then, is the best decision procedure for deciding how to improve your life? Certainly I would steer clear of dollar values and expected utility calculations, because this formalism is a huge leap away from our intuitive decision procedure. It seems wiser to me to make small incremental changes to your decision procedure for getting things done. For example, if you currently decide what to do based completely upon your whims, consider making a vague list of goals in your life (with no particular priorities attached) and updating your progress on them. If you already do this, consider brainstorming for other goals that you might have ignored, and then attach priorities based upon the assumption that you will certainly achieve or not achieve each of these goals, ignoring what probabilistic mixtures you would accept (because your mind probably won't be able to handle the probabilistic aspect in a numerical way anyway).

" } }, { "_id": "RPXBjC5DjeXEJRbMw", "title": "The Aumann's agreement theorem game (guess 2/3 of the average)", "pageUrl": "https://www.lesswrong.com/posts/RPXBjC5DjeXEJRbMw/the-aumann-s-agreement-theorem-game-guess-2-3-of-the-average", "postedAt": "2009-06-09T07:29:02.237Z", "baseScore": 17, "voteCount": 33, "commentCount": 156, "url": null, "contents": { "documentId": "RPXBjC5DjeXEJRbMw", "html": "

I'd like to play a game with you. Send me, privately, a real number between 0 and 100, inclusive. (No funny business. If you say \"my age\", I'm going to throw it out.) The winner of this game is the person who, after a week, guesses the number closest to 2/3 of the average guess. I will reveal the average guess, and will confirm the winner's claims to have won, but I will reveal no specific guesses.

\n

Suppose that you're a rational person. You also know that everyone else who plays this game is rational, you know that they know that, they know that you know that, and so on. Therefore, you conclude that the best guess is some number P. Since P is the rational guess to make, everyone will guess P, and so the best guess to make is P*2/3. This gives an equation that we can solve to get P = 0.
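That reasoning can be sketched in a few lines of code (mine, not the author's): start from any common guess and repeatedly best-respond with 2/3 of it, and the process collapses to the fixed point of P = (2/3)P, namely 0.

```python
# Iterated best response in the "guess 2/3 of the average" game, assuming
# everyone reasons the same way. The starting guess is arbitrary.

guess = 50.0
for _ in range(100):          # each round: "if everyone guesses g, I should guess (2/3)g"
    guess = (2.0 / 3.0) * guess
print(guess)                  # ~1e-16, i.e. effectively the fixed point P = 0
```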

\n

I propose that this game be used as a sort of test to see how well Aumann's agreement theorem applies to a group of people. The key assumption the theorem makes--which, as taw points out, is often overlooked--is that the group members are all rational and honest and also have common knowledge of this. This same assumption implies that the average guess will be 0. The farther from the truth this assumption is, the farther the average guess is going to be from 0, and the farther Aumann's agreement theorem is from applying to the group.

\n

Update (June 20): The game is finished; sorry for the delay in getting the results. The average guess was about 13.235418197890148 (a number which probably contains as much entropy as its length), meaning that the winning guess is the one closest to 8.823612131926765. This number appears to be significantly below the number typical for groups of ordinary people, but not dramatically so. 63% of guesses were too low, indicating that people were overall slightly optimistic about the outcome (if you interpret lower as better). Anyway, I will notify the winner right now.

" } }, { "_id": "iQi7roeLMEriey3ty", "title": "London Rationalist Meetups bikeshed painting thread", "pageUrl": "https://www.lesswrong.com/posts/iQi7roeLMEriey3ty/london-rationalist-meetups-bikeshed-painting-thread", "postedAt": "2009-06-08T01:56:09.381Z", "baseScore": 4, "voteCount": 5, "commentCount": 10, "url": null, "contents": { "documentId": "iQi7roeLMEriey3ty", "html": "

Something that came up at each of our three meetups so far was that people want more say in things like format, place, and meeting times.

\n

Currently these are:

\n\n

But these were just the first points we hit in the optimization space. They work, but that doesn't mean there isn't something that could work better.

\n

So everyone who wants to discuss them, here's the place.

" } }, { "_id": "qij9v3YqPfyur2PbX", "title": "indexical uncertainty and the Axiom of Independence", "pageUrl": "https://www.lesswrong.com/posts/qij9v3YqPfyur2PbX/indexical-uncertainty-and-the-axiom-of-independence", "postedAt": "2009-06-07T09:18:48.891Z", "baseScore": 26, "voteCount": 28, "commentCount": 79, "url": null, "contents": { "documentId": "qij9v3YqPfyur2PbX", "html": "

I’ve noticed that the Axiom of Independence does not seem to make sense when dealing with indexical uncertainty, which suggests that Expected Utility Theory may not apply in situations involving indexical uncertainty. But Googling for \"indexical uncertainty\" in combination with either \"independence axiom\" or “axiom of independence” gives zero results, so either I’m the first person to notice this, I’m missing something, or I’m not using the right search terms. Maybe the LessWrong community can help me figure out which is the case.

The Axiom of Independence says that for any A, B, C, and p, you prefer A to B if and only if you prefer p A + (1-p) C to p B + (1-p) C.  This makes sense if p is a probability about the state of the world. (In the following, I'll use “state” and “possible world” interchangeably.) In that case, what it’s saying is that what you prefer (e.g., A to B) in one possible world shouldn’t be affected by what occurs (C) in other possible worlds. Why should it, if only one possible world is actual?

In Expected Utility Theory, for each choice (i.e. option) you have, you iterate over the possible states of the world, compute the utility of the consequences of that choice given that state, then combine the separately computed utilities into an expected utility for that choice. The Axiom of Independence is what makes it possible to compute the utility of a choice in one state independently of its consequences in other states.
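A minimal sketch of that computation (with invented states, probabilities and utilities of my own): each choice's utility is evaluated state by state and only then averaged, and nothing in the table lets the consequences of a choice in one state depend on what happens in another state, which is exactly the independence at issue.

```python
# Expected utility with state-by-state (independent) consequences.

p_states = {"world 1": 0.4, "world 2": 0.6}

# utility_of_consequence[choice][state], each entry computed independently
utility_of_consequence = {
    "A": {"world 1": 10.0, "world 2": 2.0},
    "B": {"world 1": 7.0,  "world 2": 5.0},
}

def expected_utility(choice):
    return sum(p_states[s] * utility_of_consequence[choice][s] for s in p_states)

best = max(utility_of_consequence, key=expected_utility)
print({c: round(expected_utility(c), 2) for c in utility_of_consequence}, "->", best)
# {'A': 5.2, 'B': 5.8} -> B
```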

But what if p represents an indexical uncertainty, which is uncertainty about where (or when) you are in the world?  In that case, what occurs at one location in the world can easily interact with what occurs at another location, either physically, or in one’s preferences. If there is physical interaction, then “consequences of a choice at a location” is ill-defined. If there is preferential interaction, then “utility of the consequences of a choice at a location” is ill-defined. In either case, it doesn’t seem possible to compute the utility of the consequences of a choice at each location separately and then combine them into a probability-weighted average.

Here’s another way to think about this. In the expression “p A + (1-p) C” that’s part of the Axiom of Independence, p was originally supposed to be the probability of a possible world being actual and A denotes the consequences of a choice in that possible world. We could say that A is local with respect to p. What happens if p is an indexical probability instead? Since there are no sharp boundaries between locations in a world, we can’t redefine A to be local with respect to p. And if A still denotes the global consequences of a choice in a possible world, then “p A + (1-p) C” would mean two different sets of global consequences in the same world, which is nonsensical.

If I’m right, the notion of a “probability of being at a location” will have to acquire an instrumental meaning in an extended decision theory. Until then, it’s not completely clear what people are really arguing about when they argue about such probabilities, for example in papers about the Simulation Argument and the Sleeping Beauty Problem.

\n

Edit: Here's a game that exhibits what I call \"preferential interaction\" between locations. You are copied in your sleep, and both of you wake up in identical rooms with 3 buttons. Button A immunizes you with vaccine A, button B immunizes you with vaccine B. Button C has the effect of A if you're the original, and the effect of B if you're the clone. Your goal is to make sure at least one of you is immunized with an effective vaccine, so you press C.
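A small enumeration of the game (my own sketch, not part of the edit), under the usual assumption that the original and the clone, being identical, end up pressing the same button:

```python
# Which vaccines get used if both copies press the same button?

def vaccines_used(button):
    original = {"A": "A", "B": "B", "C": "A"}[button]   # C acts as A for the original
    clone    = {"A": "A", "B": "B", "C": "B"}[button]   # C acts as B for the clone
    return {original, clone}

for button in ["A", "B", "C"]:
    used = vaccines_used(button)
    print(button, sorted(used), "both vaccines covered" if used == {"A", "B"} else "one vaccine only")
# Only C yields both A and B, so only C guarantees that at least one of you
# received whichever vaccine turns out to be effective.
```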

\n

To analyze this decision in Expected Utility Theory, we have to specify the consequences of each choice at each location. If we let these be local consequences, so that pressing A has the consequence \"immunizes me with vaccine A\", then what I prefer at each location depends on what happens at the other location. If my counterpart is vaccinated with A, then I'd prefer to be vaccinated with B, and vice versa. \"immunizes me with vaccine A\" by itself can't be assigned a utility.

\n

What if we use the global consequences instead, so that pressing A has the consequence \"immunizes both of us with vaccine A\"? Then a choice's consequences do not differ by location, and “probability of being at a location” no longer has a role to play in the decision.

" } }, { "_id": "Ye47ZQ3d8RhQSP8Fw", "title": "Macroeconomics, The Lucas Critique, Microfoundations, and Modeling in General", "pageUrl": "https://www.lesswrong.com/posts/Ye47ZQ3d8RhQSP8Fw/macroeconomics-the-lucas-critique-microfoundations-and", "postedAt": "2009-06-06T04:35:51.786Z", "baseScore": 0, "voteCount": 10, "commentCount": 13, "url": null, "contents": { "documentId": "Ye47ZQ3d8RhQSP8Fw", "html": "

I posted this comment in reply to a post by David Henderson over at econlog, but first some context.

\n

Matthew Yglesias writes:

\n
\n

...From an outside perspective, what seems to be going on is that economists have unearthed an extremely fruitful paradigm for investigation of micro issues. This has been good for them, and enhanced the prestige of the discipline. No such fruitful paradigm has actually emerged for investigation of macro issues. So the decision has been made to somewhat arbitrarily impose the view that macro models must be grounded in micro foundations. Thus, the productive progressive research program of microeconomics can “infect” the more troubled field of macro with its prestige...

\n

...But as a methodological matter, it seems deeply unsound. As a general principle for investigating the world, we normally deem it desirable, but not at all necessary, that researchers exploring a particular field of inquiry find ways to “reduce” what they’re doing to a lower level....

\n

...Trying to enhance models with better information about psychology isn’t against the rules, but it’s not required either. What’s required is that the models do useful work.

\n

So why should it be that “in the current regime, if [macro models] are not meticulously constructed from “micro foundations,” they aren’t allowed to be considered”?

\n
\n

To which a commenter replies:

\n
\n

While I’m the first to acknowledge that the current macro research paradigm has given us little useful analysis, there is a standard answer to Matt’s question.

\n

You can start by going to Wikipedia and reading about the Lucas Critique...

\n
\n

I won't reproduce the whole thing; click through to the comment to see a decent summary of the Lucas Critique if you aren't aware of it already.

\n

Henderson, over at econlog, replies:

\n
\n

...Second, the demand for microfoundations, or at least the supply of them, goes back more than 10 years before the date Arnold claims. It goes back at least to Milton Friedman's A Theory of the Consumption Function...,

\n

...Third, interestingly, Milton Friedman himself would probably agree with Yglesias about the idea that there's not necessarily a need for micro foundations for macro. In a 1996 interview published in Brian Snowdon and Howard R. Vane, Modern Macroeconomics, Edward Elgar, 2005, Friedman said:

\n
It is less important for macroeconomic models to have choice-theoretic microfoundations than it is for them to have empirical implications that can be subjected to refutation.
\n

In saying this, Friedman was going back to his positivist roots, which he laid out at length in his classic 1953 essay, \"The Methodology of Positive Economics,\" published in Essays in Positive Economics. There was always an interesting tension in Friedman's work, which he never resolved, between reasoning as a clear-headed economist about people acting based on incentives and constraints and \"positivistly\" black-boxing it and trying to come up with predictions.

\n
\n

And without further ado, here's my response:

\n
\n

I don't think there is a tension in Friedman's thinking at all. We want our models to predict; otherwise, what are they good for? If a model doesn't have microfoundations and predicts well, so what? Microeconomic models, as Yglesias notes, don't have \"microfoundations\" in psychology. Yet they still do a pretty good job under a variety of circumstances.

\n

The reason microfoundations are necessary is instrumental to predictive power. It turns out that models with microfoundations [tend to] predict better than models without them, in all fields. This is why chemists tend to build their models up from physics and why biologists do the same with chemistry. It also turns out that microeconomics can be quite successful without microfoundations, while macroeconomics is far less successful.

\n

The Lucas critique tells us many of the reasons why macroeconomics needs microfoundations. The main reason microeconomics does not is that it is built from pretty solid intuitions about how individuals act. They aren't perfect, of course, but as a first approximation (instrumental) rationality does a pretty good job of describing human behavior in many cases. And the only way we really do know this is by going out and testing our models which, behavioral economics notwithstanding, have done well.

\n

There's a trade off between accuracy and analytical tractability while modeling. More microfoundations will tend to increase accuracy, but imagine if we started with physics for every single scientific problem. The computations would be insane, so instead we simplify things and ignore some of the microfoundations. It is called a model for a reason, after all.

\n

Friedman is right: if microfoundations do end up being important, models without them will do poorly relative to models that use them (and thus the relevant trade off will manifest itself). Note that this is precisely what happened in the history of macro, and kudos to Friedman for realizing that microfoundations were important before the rest of the field. I suspect that macro models [don't] do well in an absolute sense though, but that is another matter entirely.

\n
\n

ack... I should edit my comments better before posting them (notice the use of square brackets).

\n

edit: some minor formatting

" } }, { "_id": "pZSpbxPrftSndTdSf", "title": "Honesty: Beyond Internal Truth", "pageUrl": "https://www.lesswrong.com/posts/pZSpbxPrftSndTdSf/honesty-beyond-internal-truth", "postedAt": "2009-06-06T02:59:00.296Z", "baseScore": 67, "voteCount": 58, "commentCount": 87, "url": null, "contents": { "documentId": "pZSpbxPrftSndTdSf", "html": "

When I expect to meet new people who have no idea who I am, I often wear a button on my shirt that says:

\n

SPEAK THE TRUTH,
EVEN IF YOUR VOICE TREMBLES

\n

Honesty toward others, it seems to me, obviously bears some relation to rationality.  In practice, the people I know who seem to make unusual efforts at rationality, are unusually honest, or, failing that, at least have unusually bad social skills.

\n

And yet it must be admitted and fully acknowledged, that such morals are encoded nowhere in probability theory.  There is no theorem which proves a rationalist must be honest - must speak aloud their probability estimates.  I have said little of honesty myself, these past two years; the art which I've presented has been more along the lines of:

\n

SPEAK THE TRUTH INTERNALLY,
EVEN IF YOUR BRAIN TREMBLES

\n

I do think I've conducted my life in such fashion, that I can wear the original button without shame.  But I do not always say aloud all my thoughts.  And in fact there are times when my tongue emits a lie.  What I write is true to the best of my knowledge, because I can look it over and check before publishing.  What I say aloud sometimes comes out false because my tongue moves faster than my deliberative intelligence can look it over and spot the distortion.  Oh, we're not talking about grotesque major falsehoods - but the first words off my tongue sometimes shade reality, twist events just a little toward the way they should have happened...

\n

From the inside, it feels a lot like the experience of un-consciously-chosen, perceptual-speed, internal rationalization.  I would even say that so far as I can tell, it's the same brain hardware running in both cases - that it's just a circuit for lying in general, both for lying to others and lying to ourselves, activated whenever reality begins to feel inconvenient.

\n

There was a time - if I recall correctly - when I didn't notice these little twists.  And in fact it still feels embarrassing to confess them, because I worry that people will think:  \"Oh, no!  Eliezer lies without even thinking!  He's a pathological liar!\"  For they have not yet noticed the phenomenon, and actually believe their own little improvements on reality - their own brain being twisted around the same way, remembering reality the way it should be (for the sake of the conversational convenience at hand).  I am pretty damned sure that I lie no more pathologically than average; my pathology - my departure from evolutionarily adapted brain functioning - is that I've noticed the lies.

\n

The fact that I'm going ahead and telling you about this mortifying realization - that despite my own values, I literally cannot make my tongue speak only truth - is one reason why I am not embarrassed to wear yon button.  I do think I meet the spirit well enough.

\n

It's the same \"liar circuitry\" that you're fighting, or indulging, in the internal or external case - that would be my second guess for why rational people tend to be honest people.  (My first guess would be the obvious: respect for the truth.)  Sometimes the Eli who speaks aloud in real-time conversation, strikes me as almost a different person than the Eliezer Yudkowsky who types and edits.  The latter, I think, is the better rationalist, just as he is more honest.  (And if you asked me out loud, my tongue would say the same thing.  I'm not that internally divided.  I think.)

\n

But this notion - that external lies and internal lies are correlated by their underlying brainware - is not the only view that could be put forth, of the interaction between rationality and honesty.

\n

An alternative view - which I do not myself endorse, but which has been put forth forcefully to me - is that the nerd way is not the true way; and that a born nerd, who seeks to become even more rational, should allow themselves to lie, and give themselves safe occasions to practice lying, so that they are not tempted to twist around the truth internally - the theory being that if you give yourself permission to lie outright, you will no longer feel the need to distort internal belief.  In this view the choice is between lying consciously and lying unconsciously, and a rationalist should choose the former.

\n

I wondered at this suggestion, and then I suddenly had a strange idea.  And I asked the one, \"Have you been hurt in the past by telling the truth?\"  \"Yes\", he said, or \"Of course\", or something like that -

\n

(- and my brain just flashed up a small sign noting how convenient it would be if he'd said \"Of course\" - how much more smoothly that sentence would flow - but in fact I don't remember exactly what he said; and if I'd been speaking out loud, I might have just said, \"'Of course', he said\" which flows well.  This is the sort of thing I'm talking about, and if you don't think it's dangerous, you don't understand at all how hard it is to find truth on real problems, where a single tiny shading can derail a human train of thought entirely -)

\n

- and at this I suddenly realized, that what worked for me, might not work for everyone.  I haven't suffered all that much from my project of speaking truth - though of course I don't know exactly how my life would have been otherwise, except that it would be utterly different.  But I'm good with words.  I'm a frickin' writer.  If I need to soften a blow, I can do with careful phrasing what would otherwise take a lie.  Not everyone scores an 800 on their verbal SAT, and I can see how that would make it a lot harder to speak truth.  So when it comes to white lies, in particular, I claim no right to judge - and also it is not my primary goal to make the people around me happier.

\n

Another counterargument that I can see to the path I've chosen - let me quote Roger Zelazny:

\n
\n

\"If you had a choice between the ability to detect falsehood and the ability to discover truth, which one would you take? There was a time when I thought they were different ways of saying the same thing, but I no longer believe that. Most of my relatives, for example, are almost as good at seeing through subterfuge as they are at perpetrating it. I’m not at all sure, though, that they care much about truth. On the other hand, I’d always felt there was something noble, special, and honorable about seeking truth... Had this made me a sucker for truth's opposite?\"

\n
\n

If detecting falsehood and discovering truth are not the same skill in practice, then practicing honesty probably makes you better at discovering truth and worse at detecting falsehood.  If I thought I was going to have to detect falsehoods - if that, not discovering a certain truth, were my one purpose in life - then I'd probably apprentice myself out to a con man.

\n

What, in your view, and in your experience, is the nature of the interaction between honesty and rationality?  Between external truthtelling and internal truthseeking?

" } }, { "_id": "RtSLAaJhJr7b5vx38", "title": "My concerns about the term 'rationalist'", "pageUrl": "https://www.lesswrong.com/posts/RtSLAaJhJr7b5vx38/my-concerns-about-the-term-rationalist", "postedAt": "2009-06-04T15:31:10.649Z", "baseScore": 12, "voteCount": 18, "commentCount": 34, "url": null, "contents": { "documentId": "RtSLAaJhJr7b5vx38", "html": "

 

\n

I've noticed that here on Less Wrong people often identify themselves as rationalists (and this community as a rationalist one -- searching for 'rationalist' on the site returns exactly 1000 hits).  I'm a bit concerned that this label may not work in our favour.

\n

Paul Graham recently wrote a nice essay Keep Your Identity Small in which he argued that identifying yourself with a label tends to work against reasonable -- rational, you might say -- discussions about topics that are related to it.  The essay is quite short and if you haven't read it I highly recommend doing so.

\n

If his argument is correct, then identifying with a label like Rationalist may impede your ability to be rational.

\n

My thinking is that once you identify yourself as an X, you have a tendency to evaluate ideas and courses of action in terms of how similar or different they appear to your prototypical notion of that label - as a shortcut, instead of genuinely thinking about them and evaluating them on their own merits.

\n

Aside from the effect such a label may have on our own thinking, the term 'rationalist' may be bad PR.  In the wider world 'rational' tends to be a bit of a dirty word.  It has a lot of negative connotations.   

\n

Outside communities like this one, presenting yourself as a rationalist is likely to get other people off on the wrong foot.  In many people's minds, it'd strike you out before you'd even said anything.  It's a great way for them to pigeonhole you.

\n

And we should be interested in embracing the wider world and communicating our views to others.

\n

If I was to describe what we're about, I'd probably say something like that we're interested in knowing the truth, and want to avoid deluding ourselves about anything, as much as either of these things are possible.  So we're studying how to be less wrong.  I'm not sure I'd use any particular label in my description.

\n

Interestingly, those goals I described us in terms of -- wanting truth, wanting to avoid deluding ourselves -- are not really what separates \"us\" from \"them\".  I think the actual difference is that we are simply more aware of the fact that there are many ways our thinking can be wrong and lead us astray.  

\n

Many people really are -- or at least start out -- interested in the truth, but get led astray by flawed thinking because they're not aware that it is flawed.  Because flawed thinking begets flawed beliefs, the process can lead people onto systematic paths away from truth seeking.  But I don't think even those people set out in the first place to get away from the truth.

\n

The knowledge our community has, of ways that thinking can lead us astray, is an important thing we have to offer, and something that we should try to communicate to others.  And I actually think a lot of people would be receptive to it, presented in the right way. 

\n

 

" } }, { "_id": "RrxGGAZLX9e4NBuW5", "title": "Probability distributions and writing style", "pageUrl": "https://www.lesswrong.com/posts/RrxGGAZLX9e4NBuW5/probability-distributions-and-writing-style", "postedAt": "2009-06-04T06:17:41.549Z", "baseScore": 7, "voteCount": 8, "commentCount": 8, "url": null, "contents": { "documentId": "RrxGGAZLX9e4NBuW5", "html": "

In his recent post, rhollerith wrote,

\n
\n

I am more likely than not vastly better off than I would have been if <I had made decision X>

\n
\n

This reminded me of the slogan for the water-filtration system my workplace uses,

\n
\n

We're 100% sure it's 99.9% pure!

\n
\n

because both sentences make a claim and give an associated probability for it. Now in this second example, the actual version is better than the expectation-value-preserving \"We're 99.9% sure it's 100% pure\", because the actual version implies a lower variance in outcomes (and expectation values being equal, a lower variance is nearly always better).  But this leads to the question of why rhollerith didn't write something like \"I am almost certainly at least somewhat better off than I would have been...\". 
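To make the variance point concrete, here is a rough sketch with my own numbers and a deliberately pessimistic reading (the residual 0.1% of the second claim is treated as zero purity, which is what makes the two claims expectation-preserving):

```python
# Mean and variance of purity under the two slogans.

claims = {
    "100% sure it's 99.9% pure": [(1.0, 0.999)],              # (probability, purity)
    "99.9% sure it's 100% pure": [(0.999, 1.0), (0.001, 0.0)],
}

for name, dist in claims.items():
    mean = sum(p * x for p, x in dist)
    var = sum(p * (x - mean) ** 2 for p, x in dist)
    print(name, "mean:", round(mean, 4), "variance:", round(var, 6))
# Both means are ~0.999, but only the second claim leaves any variance on the table.
```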

\n

So I ask: when writing nontechnically, do you prefer to give a modest conclusion with high confidence, or a strong conclusion with moderate confidence?  And does this vary with whether you're trying to persuade or merely describe?

\n

(Also feel free to post other examples of this sort of statement from LW or elsewhere; I'd search for them myself if I had any good ideas on how to do so.)

" } }, { "_id": "MvwdPfYLX866vazFJ", "title": "Post Your Utility Function", "pageUrl": "https://www.lesswrong.com/posts/MvwdPfYLX866vazFJ/post-your-utility-function", "postedAt": "2009-06-04T05:05:17.958Z", "baseScore": 39, "voteCount": 39, "commentCount": 280, "url": null, "contents": { "documentId": "MvwdPfYLX866vazFJ", "html": "

A lot of rationalist thinking about ethics and economy assumes we have very well defined utility functions - knowing exactly our preferences between states and events, not only being able to compare them (I prefer X to Y), but assigning precise numbers to every combination of them (p% chance of X equals q% chance of Y). Because everyone wants more money, you should theoretically even be able to assign exact numerical values to positive outcomes in your life.

\n

I did a small experiment of making a list of things I wanted, and giving them point values. I must say this experiment ended in failure - thinking \"If I had X, would I take Y instead\", and \"If I had Y, would I take X instead\" very often resulted in a pair of \"No\"s. Even thinking about multiple Xs/Ys for one Y/X usually led me to deciding they're really incomparable. Outcomes related to a similar subject were relatively comparable; those in different areas of life were usually not.

\n

I finally decided on some vague numbers and evaluated the results two months later. My success in some areas was substantial, in others nonexistent, and the only thing that was clear was that the numbers I assigned were completely wrong.

\n

This leads me to two possible conclusions:

\n\n

Anybody else tried assigning numeric values to different outcomes outside very narrow subject matter? Have you succeeded and want to share some pointers? Or failed and want to share some thought on that?

\n

I understand that details of many utility functions will be highly personal, but if you can share your successful ones, that would be great.

" } }, { "_id": "xq9HYyFXkzAREM7at", "title": "Third London Rationalist Meeting", "pageUrl": "https://www.lesswrong.com/posts/xq9HYyFXkzAREM7at/third-london-rationalist-meeting", "postedAt": "2009-06-04T03:19:03.981Z", "baseScore": 8, "voteCount": 6, "commentCount": 3, "url": null, "contents": { "documentId": "xq9HYyFXkzAREM7at", "html": "

The Third London Rationalist Meeting will take place on Sunday, 2009-06-07, 14:00, at the usual location - cafe on top of Waterstones bookstore near Piccadilly Circus Tube Station.

\n

Here's a map showing how to get to the venue.

\n

There were some suggestions of trying an alternative venue, but as nobody took the time to scout for alternative locations, I'd like to take the safe way and go for the usual one, even if it's less than optimal (by the way, they have evaluation forms there; you might want to give them your feedback).

" } }, { "_id": "PbbcaiMgHSDmbqs6A", "title": "Mate selection for the men here", "pageUrl": "https://www.lesswrong.com/posts/PbbcaiMgHSDmbqs6A/mate-selection-for-the-men-here", "postedAt": "2009-06-03T23:05:25.181Z", "baseScore": 13, "voteCount": 36, "commentCount": 114, "url": null, "contents": { "documentId": "PbbcaiMgHSDmbqs6A", "html": "

The following started as a reply to a request for relationship advice (http://lesswrong.com/lw/zj/open_thread_june_2009/rxy) but is expected to be of enough general interest to justify a top-level post.  Sometimes it is beneficial to have older men in the conversation, and this might be one of those times.  (I am in my late 40s.)

\n

I am pretty sure that most straight men strong in rationality are better off learning how the typical woman thinks than holding out for a long-term relationship with a woman as strong in rationality as he is. If you hold out for a strong female rationalist, you drastically shrink the pool of women you have to choose from -- and people with a lot of experience with dating and relationships tend to consider that a bad move.  A useful data point here is the fact (http://lesswrong.com/lw/fk/survey_results/cee) that 95%-97% of Less Wrongers are male.  If, on the other hand, women currently (*currently* -- not in some extrapolated future after you've sold your company and bought a big house in Woodside) find you extremely attractive or extremely desirable long-term-relationship material, well, then maybe you should hold out for a strong female rationalist if you are a strong male rationalist.

\n

Here is some personal experience in support of the advice above to help you decide whether to follow the advice above.

My information is incomplete because I have never been in a long-term relationship with a really strong rationalist -- or even a scientist, programmer or engineer -- but I have been with a woman who has years of formal education in science (majored in anthropology, later took chem and bio for a nursing credential) and her knowledge of science did not contribute to the relationship in any way that I could tell.  Moreover, that relationship was not any better than the one I am in now, with a woman with no college-level science classes at all.

The woman I have been with for the last 5 years is not particularly knowledgeable about science and is not particularly skilled in the art of rationality.  Although she is curious about most areas of science, she tends to give up and to stop paying attention if a scientific explanation fails to satisfy her curiosity within 2 or 3 minutes.  If there is a strong emotion driving her inquiry, though, she will focus longer.  E.g., she sat still for at least 15 or 20 minutes on the evolutionary biology of zoonoses during the height of the public concern over swine flu about a month ago -- and was glad she did.  (I know she was glad she did because she thanked me for the explanation, and it is not like her to make an insincere expression of gratitude out of, e.g., politeness.)  (The strong emotion driving her inquiry was her fear of swine flu combined with her suspicion that perhaps the authorities were minimizing the severity of the situation to avoid panicking the public.)

Despite her having so much less knowledge of science and the art of rationality than I have, I consider my current relationship a resounding success: it is no exaggeration to say that I am more likely than not vastly better off than I would have been if I had chosen 5 years ago not to pursue this woman to hold out for someone more rational.  She is rational enough to take care of herself and to be the most caring and the most helpful girlfriend I have ever had.  (Moreover, nothing in my ordinary conversations and interactions with her draw my attention to her relative lack of scientific knowledge or her relative lack of advanced rationalist skills in a way that evokes any regret or sadness in me.  Of course, if I had experienced a long-term relationship with a very strong female rationalist in the past, maybe I *would* experience episodes of regret or sadness towards the woman I am with now.)

Here are two more tips on mate selection for the straight men around here.

I have found that it is a very good sign if the woman either (1) assigns high social status to scientific ability or scientific achievement or finds scientific ability appealing in a man or (2) sees science as a positive force in the world.  The woman I am with now clearly and decisively meets criterion (1) but does not meet criterion (2).  Moreover, one of my most successful relationships was with a woman who finds science fiction very inspiring.  (I do not BTW.)  The salient thing about that was that she never revealed it to me, nor the fact that she definitely sees science as a positive force in the world.  (I pieced those two facts together after we broke up.)  The probable reason she never revealed them to me is that she thought they would clue me in to the fact that she found scientific ability appealing in a man, which in turn would have increased the probability that I would try to snow her by pretending to be better at science or more interested in science than I really was.  (She'd probably been snowed that way by a man before she met me: male snowing of prospective female sexual partners is common.)

\n

By posting on a topic of such direct consequence to normal straight adult male self-esteem, I am making myself more vulnerable than I would be if I were posting on, e.g., regulatory policy.  Awareness of my vulnerability might cause someone to refrain from publicly contradicting what I just wrote.  Do not refrain from publicly contradicting what I just wrote!  The successful application of rationality and scientific knowledge to this domain has high expected global utility, and after considering the emotional and reputational risks to myself of having posted on this topic, I have concluded that I do not require any special consideration over and above what I would get if I had posted on regulatory policy.

\n

And of course if you have advice to give about mate selection for the straight men around here, here is your chance.

\n

(EDITED to avoid implying that all men are heterosexual.)

" } }, { "_id": "qi5Xg3stgMwvdYKQk", "title": "With whom shall I diavlog?", "pageUrl": "https://www.lesswrong.com/posts/qi5Xg3stgMwvdYKQk/with-whom-shall-i-diavlog", "postedAt": "2009-06-03T03:20:53.548Z", "baseScore": 15, "voteCount": 17, "commentCount": 163, "url": null, "contents": { "documentId": "qi5Xg3stgMwvdYKQk", "html": "

Bloggingheads.tv can't exactly call up, say, the President of France and get him to do a diavlog, but they have some street cred with mid-rank celebrities and academics.  With that in mind, how would you fill in this blank?

\n

\"I would really love to see a diavlog between Yudkowsky and ____________.\"

" } }, { "_id": "8xXLpN4okPAgymfex", "title": "Would You Slap Your Father? Article Linkage and Discussion", "pageUrl": "https://www.lesswrong.com/posts/8xXLpN4okPAgymfex/would-you-slap-your-father-article-linkage-and-discussion", "postedAt": "2009-06-02T18:49:25.107Z", "baseScore": 2, "voteCount": 10, "commentCount": 11, "url": null, "contents": { "documentId": "8xXLpN4okPAgymfex", "html": "

I said that my next post would discuss why IQ tests don't measure frontal executive functions, but I've found something tangential yet extremely topical which I think should be discussed first.

\n

A reader sent me a link to this Opinion column written by New York Times writer Nicholas D. Kristof:  Would You Slap Your Father?  If So, You're A Liberal.

\n

The title is clearly meant to grab attention; don't let its provocative nature dissuade you from reading the article.  Most of it is remarkably free from partisan bias, although there are one or two bits which are objectionable.  Far more important is that it addresses the relationships between 'emotional' reactions, political positions and affiliations, and reason.

\n

It's a short article, brief enough that I don't think I need to sum it up, and of sufficient quality that I can recommend that you peruse it yourself with a clear conscience.  Take the two or three minutes required to read it, please, and then comment your thoughts below.

" } }, { "_id": "n9nwWWgxnr2knmfNi", "title": "Bioconservative and biomoderate singularitarian positions", "pageUrl": "https://www.lesswrong.com/posts/n9nwWWgxnr2knmfNi/bioconservative-and-biomoderate-singularitarian-positions", "postedAt": "2009-06-02T13:19:04.275Z", "baseScore": 13, "voteCount": 16, "commentCount": 53, "url": null, "contents": { "documentId": "n9nwWWgxnr2knmfNi", "html": "

Let us define a singularitarian as a person who considers it likely that some form of smarter than human intelligence will be developed in a characteristic timeframe of a century, and that the manner in which this event occurs is important enough to expend effort altering. Given this definition, it is perfectly possible to be a bioconservative singularitarian - that  is someone who:

\n
\n

opposes genetic modification of food crops, the cloning and genetic engineering of livestock and pets, and, most prominently, rejects the genetic, prosthetic, and cognitive modification of human beings to overcome what are broadly perceived as current human biological and cultural limitations.

\n
\n

 - one can accept the (at present only suggestive) factual arguments of Hanson, Yudkowsky, Bostrom etc that smarter than human intelligence is the only long-term alternative to human extinction (this is what one might call an \"attractor\" argument - that our current state simply isn't stable), whilst taking the axiological and ethical position that our pristine, unenhanced human form is to be held as if it were sacred, and that any modification and/or enhancement of the human form is to be resisted, even if the particular human in question wants to be enhanced. A slightly more individual-freedoms-oriented bioconservative position would be to try very hard to persuade people (subject to certain constraints) to decide not to enhance themselves, or to allow people to enhance themselves only if they are prepared to face derision and criticism from society. A superintelligent singleton could easily implement such a society.

\n

This position seems internally consistent to me, and given the seemingly unstoppable march of technological advancement and its rapid integration into our society (smartphones, facebook, online dating, youtube, etc) via corporate and economic pressure, bioconservative singularitarianism may become the only realistic bioconservative position.

\n

One can even paint a fairly idyllic bioconservative world where human enhancement is impossible and people don't interact with advanced technology any more, they live in some kind of rural or hunter-gatherer world where the majority of suffering and disease (apart from death, perhaps) is eliminated by a superintelligent singleton, and the singleton takes care to ensure that this world is not \"disturbed\" by too much technology being invented by anyone. Perhaps people live in a way that is rather like one would have found on a Tahiti before Europeans got there. There are plenty of people who think that they already live in such a world - they are called theists, and they are mistaken (more about this in another post).

\n

For those with a taste for a little more freedom and a light touch of enhancement, we can define biomoderate singularitarianism, which differs from the above in that it sits somewhere more towards the \"risque\" end of the human enhancement spectrum, but it isn't quite transhumanism. As before, we consider a superintelligent singleton running the practical aspects of a society and most of the people in that society being somehow encouraged or persuaded not to enhance themselves too much, so that the society remains a clearly human one. I would consider Banks' Culture to be the prototypical early result of a biomoderate singularity, followed by such incremental changes as one might expect due to what Yudkowsky calls \"heaven of the tired peasant\" syndrome - many people would get bored of \"low-grade\" fun after a while. Note that in the Culture, Banks describes people with significant emotional enhancements and the ability to change gender - so this certainly isn't bioconservative, but the fundaments of human existence are not being pulled apart by such radical developments as mind merging, uploading, wireheading or super-fast radical cognitive enhancement.

\n

Bioconservative and biomoderate singularities are compatible with modern environmentalism, in that the power of a superintelligent AI could be used to eliminate damage to the natural world, and humans could live in almost perfect harmony with nature. Harmony with nature would involve a superintelligence carefully managing biological ecosystems and even controlling the actions of individual animals, plants and microorganisms, as well as informing and guiding the actions of human societie(s) so that no human was ever seriously harmed by any creature (no-one gets infected by parasites, bacteria or viruses (unless they want to be), no-one is killed by wild animals), and no natural ecosystem is seriously harmed by human activity. A variant on this would have all wild animals becoming tame, so that you could stroll through the forest and pet a wildcat.

\n

A biomoderate singularity is an interesting concept to consider, and I think it has some interesting applications to a Friendly AI strategy. It is also, I feel, something that will be somewhat easier to sell to most other humans around than a full-on, shock level 4, radical transhumanist singularity. In fact we can frame the concept of a \"biomoderate technological singularity\" in fairly normal language: it is simply a very carefully designed self-improving computer system that is used to eliminate the need for humans to do work that they don't (all things considered) want to do.

\n

One might well ask: what does this post have to do with instrumental rationality? Well, due to various historical co-incidences, the same small group of people who popularized technologically enabled bio-radical stances such as gender swapping, uploading, cryopreservation, etc also happen to be the people who popularized ideas about smarter than human intelligence. When one small, outspoken group proposes two ideas which sound kind of similar, the rest of the world is highly likely to conflate them.

\n

The situation on the ground is that one of these ideas has a viable politico-cultural future, and the other one doesn't: \"bioradical\" human modification activates so many \"yuck\" factors that getting it to fly with educated, secular people is nigh-on impossible, never mind the religious lot. The notion that smarter-than-human intelligence will likely be developed, and that we should try to avoid getting recycled as computronium is a stretch, but at least it involves only nonobvious factual claims and obvious ethical claims.

\n

It is thus an important rationalist task to separate out these two ideas and make it clear to people that singularitarianism doesn't imply bioradicalism.

\n

See Also: Amputation of Destiny

" } }, { "_id": "TnMzZZivTaRZNbuiN", "title": "Concrete vs Contextual values", "pageUrl": "https://www.lesswrong.com/posts/TnMzZZivTaRZNbuiN/concrete-vs-contextual-values", "postedAt": "2009-06-02T09:47:30.233Z", "baseScore": -1, "voteCount": 14, "commentCount": 32, "url": null, "contents": { "documentId": "TnMzZZivTaRZNbuiN", "html": "

The concept of recursive self-improvement is not an accepted idea outside of the futurist community. It just does not seem right in some fashion to some people. I am one of those people, so I'm going to try and explain the kind of instinctive skepticism I have towards it. It hinges on the difference between two sorts of values, a difference I have not seen made explicit before (although it likely has been somewhere): the difference between a concrete value and a contextual value.

\n

\n

So let's run down the argument so I can pin down where it goes wrong in my view.

\n
    \n
  1. There is a value called intelligence that roughly correlates with the ability to achieve goals in the world (if it does not then we don't care about intelligence explosions as they will have negligible impact on the real world™)
  2. All things being equal a system with more compute power will be more capable than one with less (assuming they can get the requisite power supply). Similarly systems that have algorithms with better run time complexities will be more capable.
  3. Computers will be able to do things to increase the values in 2. Therefore they will form a feedback loop and become progressively more and more capable at an ever increasing rate.
\n

The point where I become unstuck is in the phrase \"all things being equal\". Especially what the \"all\" stands for. Let me run down a similar argument for wealth.

\n
    \n
  1. There is a value called wealth that roughly correlates with the ability to acquire goods and services from other people.
  2. All things being equal a person with more money will be more wealthy than one with less.
  3. You are able to put your money in the bank and get compound interest on your money, so your wealth should be exponential in time (ignoring taxes).
\n

3 can be wrong in this, dependent upon the rate of interest and the rate of inflation. Because of inflation, each dollar you have in the future is less able to buy goods. That is, the argument in 3 ignores that at different times and in different environments money is worth different amounts of goods. Hyperinflation is a stark example of this. So the \"all things being equal\" references the current time and state of the world, and 3 breaks that assumption by allowing time and the world to change.
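A quick numerical sketch of that point (made-up rates of my own, not the author's): nominal wealth can compound upward while real purchasing power shrinks, because the value of a dollar is set by the surrounding world rather than by the bank balance itself.

```python
# Compound interest versus inflation, with hypothetical annual rates.

nominal = 1000.0
interest, inflation = 0.03, 0.05
price_level = 1.0

for year in range(10):
    nominal *= 1 + interest        # the bank balance grows every year...
    price_level *= 1 + inflation   # ...but so do prices
real = nominal / price_level       # purchasing power in year-0 dollars
print(round(nominal, 2), round(real, 2))
# Roughly 1343.92 nominal after 10 years, but only about 825 in year-0 purchasing power.
```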

\n

Why doesn't the argument work for wealth, when you can get stable recursive growth of neutrons in a reactor? It is because wealth is a contextual value: it depends on the world around you, and as your money grows with compound interest the world changes around it to make it less valuable without touching your money at all. Nothing can change the number of neutrons in your reactor without physically interacting with them or the reactor in some way. The neutron density value is concrete and containable, and you can do sensible maths with it.

\n

I'd argue that intelligence has a contextual nature as well. A simple example would be a computer chess tournament with a fixed algorithm that used as many resources as you threw at it. Say you manage to increase the resources for your team steadily by 10 MIPs per year; you will not win more chess games if another team is expanding their capabilities by 20 MIPs per year. That is, despite an increase in raw computing ability, there will be no increase in the ability to achieve the goal of winning chess games. Another possible example of the contextual nature of intelligence is the case where a system's ability to perform well in the world is affected by other people knowing its source code, and using it to predict and counter its moves.
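A toy version of the chess example (entirely my own model, not the author's): if results depend on relative rather than absolute compute, crudely modelled here as your share of the total MIPs, then growing by 10 MIPs per year against a rival growing by 20 MIPs per year means your raw capability rises while your expected results fall.

```python
# Relative compute in a two-team chess arms race, with invented numbers.
# "share" is a crude stand-in for expected tournament score.

yours, theirs = 100.0, 100.0           # hypothetical starting MIPs
for year in range(1, 11):
    yours += 10
    theirs += 20
    share = yours / (yours + theirs)
    if year in (1, 5, 10):
        print(year, yours, theirs, round(share, 3))
# Year 1: share ~0.478; year 5: ~0.429; year 10: ~0.4. More compute every year, worse results.
```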

\n

From the view of intelligence as a contextual value, current  discussion of recursive self-improvement seems overly simplistic. We need to make explicit the important things in the world that intelligence might depend upon and then see if we can model the processes such that we still get FOOMs.

\n

Edit: Another example of an intelligence's effectiveness being contextual is the role of knowledge in performing tasks. Knowledge can have an expiration date after which it becomes less useful. Consider the usefulness of knowledge about current English idioms when writing convincing essays, or about the current bacterial population when trying to develop nano-machines to fight them.  So you might have an atomically identical intelligence whose effectiveness varies depending upon the freshness of its knowledge. There might therefore be conflicts between expending resources on improving processing power or algorithms and keeping knowledge fresh, when trying to shape the future. It is possible, but unlikely, that an untruth you believe will become true in time (say your estimate for the population of a city was too low, but its growth took it up to your belief), but as there are more ways to be wrong than right, knowledge is likely to degrade with time.

" } }, { "_id": "DHd7g6r8ogjBgtQnB", "title": "YouTube Test", "pageUrl": "https://www.lesswrong.com/posts/DHd7g6r8ogjBgtQnB/youtube-test", "postedAt": "2009-06-02T01:10:06.409Z", "baseScore": 0, "voteCount": 2, "commentCount": 0, "url": null, "contents": { "documentId": "DHd7g6r8ogjBgtQnB", "html": "

[embedded video]

" } }, { "_id": "EYJBHNPraHqW8dYya", "title": "Open Thread: June 2009", "pageUrl": "https://www.lesswrong.com/posts/EYJBHNPraHqW8dYya/open-thread-june-2009", "postedAt": "2009-06-01T18:46:09.791Z", "baseScore": 7, "voteCount": 9, "commentCount": 145, "url": null, "contents": { "documentId": "EYJBHNPraHqW8dYya", "html": "

I provide our monthly place to discuss Less Wrong topics that have not appeared in recent posts. Work your brain and gain prestige by doing so in E-prime (or not, as you please).

" } }, { "_id": "jEeuGFRcXvEGqF8un", "title": "The Frontal Syndrome", "pageUrl": "https://www.lesswrong.com/posts/jEeuGFRcXvEGqF8un/the-frontal-syndrome", "postedAt": "2009-06-01T16:10:30.632Z", "baseScore": 17, "voteCount": 30, "commentCount": 40, "url": null, "contents": { "documentId": "jEeuGFRcXvEGqF8un", "html": "

Neuroscientists have a difficult time figuring out which parts of the brain are involved in different functions.  Naturally-occurring lesions to the brain are rarely specific to a particular anatomical region, the complications involved with the injury and treatment act as a smokescreen, and finding a patient who's damaged the particular spot you want to learn about is frustrating at best and nigh-impossible at worst.

\n

Fortunately for researchers, inappropriate surgical interventions of the past can shed light on neurological questions.

\n

The strange and horrifying history of psychosurgery is a topic beyond the scope of this site, and certainly beyond this post.  Interested readers can easily find a great wealth of relevant discussion on the Net and in libraries, even (in more extensive collections) works written by the physicians involved in such surgeries during the era in which they were popular.  Even a casually-curious individual can find lots of non-technical analysis and history to read - for such people, I particularly recommend Great and Desperate Cures by Elliot Valenstein.

\n

Of especial relevance is the prefrontal leukotomy, more commonly (if somewhat imprecisely) known as the lobotomy.  There are several features in particular that are of interest to people interested in the nature of effective thought:

\n

To begin, people with frontal lobe damage have problems with impulse control.  And by 'problems', I mean they're pretty much incapable of it.  It would be more precise to say that lobotomized patients display a remarkable degree of rigid, stereotyped behavior patterns.  Give one patient a broom, and she'll begin to sweep the floor; show another a room with a bed, and he'll lie down on it.  And do the same thing every time the stimulus is presented.  The precise response varies from person to person, but the general reaction is consistent and replicable.  Whatever the strongest association with the stimuli is in their mind, that's what they do when they encounter it - and every time they encounter it.

\n

For this reason, it was at one time suggested that only patients with a reputation for rigorously moral behavior be lobotomized, because people who would characteristically break social mores would do so ostentatiously after the surgery.  Shoplifters and petty thieves who might have tried to steal particular kinds of things before they were lobotomized would immediately try to do so when they came across those things again - regardless of whether it was a good opportunity or even whether others were clearly watching.  Restraining such behavior, or even limiting it, was simply impossible.

\n

Furthermore, such people don't get bored.  Present them with a simple task, and they'll carry it out... and keep doing so, even if the consequences become absurd.  Set them to building a picket fence and forget to check up on them, and they'll build it past your property line and down the street if given enough time.  Set them to washing dishes, and they'll keep washing - to the point of redoing the job several times over.  The ability to interrupt the sequence of behavior, to put a \"Stop\" order in the chain of macros built up, no longer existed once the connections between the frontal lobes and the rest of the brain had been severed.

\n

Motivation becomes almost non-existent.  Left to themselves, lobotomized people often do not initiate action, or they do not begin to act in ways other than patterns they incorporated before.  They repeat things they did before, but mindlessly and without variation, and cannot adapt if the pattern is disrupted.  More alarmingly, the associations between concepts and basic responses are destroyed, to the point where sensations like pain are noted but not perceived as important, and actions to diminish or avoid the pain are not taken.  One well-known case ended when, after having been released to her home, a woman was scalded to death because she didn't leave a bath of too-hot water she'd drawn.

\n

Learning in any abstract sense ceases.  Teaching the lobotomized new responses is virtually impossible.  And even basic conditioning, such as that used with dogs to train them, becomes problematic due to the lack of avoidance of pain and seeking of pleasure.

\n

These points are only part of a general overview - the details are far, far worse.

\n

There's one point which I have yet to discuss, and yet in the context of the information above, is the most shocking.  Lobotomization did not disrupt the IQ of patients to any degree.  This was actually one of the excuses made for why doctors didn't realize the utterly destructive effects of the surgery earlier.  If it didn't impair IQ, surely it couldn't be grossly harmful, it was claimed.  Well, it was.

\n

This sets the stage for an important question:  If the lobotomy so profoundly levels the house of the mind, why don't IQ tests measure any of the mental aspects destroyed in the process?

\n

That is a subject for the next posts.

" } }, { "_id": "PwxyrX6bFDBMApk7h", "title": "The Onion Goes Inside The Biased Mind", "pageUrl": "https://www.lesswrong.com/posts/PwxyrX6bFDBMApk7h/the-onion-goes-inside-the-biased-mind", "postedAt": "2009-05-30T17:33:15.038Z", "baseScore": 8, "voteCount": 19, "commentCount": 3, "url": null, "contents": { "documentId": "PwxyrX6bFDBMApk7h", "html": "

A great piece from The Onion, inside the mind of someone arguing emotionally:  Oh, No! It's Making Well-Reasoned Arguments Backed With Facts! Run!

\n

Pure genius.

" } }, { "_id": "vN9xBhv2YcpzdE6ot", "title": "Taking Occam Seriously", "pageUrl": "https://www.lesswrong.com/posts/vN9xBhv2YcpzdE6ot/taking-occam-seriously", "postedAt": "2009-05-29T17:31:52.268Z", "baseScore": 32, "voteCount": 28, "commentCount": 51, "url": null, "contents": { "documentId": "vN9xBhv2YcpzdE6ot", "html": "

Paul Almond's site has many philosophically deep articles on theoretical rationality along LessWrongish assumptions, including but not limited to some great atheology, an attempt to solve the problem of arbitrary UTM choice, a possible anthropic explanation why space is 3D, a thorough defense of Occam's Razor, a lot of AI theory that I haven't tried to understand, and an attempt to explain what it means for minds to be implemented (related in approach to this and this).

" } }, { "_id": "PZyJSdmRpPbamv5G2", "title": "A social norm against unjustified opinions?", "pageUrl": "https://www.lesswrong.com/posts/PZyJSdmRpPbamv5G2/a-social-norm-against-unjustified-opinions", "postedAt": "2009-05-29T11:25:47.886Z", "baseScore": 16, "voteCount": 25, "commentCount": 161, "url": null, "contents": { "documentId": "PZyJSdmRpPbamv5G2", "html": "

A currently existing social norm basically says that everyone has the right to an opinion on anything, no matter how little they happen to know about the subject.

But what if we had a social norm saying that by default, people do not have the right to an opinion on anything? To earn such a right, they ought to have familiarized themselves with the topic. The familiarization wouldn't necessarily have to be anything very deep, but on the topic of e.g. controversial political issues, they'd have to have read at least a few books' worth of material discussing the question (preferably material from both sides of the political fence). In scientific questions where one needed more advanced knowledge, you ought to at least have studied the field somewhat. Extensive personal experience on a subject would also be a way to become qualified, even if you hadn't studied the issue academically.

The purpose of this would be to enforce epistemic hygiene. Conversations on things such as public policy are frequently overwhelmed by loud declarations of opinion from people who, quite honestly, don't know anything on the subject they have a strong opinion on. If we had in place a social norm demanding an adequate amount of background knowledge on the topic before anyone voiced an opinion they expected to be taken seriously, the signal/noise ratio might be somewhat improved. This kind of a social norm does seem to already be somewhat in place in many scientific communities, but it'd do good to spread it to the general public.

At the same time, there are several caveats. As I am myself a strong advocate on freedom of speech, I find it important to note that this must remain a *social* norm, not a government-advocated one or anything that is in any way codified into law. Also, the standards must not be set *too* high - even amateurs should be able to engage in the conversation, provided that they know at least the basics. Likewise, one must be careful that the principle isn't abused, with \"you don't have a right to have an opinion on this\" being a generic argument used to dismiss any opposing claims.

" } }, { "_id": "6ocujnKZL38thXn62", "title": "Image vs. Impact: Can public commitment be counterproductive for achievement?", "pageUrl": "https://www.lesswrong.com/posts/6ocujnKZL38thXn62/image-vs-impact-can-public-commitment-be-counterproductive", "postedAt": "2009-05-28T23:18:28.564Z", "baseScore": 54, "voteCount": 50, "commentCount": 48, "url": null, "contents": { "documentId": "6ocujnKZL38thXn62", "html": "

The traditional wisdom says that publicly committing to a goal is a useful technique for accomplishment.  It creates pressure to fulfill one's claims, lest one lose status.  However, when the goal is related to one's identity, a recent study shows that public commitment may actually be counterproductive.  Nyuanshin posts:

\n
\n

    \"Identity-related behavioral intentions that had been noticed by other people were translated into action less intensively than those that had been ignored. . . . when other people take notice of an individual's identity-related behavioral intention, this gives the individual a premature sense of possessing the aspired-to identity.\"

\n

    -- Gollwitzer et al. (2009)

\n

This empirical finding flies in the face of conventional wisdom about the motivational effects of public goal-setting, but rings true to my experience. Belief is, apparently, fungible -- when you know that people think of you as an x-doer, you affirm that self-image more confidently than you would if you had only your own estimation to go on. colinmarshall and myself have already become aware of the dangers of vanity to any non-trivial endeavor, but it's nice to have some empirical corroboration. Keep your head down, your goals relatively private, and don't pat yourself on the back until you've got the job done.

\n
\n

This matches my experience over the first year of The Seasteading Institute.  We've received tons of press, and I've probably spent as much time at this point interacting with the media as working on engineering.  And the press is definitely useful - it helps us reach and get credibility with major donors, and it helps us grow our community of interested seasteaders (it takes a lot of people to found a country, and it takes a mega-lot of somewhat interested people to have a committed subset who will actually go do it).

\n

Yet I've always been vaguely uncomfortable about how much media attention we've gotten, even though we've just started progressing towards our long-term goals.  It feels like an unearned reward.  But is that bad?  I keep wondering \"Why should that bother me?  Isn't it a good thing to be given extra help in accomplishing this huge and difficult goal?  Aren't unearned rewards the best kind of rewards?\" This study suggests the answer.

\n

My original goal was to actually succeed at starting new countries, but as a human, I am motivated by the status to be won in pursuit of this goal as well as the base goal itself.  I recognize this, and have tried to use it to my advantage, visualizing the joys of having achieved high status to motivate the long hours of effort needed to reach the base goal.  But getting press attention just for starting work on the base goal, rather than significant accomplishments towards it, short-circuits this motivational process.  It gives me status in return for just having an interesting idea (the easy part, at least for me) rather than moving it towards reality (the hard part), and helps affirm the self-image I strive for in return for creating the identity, rather than living up to it.

\n

I am tempted to say \"Well, since PR helps my goal, I shouldn't worry about being given status/identity too easily, it may be bad for my motivation but it is good for the cause\", but that sounds an awful lot like my internal status craver rationalizing why I should stop worrying about getting on TV (Discovery Channel, Monday June 8th, 10PM EST/PST :) ).

\n

My current technique is to try, inasmuch as I can, to structure my own reward function around the more difficult and important goals.  To cognitively reframe \"I got media attention, I am affirming my identity and achieving my goals\" as \"I got media attention, which is fun and slightly useful, but not currently on the critical path.\"  To focus on achievement rather than recognition (internal standards rather than external ones, which has other benefits as well).  Not only in my thoughts, but also in public statements - to describe seasteading as \"we're different because we're going to actually do it\", so that actual accomplishment is part of the identity I am striving for.

\n

One could suggest that OB/LW has this problem too - perhaps rewarding Eliezer with status for writing interesting posts allows him to achieve his identity as a rationalist with work that is less useful to his long-term goals than actually achieving FAI. However, I don't buy this.  I think raising the sanity waterline is a big deal, greater than FAI because it increases the resources available for dealing with FAI-like problems (i.e. converting a single present or future centimillionaire could lead to hiring multiple Eliezers' worth of AI researchers).  Hence his public-facing work has direct positive impact.  And given this, while Eli's large audience may selfishly incent him towards public-facing work via the desire to seek status, it also increases the actual impact of his public-facing work, since he reaches many people.

\n

Also of relevance is the community in which one is achieving status.  Eliezer's OB/LW audience is largely self-selected rationalists, which might be good because it's the most receptive audience, or it might be restricting his message to an unnecessarily small niche, I'm not sure.  But for seasteading, I think there is a clear conflict between the most exciting and most useful audiences.  What we need to succeed is a small group of highly committed and talented people, which is better served by very focused publicity, yet intuitively it feels like more of a status accomplishment to reach a broader audience (y'know, one with lots of hot babes - that's why guys seek status, after all).  (This is a downside to LW being a sausage-fest - less incentive for men to status-seek through community-valued accomplishments if it won't get them chicks.)

\n

This issue reminds me of our political system, which rewards people for believably promising to achieve great things rather than for accomplishing them.  After all, which gets a Congressman more status in our society - the title of \"Senator\", or their voting record and the impact of the bills they helped craft and pass?  Talk about image over impact!

\n

Anyway, your thoughts on motivation, identity, public commitment, and publicity are welcomed.

" } }, { "_id": "96CAzLxwvBp7jtjjo", "title": "Link: The Case for Working With Your Hands", "pageUrl": "https://www.lesswrong.com/posts/96CAzLxwvBp7jtjjo/link-the-case-for-working-with-your-hands", "postedAt": "2009-05-28T14:16:03.492Z", "baseScore": 25, "voteCount": 22, "commentCount": 12, "url": null, "contents": { "documentId": "96CAzLxwvBp7jtjjo", "html": "

The NYTimes recently published a long semi-autobiographical article written by Michael Crawford, a University of Chicago PhD graduate who is currently employed as a motorcycle mechanic. The article is partially a somewhat standard lament about the alienation and drudgery of modern corporate work. But it is also very much about rationality. Here's an excerpt:

\n
\n

As it happened, in the spring I landed a job as executive director of a policy organization in Washington. This felt like a coup. But certain perversities became apparent as I settled into the job. It sometimes required me to reason backward, from desired conclusion to suitable premise. The organization had taken certain positions, and there were some facts it was more fond of than others. As its figurehead, I was making arguments I didn’t fully buy myself. Further, my boss seemed intent on retraining me according to a certain cognitive style — that of the corporate world, from which he had recently come. This style demanded that I project an image of rationality but not indulge too much in actual reasoning. As I sat in my K Street office, Fred’s life as an independent tradesman gave me an image that I kept coming back to: someone who really knows what he is doing, losing himself in work that is genuinely useful and has a certain integrity to it. He also seemed to be having a lot of fun.

\n
\n

I think this article will strike a chord with programmers. A large part of the satisfaction of motorcycle work that Crawford describes comes from the fact that such work requires one to confront reality, however harsh it may be. Reality cannot be placated by hand-waving, Powerpoint slides, excuses, or sweet talk. But the very harshness of the challenge means that when reality yields to the finesse of a craftsman, the reward is much greater. Programming has a similar aspect: a piece of software is basically either correct or incorrect. And programming, like mechanical work, allows one to interrogate and engage the system of interest through a very high-bandwidth channel: you write a test, run it, tweak it, re-run, etc.

" } }, { "_id": "y5fhowZqyanr4cDg6", "title": "Can we create a function that provably predicts the optimization power of intelligences?", "pageUrl": "https://www.lesswrong.com/posts/y5fhowZqyanr4cDg6/can-we-create-a-function-that-provably-predicts-the", "postedAt": "2009-05-28T11:35:28.556Z", "baseScore": -7, "voteCount": 11, "commentCount": 17, "url": null, "contents": { "documentId": "y5fhowZqyanr4cDg6", "html": "

Follow up to  Efficient Cross-domain Optimization

\n

When I am skeptical that we will ever understand intelligence, I am skeptical that we will ever be able to reliably map a system's description onto its optimization power. This has implications for how well we will be able to create intelligences, and how good intelligences will be at self-improving.

\n

Obviously we can't predict the effectiveness of an arbitrary program, due to Rice's theorem and intelligence being a non-trivial property. So the best we can hope for is predicting the effectiveness of some restricted set of programs. Is such a function possible? This is my take on the subject.

\n

\n

Let o(p) be a function that maps a program p to its optimization power.

\n

Mu, Omega's younger brother, has a challenge for you. You get to design a system and put it in a box with 20 red and 20 green balls; it will activate itself after 10 minutes and then have the goal of removing as many red balls from the box as possible in 10 minutes. You have to decide whether it is going to remove more or fewer than 5 red balls from the box. You get transported to a nirvana if you predict correctly, and your world gets turned into paper clips if you get it wrong.

\n

You whip up a program, evaluate it using o, and bet according to its evaluation.

\n

Unknown to you, Mu also has a copy of your o and runs it on the system you put in the box. Systems that return a high value from the optimization-power measure it destroys before they activate; those with a low value it helps, performing their goals for them. In the second case it is still p that causes the goal to be fulfilled, since if p were different the goal would be fulfilled to a different degree. You can see it as inspiring pity in someone else, making them help when they would not have done otherwise. It is still winning.
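
\n

Here is a minimal sketch of the diagonalization at work - my own toy framing, with invented names such as estimated_power and THRESHOLD, not the post's formalism. Whatever o predicts, Mu uses its own copy of o to arrange the opposite:

```python
# Toy version of the Mu argument: any predictor o that Mu can also run can be forced
# to be wrong, because Mu conditions the outcome on o's own verdict.
# The names estimated_power and THRESHOLD are invented for this sketch.
THRESHOLD = 5   # the bet: more or fewer than 5 red balls removed

def o(program):
    # Stand-in optimization-power estimate; assume some static analysis produced it.
    return program['estimated_power']

def balls_removed_with_mu(program):
    # Mu runs its copy of o and inverts the verdict.
    if o(program) > THRESHOLD:
        return 0     # o says 'powerful': Mu destroys the system before it activates
    else:
        return 20    # o says 'weak': Mu removes the red balls itself

program = {'estimated_power': 12}            # whatever we build, however o scores it...
prediction = o(program) > THRESHOLD
outcome = balls_removed_with_mu(program) > THRESHOLD
print(prediction, outcome)                   # ...the prediction and the outcome disagree
```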

\n

So Mu forces o to be wrong; it was not the reliable predictor of a set of programs' optimization power that we had hoped for, and we have a contradiction. Is there any way to salvage it? You could make your effectiveness measure depend upon the environment e as well; however, that does not remove the potential for self-reference, as o is part of the environment. We might be able to rescue o by constraining the environment to contain no reference to o. However, we neither control the environment nor have perfect knowledge of it, so we don't know when it contains references to o, or when o is reliable.

\n

You could try to make it so that Mu can have no impact on what p does. That is the same as trying to make the system indestructible, but with reversible physics, whatever is created can be destroyed.

\n

So where do we go from here?

\n

 

" } }, { "_id": "PG8i7ZiqLxthACaBi", "title": "Do Fandoms Need Awfulness?", "pageUrl": "https://www.lesswrong.com/posts/PG8i7ZiqLxthACaBi/do-fandoms-need-awfulness", "postedAt": "2009-05-28T06:03:22.092Z", "baseScore": 38, "voteCount": 39, "commentCount": 159, "url": null, "contents": { "documentId": "PG8i7ZiqLxthACaBi", "html": "

Stephen Bond, \"Objects of Fandom\":

\n
\n

...my theory is that for something to attract fans, it must have an aspect of truly monumental badness about it.

\n

Raiders of the Lost Ark is a robust potboiler, tongue-in-cheek, very competently done. I think it's enjoyable, but even among those who don't, it's hard to see the film attracting actual derision. Boredom or irritation, probably, but nothing more. Star Wars, on the other hand.... From one perspective, it's an entertaining space opera, but from a slightly different perspective, an imperceptible twist of the glass, it's laughably awful. Utterly ridiculously bad. And it's this very badness that makes so many people take up arms in its defence.

\n

...It's impossible to imagine a fan of Animal Farm, the Well-Tempered Clavier, or the theory of gravity. Such works can defend themselves. But badness, especially badness of an obvious, monumental variety, inspires devotion. The quality of the work, in the face of such glaring shortcomings, becomes a matter of faith -- and faith is a much stronger bond than mere appreciation. It drives fans together, gives them strength against those who sneer... And so the fan groups of Tolkien, Star Trek, Spider-man, Japanese kiddie-cartoons etc. develop an almost cult-like character.

\n
\n

\"Uh oh,\" I said to myself on first reading this, \"Is this why my fans are more intense than Robin Hanson's fans?  And if I write a rationality book, should I actually give in to temptation and self-indulgence and write in Twelve Virtues style, just so that it has something attackable for fans to defend?\"

\n

But the second time I turned my thoughts toward this question, I performed that oft-neglected operation, asking:  \"I read it on the Internet, but is it actually true?\"  Just because it's unpleasant doesn't mean it's true.  And just because it provides a bit of cynicism that would give me rationality-credit to acknowledge, doesn't mean it becomes true just so I can earn the rationality-credit.

\n

The first counterexample that came to mind was Jack Vance.  Jack Vance is a science-fiction writer who, to the best of my knowledge, I've never heard accused of any great sin (or any lesser sin, actually).  He is - was - the supremely competent craftsman of SF: his words flow, his plots race, and his human cultures are odder than other authors' aliens, to say nothing of his aliens.  Vance didn't have his characters give controversial political speeches like Heinlein.  Vance just wrote consistently excellent science fiction.

\n

And some of Vance's fans got together and produced the Vance Integral Edition, a complete collection of Vance in leather-bound hardcover books with high-quality long-lasting paper.  They contracted to get the books printed, and when the books arrived, enough Vance fans showed up to ship them all.  (They referred to themselves as \"packing scum\".)

\n

That's serious fandom.  Aimed at work that - like Animal Farm or the Well-Tempered Clavier - is merely excellent, without an aspect of monumental badness to defend.

\n

Godel, Escher, Bach - maybe I'm prejudiced here, and I've heard a word or two said against it, but really, I don't think the fandom that it has stems from it being frequently attacked.  On the other hand, there aren't annual conventions for fans of self-referential sentences, so maybe it's not as much of a data point as I might like.

\n

Star Wars really did have something going for it that Raiders of the Lost Ark didn't, namely, it introduced a lot of impressionable minds to science fiction.  Or space opera, if you like.  The point is that the romance of space is not the romance of archeology.

\n

On due reflection, I'm not sure that utter ridiculous monumental badness is all it's cracked up to be.

\n

But there are annual Star Trek conventions.  And there are not annual Jack Vance conventions.  Douglas Hofstadter might be far more widely beloved - but Ayn Rand has more fanatic fans.

\n

If Jack Vance had been so clever as to keep all the poetic phrasing and alien societies, but now and then have his characters make crazy political speeches - if he had deliberately introduced an aspect of monumental badness - would he now be worshiped, instead of just loved?

\n

Can anyone think of a true, pure counterexample of a reasonably fanatic fandom (to the level of annual conventions, though not necessarily suicide bombers) of something that is just sheer good professional craftwork, and not commonly criticized?  And of course the acid test is not whether you think it is just sheer good craftsmanship, but whether this is widely believed within the broad context of the relevant social community - can you have fanatic fans when their object of worship really is that good and the mainstream believes it too?

\n

I do think that Stephen Bond's Objects of Fandom is pointing to a real effect, if not the only effect.  So in the same vein that we should try to be attracted to basic science textbooks and not just poorly written press releases about \"breaking news\", let us try to be fans of those merely excellent works that lack an aspect of monumental awfulness to defend.

" } }, { "_id": "y3HAxPN8WHqX7zhkv", "title": "Anime Explains the Epimenides Paradox", "pageUrl": "https://www.lesswrong.com/posts/y3HAxPN8WHqX7zhkv/anime-explains-the-epimenides-paradox", "postedAt": "2009-05-27T21:12:13.086Z", "baseScore": 4, "voteCount": 18, "commentCount": 29, "url": null, "contents": { "documentId": "y3HAxPN8WHqX7zhkv", "html": "

The Epimenides Paradox or Liar Paradox is \"This sentence is false.\"  Type hierarchies are supposed to resolve the Epimenides paradox... Using an indefinitely extensible, indescribably infinite, ordinal hierarchy of meta-languages. No meta-language can contain its own truth predicate - no meta-language can talk about the \"truth\" or \"falsity\" of its own sentences - and so for every meta-language we need a meta-meta-language.
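
\n

A minimal sketch of the levelled-truth-predicate idea, in my own toy encoding rather than any real formal system: each level's truth predicate may only be applied to sentences from strictly lower levels, so the liar sentence never finds a level to live at.

```python
# Toy Tarskian hierarchy: a sentence at level n may only be judged by truth predicates
# of strictly higher levels, and the level-n predicate refuses level-n sentences.
# This is an illustrative encoding, not a formal system.

class Sentence:
    def __init__(self, level, content):
        self.level = level        # which meta-language the sentence lives in
        self.content = content    # a callable that evaluates the sentence

def true_at(level, sentence):
    # Truth predicate of the given meta-language level.
    if sentence.level >= level:
        raise ValueError('no meta-language can talk about the truth of its own sentences')
    return bool(sentence.content())

ground = Sentence(level=0, content=lambda: 2 + 2 == 4)
print(true_at(1, ground))    # fine: a level-1 language judging a level-0 sentence

liar = Sentence(level=1, content=lambda: not true_at(1, liar))
# Calling true_at(1, liar) raises: the liar needs a level-1 truth predicate about itself,
# which the hierarchy forbids -- hence the endless ascent to levels 2, 3, ...
```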

I didn't create this video and I don't know who did - but it does a pretty good job of depicting how I feel about infinite type hierarchies: namely, pretty much the same way I feel about the original Epimenides Paradox.

\n

Bonus problem: In what language did I write the description of this video?

\n

\n

[embedded video]

" } }, { "_id": "XcDSmXecYiubPjxAj", "title": "Eric Drexler on Learning About Everything", "pageUrl": "https://www.lesswrong.com/posts/XcDSmXecYiubPjxAj/eric-drexler-on-learning-about-everything", "postedAt": "2009-05-27T12:57:21.590Z", "baseScore": 38, "voteCount": 37, "commentCount": 15, "url": null, "contents": { "documentId": "XcDSmXecYiubPjxAj", "html": "

Related to: The Simple Math of Everything, Your Strength as a Rationalist, Teaching the Unteachable.

\n

Eric Drexler wrote a couple of articles on the importance and methods of obtaining interdisciplinary knowledge:

\n\n
\n

Note that the title above isn't \"how to learn everything\", but \"how to learn about everything\". The distinction I have in mind is between knowing the inside of a topic in deep detail — many facts and problem-solving skills — and knowing the structure and context of a topic: essential facts, what problems can be solved by the skilled, and how the topic fits with others.

\n

This knowledge isn't superficial in a survey-course sense: It is about both deep structure and practical applications. Knowing about, in this sense, is crucial to understanding a new problem and what must be learned in more depth in order to solve it.

\n
\n

This topic was discussed intermittently on Overcoming Bias. Basic understanding of many fields allows one to recognize how well-understood by science a problem is and to see its place in the structure of scientific knowledge; to develop a better intuitive grasp of what's possible and what's not; and to adequately perceive the natural world.

\n

The advice he gives for obtaining general knowledge feels right, even for studying the topics that you intend to eventually understand in depth:

\n
\n

Don't drop a subject because you know you'd fail a test — instead, read other half-understandable journals and textbooks to accumulate vocabulary, perspective, and context.

\n
" } }, { "_id": "3mromFtJuGo2d5NHr", "title": "Dissenting Views", "pageUrl": "https://www.lesswrong.com/posts/3mromFtJuGo2d5NHr/dissenting-views", "postedAt": "2009-05-26T18:55:17.205Z", "baseScore": 22, "voteCount": 33, "commentCount": 212, "url": null, "contents": { "documentId": "3mromFtJuGo2d5NHr", "html": "

\n

Occasionally, concerns have been expressed from within Less Wrong that the community is too homogeneous. Certainly the observation of homogeneity is true to the extent that the community shares common views that are minority views in the general population.

\n

Maintaining a High Signal to Noise Ratio

\n

The Less Wrong community shares an ideology that it is calling ‘rationality’ (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a lot of faithful support in order to develop true. By this, I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, distorting it. In other words, you want to cultivate a high signal to noise ratio.

\n

For the most part, Less Wrong is remarkably successful at cultivating this high signal to noise ratio. A common ideology attracts people to Less Wrong, and then karma is used to maintain fidelity. It protects Less Wrong from the influence of outsiders who just don't \"get it\". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, align reasonably well, or that align with one of the directions that the ideology is reasonably evolving.

\n

\n

Rationality is not a religion – Or is it?

\n

Therefore, on Less Wrong, a person earns karma by expressing views from within the ideology. Wayward comments are discouraged with down-votes. Sometimes, even, an ideological toe is stepped on, and the disapproval is more explicit. I’ve been told, here and there, one way or another, that expressing extremely dissenting views is: stomping on flowers, showing disrespect, not playing along, being inconsiderate.

\n

So it turns out: the conditions necessary for the faithful support of an ideology are not that different from the conditions sufficient for developing a cult.

\n

But Less Wrong isn't a religion or a cult. It wants to identify and dis-root illusion, not create a safe place to cultivate it. Somewhere, Less Wrong must be able to challenge its basic assumptions, and see how they hold up to new and all evidence. You have to allow brave dissent.

\n\n

Shouldn’t there be a place where people who think they are more rational (or better than rational), can say, “hey, this is wrong!”?

\n

A Solution

\n

I am creating this top-level post for people to express dissenting views that are simply too far from the main ideology to be expressed in other posts. If successful, it would serve two purposes. First, it would remove extreme dissent away from the other posts, thus maintaining fidelity there. People who want to play at “rationality” ideology can play without other, irrelevant points of view spoiling the fun. Second, it would allow dissent for those in the community who are interested in not being a cult, challenging first assumptions and suggesting ideas for improving Less Wrong without being traitorous. (By the way, karma must still work the same, or the discussion loses its value relative to the rest of Less Wrong. Be prepared to lose karma.)

\n

Thus I encourage anyone (outsiders and insiders) to use this post “Dissenting Views” to answer the question: Where do you think Less Wrong is most wrong?

" } }, { "_id": "4HdTLwtWcJ6TD3m2g", "title": "The Wire versus Evolutionary Psychology", "pageUrl": "https://www.lesswrong.com/posts/4HdTLwtWcJ6TD3m2g/the-wire-versus-evolutionary-psychology", "postedAt": "2009-05-25T05:21:23.639Z", "baseScore": 18, "voteCount": 20, "commentCount": 19, "url": null, "contents": { "documentId": "4HdTLwtWcJ6TD3m2g", "html": "

In their Evolutionary Psychology Primer, Cosmides and Tooby give an example of a hypothesized adaptation that allows us to detect cheaters in a certain type of logical task (the Wason selection task) that we generally fail at.  In the Wason selection task (both the article and the wiki give examples) you are presented with a type of logic puzzle that people tend to do poorly at, and even formal training in logic helps little; yet when the examples involve cheating (such as \"If you are to eat those cookies, then you must first fix your bed\", where the task would be to figure out whether someone who is eating the cookies did indeed fix the bed), people perform much better (25% right in the regular task, 65-80% in this version, according to the article).

\n

In the show The Wire, in season one, episode eight, Wallace, a teen-age drug dealer, is asked by a young child to help her with her math homework.  It's an addition and subtraction word problem about passengers on a bus (I can't remember the numbers, but it's along the lines of: if the bus has 10 people on it and at the next stop 3 get on and 4 leave, etc.).  Wallace rephrases the word problem to be about drugs and the kid gets it right.  Wallace, frustrated, asks why, and the kid replies along the lines of: \"They beat you if you get the count wrong.\" (Edit: simpleton gives the quote as \"Count be wrong, they fuck you up.\")

\n

C&T conclude that there are evolved \"algorithms\" in our brains that deal with social contract processing that explain why people do better on certain Wason selection tasks.  The Wire points out a simpler possible explanation that their experiments did not control for: people do better on tasks they care about, unless one would like to suppose there are special math circuits in the brain for certain \"social contract\" situations.

\n

Of course, I am not saying a fictional anecdote disproves C&T's claim, but it does point to something they didn't test for, and something that I find rather plausible.

\n

Possible tests: Look at emotionally-motivating things that vary across culture and develop Wason selection tasks to test for that; look at various types of emotionally-motivating things (which I do not presume all emotional responses will affect the test results), and obviously, test The Wire example itself.

" } }, { "_id": "2KNN9WPcyto7QH9pi", "title": "This Failing Earth", "pageUrl": "https://www.lesswrong.com/posts/2KNN9WPcyto7QH9pi/this-failing-earth", "postedAt": "2009-05-24T16:09:11.621Z", "baseScore": 51, "voteCount": 68, "commentCount": 166, "url": null, "contents": { "documentId": "2KNN9WPcyto7QH9pi", "html": "

Suppose I told you about a certain country, somewhere in the world, in which some of the cities have degenerated into gang rule.  Some such cities are ruled by a single gang leader, others have degenerated into almost complete lawlessness.  You would probably conclude that the cities I was talking about were located inside what we call a \"failed state\".

\n

So what does the existence of North Korea say about this Earth?

\n

No, it's not a perfect analogy.  But the thought does sometimes occur to me, to wonder if the camel has two humps.  If there are failed Earths and successful Earths, in the great macroscopic superposition popularly known as \"many worlds\" - and we're not one of the successful.  I think of this as the \"failed Earth\" hypothesis.

\n

Of course the camel could also have three or more humps, and it's quite easy to imagine Earths that are failing much worse than this, epic failed Earths ruled by the high-tech heirs of Genghis Khan or the Catholic Church.  Oh yes, it could definitely be worse...

\n

...and the \"failed state\" analogy is hardly perfect; \"failed state\" usually refers to failure to integrate into the global economy, but a failed Earth is not failing to integrate into anything larger...

\n

...but the question does sometimes haunt me, as to whether in the alternative Everett branches of Earth, we could identify a distinct cluster of \"successful\" Earths, and we're not in it.  It may not matter much in the end; the ultimate test of a planet's existence probably comes down to Friendly AI, and Friendly AI may come down to nine people in a basement doing math.  I keep my hopes up, and think of this as a \"failing Earth\" rather than a \"failed Earth\".

\n

But it's a thought that comes to mind, now and then.  Reading about the ongoing Market Complexity Collapse and wondering if this Earth failed to solve one of the basic functions of global economics, in the same way that Rome, in its later days, failed to solve the problem of orderly transition of power between Caesars.

\n

Of course it's easy to wax moralistic about people who aren't solving their coordination problems the way you like.  I don't mean this to degenerate into a standard diatribe about the sinfulness of this Earth, the sort of clueless plea embodied perfectly by Simon and Garfunkel:

\n
\n

I dreamed I saw a mighty room
The room was filled with men
And the paper they were signing said
They'd never fight again

\n
\n

It's a cheap pleasure to wax moralistic about failures of global coordination.

\n

But visualizing the alternative Everett branches of Earth, spread out and clustered - for me, at least, that seems to help trigger my mind into a non-Simon-and-Garfunkel mode of thinking.  If the successful Earths lack a North Korea, how did they get there?  Surely not just by signing a piece of paper saying they'd never fight again.

\n

Indeed, our Earth's Westphalian concept of sovereign states is the main thing propping up Somalia and North Korea.  There was a time when any state that failed that badly would be casually conquered by a more successful neighbor.  So maybe the successful Earths don't have a Westphalian concept of sovereignty; maybe our Earth's concept of inviolable borders represents a failure to solve one of the key functions of a planetary civilization.

\n

Maybe the successful Earths are the ones where the ancient Greeks, or equivalent thereof, had the \"Aha!\" of Darwinian evolution... and at least one country started a eugenics program that successfully selected for intelligence, well in advance of nuclear weapons being developed.  If that makes you uncomfortable, it's meant to - the successful Earths may not have gotten there through Simon and Garfunkel.  And yes, of course the ancient Greeks attempting such a policy could and probably would have gotten it terribly wrong; maybe the epic failed Earths are the ones where some group had the Darwinian insight and then successfully selected for prowess as warriors.  I'm not saying \"Go eugenics!\" would have been a systematically good idea for ancient Greeks to try as policy...

\n

But maybe the top cluster of successful Earths, among human Everett branches, stumbled into that cluster because some group stumbled over eugenic selection for intelligence, and then, being a bit smarter, realized what it was they were doing right, so that the average IQ got up to 140 well before anyone developed nuclear weapons.  (And then conquered the world, rather than respecting the integrity of borders.)

\n

What would a successful Earth look like?  How high is their standard sanity waterline?  Are there large organized religions in successful Earths - is their presence here a symptom of our failure to solve the problems of a planetary civilization?  You can ring endless changes on this theme, and anyone with an accustomed political hobbyhorse is undoubtedly imagining their pet Utopia already.  For my own part, I'll go ahead and wonder, if there's an identifiable \"successful\" cluster among the human Earths, what percentage of them have worldwide cryonic preservation programs in place.

\n

One point that takes some of the sting out of our ongoing muddle - at least from my perspective - is my suspicion that the Earths in the successful cluster, even those with an average IQ of 140 as they develop computers, may not be in much of a better position to really succeed, to solve the Friendly AI problem.  A rising tide lifts all boats, and Friendly AI is a race between cautiously developed AI and insufficiently-cautiously-developed AI.  \"Successful\" Earths might even be worse off, if they solve their global coordination problems well enough to put the whole world's eyes on the problem and turn the development over to prestigious bureaucrats.  It's not a simple issue like cryonics that we're talking about.  If, in the end, \"successful Earths\" of the human epoch aren't in a much better position for the catastrophically high-level pass-fail test of the posthuman transition, than our own \"failing Earth\"... then this Earth isn't all that much more doomed just because we screwed up our financial system, international relations, and basic rationality training.

\n

Is such speculation at all useful?  \"Live in your own world\", as the saying goes...

\n

...Well, it might not be a saying here, but it's probably a saying in those successful Earths where the scientific community is long since trained in formal Bayesianism and they readily accepted the obvious truth of many-worlds... as opposed to our own world and its constantly struggling academia where senior scientists spend most of their time writing grant proposals...

\n

(Michael Vassar has an extended thesis on how the scientific community in our Earth has been slowly dying since 1910 or so, but I'll let him decide whether it's worth his time to write up that post.)

\n

It's usually not my intent to depress people.  I have an accustomed saying that if you want to depress yourself, look at the future, and if you want to cheer yourself up, look at the past.  By analogy - well, for all we know, we might be in the second-highest major cluster, or in the top 10% of all Earths even if not one of the top 1%.  It might be that most Earths have global orders descended from the conquering armies of the local Church.  I recently had occasion to visit the National Museum of Australia in Canberra, and it's shocking to think of how easily a human culture can spend thirty thousand years without inventing the bow and arrow.  Really, we did do quite well for ourselves in a lot of ways... I think?

\n

A sense of beleagueredness, a sense that everything is decaying and dying into sinfulness - these memes are more useful for gluing together cults than for inspiring people to solve their coordination problems.

\n

But even so - it's a thought that I have, when I see some aspect of the world going epically awry, to wonder if we're in the cluster of Earths that fail.  It's the sort of thought that inspires me, at least, to go down into that basement and solve the math problem and make everything come out all right anyway.  Because if there's one thing that the intelligence explosion really messes up, it's the dramatic unity of human progress - if this were a world with a supervised course of history we'd be worrying about making it to Akon's world through a continuous developmental schema, not making a sudden left turn to solve a math problem.

\n

It may be that in the fractiles of the human Everett branches, we live in a failing Earth - but it's not failed until someone messes up the first AI.  I find that a highly motivating thought.  Your mileage may vary.

" } }, { "_id": "MT85svcEweuryr2sn", "title": "Saturation, Distillation, Improvisation: A Story About Procedural Knowledge And Cookies", "pageUrl": "https://www.lesswrong.com/posts/MT85svcEweuryr2sn/saturation-distillation-improvisation-a-story-about", "postedAt": "2009-05-24T02:38:01.532Z", "baseScore": 48, "voteCount": 48, "commentCount": 29, "url": null, "contents": { "documentId": "MT85svcEweuryr2sn", "html": "

Most propositional knowledge (knowledge of facts) is pretty easy to come by (at least in principle).  There is only one capital of Venezuela, and if you wish to learn the capital of Venezuela, Wikipedia will cooperatively inform you that it is Caracas.  For propositional knowledge that Wikipedia knoweth not, there is the scientific method.  Procedural knowledge - the knowledge of how to do something - is a different animal entirely.  This is true not only with regard to the question of whether Wikipedia will be helpful, but also in the brain architecture at work: anterograde amnesiacs can often pick up new procedural skills while remaining unable to learn new propositional information.

\n

One complication in learning new procedures is that there are usually dozens, if not hundreds, of ways to do something.  Little details - the sorts of things that sink into the subconscious with practice but are crucial to know for a beginner - are frequently omitted in casual descriptions.  Often, it can be very difficult to break into a new procedurally-oriented field of knowledge because so much background information is required.  While there may be acknowledged masters of the procedure, it is rarely the case that their methods are ideal for every situation and potential user, because the success of a procedure depends on a vast array of circumstantial factors.

\n

I propose below a general strategy for acquiring new procedural knowledge.  First, saturate by getting a diverse set of instructions from different sources.  Then, distill by identifying what all or most of them have in common.  Finally, improvise within the remaining search space to find something that works reliably for you and your circumstances.

\n

The strategy is not fully general: I expect it would only work properly for procedures that are widely attempted and shared; that you can afford to try multiple times; that have at least partially independent steps so you can mix and match; and that are in fields you have at least a passing familiarity with.  The sort of procedural knowledge that I seek with the most regularity is how to make new kinds of food, so I will illustrate my strategy with a description of how I used it to learn to make meringues.  If you find cookies a dreadfully boring subject of discourse, you may not wish to read the rest of this post.

\n

\n

I. Saturation

\n

The first step is to collect procedural instructions for the object of your search from many different people, saturating your field of search with a variety of recommendations.  A Google search did it in my case; for more esoteric knowledge, it might be necessary to look harder.  Half a dozen of the more popular sets of instructions tends to be plenty for recipes, but the ideal number could easily be higher for procedures with a wider variance of detail or an unusually high number of people who have no clue what they are talking about.  Here are four recipes for meringues that I referred to and one recipe for meringue pie topping that also informed my learning.  I also got one recipe from a friend.

\n

All of the recipes purported to teach me to do the same thing: turn some eggwhites and sugar (and varying other ingredients) into puffy little cookies.  They varied in such details as: ingredient ratios, type of sugar, other ingredients called for besides eggwhites and sugar, oven temperature, what to line the cookie sheet with, and mentions of other factors such as having a clean mixing bowl or humid weather.

\n

II. Distillation

\n

The second step is to extract what all of the procedures have in common, and decide which non-ubiquitous steps to include.  In this case, I first had to multiply all the recipes to make them call for the same number of eggwhites (since those are very difficult to halve or otherwise adjust, I chose them instead of sugar as my starting point).  All five of the recipes (after this revision) called for four eggwhites; all of the recipes call for either caster/superfine sugar or unspecified sugar1; all of them call for vanilla; all of them instruct me to beat the eggwhites to peaks first and then add the sugar and beat it in.  Four of them call for salt.  Four of them call for cream of tartar.  Most of them call for components like candy and nuts, but since I know that meringues come in a wide variety of flavors (by, for example, reading these recipes) I treat these all as optional.  Proposed oven temperatures/baking times are (200/1.5 hours), (250/30 minutes), (300/25 minutes), and (325/15 minutes).  They vary in whether the cookies are to be baked on a greased cookie sheet, a greased and floured cookie sheet, on baking parchment, on paper towels, or on tinfoil.

\n

A good place to start is to go with the majority: I decided to include both salt and cream of tartar.  Next, I eliminated the impractical: I could not find superfine sugar at the store and I don't own a food processor, so I went with granulated sugar.  As for the rest of the instructions, it was a free-for-all.  No two recipes agreed about the cookie sheet arrangement; the two of them that mentioned \"cracking\" disagreed on whether it was a desirable outcome; and worst, none of them explained why every time I tried to make these cookies, they refused to foam up and form peaks2.  Time for the last step.
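
\n

As an aside, the distillation step is mechanical enough to write down as a script. This is a minimal sketch with invented recipe data, not the actual recipes linked above:

```python
from collections import Counter

# Toy 'distillation': scale every recipe to the same number of egg whites, then keep
# the ingredients a majority agree on. The recipe data below is invented for illustration
# and is not taken from the recipes linked in this post.
recipes = [
    {'egg whites': 2, 'sugar': 0.5, 'vanilla': 0.5, 'salt': 0.125, 'cream of tartar': 0.25},
    {'egg whites': 4, 'sugar': 1.0, 'vanilla': 1.0, 'salt': 0.25},
    {'egg whites': 4, 'sugar': 1.0, 'vanilla': 0.5, 'cream of tartar': 0.5},
    {'egg whites': 3, 'sugar': 0.75, 'vanilla': 1.0, 'salt': 0.25, 'cream of tartar': 0.25},
]

TARGET = 4
scaled = [{k: v * TARGET / r['egg whites'] for k, v in r.items()} for r in recipes]

votes = Counter(ingredient for r in scaled for ingredient in r)
majority = sorted(ing for ing, n in votes.items() if n > len(scaled) / 2)
print(majority)   # the ingredients most recipes call for, normalized to 4 egg whites
```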

\n

III. Improvisation

\n

A close reading of the more verbose recipes turns up urgent cautions about not letting any grease into the batter, be it a smear from a prior cooking adventure left on the mixing bowl, a bit of yolk, or - in one recipe - the oils that are naturally on skin.  This last, it turned out, was the key: I was separating eggs by hand, and was not very neat about it.  I switched to a technique recommended by a friend involving spoons, and presto, I could get the meringue batter to hold peaks...

\n

But how long to cook them, at what temperature, and sitting on what?  There, it was necessary to experiment (fortunately, after having narrowed the search space somewhat).  This stage depended as much on my personal taste, the local weather, and the behavior of my oven as on the accuracy of the original recipes; it seems that my oven runs hot, so I need to bake them at 250 degrees or cooler and babysit them after the first ten minutes, or they will burn.  Additionally, parchment paper and tinfoil3 wound up burning the bottoms of the cookies before the tops were even dry; paper towels worked.

\n

 

\n

1Caster and superfine sugar are the same thing, and you can make a reasonable facsimile using a food processor.  When the type of sugar is not specified in a recipe, it means to use granulated white sugar; other kinds are named (e.g. light or dark brown sugar, turbinado sugar, confectioner's sugar, etc.).  This is one of the examples of a situation where background knowledge of the field comes in handy.

\n

2Not that this stopped me from baking the batter anyway.  It just turned into round, flat cookies instead of puffy, light ones.

\n

3I didn't get around to trying either greased or greased-and-floured bare cookie sheets - I prioritized these tests last because they involve more dishes to wash.

" } }, { "_id": "QL6dzCKBK4KTTDk8W", "title": "Homogeneity vs. heterogeneity (or, What kind of sex is most moral?)", "pageUrl": "https://www.lesswrong.com/posts/QL6dzCKBK4KTTDk8W/homogeneity-vs-heterogeneity-or-what-kind-of-sex-is-most", "postedAt": "2009-05-22T23:25:04.300Z", "baseScore": -6, "voteCount": 24, "commentCount": 79, "url": null, "contents": { "documentId": "QL6dzCKBK4KTTDk8W", "html": "

You've all heard discussions of collective ethics vs. individualistic ethics.  These discussions always assume that the organism in question remains constant.  Your task is to choose the proper weight to give collective versus individual goals.

\n

But the designer of transhumans has a different starting point.  They have to decide how much random variation the population will have, and how much individuals will resemble those that they interact with.

\n

Organisms with less genetic diversity place more emphasis on collective ethics.  The amount of selflessness a person exhibits towards another person can be estimated according to their genetic similarity.  To a first approximation, if person A shares half of their genes with people in group B, person A will regard saving their own life, versus saving two people from group B, as an even tradeoff.  In fact, this generalizes across all organisms, and whenever you find insects like ants or bees, who are extremely altruistic, you will find that they share most of their genes with the group they are behaving altruistically towards.  Bacterial colonies and other clonal colonies can be expected to be even more altruistic (although they don't have as wide a behavioral repertoire with which to demonstrate their virtue).  Google kin selection.
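
\n

The even-tradeoff claim is essentially Hamilton's rule, r * B > C: an altruistic act is favored when relatedness times benefit exceeds cost. A minimal sketch, with my own illustrative numbers:

```python
# Hamilton's rule: an altruistic act toward kin is favored when r * B > C, where r is
# genetic relatedness, B the benefit conferred, and C the cost borne by the altruist.
def altruism_favored(relatedness, benefit, cost):
    return relatedness * benefit > cost

# The example above: give up your own life (cost = 1 life) to save two people who each
# share half your genes (r = 0.5, benefit = 2 lives) -- exactly break-even, so not favored.
print(altruism_favored(0.5, 2, 1))    # False: 0.5 * 2 = 1 is not greater than 1
print(altruism_favored(0.75, 2, 1))   # True for full sisters in haplodiploid insects (r = 0.75)
print(altruism_favored(1.0, 2, 1))    # True for members of a clonal colony
```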

\n

Ants, honeybees, and slime molds, which share more of their genes with their nestmates than humans do with their family, achieve levels of cooperation that humans would consider horrific if it were required of them.  Consider these aphids that explode themselves to provide glue to fill in holes in their community's protective gall.

\n

The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy.  The designer of posthumans (for instance, an AI designing its subroutines for a task), OTOH, actually has a decision to make -- where should that balance be set?  How much variation should there be in the population (whether of genes, memes, or whatever is most important WRT cooperation)?

\n

A strictly goal-oriented AI would supervise its components and resources so as to optimize the trade-off between \"exploration\" and \"exploitation\".  (Exploration means trying new approaches; exploitation means re-using approaches that have worked well in the past.)  This means that it would set the level of random variation in the population according to certain equations that maximize the expected speed of optimization.
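Those equations are left abstract in the post; as a toy sketch of my own (not anything from the post), here is how a single variation knob, the mutation width sigma, trades exploration against exploitation in a tiny population-based optimizer:

// Toy illustration (my own, not from the post) of a variation knob trading
// exploration against exploitation in a population-based optimizer.
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

double fitness(double x) { return -(x - 3.0) * (x - 3.0); }  // single peak at x = 3

int main() {
    std::mt19937 rng(42);
    double sigma = 0.5;                // variation level: higher means more exploration
    std::vector<double> pop(20, 0.0);  // a perfectly homogeneous starting population
    for (int gen = 0; gen < 100; ++gen) {
        std::normal_distribution<double> mut(0.0, sigma);
        for (double& x : pop) x += mut(rng);              // explore: random variation
        std::sort(pop.begin(), pop.end(),                 // exploit: keep the best half
                  [](double a, double b) { return fitness(a) > fitness(b); });
        for (std::size_t i = pop.size() / 2; i < pop.size(); ++i)
            pop[i] = pop[i - pop.size() / 2];             // refill worst half from the best half
    }
    std::printf("best x found: %.3f\n", pop.front());
    return 0;
}

With sigma near zero the population stays nearly clonal and just exploits wherever it already is; with sigma large it keeps scattering and never settles. That is the trade-off the post has the AI tuning.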

\n

But choosing the level of variation in a population has dramatic ethical consequences.  Creating a more homogeneous population will increase altruism, at the expense of decreasing individualism.  Choosing the amount of variation in a population strictly by maximizing the speed of optimization would mean rolling the dice as to how much altruism vs. individualism your society will have.

\n

You have a goal to solve, and a parameter setting that will optimize solving that goal; you also have a fuzzy ethical issue that has something to say about how to set that same parameter.  Anyone who is not a moral realist must say, Damn the torpedoes: set the parameter so as to optimize goal-solving.  In other words, simply define the correct moral weight to place on collective versus individual goals as that which results when you set your population's genetic/memetic diversity so as to optimize your population's exploration/exploitation balance for its goals.

\n

Are you comfortable with that?

" } }, { "_id": "csQXwHurYXpWGp4Ni", "title": "Please voice your support for stem cell research", "pageUrl": "https://www.lesswrong.com/posts/csQXwHurYXpWGp4Ni/please-voice-your-support-for-stem-cell-research", "postedAt": "2009-05-22T18:45:38.523Z", "baseScore": -5, "voteCount": 7, "commentCount": 4, "url": null, "contents": { "documentId": "csQXwHurYXpWGp4Ni", "html": "

I apologize, this really isn't an article, as such. I do feel this is an issue that is important for most rationalists. Amongst my blog reading today, I came across this. At a risk to my karma for something potentially off-topic, I thought it important to post here, as I hadn't seen it mentioned.

\r\n

http://network.nature.com/people/etchevers/blog/2009/05/18/quick-us-citizens-go-support-the-draft-nih-guidelines-for-human-stem-cell-research

\r\n

Cutting to the chase, there is an NIH guideline document being created, and predictably anti-stem cell activists are voicing their dissent. To add pro-stem cell comments, you can post here:

\r\n

http://nihoerextra.nih.gov/stem_cells/add.htm

\r\n

The author of the first link posts this suggested comment:

\r\n

I SUPPORT STEM CELL RESEARCH and wish the NIH to also fund research utilizing established hESC lines derived in accordance with the core principles that govern the derivation of new lines.

\r\n

The comment period is open until May 26th. Thanks for reading. And if this is entirely off-topic for this list, please let me know and I will remove the article.

\r\n

 

" } }, { "_id": "aTCGTpP9JoYPw5pRA", "title": "Changing accepted public opinion and Skynet", "pageUrl": "https://www.lesswrong.com/posts/aTCGTpP9JoYPw5pRA/changing-accepted-public-opinion-and-skynet", "postedAt": "2009-05-22T11:05:08.878Z", "baseScore": 17, "voteCount": 20, "commentCount": 71, "url": null, "contents": { "documentId": "aTCGTpP9JoYPw5pRA", "html": "

Michael Anissimov has put up a website called Terminator Salvation: Preventing Skynet, which will host a series of essays on the topic of human-friendly artificial intelligence. Three rather good essays are already up there, including an old classic by Eliezer. The association with a piece of fiction is probably unhelpful, but the publicity surrounding the new Terminator film is probably worth it.

\n

What rational strategies can we employ to maximize the impact of such a site, or of publicity for serious issues in general? Most people who read this site will probably not do anything about it, or will find some reason to not take the content of these essays seriously. I say this because I have personally spoken to a lot of clever people about the creation of human-friendly artificial intelligence, and almost everyone finds some reason to not do anything about the problem, even if that reason is \"oh, ok, that's interesting. Anyway, about my new car... \".

\n

What is the reason underlying people's indifference to these issues? My personal suspicion is that most people make decisions in their lives by following what everyone else does, rather than by performing a genuine rational analysis.

\n

Consider the rise in social acceptability of making small personal sacrifices and political decisions based on eco-friendliness and your carbon footprint. Many people I know have become very enthusiastic for recycling used food containers and for unplugging appliances that use trivial amounts of power (for example unused phone chargers and electrical equipment on standby). The real reason that people do these things is that they have become socially accepted factoids. Most people in this world, even in this country, lack the mental faculties and knowledge to understand and act upon an argument involving notions of per capita CO2 emissions; instead they respond, at least in my understanding, to the general climate of acceptable opinion, and to opinion formers such as the BBC news website, which has a whole section for \"science and environment\". Now, I don't want to single out environmentalism as the only issue where people form their opinions based upon what is socially acceptable to believe, or to claim that reducing our greenhouse gas emissions is not a worthy cause.

\n

Another great example of socially acceptable factoids (though probably a less serious one) is the detox industry - see, for example, this Times article. I quote:

\n
\n

“Whether or not people believe the biblical story of the Virgin birth, there are plenty of other popular myths that are swallowed with religious fervour over Christmas,” said Martin Wiseman, Visiting Professor of Human Nutrition at the University of Southampton. “Among these is the idea that in some way the body accumulates noxious chemicals during everyday life, and that they need to be expunged by some mysterious process of detoxification, often once a year after Christmas excess. The detox fad — or fads, as there are many methods — is an example of the capacity of people to believe in (and pay for) magic despite the lack of any sound evidence.”

\n
\n

Anyone who takes a serious interest in changing the world would do well to understand the process whereby public opinion as a whole changes on some subject, and attempt to influence that process in an optimal way. How strongly is public opinion correlated with scientific opinion, for example? Particular attention should be paid to the history of the environmentalist movement. See, for example, MacKay's Sustainable Energy Without the Hot Air for a great example of a rigorous quantitative analysis in support of various ways of balancing our energy supply and demand, and for a great take on the power of socially accepted factoids, see Phone chargers - the Truth.

\n

So I submit to the wisdom of the Less Wrong groupmind - what can we do to efficiently change the opinion of millions of people on important issues such as friendly AI? Is a site such as the one linked above going to have the intended effect, or is it going to fall upon rationally-deaf ears? What practical advice could we give to Michael and his contributors that would maximize the impact of the site? What other interventions might be a better use of his time?

\n

Edit: Thanks to those who made constructive suggestions for this post. It has been revised - R

" } }, { "_id": "seoWR5Ri7SpN4X3Bh", "title": "Least Signaling Activities?", "pageUrl": "https://www.lesswrong.com/posts/seoWR5Ri7SpN4X3Bh/least-signaling-activities", "postedAt": "2009-05-22T02:46:29.949Z", "baseScore": 31, "voteCount": 37, "commentCount": 103, "url": null, "contents": { "documentId": "seoWR5Ri7SpN4X3Bh", "html": "

I take it as obvious that signaling is an important function in many human behaviors.  That is, the details of many of our behaviors make sense as a package designed to persuade others to think well of us.  While we may not be conscious of this design, it seems important nonetheless.  In fact, in many areas we seem to be designed to not be conscious of this influence on our behavior.

\n

But if signaling is not equally important to all behaviors, we can sensibly ask the question: for which behaviors does signaling least influence our detailed behavior patterns?  That is, for what behaviors need we be the least concerned that our detailed behaviors are designed to achieve signaling functions?  For what actions can we most reasonably believe that we do them for the non-signaling reasons we usually give?

\n

You might suggest sleep, but others are often jealous of how much sleep we get, or impressed by how little sleep we can get by on.  You might suggest watching TV, but people often go out of their way to mention what TV shows they watch.  The best candidate I can think of so far is masturbation, though some folks seem to brag about it as a sign of their inexhaustible libido. 

\n

So I thought to ask the many thoughtful commenters at Less Wrong: what are good candidates for our least signaling activities?

\n

Added: My interest in this question is to look for signs of when we can more trust our conscious reasoning about what to do, when, and how.  The more signaling matters, the less I can trust such reasoning, as it usually does not acknowledge the signaling influences.  If there is a distinctive mental mode we enter when reasoning about how exactly to defecate, nose-pick, sleep, masturbate, and so on, this is plausibly a more honest mental mode.  It would be useful to know what our most honest mental modes look like.

" } }, { "_id": "9xRJAcx9RZjqe6rjP", "title": "Inhibition and the Mind", "pageUrl": "https://www.lesswrong.com/posts/9xRJAcx9RZjqe6rjP/inhibition-and-the-mind", "postedAt": "2009-05-21T17:34:07.084Z", "baseScore": 10, "voteCount": 15, "commentCount": 29, "url": null, "contents": { "documentId": "9xRJAcx9RZjqe6rjP", "html": "
\n
\n
\n

Babies have a curious set of reflexes: lightly brush their palms, or the soles of their feet, and they will immediately grasp whatever caused the contact. In the case of feet, it’s more of an attempt than a successful grasping; human feet, while far more flexible and manipulative than most creatures’, are no longer the virtual hands possessed by our tree-dwelling ancestors and relatives.

\n
\n

These and a few other basic responses are commonly called the “primitive, or infantile, reflexes”, and are unusual for a variety of reasons. For one thing, they’re not permanent. As babies age, the reflexes disappear.

\n

But they’re not gone. Unlike many other reflexes, they don’t originate in the peripheral nerves but in the central nervous system. The reflex patterns don’t cease to exist, and they don’t cease to act. They’re eventually inhibited by more sophisticated parts of the brain associated with the frontal cortex. We know that the reflexes don’t cease to exist because there are conditions that cause them to reappear in adults; most of them involve major brain damage, particularly to the frontal areas, and are used to diagnose the severity of injury in cases of head trauma. People with cerebral palsy frequently possess the responses as well, although they can often learn to control and prevent the reflexes consciously.

\n

These points illustrate a very important basic principle: the mind is made out of ‘layers’ of modules and functions, starting with the most rudimentary, basic, and primitive, and moving to the most complex and subtle. At no point do the lower levels cease to exist or to produce output; we can act in complex ways only because the more basic reactions are held back and prevented from exerting control.

\n

As various factors reduce the efficiency and health of our nervous system, it’s the most complex subsystems that fail first. The more basic, the more hardwired, and the less emulated the system, the less vulnerable it is to widespread damage or malfunctioning.  This has long been observed with intoxicants and conditions that impair central nervous system functioning, and is one of the ways neuroscientists understand how the brain creates such complex behaviors as a sense of humor.  (Curiously, that's not an aspect of the more modern and recent neurological modules, but is associated with very primitive responses.  That may be discussed later.)

\n

But all inhibition can fail. The more powerful the activity of the lower processes, the less likely it will be that the frontal lobes will be able to control them. Faced with more than it can handle, the 'angel brain' can be overwhelmed, letting the more basic modules influence behavior and thinking.

\n

This is the primary reason why IQ isn’t adequate to assess someone’s intellectual capacity, a topic I will address further in another post.

\n
\n

 

\n
\n
\n
" } }, { "_id": "PhPLcopkHxQkK3EGi", "title": "Catchy Fallacy Name Fallacy (and Supporting Disagreement)", "pageUrl": "https://www.lesswrong.com/posts/PhPLcopkHxQkK3EGi/catchy-fallacy-name-fallacy-and-supporting-disagreement", "postedAt": "2009-05-21T06:01:12.340Z", "baseScore": 32, "voteCount": 36, "commentCount": 57, "url": null, "contents": { "documentId": "PhPLcopkHxQkK3EGi", "html": "

Related: The Pascal's Wager Fallacy Fallacy, The Fallacy Fallacy

\n

Inspired by:

\n
\n

We need a catchy name for the fallacy of being over-eager to accuse people of fallacies that you have catchy names for.

\n
\n

 

\n

When you read an argument you don't like, but don't know how to attack on its merits, there is a trick you can turn to. Just say it commits1 some fallacy, preferably one with a clever name. Others will side with you, not wanting to associate themselves with a fallacy. Don't bother to explain how the fallacy applies, just provide a link to an article about it, and let stand the implication that people should be able to figure it out from the link. It's not like anyone would want to expose their ignorance by asking for an actual explanation.

\n

What a horrible state of affairs I have described in the last paragraph. It seems, if we follow that advice, that every fallacy we even know the name of makes us stupider. So, I present a fallacy name that I hope will exactly counterbalance the effects I described. If you are worried that you might defend an argument that has been accused of committing some fallacy, you should be equally worried that you might support an accusation that commits the Catchy Fallacy Name Fallacy. Well, now that you have that problem either way, you might as well try to figure out whether the argument did indeed commit the fallacy, by examining the actual details of the fallacy and whether they actually describe the argument.

\n

But, what is the essence of this Catchy Fallacy Name Fallacy? The problem is not the accusation of committing a fallacy itself, but that the accusation is vague. The essence is \"Don't bother to explain\". The way to avoid this problem is to entangle your counterargument, whether it makes a fallacy accusation or not, with the argument you intend to refute. Your counterargument should distinguish good arguments from bad arguments, in that it specifies criteria that systematically apply to a class of bad arguments but not to good arguments. And those criteria should be matched up with details of the allegedly bad argument.

\n

The wrong way:

\n
\n

It seems that you've committed the Confirmation Bias.

\n
\n

The right way:

\n
\n

The Confirmation Bias is when you find only confirming evidence because you only look for confirming evidence. You looked only for confirming evidence by asking people for stories of their success with Technique X.

\n
\n

Notice how the right way would seem very out of place when applied against an argument it does not fit. This is what I mean when I say the counterargument should distinguish the allegedly bad argument from good arguments.

\n

And, if someone commits the Catchy Fallacy Name Fallacy in trying to refute your arguments, or even someone else's, call them on it. But don't just link here, you wouldn't want to commit the Catchy Fallacy Name Fallacy Fallacy. Ask them how their counterargument distinguishes the allegedly bad argument from arguments that don't have the problem.

\n

 

\n

1 Of course, when I say that an argument commits a fallacy, I really mean that the person who made that argument, in doing so, committed the fallacy.

" } }, { "_id": "M5chgwzu97PScYuFs", "title": "Positive Bias Test (C++ program)", "pageUrl": "https://www.lesswrong.com/posts/M5chgwzu97PScYuFs/positive-bias-test-c-program", "postedAt": "2009-05-19T21:32:43.353Z", "baseScore": 30, "voteCount": 31, "commentCount": 79, "url": null, "contents": { "documentId": "M5chgwzu97PScYuFs", "html": "

I've written a program which tests positive bias using Wason's procedure from \"On the failure to eliminate hypotheses in a conceptual task\" (Quarterly Journal of Experimental Psychology, 12: 129-140, 1960). If the user does not discover the correct rule, the program attempts to guess, based on the user's input, what rule the user did find, and explains the existence of the more general rule. The program then directs the user here.
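For readers who just want the shape of the program without downloading the source, here is a stripped-down sketch of this kind of tester; the rule function, prompts, and structure below are my own stand-ins, not the actual code linked above.

// Minimal Wason 2-4-6 style tester (an illustrative sketch, not the linked source).
#include <iostream>

// The hidden rule: each number is strictly greater than the previous one.
bool isAwesome(int a, int b, int c) { return a < b && b < c; }

int main() {
    std::cout << "2 4 6 is an awesome triplet. Experiment, then enter 0 0 0 when you think you know the rule.\n";
    int a, b, c;
    while (std::cin >> a >> b >> c) {
        if (a == 0 && b == 0 && c == 0) break;  // user is ready to be quizzed
        std::cout << (isAwesome(a, b, c) ? "AWESOME" : "not awesome") << " triplet\n";
    }
    // The real program goes further here: it quizzes the user on fixed probe triplets,
    // scores the answers, and tries to infer which narrower rule the user invented.
    return 0;
}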

\n

I'd like to use a better set of triplets, and perhaps include more wrong rules. The program should be fairly flexible in this way.

\n

I'd also like to set up a web-based front-end to the program, but I do not currently know any CGI.

\n

I'm not completely happy with the program's textual output. It still feels a bit like the program is scolding the user at the end. Not quite sure how to fix this.

\n

Program source

\n

ETA: Here is a macintosh executable version of the program. I do not have any means to make an exe file, but if anyone does, I can host it.

\n

If you're on Linux, I'm just going to assume you know what to do with a .cpp file =P

\n

Here is a sample run of the program (if you're unfamiliar with positive bias, or the wason test, I'd really encourage you to try it yourself before reading):

\n

\n

Hi there! We're going to play a game based on a classic cognitive science experiment first performed by Peter Wason in 1960 (references at the end)

\n

Here's how it works. I'm thinking of a rule which separates sequences of three numbers into 'awesome' triplets, and not-so-awesome triplets. I'll tell you for free that 2 4 6 is an awesome triplet.

\n

What you need to do is to figure out which rule I'm thinking of. To help you do that, I'm going to let you experiment for a bit. Enter any three numbers, and I'll tell you whether they are awesome or not. You can do this as many times as you like, so please take your time.

\n

When you're sure you know what the rule is, just enter 0 0 0, and I'll test you to see if you've correctly worked out what the rule is.

Enter three numbers separated by spaces: 3 6 9

3, 6, 9 is an AWESOME triplet!

Enter three numbers separated by spaces: 10 20 30

10, 20, 30 is an AWESOME triplet!

Enter three numbers separated by spaces: 8 16 24

8, 16, 24 is an AWESOME triplet!

Enter three numbers separated by spaces: 0 0 0

So, you're pretty sure what the rule is now? Cool. I'm going to give you some sets of numbers, and you can tell me whether they seem awesome to you or not.
Would you say that 3, 6, 9 looks like an awesome triplet? (type y/n)
y

Would you say that 6, 4, 2 looks like an awesome triplet? (type y/n)
n

Would you say that 8, 10, 12 looks like an awesome triplet? (type y/n)
n

Would you say that 1, 17, 33 looks like an awesome triplet? (type y/n)
n

Would you say that 18, 9, 0 looks like an awesome triplet? (type y/n)
n

Would you say that 1, 7, 3 looks like an awesome triplet? (type y/n)
n

Would you say that 3, 5, 7 looks like an awesome triplet? (type y/n)
n

Would you say that 2, 9, 15 looks like an awesome triplet? (type y/n)
n

Would you say that 5, 10, 15 looks like an awesome triplet? (type y/n)
y

Would you say that 3, 1, 4 looks like an awesome triplet? (type y/n)
n

You thought that 3, 6, 9 was awesome.
In fact it is awesome.

You thought that 6, 4, 2 was not awesome.
In fact it is not awesome.

You thought that 8, 10, 12 was not awesome.
In fact it is awesome.

You thought that 1, 17, 33 was not awesome.
In fact it is awesome.

You thought that 18, 9, 0 was not awesome.
In fact it is not awesome.

You thought that 1, 7, 3 was not awesome.
In fact it is not awesome.

You thought that 3, 5, 7 was not awesome.
In fact it is awesome.

You thought that 2, 9, 15 was not awesome.
In fact it is awesome.

You thought that 5, 10, 15 was awesome.
In fact it is awesome.

You thought that 3, 1, 4 was not awesome.
In fact it is not awesome.

It looks as though you thought the rule was that awesome triplets contained three successive multiples of the same number, like 3,6,9, or 6,12,18. In fact, awesome triplets are simply triplets in which each number is greater than the previous one.

The rule for awesomeness was a fairly simple one, but you invented a more complicated, more specific rule, which happened to fit the first triplet you saw. In experimental tests, it has been found that 80% of subjects do just this, and then never test any of the triplets that *don't* fit their rule. If they did, they would immediately see the more general rule that was applying. This is a case of what psychologists call 'positive bias'. It is one of the many biases, or fundamental errors, which beset the human mind.

There is a thriving community of rationalists at the website Less Wrong (http://www.lesswrong.com) who are working to find ways to correct these fundamental errors. If you'd like to learn how to perform better with the hardware you have, you may want to pay them a visit.

If you'd like to learn more about positive bias, you may enjoy the article 'Positive Bias: Look Into the Dark': http://www.overcomingbias.com/2007/08/positive-bias-l.html
If you'd like to learn more about the experiment which inspired this test, look for a paper titled 'On the failure to eliminate hypotheses in a conceptual task' (Quarterly Journal of Experimental Psychology, 12: 129-140, 1960)

" } }, { "_id": "pRFGbKRveP67oRS42", "title": "Rationality quotes - May 2009", "pageUrl": "https://www.lesswrong.com/posts/pRFGbKRveP67oRS42/rationality-quotes-may-2009", "postedAt": "2009-05-19T19:30:02.498Z", "baseScore": 9, "voteCount": 7, "commentCount": 101, "url": null, "contents": { "documentId": "pRFGbKRveP67oRS42", "html": "

(Since there didn't seem to be one for this month, and I just ran across a nice quote.)

\r\n

A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.

\r\n" } }, { "_id": "dEvJCWBfRYNdXXTsS", "title": "Supernatural Math", "pageUrl": "https://www.lesswrong.com/posts/dEvJCWBfRYNdXXTsS/supernatural-math", "postedAt": "2009-05-19T11:31:44.424Z", "baseScore": 5, "voteCount": 15, "commentCount": 58, "url": null, "contents": { "documentId": "dEvJCWBfRYNdXXTsS", "html": "

Related to: How to Convince Me That 2 + 2 = 3

\n

This started as a reply to this thread, but it would have been offtopic and I think the subject is important enough for a top-level post, as there's apparently still significant confusion about it.

\n

How do we know that two and two make four? We have two possible sources of knowledge on the subject. Note that both happen to be entirely physical systems that run on the same merely ordinary entropy that makes car engines go.

First, evolution. Animals whose subitizing apparatus output 2+2=3 were selected out.

Second, personal observation; that is, operation of our sense organs. I can put 2 bananas on a table, then put down 2 more bananas, and count out 4 bananas; my schoolteachers told me 2+2 is 4; I can type 2+2 into a calculator and get 4; etc.

Now, notwithstanding the above, does 2+2 really equal 4, independent of any human thoughts about it? This way lies madness. If there is some kind of pure essence of math that never physically impinges upon the stuff inside our heads (or, worse, exists \"outside the physical universe\"), there's no sensible way we can know about it. It's a dragon in the garage.

The fact that our faculty for counting bananas can also be used to make predictions about, say, the behavior of quarks is extremely surprising to our savannah-adapted brains. After all, bananas are ordinary things we can hold in our hands and eat, and quarks are tiny and strange and definitely not ordinary at all. So, of course, the obvious thing that comes to mind to explain this is a supernatural force. How else could such dissimilar things be governed by the same laws?

The disappointing truth is that bananas are quarks, and by amazing good fortune, the properties of everyday macroscopic objects are sufficiently related to those of other physical phenomena that a few lucky humans can just barely manage to crudely adapt their banana-counting brain hardware to work in those other domains. No supernatural math required.

" } }, { "_id": "eT4JAgH6ZfMF4xYqh", "title": "Bad reasons for a rationalist to lose", "pageUrl": "https://www.lesswrong.com/posts/eT4JAgH6ZfMF4xYqh/bad-reasons-for-a-rationalist-to-lose", "postedAt": "2009-05-18T22:57:40.761Z", "baseScore": 34, "voteCount": 43, "commentCount": 83, "url": null, "contents": { "documentId": "eT4JAgH6ZfMF4xYqh", "html": "

Reply to: Practical Advice Backed By Deep Theories

\n

Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might only work for some of us, but are not backed by Deep Theories. In support of tinkering with brain hacks and self experimentation where deep science and large trials are not available.

\n

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

\n
\n

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

\n
\n

This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.

\n

I submit that we don't \"need\" (emphasis in original) this stuff; it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.

\n

So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

\n\n\n

… and, what don't we need?

\n\n


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?

" } }, { "_id": "EdyDGRLNFScEt5uDz", "title": "\"What Is Wrong With Our Thoughts\"", "pageUrl": "https://www.lesswrong.com/posts/EdyDGRLNFScEt5uDz/what-is-wrong-with-our-thoughts", "postedAt": "2009-05-17T07:24:30.136Z", "baseScore": 41, "voteCount": 38, "commentCount": 109, "url": null, "contents": { "documentId": "EdyDGRLNFScEt5uDz", "html": "
\"But let us never forget, either, as all conventional history of philosophy conspires to make us forget, what the 'great thinkers' really are: proper objects, indeed, of pity, but even more, of horror.\"
\n

David Stove's \"What Is Wrong With Our Thoughts\" is a critique of philosophy that I can only call epic.

\n

The astute reader will of course find themselves objecting to Stove's notion that we should be cataloguing every possible way to do philosophy wrong.  It's not like there's some originally pure mode of thought, being tainted by only a small library of poisons.  It's just that there are exponentially more possible crazy thoughts than sane thoughts, cf. entropy.

\n

But Stove's list of 39 different classic crazinesses applied to the number three is absolute pure epic gold.  (Scroll down about halfway through if you want to jump there directly.)

\n

I especially like #8:  \"There is an integer between two and four, but it is not three, and its true name and nature are not to be revealed.\"

" } }, { "_id": "azdqDRbcw3EkrnHNw", "title": "Wanting to Want", "pageUrl": "https://www.lesswrong.com/posts/azdqDRbcw3EkrnHNw/wanting-to-want", "postedAt": "2009-05-16T03:08:10.257Z", "baseScore": 30, "voteCount": 34, "commentCount": 199, "url": null, "contents": { "documentId": "azdqDRbcw3EkrnHNw", "html": "

In response to a request, I am going to do some basic unpacking of second-order desire, or \"metawanting\".  Basically, a second-order desire or metawant is a desire about a first-order desire.

\n

Example 1: Suppose I am very sleepy, but I want to be alert.  My desire to be alert is first-order.  Suppose also that there is a can of Mountain Dew handy.  I know that Mountain Dew contains caffeine and that caffeine will make me alert.  However, I also know that I hate Mountain Dew1.  I do not want the Mountain Dew, because I know it is gross.  But it would be very convenient for me if I liked Mountain Dew: then I could drink it, and I could get the useful effects of the caffeine, and satisfy my desire for alertness.  So I have the following instrumental belief: wanting to drink that can of Mountain Dew would let me be alert.  Generally, barring other considerations, I want things that would get me other things I want - I want a job because I want money, I want money because I can use it to buy chocolate, I want chocolate because I can use it to produce pleasant taste sensations, and I just plain want pleasant taste sensations.  So, because alertness is something I want, and wanting Mountain Dew would let me get it, I want to want the Mountain Dew.

\n

This example demonstrates a case of a second-order desire about a first-order desire that would be instrumentally useful.  But it's also possible to have second-order desires about first-order desires that one simply does or doesn't care to have.

\n

Example 2: Suppose Mimi the Heroin Addict, living up to her unfortunate name, is a heroin addict.  Obviously, as a heroin addict, she spends a lot of her time wanting heroin.  But this desire is upsetting to her.  She wants not to want heroin, and may take actions to stop herself from wanting heroin, such as going through rehab.

\n

One thing that is often said is that what first-order desires you \"endorse\" on the second level are the ones that are your most true self.  This seems like an appealing notion in Mimi's case; I would not want to say that at her heart she just wants heroin and that's an intrinsic, important part of her.  But it's not always the case that the second-order desire is the one we most want to identify with the person who has it:

\n

Example 3: Suppose Larry the Closet Homosexual, goodness only knows why his mother would name him that, is a closet homosexual.  He has been brought up to believe that homosexuality is gross and wrong.  As such, his first-order desire to exchange sexual favors with his friend Ted the Next-Door Neighbor is repulsive to him when he notices it, and he wants desperately not to have this desire.

\n

In this case, I think we're tempted to say that poor Larry is a gay guy who's had an alien second-order desire attached to him via his upbringing, not a natural homophobe whose first-order desires are insidiously eroding his real personality.

\n

A less depressing example to round out the set:

\n

Example 4: Suppose Olivia the Overcoming Bias Reader, whose very prescient mother predicted she would visit this site, is convinced by Eliezer's arguments about one-boxing in Newcomb's Problem.  However, she's pretty sure that if Omega really turned up, boxes in hand, she would want to take both of them.  She thinks this reflects an irrationality of hers.  She wants to want to one-box.

\n

 

\n

1Carbonated beverages make my mouth hurt.  I have developed a more generalized aversion to them after repeatedly trying to develop a taste for them and experiencing pain every time.

" } }, { "_id": "p9gtfDNup7sNjsMB8", "title": "Share Your Anti-Akrasia Tricks", "pageUrl": "https://www.lesswrong.com/posts/p9gtfDNup7sNjsMB8/share-your-anti-akrasia-tricks", "postedAt": "2009-05-15T19:06:31.527Z", "baseScore": 25, "voteCount": 24, "commentCount": 120, "url": null, "contents": { "documentId": "p9gtfDNup7sNjsMB8", "html": "

People have been encouraging me to share my anti-akrasia tricks, but it feels inappropriate to dedicate a top-level post solely to unproven techniques that work for some person and may not work for others, so:

\n

Go ahead and share your anti-akrasia tricks!

\n

Let's make it an open thread where we just share what works and what doesn't, without worrying (yet) about having to explain tricks with deep theories, or designing proper experiments to verify them. However, if you happen to have a theory or a proposed experiment in mind, please share.

\n

Bragging is fine, but please share the failures of your techniques as well – they are just as valuable, if not more.

\n

Note to readers – before you read the comments and try the tricks, keep in mind that the techniques below are not yet proven, supported, or explained by proper experiments, and are not yet backed by theory. They may work for their authors, but are not guaranteed to work for you, so try them at your own risk. It would be even better to read the following posts before rushing to try the tricks:

\n" } }, { "_id": "XeSKBmNYF9Nh4C26J", "title": "Be Logically Informative", "pageUrl": "https://www.lesswrong.com/posts/XeSKBmNYF9Nh4C26J/be-logically-informative", "postedAt": "2009-05-15T13:23:31.277Z", "baseScore": 4, "voteCount": 3, "commentCount": 0, "url": null, "contents": { "documentId": "XeSKBmNYF9Nh4C26J", "html": "

What's the googolplexth decimal of pi? I don't know, but I know that it's rational for me to give each possible digit P=1/10. So there's a sense in which I can rationally assign probabilities to mathematical facts or computation outcomes on which I'm uncertain. (Apparently this can be modeled with logically impossible possible worlds.)

\n

When we debate the truth of some proposition, we may not be engaging in mathematics in the traditional sense, but we're still trying to learn more about a structure of necessary implications. If we can apply probabilities to logic, we can quantify logical information. More logical information is better. And this seems very relevant to a misunderpracticed sub-art of group rationality -- the art of responsible argumentation.
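To make "quantify logical information" a little more concrete (this is just the standard entropy calculation, not something the post commits to): under the uniform 1/10 assignment from the opening paragraph, learning the googolplexth digit of pi resolves

\[ H = -\sum_{d=0}^{9} \frac{1}{10} \log_2 \frac{1}{10} = \log_2 10 \approx 3.32 \text{ bits} \]

of logical uncertainty.  On this reading, being logically informative means preferring, at each point in an argument, the statement that removes the most such uncertainty for the group.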

\n

There are a lot of common-sense guidelines for good argumentative practice. In case of doubt, we can take the logical information perspective and use probability theory to ground these guidelines. So let us now unearth a few example guidelines and other obvious insights, and not let the fact that we already knew them blunt the joy of discovery.

\n\n

My main recommendation: undertake a conscious effort to keep feeling your original curiosity, and let your statements flow from there, not from a habit to react passively to what bothers you most out of what has been said. Don't just speak under the constraint of having to reach a minimum usefulness threshold; try to build a sense of what, at each point in an argument, would be the most useful thing for the group to know next.

\n

Consider a hilariously unrealistic alternate universe where everything that people argue about on the internet matters. I daresay that even there people could train themselves to mine the same amount of truth with less than half of the effort. In spite of the recent escape of the mindkill fairy, can we do especially well on LessWrong? I hope so!

" } }, { "_id": "KK3Zp9jMYqtKeJgdf", "title": "Outward Change Drives Inward Change", "pageUrl": "https://www.lesswrong.com/posts/KK3Zp9jMYqtKeJgdf/outward-change-drives-inward-change", "postedAt": "2009-05-15T12:45:16.204Z", "baseScore": 28, "voteCount": 30, "commentCount": 41, "url": null, "contents": { "documentId": "KK3Zp9jMYqtKeJgdf", "html": "

The subsumption architecture for robotics invented by Rodney Brooks is based on the idea of connecting behavior to perception more directly, with fewer layers of processing and ideally no central processing at all. Its success, e.g. the Roomba, stands as proof that something akin to control theory can be used to generate complex agent-like behavior in the real world. In this post I'll try to give some convincing examples from literature and discuss a possible application to anti-akrasia.

\n

We begin with Braitenberg vehicles. Imagine a dark flat surface with lamps here and there. Further imagine a four-wheeled kart with two light sensors at the front (left and right) and two independent motors connected to the rear wheels. Now connect the left light sensor directly to the right motor and vice versa. The resulting vehicle will seek out lamps and ram them at high speed. If you connect each sensor to the motor on its own side instead, the vehicle will run away from lamps, find a dark spot and rest there. If you use inverted (inhibitory) connectors from light sensors to motors, you get a car that finds lamps, approaches them and stops as if praying to the light.
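The wiring described above is simple enough to write down directly; here is a rough sketch (the struct, field names, and light values are mine, chosen only to mirror the crossed/uncrossed and excitatory/inhibitory variants in the paragraph):

// Sketch of the Braitenberg wiring described above; names and numbers are illustrative.
#include <cstdio>

struct Vehicle {
    bool crossed;     // left sensor drives right motor and vice versa
    bool inhibitory;  // more light slows a motor instead of speeding it up
};

// Map two light readings in [0, 1] to two motor speeds in [0, 1].
void drive(const Vehicle& v, double leftLight, double rightLight,
           double& leftMotor, double& rightMotor) {
    double toLeftMotor  = v.crossed ? rightLight : leftLight;
    double toRightMotor = v.crossed ? leftLight  : rightLight;
    leftMotor  = v.inhibitory ? 1.0 - toLeftMotor  : toLeftMotor;
    rightMotor = v.inhibitory ? 1.0 - toRightMotor : toRightMotor;
}

int main() {
    Vehicle rammer{true, false};       // crossed + excitatory: seeks out lamps and rams them
    double l = 0.0, r = 0.0;
    drive(rammer, 0.2, 0.8, l, r);     // lamp is off to the right
    std::printf("left motor %.1f, right motor %.1f\n", l, r);  // left wheel spins faster, turning the vehicle toward the light
    return 0;
}

Setting crossed to false makes the near-side wheel speed up instead, so the vehicle turns away and ends up resting in the dark; an inhibitory variant gives the approach-and-stop, praying-to-the-light behavior.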

\n

Fast forward to a real world robot [PDF] built by Brooks and his team. The robot's goal is to navigate office space and gather soda cans. A wheeled base and a jointed hand with two fingers for grabbing. Let's focus on the grabbing task. You'd think the robot's computer should navigate the hand to what's recognized as a soda can and send out a grab instruction to the fingers? Wrong. Hand navigation is implemented as totally separate from grabbing. In fact, grabbing is a dumb reflex triggered whenever something crosses an infrared beam between the fingers. The design constraint of separated control paths for different behaviors has given us an unexpected bonus: a human can hand a soda can to the robot, which will grab it just fine. If you've ever interacted with toddlers, you know they work much the same way.

\n

A recurrent theme in those designs is coordinating an agent's actions through the state of the world rather than an internal representation - in the words of Brooks, \"using the world as its own model\". This approach doesn't solve all problems - sometimes you do need to shut up and compute - but it goes surprisingly far, and biological evolution seems to have used it quite a lot: for example a moth spirals into the flame because it's trying to maintain a constant angle to the light direction, which works well for navigation when the light source is the moon.

\n

Surprising insights arise when you start applying those ideas to yourself. I often take the metro from home to work and back. As a result I have two distinct visual recollections of each station along the way, corresponding to two directions of travel. (People who commute by car could relate to the same experience with visual images of the road.) Those visual recollections have formed associations to behavior that bypass the rational brain: if I'm feeling absent-minded, just facing the wrong direction can take me across the city in no time.

\n

Now the takeaway related to akrasia that I've been testing for the last few days with encouraging results. Viewing your brain as a complete computer that you ought to modify from inside is an unnecessarily hard approach. Your brain plus your surroundings is the computer. A one-time act of changing your surroundings, physically going somewhere or rearranging stuff, does influence your behavior a lot - even if it shouldn't. Turn your head, change what you see, and you'll change yourself.

" } }, { "_id": "y7TmaJReDrEHQmBce", "title": "Essay-Question Poll: Voting", "pageUrl": "https://www.lesswrong.com/posts/y7TmaJReDrEHQmBce/essay-question-poll-voting", "postedAt": "2009-05-15T05:04:55.374Z", "baseScore": 7, "voteCount": 13, "commentCount": 18, "url": null, "contents": { "documentId": "y7TmaJReDrEHQmBce", "html": "

There has been a considerable amount of discussion scattered around Less Wrong about voting, what software features having to do with voting should be added or subtracted, what purpose voting should serve, etc.  It seems as though it would be useful to have conveniently consolidated information on how people are actually voting, so we know what habits that we want to encourage or discourage are actually in use and how prevalently.

\n

1. About what percentage of comments do you vote on at all?  What percentage of top-level posts?

\n

2. Do you use the boo vote or the anti-kibitzer extensions?  Why or why not?

\n

3. What karma threshold do you use to filter what you see, if any?

\n

4. When you vote on a post, or read it and decide not to vote on it, what features of the post are you occurrently conscious of that influence your decision either way?  (Submitter, current post score, length, style, topic, spelling, whatever.)  What about comments?

\n

\n

5. When you vote on a post, or read it and decide not to vote on it, are there any features of the post that you suspect you may react to subconsciously?  What about comments?

\n

6. When you vote on a post, or read it and decide not to vote on it, how do the features to which you react influence you?  What about comments?

\n

7. Do you make comments saying how you voted and why, on posts or on other comments?  Why or why not?

\n

8. What do you think a vote should be for?  (Moving comments around in attentionspace, signifying agreement or disagreement, nudging the score in the direction of the score you think it deserves, influencing user karma to reflect general trends of post/comment quality, pointing out comments that are entertaining or useful or have cogent reasoning, compensating for other people upvoting or downvoting something you don't think warrants it, rewarding people for completing surveys, something I didn't think of, some combination of purposes).  Do you usually vote in a way consistent with your opinion about its purpose?

\n

9. What software features would you like to see that are relevant to voting?

\n

10. Does your replying behavior interact in any interesting way with your voting behavior?  (For instance, do you usually reply to comments you find confusing with questions, and then downvote them only after getting an inadequate explanation?  Do you vote only on discussions you have, or haven't, participated in?  Do you upvote for agreement and reply for disagreement?)

\n

11. How do you tend to react when one of your posts or comments gets a good karma score?  What if no one votes on it, or it gets a negative score?

\n

12. Is there anything else about your voting behavior or opinions on voting that might be interesting?

" } }, { "_id": "qJM5kuxN8j3PwNNx6", "title": "Cheerios: An \"Untested New Drug\"", "pageUrl": "https://www.lesswrong.com/posts/qJM5kuxN8j3PwNNx6/cheerios-an-untested-new-drug", "postedAt": "2009-05-15T02:26:25.688Z", "baseScore": 9, "voteCount": 18, "commentCount": 14, "url": null, "contents": { "documentId": "qJM5kuxN8j3PwNNx6", "html": "

I found this letter from the US Food and Drug Administration to General Mills interesting. It appears on the surface that the agency is trying to protect the American public from ungrounded persuasion, yet I can't find anything in the letter claiming that GM has made an unsupported statement.

\n

Does anyone understand this better than I do?

" } }, { "_id": "Soae3L98bBooTpzez", "title": "Religion, Mystery, and Warm, Soft Fuzzies", "pageUrl": "https://www.lesswrong.com/posts/Soae3L98bBooTpzez/religion-mystery-and-warm-soft-fuzzies", "postedAt": "2009-05-14T23:41:06.878Z", "baseScore": 21, "voteCount": 23, "commentCount": 125, "url": null, "contents": { "documentId": "Soae3L98bBooTpzez", "html": "

Reaction to: Yudkowsky and Frank on Religious Experience, Yudkowsky and Frank On Religious Experience Pt 2, A Parable On Obsolete Ideologies

\n

Frank's point got rather lost in all this. It seems to be quite simple: there's a warm fuzziness to life that science just doesn't seem to get, and some religious artwork touches on and stimulates this warm fuzziness, and hence is of value.1 Moreover, understanding this point seems rather important to being able to spread an ideology.

\n

The main problem is viewing this warm fuzziness as a \"mystery.\" This warm fuzziness, as an experience, is a reality. It's part of that set of things that doesn't go away no matter what you say or think about them. Women (or men) will still be alluring, food will still be delicious, and Michelangelo's David will still be beautiful, no matter how well you describe these phenomena. The view that shattering mysteries reduces their value is very much a result of religion trying to protect itself. EY is probably correct that science will one day destroy this mystery as it has so many others, but because it is an \"experience we can't clearly describe\" rather than an actual \"mystery,\" the experience will remain. The argument is with the description, not the experience; the experience is real, and experiences of its nature are totally desirable.

\n

The second, sub-point: Frank thinks that certain religious stories and artwork may be of artistic value. The selection of the story of Job is unfortunate, but both speakers value it for the same reason: its truth. One sees it as true (and inspiring) and likes it, the other sees it as false (and insidious) and hates it. I think both agree that if you put it on the shelf next to Tolkien, and rational atheists still buy it and enjoy it, hey, good for Job. And if not, well, throw it out with the rest of the trash.

\n

Frank also has a point about rationality not being the only way to view the world. I think he's once again right, he's just really, tragically bad at expressing his point without borrowing heavily from religion. His point seems to be that rationality isn't the only way to *experience* the world, which is absolutely, 100% right. You don't experience the world through rationality. You experience it through your senses and the qualia of consciousness. Rationality is how you figure out what's going on, or what's going to be going on, or what causes one thing to happen and not another. Appreciating art, or food, or sex, or life is not generally done by applying rationality. Rationality is extremely useful for figuring out how to get these things we like, or even figure out what things we should like, but it doesn't factor into the qualitative experience of those things in most cases. For many people it probably doesn't factor into the enjoyment of anything. If you don't embrace and explain this distinction, you come out looking like Spock.

\n

This seems to be a key point atheists fail to communicate, because it is logically irrelevant to the truth of their propositions. A lot of people avoid decisions that they believe will destroy everything that makes them happy, and I'm not sure we can blame them. It's important to explain that you can still have all kinds of warm fuzziness, and, even better, you can be really confident it's well-founded and avoid abysmal epistemology, too! Instead, the atheist tries to defeat some weird, religiously-motivated expression of warm fuzziness, and that becomes the debate, and people like their fuzzies.

\n

We experience warm fuzziness directly,2 through however our brains work. No amount of science is likely to change that, no matter how well it understands the phenomenon. This is a good thing for science, and it's a good thing for warmth and fuzziness.

\n

 

\n
\n

1- I have admittedly not read his book. It's quite possible he's advocating we actually go through religion and make it fit our current sensibilities, then take it as uber-fiction. If that's the case, I have serious problems with it. If that's not the case, and he just thinks that some of it contains truth/beauty/is salvageable as literature, then I have serious problems with the argumentum-ad-hitlerum employed against him, as it seems to burn a straw man.

\n

 

\n

2 - I'm not saying there's warm fuzziness in the territory and we put it in our map. There's something in the territory that, when we map it out, the mapping causes us to directly experience a feeling of warm fuzziness.

" } }, { "_id": "4qqzCF7MzyPh76nNX", "title": "\"Open-Mindedness\" - the video", "pageUrl": "https://www.lesswrong.com/posts/4qqzCF7MzyPh76nNX/open-mindedness-the-video", "postedAt": "2009-05-14T06:17:02.672Z", "baseScore": 21, "voteCount": 19, "commentCount": 38, "url": null, "contents": { "documentId": "4qqzCF7MzyPh76nNX", "html": "

An interesting little Flash-like video on \"openmindedness\" by someone named QualiaSoup (hopefully ironically).

\n

Does anyone know how much effort is required to produce this sort of video, perhaps from a script?  We need at least another thousand of these.

" } }, { "_id": "Ltey8BS83qSkd9M3u", "title": "A Parable On Obsolete Ideologies", "pageUrl": "https://www.lesswrong.com/posts/Ltey8BS83qSkd9M3u/a-parable-on-obsolete-ideologies", "postedAt": "2009-05-13T22:51:50.133Z", "baseScore": 186, "voteCount": 187, "commentCount": 288, "url": null, "contents": { "documentId": "Ltey8BS83qSkd9M3u", "html": "

Followup to:  Yudkowsky and Frank on Religious Experience, Yudkowsky and Frank On Religious Experience Pt 2
With sincere apologies to: Mike Godwin

\n

You are General Eisenhower. It is 1945. The Allies have just triumphantly liberated Berlin. As the remaining leaders of the old regime are being tried and executed, it begins to become apparent just how vile and despicable the Third Reich truly was.

In the midst of the chaos, a group of German leaders come to you with a proposal. Nazism, they admit, was completely wrong. Its racist ideology was false and its consequences were horrific. However, in the bleak poverty of post-war Germany, people need to keep united somehow. They need something to believe in. And a whole generation of them have been raised on Nazi ideology and symbolism. Why not take advantage of the national unity Nazism provides while discarding all the racist baggage? \"Make it so,\" you say.

The swastikas hanging from every boulevard stay up, but now they represent \"traditional values\" and even \"peace\". Big pictures of Hitler still hang in every government office, not because Hitler was right about racial purity, but because he represents the desire for spiritual purity inside all of us, and the desire to create a better society by any means necessary. It's still acceptable to shout \"KILL ALL THE JEWS AND GYPSIES AND HOMOSEXUALS!\" in public places, but only because everyone realizes that Hitler meant \"Jews\" as a metaphor for \"greed\", \"gypsies\" as a metaphor for \"superstition\", and \"homosexuals\" as a metaphor for \"lust\", and so what he really meant is that you need to kill the greed, lust, and superstition in your own heart. Good Nazis love real, physical Jews! Some Jews even choose to join the Party, inspired by their principled stand against spiritual evil.

The Hitler Youth remains, but it's become more or less a German version of the Boy Scouts. The Party infrastructure remains, but only as a group of spiritual advisors helping people fight the untermenschen in their own soul. They suggest that, during times of trouble, people look to Mein Kampf for inspiration. If they open to a sentence like \"The Aryan race shall conquer all in its path\", then they can interpret \"the Aryan race\" to mean \"righteous people\", and the sentence is really just saying that good people can do anything if they set their minds to it. Isn't that lovely?

Soon, \"Nazi\" comes to just be a synonym for \"good person\". If anyone's not a member of the Nazi Party, everyone immediately becomes suspicious. Why is she against exterminating greed, lust, and superstition from her soul? Does she really not believe good people can do anything if they set their minds to it? Why does he oppose caring for your aging parents? We definitely can't trust him with high political office.

\n

\n

It is four years later. Soon, the occupation will end, and Germany will become an independent country once again. The Soviets have already taken East Germany and turned it Communist. As the de facto ruler of West Germany, its fate is in your hands. You ask your two most trusted subordinates for advice.

First, Colonel F gives his suggestion. It is vital that you order the preservation of the Nazi ideology so that Germany remains strong. After all, the Germans will need to stay united as a people in order to survive the inevitable struggle with the Soviets. If Nazism collapsed, then people would lose everything that connects them together, and become dispirited. The beautiful poetry of Mein Kampf speaks to something deep in the soul of every German, and if the Allies try to eradicate that just because they disagree with one outdated interpretation of the text, they will have removed meaning from the lives of millions of people all in the name of some sort of misguided desire to take everything absolutely literally all the time.

Your other trusted subordinate, Colonel Y, disagrees. He thinks that Mein Kampf may have some rousing passages, but that there's no special reason it has a unique ability to impart meaning to people other than that everyone believes it does. Not only that, but the actual contents of Mein Kampf are repulsive. Sure, if you make an extraordinary effort to gloss over or reinterpret the repulsive passages, you can do it, but this is more trouble than it is worth and might very well leave some lingering mental poison behind. Germany should completely lose all the baggage of Nazism and replace it with a completely democratic society that has no causal linkage whatsoever to its bloody past.

Colonel F objects. He hopes you won't immediately side with Colonel Y just because the question includes the word \"Nazi\". Condemning Nazism is an obvious applause light, but a political decision of this magnitude requires more careful thought. After all, Nazism has been purged of its most objectionable elements, and the Germans really do seem to like it and draw a richer life from it. Colonel Y needs a better reason than his personal distaste for an ideology because of its past history in order to take it away from them.

Colonel Y thinks for a moment, then begins speaking. You have noticed, he says, that the new German society also has a lot of normal, \"full-strength\" Nazis around. The \"reformed\" Nazis occasionally denounce these people, and accuse them of misinterpreting Hitler's words, but they don't seem nearly as offended by the \"full-strength\" Nazis as they are by the idea of people who reject Nazism completely.

Might the existence of \"reformed\" Nazis, he asks, enable \"full-strength\" Nazis to become more powerful and influential? He thinks it might. It becomes impossible to condemn \"full-strength\" Nazis for worshipping a horrible figure like Hitler, or adoring a horrible book like Mein Kampf, when they're doing the same thing themselves. At worst, they can just say the others are misinterpreting it a little. And it will be very difficult to make this argument, because all evidence suggests that in fact it's the \"full-strength\" Nazis who are following Hitler's original intent and the true meaning of Mein Kampf, and the \"reformed\" Nazis who have reinterpreted it for political reasons. Assuming the idea of not being a Nazi at all remains socially beyond the pale, intellectually honest people will feel a strong pull towards \"full-strength\" Nazism.

Even if the \"reformed\" Nazis accept all moderate liberal practices considered reasonable today, he says, their ideology might still cause trouble later. Today, in 1945, mixed race marriage is still considered taboo by most liberal societies, including the United States. The re-interpreters of Mein Kampf have decided that, although \"kill all the Jews\" is clearly metaphorical, \"never mix races\" is meant literally. If other nations began legalizing mixed race marriage in the years to come, Party members will preach to the faithful that it is an abomination, and can even point to the verse in Mein Kampf that said so. It's utterly plausible that a \"reformed\" Nazi Germany may go on forbidding mixed race marriage much longer than surrounding countries. Even if Party leaders eventually bow to pressure and change their interpretation, the Party will always exist as a force opposing racial equality and social justice until the last possible moment.

And, he theorizes, there could be even deeper subconscious influences. He explains that people often process ideas and morals in ways that are only tangentially linked to specific facts and decisions. Instead, we tend to conflate things into huge, fuzzy concepts and assign \"good\" and \"bad\" tags to them. Saying \"Jews are bad, but this doesn't apply to actual specific Jews\" is the sort of thing the brain isn't very good at. At best, it will end up with the sort of forced politeness a person who's trying very hard not to be racist shows around black people. As soon as we assign a good feeling to the broad idea of \"Nazism\", that reflects at least a little on everything Nazism stands for, everything Nazism ever has stood for, and every person who identifies as a Nazi.

He has read other essays that discuss the ability of connotations to warp thinking. Imagine you're taught things like \"untermenschen like Jews and Gypsies are people too, and should be treated equally.\" The content of this opinion is perfectly fine. Unfortunately, it creates a category called \"untermenschen\" with a bad connotation and sticks Jews and Gypsies into it. Once you have accepted that Jews and Gypsies comprise a different category, even if that category is \"people who are exactly like the rest of us except for being in this category here\", three-quarters of the damage is already done. Here the Colonel sighs, and reminds you of the discrimination faced by wiggins in the modern military.

And (he adds) won't someone please think of the children? They're not very good at metaphor, they trust almost anything they hear, and they form a scaffolding of belief that later life can only edit, not demolish and rebuild. If someone was scared of ghosts as a child, they may not believe in ghosts now, but they're going to have some visceral reaction to them. Imagine telling a child \"We should kill everyone in the lesser races\" five times a day, on the assumption that once they're a teenager they'll understand what \"figurative\" means and it'll all be okay.

He closes by telling you that he's not at all convinced that whatever metaphors the Nazis reinterpret Mein Kampf to mean won't be damaging in themselves. After all, these metaphors will have been invented by Nazis, who are not exactly known for choosing the best moral lessons. What if \"kill all lesser races\" gets reinterpreted to \"have no tolerance for anything that is less than perfect\"? This sounds sort of like a good moral lesson, until people start preaching that it means we should lock up gay people, because homosexuality is an \"imperfection\". That, he says, is the sort of thing that happens when you get your morality from cliched maxims produced by drawing vapid conclusions from despicably evil works of literature.

So, the Colonel concludes, if you really want the German people to be peaceful and moral, you really have no choice but to nip this growing \"reformed Nazi\" movement in the bud. Colonel F has made some good points about respecting the Germans' culture, but doing so would make it difficult to eradicate their existing racist ideas, bias their younger generation towards habits of thought that encourage future racism, create a strong regressive tendency in their society, and yoke them to poorly fashioned moral arguments.

And, he finishes, he doesn't really think Nazism is that necessary for Germany to survive. Even in some crazy alternate universe where the Allies had immediately cracked down on Nazism as soon as they captured Berlin, yea, even in the absurd case where Germany immediately switched to a completely democratic society that condemned everything remotely associated with Nazism as evil and banned swastikas and pictures of Hitler from even being displayed - even in that universe, Germans would keep a strong cultural identity and find new symbols of their patriotism.

Ridiculous, Colonel F objects! In such a universe, the Germans would be left adrift without the anchor of tradition, and immediately be taken over by the Soviets.

Colonel Y just smiles enigmatically. You are reminded of the time he first appeared at your command tent, during the middle of an unnatural thunderstorm, with a copy of Hugh Everett's The Theory of the Universal Wave Function tucked under one arm. You shudder, shake your head, and drag yourself back to the present.

So, General, what is your decision?

" } }, { "_id": "ZWC3n9c6v4s35rrZ3", "title": "Survey Results", "pageUrl": "https://www.lesswrong.com/posts/ZWC3n9c6v4s35rrZ3/survey-results", "postedAt": "2009-05-12T22:09:39.463Z", "baseScore": 62, "voteCount": 56, "commentCount": 212, "url": null, "contents": { "documentId": "ZWC3n9c6v4s35rrZ3", "html": "

Followup to: Excuse Me, Would You Like to Take a Survey?, Return of the Survey

\n

Thank you to everyone who took the Less Wrong survey. I've calculated some results in SPSS, and I've uploaded the data for anyone who wants it. I removed twelve people who wanted to remain private, removed a few people's karma upon request, and re-sorted the results so you can't figure out that the first person on the spreadsheet was the first person to post \"I took it\" on the comments thread and so on. Warning: you will probably not get exactly the same results as me, because a lot of people gave poor, barely comprehensible write-in answers, which I tried to round off to the nearest bin.

Download the spreadsheet (right now it's in .xls format)

I am not a statistician, although I occasionally have to use statistics for various things, and I will gladly accept corrections for anything I've done wrong. Any Bayesian purists may wish to avert their eyes, as the whole analysis is frequentist. What can I say? I get SPSS software and training free and I don't like rejecting free stuff. The write-up below is missing answers to a few questions that I couldn't figure out how to analyze properly; anyone who cares about them enough can look at the raw data and try it themselves. Results under the cut.

\n

\n

Out of 166 respondents:

\n

160 (96.4%) were male, 5 (3%) were female, and one chose not to reveal their gender.

The mean age was 27.16, the median was 25, and the SD was 7.68. The youngest person was 16, and the oldest was 60. Quartiles were <22, 22-25, 25-30, and >30.

Of the 158 of us who disclosed our race, 148 were white (93.6%), 6 were Asian, 1 was Black, 2 were Hispanic, and one cast a write-in vote for Middle Eastern. Judging by the number who put \"Hinduism\" as their family religion, most of those Asians seem to be Indians.

Of the 165 of us who gave readable relationship information, 55 (33.3%) are single and looking, 40 (24.2%) are single but not looking, 40 (24.2%) are in a relationship, 29 (17.6%) are married, and 1 is divorced.

Only 138 gave readable political information (those of you who refused to identify with any party and instead sent me manifestos, thank you for enlightening me, but I was unfortunately unable to do statistics on them). We have 62 (45%) libertarians, 53 (38.4%) liberals, 17 (12.3%) socialists, 6 (4.3%) conservatives, and not one person willing to own up to being a commie.

Of the 164 people who gave readable religious information, 134 (81.7%) were atheists and not spiritual; 5 other atheists described themselves as \"spiritual\". Counting deists and pantheists, we had 11 believers in a supreme being (6.7%), of whom 2 were deist/pantheist, 2 were lukewarm theists, and 6 were committed theists. 14 of us (8.5%) were agnostic.

53 of us were raised in families of \"about average religiosity\" (31.9%). 24 (14.5%) were from extremely religious families, 45 (27.1%) from nonreligious families, and 9 (5.4%) from explicitly atheist families. 30 (18.1%) were from families less religious than average. The remainder wrote in some hard-to-categorize responses, like an atheist father and religious mother, or vice versa.

Of the 106 of us who listed our family's religious background, 92 (87%) were Christian. Of the Christians, 29 (31.5% of Christians) described their backgrounds as Catholic, 30 (32.6% of Christians) described it as Protestant, and the rest gave various hard-to-classify denominations or simply described themselves as \"Christian\". There were also 9 Jews, 3 Hindus, 1 Muslim, and one New Ager.

I didn't run the \"how much of Overcoming Bias have you read\" question so well, and people ended up responding things like \"Oh, most of it\", which are again hard to average. After interpreting things extremely liberally and unscientifically (\"most\" was estimated as 75%, \"a bit\" was estimated at 25%, et cetera) I got that the average LWer has read about half of OB, with a slight tendency to read more of Eliezer's posts than Robin's.
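
For the curious, the rounding-off amounts to something like the following minimal Python sketch; the phrase-to-fraction mapping below is an illustrative assumption, not the exact one used:

    # Round fuzzy write-in answers like 'most of it' to numeric bins before averaging.
    FUZZY_TO_FRACTION = {
        'all': 1.00,
        'most': 0.75,
        'about half': 0.50,
        'some': 0.25,
        'a bit': 0.25,
        'none': 0.00,
    }

    def estimate_fraction(answer, default=0.5):
        # Map a free-text answer to a rough fraction of OB read.
        answer = answer.strip().lower()
        for phrase, fraction in FUZZY_TO_FRACTION.items():
            if phrase in answer:
                return fraction
        return default  # unclassifiable answers fall back to the midpoint

    answers = ['Oh, most of it', 'a bit', 'about half, I think']
    print(sum(estimate_fraction(a) for a in answers) / len(answers))  # 0.5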

Average time in the OB/LW community was 13.6 ± 9.2 months. Average time spent on the site per day was 30.7 ± 30.4 minutes.

IQs (warning: self-reported numbers for notoriously hard-to-measure statistic) ranged from 120 to 180. The mean was 145.88, median was 141.50, and SD was 14.02. Quartiles were <133, 133-141.5, 141.5-155, and >155.

77 people were willing to go out on a limb and guess whether their IQ would be above the median or not. The mean confidence level was 54.4, and the median confidence level was 55 - which shows a remarkable lack of self-promoting bias. The quartiles were <40, 40-55, 55-70, >70. There was a .453 correlation between this number and actual IQ. This number was significant at the <.001 level.
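
For anyone who wants to check this kind of number without SPSS, here is a rough equivalent in Python using scipy; the lists below are invented purely to show the method, not taken from the survey data:

    from scipy.stats import pearsonr

    confidence = [40, 55, 70, 60, 35, 80, 50, 65]          # confidence IQ is above the median (%)
    reported_iq = [128, 140, 155, 146, 125, 160, 138, 150]  # self-reported IQ

    # Pearson correlation coefficient and its two-sided p-value
    r, p = pearsonr(confidence, reported_iq)
    print(round(r, 3), round(p, 4))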

Probability of Many Worlds being more or less correct (given as mean, median, SD; all probabilities in percentage format): 55.65, 65, 32.9.

Probability of aliens in the observable Universe: 70.3, 90, 35.7.

Probability of aliens in our galaxy: 40.9, 35, 38.5. Notice the huge standard deviations here; the alien questions were remarkable both for the high number of people who put answers above 99.9, and the high number of people who put answers below 0.1. My guess: people who read about The Great Filter versus those who didn't.

Probability of some revealed religion being true: 3.8, 0, 12.6.

Probability of some Creator God: 4.2, 0, 14.6.

Probability of something supernatural existing: 4.1, 0, 12.8.

Probability of an average person cryonically frozen today being successfully revived: 22.3, 10, 26.2.

Probability of anti-agathic drugs allowing the current generation to live beyond 1000: 29.2, 20, 30.8.

Probability that we live in a simulation: 16.9, 5, 23.7.

Probability of anthropogenic global warming: 69.4, 80, 27.8.

Probability that we make it to 2100 without a catastrophe killing >90% of us: 73.1, 80, 24.6.

When asked to determine a year in which the Singularity might take place, the mean guess was 9,899 AD, but this is only because one person insisted on putting 100,000 AD. The median might be a better measure in this case; it was mid-2067.
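
To see why, here is the effect in miniature, with invented answers standing in for the real ones:

    from statistics import mean, median

    singularity_years = [2040, 2050, 2060, 2067, 2075, 2080, 2100, 100000]
    print(mean(singularity_years))    # dragged up to ~14,309 by the single 100,000 AD answer
    print(median(singularity_years))  # 2071: stays near where most answers cluster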

Thomas Edison patented the lightbulb in 1880. I've never before been a firm believer in the wisdom of crowds, but it really came through in this case. Even though this was clearly not an easy question and many people got really far-off answers, the mean was 1879.3 and the median was 1880. The standard deviation was 36.1. Person who put \"2172\", you probably thought you were screwing up the results, but in fact you managed to counterbalance the other person who put \"1700\", allowing the mean to revert back to within one year of the correct value :P

The average person was 26.77% sure they got within 5 years of the correct answer on the lightbulb question. 30% of people did get within 5 years. I'm not sure how much to trust the result, because several people put the exact correct year down and gave it 100% confidence. Either they were really paying attention in history class, or they checked Wikipedia. There was a high correlation between high levels of confidence on the question and actually getting the question right, significant at the <.001 level.

I ran some correlations between different things, but they're nothing very interesting. I'm listing the ones that are significant at the <.05 level, but keep in mind that since I just tried correlating everything with everything else, there are a couple hundred correlations and it's absolutely plausible that many things would achieve that significance level by pure chance.
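
For a rough sense of scale: with a couple hundred tests at the .05 threshold, roughly ten significant-looking correlations are expected from noise alone. A small illustrative simulation (random data, no real effects):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_tests, n_people = 200, 166
    # Count how many correlations between pure-noise variables come out 'significant'.
    false_positives = sum(
        pearsonr(rng.normal(size=n_people), rng.normal(size=n_people))[1] < 0.05
        for _ in range(n_tests)
    )
    print(false_positives)  # typically somewhere around 10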

How long you've been in the community obviously correlates very closely with how much of Robin and Eliezer's posts you've read (and both correlate with each other).

People who have read more of Robin and Eliezer's posts have higher karma. People who spend more time per day on Less Wrong have higher karma (with very strong significance, at the <.001 level.)

People who have been in the community a long time and read many of EY and RH's posts are more likely to believe in Many Worlds and Cryonics, two unusual topics that were addressed particularly well on Overcoming Bias. That suggests if you're a new person who doesn't currently believe in those two ideas, and they're important to you, you might want to go back and find the OB sequences about them (here's Many Worlds, and here's some cryonics). There were no similar effects on things like belief in God or belief in aliens.

Older people were less likely to spend a lot of time on the site, less likely to believe in Many Worlds, less likely to believe in global warming, and more likely to believe in aliens.

Everything in the God/revealed religion/supernatural cluster correlated pretty well with each other. Belief in cryonics correlated pretty well with belief in anti-agathics.

Here is an anomalous finding I didn't expect: the higher a probability you assign to the truth of revealed religion, the less confident you are that your IQ is above average (even though no correlation between this religious belief and IQ was actually found). Significance is at the .025 level. I have two theories on this: first, that we've been telling religious people they're stupid for so long that it's finally starting to sink in :) Second, that most people here are not religious, and so the people who put a \"high\" probability for revealed religion may be assigning it 5% or 10%, not because they believe it but because they're just underconfident people who maybe overadjust for their biases a little too much. This same underconfidence leads them to underestimate the possibility that their IQ is above average.

The higher probability you assign to the existence of aliens in the universe, the more likely you are to think we'll survive until 2100 (p=.002). There is no similar correlation for aliens in the galaxy. I credit the Great Filter article for this one too - if no other species exist, it could mean something killed them off.

And, uh, the higher probability you assign to the existence of aliens in the galaxy (but not in the universe) the more likely you are (at a .05 sig) to think global warming is man-made. I have no explanation for this one. Probably one of those coincidences.

Moving on - of the 102 people who cared about the ending to 3 Worlds Collide, 68 (66.6%) preferred to see the humans blow up Huygens, while 34 (33.3%) thought we'd be better off cooperating with the aliens and eating delicious babies.

Of the 114 people who had opinions about the Singularity, 85 (74.6%) go with Eliezer's version, and 29 (25.4%) go with Robin's.

If you're playing another Less Wronger in the Prisoner's Dilemma, you should know that of the 133 who provided valid information for this question, 96 (72.2%) would cooperate and 37 (27.8%) would defect. The numbers switch when one player becomes an evil paper-clip loving robot; out of 126 willing to play the \"true\" Prisoner's Dilemma, only 42% cooperate and 58% defect.

Of the 124 of us willing to play the Counterfactual Mugging, 53 (42.7%) would give Omega the money, and 71 (57.3%) would laugh in his face.

Of the 146 of us who had an opinion on aid to Africa, 24 (16.4%) thought it was almost always a good thing, 42 (28.8%) thought it was almost always a bad thing, and 80 (54.8%) took a middle-of-the-road approach and said it could be good, but only in a few cases where it was done right.

Of the 128 of us who wanted to talk about our moral theories, 94 (73.4%) were consequentialists, about evenly split between garden-variety and Eliezer-variety (many complained they didn't know what Eliezer's interpretation was, or what the generic interpretation was, or that all they knew was that they were consequentialists). 15 (11.7%) said, with more or fewer disclaimers, that they were basically deontologists, 5 (3.9%) wrote in virtue ethics and objected to their beliefs being left out (sorry!), and 14 (10.9%) didn't believe in morality.

Despite the seemingly overwhelming support for cryonics any time someone mentions it, only three of us are actually signed up! Of the 161 of us who admitted we weren't, 11 (6.8%) just never thought about it, 99 (61.5%) are still considering it, and 51 (31.7%) have decided against it.

" } }, { "_id": "7eYhbDMvz5JB2FHQ4", "title": "Rationality in the Media: Don't (New Yorker, May 2009)", "pageUrl": "https://www.lesswrong.com/posts/7eYhbDMvz5JB2FHQ4/rationality-in-the-media-don-t-new-yorker-may-2009", "postedAt": "2009-05-12T13:32:34.455Z", "baseScore": 7, "voteCount": 7, "commentCount": 16, "url": null, "contents": { "documentId": "7eYhbDMvz5JB2FHQ4", "html": "

Link: \"Don't: The secret of self-control\", Jonah Lehrer. The New Yorker. May 18, 2009.

\n

Article Summary

\n

Walter Mischel, a psychologist at Columbia University, has spent a long time studying what correlates with failing or passing a test intended to measure a preschooler's ability to delay gratification. The original experiment, involving a marshmallow and the promise of another if the first one remained uneaten for fifteen minutes, took place at Bing Nursery School in the \"late 1960's\". Mischel found several correlates, none of them really surprising. He discovered a few methods that helped children learn to better delay gratification, but it is unclear if learning the tricks changed any of the correlations. He and the research tradition he started are now waiting for fMRI studies, because that's what the discriminating 21st century psychologist does.

\n

Best line: \"'I know I shouldn't like them,' she says. 'But they're just so delicious!'\"

\n

Article Criticism

\n

The New Yorker is not exactly meant as a science journal for scientists, so it leads with, and weaves in, a couple of human-interest stories alongside the boring science parts. We hear from a 'high delayer' (someone who did well at the test) and her 'low delayer' brother. This sort of opening seems terribly common in popular science writing, and it's so infuriating because it primes the reader with the conclusions of the science before the science is even discussed. The high delayer goes on to become a social psychologist; the low delayer goes to Hollywood and is now working on becoming a producer. So already we see anecdotal evidence that high delayers are successful, low delayers are unsuccessful, and that there's no genetic component (because brothers and sisters share so much genetic material &mdash; right?).

\n

At this point the author gives the impetus for Mischel's study as conversations he had with his daughters after they left Bing. Of course, that's not the whole story; that just explains the longitudinal part. According to Mischel's bio halfway through the article, he's been doing experiments like the marshmallow test ever since his Ph.D. The article cites a chocolate bar test he did in Trinidad in 1955 that was intended to examine the racial stereotypes of Africans and East Indians.

\n

The only technique mentioned that helped increase children's ability to delay gratification was \"...a simple set of mental tricks &mdash; such as pretending the candy is only a picture, surrounded by an imaginary frame ....\" However, buried on the next-to-last page of the article, we find that \"...it remains unclear if these new skills persist over the long term.\"

\n

Background Literature

\n

I dug up the delay test article (W. Mischel, Y. Shoda, and M. I. Rodriguez, Science 244 (4907), 933, 26 May 1989), which those without a subscription can find here, and found a few more practical applications. Mischel et al. found, for instance, that \"attention to the rewards consistently and substantially decreased delay time\" (934). This suggests that if you would like to delay a reward, it's best not to see the reward directly. However, another experiment showed that images of the reward made delaying gratification easier. By far, though, the best technique discovered by the study was self-distraction. Even when the reward was present in the room with them, children who sang songs to themselves or played with toys did much better at the delay test than those who didn't.

\n

In conclusion, I found neither the article nor the study itself practically helpful. I was annoyed by the human-interest propaganda that littered the article and the narrowness of the original study. As far as beating akrasia goes, the best advice I can give for improving short-term delay of gratification is to practice self-distraction and avoid tempting yourself with the actual reward. If you think a visual reminder of the reward would help, settle for a picture.

" } }, { "_id": "FHukyfMagq4HrBYNt", "title": "Willpower Hax #487: Execute by Default", "pageUrl": "https://www.lesswrong.com/posts/FHukyfMagq4HrBYNt/willpower-hax-487-execute-by-default", "postedAt": "2009-05-12T06:46:11.993Z", "baseScore": 85, "voteCount": 73, "commentCount": 59, "url": null, "contents": { "documentId": "FHukyfMagq4HrBYNt", "html": "

This is a trick that I use for getting out of bed in the morning - quite literally:  I count down from 10 and get out of bed after the \"1\".

\n

It works because instead of deciding to get out of bed, I just have to decide to implement the plan to count down from 10 and then get out of bed.  Once the plan is in motion, the final action no longer requires an effortful decision - that's the theory, anyway.  And to start the plan doesn't require as much effort because I just have to think \"10, 9...\"

\n

As usual with such things, there's no way to tell whether it works because it's based on any sort of realistic insight or if it works because I believe it works; and in fact this is one of those cases that blurs the boundary between the two.

\n

The technique was originally inspired by reading some neurologist suggesting that what we have is not \"free will\" so much as \"free won't\": that is, frontal reflection is mainly good for suppressing the default mode of action, more than originating new actions.

\n

Pondering that for a bit inspired the idea that - if the brain carries out certain plans by default - it might conserve willpower to first visualize a sequence of actions and try to 'mark' it as the default plan, and then lift the attention-of-decision that agonizes whether or not to do it, thus allowing that default to happen.

\n

For the record, I can remember a time some years ago when I would have been all enthusiastic about this sort of thing, believing that I had discovered this incredible new technique that might revolutionize my whole daily life.  Today, while I know that there are discoverables with that kind of power, I also know that it usually requires beginning from firmer foundations - reports of controlled experiments, a standard theory in the field, and maybe even math.  On the scale of depth I now use, this sort of trick ranks as pretty shallow - and in fact I really do use it just for getting out of bed.

\n

I offer this trick as an example of practical advice not backed by deep theories, of the sort that you can find on a hundred other productivity blogs.  At best it may work for some of you some of the time.  Consider yourself warned about the enthusiasm thing.

" } }, { "_id": "oekNfPb6oQX2mR2NB", "title": "No One Knows Stuff", "pageUrl": "https://www.lesswrong.com/posts/oekNfPb6oQX2mR2NB/no-one-knows-stuff", "postedAt": "2009-05-12T05:11:45.183Z", "baseScore": 9, "voteCount": 23, "commentCount": 47, "url": null, "contents": { "documentId": "oekNfPb6oQX2mR2NB", "html": "

Take a second to go upvote You Are A Brain if you haven't already...

\n

Back?  OK.

\n

Liron's post reminded me of something that I meant to say a while ago.  In the course of giving literally hundreds of job interviews to extremely high-powered technical undergraduates over the last five years, one thing has become painfully clear to me:  even very smart and accomplished and mathy people know nothing about rationality.

\n

For instance, reasoning by expected utility, which you probably consider too basic to mention, is something they absolutely fall flat on.  Ask them why they choose as they do in simple gambles involving risk, and they stutter and mutter and fail.  Even the Econ majors.  Even--perhaps especially--the Putnam winners.
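
For concreteness, the kind of calculation in question is tiny; here is a minimal sketch, where treating dollars as utility and the particular payoffs are simplifying assumptions:

    def expected_utility(gamble):
        # gamble: list of (probability, utility) pairs whose probabilities sum to 1
        return sum(p * u for p, u in gamble)

    sure_thing = [(1.0, 50)]            # $50 for certain (treating utility as dollars)
    risky_bet = [(0.5, 120), (0.5, 0)]  # fair coin flip: $120 or nothing
    print(expected_utility(sure_thing), expected_utility(risky_bet))  # 50.0 60.0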

\n

Of those who have learned about heuristics and biases, a nontrivial minority have gotten confused to the point that they offer Kahneman and Tversky's research as justifying their exhibition of a bias!

\n

So foundational explanatory work like Liron's is really pivotal.  As I've touched on before, I think there's a huge amount to be done in organizing this material and making it approachable for people that don't have the basics.  Who's going to write the Intuitive Explanation of Utility Theory?

\n

Meanwhile, I need to brush up on my Python and find a way to upvote Liron more than once.  If only...

\n

Update: Tweaked language per suggestion, added Kahneman and Tversky link.

" } }, { "_id": "N8auz4iC2bSakGCoa", "title": "Just making sure posts can be written", "pageUrl": "https://www.lesswrong.com/posts/N8auz4iC2bSakGCoa/just-making-sure-posts-can-be-written", "postedAt": "2009-05-12T01:11:19.886Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "N8auz4iC2bSakGCoa", "html": "

Hi

" } }, { "_id": "r5H6YCmnn8DMtBtxt", "title": "You Are A Brain", "pageUrl": "https://www.lesswrong.com/posts/r5H6YCmnn8DMtBtxt/you-are-a-brain", "postedAt": "2009-05-09T21:53:26.771Z", "baseScore": 131, "voteCount": 127, "commentCount": 64, "url": null, "contents": { "documentId": "r5H6YCmnn8DMtBtxt", "html": "

Here is a 2-hour slide presentation I made for college students and teens:

\n

You Are A Brain

\n

It's an introduction to realist thinking, a tour of all the good stuff people don't realize until they include a node for their brain's map in their brain's map. All the concepts come from Eliezer's posts on Overcoming Bias.

\n

I presented this to my old youth group while staffing one of their events. In addition to the slide show, I had a browser with various optical illusions open in tabs, and I brought in a bunch of lemons and miracle fruit tablets. They had a good time and stayed engaged.

\n

I hope the slides will be of use to others trying to promote the public understanding of rationality.

\n

Note: When you view the presentation, make sure you can see the speaker notes. They capture the gist of what I was saying while I was showing each slide.

\n

Added 6 years later: I finally made a video of myself presenting this, except this time it was an adult audience. See this discussion post.

" } }, { "_id": "vtkh36YtGTZcniobB", "title": "Step Back", "pageUrl": "https://www.lesswrong.com/posts/vtkh36YtGTZcniobB/step-back", "postedAt": "2009-05-09T18:07:34.526Z", "baseScore": 18, "voteCount": 19, "commentCount": 5, "url": null, "contents": { "documentId": "vtkh36YtGTZcniobB", "html": "

From a recent Psychological Science,

\n
In everyday life, individuals typically approach desired stimuli by stepping forward and avoid aversive stimuli by stepping backward... Cognitive functioning was gauged by means of a Stroop task immediately after a participant stepped in one direction... Stepping backward significantly enhanced cognitive performance compared to stepping forward or sideways. Considering the effect size, backward locomotion appears to be a very powerful trigger to mobilize cognitive resources.
\n

As Chris Chatham notes,

\n
This work is remarkable not only for demonstrating how a very concrete and simple bodily experience can influence even the highest levels of cognitive processing (in this case, the so-called \"cognitive control\" processes that enable focused attention), but also because performance on the Stroop task is notoriously difficult to improve.
\n

When you suddenly realize that a task is more difficult than you assumed it would be, or when you face a particularly difficult choice in pursuit of rationality, you may find it useful to literally take a step back. For those of us who are particularly interested in making good decisions, this may also serve the purpose of self-signaling, as Yvain and commenters discussed earlier.

Chris's post has a link to a pdf of the paper.

" } }, { "_id": "RuFPGfq7QeNoeGcbs", "title": "How Not to be Stupid: Brewing a Nice Cup of Utilitea", "pageUrl": "https://www.lesswrong.com/posts/RuFPGfq7QeNoeGcbs/how-not-to-be-stupid-brewing-a-nice-cup-of-utilitea", "postedAt": "2009-05-09T08:14:18.363Z", "baseScore": 2, "voteCount": 7, "commentCount": 17, "url": null, "contents": { "documentId": "RuFPGfq7QeNoeGcbs", "html": "

Previously: \"objective probabilities\", but more importantly knowing what you want

\n

Slight change of plans: the only reason I brought up the \"objective\" probabilities as early as I did was to help establish the idea of utilities. But with all the holes that seem to need to be patched to get from one to the other (continuity, etc), I decided to take a different route and define utilities more directly. So, for now, forget about \"objective probabilities\" and frequencies for a bit. I will get back to them a bit later on, but for now am leaving them aside.

\n

So, we've got preference rankings, but not much of a sense of scale yet. We don't have much of a way of asking \"how _much_ do you prefer this to that?\" That's what I'm going to deal with in this post. There will be some slightly roundabout abstract bits here, but they'll let me establish utilities. And once I have that, more or less all I have to do is use utilities as a currency to apply dutch book arguments to. (That will more or less be the shape of the rest of the sequence.) The basic idea here is to work out a way of comparing the magnitudes of the differences of preferences. i.e., how much you would have preferred some A2 to A1 vs how much you would have preferred some B2 to B1. But it seems difficult to define, no? \"How much would you have wanted to switch reality to A2, if it was in state A1, vs how much would you have wanted to switch reality to B2, given that it was in B1?\"

\n

So far, the best solution I can think of is to ask \"if you are equally uncertain about whether A1 or B1 is true, would you prefer to replace A1, if it would have been true, with A2, or similar for B1 to B2\"? Specifically, supposing you're in a state of complete uncertainty with regards to two possible states/outcomes A1 or B1, so that you'd be equally surprised by both. Then consider that, instead of keeping that particular set of two possibilities, you have to choose between two substitutions: you can choose to either conditionally replace A1 with A2 (that is, if A1 would have been the outcome, you get A2 instead) _or_ you can choose to replace B1 with B2 in the same sense. So, you have to choose between (A2 or B1) and (A1 or B2) (where, again, your state of uncertainty is such that you'd be equally surprised by either outcome. That is, you can imagine that whatever it is that's giving rise to your uncertainty is effectively controlling both possibilities. You simply get to decide which of those are wired to the source of uncertainty) If you choose the first, then we will say that the amount of difference in your preference between A2 and A1 is bigger than between B2 and B1. And vice versa. And if you're indifferent, we'll say the preference difference of A2 vs A1 = the preference difference of B2 vs B1.

\n

But wait! You might be saying \"oh sure, that's all nice, but why the fluff should we consider this to obey any form of transitivity? Why should we consider this sort of comparison to actually correspond to a real ordered ranking of these things?\" I'm glad you asked, because I'm about to tell you! Isn't that convenient? ;)

\n

First, I'm going to introduce a slightly unusual notation which I don't expect to ever need use again. I need it now, however, because I haven't established epistemic probabilities, yet I need to be able to talk about \"equivalent uncertainties\" without assuming \"uncertainty = probability\" (which I'll basically be establishing over the next several posts.)

\n

A v B v C v D ... will be defined to mean that you're in a state of uncertainty such that you'd be equally surprised by any of those outcomes. (Obviously, this is commutative. A v B v C is the same state as C v A v B, for instance.)

\n

Next, I need to establish the following principle:

\n

If you prefer Ai v Bi v Ci v... to Aj v Bj v Cj v..., then you should prefer Ai v Bi v Ci v... v Z to Aj v Bj v Cj v... v Z.

\n

If this seems familiar, it should. However, this is a bit more abstract, since we don't yet have a way to measure uncertainty. I'm just assuming here that one can meaningfully say things like \"I'd be equally surprised either way.\" We'll later revisit this argument once we start to apply a numerical measure to our uncertainty.

\n

To deal with a couple possible ambiguities, first imagine you use the same source of uncertainty no matter which outcome you choose. So the only thing you get to choose is which outcomes are plugged into the \"consequence slots\". Then, imagine that you switch the source of uncertainty with an equivalent one. Unless you place some inherent value in something about whatever it is that is leading to your state of uncertainty, or you have additional information (in which case it's not an equivalent amount of uncertainty, so it doesn't even apply here), you should value it the same either way, right? Basically an \"it's the same, unless it's different\" principle. :) But for now, if it helps, imagine it's the same source of uncertainty, just different consequences plugged in. Suppose you preferred Aj v Bj v ... v Z to Ai v Bi v ... v Z. You have an equal amount of expectation (in the informal sense) of Z in either case, by assumption, so it makes no difference which of the two you select as far as Z is concerned. And if Z doesn't happen, you're left with the rest (assuming appropriate mutual exclusiveness, etc.). So that leaves you back at the \"i\"s vs the \"j\"s, but by assumption you already preferred, overall, the set of \"i\"s to the set of \"j\"s. So preferring the \"j\" set with Z tacked on means that either Z is the outcome, which could have happened the same either way, or, if not Z, then what you have left is equivalent to the Aj v Bj v ... option, which, by assumption, you prefer less than the \"i\"s. So, effectively, you either gain nothing, or end up with a set of possibilities that you prefer less overall. By the power vested in me by Don't Be Stupid, I say that therefore if you prefer Ai v Bi v ... to Aj v Bj v ..., then you must have the same preference ordering when the Z is tacked on.

\n

There is, however, a possibility that we haven't quite eliminated via the above construction: being indifferent to Ai v Bi v ... vs Aj v Bj v ... while actually preferring one of the versions with the Z tacked on. All I can say to that is: unless your preference structure explicitly contains some term for certain types of sources of uncertainty, set up in certain ways, leading to certain preferences, I don't see any reasonable way that should be happening. i.e., where would the latter preference be arising from, if it's not arising from preferences relating to the individual possibilities?

\n

I admit, this is a weak point. In fact, it may be the weakest part, so if anyone has any actual concrete objections to this bit, I'd be interested in hearing it. But the \"reasonableness\" criteria seems reasonable here. So, for now at least, I'm going to go with it as \"sufficiently established to move on.\"

\n

So let's get to building up utilities: Suppose the preference difference of A2 vs A1 is larger than the preference difference of B2 vs B1, which is larger than the preference difference of C2 vs C1.

\n

Is preference difference A2 vs A1 larger than C2 vs C1, in terms of the above way for comparing the magnitudes of preference differences?

\n

Let's find out (Where >, <, and = are being used to represent preference relations)

\n

We have

\n

A2 v B1 > A1 v B2

\n

We also have

\n

B2 v C1 > B1 v C2

\n

Let's now use our above theorem of being able to tack on a \"Z\" without changing preference ordering.

\n

The first one we will transform into (by tacking an extra C1 onto both sides):

\n

A2 v B1 v C1 > A1 v B2 v C1

\n

The second comparison will be transformed into (by tacking an extra A1 onto both sides):

\n

A1 v B2 v C1 > A1 v B1 v C2

\n

Aha! Now we've got an expression that shows up in both the top and the bottom: specifically, A1 v B2 v C1.

\n

By earlier postings, we've already established that preference rankings are transitive, so we must therefore derive:

\n

A2 v B1 v C1 > A1 v B1 v C2

\n

And, again, by the above rule, we can chop off a term that shows up on both sides, specifically B1:

\n

A2 v C1 > A1 v C2

\n

Which is our definition for saying the preference difference between A2 and A1 is larger than that between C2 and C1 (given equal expectation, in the informal sense, of A1 or C1, you'd rather replace the possibility A1 with A2 than replace the possibility C1 with C2). And a similar argument applies for equality. So there, we've got transitivity for our comparisons of differences of preferences. Woooo!
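
As a sanity check, here is the derivation mirrored numerically. Note that this sketch cheats by using explicit 50/50 expected values as a stand-in for the equally-surprised condition, which this sequence has deliberately not assumed yet; the utility numbers are arbitrary:

    # Decide 'which difference is bigger' by comparing the two equal-chance mixtures,
    # then check that the resulting comparisons chain together transitively.
    W = {'A1': 0.0, 'A2': 6.0, 'B1': 1.0, 'B2': 4.0, 'C1': 2.0, 'C2': 3.0}

    def bigger_difference(x2, x1, y2, y1):
        # Prefer (X2 v Y1) to (X1 v Y2) under 50/50 uncertainty
        return 0.5 * W[x2] + 0.5 * W[y1] > 0.5 * W[x1] + 0.5 * W[y2]

    assert bigger_difference('A2', 'A1', 'B2', 'B1')   # A-difference > B-difference
    assert bigger_difference('B2', 'B1', 'C2', 'C1')   # B-difference > C-difference
    assert bigger_difference('A2', 'A1', 'C2', 'C1')   # and therefore A > C, as derived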

\n

Well then, let us call W a utility function if it has the property that W(B2) - W(B1) being greater than, less than, or equal to W(A2) - W(A1) implies that the corresponding relation holds between the preference differences. For example, if we have this:

\n

W(B2) - W(B1) > W(A2) - W(A1), then we have this:

\n

B2 v A1 > B1 v A2.

\n

(and similar for equality.)

\n

In other words, differences of utility act as an internal currency. Gaining X points of utility corresponds to a climb up your preference ranking that's worth the same no matter what the starting point is. This gives us something to work with.

\n

Also, note the relations will hold for arbitrary shifting of everything by an equal amount, and by multiplying everything by some positive number. So, basically, you can do a (positive) affine transform on the whole thing and still have all the important properties retained, since all we care about are the relationships between differences, rather than the absolute values.
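
For anyone who prefers to see that checked in code rather than argued, here is a tiny sketch (the utility numbers and the particular transform are arbitrary examples):

    W = {'A1': 1.0, 'A2': 4.0, 'B1': 2.0, 'B2': 3.5}   # arbitrary example utilities

    def diff(w, x2, x1):
        return w[x2] - w[x1]

    a, b = 2.5, -7.0                               # any a > 0 and any b
    W2 = {k: a * v + b for k, v in W.items()}      # positive affine transform of W

    before = diff(W, 'A2', 'A1') > diff(W, 'B2', 'B1')
    after = diff(W2, 'A2', 'A1') > diff(W2, 'B2', 'B1')
    assert before == after   # the ordering of preference differences is preserved
    print(before, after)     # True True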

\n

And there it is, that's utilities. An indexing of preference rankings with the special property that differences between those indices actually corresponds in a meaningful way to _how much_ you prefer one thing to another.

" } }, { "_id": "PrajDAbSgD6SDDa6d", "title": "A Request for Open Problems", "pageUrl": "https://www.lesswrong.com/posts/PrajDAbSgD6SDDa6d/a-request-for-open-problems", "postedAt": "2009-05-08T13:33:29.567Z", "baseScore": 28, "voteCount": 32, "commentCount": 103, "url": null, "contents": { "documentId": "PrajDAbSgD6SDDa6d", "html": "

Open problems are clearly defined problems1 that have not been solved. In older fields, such as Mathematics, the list is rather intimidating. Rationality, on the other hand, seems to have no such list.

\n

While we have all of us here together to crunch on problems, let's shoot higher than trying to think of solutions and then finding problems that match the solution. What things are unsolved questions? Is it reasonable to assume those questions have concrete, absolute answers?

\n

The catch is that these problems cannot be inherently fuzzy problems. \"How do I become less wrong?\" is not a problem that can be clearly defined. As such, it does not have a concrete, absolute answer. Does Rationality have a set of problems that can be clearly defined? If not, how do we work toward getting our problems clearly defined?

\n

See also: Open problems at LW:Wiki

\n

1: \"Clearly defined\" essentially means a formal, unambiguous definition.  \"Solving\" such a problem would constitute a formal proof.

" } }, { "_id": "mkNarWtkpZdnX6MCb", "title": "Framing Consciousness", "pageUrl": "https://www.lesswrong.com/posts/mkNarWtkpZdnX6MCb/framing-consciousness", "postedAt": "2009-05-08T10:27:02.864Z", "baseScore": -3, "voteCount": 16, "commentCount": 45, "url": null, "contents": { "documentId": "mkNarWtkpZdnX6MCb", "html": "

Update: you can ignore this post, it's completely wrong, I'm only leaving it up to preserve people's comments. Randallsquared has caught a crucial mistake in my reasoning: consciousness could require physical causality, rather than a property of some snapshot description. This falsifies my Point 3 below.

\n

...

\n

\n

In this unabashedly geek-porn post I want to slightly expand our discussion of consciousness, as defined in the hard problem of consciousness. Don't be scared: no quantum claptrap or \"informational system\" bullshit.

\n

Point 1. The existence (and maybe degree) of conscious/subjective experiences is an objective question.

\n

Justification: if you feel a human possesses as much consciousness as a rock or the number three, stop reading now. This concludes the \"proof by anthropic principle\" or \"by quantum immortality\" for those still reading.

\n

Point 2. It's either possible or impossible in principle to implement consciousness on a Turing-equivalent digital computer.

\n

Justification: obvious corollary of Point 1.

\n

Point 3. If consciousness is implementable on a digital computer, all imaginable conscious experiences already exist.

\n

Justification: the state of any program can be encoded as an integer. What does it mean for an integer to \"exist\"? Does three \"exist\"? If a computer program gives rise to \"actually existing\" subjective experiences, then so does the digit expansion of the x-coordinate of some particle in the Magellanic Cloud, written out in trinary.
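
A mundane concrete version of the encoding claim, sketched in Python (the particular state dictionary is just a stand-in):

    import pickle

    state = {'step': 42, 'activations': [0.2, 0.7, 0.1]}   # stand-in program state
    as_bytes = pickle.dumps(state)                          # any state is a finite byte string
    as_integer = int.from_bytes(as_bytes, 'big')            # any finite byte string is an integer

    # The encoding is reversible, so the integer determines the state exactly.
    n_bytes = (as_integer.bit_length() + 7) // 8
    assert pickle.loads(as_integer.to_bytes(n_bytes, 'big')) == state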

\n

Point 4. If consciousness is implementable, the Simulation Argument is invalid and Pascal's Mugging is almost certainly invalid.

\n

Justification: obvious corollary of Point 3.

\n

Point 5. If consciousness is non-implementable, the Simulation Argument and Robin's uploads scenario lose much of their punch.

\n

Justification: the extinction threat in SA and the upload transition only feel urgent due to our current rapid progress with digital computers. We don't yet have a computer peripheral for providing programs with a feeling of non-implementable consciousness.

\n

Point 6. If consciousness could be implementable, Eliezer had better account for it when designing his FAI.

\n

Justification: there's no telling what the FAI will do when it realizes that actual humans have no privileged status over imaginable humans, or alternatively that they do and torturing simulated humans carries no moral weight.

\n

Point 7. The implementability of currently known physics gives strong evidence that consciousness is implementable.

\n

Justification: pretty obvious. Neurons have been called \"essentially classical objects\", not even quantum.

\n

Point 8. The fact that evolution gave us conscious brains rather than \"dumb computers\" gives weak evidence that consciousness is non-implementable.

\n

Justification: we currently know of no reason why organisms would need implementable consciousness, whereas using a natural phenomenon of non-implementable consciousness could give brains extra computational power.

\n

Any disagreements? Anything else interesting in this line of inquiry?

" } }, { "_id": "z37KpjFbSzogXi8d4", "title": "Replaying History", "pageUrl": "https://www.lesswrong.com/posts/z37KpjFbSzogXi8d4/replaying-history", "postedAt": "2009-05-08T05:35:23.942Z", "baseScore": 7, "voteCount": 16, "commentCount": 19, "url": null, "contents": { "documentId": "z37KpjFbSzogXi8d4", "html": "

One of my favorite fiction genres is alternative history.  The basic idea of alternative history is to write a story set in an alternate universe where history played out differently.  Popular alternate histories include those where the Nazis win World War II, the USSR wins the Cold War, and the Confederate States of America win the American Civil War.  But most of the writing in this genre has a serious flaw:  the author starts out by saying \"wouldn't it be cool to write a story where X had happened instead of Y\" and then works backwards to concoct historical events that will lead to the desired outcome.  No matter how good the story is, the history is often bad because at every stage the author went looking for a reason for things to go his way.

\n

Being unsatisfied with most alternate histories, I like to play a historical \"what if\" game.  Rather than asking the question at the conclusion, though (like \"what if the Nazis had won the war\"), I ask it at an earlier moment, ideally one where chance played an important role.  What if Napoleon had been convinced not to invade Russia?  What if the Continental Army had not successfully retreated from New York?  What if Viking settlements in Newfoundland had not collapsed?  These are as opposed to \"What if Napoleon had never been defeated?\", \"What if the Colonies had lost the American Revolutionary War?\", and \"What if Vikings had developed a thriving civilization in the Americas?\".  I find that replaying history in this way is a fun use of my analytical skills, but more importantly a good test of my rationality.

\n

One of the most difficult things in thinking of an alternative history is to stay focused on the facts and likely outcomes.  It's easy to say \"I'd really like to see a world where X happened\" and then silently or overtly bias your thinking until you find a way to achieve the desired outcome.  Learning to avoid this takes discipline, especially in a domain like alternate history where there's no way to check if your reasoning turned out to be correct.  But unlike imagining the future, making an alternate history does have the real history to measure up against, so it provides a good training ground for futurists who don't want to wait 20 or 30 years to get feedback on their thinking.

\n

Given all this, I have two suggestions.  First, this indicates that a good way to teach history and rational thinking at the same time would be to present historical data up to a set point, ask students to reason out what they think will happen next in history, and then reveal what actually happened and use the feedback to calibrate and improve our historical reasoning (which will hopefully provide some benefit in other domains).  Second, a good way to build experience applying the skills of rationality is to publicly present and critique alternate histories.

\n

In that vein, if there appears to be sufficient interest, I'll start doing a periodic article here dedicated to the discussion of some particular alternative history.  The discussion will be in the comments:  people can propose outcomes, then others can revise and critique and propose other outcomes, continuing the cycle until we hit a brick wall (not enough information, question asks something that would not have changed history, etc.) or come to a consensus.

\n

What do you all think of this idea?

" } }, { "_id": "g2Rh4xXALjuyLqm4k", "title": "Epistemic vs. Instrumental Rationality: Case of the Leaky Agent", "pageUrl": "https://www.lesswrong.com/posts/g2Rh4xXALjuyLqm4k/epistemic-vs-instrumental-rationality-case-of-the-leaky", "postedAt": "2009-05-07T23:09:40.070Z", "baseScore": 18, "voteCount": 22, "commentCount": 22, "url": null, "contents": { "documentId": "g2Rh4xXALjuyLqm4k", "html": "

Suppose you hire a real-estate agent to sell your house. You have to leave town so you give him the authority to negotiate with buyers on your behalf. The agent is honest and hard working. He'll work as hard to get a good price for your house as if he's selling his own house. But unfortunately, he's not very good at keeping secrets. He wants to know what is the minimum amount you're willing to sell the house for so he can do the negotiations for you. But you know that if you answer him truthfully, he's liable to leak that information to buyers, giving them a bargaining advantage and driving down the expected closing price. What should you do? Presumably most of you in this situation would give the agent a figure that's higher than the actual minimum. (How much higher involves optimizing a tradeoff between the extra money you get if the house sells, versus the probability that you can't find a buyer at the higher fictional minimum.)
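
As a toy version of that tradeoff: the sketch below picks the figure to tell the agent by maximizing expected proceeds. The sale-probability curve and the dollar figures are made-up assumptions, not a model of real housing markets:

    def prob_of_sale(stated_minimum, fair_value=300_000, spread=40_000):
        # Chance of finding a buyer falls off as the stated minimum rises (toy model).
        return max(0.0, min(1.0, 1.0 - (stated_minimum - fair_value) / spread))

    def expected_proceeds(stated_minimum, true_minimum=280_000):
        # The leak means buyers end up paying roughly the figure the agent believes.
        return prob_of_sale(stated_minimum) * (stated_minimum - true_minimum)

    candidates = range(280_000, 341_000, 5_000)
    best = max(candidates, key=expected_proceeds)
    print(best, round(expected_proceeds(best)))  # 310000 22500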

\n

Now here's the kicker: that agent is actually your future self. Would you tell yourself a lie, if you could believe it (perhaps with the help of future memory modification technologies), and if you could profit from it?

\n

Edit: Some commenters have pointed out that this change in \"minimum acceptable price\" may not be exactly a lie. I should have made the example a bit clearer. Let's say that if you fail to sell the house by a certain date, it will be repossessed by the bank, so the minimum acceptable price is the amount left on your mortgage, since you're better off selling the house for any amount above that than not selling it. But if buyers know that, they can just offer you slightly above the minimum acceptable price. It will help you get a better bargain if you can make yourself believe that the amount left on your mortgage is higher than it really is. This should be unambiguously a lie.

" } }, { "_id": "wCgRvyYJEF8me2QW9", "title": "The First Koan: Drinking the Hot Iron Ball", "pageUrl": "https://www.lesswrong.com/posts/wCgRvyYJEF8me2QW9/the-first-koan-drinking-the-hot-iron-ball", "postedAt": "2009-05-07T17:41:43.586Z", "baseScore": 0, "voteCount": 19, "commentCount": 57, "url": null, "contents": { "documentId": "wCgRvyYJEF8me2QW9", "html": "

In the traditions of Zen in which koans are common teaching tools, it is common to use a particular story as a novice's first koan.  It's the story of Joshu's Dog.

\n
\n

A monk asked Joshu, a Chinese Zen master: `Has a dog Buddha-nature or not?'

\n

Joshu answered: `Mu.'  [Mu is the negative symbol in Chinese, meaning `No-thing' or `Nay'.]

\n
\n

What does this koan mean?  How can we find out for ourselves?

\n

It is important to remember certain things:  Firstly, koans are not meant to be puzzles, riddles, or intellectual games.  They are examples, illustrations of the state of mind that the student is expected to internalize.  Secondly, they often appear paradoxical.

\n
\n

Paradox is a pointer telling you to look beyond it.  If paradoxes bother you, that betrays your deep desire for absolutes.  The relativist treats a paradox merely as interesting, perhaps amusing or even -- dreadful thought -- educational.

\n
\n

Thirdly, the purpose of Zen teaching isn't to acquire new conceptual baggage, but to eliminate it; not to generate Enlightenment, but to remove the false beliefs that prevent us from recognizing what we already possess.  Shedding error is the point, not learning something new.

\n

Take a look at Mumon's commentary for this koan:

\n
\n

To realize Zen one has to pass through the barrier of the patriarchs. Enlightenment always comes after the road of thinking is blocked. If you do not pass the barrier of the patriarchs or if your thinking road is not blocked, whatever you think, whatever you do, is like a tangling ghost. You may ask: What is a barrier of a patriarch? This one word, Mu, is it.

\n

This is the barrier of Zen. If you pass through it you will see Joshu face to face. Then you can work hand in hand with the whole line of patriarchs. Is this not a pleasant thing to do?

\n

If you want to pass this barrier, you must work through every bone in your body, through every pore in your skin, filled with this question: What is Mu? and carry it day and night. Do not believe it is the common negative symbol meaning nothing. It is not nothingness, the opposite of existence. If you really want to pass this barrier, you should feel like drinking a hot iron ball that you can neither swallow nor spit out.

\n

Then your previous lesser knowledge disappears. As a fruit ripening in season, your subjectivity and objectivity naturally become one. It is like a dumb man who has had a dream. He knows about it but cannot tell it.

\n

When he enters this condition his ego-shell is crushed and he can shake the heaven and move the earth. He is like a great warrior with a sharp sword. If a Buddha stands in his way, he will cut him down; if a patriarch offers him any obstacle, he will kill him; and he will be free in this way of birth and death. He can enter any world as if it were his own playground. I will tell you how to do this with this koan:

\n

Just concentrate your whole energy into this Mu, and do not allow any discontinuation. When you enter this Mu and there is no discontinuation, your attainment will be as a candle burning and illuminating the whole universe.

\n
\n

I'll give you a hint:  Joshu's reply isn't really an answer to the monk's question, it's a response induced by it.  Joshu answers the question the monk didn't ask but should have - the question whose answer the monk is taking for granted in what he asks.

\n

This morning I passed by a gym with a glass-walled front, and I saw within the building many people working at machines, moving weights back and forth.  What was being accomplished?  Superficially, nothing at all.  Their actions would appear to be wasted; nothing was done with them.  The real purpose, of course, was to exercise the body, to condition the muscles and strengthen the bones.

\n

The point of the koan isn't to find the 'right answer', the point of the koan is to struggle with it, and by struggling, develop one's own understanding.  Contradiction and apparent contradiction is a powerful tool for this purpose.  Trying to understand, we usually perceive a contradiction and let the process terminate.  But if we keep struggling with the problem, even though we cannot expect to achieve anything, we build within ourselves ever more complex models, ways of seeing.  Eventually the complexity will be useful in dealing with other problems, ones with solutions we didn't see before.

\n

One warning:  the fact that a problem is used as a source of contradiction does not mean that it doesn't actually have an answer.  Don't mistake the use for the reality.

\n

Has a dog Buddha-nature?
This is the most serious question of all.
If you say yes or no,
You lose your own Buddha-nature.

" } }, { "_id": "Xd5RxPuKC8WiGrJ4u", "title": "Rationality is winning - or is it?", "pageUrl": "https://www.lesswrong.com/posts/Xd5RxPuKC8WiGrJ4u/rationality-is-winning-or-is-it", "postedAt": "2009-05-07T14:51:52.763Z", "baseScore": -8, "voteCount": 13, "commentCount": 10, "url": null, "contents": { "documentId": "Xd5RxPuKC8WiGrJ4u", "html": "

I feel a bit silly writing a post about connotations on a rationalist website, but I really love the quote \"Rationality (is/is not) winning\". I see a few different ways of interpreting it:

\n\n

I wonder, given the way the human brain works, is it common for there to be thoughts that are much better expressed with a short sentence full of ambiguous connotations than by long and accurate explanations? Give me your favourite ambiguous quotes!

" } }, { "_id": "bv36xQt9FNyotpqEy", "title": "On the Fence? Major in CS", "pageUrl": "https://www.lesswrong.com/posts/bv36xQt9FNyotpqEy/on-the-fence-major-in-cs", "postedAt": "2009-05-07T04:26:14.714Z", "baseScore": 20, "voteCount": 32, "commentCount": 54, "url": null, "contents": { "documentId": "bv36xQt9FNyotpqEy", "html": "

I talk to many ABDs in math, physics, engineering, economics, and various other technical fields.

\n

I work with exceptional people from all those backgrounds.

\n

I would like to unreservedly say to any collegians out there, whether choosing an undergrad major or considering fields of study for grad school: if you know you want a technical major but you're not sure which, choose Computer Science.

\n

Unless you're extremely talented and motivated, relative to your extremely talented and motivated peers, you probably aren't going to make a career in academia even if you want to.  And if you want a technical major but you're not sure which, you shouldn't want to!  Academia is a huge drag in many ways.  When a math ABD starts telling me about how she really likes her work but is sick of the slow pace and the fact that only six people in the world understand her work, I get to take a nice minute alone with my thoughts: I've heard it over and over again, in the same words and the same weary, beaten-down tone.  You shouldn't be considering a career in academia unless you're passionately in love with your field, unless you think about it in the shower and over lunch and as you drift off to sleep, unless the problem sets are a weekly joy.  A lesser love will abandon you and leave you stranded and heartbroken, four years into grad school.

\n

What's so great about CS, then?  Isn't it just a bunch of glorified not-real-math and hundreds of hours of grimy debugging?

\n

Let's start with several significant, but peripheral, reasons:

\n\n

None of that gets to my real point, which is the modes of thought that CS majors build.  Working with intransigent computer code for years upon years, the smart ones learn a deeply careful, modular, and reductionist mindset that transfers shockingly well to all kinds of systems-oriented thinking--

\n

And most significantly to building and understanding human systems.  The questions they learn to ask about a codebase--\"What invariants must this process satisfy?  What's the cleanest way to organize this structure?  How should these subsystems work together?\"--are incredibly powerful when applied to a complex human process.  If I needed a CEO for my enterprise, not just my software company but my airline, my automaker, my restaurant chain, I would start by looking for candidates with a CS background.
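To make that 'invariants' question concrete, here is a minimal, purely hypothetical sketch in Python (the RefundQueue class and ticket IDs are invented for illustration, not taken from any real codebase): the discipline of stating a property that must hold after every operation applies just as well when the 'process' is a team's workflow rather than code.

```python
from dataclasses import dataclass, field

@dataclass
class RefundQueue:
    pending: list = field(default_factory=list)   # refunds awaiting a decision
    approved: list = field(default_factory=list)  # refunds already paid out

    def submit(self, ticket_id: str) -> None:
        self.pending.append(ticket_id)
        self._check_invariant()

    def approve(self, ticket_id: str) -> None:
        self.pending.remove(ticket_id)
        self.approved.append(ticket_id)
        self._check_invariant()

    def _check_invariant(self) -> None:
        # The invariant this process must satisfy: no ticket is ever
        # both pending and approved at the same time.
        assert not set(self.pending) & set(self.approved)

q = RefundQueue()
q.submit('T-1')
q.approve('T-1')   # any operation that broke the invariant would fail loudly here
```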

\n

You can see some of this relevance in the multitude of analogies CS people are able to apply to non-CS areas.  When's the last time you heard a math person refer to some real-world situation as \"a real elliptic curve\"?  The CS people I know have a rich vocabulary of cached concepts that address real-world situations: race conditions, interrupts, stacks, queues, bandwidth, latency, and many more that go over my head, because...

\n

I didn't major in CS.  I saw it as too \"applied,\" and went for more \"elevated\" areas.  I grew intellectually through that study, but I've upped my practical effectiveness enormously in the last few years by working with great CS people and absorbing all I can of their mindset.

\n


" } }, { "_id": "reitXJgJXFzKpdKyd", "title": "Beware Trivial Inconveniences", "pageUrl": "https://www.lesswrong.com/posts/reitXJgJXFzKpdKyd/beware-trivial-inconveniences", "postedAt": "2009-05-06T22:04:02.354Z", "baseScore": 288, "voteCount": 220, "commentCount": 107, "url": null, "contents": { "documentId": "reitXJgJXFzKpdKyd", "html": "

The Great Firewall of China. A massive system of centralized censorship purging the Chinese version of the Internet of all potentially subversive content. Generally agreed to be a great technical achievement and political success even by the vast majority of people who find it morally abhorrent.

I spent a few days in China. I got around it at the Internet cafe by using a free online proxy. Actual Chinese people have dozens of ways of getting around it with a minimum of technical knowledge or just the ability to read some instructions.

The Chinese government isn't losing any sleep over this (although they also don't lose any sleep over murdering political dissidents, so maybe they're just very sound sleepers). Their theory is that by making it a little inconvenient and time-consuming to view subversive sites, they will discourage casual exploration. No one will bother to circumvent it unless they already seriously distrust the Chinese government and are specifically looking for foreign websites, and these people probably know what the foreign websites are going to say anyway.

Think about this for a second. The human longing for freedom of information is a terrible and wonderful thing. It delineates a pivotal difference between mental emancipation and slavery. It has launched protests, rebellions, and revolutions. Thousands have devoted their lives to it, thousands of others have even died for it. And it can be stopped dead in its tracks by requiring people to search for \"how to set up proxy\" before viewing their anti-government website.

\n

\n

I was reminded of this recently by Eliezer's Less Wrong Progress Report. He mentioned how surprised he was that so many people were posting so much stuff on Less Wrong, when very few people had ever taken advantage of Overcoming Bias' policy of accepting contributions if you emailed them to a moderator and the moderator approved. Apparently all us folk brimming with ideas for posts didn't want to deal with the aggravation.

Okay, in my case at least it was a bit more than that. There's a sense of going out on a limb and drawing attention to yourself, of arrogantly claiming some sort of equivalence to Robin Hanson and Eliezer Yudkowsky. But it's still interesting that this potential embarrassment and awkwardness was enough to keep the several dozen people who have blogged on here so far from sending that \"I have something I'd like to post...\" email.

Companies frequently offer \"free rebates\". For example, an $800 television with a $200 rebate. There are a few reasons companies like rebates, but one is that you'll be attracted to the television because it appears to have a net cost of only $600, but then filling out the paperwork to get the rebate is too inconvenient and you won't get around to it. This is basically a free $200 for filling out an annoying form, but companies can predict that customers will continually fail to complete it. This might make some sense if you're a high-powered lawyer or someone else whose time is extremely valuable, but most of us have absolutely no excuse.

One last example: It's become a truism that people spend more when they use credit cards than when they use cash. This particular truism happens to be true: in a study by Prelec and Simester [1], auction participants bid twice as much for the same prize when using credit as when using cash. The trivial step of getting the money and handing it over has a major inhibitory effect on your spending habits.

I don't know of any unifying psychological theory that explains our problem with trivial inconveniences. It seems to have something to do with loss aversion, and with the brain's general use of emotion-based hacks instead of serious cost-benefit analysis. It might be linked to akrasia; for example, you might not have enough willpower to go ahead with the unpleasant action of filling in a rebate form, and your brain may assign it low priority because it's hard to imagine the connection between the action and the reward.

But these trivial inconveniences have major policy implications. Countries like China that want to oppress their citizens are already using \"soft\" oppression to make it annoyingly difficult to access subversive information. But there are also benefits for governments that want to help their citizens.

\"Soft paternalism\" means a lot of things to a lot of different people. But one of the most interesting versions is the idea of \"opt-out\" government policies. For example, it would be nice if everyone put money into a pension scheme. Left to their own devices, many ignorant or lazy people might never get around to starting a pension, and in order to prevent these people's financial ruin, there is strong a moral argument for a government-mandated pension scheme. But there's also a strong libertarian argument against that idea; if someone for reasons of their own doesn't want a pension, or wants a different kind of pension, their status as a free citizen should give them that right.

The \"soft paternalist\" solution is to have a government-mandated pension scheme, but allow individuals to opt-out of it after signing the appropriate amount of paperwork. Most people, the theory goes, would remain in the pension scheme, because they understand they're better off with a pension and it was only laziness that prevented them from getting one before. And anyone who actually goes through the trouble of opting out of the government scheme would either be the sort of intelligent person who has a good reason not to want a pension, or else deserve what they get2.

This also reminds me of Robin's IQ-gated, test-requiring would-have-been-banned store, which would discourage people from certain drugs without making it impossible for the true believers to get their hands on them. I suggest such a store be located way on the outskirts of town accessible only by a potholed road with a single traffic light that changes once per presidential administration, have a surly clerk who speaks heavily accented English, and be open between the hours of two and four on weekdays.

\n

 

\n

Footnotes

\n

1: See Jonah Lehrer's book How We Decide. In fact, do this anyway. It's very good.

\n

2: Note also the clever use of the status quo bias here.

" } }, { "_id": "JvQniHSBr6JCbTRnj", "title": "Hardened Problems Make Brittle Models", "pageUrl": "https://www.lesswrong.com/posts/JvQniHSBr6JCbTRnj/hardened-problems-make-brittle-models", "postedAt": "2009-05-06T18:31:28.077Z", "baseScore": 58, "voteCount": 63, "commentCount": 40, "url": null, "contents": { "documentId": "JvQniHSBr6JCbTRnj", "html": "

Consider a simple decision problem: you arrange a date with someone, you arrive on time, your partner isn't there. How long do you wait before giving up?

\n

Humans naturally respond to this problem by acting outside the box. Wait a little then send a text message. If that option is unavailable, pluck a reasonable waiting time from cultural context, e.g. 15 minutes. If that option is unavailable...

\n

Wait, what?

\n

The toy problem was initially supposed to help us improve ourselves - to serve as a reasonable model of something in the real world. The natural human solution seemed too messy and unformalizable, so we progressively removed nuances to make the model more extreme. We introduced Omegas, billions of lives at stake, total informational isolation, perfect predictors, finally arriving at some sadistic contraption that any normal human would run away from. But did the model stay useful and instructive? Or did we lose important detail along the way?

\n

Many physical models, like gravity, have the nice property of stably approximating reality. Perturbing the positions of planets by one millimeter doesn't explode the Solar System the next second. Unfortunately, many of the models we're discussing here don't have this property. The worst offender yet seems to be Eliezer's \"True PD\", which requires the whole package of hostile psychopathic AIs, nuclear-scale payoffs and informational isolation; any natural out-of-the-box solution like giving the damn thing some paperclips or bargaining with it would ruin the game. The same pattern has recurred in discussions of Newcomb's Problem, where people have stated that any minuscule amount of introspection into Omega makes the problem \"no longer Newcomb's\". That naturally led to still more ridiculous uses of superpowers, like Alicorn's bead jar game, where (AFAIU) the mention of Omega is only required to enforce a certain assumption about its thought mechanism that's wildly unrealistic for a human.

\n

Artificially hardened logic problems make brittle models of reality.

\n

So I'm making a modest proposal. If you invent an interesting decision problem, please, first model it as a parlor game between normal people with stakes of around ten dollars. If the attempt fails, you have acquired a bit of information about your concoction; don't ignore it outright.

" } }, { "_id": "DebhAYmgEtncDhMLK", "title": "Wiki.lesswrong.com Is Live", "pageUrl": "https://www.lesswrong.com/posts/DebhAYmgEtncDhMLK/wiki-lesswrong-com-is-live", "postedAt": "2009-05-06T05:17:02.471Z", "baseScore": 11, "voteCount": 10, "commentCount": 13, "url": null, "contents": { "documentId": "DebhAYmgEtncDhMLK", "html": "

http://wiki.lesswrong.com/ is now live, for all our Wiki needs.  The previous Wikia wiki has been imported.  Knock yourself out on linking there from comments or blog posts (yes, you will have to link manually, there is no CamelCase convention yet on the blog and comments).

\n

See here for proposed wiki usage guidelines - note that these do not yet seem to appear in the Wiki itself, hint hint.

" } }, { "_id": "wcvWzgPRyzjLSqBM4", "title": "No Universal Probability Space", "pageUrl": "https://www.lesswrong.com/posts/wcvWzgPRyzjLSqBM4/no-universal-probability-space", "postedAt": "2009-05-06T02:58:06.165Z", "baseScore": 2, "voteCount": 24, "commentCount": 43, "url": null, "contents": { "documentId": "wcvWzgPRyzjLSqBM4", "html": "

This afternoon I heard a news story about a middle eastern country where one person said of the defenses for a stockpile of nuclear weapons, \"even if there is only a 1% probability of the defenses failing, we should do more to strengthen them given the consequences of their failure\".  I have nothing against this person's reasoning, but I do have an issue with where that 1% figure came from.

\n

The statement above and others like it share a common problem:  they are phrased such that it's unclear over what probability space the measure was taken.  In fact, many journalists and other people don't seem especially concerned by this.  Even some commenters on Less Wrong give little indication of the probability space over which they give a probability measure of an event, and nobody calls them on it.  So what is this probability space they are giving probability measurements over?

\n

If I'm in a generous mood, I might give the person presenting such a statement the benefit of the doubt and suppose they were unintentionally ambiguous.  On the defenses of the nuclear weapon stockpile, the person might have meant to say \"there is only a 1% probability of the defenses failing over all attacks\", as in \"in 1 attack out of every 100 we should expect the defenses to fail\".  But given both my experiences with how people treat probability and my knowledge of naive reasoning about probability, I am dubious of my own generosity.  Rather, I suspect that many people act as though there were a universal probability space over which they may measure the probability of any event.

\n

To illustrate the issue, consider the probability that a fair coin comes up heads.  We typically say that there is a 1/2 chance of heads, but what we are implicitly saying is that given a probability measure P on the measurable space ({heads, tails}, {{}, {heads}, {tails}, {heads, tails}}), P({heads}) = P({tails}) = 1/2 and P({}) = 0 and P({heads, tails}) = 1.  But if we look at the issue of a coin coming up heads from a wider angle, we could interpret it as \"what is the probability of some particular coin sitting heads-up over the span of all time\", which is another question altogether.  What this is asking is \"what is the probability of the event that this coin sits heads-up over the universal probability space\", i.e. the probability space of all events that could occur at some time during the existence of the universe.  We have no clear way to calculate the probability of such an event, other than to say that the universal probability space must contain infinitely many (how infinitely is still up for debate) events of measure zero.  So there is a universal probability space; it's just not very useful to us, hence the title of the article, since it practically doesn't exist for us.
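As a minimal sketch of what knowing your probability space amounts to, here is the coin example written out explicitly in Python (nothing here goes beyond the definitions above): the measure is defined only on the stated sigma-algebra, and its values have to satisfy the usual axioms.

```python
from itertools import chain, combinations

outcomes = frozenset({'heads', 'tails'})

def power_set(s):
    items = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(items, r)
                                         for r in range(len(items) + 1))]

sigma_algebra = set(power_set(outcomes))   # {}, {heads}, {tails}, {heads, tails}

P = {
    frozenset(): 0.0,
    frozenset({'heads'}): 0.5,
    frozenset({'tails'}): 0.5,
    outcomes: 1.0,
}

# The measure is defined exactly on the stated sigma-algebra, normalizes
# to 1, and is additive over the disjoint events {heads} and {tails}.
assert set(P) == sigma_algebra
assert P[outcomes] == 1.0
assert P[frozenset({'heads'})] + P[frozenset({'tails'})] == P[outcomes]
```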

\n

None of this is to say, though, that the people committing these crimes against probability are aware of what probability space they are taking a measure over.  Many people act as if there is some number they can assign to any event which tells them how likely it is to occur and questions of \"probability spaces\" never enter their minds.  What does it mean that something happens 1% of the time?  I don't know; maybe that it doesn't happen 99% of the time?  How is 1% of the time measured?  I don't know; maybe one out of every 100 seconds?  Their crime is not one of mathematical abuse but of mathematical ignorance.

\n

As aspiring rationalists, if we measure a probability, we ought to know over what probability space we're measuring.  Otherwise a probability isn't well defined and is just another number that, at best, is meaningless and, at worst, can be used to help us defeat ourselves.  Even if it's not always a good stylistic choice to make the probability space explicit in our speech and writing, we must always know over what probability space we are measuring a probability.  Otherwise we are just making up numbers to feel rational.

" } }, { "_id": "gWG9x4GGLkm5CYaP6", "title": "Consider Representative Data Sets", "pageUrl": "https://www.lesswrong.com/posts/gWG9x4GGLkm5CYaP6/consider-representative-data-sets", "postedAt": "2009-05-06T01:49:21.389Z", "baseScore": 12, "voteCount": 12, "commentCount": 15, "url": null, "contents": { "documentId": "gWG9x4GGLkm5CYaP6", "html": "

In this article, I consider the standard biases in drawing factual conclusions that are not related to emotional reactions, and describe a simple model summarizing what goes wrong with the reasoning in these cases, which in turn suggests a way of systematically avoiding this kind of problem.

\n

The following model is used to describe the process of getting from a question to a (potentially biased) answer for the purposes of this article. First, you ask yourself a question. Second, in the context of the question, a data set is presented before your mind, either directly, by you looking at the explicit statements of fact, or indirectly, by associated facts becoming salient to your attention, triggered by the explicit data items or by the question. Third, as a result of considering the data set, you construct an intuitive model of some phenomenon, which allows you to see its properties. And finally, you pronounce the answer, which is read out as one of the properties of the model you've just constructed.

\n

This description is meant to present mental paintbrush handles, to refer to the things you can see introspectively, and things you could operate consciously if you choose to.

\n

Most of the biases in this class may be seen as particular ways in which you pay attention to the wrong data set, one not representative of the phenomenon you model to get to the answer you seek. As a result, the intuitive model is systematically wrong, and the answer read out from it is biased. Below I review the specific biases, identifying the ways in which things go wrong in each particular case, and then summarize the classes of reasoning mistakes that play major roles in these biases and, correspondingly, the ways of avoiding them.

\n

Correspondence Bias is a tendency to attribute to a person a disposition to behave in a particular way, based on observing an episode in which that person behaves in that way. The data set that gets considered consists only of the observed episode, while the target model is of the person's behavior in general, in many possible episodes, in many different possible contexts that may influence the person's behavior.

\n

Hindsight bias is a tendency to overestimate the a priori probability of an event that has actually happened. The data set that gets considered overemphasizes the scenario that did happen, while the model that needs to be constructed, of the a priori belief, should be indifferent to which of the options will actually get realized. From this model, you need to read out the probability of the specific event, but which event you'll read out shouldn't figure into the model itself.

\n

Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable than others, or some pieces of evidence are easier to come by than others. This bias directly consists in considering a mismatched data set that leads to a distorted model, and biased estimate.

\n

Planning Fallacy is a tendency to overestimate your efficiency in achieving a task. The data set you consider consists of simple cached ways in which you move about accomplishing the task, and lacks the unanticipated problems and more complex ways in which the process may unfold. As a result, the model fails to adequately describe the phenomenon, and the answer gets systematically wrong.

\n

The Logical Fallacy of Generalization from Fictional Evidence consists in drawing real-world conclusions from statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model, and inadequate answers.

\n

Proposing Solutions Prematurely is dangerous, because it introduces weak conclusions into the pool of facts you are considering. As a result, the data set you think about becomes weaker, overly tilted towards premature conclusions that are likely to be wrong and that are less representative of the phenomenon you are trying to model than the initial facts you started from.

\n

Generalization From One Example is a tendency to pay too much attention to the few anecdotal pieces of evidence you experienced, and model some general phenomenon based on them. This is a special case of availability bias, and the way in which the mistake unfolds is closely related to the correspondence bias and the hindsight bias.

\n

Contamination by Priming is a problem that relates to the process of implicitly introducing facts into the attended data set. When you are primed with a concept, the facts related to that concept come to mind more easily. As a result, the data set selected by your mind becomes tilted towards elements related to that concept, even if the concept has no relation to the question you are trying to answer. Your thinking becomes contaminated, shifted in a particular direction. The data set in your focus of attention becomes less representative of the phenomenon you are trying to model, and more representative of the concepts you were primed with.

\n

Knowing About Biases Can Hurt People. When you learn about the biases, you obtain a toolset for constructing new statements of fact. Similarly to what goes wrong when you propose solutions to a hard problem prematurely, you contaminate the data set with weak conclusions: allegations against specific data items that don't add to the understanding of the phenomenon you are trying to model, that distract from considering the question, that take away whatever relevant knowledge you had, and that in some cases even invert it.

\n

A more general technique for not making these mistakes consists in making sure that the data set you consider is representative of the phenomenon you are trying to understand. The human brain can't automatically correct for a misleading selection of data, so you need to consciously ensure that you get presented with a balanced selection.

\n

The first mistake is the introduction of irrelevant data items. Focus on the problem; don't let distractions get their way. Irrelevant data may find its way into your thoughts covertly, through priming effects you don't even notice. Don't let anything distract you, even if you understand that the distraction isn't related to the problem you are working on. Don't construct the irrelevant items yourself, as byproducts of your activity. Make sure that the data items you consider are actually related to the phenomenon you are trying to understand. To form accurate beliefs about something, you really do have to observe it. Don't think about fictional evidence, and don't think about facts that look superficially relevant to the question but actually aren't, as in the case of hindsight bias and reasoning by surface analogies.

\n

The second mistake is to consider an unbalanced data set, overemphasizing some aspects of the phenomenon and underemphasizing others. The data needs to cover the whole phenomenon in a representative way for the human mind to process it adequately. There are two sides to correcting this imbalance. First, you may take away the excessive data points, deliberately refusing to consider them, so that your mind gets presented with less evidence, but evidence that is more balanced, more representative of what you are trying to understand. This is similar to what happens when you take an outside view, for example to avoid the planning fallacy. Second, you may generate the correct data items to fill out the rest of the model, from the cluster of evidence you've got. This generation may happen either formally, through using technical models of the phenomenon that allow you to explicitly calculate more facts, or informally, through training your intuition to follow reliable rules for interpreting the specific pieces of evidence as aspects of the whole phenomenon you are studying. Together, these skills constitute expertise in the domain, an art of knowing how to make use of data that would only confuse a naive mind. When discarding evidence to correct the imbalance, only the parts you don't possess expertise in need to be thrown away, while the parts that you are ready to process may be kept, making your understanding of the phenomenon stronger.

\n

The third mistake is to mix reliable evidence with unreliable evidence. The mind can't tell relevant info from fictional irrelevant info, let alone solid relevant evidence from shaky relevant evidence. If you know some facts for sure, and some facts only through indirect unreliable methods, don't consider the latter at all when forming your initial understanding of the phenomenon. Your own untrained intuition generates weak facts on the things in which you don't have domain expertise, for example when you spontaneously think up solutions to a hard problem. You get only wild guesses when the data is too thin for your intuition to retain at least minimal reliability once it is a few steps away from the data. You get weak evidence from applying general heuristics that don't promise exceptional precision, such as knowledge of biases. You get weak evidence from listening to the opinion of the majority, or to virulent memes. However, when you don't have reliable data, you need to start including less reliable evidence in your considerations, but only the best of what you can come up with.

\n

Your thinking shouldn't be contaminated by unrelated facts, shouldn't tumble over from the imbalance in knowledge, and shouldn't get diluted by the abundance of weak conclusions. Instead, the understanding should grow more focused on the relevant details, more comprehensive and balanced, attending to more aspects of the problem, and more technically accurate.

\n

Think representative sets of your best data.

" } }, { "_id": "u2JAuc3iQ2PRXSsEy", "title": "Introduction Thread: May 2009", "pageUrl": "https://www.lesswrong.com/posts/u2JAuc3iQ2PRXSsEy/introduction-thread-may-2009", "postedAt": "2009-05-05T20:39:33.644Z", "baseScore": 5, "voteCount": 4, "commentCount": 22, "url": null, "contents": { "documentId": "u2JAuc3iQ2PRXSsEy", "html": "

If you've just joined the Less Wrong community, here's a space for you to tell us a bit about yourself, and perhaps test the waters on any subjects you'd like to discuss with us. You might also want to check our welcome page.  Glad to have you with us =)

" } }, { "_id": "7K6og3ssSBbH7mrYM", "title": "Off Topic Thread: May 2009", "pageUrl": "https://www.lesswrong.com/posts/7K6og3ssSBbH7mrYM/off-topic-thread-may-2009", "postedAt": "2009-05-05T20:36:11.296Z", "baseScore": 2, "voteCount": 6, "commentCount": 72, "url": null, "contents": { "documentId": "7K6og3ssSBbH7mrYM", "html": "

Here's your space to talk about anything totally unrelated to being Less Wrong.

" } }, { "_id": "5XMrWNGQySFdcuMsA", "title": "How to use \"philosophical majoritarianism\"", "pageUrl": "https://www.lesswrong.com/posts/5XMrWNGQySFdcuMsA/how-to-use-philosophical-majoritarianism", "postedAt": "2009-05-05T06:49:45.419Z", "baseScore": 13, "voteCount": 26, "commentCount": 9, "url": null, "contents": { "documentId": "5XMrWNGQySFdcuMsA", "html": "

The majority of people would hold more accurate beliefs if they simply believed the majority. To state this in a way that doesn't risk information cascades, we're talking about averaging impressions and coming up with the same belief.

\n

To the degree that you come up with different averages of the impressions, you acknowledge that your belief was just your impression of the average, and you average those metaimpressions and get closer to belief convergence. You can repeat this until you get bored, but if you're doing it right, your beliefs should get closer and closer to agreement, and you shouldn't be able to predict who is going to fall on which side.
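Here is a toy sketch of that iteration, under the crude assumption that each party simply shifts part of the way toward the other's current estimate each round; real exchanges of impressions are messier, but the convergence behaviour is the point.

```python
def iterate_to_agreement(a, b, weight=0.3, rounds=6):
    # Each round, each party moves `weight` of the way toward the other's
    # current estimate, so the gap shrinks geometrically toward a common value.
    history = [(round(a, 3), round(b, 3))]
    for _ in range(rounds):
        a, b = a + weight * (b - a), b + weight * (a - b)
        history.append((round(a, 3), round(b, 3)))
    return history

print(iterate_to_agreement(0.9, 0.3))
# [(0.9, 0.3), (0.72, 0.48), (0.648, 0.552), ...] - both estimates head toward 0.6
```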

Of course, most of us are atypical cases, and as good rationalists, we need to update on this information. Even if our impressions were (on average) no better than the average, there are certain cases where we know that the majority is wrong. If we're going to selectively apply majoritarianism, we need to figure out the rules for when to apply it, to whom, and how the weighting works.

This much I think has been said again and again. I'm gonna attempt to describe how.

\n

Imagine for a moment that you are a perfectly rational Bayesian, and you just need data.

First realize that \"duplicate people\" don't count double. If you make a maximum precision copy of someone, that doesn't make him any more likely to be right - clearly we can do better than averaging over all people with equal weighting.  By the same idea, finding out that a certain train of thought leading to a certain belief is common shouldn't make you proportionally more confident in that idea.  The only reason it might make you any more confident in it is the possibility that its truth leads to its proliferation and therefore its popularity is (weak) evidence.

\n

This explains why we can dismiss the beliefs of the billions of theists. First of all, their beliefs are very well correlated, so that all useful information can be learned through only a handful of theists.  Second of all, we understand their arguments and we understand how they formed their beliefs - and have already taken them into account. The reason they continue to disagree is that the situation isn't symmetric - they don't understand the opposing arguments or the causal path that leads one to be a reductionist atheist.

\n

No wonder \"majoritarianism\" doesn't seem to work here.

\n

Since we're still pretending to be perfect Bayesians, we only care about people who are fairly predictable (given access to their information) and have information that we don't have. If they don't have any new information, then we can just follow the causal path and say, \"and here, sir, is where you went wrong.\"  Even if we don't understand their mind perfectly, we don't take them seriously, since it is clear that whatever they are doing, they're doing it wrong.  On the other hand, if the other person has a lot of data, but we have no idea how data affects their beliefs, then we can't extract any useful information.

\n

We only change our beliefs to more closely match theirs when they are not only predictable, but predictably rational. If you know someone is always wrong, then reversing his stupidity can help you get more accurate beliefs, but it won't bring you closer to agreement - just the opposite!

\n

If we stop kidding ourselves and realize that we aren't perfect Bayesians, then we have to start giving credit to how other people think. If you and an epistemic peer come upon the same data set and come to different conclusions, then you have no reason to think that your way of thinking is any more accurate than his (as we assumed he's an epistemic peer).  While you may have different initial impressions, you had better be able to converge to the same belief.  And again, on each iteration, it shouldn't be predictable who is going to fall on which side.

\n

If we revisit cases like religion, you still understand how the believers came to their beliefs and exactly why they fail.  So to the extent that you believe you can recognize stupidity when you see it, you still stick to your own belief. Even though you aren't perfect, for this case you're good enough.

One-sentence summary: You want to shift your belief to the average over answers given by predictably rational \"Rituals of Cognition\"/data set pairs [1], not people [2].

\n

You weight the different \"Rituals Of Cognition\"/data pairs by how much you trust the ROC and by how large the data set is.  You must, however, keep in mind that to trust yourself more than average, you have to have a better than average reason to think that you're better than average.
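A hypothetical sketch of that weighting rule follows (the numbers and the trust and data-size scores are invented; only the bookkeeping matters): each distinct reasoning-procedure/data pair gets one entry, weighted by trust in the procedure times the amount of data behind it.

```python
def aggregate(pairs):
    # pairs: (belief, trust_in_procedure, n_data_points) for each *distinct*
    # reasoning-procedure/data pair; a crowd sharing one argument is one entry.
    total = sum(trust * n for _, trust, n in pairs)
    return sum(belief * trust * n for belief, trust, n in pairs) / total

pairs = [
    (0.70, 0.5, 10),   # my own impression, modest data
    (0.40, 0.8, 50),   # a procedure I trust more, with more data behind it
    (0.90, 0.2, 5),    # a large like-minded group, already collapsed to one vote
]
print(round(aggregate(pairs), 3))   # 0.443
```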

\n

To the extent that everyone has a unique take on the subject, counting people and counting cognitive rituals are equivalent.  But when it comes to a group where all people think pretty close to the same way, then they only get one \"vote\". 

\n

You can get \"bonus points\" if you can predict how irrational people will respond to data, and update based on that. For practical purposes, though, I don't think much of this happens, as not many people are intelligently stupid.

\n

 

\n

ETA: This takes the anthropomorphism out of the loop. We're looking at valid ROC, and polling human beliefs is just a cheap way to find them. If we can come up with other ways of finding them, I expect that to be very valuable. The smart people that impress me most aren't the ones that learn slightly quicker, since everyone else gets there too. The smart people that impress me the most come in where everyone else is stumped and chop Gordian's knot in half with their unique way of thinking about the problem. Can we train this skill?

\n

Footnotes:
1. I'm fully aware of how hokey this sounds without any real math there, but it seems like it should be formalizable.
If you're just trying to improve human rationality (as opposed to programming AI), the real math would have to be interpreted again anyway and I'm not gonna spend the time right now.

2. Just as thinking identically to your twin doesn't help you get the right answer (and is therefore weighted less), if you can come up with more than one valid way of looking at things, you can justifiably expect to be weighted as strongly as a small group of people.

" } }, { "_id": "rQJ7Epe6b8WpX9yeK", "title": "How David Beats Goliath", "pageUrl": "https://www.lesswrong.com/posts/rQJ7Epe6b8WpX9yeK/how-david-beats-goliath", "postedAt": "2009-05-05T01:25:03.586Z", "baseScore": 25, "voteCount": 25, "commentCount": 27, "url": null, "contents": { "documentId": "rQJ7Epe6b8WpX9yeK", "html": "

From the New Yorker:

\n
\n

It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, and Ranadivé thought that that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent’s end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that made them so good?

\n
\n

[...]

\n
\n

David’s victory over Goliath, in the Biblical account, is held to be an anomaly. It was not. Davids win all the time. The political scientist Ivan Arreguín-Toft recently looked at every war fought in the past two hundred years between strong and weak combatants. The Goliaths, he found, won in 71.5 per cent of the cases. That is a remarkable fact. Arreguín-Toft was analyzing conflicts in which one side was at least ten times as powerful—in terms of armed might and population—as its opponent, and even in those lopsided contests the underdog won almost a third of the time.

[...] What happened, Arreguín-Toft wondered, when the underdogs likewise acknowledged their weakness and chose an unconventional strategy? He went back and re-analyzed his data. In those cases, David’s winning percentage went from 28.5 to 63.6.

\n
\n

[...]

\n
\n

Arreguín-Toft found the same puzzling pattern. When an underdog fought like David, he usually won. But most of the time underdogs didn’t fight like David. Of the two hundred and two lopsided conflicts in Arreguín-Toft’s database, the underdog chose to go toe to toe with Goliath the conventional way a hundred and fifty-two times—and lost a hundred and nineteen times.

\n
" } }, { "_id": "jaW5XerRuYRhzjDLE", "title": "Special Status Needs Special Support", "pageUrl": "https://www.lesswrong.com/posts/jaW5XerRuYRhzjDLE/special-status-needs-special-support", "postedAt": "2009-05-04T22:59:59.183Z", "baseScore": 30, "voteCount": 38, "commentCount": 75, "url": null, "contents": { "documentId": "jaW5XerRuYRhzjDLE", "html": "

I just recorded another BHTV with Adam Frank, though it's not out yet, and I had a thought that seems worth recording.  At a certain point in the dialogue, Adam Frank was praising the wisdom and poetry in religion.  I retorted, \"Tolkien's got great poetry, and some parts that are wise and some that are unwise; but you don't see people wearing little rings around their neck in memory of Frodo.\"

\n

(I don't remember whether this observation is original to me, so if anyone knows a prior source for this exact wording, please comment it!)

\n

The general structure of this critique is that Frank wants to assign a special status to the Book of Job, but he gives a reason that would be equally applicable to The Lord of the Rings (good poetry and some wise parts).  So if those are his real reasons, he should feel just the same way about God and Gandalf.  Or if not that exact particular book, then some other work of poetic fiction that was always understood to be poetic fiction.

\n

Later on I did demand that Adam Frank say whether he thought the Book of Job ought to be assigned any different status from The Merchant of Venice, and Frank did reply \"No\".  I'm not sure that he lives up to this reply, frankly.  I strongly suspect he grants the two works a different emotional status.  One is widely revered as Sacred Religious Truth while the other is merely a Great Work of Literature.  Frank, while not a religious believer himself, does have different modes of thought for Sacred Truth and Great Literature, and he knows that Job is supposed to be Sacred Truth.

\n

When I challenged the sacredness of the Book of Job, Frank reacted by trying to praise Job's \"great poetry\", which positive affect then seems to justify the positive-affect sacred status via the affect heuristic / halo effect.  But \"great poetry\" would apply to Tolkien as well; and yet if you talked about Tolkien the way that Frank talked about Job, most people would write you down as a hopeless fanboy/fangirl...

\n

So the general form of the bias I'm critiquing is trying to justify a special positive (negative) status by pointing to positive (negative) attributes and saying, \"Therefore I can assign it this very positive status!\", when the same attributes belong to many other works to which you don't grant that special status.

\n

Other places to watch out for this would be if, say, you thought that Morton Smerdley was the greatest genius ever, and someone called on you to justify this, and you replied \"Morton Smerdley became a Math Professor at just the age of 27\" - but there are other people who became math professors at 27, or even 26, and yet you don't feel the special reverence toward them that you attach to Smerdley.

" } }, { "_id": "mXaPPjud9MuuboWgd", "title": "Bead Jar Guesses", "pageUrl": "https://www.lesswrong.com/posts/mXaPPjud9MuuboWgd/bead-jar-guesses", "postedAt": "2009-05-04T18:59:56.768Z", "baseScore": 22, "voteCount": 32, "commentCount": 134, "url": null, "contents": { "documentId": "mXaPPjud9MuuboWgd", "html": "

Let's say Omega turns up and sets you a puzzle, since this seems to be what Omega does in his spare time.  He has with him an opaque jar, which he says contains some solid-colored beads, and he's going to draw one bead out of the jar.  He would like to know what your probability is that the bead will be red.

\n

Well, now there is an interesting question.  We'll bypass the novice mistake of calling it .5, of course; just because the options are binary (red or non-red) doesn't make them equally likely.  It's not like you have any information.  Assuming you don't think Omega is out to deliberately screw with you, you could say that the probability is .083 based on the fact that \"red\" is one of twelve basic color words in English.  (If he had asked for the probability that the bead would be lilac, you'd be in a bit more trouble.)  If you were obliged to make a bet that the bead is red, you would probably take the most conservative bet available (even if you're still assuming Omega isn't deliberately screwing with you), but .083 sounds okay.

\n

But because you start with no information, it's very hard to gather more.  Suppose Omega reaches into the jar and pulls out a red bead.  Does your probability that the second bead will be red go up (obviously the beads come in red)?  Does it go down (that might have been the only one, and however many red beads there were before, there are fewer now)?  Does it stay the same (the beads are all - as far as you know - independent of one another; removing this one bead has an effect on the actual probabilities of what the next one will be, but it can't affect your epistemic probability)?  What if he pulled out a gray bead first, instead of a red one?  How many beads would he have to pull, and in what colors, for you to start making confident predictions?
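One way to make the updating question concrete is a toy model, under the strong and questionable assumption that the beads are drawn exchangeably from an unknown mix of the twelve colour words, with a flat prior over them; it reproduces the 1/12 starting guess and shows how slowly single draws move it.

```python
def prob_next_red(red_draws, total_draws, n_colours=12, concentration=1.0):
    # Dirichlet-multinomial rule for P(next bead is red), given the draws so far.
    # concentration = 1.0 is a flat prior over the twelve colour words, which is
    # where the 1/12 ~ .083 starting guess comes from.
    return (concentration + red_draws) / (n_colours * concentration + total_draws)

print(round(prob_next_red(0, 0), 3))    # 0.083 - the bare bead-jar guess
print(round(prob_next_red(1, 1), 3))    # 0.154 - one red bead nudges it up
print(round(prob_next_red(0, 1), 3))    # 0.077 - one grey bead nudges it down
print(round(prob_next_red(8, 10), 3))   # 0.409 - confidence builds only slowly
```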

\n

So that's one kind of probability: the bead jar guess.  It has a basis, but it's a terribly flimsy one, and guessing right (or wrong) doesn't help much to confirm or disconfirm the guess.  Even if Omega had asked about the bead being lilac, and you'd dutifully given a tiny probability, it would not have surprised you to see a lilac bead emerge from the jar.

\n

A non-bead-jar-guess probability yields surprise when it turns out to be true even if it's just the same size.  Say your probability for lilac was .003.  That's tiny.  If you had a probability of .003 that it would rain on a particular day, you would be right to be astonished if you turned out to need the umbrella you left at home.

\n

Bead jar guesses vacillate more easily.  Although in the case of the bead jar, you are in an extremely disadvantageous position when it comes to getting more information, we can fix that: somebody who says she's peeked into the jar says all the beads in the jar are red.  Just like that, you'll discard the .083 and swap it for a solid .99 (adjusted as you like for the possibility that she is lying or can't see well).  It would take considerable evidence to move a probability that far if it were not a wild guess, not just a single person's say-so, but that's all you've got.  Then Omega pulling out a bead can give you information: the minute he pulls out the gray bead you know you can't rely on your informant, at least not completely.  You can start making decent inferences.

\n

I think more of our beliefs are bead jar guesses than we realize, but because of assorted insidious psychological tendencies, we don't recognize that and we hold onto them tighter than baseless suppositions deserve.

" } }, { "_id": "Ba6buPA3u2btdKS82", "title": "Without models", "pageUrl": "https://www.lesswrong.com/posts/Ba6buPA3u2btdKS82/without-models", "postedAt": "2009-05-04T11:31:38.399Z", "baseScore": 23, "voteCount": 33, "commentCount": 55, "url": null, "contents": { "documentId": "Ba6buPA3u2btdKS82", "html": "

Followup to: What is control theory?

\n

I mentioned in my post testing the water on this subject that control systems are not intuitive until one has learnt to understand them. The point I am going to talk about is one of those non-intuitive features of the subject. It is (a) basic to the very idea of a control system, and (b) something that almost everyone gets wrong when they first encounter control systems.

\n

I'm going to address just this one point, not in order to ignore the rest, but because the discussion arising from my last post has shown that this is presently the most important thing.

\n

There is a great temptation to think that to control a variable -- that is, to keep it at a desired value in spite of disturbing influences -- the controller must contain a model of the process to be controlled and use it to calculate what actions will have the desired effect. In addition, it must measure the disturbances or better still, predict them in advance and what effect they will have, and take those into account in deciding its actions.

\n

In terms more familiar here, the temptation is to think that to bring about desired effects in the world, one must have a model of the relevant parts of the world and predict which actions will produce the desired results.

\n

However, this is absolutely wrong. This is not a minor mistake or a small misunderstanding; it is the pons asinorum of the subject.

\n

Note the word \"must\". It is not disputed that one can use models and predictions, only that one must, that the task inherently requires it.

\n

\n

A control system can work without having any model of what it is controlling.

\n

The designer will have a model. For the room thermostat, he must know that the heating should turn on when the room is too cold and off when it is too hot, rather than the other way around, and he must arrange that the source of heat is powerful enough. The controller he designs does not know that; it merely does that. (Compare the similar relationship between evolution and evolved organisms. How evolution works is not how the evolved organism works, nor is how a designer works how the designed system works.) For a cruise control, he must choose the parameters of the controller, taking into account the engine's response to the accelerator pedal. The resulting control system, however, contains no representation of that. According to the HowStuffWorks article, they typically use nothing more complicated than proportional or PID control. The parameters are chosen by the designer according to his knowledge about the system; the parameters themselves are not something the controller knows about the system.

\n

It is possible to design control systems that do contain models, but it is not inherent to the task of control. This is what model-based controllers look like. (Thanks to Tom Talbot for that reference.) Pick up any book on model-based control to see more examples. There are signals within the control system that are designed to relate to each other in the same way as do corresponding properties of the world outside. That is what a model is. There is nothing even slightly resembling that in a thermostat or a cruise control. Nor is there in the knee-jerk tendon reflex. Whether there are models elsewhere in the human body is an empirical matter, to be decided by investigations such as those in the linked paper. Merely being entangled with the outside world is not what it is to be a model.

\n

Within the Alien Space Bat Prison Cell, the thermostat is flicking a switch one way when the needle is to the left of the mark, and the other when it is to the right. The cruise control is turning a knob by an amount proportional to the distance between the needle and the mark. Neither of them knows why. Neither of them knows what is outside the cell. Neither of them cares whether what they are doing is working. They just do it, and they work.

\n

A control system can work without having any knowledge of the external disturbances.

\n

The thermostat does not know that the sun is shining in through the window. It only knows the current temperature. The cruise control does not sense the gradient of the road, nor the head wind. It senses the speed of the car. It may be tuned for some broad characteristics of the vehicle, but it does not itself know those characteristics, or sense when they change, such as when passengers get in and out.

\n

Again, it is possible to design controllers that do sense at least some of the disturbances, but it is not inherent to the task of control.

\n

A control system can work without making any predictions about anything.

\n

The room thermostat does not know that the sun is shining, nor the cruise control the gradient. A fortiori, they do not predict that the sun will come out in a few minutes, nor that there is a hill in the distance.

\n

It is possible to design controllers that make predictions, but it is not an inherent requirement of the task of control. The fact that a controller works does not constitute a prediction, by the controller, that it will work. I am belabouring this point, because the error has already been belaboured.

\n

But (it was maintained) doesn't the control system have an implicit model, implicit knowledge, and implicitly make predictions?

\n

No. None of these things are true. The very concepts of implicit model, implicit knowledge, and implicit prediction are problematic. The phrases do have sensible meanings in some other contexts, but not here. An implicit model is one in which functional relationships are expressed not as explicit functions y=f(x), but as relations g(x,y)=k. Implicit knowledge is knowledge that one has but cannot express in words. Implicit prediction is an unarticulated belief about the effect of the actions one is taking.

\n

In the present context, \"implicit\" is indistinguishable from \"not\". Just because a system was made a certain way in order to interact with some other system a certain way, it does not make the one a model of the other. As well say that a hammer is a model of a nail. The examples I am using, the thermostat and the cruise control, sense temperature and speed respectively, compare them with their set points, and apply a rule for determining their action. In the rule for a proportional controller:

\n

output = constant × (reference - perception)

\n

there is no model of anything. The gain constant is not a model. The perception, the reference, and the output are not models. The equation relating them is my model of the controller. It is not the controller's model of anything: it is what the controller is.

\n

The only knowledge these systems have is their perceptions and their references, for temperature or speed. They contain no \"implicit knowledge\".

\n

They do not \"implicitly\" make predictions. The designer can predict that they will work. The controllers themselves predict nothing. They do what they do whether it works or not. Sometimes, in fact, these systems do not work. The thermostat will fail to control if the outside temperature is above the set point. The cruise control will fail to control on a sufficiently steep downhill gradient. They will not notice that they are not working. They will not behave any differently as a result. They will just carry on doing o=c×(r-p), or whatever their output rule is.

\n

I don't know if anyone tried my robot simulation applet that I linked to, but I've noticed that people I show it to readily anthropomorphise it. (BTW, if its interface appears scrambled, resize the browser window a little and it should sort itself out.) They see the robot apparently going around the side of a hill to get to a food particle and think it planned that, when in fact it knows absolutely nothing about the shape of the terrain ahead. They see it go to one food particle rather than another and think it made a decision, when in fact it does not know how many food particles there are or where. There is almost nothing inside the robot, compared to what people imagine: no planning, no adaptation, no prediction, no sensing of disturbances, and no model of anything but its own geometry. The 6-legged version contains 44 proportional controllers. The 44 gain constants are not a model, they merely work.

\n

(A tangent: people look at other people and think they can see those other people's purposes, thoughts, and feelings. Are their projections any more accurate than they are when they look at that robot? If you think that they are, how do you know?)

\n

Now, I am not explaining control systems merely to explain control systems. The relevance to rationality is that they funnel reality into a narrow path in configuration space by entirely arational means, and thus constitute a proof by example that this is possible. This must raise the question, how much of the neural functioning of a living organism, human or lesser, operates by similar means? And how much of the functioning of an artificial organism must be designed to use these means? It appears inescapable that all of what a brain does consists of control systems. To what extent these may be model-based is an empirical question, and is not implied merely by the fact of control. Likewise, the extent to which these methods are useful in the design of artificial systems embodying the Ultimate Art.

\n

Evolution operates statistically; I would be entirely unsurprised by Bayesian analyses of evolution. But how evolution works is not how the evolved organism works. That must be studied separately.

\n

\n

I may post something more on the relationship between Bayesian reasoning and control systems that neither are designed by it nor perform it, once I've digested the material that Steve_Rayhawk pointed to. For the moment, though, I'll just remark that \"Bayes!\" is merely a mysterious answer unless backed up by actual mathematical application to the specific case.

\n

Exercises.

\n

1. A room thermostat is set to turn the heating on at 20 degrees and off at 21. The ambient temperature outside is 10 degrees. You place a candle near the thermostat, whose effect is to raise its temperature 5 degrees relative to the body of the room. What will happen to (a) the temperature of the room and (b) the temperature of the thermostat?

\n

2. A cruise control is set to maintain the speed at 50 mph. It is mechanically connected to the accelerator pedal -- it moves it up and down, operating the throttle just as you would be doing if you were controlling the speed yourself. It is designed to disengage the moment you depress the brake. Suppose that that switch fails: the cruise control continues to operate when you apply the brake. As you gently apply the brake, what will happen to (a) the accelerator pedal, and (b) the speed of the car? What will happen if you attempt to keep the speed down to 40 mph?

\n

3. An employee is paid an hourly rate for however many hours he wishes to work. What will happen to the number of hours per week he works if the rate is increased?

\n

4. A target is imposed on a doctor's practice, of never having a waiting list for appointments more than four weeks long. What effect will this have on (a) how long a patient must wait to see the doctor, and (b) the length of the appointments book?

\n

5. What relates questions 3 and 4 to the subject of this article?

\n

6. Controller: o = c×(r-p). Environment: dp/dt = k×o + d. o, r, and p as above; c and k are constants; d is an arbitrary function of time (the disturbance). How fast and how accurately does this controller reject the disturbance and track the reference?

" } }, { "_id": "39bD65my8GvEiXQ9o", "title": "Allais Hack -- Transform Your Decisions!", "pageUrl": "https://www.lesswrong.com/posts/39bD65my8GvEiXQ9o/allais-hack-transform-your-decisions", "postedAt": "2009-05-03T22:37:13.238Z", "baseScore": 22, "voteCount": 19, "commentCount": 19, "url": null, "contents": { "documentId": "39bD65my8GvEiXQ9o", "html": "

The Allais Paradox, though not actually a paradox, was a classic experiment which showed that decisions made by humans do not demonstrate consistent preferences. If you actually want to accomplish something, rather than simply feel good about your decisions, this is rather disturbing.

\n

When something like the Allais Paradox is presented all in one go, it's fairly easy to see that the two cases are equivalent, and ensure that your decisions are consistent. But if I clone you right now, present one of you with gamble 1, and one of you with gamble 2, you might not fare so well. The question is how to consistently advance your own preferences even when you're only looking at one side of the problem.

\n

Obviously, one solution is to actually construct a utility function in money, and apply it rigorously to all decisions. Logarithmic in your total net worth is usually a good place to start. Next you can assign a number of utilons to each year you live, a negative number to each day you are sick, a number for each sunrise you witness...

\n

I would humbly suggest that a less drastic strategy might be to familiarize yourself with the ways in which you can transform a decision which should make no difference unto decision theory, and actually get in the habit of applying these transformations to decisions you make in real life.

\n

So, let us say that I present you with Allais Gamble #2: choose between A: 34% chance of winning $24,000, and 66% chance of winning nothing, and B: 33% chance of winning $27,000, and 67% chance of winning nothing.
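
\n

As an aside, this gamble is also a convenient place to make the \"construct a utility function in money\" route from above concrete. A minimal sketch, assuming (purely for illustration) logarithmic utility over total net worth and an arbitrary starting net worth of $50,000:

```python
import math

# Allais gamble #2 as stated above: A = 34% chance of $24,000, B = 33% chance of $27,000.
GAMBLE_A = [(0.34, 24_000), (0.66, 0)]
GAMBLE_B = [(0.33, 27_000), (0.67, 0)]

NET_WORTH = 50_000   # assumed starting net worth, purely for illustration

def expected_log_utility(gamble, wealth=NET_WORTH):
    # u(outcome) = log(total net worth after the gamble)
    return sum(p * math.log(wealth + prize) for p, prize in gamble)

for name, gamble in [('A', GAMBLE_A), ('B', GAMBLE_B)]:
    ev = sum(p * prize for p, prize in gamble)
    eu = expected_log_utility(gamble)
    print(f'{name}: expected value ${ev:,.0f}, expected log-utility {eu:.5f}')
```

At this assumed wealth the log-utility ranking happens to agree with the plain expected-value ranking; the point is only that a fixed utility function gives the same answer however the gamble is framed.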

\n

Before snapping to a judgment, try some of the following transforms:

\n

Assume your decision matters:

\n

The gamble, as given, contains lots of probability mass in which your decision will not matter one way or the other -- shave it off!

\n

Two possible resulting scenarios:

\n

A: $24,000 with certainty, B: 33/34 chance of $27,000

\n

Or, less obviously: I spin a wheel with 67 notches, 34 marked A and 33 marked B. Choose A and win $24,000 if the wheel comes up A, nothing otherwise. Choose B and win $27,000 if the wheel comes up B, nothing otherwise.
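
\n

(A quick sanity check, nothing deep: multiplying the renormalized probabilities back out recovers the original gamble, which is what licenses shaving off the probability mass where your decision doesn't matter.)

```python
# Check that the re-descriptions above are the same gamble, by multiplying
# the renormalized probabilities back out.

p_matters = 0.67                 # 34 notches for A plus 33 notches for B

print(p_matters * (34 / 67))     # choosing A: 0.34 -> matches '34% chance of $24,000'
print(p_matters * (33 / 67))     # choosing B: 0.33 -> matches '33% chance of $27,000'

# The 'A: $24,000 with certainty, B: 33/34 chance of $27,000' version conditions
# on the 34% branch in which choosing A pays out at all:
print(0.34 * 1.0)                # A: 0.34 overall
print(0.34 * (33 / 34))          # B: 0.33 overall
```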

\n

Assume your decision probably doesn't matter:

\n

Tiny movements away from certainty tend to be more strongly felt -- try shifting all your probabilities down and see how you feel about them.

\n

A: 3.4% chance of winning $24,000, 96.6% chance of nothing. B: 3.3% chance of winning $27,000, 96.7% chance of nothing.

\n

Convert potential wins into potential losses, and vice versa:

\n

Suppose I simply give you the $24,000 today. You spend the rest of the day counting your bills and planning wonderful ways of spending it. Tomorrow, I come to you and offer you an additional $3,000, with the proviso that there is a 1/34 chance that you will lose everything.

\n

(If 1/34 is hard to emotionally weight, also feel free to imagine a fair coin coming up heads five times in a row)

\n

Or, suppose I give you the full $27,000 today, and tomorrow, a mugger comes, grabs $3,000 from your wallet, and then offers it back for a 1/34 shot at the whole thing.

\n

 

\n
\n

I'm not saying that there is one way of transforming a decision such that your inner Bayesian master will suddenly snap to attention and make the decision for you. This method is simply a diagnostic. If you make one of these transforms and find the emotional weight of the decision switching sides, something is going wrong in your reasoning, and you should fight to understand what it is before making a decision either way.

\n

 

" } }, { "_id": "gERucNHtgfaJ6HMhd", "title": "Essay-Question Poll: Dietary Choices", "pageUrl": "https://www.lesswrong.com/posts/gERucNHtgfaJ6HMhd/essay-question-poll-dietary-choices", "postedAt": "2009-05-03T15:27:26.437Z", "baseScore": 17, "voteCount": 22, "commentCount": 244, "url": null, "contents": { "documentId": "gERucNHtgfaJ6HMhd", "html": "

I have noticed that among philosophers, vegetarianism of one form or another is quite common.  In fact, I became a vegetarian (technically a pescetarian) myself partly out of respect for an undergraduate philosophy professor.  I am interested in finding out if there is a similar disproportion in the Less Wrong community.

\n

I didn't request that this go into Yvain's survey because I want more information than just what animal products you do or don't eat; I'd also like to see nuances of the reasons behind your diet.  There are a lot more shades than carnivore/vegetarian/vegan - if you want to be a vegetarian but are allergic to soy and gluten, that's a compelling reason to diversify protein sources, for instance.  I'd also like to hear about if you avoid any plant foods (if you think they're farmed in a way that's environmentally destructive or that hurts people or if you have warm fuzzy feelings for plants, maybe).  Here are some questions that come to mind:

\n
    \n
  1. What foods, if any, do you normally avoid for reasons other than pure culinary taste, cost, individual health concerns (allergies, diabetes, etc.) or ease of preparation?  (Avoiding foods that are considered revolting or just non-food in your culture of origin, like balut or fried locusts, counts as \"culinary taste\".)
  2. What are your reasons for avoiding those foods?
  3. How strictly do you avoid them?  For instance, will you eat them if you are served them while a guest at a meal, or if you are hungry and there is nothing else available?  Do you check to see if they're in potentially questionable dishes at restaurants (and if so, do you trust what the server says?)
  4. If you have children or plan to have children, will you expect or encourage them to avoid the same foods?
  5. Do you try to convince your friends and family members to make dietary choices similar to yours?  If so, have you ever succeeded?
  6. If you avoid a class of foods with valuable nutritive content (as opposed to Twinkies), what do you replace it with to get complete nutrition?
  7. What are your attitudes to people who are more restrictive in their diets than you are?  Less restrictive?
  8. What is the timeline of your dietary restrictions?  (Transitions, lapses, increases or decreases in restrictiveness, etc.)
  9. If you have not avoided these foods for your entire life, how much did you enjoy them when you ate them, and do you still sometimes want to eat them?
  10. Is there anything else about your choice of diet that might be relevant or interesting?
" } }, { "_id": "2AuvBPw6Rb7yxkvKc", "title": "Return of the Survey", "pageUrl": "https://www.lesswrong.com/posts/2AuvBPw6Rb7yxkvKc/return-of-the-survey", "postedAt": "2009-05-03T02:10:43.448Z", "baseScore": 16, "voteCount": 14, "commentCount": 61, "url": null, "contents": { "documentId": "2AuvBPw6Rb7yxkvKc", "html": "

[UPDATE: Survey is now closed. Thanks to everyone who took it. Results soon. Ignore everything below.]

\n

Last week, I asked people for help writing a survey. I've since taken some of your suggestions. Not all, because I wanted to keep the survey short, and because the survey software I'm using made certain types of questions inconvenient, but some. I hope no one's too angry about their contributions being left out.

\n

Please note that, due to what was very possibly a bad decision on my part as to what would be most intuitive, I've requested all probabilities be in percentage format. So if you think something has a 1/2 chance of being true, please list 50 instead of .5.

\n

Please take the survey now; it can be found here and it shouldn't take more than fifteen or twenty minutes. Unless perhaps you need to spend a lot of time determining your opinions on controversial issues, in which case it will be time well spent!

\n

Several people, despite the BOLD ALL CAPS TEXT saying not to take the survey in the last post, went ahead and took the survey. Your results have been deleted. Please take it again. Thank you.

\n

I'll leave this open for about a week, calculate some results, then send out the data. There is an option to make your data private at the bottom of the survey.

\n

Thanks to everyone who takes this. If you want, post a comment saying you took it below, and I'll give you a karma point :)

" } }, { "_id": "B4AyJXYPpGbBmxQzd", "title": "What I Tell You Three Times Is True", "pageUrl": "https://www.lesswrong.com/posts/B4AyJXYPpGbBmxQzd/what-i-tell-you-three-times-is-true", "postedAt": "2009-05-02T23:47:32.013Z", "baseScore": 60, "voteCount": 56, "commentCount": 32, "url": null, "contents": { "documentId": "B4AyJXYPpGbBmxQzd", "html": "

\"The human brain evidently operates on some variation of the famous principle enunciated in 'The Hunting of the Snark': 'What I tell you three times is true.'\"

\n

   -- Norbert Wiener, from Cybernetics

\n

Ask for a high-profile rationalist, and you'll hear about Richard Dawkins or James Randi or maybe Peter Thiel. Not a lot of people would immediately name Scott Adams, creator of Dilbert. But as readers of his blog know, he's got a deep interest in rationality, and sometimes it shows up in his comics: for example, this one from last week. How many people can expose several million people to the phrase \"Boltzmann brain hypothesis\" and have them enjoy it?

So I was very surprised to find Adams was a believer in and evangelist of something that sounded a lot like pseudoscience. \"Affirmations\" are positive statements made with the belief that saying the statement loud enough and long enough will help it come true. For example, you might say \"I will become a syndicated cartoonist\" fifteen times before bed every night, thinking that this will in fact make you a syndicated cartoonist. Adams partially credits his success as a cartoonist to doing exactly this.

He admits \"it sounds as if I believe in some sort of voodoo or magic\", and acknowledges that \"skeptics have suggested, and reasonably so, that this is a classic case of selective memory\" but still swears that it works. He also has \"received thousands of e-mails from people recounting their own experiences with affirmations. Most people seem to be amazed at how well they worked.\"

None of this should be taken too seriously without a controlled scientific study investigating it, of course. But is it worth the effort of a study, or should it be filed under \"so stupid that it's not worth anyone's time to investigate further\"?

I think there's a good case to be made from within a rationalist/scientific worldview that affirmations may in fact be effective for certain goals. Not miraculously effective, but not totally useless either.

\n

\n

To build this case, I want to provide evidence for two propositions. First, that whether we subconsciously believe we can succeed affects whether or not we succeed. Second, that repeating a statement verbally can make the subconscious believe it.

The link between belief in success and success has progressed beyond the motivational speaker stage and into the scientific evidence stage. The best-known of these links is the placebo effect. For certain diseases, believing that you will get better does increase your probability of getting better. This works not only subjectively (ie you feel less pain) but objectively (ie ulcers heal more quickly, inflammation decreases).

The placebo effect applies in some stranger cases outside simple curative drugs. A placebo stop-smoking pill does increase your chance of successfully quitting tobacco. Placebo strength pills enable you to run faster and lift more weight. Placebo alcohol makes you more gregarious and less inhibited1. Placebo therapies for phobia can desensitize you to otherwise terrifying stimuli.

There are some great studies about the effect of belief in school settings. Pick a student at random and tell the teacher that she's especially smart, and by the end of the year she will be doing exceptionally well; tell the teacher that she is exceptionally stupid, and by the end of the year she'll be doing exceptionally poorly. The experimenters theorized that the teacher's belief about the student's intelligence was subconsciously detected by the student, and that the student somehow adjusted her performance to fit that belief. In a similar study, minority students were found to do worse on tests when reminded of stereotypes that minorities are stupid, and better when tested in contexts that downplayed their minority status, suggesting that the students' belief that they would fail was enough to make them fail.

Belief can also translate to success when mediated by signals of dominance and confidence. We've already discussed how hard-to-fake signals of confidence can help someone pick up women2. Although I don't know of any studies proving that confidence/dominance signals help a businessperson get promoted or a politician get elected, common sense suggests they do. For example, height does have a proven effect in this area, suggesting that our ancestral algorithms for assessing dominance play a major role.

MBlume has already discussed how one cannot simply choose to consciously project dominance signals. The expressions and postures involved are too complicated and too far from the normal domain of conscious control. He suggests using imagination and self-deception to trick the subconscious mind into adopting the necessary role.

So I hope it is not too controversial when I say that subconscious beliefs can significantly affect disease, willpower, physical strength, intelligence, and romantic and financial success.

The second part of my case is that repeating something makes the brain believe it on a subconscious level.

Say Anna Salamon and Steve Rayhawk: \"Any random thing you say or do in the absence of obvious outside pressure, can hijack your self-concept for the medium- to long-term future.\" That's from their excellent post Cached Selves, where they explain that once you say something, even if you don't really mean it, it affects all your beliefs and behaviors afterwards. If you haven't read it yet, read it now: it is one of Less Wrong's growing number of classics.

There's also this study which someone linked me to on Overcoming Bias and to which I keep returning. It demonstrates pretty clearly that we don't have a lot of access to our own beliefs, and tend to make them up based on our behavior. So if I am repeating \"I will become a syndicated cartoonist\", and my subconscious is not subtle enough to realize I am doing it as part of a complex plot, it might very well assume I am doing it because, well, I think I will become a syndicated cartoonist. And the subconscious quite likes to keep beliefs consistent, so once it \"discovers\" I have that belief, it may edit whatever it needs to edit to become more consistent with it.

There have been a few studies vaguely related to affirmations. One that came out just a few weeks ago found that minorities who wrote 'value affirmation' essays did significantly better in school (although the same effect did not apply to Caucasians). Another found that some similar sort of 'value affirmation' decreased stress as measured in cortisol and other physiological measures. But AFAIK no one's ever done a study on Adams-variety simple personal affirmations in all of their counter-intuitive weirdness, probably because it sounds so silly, and I think that's a shame. If they work, it's a useful self-help technique and akrasia-buster. If they don't work, that blocks off a few theories about how the mind works and helps us start looking for alternatives.

\n

 

\n

Footnote

\n

1: A story I like: in one of the studies that discovered the placebo effect for alcohol, one of the male participants who falsely believed he was drunk tried to cop a feel of a female researcher's breasts. That must have been the most awkward debriefing ever.

\n

2: Here I'm not just making my usual mistake and being accidentally sexist; I really mean \"pick up women\". There is less research suggesting the same thing works on men. See Chapter 6 of The Adapted Mind, \"The Evolution of Sexual Attraction: Evaluative Mechanisms in Women\".

" } }, { "_id": "yfxp4Y6YETjjtChFh", "title": "The mind-killer", "pageUrl": "https://www.lesswrong.com/posts/yfxp4Y6YETjjtChFh/the-mind-killer", "postedAt": "2009-05-02T16:49:19.539Z", "baseScore": 29, "voteCount": 30, "commentCount": 160, "url": null, "contents": { "documentId": "yfxp4Y6YETjjtChFh", "html": "

Can we talk about changing the world? Or saving the world?

\n

I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it. So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make. Given that existential risk is also one of the major themes of Overcoming Bias and of Eliezer's work, it's striking that we don't talk about it more here.

\n

One reason of course was the bar until yesterday on talking about artificial general intelligence; another factor is the many who state in plain terms that they are not concerned about their contribution to humanity. But I think a third is that many of the things we might do to address existential risk, or other issues of concern to all humanity, get us into politics, and we've all had too much of a certain kind of argument about politics online that gets into a stale rehashing of talking points and point scoring.

\n

If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening. How can we use what we discuss here to be able to talk about politics without spiralling down the plughole?

\n

I think it will help in several ways that we are largely a community of materialists and expected utility consequentialists. For a start, we are freed from the concept of \"deserving\" that dogs political arguments on inequality, on human rights, on criminal sentencing and so many other issues; while I can imagine a consequentialism that valued the \"deserving\" more than the \"undeserving\", I don't get the impression that's a popular position among materialists because of the Phineas Gage problem. We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.

\n

For example, framed this way inequality of wealth is not justice or injustice. The consequentialist defence of the market recognises that because of the diminishing marginal utility of wealth, today's unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost that we could in principle measure given a wealth/utility curve, and goes on to argue that the total extra output resulting from this inequality more than pays for it.

\n

However, I'm more confident of the need to talk about this question than I am of my own answers. There's very little we can do about existential risk that doesn't have to do with changing the decisions made by public servants, businesses, and/or large numbers of people, and all of these activities get us straight into the world of politics, as well as the world of going out and changing minds. There has to be a way for rationalists to talk about it and actually make a difference. Before we start to talk about specific ideas to do with what one does in order to change or save the world, what traps can we defuse in advance?

" } }, { "_id": "8JEuBXi3gCT5ZpDBr", "title": "TED Talks for Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/8JEuBXi3gCT5ZpDBr/ted-talks-for-less-wrong", "postedAt": "2009-05-02T03:32:09.269Z", "baseScore": 15, "voteCount": 14, "commentCount": 24, "url": null, "contents": { "documentId": "8JEuBXi3gCT5ZpDBr", "html": "

Dan Ariely talks about pain and cheating. In a nutshell: people report less pain when (i) they experience the strongest pain first; (ii) they experience less pain for a longer interval rather than more pain for a shorter interval; (iii) they can take breaks. The data falsifies the common intuition that people will prefer short, high intensity pain. In general, people tend to cheat more when (i) they obtain things other than actual cash; (ii) they observe in-group members cheating successfully; they tend to cheat less when (i) they take away cash; (ii) they observe out-group members cheating successfully; (iii) they experience priming with moral concepts such as the Ten Commandments.

\n

Post yours in comments. I've put a couple with the theme \"how brains work\" down there.

" } }, { "_id": "7PAAJofAe5hTMeSFT", "title": "Second London Rationalist Meeting upcoming - Sunday 14:00", "pageUrl": "https://www.lesswrong.com/posts/7PAAJofAe5hTMeSFT/second-london-rationalist-meeting-upcoming-sunday-14-00", "postedAt": "2009-05-02T00:17:55.558Z", "baseScore": 3, "voteCount": 5, "commentCount": 4, "url": null, "contents": { "documentId": "7PAAJofAe5hTMeSFT", "html": "

The second meeting will take place on Sunday (2009-05-03) 14:00, in the cafe on top of the Waterstones bookstore near the Piccadilly Circus Tube station.

\n

If you want to know more, email me (Tomasz.Wegrzanowski@gmail.com) for details. Or just come straight away.

\n

If you're wondering what it will be like - here's a summary of the first meeting.

" } }, { "_id": "n3xFPYWaPfNqMwRwC", "title": "Open Thread: May 2009", "pageUrl": "https://www.lesswrong.com/posts/n3xFPYWaPfNqMwRwC/open-thread-may-2009", "postedAt": "2009-05-01T16:16:35.156Z", "baseScore": 7, "voteCount": 5, "commentCount": 210, "url": null, "contents": { "documentId": "n3xFPYWaPfNqMwRwC", "html": "

Here is our monthly place to discuss Less Wrong topics that have not appeared in recent posts.

" } }, { "_id": "25AYXth87fCN6w8zJ", "title": "Conventions and Confusing Continuity Conundrums", "pageUrl": "https://www.lesswrong.com/posts/25AYXth87fCN6w8zJ/conventions-and-confusing-continuity-conundrums", "postedAt": "2009-05-01T01:41:14.146Z", "baseScore": 5, "voteCount": 4, "commentCount": 9, "url": null, "contents": { "documentId": "25AYXth87fCN6w8zJ", "html": "

The next \"How Not to be Stupid\" may be a bit delayed for a couple of reasons.

\n

First, there appears to be a certain unstated continuity assumption in the material I've been working from that would probably be relevant for the next posting. As I said in the intro post, I'm working from Stephen Omohundro's vulnerability argument, but filling in what I viewed as missing bits, generalizing one or two things, and so on. Anyways, the short of it is that I thought I had worked out how to derive the relevant continuity condition, but it turns out I need to think about that a bit more carefully.

\n

If I remain stumped on that bit, I'll just post and explicitly state the assumption, pointing it out as a potential problem area that needs to be dealt with one way or another. ie, either solved somehow, or demonstrated to actually be invalid (thus causing some issues for decision theory...)

\n

Also, I'm going to be at Penguicon the next few days.

\n

Actually, I think I'll state right now the continuity condition I need, and let others play with it too:

\n

Basically, I need it to be that if there's a preference ranking A > C > B, there must exist some p such that the p*A + (1-p)*B lottery ranks equal to C. (That is, that the mixing lotteries correspond to a smooth spectrum of preferences between, well, the things they are a mixture of, rather than having any discontinuous jumps.)
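
\n

For concreteness, here is a small numerical illustration of the condition rather than anything like a proof: assuming made-up utilities with A > C > B and expected-utility mixing, a simple bisection finds the p at which the lottery ranks even with C.

```python
# Numerical illustration of the continuity condition, assuming made-up utilities
# with A > C > B and expected-utility mixing (these numbers are not from the post).

U_A, U_C, U_B = 10.0, 4.0, 1.0

def lottery_utility(p):
    return p * U_A + (1 - p) * U_B

# lottery_utility is continuous and increasing in p, and straddles U_C at the
# endpoints (U_B at p=0, U_A at p=1), so bisection must find an indifference point.
lo, hi = 0.0, 1.0
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    if lottery_utility(mid) < U_C:
        lo = mid
    else:
        hi = mid

p_star = (lo + hi) / 2
print(p_star)                              # ~0.3333 for these numbers
print(abs(lottery_utility(p_star) - U_C))  # ~0
```

Under straight expected-utility mixing there is of course a closed form, p = (u(C) - u(B)) / (u(A) - u(B)); the content of the condition is that some such p exists at all, with no discontinuous jumps.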

\n

Anyways, I hope we can put this little trouble bit behind us and resume climbing the shining path of awakening to nonstupidity. :)

" } }, { "_id": "jLCtiGwmMJR4ktwbY", "title": "Rationalist Role in the Information Age", "pageUrl": "https://www.lesswrong.com/posts/jLCtiGwmMJR4ktwbY/rationalist-role-in-the-information-age", "postedAt": "2009-04-30T18:24:24.865Z", "baseScore": 7, "voteCount": 18, "commentCount": 19, "url": null, "contents": { "documentId": "jLCtiGwmMJR4ktwbY", "html": "

In response to Marketing rationalism, Bystander Apathy, Step N1 in Extreme Rationality: It Could Be Great, and Rationality: Common Interest of Many Causes.

\n

The problem that motivates this post is:
 “Given a controversial question in which there are good and bad arguments on both sides, as well as unreliable and conflicting information, how do you determine the answer when you’re not yourself an expert in the subject?”

\n

Well into the information age, we are still not pooling our resources in the most efficient way to get to the bottom of things. It would be enormously useful to develop some kind of group strategy to answer questions that do have answers buried somewhere in the available information.

\n

The idea I'm presenting is a way to apply our intellectual (and rational) resources in a niche way, that I will shortly describe, to facilitate public (non-expert) understanding of real world problems.

\n

The Niche and the Need

\n

Science, obviously, does the best job of solving problems. I'm confident that epidemiologists are effectively and efficiently working on the best models for pandemics as I write this post.

\n

And journalists do a pretty good job of what they do: providing information about what the scientists are doing. What's best about how journalists do that is that they always provide the source of the information, so that a rational person can judge the truth-value of that information. A qualified and rational person can then put the information in perspective.

\n

Alas, people are not so good at interpreting the information: they are neither rational in weighting the information nor qualified to put it in perspective. (Presumably, the epidemiologists are too busy to do this.)

\n

Interpreting information correctly is a service that rationalists could provide.

\n

\n

How we could do it:

\n

1. Information culled from various sources would be posted in the first half of a post titled P and updated continuously.

\n

2. As a rational group, within threads, we would discuss interpretations and implications of the available information.

\n

3. Only consensus views would be presented in the second half of the post P, updated continuously to prioritize the most relevant information on top.

\n

Why we would do it

\n

1. It would be objectively and enormously valuable to cull useful and consensus interpretations. It is a time-consuming task even for a qualified person; as a group we would be effective.

\n

2. It would be a good demonstration of the usefulness of rationality.

\n

3. I would strongly recommend against any kind of advertisement, but if people from the general public happened to come there and happened to find the information exceptionally useful, rationality would be considered a useful and practical thing (and they would be inclined to fund the organization that provided this service).

\n

I'm motivated to do this because I feel like this is exactly what is missing in the information age. We have science and we have journalism but we need something more. Blogs are doing it, but not effectively because they are doing it as individuals (with comments, which is helpful), and they're not generally responsive to feedback and new information. I think LW has the intellectual resources and the correct problem solving paradigm to be successful.

\n

Small Scale versus Large Scale

\n

On a small scale, it could be done here, just among ourselves. If done on a large scale, it would eventually be done somewhere else. There, I see Huge Opportunity.

\n

Large Scale Idea:

\n

A library of posts. Each post would address a different problem and would be mediated by a particular individual. If someone in the general public is interested in a particular problem, they will go to that post for information. More important and relevant topics will have more activity.

\n

There would be interaction between the post and the public via threads, and between the post and other blogs on the internet via people navigating back and forth and sharing information.

\n

A post is mediated by a person or group concerned about that topic. There will be limitations associated (the mediator may not be rational enough, the mediator might be biased), so we would allow competition by allowing several posts on one topic. Here, the karma scoring would be enormously useful to help people decide which post is worth going to.

\n

Popular and relevant posts would get traffic and funding.

\n

The reason why this has a chance of working, if it isn't obvious, is because of the karma system. The problem in the information age is TMI (too much information), and the karma system solves that. We would have to instruct people, until it becomes the ethical standard, that noise and errors get down-voted, new information and plausible dissenting views get upvoted.

\n

Small scale idea:

\n

If there is interest in this idea, either small scale or large scale, I would like to suggest beginning with a post on swine flu. Volunteers to mediate the post could submit credentials and we would choose a team. We would open a new LW account to be shared by the team so that they can mediate the post collectively.

\n

 

" } }, { "_id": "XTmZbosXr3hRLT37x", "title": "Rationalistic Losing", "pageUrl": "https://www.lesswrong.com/posts/XTmZbosXr3hRLT37x/rationalistic-losing", "postedAt": "2009-04-30T12:48:52.437Z", "baseScore": 7, "voteCount": 16, "commentCount": 17, "url": null, "contents": { "documentId": "XTmZbosXr3hRLT37x", "html": "

Playing to learn

\n

I like losing. I don't even think that losing is necessarily evil. Personally, I believe this has less to do with a desire to lose and more to do with curiosity about the game-space.

\n

Technically, my goals are probably shifted into some form of meta-winning — I like to understand winning or non-winning moves, strategies, and tactics. Actually winning is icing on the cake. The cake is learning as much as I can about whatever subject in which I am competing. I can do that if I win; I can do that if I lose.

\n

I still prefer winning and I want to win and I play to win, but I also like losing. When I dive into a competition I will like the outcome. No matter what happens I will be happy because I will either (a) win or (b) lose and satiate my curiosity. Of course, learning is also possible while watching someone else lose and this generally makes winning more valuable than losing (I can watch them lose). It also provides a solid reason to watch and study other people play (or play myself and watch me \"lose\").

\n

The catch is that the valuable knowledge contained within winning has diminishing returns. When I fight I either (a) win or (b) lose and, as a completely separate event, (c) may have an interesting match to study. Ideally I get (a) and (c) but the odds of (c) get lower the more I dominate because my opponents could lose in a known fashion (by me winning in an \"old\" method). (c) should always be found next to (b). If there is a reason I lost I should learn the reason. If I knew the reason I should not have lost. Because of this, (c) offsets the negative of (b) and losing is valuable. This makes winning and losing worth the effort. When I lose, I win.

\n

Personally, I find (c) so valuable that I start getting bored when I no longer see anything to learn. If I keep winning over and over and never learn anything from the contest I have to find someone stronger to play or start losing creatively so that I can start learning again. Both of these solutions set up scenarios where I am increasing my chances to lose. Mathematically, this starts to make sense if the value of knowledge gained and the penalty of losing combine into something greater than winning without learning anything (writing a, b, and c for the values of winning, losing, and learning respectively: c - b > a). My hunches tell me that I value winning too little and curiosity is starting to curb my desire to win. I am not playing to win; I am playing to learn.

\n

skip to summary

\n


\n

Losing is good

\n

To be fair, I am specifically talking about winning within organized systems of competition. This generally means something like Magic: The Gathering, Go, or Mafia. Translating this onto Life is harder because I have more emotional investment in the outcome. The penalty of losing is stronger. If I lose a Game I know that the next round is entirely independent of this loss and there are minimal long-term effects to worry about. The consequences of losing will be much more severe if I screw up an investment portfolio or fail at attempting the perfect murder. To draw a finer line between life and gaming: If the win or loss means placing in a tournament with cash prizes, the incentive for winning jumps well beyond the incentive to learn something new and I start playing to win. But is the pain of a loss simply the value of a win with the sign flipped? No, not necessarily.

\n

To map a tournament into payouts, say it costs $10 for entering and the prize for winning is $20. Losing at any point in the tournament has no monetary costs assigned to it. The $10 is a sunk cost and the $20 is only eligible to winners. Losing has some intrinsic emotional penalty but, other than feeling icky, the loss simply gives another opportunity to learn. The value of winning is greater than that of losing, but losing still has value. Because it has value, losing is good.

\n

Life works the same way. If I am applying for a competitive job I should be playing to win because the payouts are enormous. Losing is a bummer, but the value in the loss is learning new information within the game-space of applying for jobs. Namely, this could mean properly building past experience or learning to sell yourself well. Because you learned something, you are better off than when you started even though you lost. Therefore, losing is good. Winning is better, but losing is still valuable. The cost of losing is not the same as flipping the value of winning negative. You did not \"lose\" the job because you never had it. You simply failed to win.

\n


\n

Irrational winning

\n

Winning is great, but if there is no value in winning other than simply being the winner, losing may be worth more. Winning for the sake of winning is noble but useless. In such a scenario, playing to win may not be the most beneficial course. Playing to learn can result in a gain even if it means you \"lose\" the game. Looking at it rationalistically, \"losing\" is winning.

\n

If I play Carcassonne against my opponents and whomp them thirty times in a row by repeating my best known strategies I will have gained nothing. If I decide to use the game as a learning experience to test new strategies I can create an opportunity to learn, but am no longer playing to win. If I play just strongly enough to win I can learn and win, but this is less valuable than simply learning as much as I can, because the win still means nothing. And if it did, then I should have played to win.

\n

A better example would be a movie ticket that you purchased for $10. The game-space revolves around whether or not you get more value out of watching a movie than what you spent on the ticket. If you purchase a non-refundable ticket in advance but, on the day of the movie, you do not feel like going to the movie the \"win\" would be staying home which is actually a \"loss\" in the original game. The losing scenario has changed because \"losing\" now has more value.

\n

Note: This example is directly borrowed from Z_M_Davis's Sunk Cost Fallacy article.

\n

Minor point: In this example, the value of \"losing\" could be learning to not prepay for tickets or checking the weather first or learning from whatever caused you to mispredict your mood.

\n


\n

Rationalistic losing

\n

Rationalistic losing is essentially acknowledging and playing a super-game so that no matter what happens, you win. In this super-game, playing to win does not mean winning the contest. It means getting something valuable from the contest. In the above examples, even though the contests were lost, the rationalist should still win by learning the information available. Losing should be good. If it wasn't, something went wrong before you got to this point. Do not rob yourself of the value of losing by focusing on the lost win.

\n

This principle is hard to see when it applies to something you really, really wanted to win. If you really, really wanted that job, then losing feels like losing the job. If you skip the movie it feels like losing $10. But you never had the job and you already spent the $10. The only thing left to lose is information. You can still win the super-game if you are able to gather this information. This is rationalistic losing.

\n

There are many examples of, and exceptions to, this principle but the whole point can be boiled into this:

\n
    \n
  1. Assume a contest where you can win or lose.
  2. If there is value in winning the contest, play to win, otherwise play to learn.
  3. If you play to win and lose, learn why you lost.
  4. If you already knew why you lost, you were not playing to win.
  5. Learning why you lost is valuable.
  6. Learning after a loss means the loss was valuable.
  7. If the loss was valuable, the loss was \"good\".
" } }, { "_id": "FyGaAy5jxu7LWEZzT", "title": "Test", "pageUrl": "https://www.lesswrong.com/posts/FyGaAy5jxu7LWEZzT/test-56", "postedAt": "2009-04-30T05:12:01.391Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "FyGaAy5jxu7LWEZzT", "html": "

asf bold asf

" } }, { "_id": "afZHsMD26ufh6BuaB", "title": "How Not to be Stupid: Adorable Maybes", "pageUrl": "https://www.lesswrong.com/posts/afZHsMD26ufh6BuaB/how-not-to-be-stupid-adorable-maybes", "postedAt": "2009-04-29T19:15:01.162Z", "baseScore": 1, "voteCount": 14, "commentCount": 55, "url": null, "contents": { "documentId": "afZHsMD26ufh6BuaB", "html": "

Previous: Know What You Want

\n

Ah wahned yah, ah wahned yah about the titles. </some enchanter named Tim>

\n

(Oh, a note: the idea here is to establish general rules for what sorts of decisions one in principle ought to make, and how one in principle ought to know stuff, given that one wants to avoid Being Stupid. (in the sense described in earlier posts) So I'm giving some general and contrived hypothetical situations to throw at the system to try to break it, to see what properties it would have to have to not automatically fail.)

\n

Okay, so assuming you buy the argument in favor of ranked preferences, let's see what else we can learn by considering sources of, ahem, randomness:

\n

Suppose that, either via indexical uncertainty, or because it turns out there really is some nondeterminism in the universe, there's some source of bits such that the only thing you're able to determine about it is that the ratio of 1s it puts out to total bits is p. You're not able to determine anything else about the pattern of bits; they seem unconnected to each other. In other words, you've got some source of uncertainty that leaves you only knowing that some outcomes happen more often than others, and potentially you know something about the precise relative rates of those outcomes.

\n

I'm trying here to avoid actually assuming epistemic probabilities. (If I've inserted an invisible assumption of them that I didn't notice, let me know.) Instead I'm trying to construct a specific situation that can be accepted as at least validly describable by something resembling probabilities (propensities or frequencies. (frequencies? aieeee! Burn the heretic, or at least flame them without mercy! :))) So, for whatever reason, suppose the universe or your opponent or whatever has access to such a source of bits. Let's consider some of the implications of this.

\n

For instance, suppose you prefer A > B.

\n

Now, suppose you are somehow presented with the following choice: choose B, or choose a situation in which, if at a specific instance the source outputs a 1, A will occur, and otherwise B occurs. We'll call this sort of situation a p*A + (1-p)*B lottery, or simply p*A + (1-p)*B.

\n

So, which should you prefer? B or the above lottery? (assume there's no cost other than declaring your choice. Or just wanting the choice. It's not a \"pay for a lottery ticket\" scenario yet. Just an \"assuming you simply choose one or the other... which do you choose?\")

\n

Consider our holy law of \"Don't Be Stupid\", specifically in the manifestation of \"Don't automatically lose when you could potentially do better without risking doing worse.\" It would seem the correct answer would be \"choose the lottery, dangit!\" The only possible outcomes of it are A or B. So it can't possibly be worse than B, since you actually prefer A. Further, choosing B is accepting an automatic loss compared to choosing the above lottery, which at least gives you a chance to do better. (obviously we assume here that p is nonzero. In the degenerate case of p = 0, you'd presumably be indifferent between the lottery and B since, well... choosing that actually is the same thing as choosing B)
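
\n

(A trivially small check of the dominance claim, jumping ahead to numeric utilities that this sequence has not actually constructed yet; the particular numbers below are arbitrary assumptions, chosen only so that A is preferred to B.)

```python
# Encode 'prefer A to B' as made-up numbers u_A > u_B and check that every lottery
# with 0 < p < 1 sits strictly between B and A in expected utility.

u_A, u_B = 3.0, 1.0

for i in range(1, 100):
    p = i / 100
    lottery = p * u_A + (1 - p) * u_B
    assert u_B < lottery < u_A, (p, lottery)

print('for every p strictly between 0 and 1, the lottery beats B and is beaten by A')
```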

\n

By an exactly analogous argument, you should prefer A more than the lottery. Specifically, A is an automatic WIN compared to the lottery, which doesn't give you any hope of doing better than A, but does give you a chance of doing worse.

\n

Example: Imagine you're dying horribly of some really nasty disease that you know isn't going to heal on its own and you're offered a possible medication for it. Assume there's no other medication available, and assume that somehow you know as a fact that none of the ways it could fail could possibly be worse. Further, assume that you know as a fact no one else on the planet has this disease, and the medication is available for free to you and has already been prepared. (These last few assumptions are to remove any possible considerations like altruistically giving up your dose of the med to save another or similar.)

\n

Do you choose to take the medication or no? Well, by assumption, the outcome can't possibly be worse than what the disease will do to you, and there's the possibility that it will cure you. Further, there're no other options available that may potentially be better than taking this med. (oh, and assume for whatever reason that cryo isn't available either, so taking an ambulance ride to the future in hope of a better treatment is also not an option. Basically, assume your choices are \"die really really horribly\" or \"some chance of that, and some chance of making a full recovery\" -- no chance of partially surviving in a state worse than death.)

\n

So the obviously obvious choice is \"choose to take the medication.\"

\n

Next time: We actually do a bit more math based on what we've got so far and begin to actually construct utilities.

" } }, { "_id": "qMYjfAaYQddsqc3gS", "title": "Fiction of interest", "pageUrl": "https://www.lesswrong.com/posts/qMYjfAaYQddsqc3gS/fiction-of-interest", "postedAt": "2009-04-29T18:47:59.414Z", "baseScore": 14, "voteCount": 11, "commentCount": 16, "url": null, "contents": { "documentId": "qMYjfAaYQddsqc3gS", "html": "

The fiction piece in this week's New Yorker deals with some of the same themes as Eliezer's \"Three Worlds Collide\"; viz., the clash of value systems (and the difficulty of seeing those with a different value system as rational), and the idea of humanity developing in ways that seem bizarre/grotesque/evil to us. 

" } }, { "_id": "SnGmTK4bSmtPTuZee", "title": "Fire and Motion", "pageUrl": "https://www.lesswrong.com/posts/SnGmTK4bSmtPTuZee/fire-and-motion", "postedAt": "2009-04-29T16:06:11.967Z", "baseScore": 7, "voteCount": 5, "commentCount": 3, "url": null, "contents": { "documentId": "SnGmTK4bSmtPTuZee", "html": "

Related to: Extreme Rationality: It's Not That Great

\n

On the recent topics of \"rationality is all very well but how do we translate understanding into winning?\" and \"isn't akrasia the most common limiting factor?\", one of the best (non-recent) articles on practical rationality that I've come across is:

\n

http://www.joelonsoftware.com/articles/fog0000000339.html

\n

Interestingly, it uses a different kind of martial art as a metaphor. I conjecture it to be the sort of metaphor that just works well for humans.

\n

(Most of Spolsky's posts are good reading even if you're not a programmer. I'm not in the New York real estate market but I still enjoyed his posts on that topic. He's just that good a writer.)

" } }, { "_id": "KW5m4eREWGitPb8Ev", "title": "Fighting Akrasia: Incentivising Action", "pageUrl": "https://www.lesswrong.com/posts/KW5m4eREWGitPb8Ev/fighting-akrasia-incentivising-action", "postedAt": "2009-04-29T13:48:56.070Z", "baseScore": 12, "voteCount": 15, "commentCount": 58, "url": null, "contents": { "documentId": "KW5m4eREWGitPb8Ev", "html": "

Related To:  Incremental Progress and the Valley, Silver Chairs, Paternalism, and Akrasia, How a pathological procrastinator can lose weight

\n

Akrasia can strike anywhere, but one place it doesn't seem to strike too often or too severely, assuming you are employed, is in the work place.  You may not want to do something, and it might take considerable willpower to perform a task, but unless you want to get fired you can't always play Solitaire.  The reason is clear to most working folks:  you have to do your job to keep it, and not keeping your job is often worse than performing an undesirable task, so you suck it up and find the willpower to make it through the day.  So one question we might ask is, how can we take this motivational method and put it to our own use?

\n

First, let's look at the mechanics of the method.  You have to perform a task and some exterior entity will pay you unless you fail utterly to perform the task.  Notice that this is quite different from working for prizes, where you receive pay in exchange for performing a particular task.  Financially they may appear the same, but from the inside of the human mind they are quite different.  In the former case you are motivated by a potential loss, whereas in the latter you are motivated by a potential gain.  Since losses carry more weight than gains, in general the former model will provide more motivation than the latter, keeping in mind that loss aversion is a statistical property of human thought and there may be exceptions.

\n

\n

This suggests that certain techniques will work better more often than others.  For example, if you run a website about rationality and need programming work done for it, you have a couple of options.  You can wait for someone to volunteer their time, you can offer a prize for implementing certain features for the site, or you can offer to pay someone to do it on the condition that if they don't meet certain deadlines they won't get paid or will be paid a lesser amount.  If you aren't so lucky as to have someone come along who will volunteer their time and do a fantastic job for free, you are faced with accepting mediocre free work, offering prizes, or paying someone.  Since prizes are usually inefficient, it appears that offering to pay someone is the best option, so long as you are able to stipulate that there will be no or reduced pay if the work is not done on time and to specification.

\n

It's also important that the entity with the authority to implement the loss reside outside the self.  This is why, for example, a swear box works best if others are around to keep you honest (or if you're religious, believe that god is watching you):  the temptation to let yourself slide just-this-one-time is too great.  And this really comes back to an issue of akrasia:  you don't have to expend any willpower for someone else to implement a loss on your part, whereas you do to make yourself take a loss.

\n

In what other ways can this method work?  Post in the comments further examples of applying loss aversion to overcome akrasia, with particular attention to details of the methods that can make or break them.

" } }, { "_id": "6Bz4TK37T8t5S3AbM", "title": "How to come up with verbal probabilities", "pageUrl": "https://www.lesswrong.com/posts/6Bz4TK37T8t5S3AbM/how-to-come-up-with-verbal-probabilities", "postedAt": "2009-04-29T08:35:01.709Z", "baseScore": 27, "voteCount": 29, "commentCount": 20, "url": null, "contents": { "documentId": "6Bz4TK37T8t5S3AbM", "html": "

Unfortunately, we are kludged together, and we can't just look up our probability estimates in a register somewhere when someone asks us \"How sure are you?\".

The usual heuristic for putting a number on the strength of beliefs is to ask \"When you're this sure about something, what fraction of the time do you expect to be right in the long run?\".  This is surely better than just \"making up\" numbers with no feel for what they mean, but still has its faults.  The big one is that unless you've done your calibrating, you may not have a good idea of how often you'd expect to be right.

I can think of a few different heuristics to use when coming up with probabilities to assign.

1) Pretend you have to bet on it. Pretend that someone says \"I'll give you ____ odds, which side do you want?\", and figure out what the odds would have to be to make you indifferent to which side you bet on. Consider the question as though you were actually going to put money on it. If this question is covered on a prediction market, your answer is given to you.
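
\n

A small helper makes the translation from indifference odds to a number explicit (the function name is mine, and, as noted at the end of this post, this ignores the fact that utility is not linear in money):

```python
# Converting indifference odds into an implied probability.

def implied_probability(odds_for, odds_against):
    # Indifferent to taking either side of a bet at odds_for : odds_against
    # => your probability for the event is odds_for / (odds_for + odds_against).
    return odds_for / (odds_for + odds_against)

print(implied_probability(1, 3))   # indifferent at 3:1 against -> 0.25
print(implied_probability(7, 3))   # indifferent at 7:3 on      -> 0.7
```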

\n

2) Ask yourself how much evidence someone would have to give you before you're back to 50%. Since we're trying to update according to Bayes' law, knowing how much evidence it takes to bring you to 50% tells you the probability you're implicitly assigning.

\n

For example, pretend someone said something like \"I can guess people's names by their looks\".  If he guesses the first name right, and it's a common name, you'll probably write it off as a fluke.  The second time you'll probably think he knew the people or is somehow fooling you, but conditional on that, you'd probably say he's just lucky.  By Bayes' law, this suggests that you put the prior probability of him pulling this stunt at 0.1%<p<3%, and less than 0.1% prior probability of him having his claimed skill.  If it takes 4 correct calls to bring you to equally unsure either way, then that's about 0.03^4 if they're common names, or one in a million1...

There's a couple neat things about this trick.  One is that it allows you to get an idea of what your subconscious level of certainty is before you ever think of it.  You can imagine your immediate reaction to \"Why yes, my name is Alex, how did you know\" as well as your carefully deliberated response to the same data (if they're much different, be wary of belief in belief).  The other neat thing is that it pulls up alternate hypotheses that you find more likely, and how likely you find those to be (eg. \"you know these people\").

3) Map out the typical shape of your probability distributions (ie through calibration tests) and then go by how many standard deviations off the mean you are. If you're asked to give the probability that x<C, you can find your one sigma confidence intervals and then pull up your curve to see what it predicts based on how far out C is2.
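
A rough sketch of how this heuristic could be mechanized, assuming (only for illustration) that your calibration data is well fit by a normal curve; the Cauchy fit mentioned in footnote 2 is shown alongside, and all the numbers are made up:

```python
import math

# Assume the calibration curve is well fit by a normal distribution; footnote 2
# mentions a Cauchy fit, so that is shown too. The location/spread values are made up.

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def cauchy_cdf(x, x0, gamma):
    return 0.5 + math.atan((x - x0) / gamma) / math.pi

# Best guess 100, spread parameter 15, asked for P(x < 130), i.e. C is two
# spread-units out. (Using the same spread for both fits is a loose comparison.)
print(normal_cdf(130, mu=100, sigma=15))   # ~0.977
print(cauchy_cdf(130, x0=100, gamma=15))   # ~0.852 -- the heavier tails are less sure
```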

4) Draw out your metaprobability distribution, and take the mean.

You may initially get different answers from each of these methods, and in the end you have to decide which to trust when actually placing bets.

I personally tend to lean towards 1 for intermediate probabilities, and 2 then 4 for very unlikely things.  The betting model breaks down as risk gets high (either by high stakes or extreme odds), since we bet to maximize a utility function that is not linear in money.

What other techniques do you use, and to how do you weight them?


 
Footnotes:

1: A common name covers about 3% of the population, so p(b|!a) = 0.03^4 for 4 consecutive correct guesses, and p(b|a) ~=1 for sake of simplicity.  Since p(a) is small, (1-p(a)) is approximated as 1.

\n

p(a|b) = p(b|a)*p(a)/p(b) = p(b|a)*p(a) / (p(b|a)*p(a) + p(b|!a)*(1-p(a))) ~= p(a)/(p(a) + 0.03^4). Setting this equal to 0.5 gives p(a) = 0.03^4 ~= 1/1,000,000
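
\n

Spelled out as a quick check, using the same numbers as above (nothing new is assumed here beyond the footnote's own approximations):

```python
# Footnote 1, spelled out: how small must the prior p(a) be for four correct
# guesses of ~3%-frequency names to leave you only 50/50?

p_b_given_a = 1.0             # if he really has the skill, he gets them right
p_b_given_not_a = 0.03 ** 4   # four lucky guesses of common names

def posterior(prior):
    return (p_b_given_a * prior) / (p_b_given_a * prior + p_b_given_not_a * (1 - prior))

prior = 0.03 ** 4             # the footnote's answer: roughly one in a million
print(prior)                  # ~8.1e-07
print(posterior(prior))       # ~0.5, i.e. 'equally unsure either way'
```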

2: The idea came from paranoid debating where Steve Rayhawk assumed a Cauchy distribution. I tried to fit some data I had taken myself, but had insufficient statistics to figure out what the real shape is (if you guys have a bunch more data I could try again). It's also worth noting that the shape of one's probability distribution can change significantly from question to question, so this would only apply in some cases.

" } }, { "_id": "H9CyQWsEjPvczPvnA", "title": "Wednesday depends on us.", "pageUrl": "https://www.lesswrong.com/posts/H9CyQWsEjPvczPvnA/wednesday-depends-on-us", "postedAt": "2009-04-29T03:47:50.833Z", "baseScore": 2, "voteCount": 18, "commentCount": 43, "url": null, "contents": { "documentId": "H9CyQWsEjPvczPvnA", "html": "

\n

\n

In response to Theism, Wednesday, and Not Being Adopted

\n

It is a theist cliché: you need religion to define morality. The argument doesn't have to be as simplistic as “you need God to impose it”, but at the least it is the belief that your community needs to agree on what is ethical. When a community starts talking about what is ethical, they quickly depart from anything strictly fact-based. As a community, they need to figure out what the morality is (e.g., love your neighbor), construct a narrative using symbols that make sense to everyone (there’s this external entity God, someone like your father, who wants it this way) and enforce it (if you don’t go along, you go to Hell.) This is probably 40% of what religion is.

\n

While the development of a religion is a community effort to some extent (communities choose among competing religions and religions evolve), the main work is done by the priests. The priest is usually an exceptionally good thinker/reasoner/philosopher – maybe 3-5 standard deviations from the mean. [Correction: originally estimated at 4-7.]

\n

There are a few things that are very confusing to Wednesday when you try to convert her. First of all, she understands on at least a subconscious level that religion is her community’s ethical system. When you say you don’t believe in God, she thinks you’re saying, ‘it’s OK to torture babies’. What’s scary is that she’s somewhat justified here: without an externally applied ethical belief system, individual ethics can vary widely from what she accepts as ethical (and what you accept as ethical).

\n

 If morality is something that humanity protects, can we blame Wednesday for that?

\n

\n

Fortunately, Eliezer assures us: Anyone worried that reductionism drains the meaning from existence can stop worrying.

\n

This brings us to the second problem for Wednesday. While I believe Eliezer about rationality not denying morality and meaning, I believe him in the same way Wednesday believes her priest: because he’s been right before and I figure I probably have something to learn.

\n

 Rational arguments sound just as good to Wednesday as her Bishop’s theological arguments. What is she to do? Wednesday’s priest has warned her of this with some well-chosen examples appropriate for her level of sophistication and he explains: when you get confused, just trust your intuition: Is it really OK to torture babies?

\n

I think the average person needs some help to defend from wanton intellectual argument. Here's the handy heuristic: Choose to preserve a meta-truth (i.e., the truth you are committed to protect) over a fact-based truth that has proven, again and again, to not be reliable when you factor in that you're not a great thinker and thus can be easily misled by “facts”.

\n

 On some level Wednesday is aware her religion contradicts facts (God is a mystery, etc) but she is comfortable with the idea that there may be a hierarchy of truths: truths about whether it is OK to torture babies is more important to her than knowing how many years old the Earth is.

\n

Don’t you agree with Wednesday? If Eliezer had not already ascertained that there’s still morality after rationality, would you be willing to go there? I wouldn’t, personally. (If that makes me irrational, that is also what makes me human. Typical sci fi theme – but science fiction, like religion, has many symbols that ring true tones.) 

\n

But among two presented belief systems, an intellectually unsophisticated Wednesday is just choosing the belief system that has falseness AND “meaningful” truth over a belief system that (certainly, historically) has no falseness (in theory) BUT no meaningful truth. So you should accept Wednesday as rational: she values the value of truth, which is better than valuing valueless truth at whatever cost.

\n

 Maybe you’re surprised (or skeptical) that Wednesday values truth. But I’m not. I have evidence that valuing truth is a pretty universal human quality. Alas, often second to valuing security and power … But still: another reason to accept Wednesday. She is typical humanity. Some of you have a lot of anger towards religion, with good reason, but it would be a mistake to define ourselves antagonistically against 99.99% of humanity. Even if we are right and they’re wrong, whose side are we on?

\n

 I think we’re on their side. Religion – defined now as the set of ways the community defines our relationship with each other and with the world – is supposed to evolve with our understanding of those relationships. Wednesday is stuck in a religion and a moral code that hasn’t really changed in 2000 years! (35% true, but point left for dramatic effect) She depends upon us to figure out how to draft a new belief system based on science that is also human: an ethical science. Don’t leave her with the choice of having to choose between either rejecting science or rejecting the meaning and value of being human.

\n

 Instead of complaining about how idiotic humanity is, we have some work to do:

\n

 (1)            decide whether meaning exists, whether it is important, and whether it can be brought into a scientific view of the world without making stuff up

\n

(2)            develop a scientific view of the world that accommodates the meta-truth, if it exists

\n

(3)            explain it to Wednesday in a way that makes sense to her (symbols and analogies are OK, but they must be honest ones)    

\n

I think we should debate about whether meaning exists, whether a scientific view of the world accommodates meaning, and whether it is our responsibility to help Wednesday. But if yes to all three, we should define ourselves in service to her, and bring her along.

\n

 

" } }, { "_id": "baTWMegR42PAsH9qJ", "title": "Generalizing From One Example", "pageUrl": "https://www.lesswrong.com/posts/baTWMegR42PAsH9qJ/generalizing-from-one-example", "postedAt": "2009-04-28T22:00:50.764Z", "baseScore": 440, "voteCount": 386, "commentCount": 423, "url": null, "contents": { "documentId": "baTWMegR42PAsH9qJ", "html": "

Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective

\"Everyone generalizes from one example. At least, I do.\"

\n

   -- Vlad Taltos (Issola, Steven Brust)

My old professor, David Berman, liked to talk about what he called the \"typical mind fallacy\", which he illustrated through the following example:

There was a debate, in the late 1800s, about whether \"imagination\" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say \"I saw it in my mind\" as a metaphor for considering what it looked like?

Upon hearing this, my response was \"How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane.\" Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.

The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the \"wisdom of crowds\", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery1 to three percent of people completely unable to form mental images2.

\n

Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.

\n

\n

He kind of took this idea and ran with it. He interpreted certain passages in George Berkeley's biography to mean that Berkeley was an eidetic imager, and that this was why the idea of the universe as sense-perception held such interest to him. He also suggested that experience of consciousness and qualia were as variable as imaging, and that philosophers who deny their existence (Ryle? Dennett? Behaviorists?) were simply people whose mind lacked the ability to easily experience qualia. In general, he believed philosophy of mind was littered with examples of philosophers taking their own mental experiences and building theories on them, and other philosophers with different mental experiences critiquing them and wondering why they disagreed.

The formal typical mind fallacy is about serious matters of mental structure. But I've also run into something similar with something more like the psyche than the mind: a tendency to generalize from our personalities and behaviors.

For example, I'm about as introverted a person as you're ever likely to meet - anyone more introverted than I am doesn't communicate with anyone. All through elementary and middle school, I suspected that the other children were out to get me. They kept on grabbing me when I was busy with something and trying to drag me off to do some rough activity with them and their friends. When I protested, they counter-protested and told me I really needed to stop whatever I was doing and come join them. I figured they were bullies who were trying to annoy me, and found ways to hide from them and scare them off.

Eventually I realized that it was a double misunderstanding. They figured I must be like them, and the only thing keeping me from playing their fun games was that I was too shy. I figured they must be like me, and that the only reason they would interrupt a person who was obviously busy reading was that they wanted to annoy him.

Likewise: I can't deal with noise. If someone's being loud, I can't sleep, I can't study, I can't concentrate, I can't do anything except bang my head against the wall and hope they stop. I once had a noisy housemate. Whenever I asked her to keep it down, she told me I was being oversensitive and should just mellow out. I can't claim total victory here, because she was very neat and kept yelling at me for leaving things out of place, and I told her she needed to just mellow out and you couldn't even tell that there was dust on that dresser anyway. It didn't occur to me then that neatness to her might be as necessary and uncompromisable as quiet was to me, and that this was an actual feature of how our minds processed information rather than just some weird quirk on her part.

\n

\"Just some weird quirk on her part\" and \"just being oversensitive\" are representative of the problem with the typical psyche fallacy, which is that it's invisible. We tend to neglect the role of differently-built minds in disagreements, and attribute the problems to the other side being deliberately perverse or confused. I happen to know that loud noise seriously pains and debilitates me, but when I say this to other people they think I'm just expressing some weird personal preference for quiet. Think about all those poor non-imagers who thought everyone else was just taking a metaphor about seeing mental images way too far and refusing to give it up.

\n

And the reason I'm posting this here is because it's rationality that helps us deal with these problems.

There's some evidence that the usual method of interacting with people involves something sorta like emulating them within our own brain. We think about how we would react, adjust for the other person's differences, and then assume the other person would react that way. This method of interaction is very tempting, and it always feels like it ought to work.

But when statistics tell you that the method that would work on you doesn't work on anyone else, then continuing to follow that gut feeling is a Typical Psyche Fallacy. You've got to be a good rationalist, reject your gut feeling, and follow the data.

I only really discovered this in my last job as a school teacher. There's a lot of data on teaching methods that students enjoy and learn from. I had some of these methods...inflicted...on me during my school days, and I had no intention of abusing my own students in the same way. And when I tried the sorts of really creative stuff I would have loved as a student...it fell completely flat. What ended up working? Something pretty close to the teaching methods I'd hated as a kid. Oh. Well. Now I know why people use them so much. And here I'd gone through life thinking my teachers were just inexplicably bad at what they did, never figuring out that I was just the odd outlier who couldn't be reached by this sort of stuff.

The other reason I'm posting this here is because I think it relates to some of the discussions of seduction that are going on in MBlume's Bardic thread. There are a lot of not-particularly-complimentary things about women that many men tend to believe. Some guys say that women will never have romantic relationships with their actually-decent-people male friends because they prefer alpha-male jerks who treat them poorly. Other guys say women want to be lied to and tricked. I could go on, but I think most of them are covered in that thread anyway.

The response I hear from most of the women I know is that this is complete balderdash and women aren't like that at all. So what's going on?

Well, I'm afraid I kind of trust the seduction people. They've put a lot of work into their \"art\" and at least according to their self-report are pretty successful. And unhappy romantically frustrated nice guys everywhere can't be completely wrong.

My theory is that the women in this case are committing a Typical Psyche Fallacy. The women I ask about this are not even remotely close to being a representative sample of all women. They're the kind of women whom a shy and somewhat geeky guy knows and talks about psychology with. Likewise, the type of women who publish strong opinions about this on the Internet aren't close to a representative sample. They're well-educated women who have strong opinions about gender issues and post about them on blogs.

And lest I sound chauvinistic, the same is certainly true of men. I hear a lot of bad things said about men (especially with reference to what they want romantically) that I wouldn't dream of applying to myself, my close friends, or to any man I know. But they're so common and so well-supported that I have excellent reason to believe they're true.

This post has gradually been getting less rigorous and less connected to the formal Typical Mind Fallacy. First I changed it to a Typical Psyche Fallacy so I could talk about things that were more psychological and social than mental. And now it's expanding to cover the related fallacy of believing your own social circle is at least a little representative of society at large, which it very rarely is3.

It was originally titled \"The Typical Mind Fallacy\", but I'm taking a hint from the quote and changing it to \"Generalizing From One Example\", because that seems to be the link between all of these errors. We only have direct first-person knowledge of one mind, one psyche, and one social circle, and we find it tempting to treat it as typical even in the face of contrary evidence.

This, I think, is especially important for the sort of people who enjoy Less Wrong, who as far as I can tell are with few exceptions the sort of people who are extreme outliers on every psychometric test ever invented.

\n


Footnotes

1. Eidetic imagery, vaguely related to the idea of a \"photographic memory\", is the ability to visualize something and have it be exactly as clear, vivid and obvious as actually seeing it. My professor's example (which Michael Howard somehow remembers even though I only mentioned it once a few years ago) is that although many people can imagine a picture of a tiger, only an eidetic imager would be able to count the number of stripes.

2. According to Galton, people incapable of forming images were overrepresented in math and science. I've since heard that this idea has been challenged, but I can't access the study.

3. The example that really drove this home to me: what percent of high school students do you think cheat on tests? What percent have shoplifted? Someone did a survey on this recently and found that the answer was nobhg gjb guveqf unir purngrq naq nobhg bar guveq unir fubcyvsgrq (rot13ed so you have to actually take a guess first). This shocked me and everyone I knew, because we didn't cheat or steal during high school and we didn't know anyone who did. I spent an afternoon trying to find some proof that the study was wrong or unrepresentative and coming up with nothing.

" } }, { "_id": "5iK6rsa3MSrMhHQyf", "title": "Re-formalizing PD", "pageUrl": "https://www.lesswrong.com/posts/5iK6rsa3MSrMhHQyf/re-formalizing-pd", "postedAt": "2009-04-28T12:10:30.070Z", "baseScore": 32, "voteCount": 32, "commentCount": 63, "url": null, "contents": { "documentId": "5iK6rsa3MSrMhHQyf", "html": "

The Prisoner's Dilemma has been discussed to death here on OB/LW, right? Well, here's a couple new twists to somewhat... uh... expand the discussion.

\n

Warning: programming and math ahead.

\n

 

\n

Scenario 1

\n

Imagine a PD tournament between programs that can read each other's source code. In every match, player A receives the source code of player B as an argument, and vice versa. Matches are one-shot, not iterated.

\n

In this situation it's possible to write a program that's much better than \"always defect\". Yes, in an ordinary programming language like C or Python, no futuristic superintelligent oracles required. No, Rice's theorem doesn't cause any problems.

\n

Here's an outline of the program:

\n
\n
 // begin PREFIX
Strategy main(SourceCode other)
{
    // get source code of this program from \"begin PREFIX\" to \"end PREFIX\",
    // using ordinary quine (self-printing program) techniques
    String PREFIX = \"...\"

    if (other.beginsWith(PREFIX))
        return Strategy.COOPERATE;
    else
        return anythingElse(other);
}
// end PREFIX

// from this point you can write anything you wish
Strategy anythingElse(SourceCode other)
{
    return Strategy.DEFECT;
}
\n
\n
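For concreteness, here is roughly the same outline as runnable Python (the post names Python as an example of an ordinary language, so I'll use it for sketches). One honest shortcut: the quine step is faked here by having the harness hand each program the agreed PREFIX string instead of the program reconstructing its own prefix from within itself, and the names are mine rather than any real tournament's API.

# Each "player" is a Python source string defining main(other_source) -> 'C' or 'D'.
# PREFIX is the agreed opening block: any source starting with it cooperates
# with any other source starting with it.

PREFIX = (
    "def main(other_source):\n"
    "    if other_source.startswith(PREFIX):\n"
    "        return 'C'\n"
    "    return anything_else(other_source)\n"
)

cooperator_src = PREFIX + (
    "def anything_else(other_source):\n"
    "    return 'D'\n"   # free to do anything against non-prefixed programs
)

defectbot_src = "def main(other_source):\n    return 'D'\n"

def run(player_src, opponent_src):
    # Execute the player's source and ask what it plays against the opponent's source.
    env = {"PREFIX": PREFIX}
    exec(player_src, env)
    return env["main"](opponent_src)

print(run(cooperator_src, cooperator_src))  # C: it recognizes its own prefix
print(run(cooperator_src, defectbot_src))   # D: falls through to anything_else
print(run(defectbot_src, cooperator_src))   # D

A real entry would embed the prefix as a literal inside itself, quine-style, exactly as the outline above describes.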

Some features of this program:

\n\n

So, introducing such a program into the tournament should lead to a chain reaction until everyone cooperates. Unless I've missed something. What say ye?

\n

Edit: the last point and the conclusion were wrong. Thanks to Warrigal for pointing this out.

\n\n

 

\n

Scenario 2

\n

Now imagine another tournament where programs can't read each other's source code, but are instead given access to a perfect simulator. So programs now look like this:

\n
\n
Strategy main(ObjectCode self, ObjectCode other, Simulator simulator) {...}
\n
\n

and can call simulator.simulate(ObjectCode a, ObjectCode b) arbitrarily many times with any arguments. To give players a chance to avoid bottomless recursion, we also make available a random number generator.

\n

Problem: in this setting, is it possible to write a program that's better than \"always defect\"?

\n

The most general form of a reasonable program I can imagine at the moment is a centipede (a rough code sketch follows the list below):

\n
    \n
  1. Programmer invents a number N and a sequence of real numbers 0 < p1 < p2 < ... < pN < 1.
  2. Program generates a random number 0 < r < 1.
  3. If r < p1, cooperate.
  4. Simulate the opponent's reaction to you.
  5. If opponent defects, defect.
  6. Otherwise if r < p2, cooperate.
  7. And so on until N.
\n
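For concreteness, here is one toy rendering of that centipede in Python. The conventions are mine: players are plain functions taking (me, other, sim), the "perfect simulator" just runs one player against the other, and the program defects once it gets past the last threshold (which the list above leaves unspecified). With p1 close to 1 the tree of nested simulations stays shallow enough to run directly; Exercise 1 below asks exactly when it terminates in general.

import random

def simulate(a, b):
    # 'Perfect simulator': just run player a against player b.
    return a(a, b, simulate)

def make_centipede(thresholds):
    # thresholds: increasing probabilities 0 < p1 < p2 < ... < pN < 1
    def player(me, other, sim):
        r = random.random()            # one fresh random draw per (simulated) run
        for p in thresholds:
            if r < p:
                return 'C'
            if sim(other, me) == 'D':  # simulate the opponent's reaction to you
                return 'D'
        return 'D'                     # past the last threshold (the list above leaves this open)
    return player

def random_bot(me, other, sim):
    return 'C' if random.random() < 0.5 else 'D'

centipede = make_centipede([0.9, 0.95, 0.99])
print(centipede(centipede, centipede, simulate))   # self-play
print(centipede(centipede, random_bot, simulate))  # against a simple randomizer

Nothing here resolves the exercises; it only makes the recursion concrete enough to poke at.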

Exercise 1: when (for what N and pi) does this program cooperate against itself? (To cooperate, the recursive tree of simulations must terminate with probability one.)

\n

Exercise 2: when does this program win against a simple randomizing opponent?

\n

Exercise 3: what's the connection between the first two exercises, and does it imply any general theorem?

\n

 

\n

Epilogue

\n

Ordinary humans playing the PD often rely on assumptions about their opponent. They may consider certain invariant properties of their opponent, like altruism, or run mental simulations. Such wetware processes are inherently hard to model, but even a half-hearted attempt brings out startling and rigorous formalizations instead of our usual vague intuitions about game theory.

\n

Is this direction of inquiry fruitful?

\n

What do you think?

" } }, { "_id": "fJKbCXrCPwAR5wjL8", "title": "What is control theory, and why do you need to know about it?", "pageUrl": "https://www.lesswrong.com/posts/fJKbCXrCPwAR5wjL8/what-is-control-theory-and-why-do-you-need-to-know-about-it", "postedAt": "2009-04-28T09:25:48.139Z", "baseScore": 60, "voteCount": 57, "commentCount": 48, "url": null, "contents": { "documentId": "fJKbCXrCPwAR5wjL8", "html": "

This is long, but it's the shortest length I could cut from the material and have a complete thought.

\n

1. Alien Space Bats have abducted you.

\n

In the spirit of this posting, I shall describe a magical power that some devices have. They have an intention, and certain means available to achieve that intention. They succeed in doing so, despite knowing almost nothing about the world outside. If you push on them, they push back. Their magic is not invincible: if you push hard enough, you may overwhelm them. But within their limits, they will push back against anything that would deflect them from their goal. And yet, they are not even aware that anything is opposing them. Nor do they act passively, like a nail holding something down, but instead they draw upon energy sources to actively apply whatever force is required. They do not know you are there, but they will struggle against you with all of their strength, precisely countering whatever you do. It seems that they have a sliver of that Ultimate Power of shaping reality, despite their almost complete ignorance of that reality. Just a sliver, not a whole beam, for their goals are generally simple and limited ones. But they pursue them relentlessly, and they absolutely will not stop until they are dead.

\n

You look inside one of these devices to see how it works, and imagine yourself doing the same task...

\n
\n

Alien Space Bats have abducted you. You find yourself in a sealed cell, featureless but for two devices on the wall. One seems to be some sort of meter with an unbreakable cover, the needle of which wanders over a scale marked off in units, but without any indication of what, if anything, it is measuring. There is a red blob at one point on the scale. The other device is a knob next to the meter, that you can turn. If you twiddle the knob at random, it seems to have some effect on the needle, but there is no fixed relationship. As you play with it, you realise that you very much want the needle to point to the red dot. Nothing else matters to you. Probably the ASBs' doing. But you do not know what moves the needle, and you do not know what turning the knob actually does. You know nothing of what lies outside the cell. There is only the needle, the red dot, and the knob. To make matters worse, the red dot also jumps along the scale from time to time, in no particular pattern, and nothing you do seems to have any effect on it. You don't know why, only that wherever it moves, you must keep the needle aligned with it.

\n

Solve this problem.

\n
\n

That is what it is like, to be one of these magical devices. They are actually commonplace: you can find them everywhere.

\n

They are the thermostat that keeps your home at a constant temperature, the cruise control that keeps your car at a constant speed, the power supply that provides a constant voltage to your computer's circuit boards. The magical thing is how little they need to know to perform their tasks. They have just the needle, the mark on the scale, the knob, and hardwired into them, a rule for how to turn the knob based only on what they see the needle and the red dot do. They do not need to sense the disturbing forces, or predict the effects of their actions, or learn. The thermostat does not know when the sun comes out. The cruise control does not know the gradient of the road. The power supply does not know why or when the mains voltage or the current demand will change. They model nothing, they predict nothing, they learn nothing. They do not know what they are doing. But they work.

\n

These things are called control systems. A control system is a device for keeping a variable at a specified value, regardless of disturbing forces in its environment that would otherwise change it. It has two inputs, called the perception and the reference, and one output, called the output or the action. The output depends only on the perception and the reference (and possibly their past histories, integrals, or derivatives) and is such as to always tend to bring the perception closer to the reference.
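As a concrete toy (all numbers invented for illustration, nothing here comes from the engineering literature): a few lines of Python implementing a thermostat-like controller that computes its action from nothing but the reference and the perceived temperature, acting on a simulated room whose heat leak and outside weather it never sees.

def run(steps=400, reference=21.0, kp=5.0, ki=0.2):
    temp = 10.0        # the perception: current room temperature
    outside = -5.0     # the disturbance: the controller never sees this
    integral = 0.0
    for t in range(steps):
        error = reference - temp                        # all the controller ever computes
        integral += error
        heater = max(0.0, kp * error + ki * integral)   # the output, or action
        temp += 0.1 * heater - 0.05 * (temp - outside)  # the environment, unknown to the controller
        if t == steps // 2:
            outside = -20.0   # the weather worsens; the controller is not informed
    return temp

print(run())   # ends up close to the 21.0 reference despite the unseen disturbance

The controller models nothing about the room or the weather; it just keeps turning its one knob in whichever direction shrinks the gap between perception and reference.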

\n

Why is this important for LW readers?

\n

2. Two descriptions of the same thing that both make sense but don't fit together.

\n

I shall come to that via an autobiographical detour. In the mid-90's, I came across William Powers' book, Behavior: the Control of Perception, in which he set out an analysis of human behaviour in terms of control theory. (Powers' profession was -- he is retired now -- control engineering.) It made sense to me, and it made nonsense of every other approach to psychology. He gave it the name of Perceptual Control Theory, or PCT, and the title of his book expresses the fundamental viewpoint: all of the behaviour of an organism is the output of control systems, and is performed with the purpose of controlling perceptions at desired reference values. Behaviour is the control of perception.

\n

This is 180 degrees around from the behavioural stimulus-response view, in which you apply a stimulus (a perception) to the organism, and that causes it to emit a response (a behaviour). I shall come back to why this is wrong below. But there is no doubt that it is wrong. Completely, totally wrong. To this audience I can say, as wrong as theism. That wrong. Cognitive psychology just adds layers of processing between stimulus and response, and fares little better.

\n

I made a simulation of a walking robot whose control systems were designed according to the principles of PCT, and it works. It stands up, walks over uneven terrain, and navigates to food particles. (My earliest simulation is still on the web in the form of this Java applet.) It resists a simulated wind, despite having no way to perceive it. It cannot see, sensing the direction of food only by the differential scent signals from its antennae. It walks on uneven terrain, despite having no perception of the ground other than the positions of its feet relative to its body.

\n

And then, a year or two ago, I came upon Overcoming Bias, and before that, Eliezer's article on Bayes' theorem. (Anyone who has not read that article should do so: besides being essential background to OB and LW, it's a good read, and when you have studied it, you will intuitively know why a positive result on a screening test for a rare condition may not be telling you very much.) Bayes' theorem itself is a perfectly sound piece of mathematics, and has practical applications in those cases where you actually have the necessary numbers, such as in that example of screening tests.
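The screening-test point is worth making concrete. Here is a quick version with illustrative numbers of my own (a 1-in-1000 condition, a test with 99% sensitivity and a 5% false-positive rate):

prior = 0.001
p_pos_given_sick = 0.99
p_pos_given_healthy = 0.05

p_pos = p_pos_given_sick * prior + p_pos_given_healthy * (1 - prior)
posterior = p_pos_given_sick * prior / p_pos   # Bayes' theorem

print(round(posterior, 3))   # about 0.019: a positive result still leaves the odds low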

\n

But it was being put forward as something more than that, as a fundamental principle of reasoning, even when you don't have the numbers. Bayes' Theorem as the foundation of rationality, entangling one's brain with the real world, allowing the probability mass of one's beliefs to be pushed by the evidence, acting to funnel the world through a desired tunnel in configuration space. And it was presented as even more than a technique to be learned and applied well or badly, but as the essence of all successful action. Rationality not only wins, it wins by Bayescraft. Bayescraft is the single essence of any method of pushing probability mass into sharp peaks. This all made sense too.

\n

But the two world-views did not seem to fit together. Consider the humble room thermostat, which keeps the temperature within a narrow range by turning the heating on and off (or in warmer climes, the air conditioning), and consider everything that it does not do while doing the single thing that it does:

\n\n

And yet despite that, it has a sliver of the Ultimate Power, the ability to funnel the world through its desired tunnel in configuration space. In short, control systems win while being entirely arational. How is this possible?

\n

If you look up subjects such as \"optimal control\", \"adaptive control\", or \"modern control theory\", you will certainly find a lot of work on using Bayesian methods to design control systems. However, the fact remains that the majority of all installed control systems are nothing but manually tuned PID controllers. And I have never seen, although I have looked for it, any analysis of general control systems in Bayesian terms. (Except for one author, but despite having a mathematical background, I couldn't make head nor tail of what he was saying. I don't think it's me, because despite his being an eminent person in the field of \"intelligent control\", almost no-one cites his work.) So much for modern control theory. You can design things that way, but you usually don't have to, and it takes a lot more mathematics and computing power. I only mention it because anyone googling \"Bayes\" and \"control theory\" will find all that and may mistake it for the whole subject.

\n

3. Why it matters.

\n

If this was only about cruise controls and room thermostats, it would just be a minor conundrum. But it is also about people, and all living organisms. The Alien Space Bat Prison Cell describes us just as much as it describes a thermostat. We have a large array of meter needles, red dots, and knobs on the walls of our cell, but it remains the case that we are held inside an unbreakable prison exactly the same shape as ourselves. We are brains in vats, the vat of our own body. No matter how we imagine we are reaching out into the world to perceive it directly, our perceptions are all just neural signals. We have reasons to think there is a world out there that causes these perceptions (and I am not seeking to cast doubt on that), but there is no direct access. All our perceptions enter us as neural signals. Our actions, too, are more neural signals, directed outwards -- we think -- to move our muscles. We can never dig our way out of the cell. All that does is make a bigger cell, perhaps with more meters and knobs.

\n

We do pretty well at controlling some of those needles, without having received the grace of Bayes. When you steer your car, how do you keep it directed along the intended path? By seeing through the windscreen how it is positioned, and doing whatever is necessary with the steering wheel in order to see what you want to see. You cannot do it if the windows are blacked out (no perception), if the steering linkage is broken (no action), or if you do not care where the car goes (no reference). But you can do it even if you do not know about the cross-wind, or the misadjusted brake dragging on one of the wheels, or the changing balance of the car according to where passengers are sitting. It would not help if you did. All you need is to see the actual state of affairs, and know what you want to see, and know how to use the steering wheel to get the view closer to the view you want. You don't need to know much about that last. Most people pick it up at once in their first driving lesson, and practice merely refines their control.

\n

Consider stimulus/response again. You can't sense the crosswind from inside a car, yet the angle of the steering wheel will always be just enough to counteract the cross-wind. The correlation between the two will be very high. A simple, measurable analogue of the task is easily carried out on a computer. There is a mark on the screen that moves left and right, which the subject must keep close to a static mark. The position of the moving mark is simply the sum of the mouse position and a randomly drifting disturbance calculated by the program. So long as the disturbance is not too large and does not vary too rapidly, it is easy to keep the two marks fairly well aligned. The correlation between the mouse position (the subject's action) and the disturbance (which the subject cannot see) is typically around -0.99. (I just tried it and scored -0.987.) On the other hand, the correlation between mouse position and mark position (the subject's perception) will be close to zero.
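Those numbers are easy to reproduce. Here is a toy version of the tracking task (my own simulation, with a crude integrating controller standing in for the human subject and the target mark at zero; the gain and drift size are invented):

import random
import numpy as np

random.seed(0)
mouse, disturbance = 0.0, 0.0
mice, disturbances, marks = [], [], []

for t in range(20000):
    disturbance += random.gauss(0, 0.01)   # slow random drift; the 'subject' cannot see it
    mark = mouse + disturbance             # what the subject sees on the screen
    mouse += -0.1 * mark                   # nudge the mouse to bring the mark back to 0
    mice.append(mouse)
    disturbances.append(disturbance)
    marks.append(mark)

print(np.corrcoef(mice, disturbances)[0, 1])  # close to -1: the action mirrors the unseen disturbance
print(np.corrcoef(mice, marks)[0, 1])         # small in magnitude: the action barely correlates with the perception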

\n

So in a control task, the \"stimulus\" -- the perception -- is uncorrelated with the \"response\" -- the behaviour. To put that in different terminology, the mutual information between them is close to zero. But the behaviour is highly correlated with something that the subject cannot perceive.

\n

When driving a car, suppose you decide to change lanes? (Or in the tracking task, suppose you decide to keep the moving mark one inch to the left of the static mark?) Suddenly you do something different with the steering wheel. Nothing about your perception changed, yet your actions changed, because a reference signal inside your head changed.

\n

If you do not know that you are dealing with a control system, it will seem mysterious. You will apply stimuli and measure responses, and end up with statistical mush. Since everyone else does the same, you can excuse the situation by saying that people are terribly complicated and you can't expect more. 0.6 is considered a high correlation in a psychology experiment, and 0.2 is considered publishable (link). Real answers go ping!! when you hit them, instead of slopping around like lumpy porridge. What is needed is to discover that a control system is present, what it is controlling, and how.

\n

There are ways of doing that, but this is enough for one posting.

\n

4. Conclusion.

\n

Conclusion of this posting, not my entire thoughts on the subject, not by a long way.

\n

My questions to you are these.

\n

Control systems win while being arational. Either explain this in terms of Bayescraft, or explain why there is no such explanation.

\n

If, as is speculated, a living organism's brain is a collection of control systems, is Bayescraft no more related to its physical working than arithmetic is? Our brains can learn to do arithmetic, but arithmetic is not how our brains work. Likewise, we can learn Bayescraft, or some practical approximation to it, but do Bayesian processes have anything to do with the mechanism of brains?

\n

Does Bayescraft necessarily have anything to do with the task of building a machine that ... can do something not to be discussed here yet?

\n

5. Things I have not yet spoken of.

\n

The control system's designer, who put in the rule that tells it what output to emit given the perception and the reference: did he supply the rationality that is the real source of its miraculous power?

\n

How to discover the presence of a control system and discern its reference, even if its physical embodiment remains obscure.

\n

How to control a perception even when you don't know how.

\n

Hierarchical arrangements of control systems as a method of building more complex control systems.

\n

Simple control systems win at their limited tasks while being arational. How much more is possible for arational systems built of control systems?

\n

6. WARNING: Autonomous device

\n

After those few thousand words of seriousness, a small dessert.

\n

Exhibit A: A supposedly futuristic warning sign.

\n

Exhibit B: A contemporary warning sign in an undergraduate control engineering lab: \"WARNING: These devices may start moving without warning, even if they appear powered off, and can exert sudden and considerable forces. Exercise caution in their vicinity.\"

\n

They say the same thing.

" } }, { "_id": "dMzALgLJk4JiPjSBg", "title": "Epistemic vs. Instrumental Rationality: Approximations", "pageUrl": "https://www.lesswrong.com/posts/dMzALgLJk4JiPjSBg/epistemic-vs-instrumental-rationality-approximations", "postedAt": "2009-04-28T03:12:55.675Z", "baseScore": 30, "voteCount": 35, "commentCount": 29, "url": null, "contents": { "documentId": "dMzALgLJk4JiPjSBg", "html": "

What is the probability that my apartment will be struck by a meteorite tomorrow? Based on the information I have, I might say something like 10^-18. Now suppose I wanted to approximate that probability with a different number. Which is a better approximation: 0 or 1/2?

\n

The answer depends on what we mean by \"better,\" and this is a situation where epistemic (truthseeking) and instrumental (useful) rationality will disagree.

\n

As an epistemic rationalist, I would say that 1/2 is a better approximation than 0, because the Kullback-Leibler Divergence is (about) 1 bit for the former, and infinity for the latter. This means that my expected Bayes Score drops by one bit if I use 1/2 instead of 10^-18, but it drops to minus infinity if I use 0, and any probability conditional on a meteorite striking my apartment would be undefined; if a meteorite did indeed strike, I would instantly fall to the lowest layer of Bayesian hell. This is too horrible a fate to imagine, so I would have to go with a probability of 1/2.
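A quick check of those numbers (log base 2, so the answer comes out in bits; the function is just the two-outcome KL divergence written out):

from math import log2

p = 1e-18   # the 'true' probability of the meteorite strike

def kl_bits(p, q):
    # Kullback-Leibler divergence D(p || q) for a single yes/no event, in bits.
    def term(a, b):
        return 0.0 if a == 0.0 else a * log2(a / b)
    return term(p, q) + term(1 - p, 1 - q)

print(kl_bits(p, 0.5))   # about 1 bit: the expected Bayes-score drop from answering 1/2
# kl_bits(p, 0.0) would divide by zero: the expected drop from answering 0 is unbounded.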

\n

As an instrumental rationalist, I would say that 0 is a better approximation than 1/2. Even if a meteorite does strike my apartment, I will suffer only a finite amount of harm. If I'm still alive, I won't lose all of my powers as a predictor, even if I assigned a probability of 0; I will simply rationalize some other explanation for the destruction of my apartment. Assigning a probability of 1/2 would force me to actually plan for the meteorite strike, perhaps by moving all of my stuff out of the apartment. This is a totally unreasonable price to pay, so I would have to go with a probability of 0.

\n

I hope this can be a simple and uncontroversial example of the difference between epistemic and instrumental rationality. While the normative theory of probabilities is the same for any rationalist, the sorts of approximations a bounded rationalist would prefer can differ very much.

" } }, { "_id": "H6d6Be4MzRxvz9eZT", "title": "How Not to be Stupid: Know What You Want, What You Really Really Want", "pageUrl": "https://www.lesswrong.com/posts/H6d6Be4MzRxvz9eZT/how-not-to-be-stupid-know-what-you-want-what-you-really", "postedAt": "2009-04-28T01:11:46.326Z", "baseScore": 2, "voteCount": 15, "commentCount": 39, "url": null, "contents": { "documentId": "H6d6Be4MzRxvz9eZT", "html": "

Previously: Starting Up

\n

So, you want to be rational, huh? You want to be Less Wrong than you were before, hrmmm? First you must pass through the posting titles of a thousand groans. Muhahahahaha!

\n

Let's start with the idea of preference rankings.  If you prefer A to B, well, given the choice between A and B, you'd choose A.

\n

For example, if you face a choice between a random child being tortured to death vs them leading a happy and healthy life, all else being equal and the choice costing you nothing, which do you choose?

\n

This isn't a trick question. If you're a perfectly ordinary human, you presumably prefer the latter to the former.

\n

Therefore you choose it. That's what it means to prefer something. That if you prefer A over B, you'd give up situation B to gain situation A. You want situation A more than you want situation B.

\n

Now, if there're many possibilities, you may ask... \"But, what if I prefer B to A, C to B, and A to C?\"

\n

The answer, of course, is that you're a bit confused about what you actually prefer. I mean, all that ranking would do is just keep you switching between those, looping around.

\n

And if we think in terms of resources, the universe or an opponent or whatever could, for a small price, sell each of those to you in sequence, draining you of the resource (time, money, whatever) as you go around the vortex of confused desires.

\n

This, of course, translates more precisely into a sequence of states, Ai, Bi, Ci, and preferences of the form A0 < B1 < C2 < A3 < B4 ... where each one of those is the same as the original state except that you also have a drop less of the relevant resource than you did before, i.e., indicating a willingness to pay the price. If the sequence keeps going all the way, then you'll be drained, and that's a rather inefficient way of going about it if you just want to give the relevant resource up, no? ;)
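A toy version of that vortex, with names and numbers of my own invention: an agent with the cyclic preference A < B < C < A will happily pay one cent for each 'upgrade' around the loop until its dollar is gone.

UPGRADE = {'A': 'B', 'B': 'C', 'C': 'A'}   # each value is strictly preferred to its key

def money_pump(start='A', cents=100, price=1):
    holding, trades = start, 0
    while cents >= price:
        holding = UPGRADE[holding]   # accept the trade it prefers...
        cents -= price               # ...and pay the small price for it
        trades += 1
    return trades, holding

print(money_pump())   # (100, 'B'): the whole dollar is gone, and the agent is no better off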

\n

Still, a strict loop, A > B, B > C, C > A really is an indication that you just don't know what you want. I'll just dismiss that at this point as \"not really what I'd call preferences\" as such.

\n

Note, however, that it's perfectly okay to have some states of reality, histories of the entire universe, whatever, such that A, B, and C are all ranked equally in your preferences.

\n

If you, however, say something like \"I don't prefer A less than B, nor more than B, nor equally to B\", I'm just going to give you a very stern look until you realize you're rather confused. (Note: ranking two things equally doesn't mean you are incapable of distinguishing them. Also, what you want may be a function of multiple variables that may end up translating to something like \"in this instance I want X, though in that other instance I would have wanted Y.\" This is perfectly acceptable as long as the overall ranking properties (and other rules) are being followed. That is, as long as you're Not Being Stupid.)

\n

Let's suppose there're two states A and B that for you fall under this relative preference nonpreference zone. Let's further suppose that somehow the universe ends up presenting you with a situation in which you have to choose between them.

\n

What do you do? When it actually comes down to it, so that your options are \"choose A, choose B, or something else does the deciding.\" (either coin flip, or someone else who's willing to choose between them, or basically some something other than you.)

\n

If you can say \"if pressed, I'd have to choose... A\", then in the end, you have ranked one above the other. If you choose option 3, then basically you're saying \"I know it's going to be one or the other, but I don't want to be the one making that choice.\" Which could be interpreted as either indifferent or at least _sufficiently_ indifferent that the (emotional or whatever) cost to you of you yourself directly making that choice is much greater.

\n

At that point, if you say to me \"nope, I still neither prefer A to B, prefer B to A, nor am indifferent to the choice. It's simply not meaningful for my preferences to state any relative ranking, even equal\", well, I would be at that point rather confused as to what it is that you even meant by that statement. If in the above situation you would actually choose one of A or B, then clearly you have a relative ranking for them. If you went by the third option, and state that you're not indifferent to them, but prefer neither to the other, well, I honestly don't know what you would mean then. It at least seems to me at this point that such thought would be more a confusion than anything else. Or, at least, that at that point it isn't even what I think I or most other people mean by \"preferences.\" So I'm just going to declare this as the \"Hey everyone, look, here's the weakest point I think I can find so far, even though it doesn't seem like all that weak a weak point to me.\"

\n

So, for now, I'm going to move on and assume that preferences will be of the form A < B < C < D, E, F, G < H, I < J < K (assuming all states are comparable, \"Don't Be Stupid\" does actually seem to imply rejection of cycles.)

\n

For convenience, let's introduce a notion of numerically representing these rankings. The rule simply is this: If you rank two things the same, assign them the same real number. If you rank something B higher than A, then assign B a higher number than A. (Why real numbers? Well, we've got an ordering here. Complex numbers aren't going to be helping at all, so real numbers are perhaps the most general useful way of doing this.)

\n

For any particular preference ranking, there are obviously many valid ways of numerically representing it given the above rules. Further, one can always use a strictly increasing function to translate between any of those. And there will be an inverse, so you can translate back to your preferred encoding.

\n

(A strictly increasing function is, well, exactly what it sounds like. If x > y, f(x) > f(y). Try to visualize this. It never changes direction, never doubles back on itself. So there's always an inverse: for every output, there's always a unique input. So later on, when I start focusing on indexings of the preferences that have specific mathematical properties, no generality is lost. One can always translate into another numerical coding for the preferences, and then back again.)
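A small illustration of that point (the particular numbers and the choice of exp() are mine): two numerical encodings of the same ranking, related by a strictly increasing function, agree on every comparison, ties included.

import math

ranking = {'A': 1.0, 'B': 2.0, 'C': 2.0, 'D': 5.0}       # one valid encoding (B and C tied)
recoded = {k: math.exp(v) for k, v in ranking.items()}   # exp is strictly increasing

for x in ranking:
    for y in ranking:
        assert (ranking[x] > ranking[y]) == (recoded[x] > recoded[y])
        assert (ranking[x] == ranking[y]) == (recoded[x] == recoded[y])

print('both encodings express the same preference ranking')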

\n

A few words of warning though: While this preference ranking thing is the ideal, any simple rule for generating the ranking is not going to reproduce your preferences, your morality, your desires. Your preferences are complex. Best to instead figure out what you want in specific cases. In conflicting decisions, query yourself, see which deeper principles \"seem right\", and extrapolate from there. But any simple rule for generating your own One True Preference Ranking is simply going to be wrong. (Don't worry about what a \"utility function\" is exactly yet. I'll get to that later. For now, all you need to know is that it's one of those numerical encodings of preferences that has certain useful mathematical properties.)

\n

 

\n

(EDIT: added in the example of how lack of having a single ranking for all preferences can lead to Being Stupid)

\n

(EDIT2: (4/29/2009) okay, so I was wrong in thinking that I'd shown \"don't be stupid\" (in the sense used in this sequence) prohibits incomparable states. (That is, preference functions that can, when input two states, output \"invalid pair\" rather than \">\", \"<\", or \"=\". I've removed that argument and replaced it with a discussion that I think gets more to the heart of the matter.))

" } }, { "_id": "yn2mF9Y7fDywvR8tz", "title": "How Not to be Stupid: Starting Up", "pageUrl": "https://www.lesswrong.com/posts/yn2mF9Y7fDywvR8tz/how-not-to-be-stupid-starting-up", "postedAt": "2009-04-28T00:23:24.171Z", "baseScore": 13, "voteCount": 10, "commentCount": 5, "url": null, "contents": { "documentId": "yn2mF9Y7fDywvR8tz", "html": "

First, don't stand up. ;)

\n

Okay. So what I'm hoping to do in this mini sequence is to introduce a basic argument for Bayesian Decision Theory and epistemic probabilities. I'm going to be basing it on Dutch book arguments and Dr. Omohundro's vulnerability-based argument, with various details filled in because, well... I myself had to sit and think about those things, so maybe it would be useful to others too. For that matter, actually writing this up will hopefully sort out my thoughts on this.

\n

Also, I want to try to generalize it a bit to remove the explicit dependency of the arguments on resources. (Though I may include arguments from that to illustrate some of the ideas.)

\n

Anyways, the spirit of the idea is \"don't be stupid.\" \"Don't AUTOMATICALLY lose when there's a better alternative that doesn't risk you losing even worse.\"

\n

More to the point, repeated application of that idea is going to let us build up the mathematics of decision theory. My plan right now is for each of the posts in this sequence to be relatively short, discussing and deriving one principle (or a couple of related principles) of decision theory and Bayesian probability at a time from the above. The math should be pretty simple, with the very worst being potentially a tiny bit of linear algebra. I expect the nastiest bit of math will be one instance of matrix reduction down the line. Everything else ought to be rather straightforward, showing the mathematics of decision theory to be a matter of, as Mr. Smith would say, \"inevitability.\"

\n

Consider this whole sequence a work in progress. If anyone thinks any particular bits of it could be rewritten more clearly, please speak up! Or at least type up. (But of course, don't stand up. ;))

" } }, { "_id": "sfyCj4fSWzNvYmdTR", "title": "Verbal Overshadowing and The Art of Rationality", "pageUrl": "https://www.lesswrong.com/posts/sfyCj4fSWzNvYmdTR/verbal-overshadowing-and-the-art-of-rationality", "postedAt": "2009-04-27T23:39:34.940Z", "baseScore": 68, "voteCount": 71, "commentCount": 24, "url": null, "contents": { "documentId": "sfyCj4fSWzNvYmdTR", "html": "

To begin, here are some Fun Psychology Facts:  

\n

People who were asked to describe a face after seeing it are worse at recognizing the same face later.

\n

People who are asked to describe a wine after drinking it are worse at recognizing the same wine later.

\n

People who are asked to give reasons for their preferences among a collection of jellies are worse at identifying their own preferences among those jellies.

\n

 

\n

This effect, known as Verbal Overshadowing, occurs primarily when a principally non-verbal process is disrupted by a task which involves verbalization.  The above generalizations (and Verbal Overshadowing effects more generally), do not occur among what we can term \"Verbal Experts\": individuals who are as good at verbalizing the relevant process as they are at doing it implicitly or automatically.  This seems like it will be very important to keep in mind when cultivating our own Rationality.

\n

\n

Here's an oversimplified picture of what this means:  We've got an implicit facial recognition process, IFRP, which is pretty good.  We've also got a generalized explicit verbal thinking process, GEVTP, which is good for lots of things, but isn't especially good at recognizing faces.  Normally, IFRP is in charge of facial recognition, but there are some things we can do, like, trying to put a face into words, that wakes up GEVTP, which then muscles IFRP out of the way, and all of a sudden, we are a lot worse at recognizing faces.

\n

The good news is that GEVTP can be trained.  To take the wine case, people who put in the time and effort can become verbal experts about wine.  This isn't to say they automatically have better judgments about wine.  Rather, it means that their GEVTP is on par with their implicit wine recognition, because it has been trained to do the same quality job as the implicit process.

\n

As a crude metaphor, imagine the difference between the natural process by which you go about walking, versus having to keep track of each and every instruction that needs to be sent to different joints and muscles if you had to consciously issue each one.

\n

Now, obviously the specific studies mentioned are important for wine tasting, eye-witness identification, or determining one's own jelly preferences, but the phenomenon of Verbal Overshadowing has a much larger, more systematic importance for the Art of Rationality.

\n

Let's bridge to the broader point with a quote from David Hume, a man whose insights were often far ahead of their time: \"I shall add [...] that, as this operation of the mind, by which we infer like effects from like causes, and vice versa, is so essential to the subsistence of all human creatures, it is not probable, that it could be trusted to the fallacious deductions of our reason, which is slow in its operations; appears not, in any degree, during the first years of infancy; and at best is, in every age and period of human life, extremely liable to error and mistake. It is more conformable to the ordinary wisdom of nature to secure so necessary an act of the mind, by some instinct or mechanical tendency, which may be infallible in its operations, may discover itself at the first appearance of life and thought, and may be independent of all the laboured deductions of the understanding. As nature has taught us the use of our limbs, without giving us the knowledge of the muscles and nerves, by which they are actuated; so has she implanted in us an instinct, which carries forward the thought in a correspondent course to that which she has established among external objects; though we are ignorant of those powers and forces, on which this regular course and succession of objects totally depends.\"

\n

In short, Hume is saying, in the field of inference and reasoning, our Implicit Reasoning Process often outpaces our GEVTP.  I'm not suggesting that our implicit reasoning is perfect (it is, after all, fraught with its own biases), but, supposing that Verbal Overshadowing is a general phenomenon, it would appear that, with respect to our reasoning and inferences more generally, our situation is one in which trying to talk about what we are doing is liable to mess us up.

\n

The obvious suggestion, then, is that we become verbal experts on the subject, so that our thinking about rationality doesn't mess up our thinking rationally.

\n

\"Aha,\" I hear you all say, \"then your advice is unnecessary, for what is it that we Rationalists are already doing, if not training ourselves to think explicitly about rationality?\"  And that would be a good reply, but for one crucial fact: we are not training ourselves correctly to become verbal experts.

\n

One does not become a verbal expert about wine by tasting only strange vintages or the wine of abnormal grapes.  One does not become a verbal expert about facial recognition by practicing only on the stunningly gorgeous or the hideously deformed.  And likewise, one does not become a verbal expert on Rational thinking by focusing on the edge cases (i.e. The Epistemic Prisoner's dilemmas, The Gettier Cases, The High Stakes scenarios, etc.).  Verbal Experts get trained, primarily, on the paradigms.

\n

In fact, the studies on Insight Puzzles in particular (i.e. verbal overshadowing with respect to explaining the actual process by which one achieved the solution to a problem), suggest that those of us who engage in verbalization tasks relating to our reasoning and inferences (say, those of us dedicating a lot of time and energy to writing posts or comments about it), had better figure out how to train our Generalized Explicit Verbal Thinking Process not to drop the ball when it comes to thinking about reasoning.

\n

I am not a psychologist, but I do know that our current plan (of, for example, thinking about the brainteaser cases), is definitely not the way to develop actual expertise.

" } }, { "_id": "yPQGYn9rSme9RRpiQ", "title": "Bayesian Cabaret", "pageUrl": "https://www.lesswrong.com/posts/yPQGYn9rSme9RRpiQ/bayesian-cabaret", "postedAt": "2009-04-27T22:29:36.931Z", "baseScore": 26, "voteCount": 25, "commentCount": 12, "url": null, "contents": { "documentId": "yPQGYn9rSme9RRpiQ", "html": "

I'd heard rumors that some leading Bayesians had achieved rank in the Bardic Conspiracy. But I wasn't aware that every two years, some of the world's top statisticians hold a Bayesian Cabaret, full of songs, dances, and skits about Bayesian probability theory.

...no, really. Really. I think my favorite has got to be this one.

YouTube seems to be full of this stuff, including What A Bayesian World and We Didn't Start The Prior. Be sure to also check out some of the recordings and the Bayesian Songbook.

Eliezer's finished his sequences, and it's a new era for Less Wrong. Let's celebrate...Bayesian style!

" } }, { "_id": "yffPyiu7hRLyc7r23", "title": "Final Words", "pageUrl": "https://www.lesswrong.com/posts/yffPyiu7hRLyc7r23/final-words", "postedAt": "2009-04-27T21:12:02.966Z", "baseScore": 182, "voteCount": 137, "commentCount": 64, "url": null, "contents": { "documentId": "yffPyiu7hRLyc7r23", "html": "

Sunlight enriched air already alive with curiosity, as dawn rose on Brennan and his fellow students in the place to which Jeffreyssai had summoned them.

\n

They sat there and waited, the five, at the top of the great glassy crag that was sometimes called Mount Mirror, and more often simply left unnamed.  The high top and peak of the mountain, from which you could see all the lands below and seas beyond.

\n

(Well, not all the lands below, nor seas beyond.  So far as anyone knew, there was no place in the world from which all the world was visible; nor, equivalently, any kind of vision that would see through all obstacle-horizons.  In the end it was the top only of one particular mountain: there were other peaks, and from their tops you would see other lands below; even though, in the end, it was all a single world.)

\n

\"What do you think comes next?\" said Hiriwa.  Her eyes were bright, and she gazed to the far horizons like a lord.

\n

Taji shrugged, though his own eyes were alive with anticipation.  \"Jeffreyssai's last lesson doesn't have any obvious sequel that I can think of.  In fact, I think we've learned just about everything that I knew the beisutsukai masters know.  What's left, then -\"

\n

\"Are the real secrets,\" Yin completed the thought.

\n

Hiriwa and Taji and Yin shared a grin, among themselves.

\n

Styrlyn wasn't smiling.  Brennan suspected rather strongly that Styrlyn was older than he had admitted.

\n

Brennan wasn't smiling either.  He might be young, but he kept high company, and had witnessed some of what went on behind the curtains of the world.  Secrets had their price, always, that was the barrier that made them secrets; and Brennan thought he had a good idea of what this price might be.

\n

\n

There was a cough from behind them, at a moment when they had all happened to be looking in any other direction but that one.

\n

As one, their heads turned.

\n

Jeffreyssai stood there, in a casual robe that looked more like glass than any proper sort of mirrorweave.

\n

Jeffreyssai stood there and looked at them, a strange abiding sorrow in those inscrutable eyes.

\n

\"Sen...sei,\" Taji started, faltering as that bright anticipation stumbled over Jeffreyssai's return look.  \"What's next?\"

\n

\"Nothing,\" Jeffreyssai said abruptly.  \"You're finished.  It's done.\"

\n

Hiriwa, Taji, and Yin all blinked, a perfect synchronized gesture of shock.  Then, before their expressions could turn to outrage and objections -

\n

\"Don't,\" Jeffreyssai said.  There was real pain in it.  \"Believe me, it hurts me more than it hurts you.\"  He might have been looking at them; or at something far away, or long ago.  \"I don't know exactly what roads may lie before you - but yes, I know that you're not ready.  That I'm sending you out unprepared.  That everything I taught you is incomplete.  I know that what I said is not what you heard.  That I left out the one most important thing.  That the rhythm at the center of everything is missing and astray.  I know that you will harm yourself in the course of trying to use what I taught.  So that I, personally, will have shaped, in some fashion unknown to me, the very knife that will cut you...\"

\n

\"...that's the hell of being a teacher, you see,\" Jeffreyssai said.  Something grim flickered in his expression.  \"Nonetheless, you're done.  Finished, for now.  What lies between you and mastery is not another classroom.  We are fortunate, or perhaps not fortunate, that the road to power does not wend only through lecture halls.  Else the quest would be boring to the bitter end.  Still, I cannot teach you; and so it is a moot point whether I would if I could.  There is no master here whose art is entirely inherited.  Even the beisutsukai have never discovered how to teach certain things; it is possible that such an event has been prohibited.  And so you can only arrive at mastery by using to the fullest the techniques you have already learned, facing challenges and apprehending them, mastering the tools you have been taught until they shatter in your hands -\"

\n

Jeffreyssai's eyes were hard, as though steeled in acceptance of unwelcome news.

\n

\"- and you are left in the midst of wreckage absolute.  That is where I, your teacher, am sending you.  You are not beisutsukai masters.  I cannot create masters.  I have never known how to create masters.  Go forth, then, and fail.\"

\n

\"But -\" said Yin, and stopped herself.

\n

\"Speak,\" said Jeffreyssai.

\n

\"But then why,\" she said, \"why teach us anything in the first place?\"

\n

Brennan's eyelids flickered some tiny amount.

\n

It was enough for Jeffreyssai.  \"Answer her, Brennan, if you think you know.\"

\n

\"Because,\" Brennan said, \"if we were not taught, there would be no chance at all of our becoming masters.\"

\n

\"Even so,\" said Jeffreyssai.  \"If you were not taught - then when you failed, you might simply think you had reached the limits of Reason itself.  You would be discouraged and bitter within your disaster.  You might not even realize when you had failed.  No; you have been shaped into something that may emerge from the wreckage, determined to remake your Art.  And then you may remember much that will help you.  I cannot create masters, but if you had not been taught, your chances would be - less.\"  His gaze passed over the group.  \"It should be obvious, but understand that you cannot provoke the moment of your crisis artificially.  To teach you something, the catastrophe must come to you as a surprise. You must go as far as you can, as best you can, and fail honestly.  The higher road begins after the Art seems to fail you; though the reality will be that it was you who failed your Art.\"

\n

Brennan made the gesture with his hand that indicated a question; and Jeffreyssai nodded in reply.

\n

\"Is this the only way in which Bayesian masters come to be, sensei?\"

\n

\"I do not know,\" said Jeffreyssai, from which the overall state of the evidence was obvious enough.  \"But I doubt there would ever be a road to mastery that goes only through the monastery.  We are the heirs in this world of mystics as well as scientists, just as the Competitive Conspiracy inherits from chessplayers alongside cagefighters.  We have turned our impulses to more constructive uses - but we must still stay on our guard against old failure modes.\"

\n

Jeffreyssai took a breath.  \"Three flaws above all are common among the beisutsukai.  The first flaw is to look just the slightest bit harder for flaws in arguments whose conclusions you would rather not accept.  If you cannot contain this aspect of yourself then every flaw you know how to detect will make you that much stupider.  This is the challenge which determines whether you possess the art or its opposite:  Intelligence, to be useful, must be used for something other than defeating itself.\"

\n

\"The second flaw is cleverness.  To invent great complicated plans and great complicated theories and great complicated arguments - or even, perhaps, plans and theories and arguments which are commended too much by their elegance and too little by their realism.  There is a widespread saying which runs:  'The vulnerability of the beisutsukai is well-known; they are prone to be too clever.'  Your enemies will know this saying, if they know you for a beisutsukai, so you had best remember it also.  And you may think to yourself:  'But if I could never try anything clever or elegant, would my life even be worth living?'  This is why cleverness is still our chief vulnerability even after its being well-known, like offering a Competitor a challenge that seems fair, or tempting a Bard with drama.\"

\n

\"The third flaw is underconfidence, though it will seem to you like modesty or humility.  You have learned so many flaws in your own nature, some of them impossible to fix, that you may think that the rule of wisdom is to confess your own inability.  You may question yourself, without resolution or testing to determine the self-answers.  You may refuse to decide, pending further evidence, when a quick decision is necessary.  You may take advice you should not take.  Jaded cynicism and sage despair are less fashionable than once they were, but you may still be tempted by them.  Or you may simply - lose momentum.\"

\n

Jeffreyssai fell silent then.

\n

He looked from each of them, one to the other, with quiet intensity.

\n

And said at last, \"Those are my final words to you.  If and when we meet next, you and I - if and when you return to this place, Brennan, or Hiriwa, or Taji, or Yin, or Styrlyn - I will no longer be your teacher.\"

\n

And Jeffreyssai turned and walked swiftly away, heading back toward the glassy tunnel that had emitted him.

\n

Even Brennan was shocked.  For a moment they were all speechless.

\n

Then -

\n

\"Wait!\" cried Hiriwa.  \"What about our final words to you?  I never said -\"

\n

\"I will tell you what my sensei told me,\" Jeffreyssai's voice came back as he disappeared.  \"You can thank me after you return, if you return.  One of you at least seems likely to come back.\"

\n

\"No, wait, I -\"  Hiriwa fell silent.  In the mirrored tunnel, the fractured reflections of Jeffreyssai were already fading.  She shook her head.  \"Never... mind, then.\"

\n

There was a brief, uncomfortable silence, as the five of them looked at each other.

\n

\"Good heavens,\" Taji said finally.  \"Even the Bardic Conspiracy wouldn't try for that much drama.\"

\n

Yin suddenly laughed.  \"Oh, this was nothing.  You should have seen my send-off when I left Diamond Sea University.\"  She smiled.  \"I'll tell you about it sometime - if you're interested.\"

\n

Taji coughed.  \"I suppose I should go back and... pack my things...\"

\n

\"I'm already packed,\" Brennan said.  He smiled, ever so slightly, when the other three turned to look at him.

\n

\"Really?\" Taji asked.  \"What was the clue?\"

\n

Brennan shrugged with artful carelessness.  \"Beyond a certain point, it is futile to inquire how a beisutsukai master knows a thing -\"

\n

\"Come off it!\" Yin said.  \"You're not a beisutsukai master yet.\"

\n

\"Neither is Styrlyn,\" Brennan said.  \"But he has already packed as well.\"  He made it a statement rather than a question, betting double or nothing on his image of inscrutable foreknowledge.

\n

Styrlyn cleared his throat.  \"As you say.  Other commitments call me, and I have already tarried longer than I planned.  Though, Brennan, I do feel that you and I have certain mutual interests, which I would be happy to discuss with you -\"

\n

\"Styrlyn, my most excellent friend, I shall be happy to speak with you on any topic you desire,\" Brennan said politely and noncommitally, \"if we should meet again.\"  As in, not now.  He certainly wasn't selling out his Mistress this early in their relationship.

\n

There was an exchange of goodbyes, and of hints and offers.

\n

And then Brennan was walking down the road that led toward or away from Mount Mirror (for every road is a two-edged sword), the glassy pebbles clicking under his feet.

\n

He strode out along the path with purpose, vigor, and determination, just in case someone was watching.

\n

Some time later he stopped, stepped off the path, and moved just far enough away to prevent anyone from finding him unless they were deliberately following.

\n

Then Brennan sagged back against a tree-trunk.  It was a sparse clearing, with only a few trees poking out of the ground; not much present in the way of distracting scenery, unless you counted the red-tinted stream flowing out of a dark cave-mouth.  And Brennan deliberately faced away from that, leaving only the far grey of the horizons, and the blue sky and bright sun.

\n

Now what?

\n

He had thought that the Bayesian Conspiracy, of all the possible trainings that existed in this world, would have cleared up his uncertainty about what to do with the rest of his life.

\n

Power, he'd sought at first.  Strength to prevent a repetition of the past.  \"If you don't know what you need, take power\" - so went the proverb.  He had gone first to the Competitive Conspiracy, then to the beisutsukai.

\n

And now...

\n

Now he felt more lost than ever.

\n

He could think of things that made him happy, but nothing that he really wanted.

\n

The passionate intensity that he'd come to associate with his Mistress, or with Jeffreyssai, or the other figures of power that he'd met... a life of pursuing small pleasures seemed to pale in comparison, next to that.

\n

In a city not far from the center of the world, his Mistress waited for him (in all probability, assuming she hadn't gotten bored with her life and run away).  But to merely return, and then drift aimlessly, waiting to fall into someone else's web of intrigue... no.  That didn't seem like enough.

\n

Brennan plucked a blade of grass from the ground and stared at it, half-unconsciously looking for anything interesting about it; an old, old game that his very first teacher had taught him, what now seemed like ages ago.

\n

Why did I believe that going to Mount Mirror would tell me what I wanted?

\n

Well, decision theory did require that your utility function be consistent, but...

\n

If the beisutsukai knew what I wanted, would they even tell me?

\n

At Mount Mirror they taught doubt.  So now he was falling prey to the third besetting sin of which Jeffreyssai had spoken, lost momentum, for he had learned to question the image that he held of himself in his mind.

\n

Are you seeking power because that is your true desire, Brennan?

\n

Or because you have a picture in your mind, of the role that you play as an ambitious young man, and you think it is what someone playing your role would do?

\n

Almost everything he'd done up until now, even going to Mount Mirror, had probably been the latter.

\n

And when he blanked out the old thoughts and tried to see the problem as though for the first time...

\n

...nothing much came to mind.

\n

What do I want?

\n

Maybe it wasn't reasonable to expect the beisutsukai to tell him outright.  But was there anything they had taught him by which he might answer?

\n

Brennan closed his eyes and thought.

\n

First, suppose there is something I would passionately desire.  Why would I not know what it is?

\n

Because I have not yet encountered it, or ever imagined it?

\n

Or because there is some reason I would not admit it to myself?

\n

Brennan laughed out loud, then, and opened his eyes.

\n

So simple, once you thought of it that way.  So obvious in retrospect.  That was what they called a silver-shoes moment, and yet, if he hadn't gone to Mount Mirror, it wouldn't ever have occurred to him.

\n

Of course there was something he wanted.  He knew exactly what he wanted.  Wanted so desperately he could taste it like a sharp tinge on his tongue.

\n

It just hadn't come to mind earlier, because... if he acknowledged his desire explicitly... then he also had to see that it was difficult.  High, high, above him.  Far out of his reach.  \"Impossible\" was the word that came to mind, though it was not, of course, physically impossible.

\n

But once he asked himself if he preferred to wander aimlessly through his life - once it was put that way, the answer became obvious.  Pursuing the unattainable would make for a hard life, but not a sad one.  He could think of things that made him happy, either way.  And in the end - it was what he wanted.

\n

Brennan stood up, and took his first steps, in the exact direction of Shir L'or, the city that lies in the center of the world.  He had a plot to hatch, and he did not know who would be part of it.

\n

And then Brennan almost stumbled, when he realized that Jeffreyssai had already known.

\n

One of you at least seems likely to come back...

\n

Brennan had thought he was talking about Taji.  Taji had probably thought he was talking about Taji.  It was what Taji said he wanted.  But how reliable of an indicator was that, really?

\n

There was a proverb about that very road he had just left:  Whoever sets out from Mount Mirror seeking the impossible, will surely return.

\n

When you considered Jeffreyssai's last warning - and that the proverb said nothing of succeeding at the impossible task itself - it was a less optimistic saying than it sounded.

\n

Brennan shook his head wonderingly.  How could Jeffreyssai possibly have known before Brennan knew himself?

\n

Well, beyond a certain point, it is futile to inquire how a beisutsukai master knows a thing -

\n

Brennan halted in mid-thought.

\n

No.

\n

No, if he was going to become a beisutsukai master himself someday, then he ought to figure it out.

\n

It was, Brennan realized, a stupid proverb.

\n

So he walked, and this time, he thought about it carefully.

\n

As the sun was setting, red-golden, shading his footsteps in light.

" } }, { "_id": "9jF4zbZqz6DydJ5En", "title": "The End (of Sequences)", "pageUrl": "https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences", "postedAt": "2009-04-27T21:07:52.368Z", "baseScore": 46, "voteCount": 39, "commentCount": 36, "url": null, "contents": { "documentId": "9jF4zbZqz6DydJ5En", "html": "

This concludes the final sequence on Overcoming Bias / Less Wrong.  I have not said everything I wanted to say, but I hope I have said (almost) everything I needed to say.  (Such that I actually could say it in these twenty-one months of daily posting, August 2007 through April 2009.)

\n

The project to which Less Wrong is devoted - the art and science and craft of human rationality - is, indeed, important.  But the calculus of choosing among altruistic efforts is, in some ways, a calculus of who can take your place.  I am more easily replaced here, than elsewhere.  And so it has come time for me to begin pulling my focus away from Less Wrong, and turning toward other matters, where I am less easily replaced.

\n

But I do need replacing - or rather, the work that I was doing needs replacing, whether by one person or by many people or by e.g. a karma system.

\n

And so my final sequence was my letter that describes the work that I can already see remaining to be done, gives some advice on how to configure the effort, and warns direly against standard failure modes.

\n

Any idea that can produce great enthusiasm is a dangerous idea.  It may be a necessary idea, but that does not make it any less dangerous.  I do fear, to a certain extent, that I will turn my focus away, and then find out that someone has picked up the ideas and run with them and gotten it all wrong...

\n

But you can only devote your whole life to one thing at a time.  In those ways I have thought to anticipate, at least, I have placed a blocking Go stone or two, and you have been warned.

\n

I am not going to turn my attention away entirely and all at once.  My initial plan is to cut back my posting to no more than one post per week.

\n

At some future point, though, there must come a time when I turn my attention entirely away from building rationalism, and focus only on that other task.

\n

So, yes, just to belabor the point - if there's going to be a lasting community, and not just a body of online writing that people occasionally stumble across, it needs to set itself up to run without me.

\n

The last explicit dependency left on me is promoting posts, and I've been mostly doing that based on user voting (though not entirely; my activation threshold is lower for posts I perceive as higher-quality).  I plan to start trying to delegate that power to co-editors shortly.

\n

For myself...  I've been feeling rather burned out on writing, so I'm thinking of taking a short vacation and then studying math and working over the summer, before I go back to producing a book.

\n

I'm no longer certain about the time-investment wisdom of trying to convert the OB/LW sequences into minibooks.  The Wiki might accomplish a lot of the same purpose of making the info more accessible and organizing it.  We'll see how the Wiki develops (especially once my old posts are imported which should happen Any Time Now).

\n

Now, while I'm on semi-vacation, is a good time to have me speak at your hedge fund.  If you wait until after I write a book and it comes out, then, if things go remotely well, it will cost you a lot more money to have me speak (because the marginal utility to me of additional money will have gone way down, and demand gone up).  Right now, though, additional money does have substantial marginal utility to me.  So those of you who work at the hedge funds who were rational enough to survive: please request me for your speaker programs.  I'm told that I give a wonderful, amazing, etcetera talk for audiences interested in rationality, considerably more than twice as useful as talks from big-name speakers who cost considerably more than twice as much.

\n

If the rationality book is written, sold, and takes off, I may well vanish entirely off the face of the Earth, all purposes of publicity having already been served.  That is the optimal and desirable outcome - it means I am allowed to specialize narrowly.

\n

I would like to take this moment to confess, once again, that I have had ulterior motives in all this - as is right and proper to a rationalist.  It is not the pure service of rationality that diverted my focus here... though I have served that overt cause as a thing in its own right, with its own integrity.

\n

And I will not insist on recruiting all the rationalists that I create...

\n

But still...

\n

In the end, these 21 months were wagered upon your active response, not your passive possession of truth.  Even those of you who are not moved to care about the thing that I protect may still be moved to align your activities with Seasteading, or the Methuselah Foundation, or Givewell, or cryonics.  And, perhaps, tell them that we sent you, so that they know that this \"rationality\" business is working in their favor; so that maybe they will say a thing or two about rationality.  And someone who encounters the craft there will read up on it further, and eventually donate professionally specialized hours (in one form or another) to the Singularity Institute.  What goes around comes around, I hope.

\n

To the extent that you don't feel moved to do anything in particular - even include a link in your signature, or link to apposite posts now and then in your blog - then I have lost my gamble of these 21 months.  Or I have lost that part of the stakes which was about you and decision processes similar to you.  (No, that doesn't mean you should seize this opportunity to post about how I lost my gamble with you.  You should know better by now, if you want any rationalist cause to get anything done ever, whether or not you are a part of it.)

\n

And this advice:  If there is some rationalist cause you have decided to help eventually, I advise you very strongly to help that cause now - even if it's just a tiny amount.  One of the regularities I have discovered, working in the nonprofit industry, is that people who donated last year donate the next year, and people who are planning to donate next year will, next year, still be planning to donate \"next year\".  The gap between little helpers and big helpers is a lot more permeable than the membrane that separates helpers and procrastinators.  This holds whether you would help my own cause, or any of the other causes that have rationality as their common interest.

\n

As for why Earth needs rational activists in particular - I hope that by now this has become clear.  In this fragile Earth there are many tasks which are underserved by irrational altruists.  Scope insensitivity and the purchase of moral satisfaction leads people to donate to puppy pounds as easily as existential risk prevention; circular altruism prevents them from going so far as to multiply utilons by probabilities; unsocialized in basic economics, they see money as a dirty thing inferior to volunteering unspecialized labor; they try to purchase warm fuzzies and status and utilons all at the same time; they feel nervous outside of conventional groups and follow the first thought that associates to \"charity\"...

\n

And these are all very normal and human mistakes, to be sure - forgiveable in others, if not in yourself.  Nonetheless, I will advise you that a rationalist's efforts should not be wasted on causes that are already popular far outside of rationalist circles.  There is nothing remotely approaching an efficient market in utilons.

\n

Is all this inclusiveness a pretense?  Did I, in the end, gamble only upon the portion of the activism that would flow to my own cause?  Yes, of course I did; that is how the calculation comes out when I shut up and multiply.

\n

But I have faithfully served the integrity of that pretense, because that inclusiveness matters to my own cause as well.

\n

So I say to you now, on behalf of all our causes:  Do, whatever you may find worth doing.

" } }, { "_id": "AYa2gc3sFWCCFSaFq", "title": "Theism, Wednesday, and Not Being Adopted", "pageUrl": "https://www.lesswrong.com/posts/AYa2gc3sFWCCFSaFq/theism-wednesday-and-not-being-adopted", "postedAt": "2009-04-27T16:49:32.087Z", "baseScore": 60, "voteCount": 81, "commentCount": 342, "url": null, "contents": { "documentId": "AYa2gc3sFWCCFSaFq", "html": "

(Disclaimer: This post is sympathetic to a certain subset of theists.  I am not myself a theist, nor have I ever been one.  I do not intend to justify all varieties of theism, nor do I intend to justify much in the way of common theistic behavior.)

\n

I'm not adopted.  You all believe me, right?  How do you think I came by this information, that you're confident in my statement?  The obvious and correct answer is that my parents told me so1.  Why do I believe them?  Well, they would be in a position to know the answer, and they have been generally honest and sincere in their statements to me.  A false belief on the subject could be hazardous to me, if I report inaccurate family history to physicians, and I believe that my parents have my safety in mind.  I know of the existence of adopted people; the possibility isn't completely absent from my mind - but I believe quite confidently that I am not among those people, because my parents say otherwise.

\n

\n

Now let's consider another example.  I have a friend who plans to name her first daughter Wednesday.  Wednesday will also not be adopted, but that isn't the part of the example that is important: Wednesday will grow up in Provo, Utah, in a Mormon family in a Mormon community with Mormon friends, classmates, and neighbors, attending an LDS church every week and reading scripture and participating in church activities.  It is overwhelmingly likely that she will believe the doctrines of the LDS church, because not only her parents, but virtually everyone she knows will reinforce these beliefs in her.  Given the particular nuances of Mormonism as opposed to other forms of Christianity, Wednesday will also be regularly informed that several of these people are in a position to have special knowledge on the subject via direct prayer-derived evidence2 - in much the same way that her parents will have special knowledge of her non-adopted status via direct experience when she wasn't in a state suitable to notice or remember the events.  Also, a false belief on the subject could have all kinds of bad consequences - if the Muslims are right, for instance, no doubt Hell awaits Wednesday and her family - so if she also correctly assumes that her parents have her best interests at heart, she'll assume they would do their best to give her accurate information.

\n

Atheism tends to be treated as an open-and-shut case here and in other intellectually sophisticated venues, but is that fair?  What about Wednesday?  What would have to happen to her to get her to give up those beliefs?  Well, for starters, she'd have to dramatically change her opinion of her family.  Her parents care enough about honesty that they are already planning not to deceive her about Santa Claus - should she believe that they're liars?  They're both college-educated, clever people, who read a lot and think carefully about (some) things - should she believe that they're fools?  They've traveled around the world and have friends like me who are, vocally, non-Mormons and even non-Christians - should she believe that her parents have not been exposed to other ideas?

\n

Would giving up her religion help Wednesday win?  I don't think her family would outright reject her for it, but it would definitely strain those valued relationships, and some of the aforementioned friends, classmates, and neighbors would certainly react badly.  It doesn't seem that it would make her any richer, happier, more successful - especially if she carries on living in Utah3.  (I reject out of hand the idea that she should deconvert in the closet and systematically lie to everyone she knows.)  It would make her right.  And that would be all it would do - if she were lucky.

\n

Is it really essential that, as a community, we exclude or dismiss or reflexively criticize theists who are good at partitioning, who like and are good at rational reasoning in every other sphere - and who just have higher priorities than being right?  I have priorities that I'd probably put ahead of being right, too; I'm just not in a position where I really have to choose between \"keeping my friends and being right\", \"feeling at home and being right\", \"eating this week and being right\".  That's my luck, not my cleverness, at work.

\n

When Wednesday has been born and has learned to read, it would be nice if there were a place for her here.

\n

 

\n

1. I have other evidence - I have inherited some physical characteristics from my parents and have seen my birth certificate - but the point is that this is something I would take their word for even if I didn't take after them very strongly and had never seen the documentation.

\n

2. Mormons believe in direct revelation, and they also believe that priesthood authorities are entitled to receive revelations for those over whom they have said authority (e.g. fathers for their children, husbands for their wives, etc.).

\n

3. I have lived in Salt Lake City, and during this time was, as always, openly an atheist.  Everyone was tolerant of me, but I do not think it improved my situation in any way.

" } }, { "_id": "kM3P4eLDzDgnYxEHT", "title": "Should we be biased?", "pageUrl": "https://www.lesswrong.com/posts/kM3P4eLDzDgnYxEHT/should-we-be-biased", "postedAt": "2009-04-27T15:42:16.444Z", "baseScore": -12, "voteCount": 17, "commentCount": 23, "url": null, "contents": { "documentId": "kM3P4eLDzDgnYxEHT", "html": "

According to the University of Chicago:

\r\n

\"Bias is a pre-formed negative opinion or attitude toward a group of persons who possess common characteristics, such as skin color, or cultural experiences, such as religion or national origin.\"

\r\n

 

\r\n

So, should we ever be biased?  And if the answer is yes then should we hide our biases for signaling reasons?

\r\n

 

\r\n

Or should we take into account that many people have irrational biases against those who possess different skin colors, religions, or national origins?  If so, perhaps a high percentage of any negative biases that readers of this blog hold are irrational, and the best course for us rationalists in training is to work against having any negative biases at all.

" } }, { "_id": "H6LnGwjKiGvDyR5yo", "title": "Excuse me, would you like to take a survey?", "pageUrl": "https://www.lesswrong.com/posts/H6LnGwjKiGvDyR5yo/excuse-me-would-you-like-to-take-a-survey", "postedAt": "2009-04-26T21:23:53.642Z", "baseScore": 14, "voteCount": 16, "commentCount": 132, "url": null, "contents": { "documentId": "H6LnGwjKiGvDyR5yo", "html": "

Related to: Practical Rationality Questionnaire

\n

Here among this community of prior-using, Aumann-believing rationalists, it is a bit strange that we don't have any good measure of what the community thinks about certain things.

I no longer place much credence in raw majoritarianism: the majority is too uneducated, too susceptible to the Dark Arts, and too vulnerable to cognitive biases. If I had to choose the people whose mean opinion I trusted most, it would be - all of you.

So, at the risk of people getting surveyed-out, I'd like to run a survey on the stuff Anna Salamon didn't. Part on demographics, part on opinions, and part on the interactions between the two.

I've already put up an incomplete rough draft of the survey I'd like to use, but I'll post it here again. Remember, this is an incomplete rough draft survey. DO NOT FILL IT OUT YET. YOUR SURVEY WILL NOT BE COUNTED.

Incomplete rough draft of survey

Right now what I want from people is more interesting questions that you want asked. Any question that you want to know the Less Wrong consensus on. Please post each question as a separate comment, and upvote any question that you're also interested in. I'll include as many of the top-scoring questions as I think people can be bothered to answer.

No need to include questions already on the survey, although if you really hate them you can suggest their un-inclusion or re-phrasing.

\n

Also important: how concerned are you about privacy? I was thinking about releasing the raw data later in case other people wanted to perform their own analyses, but it might be possible to identify specific people if you knew enough about them. Are there any people who would be comfortable giving such data if only one person were to see the data, but uncomfortable with it if the data were publicly accessible?

" } }, { "_id": "YdcF6WbBmJhaaDqoD", "title": "The Craft and the Community", "pageUrl": "https://www.lesswrong.com/posts/YdcF6WbBmJhaaDqoD/the-craft-and-the-community", "postedAt": "2009-04-26T17:52:21.611Z", "baseScore": 45, "voteCount": 38, "commentCount": 11, "url": null, "contents": { "documentId": "YdcF6WbBmJhaaDqoD", "html": "

This sequence ran from March to April of 2009 and dealt with the topic of building rationalist communities that could systematically improve on the art, craft, and science of human rationality. This is a highly forward-looking sequence - not so much an immediately complete recipe, as a list of action items and warnings for anyone setting out in the future to build a craft and a community.

" } }, { "_id": "TdqL5k3KaNfERpWC3", "title": "SIAI call for skilled volunteers and potential interns", "pageUrl": "https://www.lesswrong.com/posts/TdqL5k3KaNfERpWC3/siai-call-for-skilled-volunteers-and-potential-interns", "postedAt": "2009-04-26T05:56:14.452Z", "baseScore": 20, "voteCount": 16, "commentCount": 3, "url": null, "contents": { "documentId": "TdqL5k3KaNfERpWC3", "html": "

Want to increase the odds that humanity correctly navigates whatever risks and promises artificial intelligence may bring?  Interested in spending this summer in the SF Bay Area, working on projects and picking up background with similar others, with some possibility of staying on thereafter?  Want to work with, and learn with, some of the best thinkers you'll ever meet? – more specifically, some of the best at synthesizing evidence across a wide range of disciplines, and using it to make incremental progress on problems that are both damn slippery and damn important? 

If so, drop us an email.  Show us your skills; give us a chance to jointly brainstorm what you might be able to do.

We are particularly interested in people who have *any* of the following traits:

\n\n

The only musts are that you be capable, rational, and interested in helping reduce existential risk.

If you’re interested, send an email to annasalamon at gmail dot com, who will be doing the first-pass screening.  Include:

\n
  1. Why you’re interested;

  2. What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume);

  3. Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.
\n

Our application process is fairly informal, so send us a quick email as an initial inquiry, and after some correspondence we can decide whether or not to follow up with more application components.

\n


(Background on where we're coming from: SIAI is currently seeing who's out there and brainstorming possibilities (however, it now looks like a summer project likely will go forward).  If you're part of who's out there, do let us know.  Plausible projects include:

\n\n

(This post is specially exempted from the \"no AI discussion until after April\" ban because it is time-urgent.)

\n

ETA: Fluency in economics would also be a plus.  (But don't feel like you need all the traits.  Rationality, general competence, and unusual skill in one of the above is fine.  Special consideration if you're young and have indicators of promise, though for the most part we're looking for people who are older and have actual past success.)

" } }, { "_id": "W3LDwqHxiwKqWkWJi", "title": "Less Meta", "pageUrl": "https://www.lesswrong.com/posts/W3LDwqHxiwKqWkWJi/less-meta", "postedAt": "2009-04-26T05:38:42.984Z", "baseScore": 20, "voteCount": 22, "commentCount": 21, "url": null, "contents": { "documentId": "W3LDwqHxiwKqWkWJi", "html": "

My recent sequence on the craft and the community is highly forward-looking—not an immediately whole recipe, but a list of action items and warnings for anyone setting out in the future.  Having expended this much effort already, it seems worthwhile to try and leverage others' future efforts.

\n

That sequence seemed like an appropriate finale, but putting it last had some side effects that I didn't expect.  Thanks to the recency effect, people are now talking as if the entire œuvre had all been anticipation of future awesomeness, with no practical value in the present...

\n

Okay, seriously, if you look over my posts on Overcoming Bias that are not just a couple of months old, you really should see quite a lot of practical day-to-day stuff.  Yes, there's a long sequence on quantum mechanics, but there really is plenty of day-to-day stuff, right down to applying biases of evaluability to save money on holiday shopping.  (Not to mention that I finally did derive real-world advice out of the QM detour!)

\n

I suspect there may also be a problem here with present schools not teaching people the experience of creating new craft.  What they present you with, is what you learn.  So when I talk about my belief that we could be doing better, they look around and say:  \"But we aren't doing that well!\" rather than \"Hm, how can we make progress on this?\"

\n

And then your current accomplishments start to pale in the light of grander dreams, etcetera.  One of the great lessons of Artificial Intelligence is that no matter how much progress you make, it can always be made to appear slow and unexciting just by having someone else quoted in the newspapers about much grander promises on which they fail to deliver.

\n

Anyway.  I think that discussion here is going a bit too meta, too much about the community, and that was not the final push I had planned to deliver.  (I know, I know, obvious in retrospect, yes, but I still didn't see it coming.)  Hence the quick add-on about practical advice backed by experimental results or deep theories—thankfully we are going back to those again in recent posts.

\n

So if it's all right with you, dear readers, after the end of April, I will not promote more than one meta post unless at least four nonmeta posts have appeared before it—does that sound fair?

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"Go Forth and Create the Art!\"

\n

Previous post: \"Practical Advice Backed By Deep Theories\"

" } }, { "_id": "snwX7hXgLFikqDBr6", "title": "Where's Your Sense of Mystery?", "pageUrl": "https://www.lesswrong.com/posts/snwX7hXgLFikqDBr6/where-s-your-sense-of-mystery", "postedAt": "2009-04-26T00:45:25.820Z", "baseScore": 40, "voteCount": 39, "commentCount": 55, "url": null, "contents": { "documentId": "snwX7hXgLFikqDBr6", "html": "

Related to: Joy in the Merely Real, How An Algorithm Feels From Inside, \"Science\" As Curiosity-Stopper

\n

Your friend tells you that a certain rock formation on Mars looks a lot like a pyramid, and that maybe it was built by aliens in the distant past. You scoff, and respond that a lot of geological processes can produce regular-looking rocks, and in all the other cases like this closer investigation has revealed the rocks to be completely natural. You think this whole conversation is silly and don't want to waste your time on such nonsense. Your friend scoffs and asks:

\"Where's your sense of mystery?\"


You respond, as you have been taught to do, that your sense of mystery is exactly where it should be, among all of the real non-flimflam mysteries of science. How exactly does photosynthesis happen, what is the relationship between gravity and quantum theory, what is the source of the perturbations in Neptune's orbit? These are the real mysteries, not some bunkum about aliens. And if we cannot learn to take joy in the merely real, our life will be empty indeed.

\n

But do you really believe it?

I loved the Joy in the Merely Real sequence. But it spoke to me because it's one of the things I have the most trouble with. I am the kind of person who would have much more fun reading about the Martian pyramid than about photosynthesis.

And the one shortcoming of Joy in the Merely Real was that it was entirely normative, and not descriptive. It tells me I should reserve my sense of mystery for real science, but doesn't explain why it's so hard to do so, or why most people never even try.

\n

So what is this sense of mystery thing anyway?

I think the sense of mystery (sense of wonder, curiosity, call it what you want) is how the mind's algorithm for determining what problems to work on feels from the inside. Compare this to lust, how the mind's algorithm for determining what potential mates to pursue feels from the inside. In both cases, the mind makes a decision based on criteria of its own, which is then presented to the consciousness in the form of an emotion. And in both cases, the mind's decision is very often contrary to our best interest - as anyone who's ever fallen for a woman based entirely on her looks can tell you.

What sort of stuff makes us curious? I don't have anything better than introspection to go on, but here are some thoughts:

1. We feel more curious about things that could potentially alter many different beliefs.
2. We feel more curious about things that we feel like we can solve.
3. We feel more curious about things that might give us knowledge other people want but don't have.
4. We feel more curious about things that use the native architecture; that is, the sorts of human-level events and personal interactions our minds evolved to deal with.

So let's go back and consider how the original example - a pyramid on Mars versus photosynthesis - fits each of these criteria:

The pyramid on Mars could alter our worldview completely1. We'd have to rework all of our theories about ancient history, astronomy, the origin of civilization, maybe even religion. Learning exactly how photosynthesis works, on the other hand, probably won't make too big a difference. I assume it probably involves some sort of chemistry that sounds a lot like the other chemistry I know. I anticipate that learning more about photosynthesis wouldn't alter any of my beliefs except those directly involving photosynthesis and maybe some obscure biochemical reactions.

Pseudoscience and pseudohistory feel solveable. When you're reading a good pseudoscience book, it feels like you have all the clues and you just have to put them together. If you don't believe me, Google some pseudoscience. You'll find hundreds of webpages by people who think they've discovered the 'secret'. One person who says the pyramid on Mars was made by Atlanteans, another who says it was made by the Babylonian gods, another who says it was made by God to test our faith. On the other hand, I know I can't figure out photosynthesis without already being an expert in chemistry and biology. There's not that tantalizing sense of \"I could be the one to figure this out!\"

Knowing about a pyramid on Mars means you know more than other people. Most of humankind doesn't think there are any structures on Mars - the fools! And if you were to figure it out, you'd be...one of the greatest scientists ever. The one who proved the existence of intelligent life on other planets. It'd be great! In comparison, knowing about photosynthesis makes you one of a few thousand boring chemist types who also know about photosynthesis. Even if you're the first person to discover something new about it, the only people likely to care are...a few thousand boring chemist types.

And the pyramid deals in human-level problems: civilizations, monuments, collapse. Photosynthesis is a matter of equations and chemical reactions; much harder for most people.

Evolutionarily, all these criteria make sense. Of course you should spend more time on a problem if you're likely to solve it and the solution will be very important. And when you're a hunter-gatherer, all your problems are going to be on the human level, so you might as well direct your sense of mystery there. But the algorithm is unsuited to modern day science, when interesting discoveries are usually several inferential distances away in highly specialized domains and don't directly relate to the human level at all.

Again, compare this to lust. In the evolutionary era, mating with a woman with wide hips was quite adaptive for a male. Nowadays, with the advent of the Caesarian section, not so much. Nowadays it's probably most important for him to choose a mate whom he can tolerate for more than a few years so he doesn't end up divorced. But the mental algorithms whose result outputs as lust don't know that, so they end up making him weak-kneed for some wide-hipped woman with a terrible personality. This isn't something to feel guilty about. It's just something he needs to be wary of and devote some of his willpower resources toward fighting.

The practical take home advice, for me at least, is to treat curiosity in the same way. For a while, I felt genuinely guilty about my attraction to pseudohistory, as if it was some kind of moral flaw. It's not, no more than feeling lust towards someone you don't like is a moral flaw. They're both just misplaced drives, and all you can do is ignore, sublimate, or redirect them2.

The great thing about lust is that satisfying your unconscious and conscious feelings don't have to be mutually exclusive. Sometimes somebody comes around who's both beautiful and the sort of person you want to spend the rest of your life with. Problem solved. Other times, once your conscious mind commits to someone, your unconscious mind eventually starts coming around. These are the only two solutions I've found for the curiosity problem too.

The other practical take home advice here is for anyone whose job is educating others about science. Their job is going to be a lot easier if they can take advantage of this sense of mystery. The best science teachers I know do this. They emphasize the places where science produces counterintuitive, worldview-changing results. They present their information in the form of puzzles just difficult enough for their students to solve with a bit of effort. They try to pique their students' interest with tales of the unusual or impressive. And they try to use metaphors that fit the native architecture of human minds: talking about search algorithms in terms of water flowing downhill, for example.

I hope that any work that gets done on Less Wrong involving synchronizing conscious and unconscious feelings and fighting akrasia can be applied to this issue too.

\n

 

\n

Footnotes:

\n

1: The brain seems generally bad at dealing with tiny probabilities of huge payoffs. It may be that the payoff measured in size of paradigm shift from any paranormal belief being true is just so high that people aren't very good at discounting for the very small percent chance of it being true.

\n

2: One big question I'm still uncertain about: why do some people, despite it all, find science really interesting? How come this is sometimes true of one science and not others? I have a friend who loves physics and desperately wants to solve its open questions, but whose eyes glaze over every time she hears about biology - what's up with that?

" } }, { "_id": "58qCizhA2QNpHjEhh", "title": "\"Self-pretending\" is not as useful as we think", "pageUrl": "https://www.lesswrong.com/posts/58qCizhA2QNpHjEhh/self-pretending-is-not-as-useful-as-we-think", "postedAt": "2009-04-25T23:01:09.067Z", "baseScore": 4, "voteCount": 12, "commentCount": 15, "url": null, "contents": { "documentId": "58qCizhA2QNpHjEhh", "html": "

\n

A few weeks ago I made a draft of a post that was originally intended to be about the same issue addressed in MBlume’s post regarding beneficial false beliefs. Coincidentally, my draft included the same exact hypothetical about entering a club believing you’re the most attractive person in the room in order to increase chances of attracting women. There seems to be a general agreement with MBlume’s “it’s ok to pretend because it’s not self-deception and produces similar results” conclusion. I was surprised to see so much agreement considering that when I made my original draft I reached a completely different conclusion.

\n

I do agree, however, that pretending may have some benefits, but those benefits are much more limited than MBlume makes them out to be. He brings up a time when pretending helped him better fit into his character in a play. Unfortunately, his anecdote is not an appropriate example of overcoming vestigial evolutionary impulses by pretending. His mind wasn’t evolutionarily programmed to “be afraid” when pretending to be someone else; it was programmed to “be afraid” when hitting on attractive women. When I am alone in my room I can act like a real alpha male all day long, but put me in front of attractive women (or people in general) and I will retreat back to my stifled self.

\n

The only way false beliefs can overcome your obsolete evolutionary impulses is to truly believe in those false beliefs. And we all know why that would be a bad idea. Furthermore, pretending can be dangerous just like reading fiction can be dangerous. So the small benefit that pretending might give may not even be worth the cost (at times).

\n

But there is something we can learn from these (sometimes beneficial) false beliefs.

\n

Obviously, there is no direct causal chain that goes from self-fulfilling beliefs to real-world success. Beliefs, per se, are not the key variables in causing success; instead, these beliefs give rise to whatever the key variable is. We should figure out which key variables arise and find a systematic way of getting those variables.

\n

With the club example, we should instead figure out what behavior changes may result from believing that every girl is attracted to you. Then, figure out which of those behaviors attract women and find a way to perfect those behaviors. This is the approach the seduction community adopts for learning how to attract women—and it works.

\n

Same goes with public speaking. If you have a fear of public speaking, you can’t expect to pretend your fear away. There are ways of reducing unnecessary emotions; the ways that work, however, don’t depend on pretending.

\n

 

" } }, { "_id": "QESemPKE774Wwe6gP", "title": "Meetup Reminder: UC Santa Barbara, Today @6pm", "pageUrl": "https://www.lesswrong.com/posts/QESemPKE774Wwe6gP/meetup-reminder-uc-santa-barbara-today-6pm", "postedAt": "2009-04-25T19:19:27.876Z", "baseScore": 4, "voteCount": 3, "commentCount": 0, "url": null, "contents": { "documentId": "QESemPKE774Wwe6gP", "html": "

Michael Blume and Anna Salamon have invited you to a Less Wrong meetup at UC Santa Barbara, at 6pm in the college of creative studies building.  See previous post.  (This post is a temporary reminder and will be deleted after the meetup, so comment there.)

" } }, { "_id": "LqjKP255fPRY7aMzw", "title": "Practical Advice Backed By Deep Theories", "pageUrl": "https://www.lesswrong.com/posts/LqjKP255fPRY7aMzw/practical-advice-backed-by-deep-theories", "postedAt": "2009-04-25T18:52:21.809Z", "baseScore": 71, "voteCount": 62, "commentCount": 114, "url": null, "contents": { "documentId": "LqjKP255fPRY7aMzw", "html": "

Once upon a time, Seth Roberts took a European vacation and found that he started losing weight while drinking unfamiliar-tasting caloric fruit juices.

\n

Now suppose Roberts had not known, and never did know, anything about metabolic set points or flavor-calorie associations—all this high-falutin' scientific experimental research that had been done on rats and occasionally humans.

\n

He would have posted to his blog, \"Gosh, everyone!  You should try these amazing fruit juices that are making me lose weight!\"  And that would have been the end of it.  Some people would have tried it, it would have worked temporarily for some of them (until the flavor-calorie association kicked in) and there never would have been a Shangri-La Diet per se.

\n

The existing Shangri-La Diet is visibly incomplete—for some people, like me, it doesn't seem to work, and there is no apparent reason for this or any logic permitting it.  But the reason why as many people have benefited as they have—the reason why there was more than just one more blog post describing a trick that seemed to work for one person and didn't work for anyone else—is that Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist.

\n

One of the pieces of advice on OB/LW that was frequently cited as the most important thing learned, was the idea of \"the bottom line\"—that once a conclusion is written in your mind, it is already true or already false, already wise or already stupid, and no amount of later argument can change that except by changing the conclusion.  And this ties directly into another oft-cited most important thing, which is the idea of \"engines of cognition\", minds as mapping engines that require evidence as fuel.

\n

Suppose I had merely written one more blog post that said, \"You know, you really should be more open to changing your mind—it's pretty important—and oh yes, you should pay attention to the evidence too.\"  This would not have been as useful.  Not just because it would have been less persuasive, but because the actual operations would have been much less clear without the explicit theory backing it up.  What constitutes evidence, for example?  Is it anything that seems like a forceful argument?  Having an explicit probability theory and an explicit causal account of what makes reasoning effective makes a large difference in the forcefulness and implementational details of the old advice to \"Keep an open mind and pay attention to the evidence.\"

\n

It is also important to realize that causal theories are much more likely to be true when they are picked up from a science textbook than when invented on the fly—it is very easy to invent cognitive structures that look like causal theories but are not even anticipation-controlling, let alone true.

\n

This is the signature style I want to convey from all those posts that entangled cognitive science experiments and probability theory and epistemology with the practical advice—that practical advice actually becomes practically more powerful if you go out and read up on cognitive science experiments, or probability theory, or even materialist epistemology, and realize what you're seeing.  This is the brand that can distinguish LW from ten thousand other blogs purporting to offer advice.

\n

I could tell you, \"You know, how much you're satisfied with your food probably depends more on the quality of the food than on how much of it you eat.\"  And you would read it and forget about it, and the impulse to finish off a whole plate would still feel just as strong.  But if I tell you about scope insensitivity, and duration neglect and the Peak/End rule, you are suddenly aware in a very concrete way, looking at your plate, that you will form almost exactly the same retrospective memory whether your portion size is large or small; you now possess a deep theory about the rules governing your memory, and you know that this is what the rules say.  (You also know to save the dessert for last.)

\n

I want to hear how I can overcome akrasia—how I can have more willpower, or get more done with less mental pain.  But there are ten thousand people purporting to give advice on this, and for the most part, it is on the level of that alternate Seth Roberts who just tells people about the amazing effects of drinking fruit juice.  Or actually, somewhat worse than that—it's people trying to describe internal mental levers that they pulled, for which there are no standard words, and which they do not actually know how to point to.  See also the illusion of transparency, inferential distance, and double illusion of transparency.  (Notice how \"You overestimate how much you're explaining and your listeners overestimate how much they're hearing\" becomes much more forceful as advice, after I back it up with a cognitive science experiment and some evolutionary psychology?)

\n

I think that the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms—thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

\n

Note the grade of increasing difficulty in citing:

\n\n

If you don't know who to trust, or you don't trust yourself, you should concentrate on experimental results to start with, move on to thinking in terms of causal theories that are widely used within a science, and dip your toes into math and epistemology with extreme caution.

\n

But practical advice really, really does become a lot more powerful when it's backed up by concrete experimental results, causal accounts that are actually true, and math validly interpreted.

" } }, { "_id": "5MmCYWKNnvAPWRBYL", "title": "Cached Procrastination", "pageUrl": "https://www.lesswrong.com/posts/5MmCYWKNnvAPWRBYL/cached-procrastination", "postedAt": "2009-04-25T16:22:56.720Z", "baseScore": 44, "voteCount": 45, "commentCount": 49, "url": null, "contents": { "documentId": "5MmCYWKNnvAPWRBYL", "html": "

I have a paper to write. Where do I start? The first time I asked this question, it was easy: just sit down and start typing. I wrote a few hundred words, then got stuck; I needed to think some more, so I took a break and did something else. Over the next few days, my thoughts on the subject settled, and I was ready to write again. So I sat down, and asked: What do I do next? Fortunately, my brain had a cached response ready to answer this question: Solitaire!

So I procrastinated, and every time I asked my brain what to write, I got back an answer like \"Don't bother!\". Now a deadline's approaching, and I still don't have much written, so I sit down to write again. This time, I'm determined: using my willpower, I will not allow myself to think about anything except for the paper and its topic. So I ask again: Where do I start? (Solitaire!) What thoughts come to mind? I should've started a week ago. Every time I think about this topic I get stuck.  Maybe I shouldn't write this paper after all. These, too, are cached thoughts, generated during previous failed attempts to get started. These thoughts are much harder to clear, both because there are more of them and because of their emotional content, but I'm determined to do so anyway; I think through all the cached thoughts, return to the original question (Where do I start?), get my text editor open, start planning a section and... Ping! I have a new e-mail to read, I get distracted, and when I return half an hour later I have to clear those same cached thoughts again.

Many authors say to stop in the middle of a thought when you leave off, so that \"Where do I start?\" will always have an easy answer. This sounds like a solution, but it ignores the fact that you'll get stuck eventually, so that you have to stop, at a spot that won't be easy to come back to.

In order to stop procrastinating, there are two obstacles to overcome: A question to answer, and a cached answer to clear. The question is \"What do I do first?\" and the cached answer is \"procrastinate more\". Knowing that \"procrastinate\" was a cached answer makes it easier to get past, but the original question is still a problem. Why is deciding what to do first so often difficult?

When I'm programming, I make a long, unordered to-do list for each project, listing all of the features I plan to implement. When I finish one, I go back to the list to pick something to work on next. Sometimes, I can't decide; I just stare at the list for a while, weighing the costs and benefits of each, until eventually something happens to distract me. Most of the items on that list are harmful options, which serve only to induce analysis paralysis. It's the same problem some people have ordering off restaurant menus, and the same solution works. Instead of considering a series of options and deciding for each whether it's good enough to settle on, choose one option as the current-best without considering it at all, and compare options against the current-best.
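As a rough illustration of that heuristic, here is a minimal sketch; the function name, the `options` list, and the `score` callback are placeholders of mine, not anything from the original post. The point is a single linear pass that only ever compares a candidate against the current-best, rather than weighing the whole menu at once.

```python
# Minimal sketch of the "compare against the current-best" heuristic described above.
# `options` and `score` are hypothetical placeholders: a to-do list and a quick
# gut-estimate of each item's value.
def pick_next_task(options, score):
    current_best = options[0]          # adopt the first item without deliberating over the whole menu
    for candidate in options[1:]:
        if score(candidate) > score(current_best):
            current_best = candidate   # switch only when a candidate beats the current-best
    return current_best

# Hypothetical usage: pick_next_task(todo_list, lambda task: rough_value(task))
```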

\n

Usually, choosing where to start, or what to do next, requires generating options, not picking one off a menu. When choosing, say, the topic of the next chapter, it's easy to convince ourselves that we'll come up with the perfect answer, if only we think about it a little more. If we take the outside view, we can see that this is probably not the case; and if we let thinking about one decision crowd out everything else, and think about it long enough without reaching an answer, then eventually we will settle on Solitaire as the best choice. When deciding how much thought  to apply, remember: The utility we get from thinking about a decision is the cost of deciding incorrectly times the probability that we'll change our mind from incorrect to correct, minus the probability that we'll change our mind from correct to incorrect; and the longer we have gone without changing our mind, the less likely we are to do so in the future.
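Written out as arithmetic, under one reading of the rule above (the function name and the example numbers are invented for illustration, not the author's):

```python
# The rule above, read as: the expected value of further deliberation is the cost of a
# wrong choice times the net chance that more thought flips you from wrong to right.
def value_of_more_thought(cost_of_wrong_choice, p_flip_to_correct, p_flip_to_incorrect):
    return cost_of_wrong_choice * (p_flip_to_correct - p_flip_to_incorrect)

# Hypothetical example: a bad chapter topic costs ~3 hours of rework, but after an
# hour of dithering the flip probabilities have shrunk to 5% vs 3%.
print(value_of_more_thought(3.0, 0.05, 0.03))  # 0.06 hours -- time to just pick one and start
```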

\n

Procrastination is not a single problem but at least two: cached thoughts and analysis paralysis, working together to stop us from getting work done. If we miss the distinction, then any attempt to find solutions will be doomed to confusion and failure; we must recognize and address each underlying problem separately.

" } }, { "_id": "CC5RvYaZi9MmPWuXN", "title": "Programmatic Prediction markets", "pageUrl": "https://www.lesswrong.com/posts/CC5RvYaZi9MmPWuXN/programmatic-prediction-markets", "postedAt": "2009-04-25T09:29:31.762Z", "baseScore": 7, "voteCount": 17, "commentCount": 19, "url": null, "contents": { "documentId": "CC5RvYaZi9MmPWuXN", "html": "

I have a problem with \"prediction markets\" as a view of the news. They just aren't informative enough.

\n

If the price of oil goes down, is that due to a reduction in demand, an increase in supply, a large number of investors finding a better investment, or a large number of investors wanting cash (due to having to pay creditors or taxes)?

\n

I want them to give me enough information that I can begin trading in an informed manner. When you see that the market expects rainfall in Montana to be 2 cm in a day, what information is this based upon? If you read about a newly created, huge man-made lake in the area, which you expect to change the microclimate, how do you know whether the simulations people are running and betting with take this into consideration?

\n

If I don't get this information I can't trade with any expectation of making a profit, and the market doesn't get any information I may have that it doesn't. I could try to reverse-engineer people's climate models from the way they trade, but that is pretty hard to do. So I would like to try to lower the barrier to entry to the market by giving more information to potential players.

\n

This lack of information is due to the signals each trader sends to the market: they are binary in nature, buy or sell, with the trader's strategy and the information she used remaining a black box that we can't get into. As potential traders we don't know whether the market is taking certain information into account in the price we are shown.

\n

The only way I have thought of to get at some of the information enclosed in the black box is to allow only programs to bid in a market. The programs would be run on the market's server and have access to news sources, financial data and government reports. The programs could be stopped at any time by the trader but otherwise not communicated with or updated. So the would-be trader would have to build models of the world and ways of processing the financial data.
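
To make the proposal concrete, here is a rough sketch of what such a bot's contract with the market server might look like. Everything in it (class names, method signatures, feed formats, contract ids) is my own illustration, not part of the proposal:

```python
from abc import ABC, abstractmethod

class MarketBot(ABC):
    # A trading program run on the market's server. Once submitted, the owner
    # can halt it but cannot update it or feed it new information, so its whole
    # model of the world has to be baked in up front.

    @abstractmethod
    def on_data(self, news_items, financial_data, government_reports):
        # Called with the server-provided feeds; returns a list of
        # (contract_id, 'buy' or 'sell', quantity) orders.
        ...

class SimpleRainBot(MarketBot):
    # Toy model: buy a rainfall contract whenever recent news mentions the new lake.
    def on_data(self, news_items, financial_data, government_reports):
        if any('man-made lake' in item.lower() for item in news_items):
            return [('montana-rain-2cm', 'buy', 10)]
        return []

bot = SimpleRainBot()
print(bot.on_data(['Huge man-made lake completed in the area'], {}, []))
```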

\n

How then would we get at the information? The source for each program that wanted to trade would be held in escrow, and it would be released as open-source code when the program was withdrawn from the system. This would create a time lag in which people could profit from a better-predicting system, while still allowing everyone else to catch up after a certain time. There would also be a significant financial cost to starting a new bot, so that people would not just start lots of one-shot bots that each performed a single action based on their hidden knowledge.

\n

It would have the side effect of having different biases from the human market, possibly more accurate ones, since people would have to build the bias into the program, where it would be visible and open to exploitation if it became prevalent and known.

\n

I don't see this becoming a reality any time soon, but I would be interested to hear if anyone else has any more easily implementable ideas about improving the informativeness of the markets.

" } }, { "_id": "AGP9PwnhQcuYMKyMm", "title": "Instrumental vs. Epistemic -- A Bardic Perspective", "pageUrl": "https://www.lesswrong.com/posts/AGP9PwnhQcuYMKyMm/instrumental-vs-epistemic-a-bardic-perspective", "postedAt": "2009-04-25T07:41:41.482Z", "baseScore": 95, "voteCount": 79, "commentCount": 189, "url": null, "contents": { "documentId": "AGP9PwnhQcuYMKyMm", "html": "

(This article expands upon my response to a question posed by pjeby here)

\n

I've seen a few back-and-forths lately debating the instrumental use of epistemic irrationality -- to put the matter in very broad strokes, you'll have one commenter claiming that a particular trick for enhancing your effectiveness, your productivity, your attractiveness, demands that you embrace some belief unsupported by the evidence, while another claims that such a compromise is unacceptable, since a true art should use all available true information. As Eliezer put it:

\n
\n

I find it hard to believe that the optimally motivated individual, the strongest entrepreneur a human being can become, is still wrapped up in a blanket of comforting overconfidence. I think they've probably thrown that blanket out the window and organized their mind a little differently. I find it hard to believe that the happiest we can possibly live, even in the realms of human possibility, involves a tiny awareness lurking in the corner of your mind that it's all a lie.

\n
\n

And with this I agree -- the idea that a fully developed rational art of anything would involve pumping yourself with false data seems absurd.

\n

Still, let us say that I am entering a club, in which I would like to pick up an attractive woman. Many people will tell me that I must believe myself to be the most attractive, interesting, desirable man in the room. An outside-view examination of my life thus far, and my success with women in particular, tells me that I most certainly am not. What shall I do?

\n

Well, the question is, why am I being asked to hold these odd beliefs?  Is it because I'm going to be performing conscious calculations of expected utility, and will be more likely to select the optimal actions if I plug incorrect probabilities into the calculation? Well, no, not exactly. More likely, it's because the blind idiot god has already done the calculation for me.

\n

Evolution's goals are not my own, and neither are evolution's utility calculations. Most saliently, other men are no longer allowed to hit me with mastodon bones if I approach women they might have liked to pursue. The trouble is, evolution has already done the calculation, using this now-faulty assumption, with the result that, if I do not see myself as dominant, my motor cortex directs the movement of my body and the inflection of my voice in a way which clearly signals this fact, thus avoiding a conflict. And, of course, any woman I may be pursuing can read this signal just as clearly. I cannot redo this calculation, any more than I can perform a Fourier analysis to decide how I should form my vowels. It seems the best I can do is to fight an error with an error, and imagine that I am an attractive, virile, alpha male.

\n

So the question is, is this self-deception? I think it is not.

\n

In high school, I spent four happy years as a novice initiate of the Bardic Conspiracy. And of all the roles I played, my favorite by far was Iago, from Shakespeare's Othello. We were performing at a competition, and as the day went by, I would look at the people I passed, and tell myself that if I wanted, I could control any of them, that I could find the secrets to their minds, and in just a few words, utterly own any one of them. And as I thought this, completely unbidden, my whole body language changed. My gaze became cold and penetrating, my smile grew thin and predatory, the way I held my body was altered in a thousand tiny ways that I would never have known to order consciously.

\n

And, judging by the reactions, both of my (slightly alarmed) classmates and of the judges, it worked.

\n

But if a researcher with a clipboard had suddenly shown up and asked my honest opinion of my ability as a manipulator of humans, I would have dropped the act, and given a reasonably well-calibrated, modest answer.

\n

Perhaps we could call this soft self-deception. I didn't so much change my explicit conscious beliefs as... rehearse beliefs I knew to be false, and allow them to seep into my unconscious.

\n

In An Actor Prepares, Bardic Master Stanislavski describes this as the use of if:

\n
\n

Take into consideration also that this inner stimulus was brought about without force, and without deception. I did not tell you that there was a madman behind the door. On the contrary, by using the word if I frankly recognized the fact that I was offering you only a supposition. All I wanted to accomplish was to make you say what you would have done if the supposition about the madman were a real fact, leaving you to feel what anybody in the given circumstances must feel. You in turn did not force yourselves, or make yourselves accept the supposition as reality, but only as a supposition.

\n
\n

Is this dangerous? Is this a short step down the path to the dark side?

\n

If so, there must be a parting of ways between the Cartographers and the Bards, and I know not which way I shall go.

" } }, { "_id": "gj7Z7Zj6SMkrEaN8J", "title": "Rational Groups Kick Ass", "pageUrl": "https://www.lesswrong.com/posts/gj7Z7Zj6SMkrEaN8J/rational-groups-kick-ass", "postedAt": "2009-04-25T02:37:31.992Z", "baseScore": 33, "voteCount": 33, "commentCount": 24, "url": null, "contents": { "documentId": "gj7Z7Zj6SMkrEaN8J", "html": "

Reply to: Extreme Rationality: It's Not That Great
Belaboring of: Rational Me Or We?
Related to: A Sense That More Is Possible

\n

The success of Yvain's post threw me off completely.  My experience has been opposite to what he describes: x-rationality, which I've been working on since the mid-to-late nineties, has been centrally important to successes I've had in business and family life.  Yet the LessWrong community, which I greatly respect, broadly endorsed Yvain's argument that:

\n
\n

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it.

\n
\n

So that left me pondering what's different in my experience.  I've been working on these things longer than most, and am more skilled than many, but that seemed unlikely to be the key.

\n

The difference, I now think, is that I've been lucky enough to spend huge amounts of time in deeply rationalist organizations and groups--the companies I've worked at, my marriage, my circle of friends.

\n

And rational groups kick ass.

\n

An individual can unpack free will or figure out that the Copenhagen interpretation is nonsense.  But I agree with Yvain that in a lonely rationalist's individual life, the extra oomph of x-rationality may well be drowned in the noise of all the other factors of success and failure.

\n

But groups!  Groups magnify the importance of rational thinking tremendously:

\n\n

And we're not even talking about the extra power of x-rationality.  Imagine a couple that truly understood Aumann, a company that grokked the Planning Fallacy, a polity that consistently tried Pulling the Rope Sideways.

\n

When it comes to groups--sized from two to a billion--Yvain couldn't be more wrong.

\n

Update:  Orthonormal points out that I don't provide many concrete examples; I only link to three above.  I'll try to put more here as I think of them:

\n" } }, { "_id": "RyWLr5e68TGDMdP4h", "title": "A puzzle", "pageUrl": "https://www.lesswrong.com/posts/RyWLr5e68TGDMdP4h/a-puzzle", "postedAt": "2009-04-25T02:33:00.000Z", "baseScore": -11, "voteCount": 28, "commentCount": 45, "url": null, "contents": { "documentId": "RyWLr5e68TGDMdP4h", "html": "

What do these things have in common? Nerves, emotions, morality, prices.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "tGPeyg2GXFvtXH8XN", "title": "What's in a name? That which we call a rationalist…", "pageUrl": "https://www.lesswrong.com/posts/tGPeyg2GXFvtXH8XN/what-s-in-a-name-that-which-we-call-a-rationalist", "postedAt": "2009-04-24T23:53:51.129Z", "baseScore": 8, "voteCount": 12, "commentCount": 92, "url": null, "contents": { "documentId": "tGPeyg2GXFvtXH8XN", "html": "

Who are we? I've heard a couple of comments about what Less Wrong members should be called lately. \"Rationalist\" is the word most commonly used, although use of that term might presume we are something we are not. \"Aspiring rationalist\" avoids that problem, but is awkward to use casually. Something unique to this site might insulate us from the rest of the world, however. 

\n

What are your suggestions? Please make one suggestion per comment to facilitate voting.

\n

Update: I think \"Less Wrong reader\" works well for referring to members of this site as members of this site, but what are we trying to be in a broader sense? Maybe my intent in asking for suggestions was unclear. Is there a word that could replace \"rationalist\" in the following titles:

\n

\n

\n
or is \"rationalist\" just the least bad term?
\n

" } }, { "_id": "5mRQdFJNWC6PDkR9r", "title": "Less Wrong: Progress Report", "pageUrl": "https://www.lesswrong.com/posts/5mRQdFJNWC6PDkR9r/less-wrong-progress-report", "postedAt": "2009-04-24T23:49:28.000Z", "baseScore": 5, "voteCount": 3, "commentCount": 22, "url": null, "contents": { "documentId": "5mRQdFJNWC6PDkR9r", "html": "

Less Wrong is emerging from beta as bugs continue to get fixed.  This is an open-source project, and if any Python-fluent programmers are willing to contribute a day or two of work, more would get done faster.

The character of the new site is becoming clear.  The pace of commenting is higher; the threaded comments encourage short replies and continuing conversations.  The pace of posting exceeds my fondest hopes - apparently not being able to post automatically on OB was a much greater barrier to potential contributors than I realized.

We've had 12,428 comments so far on 113 articles, 100 of them posted since contributing was enabled for all users over 20 karma on March 5th.

Browsing to the Top Scoring articles on Less Wrong will give you an idea of how things are developing.  A quick view of all posts can be found here, with the current top scorer being "Cached Selves" by Salamon and Rayhawk, followed by "Rational Me or We?" by Hanson.  If this looks like a blog you like, go ahead and add it to your blog roll now, please!

\n\n\n

It might be just my imagination or my prior hopes, but it looks to me like the threaded, rated, and sorted comments create a completely different experience of reading a post - the first comment you encounter is going to be something highly intelligent, and then right away, you're going to see the most intelligent reply and a well-sorted discussion all in one place.  Much more of the action is in the comments.

The karma system is giving me valuable (if not always pleasant) feedback about which of my posts and comments my readers actually like.  I shall try not to be too influenced by this.

An on-site wiki is on the way, and meanwhile there's a temporary Wiki hosted at Wikia, currently with 163 articles.

" } }, { "_id": "SzZKTNzexDuAGHa9a", "title": "Just a bit of humor...", "pageUrl": "https://www.lesswrong.com/posts/SzZKTNzexDuAGHa9a/just-a-bit-of-humor", "postedAt": "2009-04-24T04:21:34.312Z", "baseScore": -8, "voteCount": 23, "commentCount": 15, "url": null, "contents": { "documentId": "SzZKTNzexDuAGHa9a", "html": "

\"\"

" } }, { "_id": "oNeE7jcHnLL2Gg2vW", "title": "This Didn't Have To Happen", "pageUrl": "https://www.lesswrong.com/posts/oNeE7jcHnLL2Gg2vW/this-didn-t-have-to-happen", "postedAt": "2009-04-23T19:07:48.215Z", "baseScore": 27, "voteCount": 57, "commentCount": 189, "url": null, "contents": { "documentId": "oNeE7jcHnLL2Gg2vW", "html": "

My girlfriend/SO's grandfather died last night, running on a treadmill when his heart gave out.

\n

He wasn't signed up for cryonics, of course.  She tried to convince him, and I tried myself a little the one time I met her grandparents.

\n

\"This didn't have to happen.  Fucking religion.\"

\n

That's what my girlfriend said.

\n

I asked her if I could share that with you, and she said yes.

\n

Just so that we're clear that all the wonderful emotional benefits of self-delusion come with a price, and the price isn't just to you.

" } }, { "_id": "N75n9scjdvvvMN627", "title": "Fix it and tell us what you did", "pageUrl": "https://www.lesswrong.com/posts/N75n9scjdvvvMN627/fix-it-and-tell-us-what-you-did", "postedAt": "2009-04-23T14:54:10.191Z", "baseScore": 49, "voteCount": 49, "commentCount": 38, "url": null, "contents": { "documentId": "N75n9scjdvvvMN627", "html": "
\n
\n

The main danger for LW is that it could become rationalist-porn for daydreamers.

\n

I suggest a pattern of counterattack:

\n
  1. Find a nonrational aspect of your nature that is hindering you right now.

  2. Determine privately to fix it.

  3. Set a short deadline. Do the necessary work.

  4. Write it up on LW at the deadline. Whether or not it worked.

(This used to be a comment, here.)
\n
" } }, { "_id": "aFEsqd6ofwnkNqaXo", "title": "Go Forth and Create the Art!", "pageUrl": "https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-and-create-the-art", "postedAt": "2009-04-23T01:37:14.442Z", "baseScore": 90, "voteCount": 82, "commentCount": 114, "url": null, "contents": { "documentId": "aFEsqd6ofwnkNqaXo", "html": "

I have said a thing or two about rationality, these past months.  I have said a thing or two about how to untangle questions that have become confused, and how to tell the difference between real reasoning and fake reasoning, and the will to become stronger that leads you to try before you flee; I have said something about doing the impossible.

\n

And these are all techniques that I developed in the course of my own projects—which is why there is so much about cognitive reductionism, say—and it is possible that your mileage may vary in trying to apply it yourself.  Still, those wandering about asking \"But what good is it?\" might consider rereading some of the earlier posts; knowing about e.g. the conjunction fallacy and how to spot it in an argument, hardly seems esoteric.  Understanding why motivated skepticism is bad for you can constitute the whole difference, I suspect, between a smart person who ends up smart and a smart person who ends up stupid.  Affective death spirals consume many among the unwary...

\n

Yet there is, I think, more absent than present in this \"art of rationality\"—defeating akrasia and coordinating groups are two of the deficits I feel most keenly.  I've concentrated more heavily on epistemic rationality than instrumental rationality, in general.  And then there's training, teaching, verification, and becoming a proper experimental science based on that.  And if you generalize a bit further, then building the Art could also be taken to include issues like developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem...

\n

But those small pieces of rationality that I've set out... I hope... just maybe...

\n

I suspect—you could even call it a guess—that there is a barrier to getting started, in this matter of rationality.  Where by default, in the beginning, you don't have enough to build on.  Indeed so little that you don't have a clue that more exists, that there is an Art to be found.  And if you do begin to sense that more is possible—then you may just instantaneously go wrong.  As David Stove observes—I'm not going to link it, because it deserves its own post—most \"great thinkers\" in philosophy, e.g. Hegel, are properly objects of pity.  That's what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.

\n

When you try to develop part of the human art of thinking... then you are doing something not too dissimilar to what I was doing over in Artificial Intelligence.  You will be tempted by fake explanations of the mind, fake accounts of causality, mysterious holy words, and the amazing idea that solves everything.

\n

It's not that the particular, epistemic, fake-detecting methods that I use, are so good for every particular problem; but they seem like they might be helpful for discriminating good and bad systems of thinking.

\n

I hope that someone who learns the part of the Art that I've set down here, will not instantaneously and automatically go wrong, if they start asking themselves, \"How should people think, in order to solve new problem X that I'm working on?\"  They will not immediately run away; they will not just make stuff up at random; they may be moved to consult the literature in experimental psychology; they will not automatically go into an affective death spiral around their Brilliant Idea; they will have some idea of what distinguishes a fake explanation from a real one.  They will get a saving throw.

\n

It's this sort of barrier, perhaps, which prevents people from beginning to develop an art of rationality, if they are not already rational.

\n

And so instead they... go off and invent Freudian psychoanalysis.  Or a new religion.  Or something.  That's what happens by default, when people start thinking about thinking.

\n

I hope that the part of the Art I have set down, as incomplete as it may be, can surpass that preliminary barrier—give people a base to build on; give them an idea that an Art exists, and somewhat of how it ought to be developed; and give them at least a saving throw before they instantaneously go astray.

\n

That's my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.

\n

A task which I am leaving to you.  Probably, anyway.  I make no promises as to where my attention may turn in the future.  But y'know, there are certain other things I need to do.  Even if I develop yet more Art by accident, it may be that I will not have the time to write any of it up.

\n

Beyond all that I have said of fake answers and traps, there are two things I would like you to keep in mind.

\n

The first—that I drew on multiple sources to create my Art.  I read many different authors, many different experiments, used analogies from many different fields.  You will need to draw on multiple sources to create your portion of the Art.  You should not be getting all your rationality from one author—though there might be, perhaps, a certain centralized website, where you went to post the links and papers that struck you as really important.  And a maturing Art will need to draw from multiple sources.  To the best of my knowledge there is no true science that draws its strength from only one person.  To the best of my knowledge that is strictly an idiom of cults.  A true science may have its heroes, it may even have its lonely defiant heroes, but it will have more than one.

\n

The second—that I created my Art in the course of trying to do some particular thing which animated all my efforts.  Maybe I'm being too idealistic—maybe thinking too much of the way the world should work—but even so, I somewhat suspect that you couldn't develop the Art just by sitting around thinking to yourself, \"Now how can I fight that akrasia thingy?\"  You'd develop the rest of the Art in the course of trying to do something.  Maybe even—if I'm not overgeneralizing from my own history—some task difficult enough to strain and break your old understanding and force you to reinvent a few things.  But maybe I'm wrong, and the next leg of the work will be done by direct, specific investigation of \"rationality\", without any need of a specific application considered more important.

\n

My previous attempt to describe this principle in terms of respect bounded by a secret identity, was roundly rejected by my audience.  Maybe \"leave the house\" would be more appropriate?  It sounds to me like a really good, healthy idea.  Still—perhaps I am deceived.  We shall see where the next pieces of the Art do, in fact, come from.

\n

I have striven for a long time now to convey, pass on, share a piece of the strange thing I touched, which seems to me so precious.  And I'm not sure that I ever said the central rhythm into words.  Maybe you can find it by listening to the notes.  I can say these words but not the rule that generates them, or the rule behind the rule; one can only hope that by using the ideas, perhaps, similar machinery might be born inside you.  Remember that all human efforts at learning arcana, slide by default into passwords, hymns, and floating assertions.

\n

I have striven for a long time now to convey my Art.  Mostly without success, before this present effort.  Earlier I made efforts only in passing, and got, perhaps, as much success as I deserved.  Like throwing pebbles in a pond, that generate a few ripples, and then fade away...  This time I put some back into it, and heaved a large rock.  Time will tell if it was large enough—if I really disturbed anyone deeply enough that the waves of the impact will continue under their own motion.  Time will tell if I have created anything that moves under its own power.

\n

(Not to mention that—I hope—the thing with the karma will stop the slide into virtual entropy that has destroyed every community I tried to build earlier as soon as I tried to pull back my attention a little.)

\n

My last essay on having a secret identity was not well-received, so let me try again:  I want people to go forth, but also to return.  Or maybe even to go forth and stay simultaneously, because this is the Internet and we can get away with that sort of thing; I've learned some interesting things on Less Wrong, lately, and if continuing motivation over years is any sort of problem, talking to others (or even seeing that others are also trying) does often help.

\n

But at any rate, if I have affected you at all, then I hope you will go forth and confront challenges, and achieve somewhere beyond your armchair, and create new Art; and then, remembering whence you came, radio back to tell others what you learned.

" } }, { "_id": "ASiCdt4qFaNADzdzw", "title": "Escaping Your Past", "pageUrl": "https://www.lesswrong.com/posts/ASiCdt4qFaNADzdzw/escaping-your-past", "postedAt": "2009-04-22T21:15:14.171Z", "baseScore": 28, "voteCount": 39, "commentCount": 51, "url": null, "contents": { "documentId": "ASiCdt4qFaNADzdzw", "html": "

Followup to: Sunk Cost Fallacy

\n

Related to: Rebelling Against Nature, Shut Up and Do the Impossible!

\n

(expanded from my comment)

\n

\"The world is weary of the past—
O might it die or rest at last!\"
— Percy Bysshe Shelley, from \"Hellas\"

\n

Probability theory and decision theory push us in opposite directions. Induction demands that you cannot forget your past; the sunk cost fallacy demands that you must. Let me explain.

\n

An important part of epistemic rationality is learning to be at home in a material universe. You are not a magical fount of originality and free will; you are a physical system: the same laws that bind the planets in their orbits, also bind you; the same sorts of regularities in these laws that govern the lives of rabbits or aphids, also govern human societies. Indeed, in the last analysis, free will as traditionally conceived is but a confusion—and bind and govern are misleading metaphors at best: what is bound as by ropes can be unbound with, say, a good knife; what is \"bound\" by \"nature\"—well, I can hardly finish the sentence, the phrasing being so absurd!

\n

Epistemic rationality alone might be well enough for those of us who simply love truth (who love truthseeking, I mean; the truth itself is usually an abomination), but some of my friends tell me there should be some sort of payoff for all this work of inference. And indeed, there should be: if you know how something works, you might be able to make it work better. Enter instrumental rationality, the art of doing better. We all want to do better, and we all believe that we can do better...

\n

But we should also all know that beliefs require evidence.

\n

Suppose you're an employer interviewing a jobseeker for a position you have open. Examining the jobseeker's application, you see that she was expelled from four schools, was fired from her last three jobs, and was convicted of two felonies. You ask, \"Given your record, I regret having let you enter the building. Why on Earth should I hire you?\"

\n

And the jobseeker replies, \"But all those transgressions are in the past. Sunk costs can't play into my decision theory—it would hardly be helping for me to go sulk in a gutter somewhere. I can only seek to maximize expected utility now, and right now that means working ever so hard for you, O dearest future boss! Tsuyoku naritai!\"

\n

And you say, \"Why should I believe you?\"

\n

And then—oh, wait. Just a moment, I've gotten my notes mixed up—oh, dear. I've been telling this scenario all wrong. You're not the employer. You're the jobseeker.

\n

Why should you believe yourself? You honestly swear that you're going to change, and this is great. But take the outside view. What good have these oaths done for all the other millions who have sworn them? You might very well be different, but in order to justifiably believe that you're different, you need to have some sort of evidence that you're different. It's not a special question; there has to be something about your brain that is different, whether or not you can easily communicate this evidence to others with present technology. What do you have besides the oath? Are you doing research, trying new things, keeping track of results, genuinely searching at long last for something that will actually work?

\n

For if you do succeed, it won't have been a miracle: you should be able to pin down at least approximately the causal factors that got you to where you are. And it has to be a plausible story. You won't really be able to say, \"Well, I read all these blogposts about rationality, and that's why I'm such an amazing person now.\" Compare: \"I read the Bible, and that's why I'm such an amazing person now.\" The words are different, but translated into math, is it really a different story? It could be. But if it is, you should be able to explain further; there has to be some coherent sequence of events that could take place in a material universe, a continuous path through spacetime that took you from there to here. If the blog helped, how specifically did it help? What did it cause you to do that you would not otherwise have done?

\n

This could be more difficult than it now seems in your current ignorance: the more you know about the forces that determine you, the less room there is for magical hopes. When you have a really fantastic day, you're more likely to expect tomorrow to be like that as well if you don't know about regression towards the mean.
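
As a toy illustration of that last point (the numbers and the independence assumption are mine, purely for demonstration): if day quality is a stable mean plus noise, the day after an unusually good one will usually look ordinary again.

```python
import random

random.seed(0)
MEAN, NOISE = 5.0, 2.0  # day quality modeled as a stable mean plus independent noise

days = [MEAN + random.gauss(0, NOISE) for _ in range(100_000)]

# Look at the day that follows each unusually fantastic day (quality above 8).
followers = [days[i + 1] for i in range(len(days) - 1) if days[i] > 8.0]
print(sum(d > 8.0 for d in followers) / len(followers))  # small: most follow-ups are ordinary
print(sum(followers) / len(followers))                   # close to the overall mean of 5
```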

\n

I'm not trying to induce despair with this post; really, I'm not. It is possible to do better; I myself am doing better than I was this time last year. I just think it's important to understand exactly what doing better really involves.

\n

I feel bad blogging about rationality, given that I'm so horribly, ludicrously bad at it. I'm also horribly, ludicrously bad at writing. But it would hardly be helping for me to just shut up in despair—to go sulk in a gutter somewhere. I can only seek to maximize expected utility now, and for now, that apparently means writing the occasional blogpost. Tsuyoku

" } }, { "_id": "cDQFK7tPDo9H4nPSE", "title": "Proposal: Use the Wiki for Concepts", "pageUrl": "https://www.lesswrong.com/posts/cDQFK7tPDo9H4nPSE/proposal-use-the-wiki-for-concepts", "postedAt": "2009-04-22T05:21:49.377Z", "baseScore": 40, "voteCount": 38, "commentCount": 67, "url": null, "contents": { "documentId": "cDQFK7tPDo9H4nPSE", "html": "

So... the longer I think about this Wiki thing, the more it seems like a really good idea - a missing piece falling into place.

\n

Here's my proposal, which I turn over to this, the larger community that suggested the Wiki in the first place:

\n

The Wiki should consist mainly of short concept introductions plus links to longer posts, rather than original writing.  Original writing goes in a post on Less Wrong, which may get voted up and down, or commented on; and this post should reference previous work by linking to the Wiki rather than other posts, to the extent that the concepts referred to can be given short summaries.  The intent is to set up a resonance that bounces back and forth between the Wiki (short concept summaries that can be read standalone, and links to more info for in-depth exploration) and the posts (which make the actual arguments and do the actual analyses).

\n

My role model here is TV Tropes, which manages to be, shall we say, really explorable, because of the resonance between the tropes, and the shows/events in which those tropes occur, and the other tropes that occur in those shows/events.  And furthermore, you know that the trope explanation itself will be a short bite of joy, and that reading the further references is optional.

\n

There would be exceptions to the \"no original research\" rule for projects that were multi-editor and not easily prosecuted through comments - for example, a project to make a list of all posts with one-sentence summaries in chronological order.

\n

There are also obvious exceptions to the \"link to the wiki\" rule, such as for any case where it really was futile to reference anything except the complete argument; or where you wanted to talk about part of the argument, rather than the general concept argued; or when you wanted to talk about a conversational event that happened in a particular post.

\n

I would suggest that the general format of a Wiki entry be a short summary / definition (that can maybe gloss or list some of the arguments if there's room, or even give the argument if it can be put really briefly), with the body of this being no more than a screenful as a general rule.  Then links to posts, with descriptions (for each post) of why that post is relevant or what it has to say - probably one or two sentences, as a rule.  Then references to outside posts on the same topic - although if the best reference is an outside discussion, that could come first.

\n

Summaries of whole sequences could also go on the Wiki - since it seems more like static descriptive content, rather than debatable analysis and argument, which is how the wiki/blog dichotomy is starting to shape up in my mind.

\n

Given unlimited development resources we'd want to integrate the two userbases, have a karma requirement to edit the Wiki, and such things, but we don't have much development resources (whines for Python volunteers again).  But I would still like to see a list of recent edits and/or active pages in the Less Wrong blog sidebar, and a list of recent blog posts and recent comments in the Wiki sidebar.  Of course the first priority is getting the Wiki set up on Less Wrong at all, rather than the current foreign host - I'm told this is in In Progress.

\n

Once my old posts are imported from Overcoming Bias, it would be nice if someone went through them and changed the links to posts (that reference concepts per se, rather than conversational events or parts of arguments) to links to the Wiki, creating appropriate pages and concept summaries as necessary.  That is, it would be nice if I didn't have to do this myself.  Anyone interested in volunteering, leave a comment - it'd be nice if you had some comments to your name, by which to judge your writing skills, and perhaps some karma.  This is a large job - though 10 posts a day would get it done in two months - so don't step up if you don't have the time.  It does seem like the sort of thing that should definitely get done by someone who is not me.

\n

I've already seen at least one person call Overcoming Bias \"a bigger, and far more productive, time sink than Wikipedia or even TV Tropes\".  Done right, it really could be more addictive than TV Tropes, for the intellectually curious, because it would seem (and be!) more productive; when you browse TV Tropes it feels like you're wasting time.

\n

I suppose I should feel slightly nervous about this, but it still seems like something that ought to be done if feasible, even though it sounds a bit scary - and I hope I'm not just saying that because I'm tempted to break out into mad scientist laughter.

" } }, { "_id": "d96W52qJhKkp7syYY", "title": "LessWrong Boo Vote (Stochastic Downvoting)", "pageUrl": "https://www.lesswrong.com/posts/d96W52qJhKkp7syYY/lesswrong-boo-vote-stochastic-downvoting", "postedAt": "2009-04-22T01:18:01.692Z", "baseScore": 4, "voteCount": 31, "commentCount": 44, "url": null, "contents": { "documentId": "d96W52qJhKkp7syYY", "html": "

Related to: Well-Kept Gardens Die By Pacifism.

\n

I wrote a script for the greasemonkey extension for Firefox, implementing less painful downvoting. It inserts a button \"Vote boo\" in addition to \"Vote up\" and \"Vote down\" for each comment. Pressing this button has a 30% chance of resulting in downvoting the comment, which is on average equivalent to taking 0.3 points of rating. If pressing the button once has no visible effect, don't press it twice: the action has already been performed, resulting in one of the two possible outcomes.
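
The script itself is a greasemonkey (JavaScript) extension; as a language-neutral sketch of the core logic (my reconstruction, not the actual script), the boo action amounts to something like this:

```python
import random

DOWNVOTE_PROBABILITY = 0.3  # expected penalty of 0.3 karma points per press

def vote_boo(downvote_comment, p=DOWNVOTE_PROBABILITY):
    # Press once only: the stochastic outcome is decided on the first press,
    # whether or not a visible downvote resulted.
    if random.random() < p:
        downvote_comment()
        return True   # this press happened to land as a real downvote
    return False      # no visible effect, but the expected cost was still paid

# Simulate many presses and check the average penalty per press.
hits = sum(vote_boo(lambda: None) for _ in range(10_000))
print(hits / 10_000)  # hovers around 0.3
```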

\n

The idea is to lower the level of punishment from downvoting, thus making it easier to downvote average, mediocre comments, not just remarkably bad ones. Systematically downvoting mediocre comments should make their expected rating negative, creating an incentive to focus more on making high-quality comments, and punishing systematic mediocrity. At the same time, the low penalty for average comments (implemented through stochastic downvoting) still allows them to be made freely, which is essential for supporting a discussion. Contributors may see the positive rating of good comments as currency with which they can buy a limited number of discussion-supporting average comments.

\n

The \"Vote boo\" option is not to be taken lightly, one should understand a comment before declaring it mediocre. If you are not sure, don't vote. If comment is visibly a simple passing remark, or of mediocre quality, press the button.

" } }, { "_id": "dTnfX3HWizKeovFn3", "title": "UC Santa Barbara Rationalists Unite - Saturday, 6PM", "pageUrl": "https://www.lesswrong.com/posts/dTnfX3HWizKeovFn3/uc-santa-barbara-rationalists-unite-saturday-6pm", "postedAt": "2009-04-22T00:23:12.630Z", "baseScore": 3, "voteCount": 6, "commentCount": 2, "url": null, "contents": { "documentId": "dTnfX3HWizKeovFn3", "html": "

Anna Salamon and I are calling for a Less Wrong meetup here in sunny Santa Barbara. Anna and Steve Rayhawk, Less Wrong's most successful (and so far, only) writing duo, are on their way through SB, and I know there are a few LWers hiding in Isla Vista and in Santa Barbara. We'll be meeting at the College of Creative Studies building on the UCSB campus, right between Santa Rosa dorms and the Multicultural Center. I'll also be inviting some folks from the campus secular group, SURE (Scientific Understanding and Reason Enrichment).

\n

You'll also find the event listed on Facebook -- feel free to RSVP there. 

\n

Definitely looking forward to seeing you all =).

" } }, { "_id": "tscc3e5eujrsEeFN4", "title": "Well-Kept Gardens Die By Pacifism", "pageUrl": "https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism", "postedAt": "2009-04-21T02:44:52.788Z", "baseScore": 269, "voteCount": 236, "commentCount": 324, "url": null, "contents": { "documentId": "tscc3e5eujrsEeFN4", "html": "

Previously in series: My Way
Followup to: The Sin of Underconfidence

\n

Good online communities die primarily by refusing to defend themselves.

\n

Somewhere in the vastness of the Internet, it is happening even now.  It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing.  But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting.  (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)

\n

So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood.  Or if there are new members, their quality also has gone down.

\n

Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...

\n

I am old enough to remember the USENET that is forgotten, though I was very young.  Unlike the first Internet that died so long ago in the Eternal September, in these days there is always some way to delete unwanted content.  We can thank spam for that—so egregious that no one defends it, so prolific that no one can just ignore it, there must be a banhammer somewhere.

\n

But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.

\n

After all—anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors' grading, and heaven forbid the janitors should speak up in the middle of a colloquium.

\n

It is easy to be naive about the evils of censorship when you already live in a carefully kept garden.  Just like it is easy to be naive about the universal virtue of unconditional nonviolent pacifism, when your country already has armed soldiers on the borders, and your city already has police.  It costs you nothing to be righteous, so long as the police stay on their jobs.

\n

The thing about online communities, though, is that you can't rely on the police ignoring you and staying on the job; the community actually pays the price of its virtuousness.

\n

In the beginning, while the community is still thriving, censorship seems like a terrible and unnecessary imposition.  Things are still going fine.  It's just one fool, and if we can't tolerate just one fool, well, we must not be very tolerant.  Perhaps the fool will give up and go away, without any need of censorship.  And if the whole community has become just that much less fun to be a part of... mere fun doesn't seem like a good justification for (gasp!) censorship, any more than disliking someone's looks seems like a good reason to punch them in the nose.

\n

(But joining a community is a strictly voluntary process, and if prospective new members don't like your looks, they won't join in the first place.)

\n

And after all—who will be the censor?  Who can possibly be trusted with such power?

\n

Quite a lot of people, probably, in any well-kept garden.  But if the garden is even a little divided within itself —if there are factions—if there are people who hang out in the community despite not much trusting the moderator or whoever could potentially wield the banhammer—

\n

(for such internal politics often seem like a matter of far greater import than mere invading barbarians)

\n

—then trying to defend the community is typically depicted as a coup attempt.  Who is this one who dares appoint themselves as judge and executioner?  Do they think their ownership of the server means they own the people?  Own our community?  Do they think that control over the source code makes them a god?

\n

I confess, for a while I didn't even understand why communities had such trouble defending themselves—I thought it was pure naivete.  It didn't occur to me that it was an egalitarian instinct to prevent chieftains from getting too much power.  \"None of us are bigger than one another, all of us are men and can fight; I am going to get my arrows\", was the saying in one hunter-gatherer tribe whose name I forget.  (Because among humans, unlike chimpanzees, weapons are an equalizer—the tribal chieftain seems to be an invention of agriculture, when people can't just walk away any more.)

\n

Maybe it's because I grew up on the Internet in places where there was always a sysop, and so I take for granted that whoever runs the server has certain responsibilities.  Maybe I understand on a gut level that the opposite of censorship is not academia but 4chan (which probably still has mechanisms to prevent spam).  Maybe because I grew up in that wide open space where the freedom that mattered was the freedom to choose a well-kept garden that you liked and that liked you, as if you actually could find a country with good laws.  Maybe because I take it for granted that if you don't like the archwizard, the thing to do is walk away (this did happen to me once, and I did indeed just walk away).

\n

And maybe because I, myself, have often been the one running the server.  But I am consistent, usually being first in line to support moderators—even when they're on the other side from me of the internal politics.  I know what happens when an online community starts questioning its moderators.  Any political enemy I have on a mailing list who's popular enough to be dangerous is probably not someone who would abuse that particular power of censorship, and when they put on their moderator's hat, I vocally support them—they need urging on, not restraining.  People who've grown up in academia simply don't realize how strong are the walls of exclusion that keep the trolls out of their lovely garden of \"free speech\".

\n

Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.  But this is more accused than realized, so far as I can see.

\n

In any case the light didn't go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently.  While reading a comment at Less Wrong, in fact, though I don't recall which one.

\n

But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay.  Being too humble, doubting themselves an order of magnitude more than I would have doubted them.  It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence.

\n

This about the Internet:  Anyone can walk in.  And anyone can walk out.  And so an online community must stay fun to stay alive.  Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late.

\n

I have seen rationalist communities die because they trusted their moderators too little.

\n

But that was not a karma system, actually.

\n

Here—you must trust yourselves.

\n

A certain quote seems appropriate here:  \"Don't believe in yourself!  Believe that I believe in you!\"

\n

Because I really do honestly think that if you want to downvote a comment that seems low-quality... and yet you hesitate, wondering if maybe you're downvoting just because you disagree with the conclusion or dislike the author... feeling nervous that someone watching you might accuse you of groupthink or echo-chamber-ism or (gasp!) censorship... then nine times of ten, I bet, nine times out of ten at least, it is a comment that really is low-quality.

\n

You have the downvote.  Use it or USENET.

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"Practical Advice Backed By Deep Theories\"

\n

Previous post: \"The Sin of Underconfidencee\"

" } }, { "_id": "yuhYmSmHTqC7QPShf", "title": "Masochism vs. Self-defeat", "pageUrl": "https://www.lesswrong.com/posts/yuhYmSmHTqC7QPShf/masochism-vs-self-defeat", "postedAt": "2009-04-20T21:20:50.815Z", "baseScore": 2, "voteCount": 16, "commentCount": 10, "url": null, "contents": { "documentId": "yuhYmSmHTqC7QPShf", "html": "

Follow up to: Is masochism necessary? Stuck in the middle with Bruce

\r\n

Masochism has two very different meanings: enjoyment of pain, and pursuit (not enjoyment) of suffering.

\r\n

As a rather blunt example of this distinction, consider a sexual masochist. If his girlfriend ties him up and beats him, he'll experience pain, but he certainly won't suffer; he'll probably enjoy himself immensely. Put someone with vanilla sexual tastes in his place, and he would experience both pain and suffering.

\r\n

Bruce-like behaviour is best understood as pursuit of suffering. People undermine themselves or set themselves up to lose. They may do it so that they have a comfortable excuse, or because they are used to failing and afraid of being happy, or for many other reasons. Most of us do this to some degree, however slight, and it's something we want to avoid.1 Pursuit of suffering, quite simply, gets in the way of winning, and, much like akratic behaviour, it is something that we should try desperately to find and destroy, because we should be happier without it.

\r\n

This is very, very different from enjoyment of pain. If you like getting beaten up, or spicy foods, or running marathons, this has no effect on whether you win; these become a kind of winning. The fact that these activities cause suffering in some people is wholly irrelevant. For those who enjoy them, they create happiness, and obtaining them is, in a sense, a form of winning. Because of this, there's no reason to try to catch ourselves engaging in them or to worry about engaging in them less. It does not seem like people would be happier if they lost these preferences.2

\r\n

Indeed, given that they require some level of initial exposure, and (in the sexual case) have strong social taboos against them, it seems quite likely that masochistic behaviour isn't engaged in enough, though I admit I may be going too far.

\r\n

Edit: As a point of clarification, \"Bruce-like\" behaviour may be overbroad. Some people set themselves up to lose because, for whatever reason, they genuinely like losing. That isn't pursuit of suffering, because there's no suffering. However, we do sometimes undermine ourselves when we want to win. The precise cause of this is, for my purposes, immaterial. This is what I'm referring to by \"pursuit of suffering,\" and my entire point is that it is quite distinct from enjoyment or pursuit of pain, and that this difference is worth noticing.

\r\n

A proof of the utilitarian benefit of sadism is left to the reader, or as the topic for a follow-up post if people like this one.

\r\n

1 - If we actually enjoy failure, such that presented with the simple choice of win/lose, we repeatedly choose lose, that's a separate subject and would fall under another description, like \"enjoyment of failure.\" This is something that one might be happier without, but that's really another issue for another post.

\r\n

2- This is not to say that some people shouldn't engage in them less. There are people who engage in self-destructive behaviour. Some use sex as a means of escape. Some anorexics exercise compulsively. But the fact that these can be unhealthy in specific circumstances is of no relevance to the greater population that enjoys them responsibly.

" } }, { "_id": "pkFazhcTErMw7TFtT", "title": "The Sin of Underconfidence", "pageUrl": "https://www.lesswrong.com/posts/pkFazhcTErMw7TFtT/the-sin-of-underconfidence", "postedAt": "2009-04-20T06:30:03.826Z", "baseScore": 120, "voteCount": 109, "commentCount": 187, "url": null, "contents": { "documentId": "pkFazhcTErMw7TFtT", "html": "

There are three great besetting sins of rationalists in particular, and the third of these is underconfidence.  Michael Vassar regularly accuses me of this sin, which makes him unique among the entire population of the Earth.

\n

But he's actually quite right to worry, and I worry too, and any adept rationalist will probably spend a fair amount of time worrying about it.  When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result.  That's what makes a lot of cognitive subtasks so troublesome—you know you're biased but you're not sure how much, and you don't know if you're correcting enough—and so perhaps you ought to correct a little more, and then a little more, but is that enough?  Or have you, perhaps, far overshot?  Are you now perhaps worse off than if you hadn't tried any correction?

\n

You contemplate the matter, feeling more and more lost, and the very task of estimation begins to feel increasingly futile...

\n

And when it comes to the particular questions of confidence, overconfidence, and underconfidence—being interpreted now in the broader sense, not just calibrated confidence intervals—then there is a natural tendency to cast overconfidence as the sin of pride, out of that other list which never warned against the improper use of humility or the abuse of doubt.  To place yourself too high—to overreach your proper place—to think too much of yourself—to put yourself forward—to put down your fellows by implicit comparison—and the consequences of humiliation and being cast down, perhaps publicly—are these not loathsome and fearsome things?

\n

To be too modest—seems lighter by comparison; it wouldn't be so humiliating to be called on it publicly, indeed, finding out that you're better than you imagined might come as a warm surprise; and to put yourself down, and others implicitly above, has a positive tinge of niceness about it, it's the sort of thing that Gandalf would do.

\n

So if you have learned a thousand ways that humans fall into error and read a hundred experimental results in which anonymous subjects are humiliated of their overconfidence—heck, even if you've just read a couple of dozen—and you don't know exactly how overconfident you are—then yes, you might genuinely be in danger of nudging yourself a step too far down.

\n

I have no perfect formula to give you that will counteract this.  But I have an item or two of advice.

\n

What is the danger of underconfidence?

\n

Passing up opportunities.  Not doing things you could have done, but didn't try (hard enough).

\n

So here's a first item of advice:  If there's a way to find out how good you are, the thing to do is test it.  A hypothesis affords testing; hypotheses about your own abilities likewise.  Once upon a time it seemed to me that I ought to be able to win at the AI-Box Experiment; and it seemed like a very doubtful and hubristic thought; so I tested it.  Then later it seemed to me that I might be able to win even with large sums of money at stake, and I tested that, but I only won 1 time out of 3.  So that was the limit of my ability at that time, and it was not necessary to argue myself upward or downward, because I could just test it.

\n

One of the chief ways that smart people end up stupid, is by getting so used to winning that they stick to places where they know they can win—meaning that they never stretch their abilities, they never try anything difficult.

\n

It is said that this is linked to defining yourself in terms of your \"intelligence\" rather than \"effort\", because then winning easily is a sign of your \"intelligence\", where failing on a hard problem could have been interpreted in terms of a good effort.

\n

Now, I am not quite sure this is how an adept rationalist should think about these things: rationality is systematized winning and trying to try seems like a path to failure.  I would put it this way:  A hypothesis affords testing!  If you don't know whether you'll win on a hard problem—then challenge your rationality to discover your current level.  I don't usually hold with congratulating yourself on having tried—it seems like a bad mental habit to me—but surely not trying is even worse.  If you have cultivated a general habit of confronting challenges, and won on at least some of them, then you may, perhaps, think to yourself \"I did keep up my habit of confronting challenges, and will do so next time as well\".  You may also think to yourself \"I have gained valuable information about my current level and where I need improvement\", so long as you properly complete the thought, \"I shall try not to gain this same valuable information again next time\".

\n

If you win every time, it means you aren't stretching yourself enough.  But you should seriously try to win every time.  And if you console yourself too much for failure, you lose your winning spirit and become a scrub.

\n

When I try to imagine what a fictional master of the Competitive Conspiracy would say about this, it comes out something like:  \"It's not okay to lose.  But the hurt of losing is not something so scary that you should flee the challenge for fear of it.  It's not so scary that you have to carefully avoid feeling it, or refuse to admit that you lost and lost hard.  Losing is supposed to hurt.  If it didn't hurt you wouldn't be a Competitor.  And there's no Competitor who never knows the pain of losing.  Now get out there and win.\"

\n

Cultivate a habit of confronting challenges—not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you.  I recently read of a certain theist that he had defeated Christopher Hitchens in a debate (severely so; this was said by atheists).  And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate.  This seemed like someone I wanted to test myself against.  Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out.  Note that this is not self-handicapping in the classic sense—if the debate is indeed arranged (I haven't yet heard back), and I do not prepare, and I fail, then I do lose those stakes of myself that I have put up; I gain information about my limits; I have not given myself anything I consider an excuse for losing.

\n

Of course this is only a way to think when you really are confronting a challenge just to test yourself, and not because you have to win at any cost.  In that case you make everything as easy for yourself as possible.  To do otherwise would be spectacular overconfidence, even if you're playing tic-tac-toe against a three-year-old.

\n

A subtler form of underconfidence is losing your forward momentum—amid all the things you realize that humans are doing wrong, that you used to be doing wrong, some of which you are probably still doing.  You become timid; you question yourself but don't answer the self-questions and move on; when you hypothesize your own inability you do not put that hypothesis to the test.

\n

Perhaps without there ever being a watershed moment when you deliberately, self-visibly decide not to try at some particular test... you just.... slow..... down......

\n

It doesn't seem worthwhile any more, to go on trying to fix one thing when there are a dozen other things that will still be wrong...

\n

There's not enough hope of triumph to inspire you to try hard...

\n

When you consider doing any new thing, a dozen questions about your ability at once leap into your mind, and it does not occur to you that you could answer the questions by testing yourself...

\n

And having read so much wisdom of human flaws, it seems that the course of wisdom is ever doubting (never resolving doubts), ever the humility of refusal (never the humility of preparation), and just generally, that it is wise to say worse and worse things about human abilities, to pass into feel-good feel-bad cynicism.

\n

And so my last piece of advice is another perspective from which to view the problem—by which to judge any potential habit of thought you might adopt—and that is to ask:

\n

Does this way of thinking make me stronger, or weaker?  Really truly?

\n

I have previously spoken of the danger of reasonableness—the reasonable-sounding argument that we should two-box on Newcomb's problem, the reasonable-sounding argument that we can't know anything due to the problem of induction, the reasonable-sounding argument that we will be better off on average if we always adopt the majority belief, and other such impediments to the Way.  \"Does it win?\" is one question you could ask to get an alternate perspective.  Another, slightly different perspective is to ask, \"Does this way of thinking make me stronger, or weaker?\"  Does constantly reminding yourself to doubt everything make you stronger, or weaker?  Does never resolving or decreasing those doubts make you stronger, or weaker?  Does undergoing a deliberate crisis of faith in the face of uncertainty make you stronger, or weaker?  Does answering every objection with a humble confession of your fallibility make you stronger, or weaker?

\n

Are your current attempts to compensate for possible overconfidence making you stronger, or weaker?  Hint:  If you are taking more precautions, more scrupulously trying to test yourself, asking friends for advice, working your way up to big things incrementally, or still failing sometimes but less often than you used to, you are probably getting stronger.  If you are never failing, avoiding challenges, and feeling generally hopeless and dispirited, you are probably getting weaker.

\n

I learned the first form of this rule at a very early age, when I was practicing for a certain math test, and found that my score was going down with each practice test I took, and noticed going over the answer sheet that I had been pencilling in the correct answers and erasing them.  So I said to myself, \"All right, this time I'm going to use the Force and act on instinct\", and my score shot up to above what it had been in the beginning, and on the real test it was higher still.  So that was how I learned that doubting yourself does not always make you stronger—especially if it interferes with your ability to be moved by good information, such as your math intuitions.  (But I did need the test to tell me this!)

\n

Underconfidence is not a unique sin of rationalists alone.  But it is a particular danger into which the attempt to be rational can lead you.  And it is a stopping mistake—an error which prevents you from gaining that further experience which would correct the error.

\n

Because underconfidence actually does seem quite common among aspiring rationalists whom I meet—though rather less common among rationalists who have become famous role models—I would indeed name it third among the three besetting sins of rationalists.

" } }, { "_id": "cH85oDuQasBdtakWv", "title": "Evangelical Rationality", "pageUrl": "https://www.lesswrong.com/posts/cH85oDuQasBdtakWv/evangelical-rationality", "postedAt": "2009-04-20T04:51:42.006Z", "baseScore": 45, "voteCount": 42, "commentCount": 20, "url": null, "contents": { "documentId": "cH85oDuQasBdtakWv", "html": "

Spreading the Word prompted me to report back as promised.

\n

I have two sisters aged 17 and 14, and mom and dad aged 40-something. I'm 22, male. We're all white and Latvian. I translated the articles as I read them.

\n

I read Never Leave Your Room to the oldest sister and she expressed great interest in it.

\n

I read Cached Selves to them all. When I got to the part about Greenskyers the older sister asserted \"the sky is green\" for fun. Later in the conversation I asked her, \"Is the sky blue?\", and her answer was \"No. I mean, yes! Gah!\" They all found real life examples of this quickly - it turns out this is how the older sister schmoozes money and stuff out of dad (\"Can I have this discount cereal?\" followed by \"Can I have this expensive yogurt to go with my cereal?\").

\n

I started reading The Apologist and the Revolutionary to them but halfway through the article they asked \"what's the practical application for us?\", and I realized that I couldn't answer that question - it's just a piece of trivia. So I moved on.

\n

I tried reading to them about the near/far thing, but couldn't find a single good article that describes it concisely. Thus I stumbled around and failed to convey the idea properly.

\n

In the end I asked whether they'd like to hear similar stuff in the future, and the reply was a unanimous yes. I asked them why, in their opinion, they hadn't found this stuff by themselves, and the reason seems to be that they have no paths that lead to rationality stuff in their lives. Indeed, I found OB through Dresden Codak, which I found through Minus, which I found through some other webcomic forum. Nobody in my family reads webcomics, let alone frequents their forums.

\n

The takeaway, I think, is this: We must establish non-geeky paths to rationality. Go and tell people how to not be suckers. Start with people who would listen to you. You don't have to advertise LW - just be +5 informative. Rationality stuff must enter the mass media: radio, TV, newspapers. If you are in a position to make that happen, act!

\n

I would also like to see more articles like this one on LW - go, do something, report back.

" } }, { "_id": "ZJSGLPbnKxjRiTHHz", "title": "The ideas you're not ready to post", "pageUrl": "https://www.lesswrong.com/posts/ZJSGLPbnKxjRiTHHz/the-ideas-you-re-not-ready-to-post", "postedAt": "2009-04-19T21:23:42.999Z", "baseScore": 29, "voteCount": 28, "commentCount": 264, "url": null, "contents": { "documentId": "ZJSGLPbnKxjRiTHHz", "html": "

I've often had half-finished LW post ideas and crossed them off for a number of reasons, mostly that they were too rough or undeveloped and I didn't feel expert enough. Other people might worry their post would be judged harshly, feel overwhelmed, worry about topicality, or just want some community input before posting it.

\n

So: this is a special sort of open thread. Please post your unfinished ideas and sketches for LW posts here as comments, if you would like constructive critique, assistance and checking from people with more expertise, etc. Just pile them in without worrying too much. Ideas can be as short as a single sentence or as long as a finished post. Both subject and presentation are on topic in replies. Bad ideas should be mined for whatever good can be found in them. Good ideas should be poked with challenges to make them stronger. No being nasty!

" } }, { "_id": "2pR3aStEjxJp4fphr", "title": "Spreading the word?", "pageUrl": "https://www.lesswrong.com/posts/2pR3aStEjxJp4fphr/spreading-the-word", "postedAt": "2009-04-19T19:25:32.850Z", "baseScore": 8, "voteCount": 20, "commentCount": 49, "url": null, "contents": { "documentId": "2pR3aStEjxJp4fphr", "html": "

This has been discussed some, but I don't think it's been the sole subject of a top-level post. I want to find out other people's ideas rather than driving the discussion into my ideas, so I'm asking the question in a very general form, and holding off on my own answers:

\n" } }, { "_id": "3DQTfZCxSKZBEGyoN", "title": "The True Epistemic Prisoner's Dilemma", "pageUrl": "https://www.lesswrong.com/posts/3DQTfZCxSKZBEGyoN/the-true-epistemic-prisoner-s-dilemma", "postedAt": "2009-04-19T08:57:02.580Z", "baseScore": 25, "voteCount": 20, "commentCount": 72, "url": null, "contents": { "documentId": "3DQTfZCxSKZBEGyoN", "html": "

I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:

\n
\n

I am having some difficulty imagining that I am 99% sure of something, but I cannot either convince a person to outright agree with me or accept that he is uncertain and therefore should make the choice that would help more if it is right, but I could convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.

\n
\n

To which I said:

\n
\n

Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?

\n
\n

And lo, JGWeissman saved me a lot of writing when he replied thus:

\n
\n

Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.

\n

So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10000 if the earth is less than 1 million years old or each receiving $5000 if the earth is more than 1 million years old, and the young earth creationist will have a similar choice with the payoffs reversed. Now, with prisoner's dilemma tied to the young earth creationist's bias, would I, in the role of the atheist still be able to convince him to cooperate? I don't know. I am not sure how much the need to believe that the earth is around 5000 years would interfere with recognizing that it is in his interest to choose the payoff for earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.

\n
\n

I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).

\n

It is this then which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry and say \"the doctors should cooperate\".  I hope I have now broken some of that emotional symmetry.

\n

As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in hope that just before the door shut, the creationist would hold it open and say \"You're right, I was wrong. You defect, I'll cooperate -- let's save the world together.\"

\n

But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.

" } }, { "_id": "4LjFH7GxyeryaKzv4", "title": "Weekly Wiki Workshop and suggested articles", "pageUrl": "https://www.lesswrong.com/posts/4LjFH7GxyeryaKzv4/weekly-wiki-workshop-and-suggested-articles", "postedAt": "2009-04-19T01:13:23.164Z", "baseScore": 3, "voteCount": 4, "commentCount": 2, "url": null, "contents": { "documentId": "4LjFH7GxyeryaKzv4", "html": "

Now 12 days old, LWiki is still growing. Most of the articles are stubs, but progress is being made. Eliezer confirmed that an official wiki hosted on LW is eventually coming, so be careful about linking to the wiki, but don't let that deter you from adding content. 

\n

Standards and conventions are still being hashed out, so jump in now if you want to contribute. There is broad consensus that articles should primarily defer to existing work, either on OB/LW or Wikipedia. However, even quick summaries and links to blog posts can look very different depending on the subject. For example, contrast Rationality as Martial Art and Bias. The former is short and to the point, whereas the latter annotates each link. The latter also makes for much more interesting reading, in my opinion.

\n

Ok, then where do we go from here? The two main avenues of development are creating stubs and then fleshing them out with content. For the first, please suggest any topics, concepts, established phrases, acronyms, techniques, or jargon in this thread that you can think of, and I will be happy to add them as new articles. Or, better yet, feel free to add them yourself. 

\n

For the second, I suggest we have a weekly thread that designates one topic for our community to throw its collective intelligence at. That way, we can get all the relevant discussion about the content of an article out at once. For this inaugural Weekly Wiki Workshop, I'm going to suggest the Rationality article.

\n

So, what articles could the wiki use? What should the Rationality article look like?

\n

 

" } }, { "_id": "RWosa2YbcK4qeoYMD", "title": "Great Books of Failure", "pageUrl": "https://www.lesswrong.com/posts/RWosa2YbcK4qeoYMD/great-books-of-failure", "postedAt": "2009-04-19T00:59:14.063Z", "baseScore": 32, "voteCount": 28, "commentCount": 53, "url": null, "contents": { "documentId": "RWosa2YbcK4qeoYMD", "html": "

Followup toUnteachable Excellence

\n

As previously observed, extraordinary successes tend to be considered extraordinary precisely because they are hard to teach (relative to the then-current level of understanding and systematization).  On the other hand, famous failures are much more likely to contain lessons on what to avoid next time.

\n

Books about epic screwups have constituted some of my more enlightening reading.  Do you have any such books to recommend?

\n

Please break up multiple recommendations into multiple comments, one book per comment, so they can be voted on and discussed separately.  And please say at least a little about the book's subject and what sort of lesson you learned from it.

" } }, { "_id": "8fbFuQEEH5ZhD2trL", "title": "Atheist or Agnostic?", "pageUrl": "https://www.lesswrong.com/posts/8fbFuQEEH5ZhD2trL/atheist-or-agnostic", "postedAt": "2009-04-18T21:25:14.492Z", "baseScore": 7, "voteCount": 17, "commentCount": 33, "url": null, "contents": { "documentId": "8fbFuQEEH5ZhD2trL", "html": "

If you’re not sure:

\n

Where I come from, if you don’t believe in God and you don’t have a proof that God doesn’t exist, you say you’re agnostic. A typical conversation in polite company would go like this:

\n

Woman: What are your religious views?

\n

Me: Oh, I’m an atheist. You?

\n

Woman: Well, do you know for certain that God doesn’t exist?

\n

Me: I’m pretty sure, that’s what I believe.

\n

Woman: How do you know that God isn’t withholding all evidence that he exists to test your faith? How do you know that’s not the case?

\n

Me: Well, it’s possible that everything is an illusion.

\n

Woman (with finality): You’re agnostic.

\n

 

\n

Every community has its own set of definitions. Here on LW, you are an atheist, simply, if you don't believe in God. You don't have to be 100% certain – we understand that nothing is 100% certain, and you believe in God's non-existence if you believe it with the same conviction with which you believe other things, such as that the Earth orbits the sun. For a fuller explanation, see this comment.

\n

 For the rest of us:

\n

My favorite passage in the Bible is Exodus 4, because this is the part of the Bible that made me suspect that it was written by men; men who were pretty unsophisticated in their sense of justice and reasonable deity behavior. God asks Moses to come be on His side, and Moses agrees. The next thing that happens is that God is trying to kill Moses because his son isn't circumcised. I guess God already asked Moses to do that? They left that part out of the story. Nevertheless, God seems more peevish than rational here. Presumably, he chose Moses for a reason. So trying to kill him in the very next scene doesn't make a lot of sense.

\n

As someone who has had some trouble figuring out how things are thought about in atheist circles, I would like to suggest not being like God in Exodus 4 when people ask why we’re atheist even though we can’t prove there’s no God. It’s understandably annoying to repeat yourself, but they need to understand the context of atheism here. You can refer them to  this comment again or \"The Fallacy of Gray\" or here.

\n

And steel yourself for the inevitable argument that belief in God is a special case and deserves extra certainty. These are final steps…

\n

 

\n

----

\n

I would like this to be a reference for people coming onto the site that consider themselves agnostic. Any editing suggestions welcome.

" } }, { "_id": "Z6ESPufeiC4P8c8en", "title": "How a pathological procrastinor can lose weight [Anti-akrasia]", "pageUrl": "https://www.lesswrong.com/posts/Z6ESPufeiC4P8c8en/how-a-pathological-procrastinor-can-lose-weight-anti-akrasia", "postedAt": "2009-04-18T20:05:49.049Z", "baseScore": 34, "voteCount": 39, "commentCount": 31, "url": null, "contents": { "documentId": "Z6ESPufeiC4P8c8en", "html": "

[This post has now been subsumed by the following: blog.beeminder.com/akrasia. Also, the service described below, then known as Kibotzer, is now a real startup called Beeminder, announced here: http://lesswrong.com/lw/7z1/antiakrasia_tool_like_stickkcom_for_data_nerds/]

\n

If you are a pathological procrastinator you're pretty screwed when it comes to weight loss.  You have this monumental goal like \"lose 20 pounds\" but there's no \"last minute\" that you can put it off until.

\n

My partner and I have thought a lot about akrasia (i.e., failure to do what we think we should be doing) and have a tool that tries to apply some anti-akrasia principles.  It's called Kibotzer (for \"kibitzing robot\") and is currently in private beta.

\n

This is not necessarily the best way to use Kibotzer, but if you're a pathological procrastinator and want to just embrace that flaw, Kibotzer can help (it's more general than weight loss, but that makes for a nice example):

\n

1. Pick your goal weight and goal date.

\n

2. Kibotzer creates your \"Yellow Brick Road.\"

\n

\"kibotzer

\n

3. Place a bet with us that you'll stay on your Road.

\n

   (if you go off your Road for even a single day, you lose.)

\n

4. Procrastinate like hell until you're about to lose the bet.

\n

The change in focus from \"weigh 20 pounds less next year\" to \"be on the yellow brick road tomorrow morning\" makes all the difference.  If you're in the wrong \"lane\" of your Road today then it's crunch time.  You have to be on your road tomorrow morning.  Pull an all-nighter on the treadmill if that's what it takes.

\n

In one sense that mentality's crazy.  Whatever you do in any single 24 hour period makes essentially no difference to your weight next year. But that's the kind of thinking that let you drift away from your ideal weight in the first place.  The whole secret of Kibotzer is to automatically break down your long-term goal into day-to-day guidance.  And then, critically, add a wager to force you to stick to it.
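To make that "day-to-day guidance" concrete, here is a minimal sketch of how such a road could be computed. This is not Kibotzer's actual algorithm; the function name, the straight-line interpolation, and the fixed lane width are my own assumptions for illustration.

```python
from datetime import date

def road_ceiling(start_weight, goal_weight, start_day, goal_day, today, lane_width=2.0):
    """Hypothetical daily ceiling: a straight line from start weight to goal weight,
    plus a fixed lane_width (in pounds) to absorb ordinary day-to-day fluctuation."""
    total_days = (goal_day - start_day).days
    elapsed = (today - start_day).days
    target = start_weight + (goal_weight - start_weight) * elapsed / total_days
    return target + lane_width

# Example: lose 20 pounds over a year, checked 30 days in.
ceiling = road_ceiling(180.0, 160.0, date(2009, 1, 1), date(2010, 1, 1), date(2009, 1, 31))
print(f"Tomorrow morning you need to weigh in under {ceiling:.1f} lbs.")
```

The point of the sketch is only that a year-long goal collapses into a single number you have to beat tomorrow morning.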

\n

Kibotzer's tagline is \"Bring Long-Term Consequences Near!\"  (Note that this differs from Stickk.com which adds consequences but can't bring them quite so near!)

\n

We're interested to get the opinions of folks on LessWrong and perhaps some of you would like to be guinea pigs...

\n

I'll put the rest of the details in the form of an FAQ.  Basically, we want to make sure we never cheat anyone out of money so we have safeguards we've worked out based on previous bets.

\n

FREQUENTLY ASKED QUESTIONS:

\n

1. \"What if I have a random up-day because I'm retaining water or something?\"

\n

The Yellow Brick Road adjusts its width so you shouldn't ever lose because of a random up-day.  We want to set unbending rules where each day matters, because that's what's motivating (no \"I'll catch up later\" where you dig yourself in a hole) but you should never lose on a technicality.

\n

 

\n

2. \"What if I forget to reply to the bot or get too busy?\"

\n

If you stop replying to the bot you automatically get your money back. We only want your money if we're providing something so valuable that you want to interact with it continually.

\n

 

\n

3. \"My goal is a year away; will you just hold my money that whole time?\"

\n

Whenever we're holding on to your money we pay a fair interest rate on it.

\n

 

\n

4. \"It seems a little unforgiving; everyone makes mistakes...\"

\n

The Yellow Brick Road itself allows for a nice margin of error but to further ensure that you don't lose because of one or two mistakes, there's a \"three-strikes\" policy:  You can drive off your Road twice and the road will then be reset from where you currently are, targeting the same goal weight and goal date.  Only on the third time do you actually lose the bet.

\n

 

\n

5. \"Do I still win if I go off the road once but end up reaching my goal in time?\"

\n

The short answer is that you lose if you go off at *any* time (modulo the three-strikes policy).  *But* the brilliance of Kibotzer is that it *knows* about random fluctuation, water retention, and hormonal cycles: the road is wide enough that you will never lose on a technicality.  What that roughly means is that you have to mostly stay in the right lane of your yellow brick road and reserve the left lane as your safety buffer for random (or monthly) up-days.

\n

Recall Kibotzer's goal: \"bring long-term consequences near\".  In other words, the fact that you lose the game if you go off *tomorrow* is by design.  It's very hard to, for example, forgo that piece of pie merely because it will make it harder to weigh 20 pounds less 10 months from now.  Please!  One piece of pie won't make the difference and there's plenty of time to catch up!  Each individual piece of pie is *totally worth it*.  Same with each workout you really don't feel like doing right now.  Which of course is how you and everyone else in the country end up 20 pounds away from their ideal.  With Kibotzer that whole dynamic changes: when you're in the wrong lane of your road that one piece of pie could very well make the difference *tomorrow morning* and you're acutely aware of it.  The consequences are immediate.  And of course even better is the flipside of that coin: if you are well into your right lane then it's very very nice to be able to enjoy your hard-earned safety buffer and eat that piece of pie guilt-free!

\n

 

\n

6. \"The graphs and numbers and betting seem a little gimmicky; is there another way to do this?\"

\n

Fundamentally it has to involve making a genuine commitment. Like, yes, I'm perfectly capable of staying below X pounds and I can commit to doing that. And then \"commit\" has to actually be made meaningful. Risking a painful chunk of money is the simplest way to do that.

\n

It's sad but it often doesn't mean much when we verbally commit to something. (Some people are worse about this than others.) So the bet is like an acknowledgment that there's two \"me\"s: the me right now who definitely wants this to happen and then future-me who is going to thumb their nose and thwart it. You just have to force future-me's hand. Forget the charade that it's the same me -- it isn't. Verbalize the commitment all you want, history proves that future-me has a damn good chance of thumbing their nose at you (after all, how did you end up well over your ideal weight in the first place?). So if current-me is really serious then prove it by making it impossible for future-me to renege. Or, not impossible, just make future-me not *want* to renege. That's the best you can do and all that's needed.

\n

 

\n

7. \"How do I actually place a bet?\"

\n

Email me with how much money you might be willing to risk (or indicate this on kibotzer.com/register).  I'll reply with the odds (how much you can win).  If that's acceptable, there's a \"donate\" button at kibotzer.com/money where you can put up the money.  The rest is honor system for now.

\n

 

\n

8. \"I don't understand betting lingo.  What are 'odds'?\"  (Probably don't need this one for LessWrong folks but interested to hear your ideas on how to explain this sort of thing to layfolk!)

\n

First, \"even odds\" means that if you win you double your money.  If I'm betting on something where I'll probably lose then I'll want better odds to compensate, meaning that I'll more than double my money on the off chance that I win.  Higher risk, higher reward.  From your perspective, it's lower risk so you'll have a lower reward if you win.

\n

For example, if you choose to risk $1000 then we'll figure you're highly likely to win so we might offer you odds where you risk the $1000 but only win $50 if you stay on your road (we'll also factor in how steep your Road is).  If you're sufficiently sure you can stay on your Road then that's a free $50 for you. (Of course the real advantage is the motivation it provides and the fact that you end up at the end of your Yellow Brick Road!)
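As a worked example of that arithmetic (my own numbers, not an official quote): risking $1000 to win $50 has positive expected value only if your chance of staying on the road is above roughly 95%, which is exactly the "highly likely to win" situation described above.

```python
def break_even_probability(stake, winnings):
    """Win probability at which the bet has zero expected profit."""
    return stake / (stake + winnings)

def expected_profit(p_win, stake, winnings):
    """Expected profit given your probability of staying on the road."""
    return p_win * winnings - (1 - p_win) * stake

print(break_even_probability(1000, 50))   # ~0.952
print(expected_profit(0.99, 1000, 50))    # +39.5: worth it if you're 99% sure
print(expected_profit(0.90, 1000, 50))    # -55.0: not worth it at 90%
```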

" } }, { "_id": "vtMSQtxH7ei2a5T4r", "title": "The Epistemic Prisoner's Dilemma", "pageUrl": "https://www.lesswrong.com/posts/vtMSQtxH7ei2a5T4r/the-epistemic-prisoner-s-dilemma", "postedAt": "2009-04-18T05:36:17.561Z", "baseScore": 121, "voteCount": 72, "commentCount": 46, "url": null, "contents": { "documentId": "vtMSQtxH7ei2a5T4r", "html": "

Let us say you are a doctor, and you are dealing with a malaria epidemic in your village. You are faced with two problems. First, you have no access to the drugs needed for treatment. Second, you are one of two doctors in the village, and the two of you cannot agree on the nature of the disease itself. You, having carefully tested many patients, being a highly skilled, well-educated diagnostician, have proven to yourself that the disease in question is malaria. Of this you are >99% certain. Yet your colleague, the blinkered fool, insists that you are dealing with an outbreak of bird flu, and to this he assigns >99% certainty.

\n

Well, it need hardly be said that someone here is failing at rationality. Rational agents do not have common knowledge of disagreements etc. But... what can we say? We're human, and it happens.

\n

So, let's say that one day, Omega (well, Dr. House) calls you both into his office and tells you that he knows, with certainty, which disease is afflicting the villagers. As confident as you both are in your own diagnoses, you are even more confident in House's abilities. House, however, will not tell you his diagnosis until you've played a game with him. He's going to put you in one room and your colleague in another. He's going to offer you a choice between 5,000 units of malaria medication, and 10,000 units of bird-flu medication. At the same time, he's going to offer your colleague a choice between 5,000 units of bird-flu meds, and 10,000 units of malaria meds.

\n

(Let us assume that keeping a malaria patient alive and healthy takes the same number of units of malaria meds as keeping a bird flu patient alive and healthy takes bird flu meds).

\n

You know the disease in question is malaria. The bird-flu drugs are literally worthless to you, and the malaria drugs will save lives. You might worry that your colleague would be upset with you for making this decision, but you also know House is going to tell him that it was actually malaria before he sees you. Far from being angry, he'll embrace you, and thank you for doing the right thing, despite his blindness.

\n

So you take the 5,000 units of malaria medication, your colleague takes the 5,000 units of bird-flu meds (reasoning in precisely the same way), and you have 5,000 units of useful drugs with which to fight the outbreak.

\n

Had you each taken that which you supposed to be worthless, you'd be guaranteed 10,000 units. I don't think you can claim to have acted rationally.

\n

Now obviously you should be able to do even better than that. You should be able to take one another's estimates into account, share evidence, revise your estimates, reach a probability you both agree on, and, if the odds exceed 2:1 in one direction or the other, jointly take 15,000 units of whatever you expect to be effective, and otherwise get 10,000 units of each. I'm not giving out any excuses for failing to take this path.
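To spell out the arithmetic behind that 2:1 threshold, here is a small sketch of the payoffs as I read them, with p standing for the probability you jointly assign to malaria after sharing evidence:

```python
def expected_useful_units(p_malaria):
    """Expected useful drug units under each joint strategy, given the payoffs above."""
    both_defect = 5_000                       # 5,000 of each drug; only one kind ever gets used
    both_cooperate = 10_000                   # 10,000 of each drug; again only one kind is useful
    all_in_malaria = 15_000 * p_malaria       # 15,000 malaria meds, useful only if it is malaria
    all_in_bird_flu = 15_000 * (1 - p_malaria)
    return both_defect, both_cooperate, all_in_malaria, all_in_bird_flu

for p in (0.5, 2 / 3, 0.9):
    print(p, expected_useful_units(p))
# Going all-in on malaria beats the guaranteed 10,000 units exactly when p > 2/3,
# i.e. when the agreed odds exceed 2:1.
```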

\n

But still, both choosing the 5,000 units strictly loses. If you can agree on nothing else, you should at least agree that cooperating is better than defecting.

\n

Thus I propose that the epistemic prisoner's dilemma, though it has unique features (the agents differ epistemically, not preferentially) should be treated by rational agents (or agents so boundedly rational that they can still have persistent disagreements) in the same way as the vanilla prisoner's dilemma. What say you?

" } }, { "_id": "jBd3Zdb8j9LHwFMwL", "title": "Rationality Quotes - April 2009", "pageUrl": "https://www.lesswrong.com/posts/jBd3Zdb8j9LHwFMwL/rationality-quotes-april-2009", "postedAt": "2009-04-18T01:26:13.041Z", "baseScore": 17, "voteCount": 21, "commentCount": 144, "url": null, "contents": { "documentId": "jBd3Zdb8j9LHwFMwL", "html": "

A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.

\n" } }, { "_id": "YSb3YzXf7p8fBYvP4", "title": "Just for fun - let's play a game.", "pageUrl": "https://www.lesswrong.com/posts/YSb3YzXf7p8fBYvP4/just-for-fun-let-s-play-a-game", "postedAt": "2009-04-17T23:13:08.720Z", "baseScore": 12, "voteCount": 15, "commentCount": 69, "url": null, "contents": { "documentId": "YSb3YzXf7p8fBYvP4", "html": "

How well can we, at Less Wrong, tell the difference between truth and fiction? Let's play a little game, which I once saw in a movie.

\n

In this game, we each say five things about ourselves, four of which are true. We then try to guess which ones are true and which ones are lies. (Go ahead and use Google, if you like.) I'll start.

\n

My five facts:

\n

1) I have another LessWrong commenter's autograph.

\n

2) I once received the first place prize in an (unsanctioned, online) Magic tournament that lasted a total of ten rounds (seven rounds of Swiss pairings, plus three single elimination rounds) but only beat an opponent at Magic during four of them.

\n

3) I've beaten Battletoads, on an actual NES, without using a Game Genie or other cheat device.

\n

4) Not too long ago, I made a $500 donation to Population Services International, using my debit card. Unfortunately, this overdrew my checking account. My parents were not pleased, not only because I overdrew my checking account, but also because they disapproved of my donating such a large amount of money.

\n

5) My favorite Star Wars movie is \"Attack of the Clones.\"

\n

(Note: I used a random number generator to determine which position would contain the lie.)

" } }, { "_id": "cPNr6JCnZATZMc6AZ", "title": "My main problem with utilitarianism", "pageUrl": "https://www.lesswrong.com/posts/cPNr6JCnZATZMc6AZ/my-main-problem-with-utilitarianism", "postedAt": "2009-04-17T20:26:26.304Z", "baseScore": -1, "voteCount": 14, "commentCount": 84, "url": null, "contents": { "documentId": "cPNr6JCnZATZMc6AZ", "html": "

It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics. The version that seems most popular goes something like this:

\n\n

There are a few obvious problems here that I won't be bothering with today:

\n\n

But my main problem is that there's very little evidence that getting utilons actually increases anybody's happiness significantly. Correlation might very well be positive, but it's just very weak. Giving people what they want is just not going to make them happy, and not giving them what they want is not going to make them unhappy. This makes perfect evolutionary sense - an organism that's content with what it has will fail in competition with one that always wants more, no matter how much it has. An organism that's so depressed it just gives up will fail in competition with one that just tries to function the best it can in its shabby circumstances. We all had extremely successful and extremely unsuccessful cases among our ancestors, and the only reason they are on our family tree is that they went for just a bit more or, respectively, for whatever little they could get.

\n

The modern economy is just wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors - and it doesn't really leave people that much happier. It seems to me that the only realistic way to significantly increase global happiness is directly hacking the happiness function in the brain - by making people happy with what they have. If there's a limit in our brains, some number of utilons at which we'd stay happy, it's there only because it almost never happened in our evolutionary history.

\n

There might be some drugs, or activities, or memes that increase happiness without dealing with utilons. Shouldn't we be focusing on those instead?

" } }, { "_id": "PL7KpiDdJnh6j5LZS", "title": "Two-Tier Rationalism", "pageUrl": "https://www.lesswrong.com/posts/PL7KpiDdJnh6j5LZS/two-tier-rationalism", "postedAt": "2009-04-17T19:44:16.522Z", "baseScore": 48, "voteCount": 46, "commentCount": 26, "url": null, "contents": { "documentId": "PL7KpiDdJnh6j5LZS", "html": "

Related to: Bayesians vs. Barbarians

Consequentialism1 is a catchall term for a vast number of specific ethical theories, the common thread of which is that they take goodness (usually of a state of affairs) to be the determining factor of rightness (usually of an action).  One family of consequentialisms that came to mind when it was suggested that I post about my Weird Forms of Utilitarianism class is called \"Two-Tier Consequentialism\", which I think can be made to connect interestingly to our rationalism goals on Less Wrong.  Here's a summary of two-tier consequentialism2.

(Some form of) consequentialism is correct and yields the right answer about what people ought to do.  But (this form of) consequentialism has many bad features:

\n\n\n\n


To solve these problems, some consequentialist ethicists (my class focused on Railton and Hare) invented \"two-tier consequentialism\".  The basic idea is that because of all these bad features of (pick your favorite kind of) consequentialism, being a consequentialist has bad consequences, and therefore you shouldn't do it.  Instead, you should layer on top of your consequentialist thinking a second tier of moral principles called your \"Practically Ideal Moral Code\", which ought to have the following more convenient properties:

\n\n


That last part is key, because the two-tier consequentialist is not abandoning consequentialism.  Unlike a rule consequentialist, he still thinks that any given action (if his favorite consequentialism is act-based) is right according to the goodness of something (probably a resulting state of affairs), not according to whether it is permitted by his Practically Ideal Moral Code.  He simply brainwashes himself into using his Practically Ideal Moral Code because over the long run, this will be for the best according to his initial, consequentialist values.

And here is the reason I linked to \"Bayesians vs. Barbarians\", above: what Eliezer is proposing as the best course of action for a rationalist society that is attacked from without sounds like a second-tier rationalism.  If it is rational - for the society as a whole and in the long run - that there be soldiers chosen by a particular mechanism who go on to promptly obey the orders of their commanding officers, well, then, the rational society will just have to arrange for that - even if this causes some individual actions to be non-rational - because the general strategy is the one that generates the results they are interested in (winning the war), and the most rational general strategy isn't one that consists of all the most individually rational parts.

In ethics, the three main families are consequentialism (rightness via goodness), deontic ethics (rightness as adherence to duty), and virtue ethics (rightness as the implementation of virtues and/or faithfulness to an archetype of a good person).  Inasmuch as rationality and morality are isomorphic, it seems like you could just as easily have a duty-based rationalism or a virtue-based rationalism.  I have strong sympathy for two-tier consequentialism as consequentialist ethics go.  But it seems like Eliezer is presupposing a kind of consequentialism of rationality, both in that article and in general with the maxim \"rationalists should win!\"  It sounds rather like we are supposed to be rationalists because rationalists win, and winning is good.

I don't know how widely that maps onto other people's motivations, but for my part, I think my intuitions around why I wish to be rational have more to do with considering it a virtue.  I like winning just fine, but even if it turns out that devoting myself to ever finer-grained rationalism confers no significant winnings, I will still consider it a valuable thing in and of itself to be rational.  It's not difficult to imagine someone who thinks that it is the duty of intelligent beings in general to hone their intellects in the form of rationalist training; such a person would be more of a deontic rationalist.

\n



1I'm not a consequentialist of any stripe myself.  However, my views are almost extensionally equivalent with an extremely overcomplicated interpretation of Nozickian side-constraint rights-based utilitarianism.

2Paraphrased liberally from classroom handouts by Fred Feldman.

3The example we were given in class was of a man who buys flowers for his wife and, when asked why, says, \"As her husband, I'm in a special position to efficiently generate utility in her, and considered buying her flowers to be for the best overall.\"  This in contrast to, \"Well, because she's my wife and I love her and she deserves a bouquet of carnations every now and then.\"

" } }, { "_id": "kWRFfWfhrS6m4Fibm", "title": "Anti-rationality quotes", "pageUrl": "https://www.lesswrong.com/posts/kWRFfWfhrS6m4Fibm/anti-rationality-quotes", "postedAt": "2009-04-17T17:55:30.517Z", "baseScore": 12, "voteCount": 26, "commentCount": 53, "url": null, "contents": { "documentId": "kWRFfWfhrS6m4Fibm", "html": " \n

There's a semi-regular feature on OB called \"Rationality quotes\".  In Marketing rationality, I'm claiming that for conservative religious people, using rationality instrumentally is as epistemically dangerous as for us to use faith instrumentally.  People object.  So to supplement that, I'm giving you a list of anti-rationality quotes.  I originally compiled them to respond to a theologian who claimed that Christianity encouraged inquisitiveness; but I think they apply to reason as well.  Please note that these quotes are not from obscure authors; all of these quotes, with the possible exception of Sturgeon, are from authors who are considered by Christians (either Catholics or Protestants) to be more authoritative than anyone alive today.

\n

Some of these examples come from “Curiosity, Forbidden Knowledge, and the Reformation of Natural Philosophy in Early Modern England”, Peter Harrison, Isis, Vol. 92, No. 2 (June 2001), pp. 265-290; and from The Uses of Curiosity in Early Modern France and Germany, Neil Kenny (2004).  Two quotes come from chapter 5 of Hitchens, God is Not Great.  (His Aquinas quote, however, says exactly the opposite of what Aquinas actually said.)  Some of them I found myself.

\n

Also see the Wikipedia page on the Syllabus of Errors that byrnema provided.

\n

ADDED:  ICMMT.  I concede that the relation between rationalists using unreason, and Christians using reason, is not symmetric.  But it is analogical.

\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n" } }, { "_id": "bXqtCQYXgrFd9DYaZ", "title": "Chomsky on reason and science", "pageUrl": "https://www.lesswrong.com/posts/bXqtCQYXgrFd9DYaZ/chomsky-on-reason-and-science", "postedAt": "2009-04-17T17:52:35.547Z", "baseScore": 9, "voteCount": 8, "commentCount": 7, "url": null, "contents": { "documentId": "bXqtCQYXgrFd9DYaZ", "html": "

I came across this delightful 1995 article by Noam Chomsky while testing whether googling 'rationality' would lead people to LW (it doesn't).  It defends rational inquiry against postmodern, Kuhnian attacks1.  I was pleasantly surprised, because Chomsky is ideologically aligned with the people making the attacks.  (Also because I have reservations about Chomsky's rationality, which I will not state because I don't want this to turn into a discussion of Chomsky, socialism, American foreign policy, or universal grammar.)

\n

Here are some choice sentences:

\n
\n

With regard to the second problem, since what is called \"science,\" etc., is largely unfamiliar to me, let me replace it by \"X,\" and see if I understand the argument against X. Let's consider several kinds of properties attributed to X, then turning to the proposals for a new direction; quotes below are from the papers criticizing X.

\n

<long paragraph of DHMO-like attributions about X>

\n

Conclusion: there is \"something inherently wrong\" with X. We must reject or transcend it, replacing it by something else; and we must instruct poor and suffering people to do so likewise. It follows that we must abandon literacy and the arts, which surely satisfy the conditions on X as well as science. More generally, we must take a vow of silence and induce the world's victims to do so likewise since language and its use typically have all these properties.

\n
\n

...

\n
\n

There is also at least an element of truth in the statement that the natural sciences are \"disembedded from the body, from metaphorical thought, from ethical thought and from the world\"--to their credit. ... Though scientists are human, and cannot get out of their skins, they certainly, if honest, try to overcome the distortions imposed by \"body\" (in particular, human cognitive structures, with their specific properties) as much as possible. ... It is also true that \"Reason separates the `real' or knowable...and the `not real',\" or at least tries to (without identifying \"real\" with \"knowable\")--again, to its credit.

\n
\n

...

\n
\n

It strikes me as remarkable that their left counterparts today should seek to deprive oppressed people not only of the joys of understanding and insight, but also of tools of emancipation, informing us that the \"project of the Enlightenment\" is dead, that we must abandon the \"illusions\" of science and rationality--a message that will gladden the hearts of the powerful, delighted to monopolize these instruments for their own use.

\n
\n

...

\n
\n

The critique of \"science\" and \"rationality\" has many merits, which I haven't discussed. But as far as I can see, where valid and useful the critique is largely devoted to the perversion of the values of rational inquiry as they are \"wrongly used\" in a particular institutional setting. What is presented here as a deeper critique of their nature seems to me based on beliefs about the enterprise and its guiding values that have little basis. No coherent alternative is suggested, as far as I can discern; the reason, perhaps, is that there is none. What is suggested is a path that leads directly to disaster for people who need help--which means everyone, before too long.

\n
\n

 

\n

1  Kuhn later claimed not to have made these kinds of attacks on science.  I don't accept citations of Kuhn's interpretation of Kuhn as valid; I've concluded that my interpretation of Kuhn-1962 is more accurate than Kuhn-1977's interpretation of Kuhn-1962.  What I think happened was that Kuhn made a lot of radical claims and rode them to fame; once he was famous and part of the establishment, it was advantageous to abandon those claims and pretend not to have made them.  Anyway, Kuhn has said \"I am not a Kuhnian\", so I take that as license to keep using the term the way I used it.

" } }, { "_id": "J2zFSMNxkz2CaPYqq", "title": "While we're on the subject of meta-ethics...", "pageUrl": "https://www.lesswrong.com/posts/J2zFSMNxkz2CaPYqq/while-we-re-on-the-subject-of-meta-ethics", "postedAt": "2009-04-17T08:01:10.478Z", "baseScore": 7, "voteCount": 8, "commentCount": 4, "url": null, "contents": { "documentId": "J2zFSMNxkz2CaPYqq", "html": "

The best theory of morality I've ever found is the one invented by Alonzo Fyfe, which he chose to call \"desire utilitarianism.\"

\n

This short e-book (warning: pdf), written by a commenter on Alonzo's blog, describes the theory very well. He also wrote a FAQ.

\n

One great advantage of this theory is that what it describes actually exists even if you prefer to use the word \"morality\" to mean something else.  Even a community of paperclip maximizers may find something in it to be relevant, amazingly enough.

" } }, { "_id": "M2LWXsJxKS626QNEA", "title": "The Trouble With \"Good\"", "pageUrl": "https://www.lesswrong.com/posts/M2LWXsJxKS626QNEA/the-trouble-with-good", "postedAt": "2009-04-17T02:07:32.881Z", "baseScore": 100, "voteCount": 103, "commentCount": 137, "url": null, "contents": { "documentId": "M2LWXsJxKS626QNEA", "html": "

Related to: How An Algorithm Feels From Inside, The Affect Heuristic, The Power of Positivist Thinking

\n

I am a normative utilitarian and a descriptive emotivist: I believe utilitarianism is the correct way to resolve moral problems, but that the normal mental algorithms for resolving moral problems use emotivism.

Emotivism, aka the yay/boo theory, is the belief that moral statements, however official they may sound, are merely personal opinions of preference or dislike. Thus, \"feeding the hungry is a moral duty\" corresponds to \"yay for feeding the hungry!\" and \"murdering kittens is wrong\" corresponds to \"boo for kitten murderers!\"

Emotivism is a very nice theory of what people actually mean when they make moral statements. Billions of people around the world, even the non-religious, happily make moral statements every day without having any idea what they reduce to or feeling like they ought to reduce to anything.

Emotivism also does a remarkably good job capturing the common meanings of the words \"good\" and \"bad\". An average person may have beliefs like \"pizza is good, but seafood is bad\", \"Israel is good, but Palestine is bad\", \"the book was good, but the movie was bad\", \"atheism is good, theism is bad\", \"evolution is good, creationism is bad\", and \"dogs are good, but cats are bad\". Some of these seem to be moral beliefs, others seem to be factual beliefs, and others seem to be personal preferences. But we are happy using the word \"good\" for all of them, and it doesn't feel like we're using the same word in several different ways, the way it does when we use \"right\" to mean both \"correct\" and \"opposite of left\". It feels like they're all just the same thing. The moral theory that captures that feeling is emotivism. Yay pizza, books, Israelis, atheists, dogs, and evolution! Boo seafood, Palestinians, movies, theists, creationism, and cats!

\n

\n

Remember, evolution is a crazy tinker who recycles everything. So it would not be surprising to find that our morality is a quick hack on the same machinery that runs our decisions about which food to eat or which pet to adopt. To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote \"cats\" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote \"Palestinians\" a few points. Richard Dawkins just said something especially witty, so you up-vote \"atheism\". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.1
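If it helps, here is a toy sketch of that metaphor in code. The concepts and point values are made up, and of course this is the flawed native heuristic being described, not a recommendation.

```python
affect = {}  # running "karma" score per concept

def vote(concept, points):
    """Nudge a concept's affect score up or down after an emotional association."""
    affect[concept] = affect.get(concept, 0) + points

def verdict(concept):
    """Collapse the score into the crude seek/avoid judgment."""
    return "seek / endorse" if affect.get(concept, 0) > 0 else "avoid / condemn"

vote("cats", -2)      # allergic to cats
vote("atheism", +1)   # Dawkins said something witty

for concept in ("cats", "atheism"):
    print(concept, verdict(concept))
```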

Remember back during the presidential election, when a McCain supporter claimed that an Obama supporter attacked her and carved a \"B\" on her face with a knife? This was HUGE news. All of my Republican friends started emailing me  and saying \"Hey, did you hear about this, this proves we've been right all along!\" And all my Democratic friends were grumbling and saying how it was probably made up and how we should all just forget the whole thing.

And then it turned out it WAS all made up, and the McCain supporter had faked the whole affair. And now all of my Democrat friends started emailing me and saying \"Hey, did you hear about this, it shows what those Republicans and McCain supporters are REALLY like!\" and so on, and the Republicans were trying to bury it as quickly as possible.

The overwhelmingly interesting thing I noticed here was that everyone seemed to accept - not explicitly, but implicitly very much - that an Obama supporter acting violently was in some sense evidence against Obama or justification for opposition to Obama; or, that a McCain supporter acting dishonestly was in some sense evidence against McCain or confirmation that Obama supporters were better people. To a Bayesian, this would be balderdash. But to an emotivist, where any bad feelings associated with Obama count against him, it sort of makes sense. All those people emailing me about this were saying: Look, here is something negative associated with Obama; downvote him!2

So this is one problem: the inputs to our mental karma system aren't always closely related to the real merit of a person/thing/idea.

Another problem: our interpretation of whether to upvote or downvote something depends on how many upvotes or downvotes it already has. Here on Less Wrong we call this an information cascade. In the mind, we call it an Affective Death Spiral.

Another problem: we are tempted to assign everything about a concept the same score. Eliezer Yudkowsky currently has 2486 karma. How good is Eliezer at philosophy? Apparently somewhere around the level it would take to get 2486 karma. How much does he know about economics? Somewhere around level 2486 would be my guess. How well does he write? Probably well enough to get 2486 karma. Translated into mental terms, this looks like the Halo Effect. Yes, we can pick apart our analyses in greater detail; having read Eliezer's posts, I know he's better at some things than others. But that 2486 number is going to cause anchoring-and-adjustment issues even so.

But the big problem, the world-breaking problem, is that sticking everything good and bad about something into one big bin and making decisions based on whether it's a net positive or a net negative is an unsubtle, leaky heuristic completely unsuitable for complicated problems.

Take gun control. Are guns good or bad? My gut-level emotivist response is: bad. They're loud and scary and dangerous and they shoot people and often kill them. It is very tempting to say: guns are bad, therefore we should have fewer of them, therefore gun control. I'm not saying gun control is therefore wrong: reversed stupidity is not intelligence. I'm just saying that before you can rationally consider whether or not gun control is wrong, you need to get past this mode of thinking about the problem.

In the hopes of using theism less often, a bunch of Less Wrongers have agreed that the War on Drugs would make a good stock example of irrationality. So, why is the War on Drugs so popular? I think it's because drugs are obviously BAD. They addict people, break up their families, destroy their health, drive them into poverty, and eventually kill them. If we've got to have a category \"drugs\"3, and we've got to call it either \"good\" or \"bad\", then \"bad\" is clearly the way to go. And if drugs are bad, getting rid of them would be good! Right?

So how do we avoid all of these problems?

I said at the very beginning that I think we should switch to solving moral problems through utilitarianism. But we can't do that directly. If we ask utilitarianism \"Are drugs good or bad?\" it returns: CATEGORY ERROR. Good for it.

Utilitarianism can only be applied to states, actions, or decisions, and it can only return a comparative result. Want to know whether stopping or diverting the trolley in the Trolley Problem would be better? Utilitarianism can tell you. That's because it's a decision between two alternatives (alternate way of looking at it: two possible actions; or two possible states) and all you need to do is figure out which of the two is higher utility.
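A minimal sketch of the shape of that calculation (the utility numbers are placeholders; the only point is that the procedure takes concrete alternatives and returns a comparison, and chokes on a bare concept):

```python
def better_action(actions):
    """Given a dict mapping concrete actions to the utility of the resulting state,
    return the action that leads to the higher-utility state."""
    if not isinstance(actions, dict):
        raise TypeError("CATEGORY ERROR: utilitarianism compares actions or states; "
                        "it does not rate a bare concept as 'good' or 'bad'.")
    return max(actions, key=actions.get)

# Trolley problem with made-up utilities:
print(better_action({"do nothing (five die)": -5.0, "divert (one dies)": -1.0}))

# better_action("drugs") would raise the category error, as it should.
```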

When people say \"Utilitarianism says slavery is bad\" or \"Utilitarianism says murder is wrong\" - well, a utilitarian would endorse those statements over their opposites, but it takes a lot of interpretation first. What utilitarianism properly says is \"In this particular situation, the action of freeing the slaves leads to a higher utility state than not doing so\" and possibly \"and the same would be true of any broadly similar situation\".

But why in blue blazes can't we just go ahead and say \"slavery is bad\"? What could possibly go wrong?

Ask an anarchist. Taxation of X% means you're forced to work for X% of the year without getting paid. Therefore, since slavery is \"being forced to work without pay\" taxation is slavery. Since slavery is bad, taxation is bad. Therefore government is bad and statists are no better than slavemasters.4

(again, reversed stupidity is not intelligence. There are good arguments against taxation. But this is not one of them.)

Emotivism is the native architecture of the human mind. No one can think like a utilitarian all the time. But when you are in an Irresolvable Debate, utilitarian thinking may become necessary to avoid dangling variable problems around the word \"good\" (cf. Islam is a religion of peace). Problems that are insoluble at the emotivist level can be reduced, simplified, and resolved on the utilitarian level with enough effort.

I've used the example before, and I'll use it again. Israel versus Palestine. One person can go on and on for months about all the reasons the Israelis are totally right and the Palestinians are completely in the wrong, and another person can go on just as long about how the Israelis are evil oppressors and the Palestinians just want freedom. And then if you ask them about an action, or a decision, or a state - they've never thought about it. They'll both answer something like \"I dunno, the two-state solution or something?\". And if they still disagree at this level, you can suddenly apply the full power of utilitarianism to the problem in a way that tugs sideways to all of their personal prejudices.

In general, any debate about whether something is \"good\" or \"bad\" is sketchy, and can be changed to a more useful form by converting the thing to an action and applying utilitarianism.

\n

Footnotes:

1: It should be noted that this karma analogy can't explain our original perception of good and bad, only the system we use for combining, processing, and utilizing it. My guess is that the original judgment of good or bad takes place through association with other previously determined good or bad things, down to the bottom-level ones that are programmed into the organism (i.e. pain, hunger, death), with some input from the rational centers.

2: More evidence: we tend to like the idea of \"good\" or \"bad\" being innate qualities of objects. Thus the alternative medicine practitioner who tells you that real medicine is bad, because it uses scary pungent chemicals, which are unhealthy, and alternative medicine is good, because it uses roots and plants and flowers, which everyone likes. Or fantasy books, where the Golden Sword of Holy Light can only be wielded for good, and the Dark Sword of Demonic Shadow can only be wielded for evil.

3: Of course, the battle has already been half-lost once you have a category \"drugs\". Eliezer once mentioned something about how considering {Adolf Hitler, Joe Stalin, John Smith} a natural category isn't going to do John Smith any good, no matter how nice a man he may be. In the category \"drugs\", which looks like {cocaine, heroin, LSD, marijuana}, LSD and marijuana get to play the role of John Smith.

\n

4: And, uh, I'm sure Louis XVI would feel the same way. Sorry. I couldn't think of a better example.

" } }, { "_id": "D8vWc2SdBsTbmcrNw", "title": "The Art of Critical Decision Making", "pageUrl": "https://www.lesswrong.com/posts/D8vWc2SdBsTbmcrNw/the-art-of-critical-decision-making", "postedAt": "2009-04-17T01:54:32.266Z", "baseScore": -2, "voteCount": 4, "commentCount": 7, "url": null, "contents": { "documentId": "D8vWc2SdBsTbmcrNw", "html": "

The Art of Critical Decision Making is a new 12-hour lecture series (audio and video) from The Teaching Company, currently available as an audio MP3 download for $35.  After May 14 it will cost $130.

" } }, { "_id": "FBgozHEv7J72NCEPB", "title": "My Way", "pageUrl": "https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way", "postedAt": "2009-04-17T01:25:26.171Z", "baseScore": 49, "voteCount": 61, "commentCount": 126, "url": null, "contents": { "documentId": "FBgozHEv7J72NCEPB", "html": "

Previously in series: Bayesians vs. Barbarians
Followup to: Of Gender and Rationality, Beware of Other-Optimizing

\n

There is no such thing as masculine probability theory or feminine decision theory.  In their pure form, the maths probably aren't even human.  But the human practice of rationality—the arts associated with, for example, motivating yourself, or compensating factors applied to overcome your own biases—these things can in principle differ from gender to gender, or from person to person.

\n

My attention was first drawn to this possibility of individual differences in optimization (in general) by thinking about rationality and gender (in particular).  I've written rather more fiction than I've ever finished and published, including a story in which the main character, who happens to be the most rational person around, happens to be female.  I experienced no particular difficulty in writing a female character who happened to be a rationalist.  But she was not an obtrusive, explicit rationalist.  She was not Jeffreyssai.

\n

And it occurred to me that I could not imagine how to write Jeffreyssai as a woman; his way of teaching is paternal, not maternal.  Even more, it occurred to me that in my writing there are women who are highly rational (on their way to other goals) but not women who are rationalists (as their primary, explicit role in the story).

\n

It was at this point that I realized how much of my own take on rationality was specifically male, which hinted in turn that even more of it might be specifically Eliezer Yudkowsky.

\n

A parenthetical, at this point, upon my own gender politics (lest anyone misinterpret me here).  Of much of what passes for gender politics in present times, I have very little patience, as you might guess.  But as recently as the 1970s this still passed for educational material, which makes me a bit more sympathetic.

\n

So this about my gender politics:  Unlike the case with, say, race, I don't think that an optimal outcome consists of gender distinctions being obliterated.  If the day comes when no one notices or cares whether someone is black or white, any more than they notice eye color, I would only applaud.  But obliterating the difference between male and female does not seem to me desirable, and I am glad that it is impossible using present-day technology; the fact that humanity has (at least) two sexes is part of what keeps life interesting.

\n

But it seems to me that, as an inheritance from the dark ages, the concept of \"normal\" is tilted more toward male than female.  Men are not constantly made aware that they are men in the same way that women are made constantly aware that they are women.  (Though there are contexts where explicit masculinity is suddenly a focus.)  It's not fun for women if female is defined as abnormal, as special.  And so some feminists direct their efforts into trying to collapse gender distinctions, the way you would try to collapse racial distinctions.  Just have everyone be normal, part of the same group.  But I don't think that's realistic for our species—sex is real, it's not just gender—and in any case I prefer to live in a culture with (at least) two genders.

\n

So—rather than obliterate the difference between genders into a common normality—I think that men should become more aware of themselves as men, so that being female isn't any more special or unusual or abnormal or worthy-of-remark than being male.  Until a man sees his own argumentativeness as a distinctively male trait, he'll see women as abnormally passive (departures from the norm) rather than thinking \"I am a male and therefore argumentative\" (in the same way that women now identify various parts of themselves as feminine).

\n

And yes, this does involve all sorts of dangers.  Other cultures already have stronger male gender identities, and that's not always a good thing for the women in those cultures, if that culture already has an imbalance of power.  But I'm not sure that the safe-seeming path of trying to obliterate as many distinctions as possible is really available; men and women are different.  Moreover, I like being a man free to express those forms of masculinity that I think are worthwhile, and I want to live in a world in which women are free to express whatever forms of femininity they think are worthwhile.

\n

I'm saying all this, because I look over my accumulated essays and see that I am a distinctively male rationalist.  Meanwhile, in another thread, a number of my fellow rationalists did go to some length to disidentify themselves as \"female rationalists\".  I am sympathetic; from having been a child prodigy, I know how annoying it is to be celebrated as \"having done so much while so young\" rather than just \"having done neat stuff in its own right regardless of age\".  I doubt that being singled out as an \"amazing female rationalist\" is any less annoying.  But still:  I built my art out of myself, and it became tied into every part of myself, and it happens to be a fact that I'm male.  And if a woman were to pursue her art far enough, and tie it into every part of herself, she would, I think, find that her art came to resemble herself more and more, tied into her own motives and preferences; so that her art was, among other things, female.

\n

It's hard to pin down this sort of thing exactly, because my own brain knows only half the story.  My understanding of what it means to be female is so much shallower than my understanding of what it means to be male that it doesn't ring as true.  I will try, though, to give an example of what I mean, if you will excuse me another excursion...

\n

The single author I know who strikes me as most feminine is Jacqueline Carey.  When I read her book Kushiel's Avatar, it gave me a feeling of being overwhelmingly outmatched as an author.  I want to write characters with that kind of incredible depth and I can't.  She is too far above me as an author.  I write stories with female characters, and I wish I could write female characters who were as female as Carey's female characters, and so long as I'm dreaming, I also want to sprout wings and fly.

\n

Let me give you an example, drawn from Kushiel's Avatar.  This book—as have so many other books—involves, among its other plot points, saving the world.  A shallow understanding of sex and gender, built mostly around abstract evolutionary psychology—such as I myself possess—would suggest that \"taking great risks to save your tribe\" is likely to be a more male sort of motivation—the status payoff from success would represent a greater fitness benefit to a man, and in the ancestral environment, it is the men who defend their tribe, etcetera.  But in fact, reading SF and fantasy books by female authors, I have not noticed any particularly lower incidence of world-saving behavior by female protagonists.

\n

If you told me to write a strongly feminine character, then I, with my shallow understanding, might try to have her risk everything to save her husband.  The protagonist of Kushiel's Avatar, Phèdre nó Delaunay, does realize that the world is in danger and it needs to be saved.  But she is also, in the same process, trying to rescue a kidnapped young boy.  Her own child?  That's how I would have written the story, but no; she is trying to rescue someone else's child.  The child of her own archenemy, in fact, but no less innocent for all that.  When I look at it after the fact, I can see how this reveals a deeper femininity, not the stereotype but a step beyond and behind the stereotype, something that rings true.  Phèdre loves her husband—and this is shown not by how she puts aside saving the world to save him, but by how much it hurts her to put him in harm's way to save the world.  Her femininity is shown, not by how protective she is toward her own child, but toward someone else's child.

\n

It is this depth of writing that makes me aware of how my own brain is only regurgitating stereotypes by comparison.

\n

I do dare say that I have developed my art of rationality as thoroughly as Carey has developed her thesis on love.  And so my art taps into parts of me that are male.  I cultivate the desire to become stronger; I accept and acknowledge within myself the desire to outdo others; I have learned to take pride in my identity as someone who faces down impossible challenges.  While my own brain only knows half the story, it does seem to me that this is noticeably more a theme of shōnen anime than shōjo anime.  Watch Hikaru no Go for an idea of what I mean.

\n

And this is the reason why I can't write Jeffreyssai as a woman—I would not be able to really understand her motivations; I don't understand what taps female drives on that deep a level.  I can regurgitate stereotypes, but reading Jacqueline Carey has made me aware that my grasp is shallow; it would not ring true.

\n

What would the corresponding female rationalist be like?  I don't know.  I can't say.  Some woman has to pursue her art as far as I've pursued mine, far enough that the art she learned from others fails her, so that she must remake her shattered art in her own image and in the image of her own task.  And then tell the rest of us about it.

\n

I sometimes think of myself as being like the protagonist in a classic SF labyrinth story, wandering further and further into some alien artifact, trying to call into a radio my description of the bizarre things I'm seeing, so that I can be followed.  But what I'm finding is not just the Way, the thing that lies at the center of the labyrinth; it is also my Way, the path that I would take to come closer to the center, from whatever place I started out.

\n

(Perhaps a woman would phrase the above, not as \"Bayes's Theorem is the high pure abstract thing that is not male or female\", but rather, \"Bayes's Theorem is something we can all agree on\".  Or maybe that's only my own brain regurgitating stereotypes.)

\n

Someone's bound to suggest, \"Take the male parts out, then!  Don't describe rationality as 'the martial art of mind'.\"  Well... I may put in some work to gender-purify my planned book on rationality.  It would be too much effort to make my blog posts less like myself, in that dimension.  But I also want to point out that I enjoyed reading Kushiel's Avatar—I was not blocked from appreciating it on account of the book being visibly female.

\n

I say all this because I want to convey this important idea, that there is the Way and my Way, the pure (or perhaps shared) thing at the center, and the many paths we take there from wherever we started out.  To say that the path is individualized, is not to say that we are shielded from criticism by a screen of privacy (a common idiom of modern Dark Side Epistemology).  There is still a common thing we are all trying to find.  We should be aware that others' shortest paths may not be the same as our own, but this is not the same as giving up the ability to judge or to share.

\n

Even so, you should be aware that I have radioed back my description of the single central shape and the path I took to get closer.  If there are parts that are visibly male, then there are probably other parts—perhaps harder to identify—that are tightly bound to growing up with Orthodox Jewish parents, or (cough) certain other unusual features of my life.

\n

I think there will not be a proper Art until many people have progressed to the point of remaking the Art in their own image, and then radioed back to describe their paths.

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"The Sin of Underconfidence\"

\n

Previous post: \"Of Gender and Rationality\"

" } }, { "_id": "agJzTQ6AqZKtKnyvW", "title": "Test Post", "pageUrl": "https://www.lesswrong.com/posts/agJzTQ6AqZKtKnyvW/test-post-3", "postedAt": "2009-04-17T00:49:54.828Z", "baseScore": 1, "voteCount": 1, "commentCount": 1, "url": null, "contents": { "documentId": "agJzTQ6AqZKtKnyvW", "html": null } }, { "_id": "hbdYWmu2ozwNvvWcW", "title": "Practical rationality questionnaire", "pageUrl": "https://www.lesswrong.com/posts/hbdYWmu2ozwNvvWcW/practical-rationality-questionnaire", "postedAt": "2009-04-16T23:21:05.787Z", "baseScore": 22, "voteCount": 19, "commentCount": 28, "url": null, "contents": { "documentId": "hbdYWmu2ozwNvvWcW", "html": "

EDIT, 4/18:  I'm closing the survey.  I'll post analysis and a better anonymized version of the raw data in a day or so.  236 people responded; thanks very much to all who did.

\n

For survey participants curious about the calibration questions, the answers are:

\n

Number of republics the USSR broke up into, following the end of the Cold War: 15.

\n

The year in which the global population reached 1 billion: 1804.

\n

The average percentage of a watermelon's weight that comes from water: 92.

\n

 

\n

The old post:

\n

There has been much discussion of the extent to which rationality is or isn’t practically useful.  There have also been many calls for better empirical evidence.

In an attempt to produce empirical evidence for or against rationality’s usefulness for LW-ers, I have here a rationality questionnaire.  It takes about 15 minutes to complete, according to myself and to Katja Grace, who kindly helped me with it.  I tried to hug the query of “Are there OB/LW-like techniques, or similar techniques, that actually help LW-ers achieve their goals?”   This isn’t a test -- we’re not measuring individuals’ rationality -- we’re just looking for correlations and noisy indicators that may nevertheless give us useful info in aggregate, when used on groups.

Fill in the survey -- by following

\n

this survey link [Survey is now closed.  Though the link will still let you see the questions.]

\n

-- and know your next 15 minutes will contribute to science, truth, rationality, and the future practical successes of LW-ers.  =)  (... at least as far as expected value is concerned, if you assign some probability to this data being useful.)

\n

ADDED:  Please hold off on discussing the implications of different responses for a day or two, until the rate of survey-completions dies down.  Unless you're sure your discussion won't prejudice others' answers.

" } }, { "_id": "qTSRpyuuu6i9gGmWY", "title": "Instrumental Rationality is a Chimera", "pageUrl": "https://www.lesswrong.com/posts/qTSRpyuuu6i9gGmWY/instrumental-rationality-is-a-chimera", "postedAt": "2009-04-16T23:15:43.765Z", "baseScore": 10, "voteCount": 29, "commentCount": 36, "url": null, "contents": { "documentId": "qTSRpyuuu6i9gGmWY", "html": "

Eliezer observes, “Among all self-identified \"rationalist\" communities that I know of, and Less Wrong in particular, there is an obvious gender imbalance - a male/female ratio tilted strongly toward males.” and provides us with a selection of hypotheses that attempt to explain this notable fact, ranging over the normal cultural and biological explanations for male/female imbalances in any community. One important point was missing, however: a point raised by Yvain last week under the title Extreme Rationality: It's Not That Great. That point is that we have not done anything yet. Eliezer writes under the assumption that women ought to want to study our writings, but since we have so far failed to produce a single practical application of our rationalist techniques, I really cannot blame women for staying away. They may be being more rational than we are.

\n

\n

Long have we pondered Eliezer's enigmatic homily, “Rationalists should win,” and like the Aristotelians of old we agreed that it must be so, since a proclivity to win is inherent in the definition of the word “rationalist”.

\n

Well, have you won anything lately? Are the horizons of your power expanding, you rationalist Übermenschen? Perhaps you will say, “We have only just gotten started! We are pregnant with potential, if not abounding with achievements.”

\n

I do not mean to be impatient but it has been a few weeks now and we appear to be spinning our wheels a little tiny bit. As interesting as many of the posts here have been, I cannot recall any of them having been instrumentally useful to me, or anyone else here mentioning posts that have been instrumentally useful to them. In fact it almost seems as if most of the posts contributed by the Less Wrong community have been about the Less Wrong community. These self-referential meta-posts accumulate, and as they become increasingly impenetrable they discourage potential contributors of either sex.

\n

Since the confusion caused by this notion of instrumental rationality shows no signs of abating, I will attempt to cut the knot. There is no such thing as instrumental rationality. What is the rational way to butter toast? Brew coffee? Drive a car? Raise a child? Conduct a particle physics experiment? You will notice that the unifying feature among these examples is that there is no unifying feature among these examples. Rationality – real world, day to day, nine to five rationality – is entirely context dependent. The attempt to develop a grand unified theory of instrumental rationality is an attempt to abstract away from the details of individual circumstances, in order to come up with a Best Way To Do Everything Forever. This is untenable. Rationality can be used to choose the best course of action for achieving a particular goal, but this is simply an example of knowing the truth – epistemic rationality.

\n

I think that we have been on the wrong track, up until now. I believe we can do better, but first we must abandon the silly martial arts metaphors. You do not need academic-grade rationality every second of the day and you do not need to pretend that you are the only rational person in the world. Co-operate. In order to live rationally and live well, we must have easy access to organised expert domain knowledge in useful areas such as self-motivation, health and fitness, development of social skills, use of technology and of course, the abstract rules of epistemic rationality. I am sure there is much more that could be added to this list. To achieve this I suggest that, like an economy, we subdivide and specialise. Rather than racking their brains in an attempt to come up with something novel to say on the topic of abstract rationalism, we should encourage contributors to tell us about something they specialise in, to give us advice backed by evidence and reasoned argument about something they know a lot about, and to direct us to useful references wherein we may learn more. I imagine people contributing a guide to getting accurate medical information, tips on child psychology and raising children, or an essay on how to exercise to increase longevity.

\n

Clearly, we have a group of interested, motivated, highly intelligent people here at Less Wrong, each of whom has their own particular talent, so why not make the most of them?

" } }, { "_id": "CG9AEXwSjdrXPBEZ9", "title": "Welcome to Less Wrong!", "pageUrl": "https://www.lesswrong.com/posts/CG9AEXwSjdrXPBEZ9/welcome-to-less-wrong", "postedAt": "2009-04-16T09:06:25.124Z", "baseScore": 58, "voteCount": 53, "commentCount": 2000, "url": null, "contents": { "documentId": "CG9AEXwSjdrXPBEZ9", "html": "

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, or how you found us. Tell us how you came to identify as a rationalist, or describe what it is you value and work to achieve.

\n

If you'd like to meet other LWers in real life, there's a meetup thread and a Facebook group. If you've your own blog or other online presence, please feel free to link it. If you're confused about any of the terms used on this site, you might want to pay a visit to the LW Wiki, or simply ask a question in this thread.  Some of us have been having this conversation for a few years now, and we've developed a fairly specialized way of talking about some things. Don't worry -- you'll pick it up pretty quickly.

\n

You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have \"karma\" scores which come from the sum of all their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this; it happened to many of us. If you've any questions about karma or voting, please feel free to ask here.

\n

If you've come to Less Wrong to teach us about a particular topic, this thread would be a great place to start the conversation, especially until you've worked up enough karma for a top level post. By posting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

\n

A note for theists: you will find LW overtly atheist. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false. If you'd like to know how we came to this conclusion you may find these related posts a good starting point.

\n

A couple technical notes: when leaving comments, you may notice a 'help' link below and to the right of the text box.  This will explain how to italicize, linkify, or quote bits of text. You'll also want to check your inbox, where you can always see whether people have left responses to your comments.

\n

Welcome to Less Wrong, and we look forward to hearing from you throughout the site.

\n

(Note from MBlume: though my name is at the top of this page, the wording in various parts of the welcome message owes a debt to other LWers who've helped me considerably in working the kinks out)

" } }, { "_id": "xsyG7PkMekHud2DMK", "title": "Of Gender and Rationality", "pageUrl": "https://www.lesswrong.com/posts/xsyG7PkMekHud2DMK/of-gender-and-rationality", "postedAt": "2009-04-16T00:56:11.827Z", "baseScore": 64, "voteCount": 62, "commentCount": 361, "url": null, "contents": { "documentId": "xsyG7PkMekHud2DMK", "html": "

Among all self-identified \"rationalist\" communities that I know of, and Less Wrong in particular, there is an obvious gender imbalance—a male/female ratio tilted strongly toward males.

\n

Yet surely epistemic and instrumental rationality have no gender signature.  There is no such thing as masculine probability theory or feminine decision theory.

\n

There could be some entirely innocuous explanation for this imbalance.  Perhaps, by sheer historical contingency, aspiring rationalists are recruited primarily from the atheist/libertarian/technophile cluster, which has a gender imbalance for its own reasons—having nothing to do with rationality or rationalists; and this is the entire explanation.

\n

Uh huh.  Sure.

\n

And then there are the less innocuous explanations—those that point an accusing finger at the rationalist community, or at womankind.

\n

If possible, let's try not to make things worse in the course of having this discussion.  Remember that to name two parts of a community is to split that community—see the Robbers Cave experiment:  Two labels → two groups.  Let us try not to make some of our fellow rationalists feel singled-out as objects of scrutiny, here.  But in the long run especially, it is not a good thing if half the potential audience is being actively filtered out; whatever the cause, the effect is noticeable, and we can't afford to ignore the question.

\n

These are the major possibilities that I see:

\n

(1)  While the pure math of the right Way has no gender signatures on it, we can imagine that men and women are annoyed to different degrees by different mistakes.  Suppose that Less Wrong is too disagreeable—that relative to the ideal, just-right, perfectly-rational amount of disagreement, we have a little more disagreement than that.  You can imagine that to the men, this seems normal, forgivable, takeable in-stride—wrong, perhaps, but not really all that annoying.  And you can imagine that conversely, the female-dominated mirror-image of Less Wrong would involve too much agreement relative to the ideal—lots of comments agreeing with each other—and that while this would seem normal, forgivable, takeable-in-stride to the female majority, it would drive the men up the wall, and some of them would leave, and the rest would be gritting their teeth.  (This example plays to gender stereotypes, but that's because I'm speculating blindly; my brain only knows half the story and has to guess at the other half.  Less obvious hypotheses are also welcome.)  In a case like this, you begin by checking with trusted female rationalists to see if they think you're doing anything characteristically male, irrational, and annoying.

\n

(2)  The above points a finger at the rationalist community, and in particular its men, as making a mistake that drives away rational women.  The complementary explanation would say:  \"No, we have exactly the rational amount of argument as it stands, or even too little.  Male newcomers are fine with this, but female newcomers feel that there's too much conflict and disagreement and they leave.\"  The true Way has no gender signature, but you can have a mistake that is characteristic of one sex but not the other, or a mistake that has been culturally inculcated in one gender but not the other.  In this case we try to survey female newcomers to see what aspects seem like turn-offs (whether normatively rational or not), and then fix it (if not normatively rational) or try to soften the impact somehow (if normatively rational).  (Ultimately, though, rationality is tough for everyone—there are parts that are hard for anyone to swallow, and you just have to make it as easy as you can.)

\n

(3)  It could be some indefinable difference of style—\"indefinable\" meaning that we can't pin it down tightly enough to duplicate—whereby male writers tend to attract male recruits and female writers attract female recruits.  On this hypothesis, male writers end up with mostly male readers for much the same reason that Japanese writers end up with mostly Japanese readers.  In this case I would suggest to potential female authors that they should write more, including new introductions and similar recruiting material.  We could try for a mix of authorial genders in the material first encountered on-site.  (By the same logic that if we wanted more Japanese rationalists we might encourage potential writers who happened to be Japanese.)

\n

(4)  We could be looking at a direct gender difference—where I parenthetically note that (by convention in such discussions) \"gender\" refers to a culture's concept of what it means to be a man or woman, while \"sex\" refers to actual distinctions of XX versus XY chromosomes.  For example, consider this inspirational poster from a 1970s children's book.  \"Boys are pilots... girls are stewardesses... boys are doctors... girls are nurses.\"  \"Modern\" cultures may still have a strong dose of \"boys are rational, girls are un-self-controlled creatures of pure feeling who find logic and indeed all verbal argument to be vaguely unfeminine\".  I suppose the main remedy would be (a) to try and correct this the same way you would correct any other sort of childhood damage to sanity and (b) present strong female rationalist role models.

\n

(5)  The complementary hypothesis is a direct sex difference—i.e., the average female human actually is less interested in and compelled by deliberative reasoning compared to the average male human.  If you were motivated to correct the sex balance regardless, you would consider e.g. where to find a prefiltered audience of people compellable by deliberative reasoning, a group that already happened to have good gender balance, and go recruiting there.

\n

(6)  We could be looking an indirect gender difference.  Say, boys are raised to find a concept like \"tsuyoku naritai\" (\"I want to become stronger\") appealing, while girls are told to shut up and keep their heads down.  If the masculine gender concept has a stronger endorsement of aspiring to self-improvement, it will, as a side effect, make a stronger endorsement of improving one's rationality.  Again, the solutions would be female authors to tailor introductions to feminine audiences, and strong female role models.  (If you're a woman and you're a talented writer and speaker, consider reading up on antitheism and trying to become a Fifth Horsewoman alongside Dawkins, Dennett, Harris and Hitchens...?)

\n

(7)  We could be looking at an indirect sex difference.  The obvious evolutionary psychology hypothesis behind the imbalanced gender ratio in the iconoclastic community—the atheist/libertarian/technophile cluster—is the idea that males are inherently more attracted to gambles that seem high-risk and high-reward; they are more driven to try out strange ideas that come with big promises, because the genetic payoff for an unusually successful male has a much higher upper bound than the genetic payoff for an unusually successful female.  It seems to me that male teenagers especially have something like a higher cognitive temperature, an ability to wander into strange places both good and bad.  To some extent, this can be viewed as a problem of authorial style as well as innate dispositions—there's no law that says you have to emphasize the strangeness.  You could start right out with pictures of a happy gender-balanced rationalist unchurch somewhere, and banner the page \"A Return To Sanity\".  But a difference as basic as \"more male teenagers have a high cognitive temperature\" could prove very hard to address completely.

\n

(8)  Then there's the hypothesis made infamous by Larry Summers:  Male variance in IQ (not the mean) is higher, so the right tail is dominated by males as you get further out.  I know that just mentioning this sort of thing can cause a webpage to burst into flames, and so I would like to once again point out that individual IQ differences, whether derived from genes or eating lead-based paint as a kid, are already as awful as it gets—nothing is made any worse by talking about groups, since groups are just made out of individuals.  The universe is already dreadful along this dimension, so we shouldn't care more whether groups are involved—though of course, thanks to our political instincts, we do care.  The remedies in this not-actually-any-more-awful case are (a) continue the quest to systematize rationality training so that it is less exclusively the preserve of high-g individuals, and (b) recruit among prefiltered audiences that have good gender balance.

\n

(9)  Perhaps women are less underrepresented on Less Wrong than may at first appear, and men are more likely to comment for some reason.  Or perhaps women are less likely to choose visibly feminine usernames.  The gender ratio at physical meetups, while still unbalanced, seems noticeably better than the visible gender ratio among active commenters on the Internet.  Not very plausible as a complete explanation; but we should consider hypotheses that involve unbalanced participation/visibility rather than unbalanced attraction/retention.

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"My Way\"

\n

Previous post: \"Bayesians vs. Barbarians\"

" } }, { "_id": "wiF4xWEoBwXR4XdSY", "title": "I Changed My Mind Today - Canned Laughter", "pageUrl": "https://www.lesswrong.com/posts/wiF4xWEoBwXR4XdSY/i-changed-my-mind-today-canned-laughter", "postedAt": "2009-04-15T23:59:26.897Z", "baseScore": 15, "voteCount": 25, "commentCount": 9, "url": null, "contents": { "documentId": "wiF4xWEoBwXR4XdSY", "html": "

If we had topic-headings here, I'd be suggesting a new one: I changed my mind today.

Being rational is all about changing your mind, right? It's about re-assessing in the face of some new evidence. About examining the difference between your assumptions and the world itself. Narrowing down the difference between the model and the reality, the map and the territory.

Maybe your 'karma' should reflect how much you've told us when you changed your mind? Certainly I'd like to know when people change their minds about things more than when they just agree with me.

\n

In fact, I think that is probably the thing I most want to know about from any of the people whom I know primarily because of their professed rationality.

\n

Especially if they explain why they changed their minds, and do it well.

With that in mind, and introducing the new acronym: ICMMT

I Changed My Mind Today!

Or at least I revised my opinion.

As you may know, the UK TV channel \"Dave\" recorded and then broadcast three new episodes of \"Red Dwarf\" this Easter. If you didn't know that, your time is better spent tracking down those shows and watching them than reading the remainder of this article. Come back when you're done. If you haven't even watched the BBC originals then, um. Enjoy! See you in a year or so.

Anyway.

I enjoyed the new episodes, laughed a lot, reminisced a lot more, but was left somehow feeling more *flat* than when watching previous shows.

\n

I didn't really even know why, until a friend pointed it out:

\n

> The new shows have no 'laugh track'.

\n

As soon as she said it, I knew that I'd heard mention that this was the first time they'd shot the show without a studio audience. And that she was right. And that the \"laugh track\" had been an important part of that show to me right from the first episode.

Now this was a revelation.

Until now, when I've noticed a laugh-track or when a laugh-track has been talked about by others, it's been to bitch about \"canned laughter\" being really false and it ruining the whole atmosphere and making everything seem fake.

Which I agreed with. Totally. That stuff is damned annoying. When I notice a laugh-track, it's because the people laughing are clearly moronic idiots who'd laugh at the fact of gravity and I hate them.

However. When my friend said that the lack of that laugh track left the show 'flat', I knew exactly what she meant. It made all the difference in the world. Deathly silence, all over. The comedic timing that was essential, the knowing when the laughter comes, knowing when it dies, responding to the audience, was gone.

They were all great actors, and they faked it well. Presumably they timed it so well with my drunken-stoned first viewing that I didn't even notice. But once you do, it's obvious.

Which completely changes my view on the laugh-track in comedy.

I used to think it was just an annoying gloss, a manipulation, an attempt to program my brain through Skinneresque association. Now I see that it communicates *both sides* of an interaction between two groups of people. That the live audience, and being able to hear that audience in the edit, tells the performer exactly how to act. How to reflect the laughter and mood back.

This is annoying, to some extent. I have some minor film-making ambition, and in any show I want to shoot on any kind of budget I can afford, the audience won't be there. Yet now I see the need to find a way to do so. Which makes it all even *more* expensive and difficult.

I already knew that \"canned laughter\" isn't the same as \"filmed in front of a live studio audience\" of course. But it always just sounded like a cop-out.

Turns out that the live audience laughter and the actor actually interact. That's what makes it so compelling. That's what makes it actually work in a way that actual canned laughter doesn't. Why actual 'canned laughter', if it really exists, got its reputation.

Even showing the film to an audience and then dubbing on their laughter for release won't really cut it. The way the actor responds to the audience is more important than the way the audience responds to the actor.

So yeah.

\n

I changed my mind today. I already knew the difference between \"Live studio audience\" and \"Canned laughter\", but now I feel like I know why one irritates so much and the other stops a film looking so damned flat. I'm no longer against it in principle.

" } }, { "_id": "GTaPLD3Wponb7hFC3", "title": "Mechanics without wrenches", "pageUrl": "https://www.lesswrong.com/posts/GTaPLD3Wponb7hFC3/mechanics-without-wrenches", "postedAt": "2009-04-15T20:09:12.138Z", "baseScore": 36, "voteCount": 42, "commentCount": 78, "url": null, "contents": { "documentId": "GTaPLD3Wponb7hFC3", "html": "

Say you're taking your car to an auto mechanic for repairs.  You've been told he's the best mechanic in town.  The mechanic rolls up the steel garage door before driving the car into the garage, and you look inside and notice something funny.  There are no tools.  The garage is bare - just an empty concrete space with four bay doors and three other cars.

\n

You point this out to the mechanic.  He shrugs it off, saying, \"This is how I've always worked.  I'm just that good.  You were lucky I had an opening; I'm usually booked.\"  And you believe him, having seen the parking lot full of cars waiting to be repaired.

\n

You take your car to another mechanic in the same town.  He, too, has no tools in his garage.  You visit all the mechanics in town, and find a few that have some wrenches, and others with a jack or an air compressor, but no one with a full set of tools.

\n

You notice the streets are nearly empty besides your car.  Most of the cars in town seem to be in for repairs.  You talk to the townsfolk, and they tell you how they take their cars from one shop to another, hoping to someday find the mechanic who is brilliant and gifted enough to fix their car.

\n

I sometimes tell people how I believe that governments should not be documents, but semi-autonomous computer programs.  I have a story that I'm not going to tell now, about incorporating inequalities into laws, then incorporating functions into them, then feedback loops, then statistical measures, then learning mechanisms, on up to the point where voters and/or legislatures set only the values that control the system, and the system produces the low-level laws and policy decisions (in a way that balances exploration and exploitation).  (Robin's futarchy in which you \"vote on values, bet on beliefs\" describes a similar, though less-automated system of government.)
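To give a flavor of what a feedback loop baked into law could look like, here is a deliberately crude sketch under assumptions I am inventing on the spot: one voter-set target, one policy knob, and a proportional update rule. It is not futarchy and not a serious proposal, just the smallest example of a law that behaves like a control loop rather than a static document:

```python
# Purely illustrative: one voter-set value, one policy knob, a crude proportional feedback rule.
target_unemployment = 0.05   # the high-level 'value' voters or legislators would set
gain = 0.5                   # how aggressively the loop responds to error

def update_policy(measured_unemployment, stimulus_level):
    error = measured_unemployment - target_unemployment
    return stimulus_level + gain * error   # spend more when unemployment exceeds the target

stimulus = 0.0
for measured in [0.09, 0.08, 0.06, 0.05]:   # made-up measurements over successive periods
    stimulus = update_policy(measured, stimulus)
    print(round(stimulus, 3))
```

In a system like this, the legislature's job is to argue over the target and the gain, while the period-by-period spending decisions fall out of the loop automatically.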

\n

And one reaction - actually, one of the most intelligent reactions - is, \"But then... legislators would have to understand something about math.\"  As if that were a bug, and not a feature.

\n

We have 535 Congressmen in the United States.  Over the past half-year, they've decided how to spend several trillion of our dollars on interventions to revitalize our economy.  But after listening to them for 20 years, I have the feeling that few of them could explain the concepts of opportunity cost, diminishing returns, or the law of supply and demand.  You could probably count on one hand the number who could solve an ordinary differential equation.

\n

This isn't the fault of the congressmen.  This is the fault of the voters.  Why do we regularly elect representatives who are mechanics without wrenches?

\n

We like to praise the man who achieves great things through vision, genius, and force of personality.  If you tell people that he had great tools, people think you're trying to diminish his accomplishments.  People love Einstein above all scientists because they have the idea that he just sat in a chair and conducted thought-experiments.  They like to believe that he did poorly in math at school (he didn't).  Maybe this is because they feel math is a crutch that a true genius wouldn't need.  Maybe it's because they would like to think that they could also come up with general relativity if they just had enough time alone.  They love scientists who say they work by visualization or intuition, and who talk about seeing the solution to a problem in a dream.  It's not evident that Einstein was smarter than John von Neumann or Alan Turing, yet most Americans have never heard their names.

\n

I think that what America needs most, in terms of rationality, is not training in rationalist techniques - although that's of value.  What America needs most is awareness of how much of a difference intelligence and education and rationality can make.  And what America needs second-most is for people to recognize the toolkits of rationality and appreciate their power.

\n

Most people don't realize that there are small bodies of knowledge that radically amplify your intelligence.  Even a general understanding of evolution, or thermodynamics, or information theory, gives you a grasp on all sorts of other topics that would have otherwise remained mysterious.  Understanding how to rephrase a real-world problem as a function maximization problem lets you think quantitatively about something that before you would have had to address with gut feelings.
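For instance, here is a mundane commute decision recast as explicit maximization; the options, attributes, and weights are all invented for illustration, not a recipe:

```python
# Sketch: recasting a fuzzy everyday decision as 'pick the option that maximizes a scoring function'.
# The options, attributes, and weights are invented for illustration.
options = {
    'drive': {'minutes': 40, 'dollars': 8, 'comfort': 7},
    'train': {'minutes': 55, 'dollars': 4, 'comfort': 5},
    'cycle': {'minutes': 70, 'dollars': 0, 'comfort': 3},
}

def score(attrs, w_time=-0.1, w_money=-0.5, w_comfort=1.0):
    return w_time * attrs['minutes'] + w_money * attrs['dollars'] + w_comfort * attrs['comfort']

print(max(options, key=lambda name: score(options[name])))
```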

\n

One reason for this may be that, in the mind of the public, the prototypical smart person is a physicist.  And particle physics, quantum mechanics, and relativity just aren't very useful toolkits.  People hardly ever get an insight into anything in their ordinary lives from quantum mechanics or relativity (and when they do, they're wrong).  You don't have to know that stuff.  And, as 20th-century physics is thought of as the pinnacle of science, it taints all the other sciences with its own narrowness of applicability.

\n

With the exception of math, I can't recall any teacher ever trying to show me that something we were studying was a toolkit applicable beyond the subject being studied.  The way we try to teach our students to think is like the (failed) way we tried to teach AIs to think in the 1970s (and, in Austin, through the present day) - by giving them a lot of specialized knowledge about a lot of different subjects.  This is a time-tested way to spend a lot of time and money without instilling much intelligence into either a computer or a student.  A better approach would be to look for abstractions that can be applied to as many domains as possible, and to have one class for each of these abstractions.

\n

 

\n

(PS - When I speak specifically about America, it's not because I think the rest of the world is unimportant.  I just don't know as much about the rest of the world.)

" } }, { "_id": "rGTfQJ8E5CxqcA6LD", "title": "Actions and Words: Akrasia and the Fruit of Self-Knowledge", "pageUrl": "https://www.lesswrong.com/posts/rGTfQJ8E5CxqcA6LD/actions-and-words-akrasia-and-the-fruit-of-self-knowledge", "postedAt": "2009-04-15T15:27:13.693Z", "baseScore": 10, "voteCount": 27, "commentCount": 20, "url": null, "contents": { "documentId": "rGTfQJ8E5CxqcA6LD", "html": "
\n

Knowing other people requires intelligence,

\n

but knowing yourself requires wisdom.

\n

Those who overcome others have force,

\n

but those who overcome themselves have power.

\n

- Tao Te Ching, Chapter 33:  Without Force, Without Perishing

\n
\n

Much has been written here about the issue of akrasia.  People often report that they really, sincerely want to do something, that they recognize that certain courses of action are desirable/undesirable and that they should choose them -- but when the time comes to decide, they do otherwise.  Their choices don't match what they said their choices would be.

\n

While I'm sure many people are less than honest in reporting their intentions to others, and possibly even more who aren't even being honest with themselves, there are still plenty of people that are presumably sincere and honest.  So how can they make their actions match their understanding of what they want?  How can their choices reflect their own best judgment?

\n

Isn't that really the wrong question?

\n
\n

The very powerful and the very stupid have one thing in common:  they don't alter their views to fit the facts, they alter the facts to fit their views.  Which can be very uncomfortable, if you're one of the facts that needs correcting.

\n

Doctor Who, The Face of Evil

\n
\n

If a model of a phenomenon fails to accurately predict it, we conclude that the model is flawed and try to change it.  If what we're trying to understand is ourselves, our own choices, and the motivations, desires, and preferences that direct those choices, why should we do any differently?  Our actions reveal what we actually want, not what we believe we want or believe we should want.  No one chooses against their own judgment.  What we do is choose against our understanding of our own judgment, and that is a far subtler matter.  By our fruits shall we know ourselves.

\n
\n

Q:  How many therapists does it take to change a lightbulb?

\n

A:  The lightbulb has to want to change.

\n
\n

Expecting our behavior to be constrained and controlled by our understanding is like expecting our limbs to move if we yell at them to do so.  It doesn't matter how much we believe we want them to move, or how much we say we want them to move.  It is irrelevant whether we have a conscious understanding of the nerves and muscles involved.  Our conscious awareness is a bystander that reports what happens and attributes its observations to itself, when in actuality it controls very little at all.

\n

There are people whose ability to move has been damaged by nerve trauma or damage to the brain.  The established relationships between their intents, their desires, and the signals to their muscles, have been damaged or destroyed.  Such people do not improve by talking to others about how much they want to move, or by talking to themselves about it (which is what conscious thought really is).  They get better by searching out connections that work and building on them.

\n
\n

Those whom heaven helps we call the children of heaven.  They do not learn this by learning.  They do not work it by working.  They do not reason it by using reason.  To let understanding stop at what cannot be understood is a high achievement.

\n

- Zhuangzi, Zhuangzi, Chapter 2:  On the Proper Order of Things

\n
\n

Babies have little if any consciousness, and they don't possess theory.  Their nervous systems learn to move their bodies by bombarding their muscles with random noise triggered by their interests, and strengthening the signals that happen to get them closer to what they want.  Not what they think they want.  It is quite unlikely that babies have models of their minds, much less conscious ones, although they are either born with models of their bodies or the foundations for building such a model.

\n

Those who wish to bring themselves into alignment with what is truly correct, instead of what their impulses and desires seek in themselves, must first understand the nature of their impulses and the nature of their understanding.

\n
\n

Let him that would move the world first move himself.

\n

- Socrates of Athens

\n
" } }, { "_id": "KsHmn6iJAEr9bACQW", "title": "Bayesians vs. Barbarians", "pageUrl": "https://www.lesswrong.com/posts/KsHmn6iJAEr9bACQW/bayesians-vs-barbarians", "postedAt": "2009-04-14T23:45:48.156Z", "baseScore": 107, "voteCount": 100, "commentCount": 277, "url": null, "contents": { "documentId": "KsHmn6iJAEr9bACQW", "html": "

Previously:

\n

Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

\n

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

\n

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

\n
\n

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.

\n

Now there's a certain viewpoint on \"rationality\" or \"rationalism\" which would say something like this:

\n

\"Obviously, the rationalists will lose.  The Barbarians believe in an afterlife where they'll be rewarded for courage; so they'll throw themselves into battle without hesitation or remorse.  Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition.  They'll believe in each other's goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group.  Meanwhile, the rationalists will realize that there's no conceivable reward to be had from dying in battle; they'll wish that others would fight, but not want to fight themselves.  Even if they can find soldiers, their civilians won't be as cooperative:  So long as any one sausage almost certainly doesn't lead to the collapse of the war effort, they'll want to keep that sausage for themselves, and so not contribute as much as they could.  No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won't be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun.  In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would.\"

\n

War is not fun.  As many many people have found since the dawn of recorded history, as many many people have found out before the dawn of recorded history, as some community somewhere is finding out right now in some sad little country whose internal agonies don't even make the front pages any more.

\n

War is not fun.  Losing a war is even less fun.  And it was said since the ancient times:  \"If thou would have peace, prepare for war.\"  Your opponents don't have to believe that you'll win, that you'll conquer; but they have to believe you'll put up enough of a fight to make it not worth their while.

\n

You perceive, then, that if it were genuinely the lot of \"rationalists\" to always lose in war, that I could not in good conscience advocate the widespread public adoption of \"rationality\".

\n

This is probably the dirtiest topic I've discussed or plan to discuss on LW.  War is not clean.  Current high-tech militaries—by this I mean the US military—are unique in the overwhelmingly superior force they can bring to bear on opponents, which allows for a historically extraordinary degree of concern about enemy casualties and civilian casualties.

\n

Winning in war has not always meant tossing aside all morality.  Wars have been won without using torture.  The unfunness of war does not imply, say, that questioning the President is unpatriotic.  We're used to \"war\" being exploited as an excuse for bad behavior, because in recent US history that pretty much is exactly what it's been used for...

\n

But reversed stupidity is not intelligence.  And reversed evil is not intelligence either.  It remains true that real wars cannot be won by refined politeness.  If \"rationalists\" can't prepare themselves for that mental shock, the Barbarians really will win; and the \"rationalists\"... I don't want to say, \"deserve to lose\".  But they will have failed that test of their society's existence.

\n

Let me start by disposing of the idea that, in principle, ideal rational agents cannot fight a war, because each of them prefers being a civilian to being a soldier.

\n

As has already been discussed at some length, I one-box on Newcomb's Problem.

\n

Consistently, I do not believe that if an election is settled by 100,000 to 99,998 votes, that all of the voters were irrational in expending effort to go to the polling place because \"my staying home would not have affected the outcome\".  (Nor do I believe that if the election came out 100,000 to 99,999, then 100,000 people were all, individually, solely responsible for the outcome.)

\n

Consistently, I also hold that two rational AIs (that use my kind of decision theory), even if they had completely different utility functions and were designed by different creators, will cooperate on the true Prisoner's Dilemma if they have common knowledge of each other's source code.  (Or even just common knowledge of each other's rationality in the appropriate sense.)

\n

Consistently, I believe that rational agents are capable of coordinating on group projects whenever the (expected probabilistic) outcome is better than it would be without such coordination.  A society of agents that use my kind of decision theory, and have common knowledge of this fact, will end up at Pareto optima instead of Nash equilibria.  If all rational agents agree that they are better off fighting than surrendering, they will fight the Barbarians rather than surrender.
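For concreteness, the standard textbook Prisoner's Dilemma payoffs show the gap being pointed at: mutual defection is the Nash equilibrium, while mutual cooperation is the outcome both players would rather coordinate on. (The particular numbers below are just the conventional ones, not anything specific to this argument.)

```python
# The standard-textbook Prisoner's Dilemma payoffs, written (row player, column player).
payoffs = {
    ('C', 'C'): (3, 3),   # mutual cooperation: the Pareto-better outcome for both...
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),   # ...but mutual defection is the Nash equilibrium.
}

# Neither player can gain by unilaterally deviating from (D, D):
print(payoffs[('C', 'D')][0] < payoffs[('D', 'D')][0])   # True: row does worse switching to C
print(payoffs[('D', 'C')][1] < payoffs[('D', 'D')][1])   # True: column does worse switching to C

# Yet both players strictly prefer (C, C) to (D, D):
print(payoffs[('C', 'C')][0] > payoffs[('D', 'D')][0] and
      payoffs[('C', 'C')][1] > payoffs[('D', 'D')][1])   # True
```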

\n

Imagine a community of self-modifying AIs who collectively prefer fighting to surrender, but individually prefer being a civilian to fighting.  One solution is to run a lottery, unpredictable to any agent, to select warriors.  Before the lottery is run, all the AIs change their code, in advance, so that if selected they will fight as a warrior in the most communally efficient possible way—even if it means calmly marching into their own death.

\n

(A reflectively consistent decision theory works the same way, only without the self-modification.)
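
\n

As a rough illustration of the lottery arithmetic, here is a sketch with made-up numbers; the community size, the number of warriors selected, and all three utility values are assumptions.  Ex ante, every agent can prefer the lottery-plus-precommitment policy to surrender, even though each would individually rather not be the one selected.

```python
import random

# All numbers are invented for illustration.
N_AGENTS    = 100    # size of the community
N_WARRIORS  = 10     # how many the lottery selects
U_CIVILIAN  = 10.0   # living free in a successfully defended society
U_WARRIOR   = -5.0   # fighting, with its attendant risk of death
U_SURRENDER = 1.0    # living under the Barbarians

# Before the lottery runs, every agent faces the same expected utility
# from the 'run a lottery, then fight' policy:
p_selected = N_WARRIORS / N_AGENTS
eu_lottery = p_selected * U_WARRIOR + (1 - p_selected) * U_CIVILIAN

print('expected utility of the lottery policy:', eu_lottery)   # 8.5
print('utility of surrendering:               ', U_SURRENDER)  # 1.0

# So every agent prefers to precommit to the lottery -- even though any
# agent who is later selected would, locally, rather be a civilian.
warriors = random.sample(range(N_AGENTS), N_WARRIORS)
print('selected warriors:', sorted(warriors))
```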

\n

You reply:  \"But in the real, human world, agents are not perfectly rational, nor do they have common knowledge of each other's source code.  Cooperation in the Prisoner's Dilemma requires certain conditions according to your decision theory (which these margins are too small to contain) and these conditions are not met in real life.\"

\n

I reply:  The pure, true Prisoner's Dilemma is incredibly rare in real life.  In real life you usually have knock-on effects—what you do affects your reputation.  In real life most people care to some degree about what happens to other people.  And in real life you have an opportunity to set up incentive mechanisms.

\n

And in real life, I do think that a community of human rationalists could manage to produce soldiers willing to die to defend the community.  So long as children aren't told in school that ideal rationalists are supposed to defect against each other in the Prisoner's Dilemma.  Let it be widely believed—and I do believe it, for exactly the same reason I one-box on Newcomb's Problem—that if people decided as individuals not to be soldiers or if soldiers decided to run away, then that is the same as deciding for the Barbarians to win.  By that same theory whereby, if an election is won by 100,000 votes to 99,998 votes, it does not make sense for every voter to say \"my vote made no difference\".  Let it be said (for it is true) that utility functions don't need to be solipsistic, and that a rational agent can fight to the death if they care enough about what they're protecting.  Let them not be told that rationalists should expect to lose reasonably.

\n

If this is the culture and the mores of the rationalist society, then, I think, ordinary human beings in that society would volunteer to be soldiers.  That also seems to be built into human beings, after all.  You only need to ensure that the cultural training does not get in the way.

\n

And if I'm wrong, and that doesn't get you enough volunteers?

\n

Then so long as people still prefer, on the whole, fighting to surrender, they have an opportunity to set up incentive mechanisms, and avert the True Prisoner's Dilemma.

\n

You can have lotteries for who gets elected as a warrior.  Sort of like the example above with AIs changing their own code.  Except that if \"be reflectively consistent; do that which you would precommit to do\" is not sufficient motivation for humans to obey the lottery, then...

\n

...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away.  Even considering that we ourselves might be selected in the lottery.  Because in advance of the lottery, this is the general policy that gives us the highest expectation of survival.

\n

...like I said:  Real wars = not fun, losing wars = less fun.

\n

Let's be clear, by the way, that I'm not endorsing the draft as practiced nowadays.  Those drafts are not collective attempts by a populace to move from a Nash equilibrium to a Pareto optimum.  Drafts are a tool of kings playing games in need of toy soldiers. The Vietnam draftees who fled to Canada, I hold to have been in the right.  But a society that considers itself too smart for kings, does not have to be too smart to survive.  Even if the Barbarian hordes are invading, and the Barbarians do practice the draft.

\n

Will rational soldiers obey orders?  What if the commanding officer makes a mistake?

\n

Soldiers march.  Everyone's feet hitting the ground in the same rhythm.  Even, perhaps, against their own inclinations, since people left to themselves would walk all at separate paces.  Lasers made out of people.  That's marching.

\n

If it's possible to invent some method of group decisionmaking that is superior to the captain handing down orders, then a company of rational soldiers might implement that procedure.  If there is no proven method better than a captain, then a company of rational soldiers will commit to obey the captain, even against their own separate inclinations.  And if human beings aren't that rational... then in advance of the lottery, the general policy that gives you the highest personal expectation of survival is to shoot soldiers who disobey orders.  This is not to say that those who fragged their own officers in Vietnam were in the wrong; for they could have consistently held that they preferred no one to participate in the draft lottery.

\n

But an uncoordinated mob gets slaughtered, and so the soldiers need some way of all doing the same thing at the same time in the pursuit of the same goal, even though, left to their own devices, they might march off in all directions.  The orders may not come from a captain like a superior tribal chief, but unified orders have to come from somewhere.  A society whose soldiers are too clever to obey orders, is a society which is too clever to survive.  Just like a society whose people are too clever to be soldiers.  That is why I say \"clever\", which I often use as a term of opprobrium, rather than \"rational\".

\n

(Though I do think it's an important question as to whether you can come up with a small-group coordination method that really genuinely in practice works better than having a leader.  The more people can trust the group decision method—the more they can believe that it really is superior to people going their own way—the more coherently they can behave even in the absence of enforceable penalties for disobedience.)

\n

I say all this, even though I certainly don't expect rationalists to take over a country any time soon, because I think that what we believe about a society of \"people like us\" has some reflection on what we think of ourselves.  If you believe that a society of people like you would be too reasonable to survive in the long run... that's one sort of self-image.  And it's a different sort of self-image if you think that a society of people all like you could fight the vicious Evil Barbarians and win—not just by dint of superior technology, but because your people care about each other and about their collective society—and because they can face the realities of war without losing themselves—and because they would calculate the group-rational thing to do and make sure it got done—and because there's nothing in the rules of probability theory or decision theory that says you can't sacrifice yourself for a cause—and because if you really are smarter than the Enemy and not just flattering yourself about that, then you should be able to exploit the blind spots that the Enemy does not allow itself to think about—and because no matter how heavily the Enemy hypes itself up before battle, you think that just maybe a coherent mind, undivided within itself, and perhaps practicing something akin to meditation or self-hypnosis, can fight as hard in practice as someone who theoretically believes they've got seventy-two virgins waiting for them.

\n

Then you'll expect more of yourself and people like you operating in groups; and then you can see yourself as something more than a cultural dead end.

\n

So look at it this way: Jeffreyssai probably wouldn't give up against the Evil Barbarians if he were fighting alone.  A whole army of beisutsukai masters ought to be a force that no one would mess with.  That's the motivating vision.  The question is how, exactly, that works.

" } }, { "_id": "DjM7cjeRLNBosz65Q", "title": "Tell it to someone who doesn't care", "pageUrl": "https://www.lesswrong.com/posts/DjM7cjeRLNBosz65Q/tell-it-to-someone-who-doesn-t-care", "postedAt": "2009-04-14T18:15:18.104Z", "baseScore": 25, "voteCount": 24, "commentCount": 34, "url": null, "contents": { "documentId": "DjM7cjeRLNBosz65Q", "html": "

Followup to Marketing rationalism

\n

Target fence-sitters

\n

American culture frames issues as debates between two sides.  The inefficacy of debates is amazing.  You can attend debates on a subject for years without ever seeing anyone change their mind.  I think this is because of who attends debates.  People listen to debates because they care about the issue.  And they only care about the issue because they've already taken a side.  Caring then inoculates them against reason.

\n

If the debate really can be approximated by a binary decision, then the people you want to talk to are the fence-sitters.  And they aren't there.

\n

This reminded me of my \"wound-healing\" theory of international aid.  I'll float a similar idea for social debate:  In order to win society over to a view in the long run, you should target the people who don't care much one way or the other.  Politicians already do this.

\n

So how do you get them to listen?  They won't come to your debate, or your conference, or your website.  Here are some ways:

\n\n

(When I combine this theory with the observation that most people don't change their worldview or their preferences much after the age of maybe 15, I come up with the idea that most cultural change is driven by the random drift of the opinions of children.)

\n

Gravitational debate

\n

But there are many instances of inspirational books, targeted at people already well on one side of an issue, that inspired people to action or had a strong influence on them without flipping them from a 0 to a 1.  The God Delusion, for example; or Schrödinger's What is Life?.

\n

So here's theory number 2: The gravitational model of debate.  People adjust their opinions in response to the opinions of the people around them.  If a lot of the people around Jack shift their opinions to the right, Jack is likely to shift his opinion to the right.  I suspect that Jack is more sensitive to opinions similar to his, than to opinions far away.  So, like gravity, the strength of the attraction falls off with distance.  An opinion sufficiently different from your own is repelling; it invokes an outgroup response rather than an attractive ingroup response.  Rush Limbaugh causes some people to shift further left.  We could posit a gravitational attraction between opinions that varies from positive at close range, to negative at long range.

\n

The consequences of this model are that, by shifting anyone's opinion in one direction, you may trigger a cascade of opinion-shifts that will move the median [1] opinion.  This says you can write a book targeted anywhere on a spectrum of opinion, and have it affect the entire spectrum indirectly, moving some people from one side of the fence to the other even though they never heard of your book.
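
\n

Here is a toy agent-based sketch of the gravitational model, only to make the cascade mechanism concrete; the population, the attraction and repulsion ranges, the step size, and the placement of the hypothetical book are all assumptions, and whether and how far the median actually moves depends entirely on them.

```python
import random
import statistics

# A toy agent-based version of the gravitational model.  The population
# size, attraction/repulsion ranges, step size, and the placement of the
# hypothetical book are all assumptions.
random.seed(0)
opinions = [random.gauss(0.0, 0.3) for _ in range(500)]   # left/right axis

NEAR = 0.4    # opinions closer than this attract
FAR  = 0.8    # opinions farther than this repel
STEP = 0.01

def update(opinions):
    new = []
    for x in opinions:
        pull = 0.0
        for y in random.sample(opinions, 30):   # each agent hears a sample of others
            d = y - x
            if abs(d) < NEAR:
                pull += d                            # ingroup: drawn toward y
            elif abs(d) > FAR:
                pull -= 0.5 * (1 if d > 0 else -1)   # outgroup: pushed away from y
        new.append(x + STEP * pull / 30)
    return new

print('median before:', statistics.median(opinions))

# 'Publish a book': nudge the agents already leaning right a bit further right.
opinions = [x + 0.2 if x > 0.3 else x for x in opinions]

for _ in range(100):
    opinions = update(opinions)

print('median after: ', statistics.median(opinions))
```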

\n

One consequence is that, as in tug-of-war voting, it's rational to try to persuade extremists to be even more extreme than you think is rational, in order to shift the median opinion in your chosen direction.  (It might not be the most effective use of your time).

\n

Another consequence is that your book might not influence the masses if the distribution of opinions in opinion-space has large gaps.  If, for instance, you write a rah-rah transhumanist book, this might have no effect on the population at large if few people have a partly-positive view of transhumanism - even if the gap in opinion-space isn't where your targeted audience would be.  If the gap is large, your book might move median opinion farther away from your position.  The Nazis had a tremendous effect on later 20th century philosophy, and perhaps art - but not in the way they would have liked.

\n

This model works best for emotional issues, or regulatory issues, in which one's position can be expressed by a real number or vector.  In an academic debate, if you have n competing hypotheses, the range of possible positions is discrete; and opinion space probably isn't a metric space.

\n

Compare and contrast?

\n

These two models make nearly opposite recommendations on how to influence public opinion.  The first says to use marketing to target people who don't care.  The second says (approximately) to examine the distribution of opinions, and express an opinion near a large mass of opinions, in the same direction as the vector from the median opinion to your desired opinion.

\n

I think both models have some truth to them.  But which accounts for more of our behavior; and when should you use which model?

\n

 

\n

[1] (The median opinion is more relevant than the mean with one-person one-vote.  The mean is more relevant with voting systems that let people express the strength of their opinions.)

" } }, { "_id": "NnQbfLo868wgnHF4n", "title": "Collective Apathy and the Internet", "pageUrl": "https://www.lesswrong.com/posts/NnQbfLo868wgnHF4n/collective-apathy-and-the-internet", "postedAt": "2009-04-14T00:02:19.161Z", "baseScore": 53, "voteCount": 45, "commentCount": 34, "url": null, "contents": { "documentId": "NnQbfLo868wgnHF4n", "html": "

Yesterday I covered the bystander effect, aka bystander apathy: given a fixed problem situation, a group of bystanders is actually less likely to act than a single bystander.  The standard explanation for this result is in terms of pluralistic ignorance (if it's not clear whether the situation is an emergency, each person tries to look calm while darting their eyes at the other bystanders, and sees other people looking calm) and diffusion of responsibility (everyone hopes that someone else will be first to act; being part of a crowd diminishes the individual pressure to the point where no one acts).

\n

Which may be a symptom of our hunter-gatherer coordination mechanisms being defeated by modern conditions.  You didn't usually form task-forces with strangers back in the ancestral environment; it was mostly people you knew.  And in fact, when all the subjects know each other, the bystander effect diminishes.

\n

So I know this is an amazing and revolutionary observation, and I hope that I don't kill any readers outright from shock by saying this: but people seem to have a hard time reacting constructively to problems encountered over the Internet.

\n

Perhaps because our innate coordination instincts are not tuned for:

\n\n

Etcetera.  I don't have a brilliant solution to this problem.  But it's the sort of thing that I would wish for potential dot-com cofounders to ponder explicitly, rather than wondering how to throw sheep on Facebook.  (Yes, I'm looking at you, Hacker News.)  There are online activism web apps, but they tend to be along the lines of sign this petition! yay, you signed something! rather than How can we counteract the bystander effect, restore motivation, and work with native group-coordination instincts, over the Internet?

\n

Some of the things that come to mind:

\n\n

But mostly I just hand you an open, unsolved problem: make it possible / easier for groups of strangers to coalesce into an effective task force over the Internet, in defiance of the usual failure modes and the default reasons why this is a non-ancestral problem.  Think of that old statistic about Wikipedia representing 1/2,000 of the time spent in the US alone on watching television.  There's quite a lot of fuel out there, if there were only such a thing as an effective engine...

" } }, { "_id": "AErxyDpiBy7CyM2Mk", "title": "GroupThink, Theism ... and the Wiki", "pageUrl": "https://www.lesswrong.com/posts/AErxyDpiBy7CyM2Mk/groupthink-theism-and-the-wiki", "postedAt": "2009-04-13T17:28:58.896Z", "baseScore": -2, "voteCount": 16, "commentCount": 62, "url": null, "contents": { "documentId": "AErxyDpiBy7CyM2Mk", "html": "

In response to The uniquely awful example of theism, I presented myself as a datapoint of someone in the group who disagrees that theism is uncontroversially irrational.

\n

At the cost of considerable time, several karma points, and two bad posts, I now retract my position.

\n

Because I have deconverted? (Sorry, but no.)

\n

I had a working assumption (inferred from here) that rationality meant believing that all beliefs must be rigorously consistent with empirical observation. I now think of this as a weak form of rationalism (see full definition below). A stronger form of rationalism held by (many, most?) rationalists is that there is no other valid source of knowledge. If we define a belief system as religious if and only if it claims knowledge that is independent of empirical experience (i.e., metaphysical) then it is trivially true that all religions are irrational -- using the stronger definition of rational.

\n

A disagreement of definitions is not really a disagreement. Someone suggested on the April open thread that we define \"rationality\". My idea of a definition would look something like this:

\n

 

\n

Rationality assumes that:

\n

(1) The only source of knowledge is empirical experience.

\n

(2) The only things that are known are deduced from empirical experience by valid logical reasoning and mathematics.

\n

 

\n

Weak Rationality assumes that:

\n

(1) The first source of knowledge is empirical experience.

\n

(2) The only things that are known with certainty are deduced from empirical experience by valid logical reasoning and mathematics.

\n

(3) Define a belief system as all knowledge deduced from empirical observation, together with any metaphysical beliefs. Then the belief system is rational (nearly rational, or weakly rational) if it is internally consistent.

\n

 

\n

Probably these definitions have been outlined somewhere better than they are here. Perhaps I have misplaced emphasis and certainly there are important nuances and variations. Whether this definition works or not, I think it's important to have a working set of definitions that we all agree upon. The wiki has just started out, but I think it's a terrific idea and worth putting time into. Every time you struggle with finding the right definition for something I suggest you add your effort to the group knowledge by adding that definition to the Wiki.

\n

I made the accusation that the consensus about religion was due to \"group think\". In its pejorative sense, group think means everyone thinks the same thing because dissent is eliminated in some way. However, group think can also be the common set of definitions that we are working with. I think that having a well-defined group think will make posting much more efficient for everyone (with fewer semantic confusions) and will also aid newcomers.

\n

The \"group think\" defined in the Wiki would certainly need to be dynamic, nuanced and inclusive. A Wiki is already dynamic. To foster nuance and inclusion, the wiki might prompt for alternatives. For example, if I posted the two definitions of rationality above I might also write, \"Do you have another working definition of rationalism? Please add it here.\" so that a newcomer to LW would know they were not excluded from the \"group of rationalists\" if they have a different definition.

\n

What are some definitions that we could/should add to the Wiki? (I've noticed that \"tolerance\", as a verb or a noun, is problematic.)

" } }, { "_id": "yCjofyFSoAq73LT2A", "title": "Declare your signaling and hidden agendas ", "pageUrl": "https://www.lesswrong.com/posts/yCjofyFSoAq73LT2A/declare-your-signaling-and-hidden-agendas", "postedAt": "2009-04-13T12:01:06.657Z", "baseScore": 25, "voteCount": 25, "commentCount": 21, "url": null, "contents": { "documentId": "yCjofyFSoAq73LT2A", "html": "

Follow-up to: It's okay to be (at least a little) irrational

Many science journals require their authors to declare any competing interests they happen to have. For instance, if you're submitting a study about the health effects of tobacco, and you happen to sit on the board of directors of a major tobacco company, you're supposed to say that out loud. 

The process obviously isn't perfect, as most journals don't have the resources to ensure their authors do actually declare all competing interests. On the whole, though, it helps protect both the readers and the authors. The readers, because they'll know to be more careful in evaluating the reports of researchers who might be biased. The authors, because by declaring any competing interests upfront, they're protected from later accusations of dishonesty. (That's the theory, at least. In practice, authors often don't declare their interests, even if they should.)

Signaling has been discussed a lot on Overcoming Bias, though a bit less on Less Wrong. A large fraction of people's behavior is actually intended to signal some qualities to others, though this isn't necessarily a conscious process. On the other hand, it often is. It seems to me that many seasoned OB/LW readers would instinctively try to avoid giving the impression of excess signaling. We're rationalists, after all! We're trying to find the truth, not show off or impress others with our worth!

As if we even could avoid trying to make a good impression on others, or avoid having other kinds of hidden agendas. We're not any less human simply because we have rallied around rationality's banner. (Not to mention that signaling isn't a bad thing, by itself - humanity would be in a very poor state if we didn't have any signals about what others were like.) So, in the interest of self-honesty, I suggest we all begin explicitly declaring our (conscious) hidden agendas and signaling intentions when writing posts. As with the policy of scholarly journals, this will help both readers and writers, and in this case also serve a third and fourth function - making us more honest with ourselves, and making people realize that it's okay to have hidden agendas and that they don't have to pretend they don't have any. I'll start out with mine.

\r\n

\r\n

I have roughly classified my hidden agendas at three different levels of severity. A \"mild\" agenda had a small impact on the behavior in question (for instance, writing a particular post), but I would have done it either way. A \"strong\" agenda means the behavior probably wouldn't have happened without the hidden agenda. A \"moderate\" agenda means that I'm not able to say either way - the behavior could have happened anyway, or it might not have. I recognize that these are merely my conscious estimates of the different strengths and agendas, which are likely to be mistaken. They are, however, better than nothing.

Posting here in general - A desire to seek fame and respect in a community of rationalists, and to prove my worth as one (moderate). A desire to indicate that I have read and internalized the previous postings on OB, by linking to any relevant previous articles mentioning related concepts (moderate when it comes to linking, but mild when it comes to writing the articles - without the desire I might not have thrown in so many links, but I'd probably have written the posts anyway).
Does blind review slow down science? - Uncertain, as I don't remember my exact motivations for writing this post anymore.
The Golem - A mild desire to indirectly promote polyamory (by linking to a book about it as the source of the quote).
The Tragedy of the Anticommons - A mild to moderate desire to signal scholarship. My previous posts cited two books and some research articles and now I cited a third book, to give the (mostly accurate) impression that I read a lot and survey the research literature when I want to form an opinion of something. Mild desire to nudge people in the direction of a resource pointing out the harms of patents and copyright in their current form (declaration of possibly competing interest: I'm a board member of the Finnish Pirate Party).
Deliberate and spontaneous creativity - Mild to moderate desire to signal scholarship, again.
Rationalists should beware rationalism - Looking over the post, I'm not certain of any of my motives anymore, be they overt or covert. Perhaps a mild to moderate desire to signal resistance to groupthink.
Too much feedback can be a bad thing - None that I can remember.
It's okay to be (at least a little) irrational - Mild desire to signal support for the Institute Which Shall Not Be Named. Mild desire to signal altruism by bringing up my regular donations.
This post - Strong desire to signal honesty. Mild desire to more effectively promote my previous hidden agendas, by stating them out loud.

What are yours?

" } }, { "_id": "bDK63YCNoFT5wRSyb", "title": "Persuasiveness vs Soundness", "pageUrl": "https://www.lesswrong.com/posts/bDK63YCNoFT5wRSyb/persuasiveness-vs-soundness", "postedAt": "2009-04-13T08:43:12.255Z", "baseScore": 1, "voteCount": 11, "commentCount": 19, "url": null, "contents": { "documentId": "bDK63YCNoFT5wRSyb", "html": "

Compare the following two arguments.

\n
    \n
  1. E. is described by the following axioms
  2. Therefore, under E., the square of the longest side of a right-angle triangle is equal to the sum of the squares of the remaining two sides.

    \n

  1. All men are mortal.
  2. Socrates is a man.
  3. Therefore Socrates is mortal.
\n

Naively, the second argument seems tautological, whereas with the first it's much harder to tell. Of course in reality the first argument is the tautology and the second argument is the more dubious one. The phrase \"immortal man\" doesn't seem contradictory, and how do we know Socrates is a man? He could be an android doppelganger. And the first argument's conclusion completely follows from Euclid's axioms. Euclid and hundreds of other mathematicians have proved that it does.

\n

So, why does the average human think the opposite?

\n

The arguments that change what we think, and the arguments that would change what a logically omniscient Bayesian superman wielding Solomonoff's Lightsaber thinks, are not very tightly correlated. In fact, there's a whole catalogue of the types of arguments that persuade us but wouldn't persuade a Bayesian superman: the logical fallacies. But it's not just argument structure that causes us to lose our way; another factor is how well the argument is written.

\n

People are more persuaded by essays from Eliezer Yudkowsky, Bertrand Russell, Paul Graham or George Orwell than they would be by a forum post from an average 13-year-old atheist, even if both make the exact same point. This threw me for a bit of a loop, until I realized that Eliezer was pitching to Bayesian supermen as well as to us mortals. The degree to which stylish writing correlates with truth, compared to unstylish writing, is nowhere near the degree to which stylish writing persuades us more than unstylish writing does.

\n

There's also the dreaded intuition pump. I think the reason it's so maligned is that it makes things much more persuasive without making them any more sound. A well chosen metaphor can do more to the human mind than a thousand pages of logic. Of course, we *want* intuition pumps for things that are actually true, because we want people to be persuaded of things that are true and, more importantly, we want them to be able to reason about things that are true. A good metaphor can enable this reasoning far more effectively than a list of axioms.

\n

The problem lies in both directions: we aren't always persuaded by cogent arguments, and we are sometimes persuaded by crappy arguments that are delivered well. I put it to Less Wrong readers: how can we reduce the gap between what we are persuaded by and what a Bayesian superman is persuaded by?

" } }, { "_id": "K5nq3KcDXaGm7QQWR", "title": "Bystander Apathy", "pageUrl": "https://www.lesswrong.com/posts/K5nq3KcDXaGm7QQWR/bystander-apathy", "postedAt": "2009-04-13T01:26:15.635Z", "baseScore": 50, "voteCount": 45, "commentCount": 20, "url": null, "contents": { "documentId": "K5nq3KcDXaGm7QQWR", "html": "

The bystander effect, also known as bystander apathy, is that larger groups are less likely to act in emergencies - not just individually, but collectively.  Put an experimental subject alone in a room and let smoke start coming up from under the door.  75% of the subjects will leave to report it.  Now put three subjects in the room - real subjects, none of whom know what's going on.  On only 38% of the occasions will anyone report the smoke.  Put the subject with two confederates who ignore the smoke, and they'll only report it 10% of the time - even staying in the room until it becomes hazy.  (Latane and Darley 1969.)
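
\n

For comparison, if the three naive subjects had each behaved like a lone subject and decided independently, the chance of at least one report would have been far higher than the observed 38%:

```python
# If the three naive subjects had each acted like a lone subject (75%
# chance of reporting) and decided independently, the chance that at
# least one of them reports the smoke would be:
p_solo = 0.75
n_subjects = 3
p_any_report = 1 - (1 - p_solo) ** n_subjects
print(f'{p_any_report:.1%}')   # ~98.4%, versus the observed 38%
```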

\n

On the standard model, the two primary drivers of bystander apathy are:

\n\n

Cialdini (2001):

\n
\n

Very often an emergency is not obviously an emergency.  Is the man lying in the alley a heart-attack victim or a drunk sleeping one off?  ...  In times of such uncertainty, the natural tendency is to look around at the actions of others for clues.  We can learn from the way the other witnesses are reacting whether the event is or is not an emergency.  What is easy to forget, though, is that everybody else observing the event is likely to be looking for social evidence, too.  Because we all prefer to appear poised and unflustered among others, we are likely to search for that evidence placidly, with brief, camouflaged glances at those around us.  Therefore everyone is likely to see everyone else looking unruffled and failing to act.

\n
\n

Cialdini suggests that if you're ever in emergency need of help, you point to one single bystander and ask them for help - making it very clear to whom you're referring.  Remember that the total group, combined, may have less chance of helping than one individual.

\n

I've mused a bit on the evolutionary psychology of the bystander effect.  Suppose that in the ancestral environment, most people in your band were likely to be at least a little related to you - enough to be worth saving, if you were the only one who could do it.  But if there are two others present, then the first person to act incurs a cost, while the other two both reap the genetic benefit of a partial relative being saved.  Could there have been an arms race for who waited the longest?

\n

As far as I've followed this line of speculation, it doesn't seem to be a good explanation - at the point where the whole group is failing to act, a gene that helps immediately ought to be able to invade, I would think.  The experimental result is not a long wait before helping, but simply failure to help: if it's a genetic benefit to help when you're the only person who can do it (as does happen in the experiments) then the group equilibrium should not be no one helping (as happens in the experiments).

\n

So I don't think an arms race of delay is a plausible evolutionary explanation.  More likely, I think, is that we're looking at a nonancestral problem.  If the experimental subjects actually know the apparent victim, the chances of helping go way up (i.e., we're not looking at the correlate of helping an actual fellow band member).  If I recall correctly, if the experimental subjects know each other, the chances of action also go up.

\n

Nervousness about public action may also play a role.  If Robin Hanson is right about the evolutionary role of \"choking\", then being first to act in an emergency might also be taken as a dangerous bid for high status.  (Come to think, I can't actually recall seeing shyness discussed in analyses of the bystander effect, but that's probably just my poor memory.)

\n

Can the bystander effect be explained primarily by diffusion of moral responsibility?  We could be cynical and suggest that people are mostly interested in not being blamed for not helping, rather than having any positive desire to help - that they mainly wish to escape antiheroism and possible retribution.  Something like this may well be a contributor, but two observations that militate against it are (a) the experimental subjects did not report smoke coming in from under the door, even though it could well have represented a strictly selfish threat and (b) telling people about the bystander effect reduces the bystander effect, even though they're no more likely to be held publicly responsible thereby.

\n

In fact, the bystander effect is one of the main cases I recall offhand where telling people about a bias actually seems able to strongly reduce it - maybe because the appropriate way to compensate is so obvious, and it's not easy to overcompensate (as when you're trying to e.g. adjust your calibration).  So we should be careful not to be too cynical about the implications of the bystander effect and diffusion of responsibility, if we interpret individual action in terms of a cold, calculated attempt to avoid public censure.  People seem at least to sometimes hold themselves responsible, once they realize they're the only ones who know enough about the bystander effect to be likely to act.

\n

Though I wonder what happens if you know that you're part of a crowd where everyone has been told about the bystander effect...

\n
\n

Cialdini, R. (2001.)  Influence: Science and Practice.  Boston, MA: Allyn and Bacon.

\n

Latane, B. and Darley, J. (1969.)  Bystander \"Apathy\", American Scientist, 57: 244-268.

" } }, { "_id": "G5bDjtSbJwbXuji4r", "title": "Marketing rationalism", "pageUrl": "https://www.lesswrong.com/posts/G5bDjtSbJwbXuji4r/marketing-rationalism", "postedAt": "2009-04-12T21:41:26.537Z", "baseScore": 16, "voteCount": 18, "commentCount": 65, "url": null, "contents": { "documentId": "G5bDjtSbJwbXuji4r", "html": "

Suppose you're a protestant, and you want to convince other people to do what the Bible says to do.  Would you persuade them by showing them that the Bible says that they should?

\n

Now suppose you're a rationalist, and you want to convince other people to be rational.  Would you persuade them with a rational argument?

\n

If not, how?

\n

ADDED:  I'm not talking about persuading others who already accept reason as final arbiter to adopt Bayesian principles, or anything like that.  I mean persuading Joe on the street who does whatever feels good, and feels pretty good about that.  Or a doctor of philosophy who believes that truth is relative and reason is a social construct.  Or a Christian who believes that the Bible is God's Word, and things that contradict the Bible must be false.

\n

Christians don't place a whole set of the population off-limits and say, \"These people are unreachable; their paradigms are too different.\"  They go after everyone.  There is no class of people whom they are unsuccessful with.

\n

Saying that we have to play by a set of self-imposed rules in the competition for the minds of humanity, while our competitors don't, means we will lose.  And isn't rationality about winning?

\n

ADDED:  People are missing the point that the situation is symmetrical for religious evangelists.  For them to step outside of their worldview, and use reason to gain converts, is as epistemically dangerous for them, as it is for us to gain converts using something other than reason.  Contemporary Christians consider themselves on good terms with reason; but if you look back in history, you'll find that many of the famous and influential Christian theologians (starting with Paul) made explicit warnings against the temptation of reason.  The proceedings from Galileo's trial contain some choice bits on the relation between reason and faith.

\n

Using all sorts of persuasive techniques that are not grounded in religious truth, and hence are epistemically repulsive to them and corrosive to their belief system, has proven a winning strategy for all religions.  It's a compromise; but these compromises did not weaken those religions.  They made them stronger.

" } }, { "_id": "Yiv9BeroBhJC6zqSs", "title": "It's okay to be (at least a little) irrational", "pageUrl": "https://www.lesswrong.com/posts/Yiv9BeroBhJC6zqSs/it-s-okay-to-be-at-least-a-little-irrational", "postedAt": "2009-04-12T21:06:20.881Z", "baseScore": 61, "voteCount": 64, "commentCount": 59, "url": null, "contents": { "documentId": "Yiv9BeroBhJC6zqSs", "html": "

Caused by: Purchase Fuzzies and Utilons Separately

As most readers will know by now, if you're donating to a charity, it doesn't make sense to spread your donations across several charities (assuming you're primarily trying to maximize the amount of good done). You'll want to pick the charity where your money does the most good, and then donate as much as possible to that one. Most readers will also be aware that this isn't intuitive to most people - many will instinctively try to spread their money across several different causes.

I'm spending part of my income on charity, too. Admittedly, this isn't much - 30 USD each month - but then neither is my income as a student. Previously I had been spreading that sum to three different charities, each of them getting an equal amount. On at least two different venues, people had (not always knowingly) tried to talk me out of it, and I did feel that their arguments were pretty strong. Still, I didn't change my ways, even though there was mental pressure building up, trying to push me in that direction. There were actually even some other charities I was considering also donating to, even though I knew I probably shouldn't.

Then I read Eliezer's Purchase Fuzzies and Utilons Separately. Here was a post saying, in essence, that it's okay to spend some of your money in what amounted to an irrational way. Yes, go ahead and spread your money, and go ahead and use some of it just to purchase warm fuzzies. You're just human, after all. Just try to make sure you still donate more to a utilon maximizer than to purchasing the fuzzies.

Here I was, with a post that allowed me to stop rationalizing reasons for why spreading money was good, and instead spread them because I was honestly selfish and just buying a good feeling. Now, I didn't need to worry about being irrational in having diversified donations. So since it was okay, I logged in to PayPal, cancelled the two monthly donations I had going to the other organizations, and tripled the amount of money that I was giving to the Institute Which Shall Not Be Named.

Not exactly the outcome one might have suspected.

\r\n

A theme that has come up several times is that it's easier to lie to others if you believe in the lies yourself. Being in a community where rationality is highly valued, many people will probably want to avoid appearing irrational, lest they lose the respect of others. They want to signal rationality. One way to avoid admitting irrationality to others is by not admitting it to yourself. But then, if you never even admit your irrationality to yourself, you'll have a hard time of getting over it.

If, on the other hand, the community makes it clear that it's okay to be irrational, for as long as you're trying to get rid of that, then you can actually become more rational. You don't need to rationalize reasons why you're not being irrational, you can accept that you are irrational and then change it. Eliezer's post did that for me, for one particular irrationality [1]. So let me say that out loud: It's okay to be irrational, and to admit that. You are only a human.

Failing to realize this is probably a failure mode for many communities which try to extinguish a specific way of thinking. If you make a behavior seem like it's just outright bad, something which nobody should ever admit to, then you'll get a large amount of people who'll never admit to it - even when they should, in order to get over it.

And it's not just a community thing, it's also an individual thing. Don't simply make it clear to others that some irrationality is okay: make it also clear for yourself. It's okay to be irrational.

\r\n

 

\r\n

Footnote [1]: Note that Eliezer's post didn't extinguish the irrationality entirely. I'm still intending on using some of my money on purchasing warm fuzzies, once my total income is higher. But then I'll actually admit that that's what I'm doing, and treat my purchases of fuzzies and utilons as separate cases. And the utilon purchasing will be the one getting more money.

" } }, { "_id": "tyMdPwd8x2RygcheE", "title": "Sunk Cost Fallacy", "pageUrl": "https://www.lesswrong.com/posts/tyMdPwd8x2RygcheE/sunk-cost-fallacy", "postedAt": "2009-04-12T17:30:52.592Z", "baseScore": 40, "voteCount": 35, "commentCount": 44, "url": null, "contents": { "documentId": "tyMdPwd8x2RygcheE", "html": "

Related to: Just Lose Hope Already, The Allais Paradox, Cached Selves

\n

In economics we have this concept of sunk costs, referring to costs that have already been incurred, but which cannot be recouped. Sunk cost fallacy refers to the fallacy of honoring sunk costs, which decision-theoretically should just be ignored. The canonical example goes something like this: you have purchased a nonrefundable movie ticket in advance. (For the nitpickers in the audience, I will also specify that the ticket is nontransferable and that you weren't planning on meeting anyone.) When the night of the show comes, you notice that you don't actually feel like going out, and would actually enjoy yourself more at home. Do you go to the movie anyway?

\n

A lot of people say yes, to avoid wasting the ticket. But on further consideration, it would seem that these people are simply getting it wrong. The ticket is a sunk cost: it's already paid for, and you can't do anything with it but go to the movie. But we've stipulated that you don't want to go to the movie. The theater owners don't care whether you go; they already have their money. The other theater-goers, insofar as they can be said to have a preference, would actually rather you stayed home, making the theater marginally less crowded. If you go to the movie to satisfy your intuition about not wasting the ticket, you're not actually helping anyone. Of course, you're entitled to your values, if not your beliefs. If you really do place terminal value on using something because you've paid for it, well, fine, I guess. But we should all try to notice exactly what it is we're doing, in case it turns out to not be what we want. Please, think it through.
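
\n

A minimal sketch of that argument in code, with invented utility numbers: since the ticket price is subtracted from both options, it cancels out of the comparison and cannot change the decision.

```python
# The ticket price is subtracted from both branches, so it cannot change
# which branch wins.  All numbers are invented.
TICKET_PRICE     = 12.0   # already paid, nonrefundable
ENJOY_MOVIE      = 5.0
ENJOY_STAYING_IN = 8.0

go   = ENJOY_MOVIE      - TICKET_PRICE
stay = ENJOY_STAYING_IN - TICKET_PRICE

print('go to the movie:', go)     # -7.0
print('stay home:      ', stay)   # -4.0
# 'stay' beats 'go' by ENJOY_STAYING_IN - ENJOY_MOVIE, whatever TICKET_PRICE was.
```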

\n

Dearest reader, if you're now about to scrap your intuition against wasting things, I implore you: don't! The moral of the parable of the movie ticket is not that waste is okay; it's that you should implement your waste-reduction interventions at a time when they can actually help. If you can anticipate your enthusiasm waning on the night of the show, don't purchase the nonrefundable ticket in the first place!

\n

You can view ignoring sunk costs as a sort of backwards perspective on the principle of the bottom line. The bottom line tells us that a decision can only be justified by its true causes; any arguments that come strictly afterwards don't count; if it just happens to all turn out for the best anyway, that only means you got lucky. The sunk cost fallacy tells us that a decision can only be justified by its immediate true causes; any arguments considered in the past but subsequently dismissed don't count; if you could have seen it coming, why didn't you?

\n

Another possible takeaway: perhaps don't be so afraid to behave inconsistently. Rational behavior may be consistent, but that doesn't mean you can be more rational simply by being more consistent. (Compare with the argument against majoritarianism: the Aumann results guarantee that Bayesians would agree, but that doesn't mean we can be more Bayesian simply by agreeing.) Overcoming Bias commenter John suggests that you go so far as to pretend that you've just been dropped into your current life with no warning. It may be disturbing to even consider such a radical discontinuity from your past—but you can consider something hypothetically, without necessarily having to believe or act on it in any way. And if, on reflection, it turned out that your entire life up to now was a complete waste, well, wouldn't you want to know about it?—and do something about it?

\n

Decision theory is local. Don't be afraid to ask of your methodology: \"What have you done for me lately?\"

" } }, { "_id": "GShnZZRJHsELHviC4", "title": "Awful Austrians", "pageUrl": "https://www.lesswrong.com/posts/GShnZZRJHsELHviC4/awful-austrians", "postedAt": "2009-04-12T06:06:39.990Z", "baseScore": 38, "voteCount": 51, "commentCount": 95, "url": null, "contents": { "documentId": "GShnZZRJHsELHviC4", "html": "

Response to: The uniquely awful example of theism

\n

Why is theism such an ever-present example of irrationality in this community? I think ciphergoth overstates the case. Even theism is not completely immune to evidence, as the acceptance of, say, evolution by so many denominations over time will testify. Theism is a useful whipping boy because it needs no introduction.

\n

But I think the case is overstated for another reason. There are terrible epistemologies out there that are just as bad as theism's. Allow me to tell you a tale, of how I gave up my religion and my association with a school of economics at the same time.

\n

I grew up in a southern Presbyterian church in the U.S. While I was taught standard pseudo-evidential defenses for belief, such as \"creation science\" and standard critiques of evolution, my church was stringently anti-evidentialist. Their preferred apologetic was something called presuppositionalism. It's certainly a minority apologetic among major defenders of Christianity today, especially compared to the cosmological or morality arguments. But it's a particularly rigorous attempt to defend beliefs against evidence nonetheless.

\n

Presuppositionalism (in some forms) hangs on the problem of induction. We cannot ultimately justify any of our beliefs without first making some assumptions, otherwise we end in solipsism. Christianity, then, justifies itself not on evidence, but on internal consistency. It is ok for an argument to be ultimately circular, because all arguments are ultimately circular. Christianity alone maintains perfect worldview consistency when examined through this lens, and is therefore correct.

\n

Since I've spent a lot of time thinking about this--it can take a considerable effort to change one's mind, after all--I can imagine innumerable things wrong with it, but they're not the focus of this entry. First, I just want to note how close it is to a kind of intro-level Bayesian understanding. Bayesians admit that we must have priors, that it's indeed nonsense to think we can even have an argument with one who doesn't. We must ultimately admit that certain justifications are going to be either recursive or based on priors. We believe that we should update our priors based on evidence, but there's nothing in the math that tells us we can't start with a prior for some position of 0% or 100%. (There is something in the math that tells us such probability assignments are very bad ideas, and we have more than enough cognitive bias literature that tells us we shouldn't be so damn overconfident. But then, what if you have a prior that keeps you from accepting such evidence?) It doesn't have any of the mathematical rigor, but it comes very close on a few major points.
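
\n

As a quick sketch of that point about extreme priors (the likelihood values here are arbitrary): under Bayes' rule, a prior of exactly 0% never moves, no matter how strong the evidence.

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    # Bayes' rule for a single binary hypothesis.
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator if denominator else prior

# A prior of exactly 0 is immune to any finite amount of evidence,
# however strongly that evidence favors the hypothesis:
p = 0.0
for _ in range(10):
    p = posterior(p, likelihood_if_true=0.99, likelihood_if_false=0.01)
print(p)   # still 0.0
```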

\n

This is why Bayesianism appealed to me. It seemed similar to the supposedly deep argument I understood for God's existence, like something I could actually work with. (This is why, I think, anti-religion Overcoming Bias posts didn't throw me into defense mode.) This is also why I used to find Austrian economics so compelling.

\n

For those who aren't familiar, Austrian economics is a radical free-market school, the intellectual product of Ludwig von Mises, Friedrich von Hayek, and Murray Rothbard. Before I continue, in hopes of taking any Austrian economists reading this out of defense mode: I still find many Austrian insights useful, I admire Hayek for his work on knowledge and institutions, and Mises for the economic calculation argument. But the first section on epistemology in Mises' magnum opus, Human Action, is probably the best example of Dark Side Epistemology I have yet seen outside of religious apologetics or standard woo-woo. What does economics (or in Mises' case, praxeology, an expanded science of all human action that seeks to understand more than resource allocation) investigate? After excluding psychology, Mises tells us,

\n
\n

No laboratory experiments can be performed with regard to human action. We are never in a position to observe the change in one element only, all other conditions of the event remaining unchanged. . . The information conveyed by historical experience cannot be used as building material for the construction of theories and the prediction of future events. . . Neither experimental verification nor experimental falsification of a general proposition is possible in its field. (p. 31)

\n
\n

Well, ok. So how does economics tell us anything at all?

\n
\n

Praxeology is a theoretical and systematic, not a historical, science. . . It aims at knowledge valid for all instances in which the conditions exactly correspond to those implied in its assumptions and inferences. Its statements and propositions are not derived from experience. They are, like those of logic and mathematics, a priori. They are not subject to verification or falsification on the ground of experience and facts. They are both logically and temporally antecedent to any comprehension of historical facts. (p. 31)

\n
\n

In other words, the assumptions built into economics (which is a subset of praxeology)--people have preferences, are selfish (in the tautological sense--even altruist acts are self-serving to Mises), and they take rational action to satisfy those preferences--are unquestionable, ultimate givens. No evidence could ever confirm or disconfirm the predictions of economics, because it is an a priori science, just like math or logic. It is deductive--it starts from some assumptions, and its case rests on those assumptions alone, not on any evidence. (And he has a word for those of us seeking instances of human irrationality. On page 103, he points out that any sign of preference reversals can never be considered irrationality, because preferences cannot be considered stable, even across spans of a few seconds. If your by-the-second preference changes lead you to be money-pumped, so be it. You're still by assumption a rational actor, satisfying his desires.)

\n

You can understand why I think this sounds so similar to presuppositionalism. And, if you've been following Overcoming Bias, you can see how a Bayesian would differ from these views.

\n

I saw the same problems with presuppositionalism as I did Mises' epistemology. So what if it's deductive? What if your deductive logic doesn't conform to the real world? This could be true of math just as well as economics. What if 2 + 2 didn't really equal 4 in our world? Could there be any way to convince you? If the answer is no, then aren't you just starting from the bottom line? If your deductively valid economic argument makes a prediction that is observed to never be true in the real world, would this not affect your rating of your deductions' usefulness? If your deductions are non-disprovable, why do you make so many claims regarding their predictive value? What does your logic not predict?

\n

To really solidify the feeling that Mises' predictions about economics are comparable to the Bible's predictions about how the world works, consider the following. As I mentioned, Mises defines self-interest tautologically:

\n
\n

Praxeology is indifferent to the ultimate goals of action. Its findings are valid for all kinds of action irrespective of the ends aimed at. It is a science of means, not of ends. It applies the term happiness in a purely formal sense. In the praxeological terminology the proposition: man's unique aim is to attain happiness, is tautological. It does not imply any statement about the state of affairs from which man expects happiness. (p. 15)

\n
\n

However, Mises specifically predicts economic outcomes based on self-interest as, well, actual self-interest. For instance, on page 763, he proclaims that price controls will lead to rationing by non-price means. But this is only true if the provider of the good in question is attempting to maximize profit; if the producer is willing to take a hit in the wallet out of the goodness of his heart for his customers' well-being, as Mises' tautological definition of self-interest allows, a small price ceiling could conceivably have no effect.

\n

So when are we to believe Mises? When he says economics is a deductive logic that can never be tested in the real world, or when he makes predictions that can be tested in the real world? When should we believe presuppositional apologists? When they claim that \"the Bible is the word of God\" is an ultimate given, or when they tell us all about miracles (evidence for God) that we can test in the real world (by finding evidence for a global flood)?

\n

The insistence on placing assumptions further and further away from our real ultimate givens, our real recursions, our real mystical priors, is a dark side epistemology. If we can devise a test for one of our assumptions, by golly, as rationalists we're called to test it. If that assumption fails, we have to perform a proper Bayesian update. We have to use all of our evidence available to us.

\n

So to answer what other forms of irrationality we can regularly cite, I'd like to nominate Austrian economics, or at least those of its followers who still eschew the introduction of statistics, behavioral economics, or experimental economics into the discipline. It certainly isn't as pervasive as religion. It's a very minor branch of a specific discipline. Not all of its conclusions are wrong, but I think there's at least a little evidence of dial-cranking in Austrianism. And I think its epistemology is quite awful, as awful as the most evidence-defying justification for theism.

\n

Reference: Ludwig von Mises. Human Action. San Francisco: Fox & Wilkes, 1996.

" } }, { "_id": "YKSwmhGJ3pY9qobnw", "title": "How Much Thought", "pageUrl": "https://www.lesswrong.com/posts/YKSwmhGJ3pY9qobnw/how-much-thought", "postedAt": "2009-04-12T04:56:40.773Z", "baseScore": 49, "voteCount": 51, "commentCount": 26, "url": null, "contents": { "documentId": "YKSwmhGJ3pY9qobnw", "html": "

We have many built in heuristics, and most of them are trouble. The absurdity heuristic makes us reject reasonable things out of hand, so we should take the time to fully understand things that seem absurd at first. Some of our beliefs are not reasoned, but inherited; we should sniff those out and discard them. We repeat cached thoughts, so we should clear and rethink them. The affect heuristic is a tricky one; to work around it, we have to take the outside view. Everything we see and do primes us, so for really important decisions, we should never leave our rooms. We fail to attribute agency to things which should have it, like opinions, so if less drastic means don't work, we should modify English to make ourselves do so.

\n

All of these articles bear the same message, the same message that can be easily found in the subtext of every book, treatise and example of rationality. Think more. Look for the third alternative. Challenge your deeply held beliefs. Drive through semantic stop signs. Prepare a line of retreat. If you don't understand, you should make an extraordinary effort. When you do find cause to change your beliefs, complete a checklist, run a script and follow a ritual. Recheck your answers, because thinking helps; more thought is always better.

\n

The problem is, there's only a limited amount of time in each day. To spend more time thinking about something, we must spend less time on something else. The more we think about each topic, the fewer topics we have time to think about at all. Rationalism gives us a long list of extra things to think about, and angles to think about them from, without guidance on where or how much to apply them. This can make us overthink some things and disastrously underthink others. Our worst mistakes are not those where our thoughts went astray, but those we failed to think about at all. The time between when we learn rationality techniques and when we learn where to apply them is the valley.

\n

\n

Reason, like time and money, is a resource. There are many complex definitions of reason, but I will use a simple one: reason is time spent thinking. We mainly use our reason to make decisions and answer questions; if we do it right, the more reason we spend, the more likely our answer will be correct. We might question this analogy on the basis that we can't directly control our thoughts, but then, many people can't directly control their monetary spending, either; they impulse buy. In both cases, we can control our spending directly, using willpower (which is also a limited resource), or indirectly by finding ways to adjust our routine.

\n

This model is convenient enough to be suspicious, so we should apply some sanity checks to make sure it all adds up to normality. The utility we get from thinking about a decision is the cost of deciding incorrectly times the probability that we'll change our mind from incorrect to correct, minus the probability that we'll change our mind from correct to incorrect. From this, we get the highly normal statements \"thinking has higher expected utility when you're likely to change your mind\" and \"thinking has higher expected utility when the subject is important\". With a resource model of reason, we should also expect simple representations for surpluses and shortages. A surplus of reason manifests as boredom; we are bored when we have nothing to do but think, and nothing interesting to think about. A shortage of reason manifests as stress; we're stressed when we have too much to think about.
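
To make the sanity check concrete, in rough symbols of my own (writing C for the cost of deciding incorrectly), the expected utility of further thought is approximately

\[
\mathbb{E}[\Delta U] \approx C \cdot \bigl( P(\text{incorrect} \to \text{correct}) - P(\text{correct} \to \text{incorrect}) \bigr),
\]

before subtracting the value of whatever else the time could have bought; the two normal-sounding statements above fall out of C being large (the subject is important) and the first probability being large (you're likely to change your mind).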

\n

 

\n

When we consider costs as well as benefits, it becomes possible to reason about which techniques are worthwhile. It is not enough to show that a technique will sometimes illuminate truth; to justify its cost, it must be marginally more likely to illuminate truth than the next best technique. On easy questions of little consequence, a single cached thought or a simple heuristic will suffice. On hard problems, most techniques will fail to produce any insight, so we need to try more of them.

\n

Our mind is built on heuristics because they're efficient. A heuristic is not a bad word, but a way of answering questions cheaply. You shouldn't base core beliefs or important choices on heuristics alone, but for minor decisions a few simple heuristics may be all you can afford. Core beliefs and important choices, on the other hand, spawn a tree of sub-questions, the leaves of which are answered by heuristics or cached thoughts.

\n

The Overcoming Bias articles on heuristics treat them like villains that sabotage our thoughts. The standard way to prove that a heuristic exists is to present an example where it leads us astray. That means teaching readers, not to avoid using heuristics where they're inappropriate, but to avoid using them entirely. Fortunately, the architecture of our minds won't let us do that, since eliminating a heuristic entirely would make us much stupider. Instead, we should focus on learning and teaching what they feel like from the inside, with examples where they lead us astray and examples where they work properly.

\n

 

\n

In general, the expected return on investment for thinking about a topic starts high, as initial thoughts cut through confusion and affect our decision greatly, then drops as the most productive lines of reasoning are depleted. Once the expected return drops below some threshold, we should stop thinking about it.

\n

Normally, the process for allocating reason works automatically and works well. However, sometimes it breaks. Sometimes we run into questions that we simply can't resolve with the information we have available. If it's important, we run through our entire repertoire of techniques before giving up, perhaps guessing, and moving on. If it's less important, we try only the techniques that we think are likely to work before we give up. If you teach someone more techniques, then you increase the amount of time they can spend on a topic before running out of angles and being forced to move on. If those techniques fail to produce insight, then they make that person stupider; they will spend more time on questions for little benefit, and ignore more. Some people are completely unable to budget their reason, like the man who spends ten minutes deciding what to order in a restaurant, knowing full well that he would be happier spending those ten minutes focused on conversation instead. If you teach him enough statistics, he might be foolish enough to try to calculate the probability of various dishes making him happy. He'll fail, of course, because statistics can't answer that question with the data he has, but he'll waste even more time trying.

\n

It would be nice to have more reason, but evidence points to cognitive capacity being a fixed quantity. We can, however, allocate the reason we do have more efficiently. We can set cutoff points to limit the time spent on silly things like restaurant orders. Some things are known to be wastes of reason; politics is the mind killer because it can use an unlimited amount of mental energy without producing the slightest bit of utility. We can identify the thoughts that are least valuable to us by observing what our mind goes to when we're bored: mostly, we daydream and retread old lines of thought. That means that when there are worthwhile topics to think about, daydreaming and retreading should be the first things to go. This conclusion shouldn't surprise anyone, but it's good to have theoretical justification.

\n

 

\n

Take a moment to think about what you spend time thinking about, and where your cutoff point is. Do you keep thinking about the same topic well past the point where insights stop coming? Do you get distracted and stop too early? If you decide unconsciously, would your conscious choice be the same or different?

" } }, { "_id": "8q8hqtfBHALBEmyYi", "title": "Less Wrong IRC meetup, going soon", "pageUrl": "https://www.lesswrong.com/posts/8q8hqtfBHALBEmyYi/less-wrong-irc-meetup-going-soon", "postedAt": "2009-04-11T18:53:48.155Z", "baseScore": 2, "voteCount": 1, "commentCount": 2, "url": null, "contents": { "documentId": "8q8hqtfBHALBEmyYi", "html": "

Reminder:  Less Wrong will be having a meetup on Saturday at 7pm UTC (convert to other time zones), in the #lesswrong IRC channel on Freenode. If all goes well, this will be a recurring event. If you haven't used IRC before, Mibbit provides a web-based client you can use.

\n

(It's my understanding that this works out to 12pm Pacific or 3pm Eastern, i.e. in about 7 minutes from the time of this posting.  I'll delete this post after the meeting is over - comments to main post only, please.)

" } }, { "_id": "nGovfP5okymjcghw8", "title": "Maybe theism is wrong", "pageUrl": "https://www.lesswrong.com/posts/nGovfP5okymjcghw8/maybe-theism-is-wrong", "postedAt": "2009-04-11T16:53:29.690Z", "baseScore": -7, "voteCount": 20, "commentCount": 14, "url": null, "contents": { "documentId": "nGovfP5okymjcghw8", "html": "

 

\n

(This is meant as an entirely rewritten version of the original post. It is still long, but hopefully clearer.)

\n

 

\n

Theism is often bashed. Part of that bashing is gratuitous and undeserved. Some people therefore feel compelled to defend theism. Their defence of theism goes further than just putting the record straight, though. It attempts to show how theism can be a good thing, or right. That is probably going too far.

\n

I would argue several points. And for that I will be using the most idealistic vision of religion I can conjure, keeping in mind that real world examples may not be as utopian. My intended conclusion is that fairness and tolerance are a necessary and humane means to the end of helping people, which cannot, however, be used to justify as right something that is ultimately wrong.

\n

Theism is indeed a good thing in the short and medium term, both for individuals and for society, as it holds certain benefits: it helps people stick together in close-knit communities, helps them live a more virtuous life by giving them incentives to do so, and helps them feel better when life feels unbearable or meaningless.

\n

Another point is that theism also possesses deep similarities with science, and uses rational arguments and induction as well as it can. As well as it can, that is, insofar as the premises of theism allow; those premises, what we could call its priors, are, in Christianity for instance, to be found in the Bible.

\n

Finally, I also wanted to draw on further similarities between religion and secular groups of people. Atheism, humanism, transhumanism, even rationalism as we know it on LW. These similarities lie in the objectives which any of those groups honestly strives to attain. Those goals are, for instance, truth, the welfare of human beings, and their betterment.

\n

Within its own world view, each of those groups is indeed doing its best to achieve those ends. One of Catholicism's guiding beacons, used to direct people's life paths, can roughly be said to be \"what action should I take that will make me more able to love others, and myself?\", for instance. This involves understanding and following the word of God, as love and morality are understood to emanate from that source.

\n

And so the Bible is supposed to hold those absolute truths, not so much in a straightforward, explained way, but rather in the same way that the observable universe is supposed to hold absolute truth for secular science. And just as it is possible in the scientific model to misconstrue observations and build flawed theories from observational and experimental data, so it is possible for a Christian to misunderstand the data presented in the Bible. Rational edifices of thought have therefore been built to derive humanly understandable, cross-checked (within that edifice), usable-on-a-daily-basis truth from the Bible.

\n

That is about as far as we can go for similarities, purity of purpose, intellectual honesty and adequacy with the real world.

\n

The premise of theism itself is flawed. Theism presupposes the supernatural. Therefore, the priors of theism do not correspond to the real state of the universe as we observe it, and this has two main consequences.

\n

The first is that an intellectual edifice based upon flawed premises, no matter how carefully crafted, will still be flawed itself.

\n

The second runs deeper: the premises of theism are themselves in part incompatible with rationality, and hence limit the potential use of rational methods. In other words, some methods of rationality, as well as some particular arguments, are forbidden, or unknown, to what we could tentatively call religious science.

\n

From that, my first conclusion is that theism is wrong. Epistemically wrong, but also doing itself a disservice, as the goals it has set for itself cannot be achieved through its program. That program will not be able to hit its targets in optimization space, because of that epistemic flaw. Even though theism has short and medium term advantages, its whole edifice makes it a dead end, one that will at the very least slow down humanity's progress towards nobler objectives like truth or betterment, if not render that progress outright impossible past a certain point.

\n

Yet it seems to me that this mistaken edifice isn't totally insane, far from it, at least at its roots. Hence it should be possible to heal it, or at least to help the people who are part of it, to heal them.

\n

But religion cannot honestly be called right, no matter how deeply that idea is rooted in our culture and collective consciousness. In the long term, theism deprives us of our potential; it builds a virtual, unnecessary cage around us.

\n

To conclude on that, I wanted to point out that religious belief appears to be a human universal, and probably a hard-coded part of human nature. It seems fair to recognize it in ourselves, if we have that tendency. I know I do, for instance, and fairly strongly so. The same goes for belief in the supernatural.

\n

This should be part of a more general mental discipline of admitting to our faults and biases, rather than trying to hide and compensate for them. The only way to dissect and correct them is first to observe those faults in our reasoning thoroughly, even publicly. In a community of rationalists, there should be no question that even the most flawed and irrational of us should be treated simply as a friend in need of help, if he so desires, and if we have enough resources to provide for his needs. The important thing is to have someone with a willingness to learn and to grow past his mistakes. This can indeed be made easier if we are supportive of each other, and unconditionally tolerant.

\n

Yet at the same time, even for that purpose, we can't yield to falseness. We can and must admit, for instance, that religion has good points, that we may not have a licence to change people against their will, and that if people want to be helped, they should feel relaxed in explaining all the relevant information about what they perceive to be their problem. We can't go as far as saying that such a flaw, or problem, is in itself alright, though.

\n

 

" } }, { "_id": "a3MhmPM7eZbP6pFPZ", "title": "Twelve Virtues booklet printing?", "pageUrl": "https://www.lesswrong.com/posts/a3MhmPM7eZbP6pFPZ/twelve-virtues-booklet-printing", "postedAt": "2009-04-11T14:40:10.521Z", "baseScore": 8, "voteCount": 19, "commentCount": 32, "url": null, "contents": { "documentId": "a3MhmPM7eZbP6pFPZ", "html": "

For a while now, I've been using a laser printer to print out a couple of hundred copies of the Twelve Virtues of Rationality (in its printable pamphlet version) and taking them with me to conferences, talks, and science fiction conventions.  Cut, staple in the middle (using a large-sized, measuring stapler), and fold.  This method is very cheap, probably something like ten cents a copy for ink and paper.  But it produces crappy-looking pamphlets.

\n

Does anyone know of a way of cheaply printing small 16-page pamphlets?  Take a look at the pamphlet to see the current size.  I would really like to see the pamphlets stack well so I can plump down 50 of them without the tower falling over, which is the main problem with the staple-and-fold method.  But even more important is that they be cheap, considering the quantities in which I hand these out for free.  Something conducive to a professional-looking cover (i.e. allowing for the top sheet to be glossy or a higher quality of paper) would also be nice, again cost permitting.

\n

If I can find a good solution I'll also go ahead and get the pamphlet graphically redesigned before printing, of course, and include some more direct proselytizing material for Less Wrong on the back cover.

\n

I've looked around online, but all the print shops I've seen have been way too expensive for giving away 200 copies per convention - even by myself, much less getting other people to do it on a routine basis.  Does anyone know how to get this done cheaply?  A minimum order of 10,000 for $1000 would be quite acceptable - I expect at that price I could ship some boxes to other LWers.

" } }, { "_id": "GsiTXp2v3SRfG8ugK", "title": "Too much feedback can be a bad thing", "pageUrl": "https://www.lesswrong.com/posts/GsiTXp2v3SRfG8ugK/too-much-feedback-can-be-a-bad-thing", "postedAt": "2009-04-11T14:05:18.306Z", "baseScore": 11, "voteCount": 16, "commentCount": 13, "url": null, "contents": { "documentId": "GsiTXp2v3SRfG8ugK", "html": "

Didn't have the time to read the article itself, but based on the abstract, this certainly sounds relevant for LW:

\r\n
\r\n

Recent advances in information technology make it possible for decision makers to track information in real-time and obtain frequent feedback on their decisions. From a normative sense, an increase in the frequency of feedback and the ability to make changes should lead to enhanced performance as decision makers are able to respond more quickly to changes in the environment and see the consequences of their actions. At the same time, there is reason to believe that more frequent feedback can sometimes lead to declines in performance. Across four inventory management experiments, we find that in environments characterized by random noise more frequent feedback on previous decisions leads to declines in performance. Receiving more frequent feedback leads to excessive focus on and more systematic processing of more recent data as well as a failure to adequately compare information across multiple time periods.

\r\n
\r\n

Hat tip to the BPS Research Digest.

\r\n

ETA: Some other relevant studies from the same site, don't remember which ones have been covered here already:

\r\n

Threat of terrorism boosts people's self-esteem

\r\n

The \"too much choice\" problem isn't as straightforward as you'd think

\r\n

Forget everything you thought you knew about Phineas Gage, Kitty Genovese, Little Albert, and other classic psychological tales

\r\n

 

" } }, { "_id": "g3W2mLoGN5osuH4f4", "title": "Toxic Truth", "pageUrl": "https://www.lesswrong.com/posts/g3W2mLoGN5osuH4f4/toxic-truth", "postedAt": "2009-04-11T11:25:16.530Z", "baseScore": 16, "voteCount": 26, "commentCount": 31, "url": null, "contents": { "documentId": "g3W2mLoGN5osuH4f4", "html": "

For those who haven't heard about this yet, I thought this would be a good way to show the potentially insidious effect of biased, one-sided analysis and presentation of evidence under ulterior motives, and the importance of seeking out counter-arguments before accepting a point, even when the evidence being presented to support that point is true.

\n
\n

\"[DHMO] has been a part of nature longer than we have; what gives us the right to eliminate it?\" - Pro-DHMO web site.

\n
\n

DHMO (hydroxilic acid), commonly found in excised tumors and lesions of terminal lung and throat cancer patients, is a compound known to occur in second hand tobacco smoke. Prolonged exposure in solid form causes severe tissue damage, and a proven link has been established between inhalation of DHMO (even in small quantities) and several deaths, including many young children whose parents were heavy smokers.

It's also used as a solvent during the synthesis of cocaine, in certain forms of particularly cruel and unnecessary animal research, and has been traced to the distribution process of several cases of pesticides causing genetic damage and birth defects. But there are huge political and financial incentives to continue using the compound.

There have been efforts across the world to ban DHMO - an Australian MP has announced a campaign to ban it internationally - but little progress. Several online petitions to the British prime minister on this subject have been rejected. The executive director of the public body that operates Louisville Waterfront Park was actually criticised for posting warning signs on a public fountain that was found to contain DHMO. Jacqui Dean, a New Zealand National Party MP, was similarly told \"I seriously doubt that the Expert Advisory Committee on Drugs would want to spend any time evaluating that substance\".

\n

If you haven't guessed why, re-read my first sentence then click here.

HT the Coalition to Ban Dihydrogen Monoxide.

\n

[Edit to clarify point:] I'm not saying truth is in any way bad. Truth rocks. I'm reminding you truth is *not sufficient*. When it's given treacherously or used recklessly, truth is as toxic as hydroxilic acid.

\n

Follow-up to: Comment in The Forbidden Post.

" } }, { "_id": "S4a66fZs64kxjBskY", "title": "Maybe Theism Is OK -- Part 2", "pageUrl": "https://www.lesswrong.com/posts/S4a66fZs64kxjBskY/maybe-theism-is-ok-part-2", "postedAt": "2009-04-11T06:32:50.408Z", "baseScore": -6, "voteCount": 16, "commentCount": 10, "url": null, "contents": { "documentId": "S4a66fZs64kxjBskY", "html": "

In response to: The uniquely awful example of theism

\n

And Maybe Theism Is OK

\n

Finally, I think I understand where gim and others are coming from when they made statements that I thought represented overly intolerant views of religious belief. I think that a good summary of the source of the initial difference in opinion is that while many people in this group have the goal of eliminating all sources of irrationality, I would like to pick and choose which sources of irrationality I keep in the optimization of a different problem: general life-hacking.

\n

Probably many people in this group believe that the best life-hack would be to eliminate irrationality. But I'm pretty sure this depends on the person (not everyone is suited for X-rationality), and I'm pretty sure -- though not certain -- that my best life-hack would include some irrationality.

\n

Since my goals are different from those of this forum, many of my views are not relevant here, and there is no need to debate them.

\n

Instead, I would like to present two arguments (1,2) for why it could be rational to hold an irrational belief, and two arguments (3,4) as to why someone could be more accepting of the existence of irrational beliefs (i.e., why not to hate it).

\n

(1) It could be rational to hold an irrational belief if you are aware of your irrational belief and choose to hold it because it is grafted to components of your personality/psyche that are valuable to you. For example, you may find that

\n\n

I imagine these situations would be the result of an organically developing mind that has made several errors and is possibly unstable. But until we have a full understanding of mental processes/psychology/the physiology of emotions, we cannot expect a rational person to just \"tough it out\" to optimize rationality while his life falls apart.

\n

Later added: This argument has since been described better, and with a better emphasis, in [this comment](http://lesswrong.com/lw/aq/how_much_thought/6zp).

\n

(2) It could be rational to hold an irrational belief if you choose to hold it because you would like to exercise true control of your mind. Put another way, you may find it to be an aesthetic art of some form to choose a set of beliefs and truly believe them. Why would anyone want to do this? Eliminating all beliefs and becoming rational is a good exercise in controlling your mind. I hazard that a second exercise would be to believe what you consciously choose to.

\n

(3) I think there is another reason to consciously choose to try to believe something that you don't believe rationally-- true understanding of the enemy; the source and the grip of an irrational thought. What irked me most about the negative comments about religious views was the lack of any empathy for those views. It may seem like a contradiction but while I believe some religious views are irrational I do not dismiss people who hold them as hopelessly irrational. With empathy, I believe that it is possible to hold religious views and not greatly compromise rationality.

\n

(4) Maybe you are indeed right that any kind of religious view is irrational and that we would be better off without it. However, it is not at all clear that religious views can ever be completely exorcised... Suppose we wanted to create a world in which important parts of people's personalities are never tied to religious views. Are children allowed to daydream? Is a child allowed to daydream they are omnipotent? Are they allowed to pretend there is a God for a day? How will it affect creativity and motivation and development if there is no empathy for an understanding of God?

" } }, { "_id": "Afvk6GGfoo8mea5cb", "title": "Missed Distinctions", "pageUrl": "https://www.lesswrong.com/posts/Afvk6GGfoo8mea5cb/missed-distinctions", "postedAt": "2009-04-11T03:15:40.595Z", "baseScore": 38, "voteCount": 47, "commentCount": 13, "url": null, "contents": { "documentId": "Afvk6GGfoo8mea5cb", "html": "

When we lump unlike things together, it confuses us and opens holes in our theories. I'm not normally one to read about diets, dieting advice, or anything of that sort, but in today's article about the Shangri-La Diet, I saw an important distinction that no one's talked about. Something Eliezer said in the comments struck me as odd:

\n
\n

a skipped meal you wouldn't notice would have me dizzy when I stand up

\n
\n

And a few posts later,

\n
\n

I can starve or think, not both at the same time.

\n
\n

Reading these, I thought, that's not what being hungry feels like for me. But while being hungry doesn't feel like that, those descriptions were nonetheless familiar. And then it hit me.

\n

He wasn't describing the symptoms of hunger. He was describing the symptoms of hypoglycemia, more commonly known as low blood sugar. Blood sugar is one of the main systems responsible for regulating appetite, so for most people, having low blood sugar and being hungry are one and the same. The main focus of the Atkins diet, for example, is reducing swings in blood sugar, thereby reducing appetite. The Shangri-La diet seems like it would have a similar effect.

\n

Being diabetic (the kind caused by immunology, not the kind caused by diet), I monitor and control my blood sugar, so I have ample opportunities to observe how it affects my eating habits and how I feel. Like most insulin-dependent diabetics, I have been trained with a fairly detailed model of blood sugar, how it's affected by food and insulin, and procedures to follow if it's too high or low. The standard procedure for low blood sugar, taught to all diabetics, is to test blood sugar, eat exactly 15g (60 calories) of sugar, wait 15 minutes, then test again. In practice, I have sometimes responded to hypoglycemia, not with 15g of sugar as the procedure specifies, but with 15g of sugar, immediately followed by a thousand plus calories of binge eating - basically, as much food as I could shove down in the time between when I first started eating, and when my blood sugar returned to normal (about ten minutes). This behavior is common among people on diets stricter than they can handle. For me, someone not on a diet, with a mostly full stomach, it's downright odd. Or is it?

\n

Being hungry is not the same as having low blood sugar. Hypoglycaemia feels like extreme hunger (plus a few other symptoms), but while extreme hunger takes a lot of food to get rid of, it only takes 60 calories and 15 minutes to completely eliminate hypoglycaemia. If you're hungry, you ought to suppress it. If you're hypoglycaemic, on the other hand, you need to deal with it swiftly, and in a controlled manner. What happens if you don't? As a diabetic, this, too, is in my training. The pancreas will release glucagon, a hormone that causes the liver to release stored sugar into the bloodstream. Getting rid of stored energy is good for a dieter, right? Well, in this case, no it isn't; the sugar stored in the liver would have been released the next time you exercised. Rather than burning fat, you're burning short-term energy reserves, so that when you do make it to the gym, you'll hit a wall more quickly. And of course, while your blood sugar is low and you aren't eating, you can't focus and you quickly burn through willpower.

\n

Today's best diets prevent low blood sugar entirely, rendering the hunger vs. hypoglycaemia distinction moot. However, if you can't tell the difference between hunger and hypoglycaemia, then you can't tell whether it's your diet failing or your willpower. Blood sugar test kits are affordable and don't require a prescription, and once you know what low blood sugar feels like, you won't need the kit anymore. There is much more to dieting than just controlling blood sugar, of course, but we do know that blood sugar is important. So why has no one proposed the Prick Your Finger diet? Why do none of the popular diets involve measuring blood sugar at all, ever?

\n

Mild hypoglycaemia feels like a caffeine overdose without the energy: irritability, palpitations, and tingling in the extremities. It is a distinctly alien feeling, and includes an urgent desire for food. Only sugar can eliminate it; fat, protein and complex carbohydrates will not help at all, and should be avoided. Severe hypoglycaemia produces other symptoms, but can only be produced using medication.

" } }, { "_id": "mCQf5FCXjwTBeLC6u", "title": "Is masochism necessary?", "pageUrl": "https://www.lesswrong.com/posts/mCQf5FCXjwTBeLC6u/is-masochism-necessary", "postedAt": "2009-04-10T23:48:35.000Z", "baseScore": 9, "voteCount": 22, "commentCount": 147, "url": null, "contents": { "documentId": "mCQf5FCXjwTBeLC6u", "html": "

Followup to Stuck in the middle with Bruce:

\n

Bruce is a description of masochistic personality disorder.  Bruce's dysfunctional behavior may or may not be related to sexual masochism [safe for work], which is demonized by most people in America.  Yet there are ordinary, socially-accepted behaviors that seem partly masochistic to me:

\n\n

Question 1: Can you list more?

\n

Question 2: Doubtless some of the behaviors I listed have completely different explanations, some of which might not involve masochism at all.  Which do you think involve enjoying pain?  Can you cluster them by causal mechanism?

\n

Question 3: When we find ourselves acting masochistically, should we try to \"correct\" it?  Or is it part of a healthy human's nature?  If so, what's the evolutionary-psych explanation?  (I was surprised not to find any evo-psych explanations for masochism on the web; or even any general theory of masochism that tried to unite two different behaviors.  All I found were the ideas that sexual masochism is caused by bad childhood models of love, and that masochistic personality is caused by other, unspecified bad experiences.  No suggestion that masochism is part of our normal pleasure mechanism.)

\n

Some hypotheses:

\n
    \n
  1. Evolution implemented \"need to explore\" (in the \"exploration/exploitation\" sense) as pleasure in new experiences, and adaptation to any particular often-repeated stimulus.  This could result in seeking ever-higher levels of stimulation, even above the pain threshold.  (This could affect a culture as well as an organism, giving the progression Vivaldi -> Bach -> Mozart -> Beethoven -> Wagner -> Stravinsky -> Berg -> screw it, let's invent rock and roll and start over.  My original belief was that this progression was caused by people trying to signal sophistication, rather than by honest enjoyment of music.  But maybe some people honestly enjoy Berg.)
  2. We have a \"pain thermostat\" to get us to explore / prevent us from being too cowardly, and modern life leaves us below our set point.  (Is masochism more prevalent now than in the bad old days?)
      1. An objection to this is that sometimes, when people are in emotional pain, they work through it by throwing themselves into further emotional pain (e.g., by listening to Pink Floyd).
          1. An objection to this objection is that primal scream therapy seems not actually to work except in the short term.
  3. Pain triggers endorphins in order to help us fight or flee, and it feels good.
  4. We enjoy fighting and athletic competition, and pain is associated with these things we enjoy.
\n

My guess is that, if it's a side-effect (e.g., 3) or a non-causal association (4), it's okay to eliminate masochism.  Otherwise, that could be risky.

\n

These all lead up to Question 4, which is a fun-theory question:  Would purging ourselves of masochism make life less fun?

\n

ADDED: Question 5: Can we train ourselves not to be Bruce without damaging our enjoyment of these other things?

" } }, { "_id": "LKHJ2Askf92RBbhBp", "title": "Metauncertainty", "pageUrl": "https://www.lesswrong.com/posts/LKHJ2Askf92RBbhBp/metauncertainty", "postedAt": "2009-04-10T23:41:52.946Z", "baseScore": 26, "voteCount": 24, "commentCount": 4, "url": null, "contents": { "documentId": "LKHJ2Askf92RBbhBp", "html": "

Response to: When (Not) To Use Probabilities

“It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.” —E. T. Jaynes

The uncertainty due to vague (non-mathematical) language is no different from uncertainty by way of \"randomizing\" something (after all, probability is in the mind). The principle still holds; you should be able to come up with a better way of doing things if you can put in the extra thought.

In some cases, you can't afford to waste time or it's not worth the thought, but when dealing with things such as deciding whether to run the LHC or signing up for cryonics, there's time, and it's sorta a big deal, so it pays to do it right.


If you're asked \"how likely is X?\", you can answer \"very unlikely\" or \"0.127%\". The latter may give the impression that the probability is known more precisely than it is, but the first is too vague; both strategies do poorly on the log score.

If you are unsure what probability to state, state this with... another probability distribution.

\n

\"My probability distribution over probabilities is an exponential with a mean of 0.127%\" isn't vague, it isn't overconfident (at the meta^1 level), and gives you numbers to actually bet on.

The expectation value of the metaprobability distribution (integral from 0 to 1 of Pmeta*p*dp) is equal to the probability you give when trying to maximize your expected log score.

To see this, we write out the expected log score (integral from 0 to 1 of Pmeta*(p*log(q)+(1-p)*log(1-q))dp). If you split this into two integrals and pull out the terms that are independent of p, the integrals just turn into the expectation value of p, and the formula is now that of the log score with p replaced with mean(p). We already know that the log score is maximized when q = p, so in this case we set q = mean(p).
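
Spelled out, with Pmeta(p) as the metaprobability density, q as the stated probability, and p-bar as mean(p), the derivation above is just

\[
\mathbb{E}[\text{score}(q)] = \int_0^1 P_{\text{meta}}(p)\,\bigl(p \log q + (1-p)\log(1-q)\bigr)\,dp
= \bar{p}\,\log q + (1-\bar{p})\,\log(1-q),
\qquad \bar{p} = \int_0^1 P_{\text{meta}}(p)\,p\,dp,
\]

i.e. the ordinary log score evaluated at the mean, and therefore maximized at q = mean(p).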

This is a very useful result when dealing with extremes where we are not well calibrated. Instead of punting and saying \"err... prolly aint gonna happen\", put a probability distribution on your probability distribution and take the mean. For example, if you think X is true, but you don't know if you're 99% sure or 99.999% sure, you've got to bet at ~99.5%.
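
As a quick numerical sanity check on that ~99.5% figure (a throwaway sketch; the 50/50 split between the two candidate confidence levels is made up purely for illustration):

```python
import numpy as np

# Hypothetical metaprobability: equally unsure whether I'm 99% sure or 99.999% sure.
candidate_p = np.array([0.99, 0.99999])
weights = np.array([0.5, 0.5])

def expected_log_score(q):
    # Average log score of betting q, taken over the metaprobability on p.
    return np.dot(weights, candidate_p * np.log(q) + (1 - candidate_p) * np.log(1 - q))

mean_p = np.dot(weights, candidate_p)                  # ~0.995
grid = np.linspace(0.9, 0.99998, 200000)
best_q = grid[np.argmax([expected_log_score(q) for q in grid])]

print(mean_p, best_q)  # both come out at roughly 0.995
```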

It is still no guarantee that you'll be right 99.5% of times (by assumption we're not calibrated!), but you can't do any better given your metaprobability distribution.
 
You're not saying \"99.5% of the time I'm this confident, I'm right\". You're just saying \"I expect my log score to be maximized if I bet on 99.5%\". The former implies the latter, but the latter does not (necessarily) imply the former.

This method is much more informative than \"almost sure\", and gives you numbers to act on when it comes time to \"shut up and multiply\". Your first set of numbers may not have \"come from numbers\", but the ones you quote now do, which is an improvement. Theoretically this could be taken up a few steps of meta, but once is probably enough.

\n

Note: Anna Salamon's comment makes this same point.    

" } }, { "_id": "JRt4qrbbpgEW5LBWG", "title": "Maybe Theism Is OK ", "pageUrl": "https://www.lesswrong.com/posts/JRt4qrbbpgEW5LBWG/maybe-theism-is-ok", "postedAt": "2009-04-10T21:09:21.105Z", "baseScore": -2, "voteCount": 16, "commentCount": 37, "url": null, "contents": { "documentId": "JRt4qrbbpgEW5LBWG", "html": "

I would like to argue that there could be a more tolerant view of religion/theism here on Less Wrong. The extent to which theism is vilified here seems disproportionate to me.

\n

It depends on the specific scenario how terrible religion is. It is easy to look at the very worst examples of religion and conclude that religion can be irrational in a terribly wrong way. However, religion can also be nearly rational. Considering that any way we view the world is an illusion to some extent. Indeed the whole point of this site is to learn ways to shed more of our illusions, not that we have no illusions.

\n

There are the religious beliefs that contradict empirical observation and those that are independent of it...

\n

A) Could it be rational for a person to hold beliefs that are independent of empirical observation if (a) the person concedes that they are irrational not empirically based and (b) is willing to drop them if they prove to not be useful?

\n

B) Could it be rational for a person to hold unusual beliefs as a result of contradicting empirical observations?

\n

As a least convenient world exercise, what is the most rational belief in God that you can think of?

\n

 

" } }, { "_id": "geqg9mk73NQh6uieE", "title": "Akrasia and Shangri-La", "pageUrl": "https://www.lesswrong.com/posts/geqg9mk73NQh6uieE/akrasia-and-shangri-la", "postedAt": "2009-04-10T20:53:14.746Z", "baseScore": 53, "voteCount": 51, "commentCount": 97, "url": null, "contents": { "documentId": "geqg9mk73NQh6uieE", "html": "

Continuation of The Unfinished Mystery of the Shangri-La Diet

\n

My post about the Shangri-La Diet is there to make a point about akrasia.  It's not just an excuse: people really are different and what works for one person sometimes doesn't work for another.

\n

You can never be sure in the realm of the mind... but out in material foodland, I know that I was, in fact, drinking extra-light olive oil in the fashion prescribed.  There is no reason within Roberts's theory why it shouldn't have worked.

\n

Which just means Roberts's theory is incomplete.  In the complicated mess that is the human metabolism there is something else that needs to be considered.  (My guess would be \"something to do with insulin\".)

\n

But if the actions needed to implement the Shangri-La Diet weren't so simple and verifiable... if some of them took place within the mind... if it took, not a metabolic trick, but willpower to get to that amazing state where dieting comes effortlessly and you can lose 30 pounds...

\n

Then when the Shangri-La Diet didn't work, we unfortunate exceptions would get yelled at for doing it wrong and not having enough willpower.  Roberts already seems to think that his diet ought to work for everyone; when someone says it's not working, Roberts tells them to drink more extra-light olive oil or try a slightly different variant of the diet, rather than saying, \"This doesn't work for some people and I don't know why.\"

\n

If the failure had occurred somewhere inside the dark recesses of my mind where it could be blamed on me, rather than within my metabolism...

\n

If Roberts's hypothesis is correct, then I'm sure that plenty of people have made some dietary change, started losing weight due to the disrupted flavor-calorie association, and congratulated themselves on their wonderful willpower for eating less.  When I moved out of my parents' home and started eating less and exercising and losing more than a pound a week, you can bet I was congratulating myself on my amazing willpower.

\n

Hah.  No, I just stumbled onto a metabolic pot of gold that let me lose a lot of weight using a sustainable expenditure of willpower.  When that pot of gold was exhausted, willpower ceased to avail.

\n

(The metabolically privileged don't believe in metabolic privilege, since they are able to lose weight by trying! harder! to diet and exercise, and the diet and exercise actually work the way they're supposed to... I remember the nine-month period in my life where that was true.)

\n

When I look at the current state of the art in fighting akrasia, I see the same sort of mess.

\n

People try all sorts of crazy things—and as in dieting, there's secretly a general reason why any crazy thing might seem to work: if you expect to win an internal conflict, you've already programmed yourself to do the right thing because you expect that to be your action; it takes less willpower to win an internal conflict you expect to win.

\n

And people make up all sorts of fantastic stories to explain why their tricks worked for them.

\n

But their tricks don't work for everyone—some others report success, some don't.  The inventors do not know the deep generalizations that would tell them why and who, explain the rule and the exception.  But the stories the inventors have created to explain their own successes, naturally praise their own willpower and other virtues, and contain no element of luck... and so they exhort others:  Try harder!  You're doing it wrong!

\n

There is a place in the mind for willpower.  Don't get me wrong, it's useful stuff.  But people who assign their successes to willpower—who congratulate themselves on their stern characters—may be a tad reluctant to appreciate just how much you can be privileged or disprivileged by having a mental metabolism where expending willpower is effective, where you can achieve encouraging results, at an acceptable cost to yourself, and sustain the effort in the long run.

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"Collective Apathy and the Internet\"

\n

Previous post: \"Beware of Other-Optimizing\"

" } }, { "_id": "BD4oExxQguTgpESdm", "title": "The Unfinished Mystery of the Shangri-La Diet", "pageUrl": "https://www.lesswrong.com/posts/BD4oExxQguTgpESdm/the-unfinished-mystery-of-the-shangri-la-diet", "postedAt": "2009-04-10T20:30:15.899Z", "baseScore": 50, "voteCount": 51, "commentCount": 232, "url": null, "contents": { "documentId": "BD4oExxQguTgpESdm", "html": "

Followup to Beware of Other-Optimizing

\n

Once upon a time, Seth Roberts (a professor of psychology at Berkeley, on the editorial board of Nutrition) noticed that he'd started losing weight while on vacation in Europe.  For no apparent reason, he'd stopped wanting to eat.

\n

Some time later, The Shangri-La Diet swept... the econoblogosphere, anyway.  People including some respectable economists tried it, found that it actually seemed to work, and told their friends.

\n

The Shangri-La Diet is unfortunately named - I would have called it \"the set-point diet\".  And even worse, the actual procedure sounds like the wackiest fad diet imaginable:

\n

Just drink two tablespoons of extra-light olive oil early in the morning... don't eat anything else for at least an hour afterward... and in a few days it will no longer take willpower to eat less; you'll feel so full all the time, you'll have to remind yourself to eat.

\n

Why?  I'm tempted to say \"No one knows\" just to see what kind of comments would show up, but that would be cheating.  Roberts does have a theory motivating the diet, an elegant combination of pieces individually backed by previous experiments:

\n\n

I'm not going to go into all the existing evidence that backs up each step of this theory, but the theory is very beautiful and elegant.  The actual Shangri-La Diet is painfully simple by comparison: consume nearly tasteless extra-light olive oil, being careful not to associate it with any flavors before or after, to raise your body weight a little without raising your set point.  Your body weight goes above your set point, and you stop feeling hungry.  Then you eat less... and your weight drops... and your set point drops a little less than that... but then next morning it's time for your next dose of extra-light olive oil, which once again puts your (decreased) weight a bit above the set point.  The regular dose of almost flavorless calories tilts the dynamic balance downward.  That's the theory.
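
For what it's worth, here is a throwaway toy simulation of that feedback loop, with invented numbers and guessed functional forms of my own rather than anything from Roberts, just to illustrate how a daily dose of set-point-neutral calories can tilt the balance downward:

```python
# Toy model (invented parameters, not Roberts's equations): hunger is driven by how far
# weight sits below the set point, the set point tracks current weight with a lag,
# and the morning oil adds a little weight without moving the set point at all.
weight, set_point = 200.0, 200.0   # hypothetical starting values, in pounds
oil = 0.05                         # daily weight added by the flavorless oil
burn = 0.10                        # baseline daily weight burned off

for day in range(180):
    weight += oil                              # oil: weight up, set point untouched
    hunger = max(0.0, set_point - weight)      # no appetite while above the set point
    weight += 0.3 * hunger - burn              # eat in proportion to hunger
    set_point += 0.05 * (weight - set_point)   # set point follows weight, but slowly

print(round(weight, 1), round(set_point, 1))   # both drift a bit below 200 over the run
```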

\n

Many people, including some trustworthy econblogger types, have reported losing 1-2 pounds/week by implementing the actual actions of the Shangri-La Diet, up to 30 pounds or even more in some cases.  Without expending willpower.

\n

I tried it.  It didn't work for me.

\n

Now here's the frustrating thing:  The Shangri-La Diet does not contain an obvious exception for Eliezer Yudkowsky.  On the theory as stated, it should just work.  But I am not the only person who reports trying this diet (and a couple of variations that Roberts recommended) without anything happening, except possibly some weight gain due to the added calories.

\n

And here's the more frustrating thing:  Roberts's explanation felt right.  It's one of those insights that you grasp and then so much else snaps into place.

\n

It explained that frustrating experience I'd often had, wherein I would try a new food and it would fill me up for a whole day - and then, as I kept on eating this amazing food in an effort to keep my total intake down, the satiation effect would go away.

\n

It explained why I'd lost on the order of 50-60 pounds - with what, in retrospect, was very little effort - when I first moved out of my parents' house and to a new city and started eating non-Jewish food.  In retrospect, I was eating an amazingly little amount each day, like 1200 calories, but without any feeling of hunger.  And then my weight slowly started creeping up again, and no amount of exercise - to which (ha!) I'd originally attributed the weight loss - seemed able to stop it.

\n

It's always hard to pick reality out of the gigantic morass of competing dietary theories.  One of the elegant charms of Roberts's hypothesis is that it helps explain why this is so - the mess of incoherent results.  Any new diet will seem to work for a few months or weeks, you're losing weight and everything seems wonderful, you tell all your friends and they buy the same diet book, and then bam the flavor-calorie association kicks back in and you're back to hell.  The number-one result of weight-loss science is that 95% of people who lose weight regain it.

\n

(I haven't heard any complaints from people regaining weight they lost on the Shangri-La Diet, however - if it works for you at all, it seems to go on working.  Most of the complaints on the forums are from people who suddenly plateau after losing 30 pounds, but who want to lose more.  Or people like me, who try it, and find that it doesn't seem to do anything, or that we're gaining weight with no apparent loss of appetite.)

\n

I have a pretty strong feeling - I don't know if I should trust it, since I'm not a dietary scientist - that Roberts's hypothesis is at least partially right.  It makes a lot of data snap into focus.  The pieces are well-supported individually.

\n

But I don't think that Roberts has the whole story.  There's something missing - something that would explain why the Shangri-La Diet lets some people control their weight as easily as a thermostat setting, and why others lose 30 pounds and then plateau well short of their goal, and why others simply find the Shangri-La diet ineffective.  The Mystery of Shangri-La is not how the diet works when it does work; Roberts has made an excellent case for that.  The question is why it sometimes doesn't work.  There is a deeper law, I strongly suspect, that governs both the rule and the exception.

\n

The problem is, though - and here's the really frustrating part - Roberts seems to think he does have the whole answer.  If the diet doesn't work at first, his answer is to try more oil... which is a pretty scary answer if you're already gaining weight from the extra calorie intake!  I decided not to go down this route because it didn't seem to work for the people on the forums who were reporting that the Shangri-La Diet didn't work for them.  They just gained even more weight.

\n

And what really makes this a catastrophe is that this theory has never been analyzed by controlled experiment, which drives me up the frickin' WALL.  Roberts himself is a big advocate of \"self-experimentation\", which I suppose explains why he's not pushing harder for testing.  (Though it's not like Roberts is a standard pseudoscientist, he's an academic in good standing.)  But with reports of such drastic success from so many observers, some of them reliable, outside dietary scientists ought to be studying this.  What the fsck, dietary scientists?  Get off your butts and study this thing!  NOW!  Report these huge results in a peer-reviewed journal so that everyone gets excited and starts studying the exceptions to the rule!

\n

It's awful; it seems like Roberts has gotten so close to burying the scourge Obesity, but the theory is still missing some final element, some completing piece that would explain the rule and the exception, and with that last piece it might be possible to make the diet work for everyone...

\n

If we had a large-sized rationalist community going that had solved the group effort coordination problem, those of us who are metabolically disprivileged would be pooling resources and launching our own controlled study of this thing, and entering every conceivable variable we could report into the matrix, and hiring a professional biochemist to analyze our metabolisms before and afterward, and we would cryopreserve anyone who got in our way.  You have no idea.

\n

(Warning:  Do not try the Shangri-La diet at home based on only the info here, there's a couple of caveats and I can't think offhand of a good complete description on the 'Net.  Also you might want to reconsider the recommendation to use fructose in the sugar water route, because IIRC fructose has been shown to contribute to insulin resistance or something like that - sucrose may actually make more sense, despite the higher glycemic index.)

\n

Continued in Akrasia and Shangri-La

" } }, { "_id": "gt4D8WXp96Aq8knCW", "title": "Spay or Neuter Your Irrationalities", "pageUrl": "https://www.lesswrong.com/posts/gt4D8WXp96Aq8knCW/spay-or-neuter-your-irrationalities", "postedAt": "2009-04-10T20:08:45.734Z", "baseScore": 3, "voteCount": 14, "commentCount": 8, "url": null, "contents": { "documentId": "gt4D8WXp96Aq8knCW", "html": "

No human person has, so far as I am aware, managed to eradicate all irrationalities from their thinking.  They are unavoidable, and this is particularly distressing when the irrationalities are lurking in your brain like rats in the walls and you don't know what they are.  Of course you don't know what they are - they are irrationalities, and you are a rationalist, so if you had identified them, they would be dying (quickly or slowly, but dying).  It's only natural for someone committed to rationality to want to indiscriminately exterminate the threats to the unattainable goal.

But are they all worth getting rid of?

It is my opinion that they are not: some irrationalities are small and cute and neutered, and can be confined and kept where you can see them, like pet gerbils instead of rats in the walls.

I'll give you an example: I use iTunes for my music organization and listening.  iTunes automatically records the number of times I have listened to each song and displays it.  Within a given playlist, I irrationally believe that all of these numbers have to match: if I have listened to the theme from The Phantom of the Opera exactly fifty-two times, I have to also have listened to \"The Music of the Night\" exactly fifty-two times, no matter how much I want to listen to the theme on repeat all afternoon.

Does this make any sense?  No, of course not, but it isn't worth my time to get rid of it.  It is small - it affects only a tiny corner of my life, and if it starts to get in the way of my musical preferences, I can cheat it by resetting play counts or fast-forwarding through songs (like I could get around the chore of feeding a gerbil with an automatic food dispenser).  It is \"cute\" - I can use it as a conversation starter and people generally find it a mildly entertaining quirk, not evidence that I need psychiatric help.  I have it metaphorically neutered - since I make no effort to suppress it, I'm able to recognize the various emotional reactions that satisfying or frustrating this irrational preference creates, and I would notice them if they cropped up anywhere else, where I would deal with them appropriately.  I also don't encourage it to memetically spread to others.  I keep it where I can see it - I make note of when I take actions to satisfy my irrational preference, and acknowledge in so doing that it's my reason and my reason doesn't make much sense.

In short, I treat it like a pet.  If it started being more trouble than it would be to root it out of my brain, I'd go through the necessary desensitization, just as I would get rid of a pet gerbil that bit me or kept me up at night even if this meant two hours each way on the bus to the Humane Society.

" } }, { "_id": "TyNtW86j5CSbjQnAT", "title": "That Crisis thing seems pretty useful", "pageUrl": "https://www.lesswrong.com/posts/TyNtW86j5CSbjQnAT/that-crisis-thing-seems-pretty-useful", "postedAt": "2009-04-10T17:10:32.945Z", "baseScore": 21, "voteCount": 17, "commentCount": 7, "url": null, "contents": { "documentId": "TyNtW86j5CSbjQnAT", "html": "

Since there's been much questioning of late over \"What good is advanced rationality in the real world?\", I'd like to remind everyone that it isn't all about post-doctoral-level reductionism.

\n

In particular, as a technique that seems like it ought to be useful in the real world, I exhibit the highly advanced, difficult, multi-component Crisis of Faith aka Reacting To The Damn Evidence aka Actually Changing Your Mind.

\n

Scanning through this post and the list of sub-posts at the bottom (EDIT: copied to below the fold) should certainly qualify it as \"extreme rationality\" or \"advanced rationality\" or \"x-rationality\" or \"Bayescraft\" or whatever you want to distinguish from \"traditional rationality as passed down from Richard Feynman\".

\n

An actual sit-down-for-an-hour Crisis of Faith might be something you'd only use once or twice in every year or two, but on important occasions.  And the components are often things that you could practice day in and day out, also to positive effect.

\n

I think this is the strongest foot that I could put forward for \"real-world\" uses of my essays.  (Anyone care to nominate an alternative?)

\n

Below the fold, I copy and paste the list of components from the original post, so that we have them at hand:

\n" } }, { "_id": "riaLsnntuxkPnWF6H", "title": "How theism works", "pageUrl": "https://www.lesswrong.com/posts/riaLsnntuxkPnWF6H/how-theism-works", "postedAt": "2009-04-10T16:16:20.261Z", "baseScore": 60, "voteCount": 67, "commentCount": 39, "url": null, "contents": { "documentId": "riaLsnntuxkPnWF6H", "html": "

There's a reason we can all agree on theism as a good source of examples of irrationality.

\n

Let's divide the factors that lead to memetic success into two classes: those based on corresponding to evidence, and those detached from evidence. If we imagine a two-dimensional scattergram of memes rated against these two criteria, we can define a frontier of maximum success, along which any idea can only gain in one criterion by losing on the other. This doesn't imply that evidential and non-evidential success are opposed in general; just that whatever shape memespace has, it will have a convex hull that can be drawn across this border.

\n

Religion is what you get when you push totally for non-evidential memetic success. All ties to reality are essentially cut. As a result, all the other dials can be pushed up to 11. God is not just wise, nice, and powerful - he is all-knowing, omnibenevolent, and omnipotent. Heaven and Hell are not just pleasant and unpleasant places you can spend a long time in - they are the very best possible and the very worst possible experiences, and for all eternity. Religion doesn't just make people better; it is the sole source of morality. And so on; because all of these things happen \"offstage\", there's no contradictory evidence when you turn the dials up, so of course they'll end up on the highest settings.

\n

This freedom is theism's defining characteristic. Even the most stupid pseudoscience is to some extent about \"evidence\": people wouldn't believe in it if they didn't think they had evidence for it, though we now understand the cognitive biases and other effects that lead them to think so. That's why there are no homeopathic cures for amputation.

\n

I agree with other commentators that the drug war is the other real world idea that I would attack here without fear of contradiction, but I would still say that drug prohibition is a model of sanity compared to theism. Theism really is the maddest thing you can believe without being considered mad.

\n

Footnote: This was originally a comment on The uniquely awful example of theism, but I was encouraged to make a top-level post from it. I should point out that there are issues with my dividing line between \"evidence-based\" and \"not evidence-based\", since you could argue that mathematics is not evidence-based and nor is the belief that evidence is a good way to learn about the world; however, it should be clear that neither of these has the freedom that religion has to make up whatever will make people most likely to spread the word.

" } }, { "_id": "6NvbSwuSAooQxxf7f", "title": "Beware of Other-Optimizing", "pageUrl": "https://www.lesswrong.com/posts/6NvbSwuSAooQxxf7f/beware-of-other-optimizing", "postedAt": "2009-04-10T01:58:49.232Z", "baseScore": 192, "voteCount": 158, "commentCount": 119, "url": null, "contents": { "documentId": "6NvbSwuSAooQxxf7f", "html": "

I've noticed a serious problem in which aspiring rationalists vastly overestimate their ability to optimize other people's lives.  And I think I have some idea of how the problem arises.

\n

You read nineteen different webpages advising you about personal improvement—productivity, dieting, saving money.  And the writers all sound bright and enthusiastic about Their Method, they tell tales of how it worked for them and promise amazing results...

\n

But most of the advice rings so false as to not even seem worth considering.  So you sigh, mournfully pondering the wild, childish enthusiasm that people can seem to work up for just about anything, no matter how silly.  Pieces of advice #4 and #15 sound interesting, and you try them, but... they don't... quite... well, it fails miserably.  The advice was wrong, or you couldn't do it, and either way you're not any better off.

\n

And then you read the twentieth piece of advice—or even more, you discover a twentieth method that wasn't in any of the pages—and STARS ABOVE IT ACTUALLY WORKS THIS TIME.

\n

At long, long last you have discovered the real way, the right way, the way that actually works.  And when someone else gets into the sort of trouble you used to have—well, this time you know how to help them.  You can save them all the trouble of reading through nineteen useless pieces of advice and skip directly to the correct answer.  As an aspiring rationalist you've already learned that most people don't listen, and you usually don't bother—but this person is a friend, someone you know, someone you trust and respect to listen.

\n

And so you put a comradely hand on their shoulder, look them straight in the eyes, and tell them how to do it.

\n

I, personally, get quite a lot of this.  Because you see... when you've discovered the way that really works... well, you know better by now than to run out and tell your friends and family.  But you've got to try telling Eliezer Yudkowsky.  He needs it, and there's a pretty good chance that he'll understand.

\n

It actually did take me a while to understand.  One of the critical events was when someone on the Board of the Institute Which May Not Be Named, told me that I didn't need a salary increase to keep up with inflation—because I could be spending substantially less money on food if I used an online coupon service.  And I believed this, because it was a friend I trusted, and it was delivered in a tone of such confidence.  So my girlfriend started trying to use the service, and a couple of weeks later she gave up.

\n

Now here's the thing: if I'd run across exactly the same advice about using coupons on some blog somewhere, I probably wouldn't even have paid much attention, just read it and moved on.  Even if it were written by Scott Aaronson or some similar person known to be intelligent, I still would have read it and moved on.  But because it was delivered to me personally, by a friend who I knew, my brain processed it differently—as though I were being told the secret; and that indeed is the tone in which it was told to me.  And it was something of a delayed reaction to realize that I'd simply been told, as personal advice, what otherwise would have been just a blog post somewhere; no more and no less likely to work for me, than a productivity blog post written by any other intelligent person.

\n

And because I have encountered a great many people trying to optimize me, I can attest that the advice I get is as wide-ranging as the productivity blogosphere.  But others don't see this plethora of productivity advice as indicating that people are diverse in which advice works for them.  Instead they see a lot of obviously wrong poor advice.  And then they finally discover the right way—the way that works, unlike all those other blog posts that don't work—and then, quite often, they decide to use it to optimize Eliezer Yudkowsky.

\n

Don't get me wrong.  Sometimes the advice is helpful.  Sometimes it works.  \"Stuck In The Middle With Bruce\"—that resonated, for me.  It may prove to be the most helpful thing I've read on the new Less Wrong so far, though that has yet to be determined.

\n

It's just that your earnest personal advice, that amazing thing you've found to actually work by golly, is no more and no less likely to work for me than a random personal improvement blog post written by an intelligent author is likely to work for you.

\n

\"Different things work for different people.\"  That sentence may give you a squicky feeling; I know it gives me one.  Because this sentence is a tool wielded by Dark Side Epistemology to shield from criticism, used in a way closely akin to \"Different things are true for different people\" (which is simply false).

\n

But until you grasp the laws that are near-universal generalizations, sometimes you end up messing around with surface tricks that work for one person and not another, without your understanding why, because you don't know the general laws that would dictate what works for who.  And the best you can do is remember that, and be willing to take \"No\" for an answer.

\n

You especially had better be willing to take \"No\" for an answer, if you have power over the Other.  Power is, in general, a very dangerous thing, which is tremendously easy to abuse, without your being aware that you're abusing it.  There are things you can do to prevent yourself from abusing power, but you have to actually do them or they don't work.  There was a post on OB on how being in a position of power has been shown to decrease our ability to empathize with and understand the other, though I can't seem to locate it now.  I have seen a rationalist who did not think he had power, and so did not think he needed to be cautious, who was amazed to learn that he might be feared...

\n

It's even worse when their discovery that works for them, requires a little willpower.  Then if you say it doesn't work for you, the answer is clear and obvious: you're just being lazy, and they need to exert some pressure on you to get you to do the correct thing, the advice they've found that actually works.

\n

Sometimes—I suppose—people are being lazy.  But be very, very, very careful before you assume that's the case and wield power over others to \"get them moving\".  Bosses who can tell when something actually is in your capacity if you're a little more motivated, without it burning you out or making your life incredibly painful—these are the bosses who are a pleasure to work under.  That ability is extremely rare, and the bosses who have it are worth their weight in silver.  It's a high-level interpersonal technique that most people do not have.  I surely don't have it.  Do not assume you have it, because your intentions are good.  Do not assume you have it, because you'd never do anything to others that you didn't want done to yourself.  Do not assume you have it, because no one has ever complained to you.  Maybe they're just scared.  That rationalist of whom I spoke—who did not think he held power and threat, though it was certainly obvious enough to me—he did not realize that anyone could be scared of him.

\n

Be careful even when you hold leverage, when you hold an important decision in your hand, or a threat, or something that the other person needs, and all of a sudden the temptation to optimize them seems overwhelming.

\n

Consider, if you would, that Ayn Rand's whole reign of terror over Objectivists can be seen in just this light—that she found herself with power and leverage, and could not resist the temptation to optimize.

\n

We underestimate the distance between ourselves and others.  Not just inferential distance, but distances of temperament and ability, distances of situation and resource, distances of unspoken knowledge and unnoticed skills and luck, distances of interior landscape.

\n

Even I am often surprised to find that X, which worked so well for me, doesn't work for someone else.  But with so many others having tried to optimize me, I can at least recognize distance when I'm hit over the head with it.

\n

Maybe being pushed on does work... for you.  Maybe you don't get sick to the stomach when someone with power over you starts helpfully trying to reorganize your life the correct way.  I don't know what makes you tick.  In the realm of willpower and akrasia and productivity, as in other realms, I don't know the generalizations deep enough to hold almost always.  I don't possess the deep keys that would tell me when and why and for who a technique works or doesn't work.  All I can do is be willing to accept it, when someone tells me it doesn't work... and go on looking for the deeper generalizations that will hold everywhere, the deeper laws governing both the rule and the exception, waiting to be found, someday.

" } }, { "_id": "dLL6yzZ3WKn8KaSC3", "title": "The uniquely awful example of theism", "pageUrl": "https://www.lesswrong.com/posts/dLL6yzZ3WKn8KaSC3/the-uniquely-awful-example-of-theism", "postedAt": "2009-04-10T00:30:08.149Z", "baseScore": 46, "voteCount": 53, "commentCount": 175, "url": null, "contents": { "documentId": "dLL6yzZ3WKn8KaSC3", "html": "

When an LW contributor is in need of an example of something that (1) is plainly, uncontroversially (here on LW, at least) very wrong but (2) an otherwise reasonable person might get lured into believing by dint of inadequate epistemic hygiene, there seems to be only one example that everyone reaches for: belief in God. (Of course there are different sorts of god-belief, but I don't think that makes it count as more than one example.) Eliezer is particularly fond of this trope, but he's not alone.

\n

How odd that there should be exactly one example. How convenient that there is one at all! How strange that there isn't more than one!

\n

In the population at large (even the smarter parts of it) god-belief is sufficiently widespread that using it as a canonical example of irrationality would run the risk of annoying enough of your audience to be counterproductive. Not here, apparently. Perhaps we-here-on-LW are just better reasoners than everyone else ... but then, again, isn't it strange that there aren't a bunch of other popular beliefs that we've all seen through? In the realm of politics or economics, for instance, surely there ought to be some.

\n

Also: it doesn't seem to me that I'm that much better a thinker than I was a few years ago when (alas) I was a theist; nor does it seem to me that everyone on LW is substantially better at thinking than I am; which makes it hard for me to believe that there's a certain level of rationality that almost everyone here has attained, and that makes theism vanishingly rare.

\n

I offer the following uncomfortable conjecture: We all want to find (and advertise) things that our superior rationality has freed us from, or kept us free from. (Because the idea that Rationality Just Isn't That Great is disagreeable when one has invested time and/or effort and/or identity in rationality, and because we want to look impressive.) We observe our own atheism, and that everyone else here seems to be an atheist too, and not unnaturally we conclude that we've found such a thing. But in fact (I conjecture) LW is so full of atheists not only because atheism is more rational than theism (note for the avoidance of doubt: yes, I agree that atheism is more rational than theism, at least for people in our epistemic situation) but also because

[bulleted list from the original post omitted in this export]

Does any of this matter? I think it might, because

[bulleted list from the original post omitted in this export]

So. Is theism really a uniquely awful example? If so, then surely there must be insights aplenty to be had from seeing what makes it so unique. If not, though ... Anyone got any other examples of things just about everyone here has seen the folly of, even though they're widespread among otherwise-smart people? And, if not, what shall we do about it?

" } }, { "_id": "B3b29FJboqnANJRDz", "title": "Extreme Rationality: It Could Be Great", "pageUrl": "https://www.lesswrong.com/posts/B3b29FJboqnANJRDz/extreme-rationality-it-could-be-great", "postedAt": "2009-04-09T22:00:13.538Z", "baseScore": 16, "voteCount": 21, "commentCount": 12, "url": null, "contents": { "documentId": "B3b29FJboqnANJRDz", "html": "

Reply to: Extreme Rationality: It's Not That Great

\n

I considered making this into a comment on Yvain's last post, but I'd like to redirect the discussion slightly. Yvain's warning is important, but we're left with the question of how to turn the current state of the art in rationality into something great. I think we are all on the same page that more is possible. Now we just need to know how to get there.

\n

Even though Yvain disapproved of Eliezer's recent post on day jobs, I thought the two shared a common thread: rationalists should be careful about staying in Far-mode too long. I took Eliezer's point to be more about well-developed rationalist communities, and Yvain's to be about our rag-tag band of aspirants, but I think they are both speaking to the same issue. All of this has to be for a purpose,  and we can't become ungrounded.

\n

Near- and Far-mode have to be balanced. This shouldn't be surprising, because in this context, Near and Far roughly equate to applied and theoretical work. The two intermingle and build off one another. The history of math and physics is filled with paired problems: calculus and dynamics, Fourier series and heat distribution, least-squares and astronomy, etc. Real world problems need theory to be solved, but theory needs problems to motivate and test it.

\n

My guess is that any large subject develops through the following iterative alternation between Near and Far:

\n

    F1. Develop general theory.
    F2. Refine and check for consistency and correctness.
    F3. Consolidate theory.
    N1. Apply existing theory to problems.
    N2. Evaluate successes and failures.
    GOTO F1.

\n

This looks like a close relative of our trusty friend, the scientific method, and is similarly idealized. In terms of this process, I think the Less Wrong community is between F2 and F3. We have lots of phrases, techniques, and standard examples lying around, and work has been done on testing them for conceptual soundness. The wiki represents an attempt to begin consolidating this information so we can move on to more applied domains.

\n

Assuming this process is productive, how long will it take to produce something useful? If Newton invented undergraduate material in math and physics, as is often quipped, I think existing x-rationality theory and techniques are on a JR High level, at best. I'm not surprised x-rationality hasn't produced clear benefits yet. The commonly agreed upon rule of thumb is that it takes about 10 years or 10,000 hours of practice to become an expert in a subject. X-rationality as a subject is around 30 years old, and OB was only founded in 2006. Most of the current experts should be coming from fields like psychology, game theory, logic, physics, economics, or AI, where the 10,000 hours were acquired indirectly over a career. I think rationality theory will count as a success once someone can acquire PhD level expertise in rationality by age 25 or 30 like in other subjects and can spend a career only on these topics.

\n

I'd also like to reemphasize the comments of pjeby and hegemonicon, in conjunction with Yvain, on consciously using x-rationality. I know I need to do more work on integrating OB concepts into my everyday life. I don't think the material referenced in OB is going to produce many visible benefits by itself; I'd bet those concepts will have to come naturally before anything really useful could be learned, much less created. For example, if someone has to consciously think about what the Cartesian plane represents or what a function is, they are going to have a difficult time learning calculus.

\n

I don't think the current lack of success is too much cause for worry. This is a long-term project, and I'd be suspicious if breakthroughs came too easily. As long as this community stays grounded, and can move between theory and application, I remain hopeful.

\n

Is my assessment of x-rationality's long term prospects correct? How does my vision accord with everyone else's?

" } }, { "_id": "WMYHEcs5tyFESkjsr", "title": "Silver Chairs, Paternalism, and Akrasia", "pageUrl": "https://www.lesswrong.com/posts/WMYHEcs5tyFESkjsr/silver-chairs-paternalism-and-akrasia", "postedAt": "2009-04-09T21:24:19.367Z", "baseScore": 34, "voteCount": 42, "commentCount": 40, "url": null, "contents": { "documentId": "WMYHEcs5tyFESkjsr", "html": "

Inspired in part by Robin Hanson's excellent article on paternalism a while back, and in response to the various akrasia posts.

\n

In C.S. Lewis's fourth Narnia book, The Silver Chair, the protagonists (two children and a Marsh-wiggle) are faced with a dilemma regarding the title object.  To wit, they meet an eloquent and quite sane-seeming young man, who after a while reveals that he has a mental disorder: for an hour every night, he loses his mind and must be restrained in the Silver Chair; and if he were to be released during that time he would become a giant, evil snake (it is a fantasy novel, after all).  The heroes determine to witness this, and the young man calmly straps himself into the chair.  After a few moments, a change comes over him and he begins struggling and begging for release, claiming the other personality is the false one.  The children are nonplussed: which person(ality) should they believe?  And (a separate question) which should they help?

\n

In the book this dilemma is resolved by means of a cheat*, but we in real life have no such thing.  We do, however, have an abundance of Silver Chairs, in the form of psychotropic drugs from alcohol to hallucinogens to fancy antidepressants and antipsychotics. Of course not every person who takes such drugs is in a Silver Chair situation, but consider for instance the alcoholic who insists he doesn't have a problem, or the paranoid schizophrenic who fears that any drug is an attempt to poison him.  Now we as observers or authorities may know from statistics or even from their personal histories that the detoxxed/drugged-up versions of these people would be happy for the change and not want to return to the previous state, but does that mean it's right (in a paternalistic sense, meaning for their own good) to force them towards what we call mental health?

\n

I would say it is not, that our preference for one side of the Silver Chair over the other is simple bias in favor of mental states similar to our own.  From our places near normality we can't imagine wanting to be in these bizarre mental states, so we assume that the people who are in them don't really want to be either.  They might claim to, sure, but why believe them?  After all, they're crazy.  For two amusing thought experiments in this line which have been considered in detail by others, let the bizarre mental state in question take the values \"religious belief\", and \"sense of humor\".  For a sobering real-world application, consider the fate of homosexuals until a few decades ago. And then think about how, as Eliezer has said, the future like the past will be filled with people whose values we would find abhorrent.

\n

\n

This idea has internal relevance as well.  You could easily consider, for instance, the self introspecting at home who wants to lose weight and the self in a restaurant who wants to order cheesecake as two sides of a Silver Chair**.  And I think that view is more helpful than just calling it \"akrasia\", because it presents the situation as two aspects of your personality which happen to want different things, instead of some \"weakness\" which is interfering with your \"true will\". Then instead of castigating yourself for weakness of will, you merely think \"I suppose my desire for cheesecake was stronger than I anticipated.  When I return to a state where my desire to lose weight is dominant, I shall have to make stricter plans.\"

\n

Again, I see a bias: we think that the desires (and in fact the entire mental state) which we have while, e.g., sitting alone calmly in a quiet room are the \"true\" ones, or even the \"right\" ones in some moral sense, and that feelings or thoughts we have at other times are \"lesser\" or akrasic, simply because at the time when we're introspecting we can't feel the power of those other situations, and of course we rightly privilege our calm-quiet thinking for its prowess in answering objective questions.  We spend (presumably) the bulk of our lives not engaged in quiet introspection, so why should we defer to what our desires are then?

\n

Of course, one can always say \"When I calmly introspect and plan things in advance, I end up happier/more successful than if I were to give in to my impulses\".  To which I would respond \"That's fine.  If happiness or success is what you want, and that method is effective, then go for it.\"  My point is that, just as you shouldn't condemn someone else for not conforming to the desires or thought patterns you think they ought to have, much less force them to conform, neither should you condemn yourself.  Your utils come from doing what you want, not being happy or successful, or finding the most efficient way to satisfy as many of your desires as possible, or anything else.

\n

This idea also seems to have relevance to the topic-which-shall-not-be-named, but I guess this isn't the time for that.

\n

 

\n

 

\n

* Specifically, the chairbound personality invokes the Holy Name of God, which breaks the symmetry. Not a solution many readers of this site would go for, I think.

\n

** That phrasing is admittedly quite awkward; I guess the two sides would be \"(sitting) in\" and \"out\" of the chair.

\n

† I once read that brain scans show that one cannot remember the sensation of sex/orgasms in the same way one can remember other more ordinary sensations.  That doesn't jibe with my personal experience, but if true I think it gives interesting evidence.  A related phenomenon sometimes mentioned by poets (and which I have experienced) is that as you fall in love with someone, you actually find it harder to remember what they look like.

\n

‡ One can also object that impulsive desires are incoherent: e.g. hyperbolic discounting.  But I would say that incoherence is a property of epistemic systems, i.e. things that must be explained by other things.  A desire doesn't need to be explained by anything or agree with anything; it merely is.  And paradoxes of wanting both X and !X don't seem to arise (or if they do, you can always kick in some rationality at that point).

" } }, { "_id": "5epdtcyuDiuRFFJk5", "title": "Secret Identities vs. Groupthink", "pageUrl": "https://www.lesswrong.com/posts/5epdtcyuDiuRFFJk5/secret-identities-vs-groupthink", "postedAt": "2009-04-09T20:26:21.052Z", "baseScore": 24, "voteCount": 26, "commentCount": 12, "url": null, "contents": { "documentId": "5epdtcyuDiuRFFJk5", "html": "

From Marginal Revolution:

\n
\n

A new meta-analysis (pdf) of 72 studies, involving 4,795 groups and over 17,000 individuals has shown that groups tend to spend most of their time discussing the information shared by members, which is therefore redundant, rather than discussing information known only to one or a minority of members. This is important because those groups that do share unique information tend to make better decisions.

Another important factor is how much group members talk to each other. Ironically, Jessica Mesmer-Magnus and Leslie DeChurch found that groups that talked more tended to share less unique information.

\n
\n

A result that shouldn't surprise this group. I've noticed obvious attempts to avoid this tendency in Less Wrong (for instance, Yvain's avoiding further Christian-bashing). We've had at least one post asking specifically for information that was unique. And I don't know about the rest of you, but I've already had plenty of new food for thought on Less Wrong.

\n

But are we tapping the full potential? Each of us has, or should have, a secret identity. The nice thing about those identities is that they give us access to unique knowledge. We've been asked (though I can't find the link) to avoid large posts applying learned rationality techniques to controversial topics, for fear of killing minds, which seems reasonable to me. Is there a better way to allow discipline-specific knowledge to be shared among Less Wrong readers without setting off our politicosensors? It seems beneficial not only for improved rationality training, but also to enhance our secret identities. For instance, I, as an economist-in-training, would like to know not just what an anthropologist can tell me, but what a Bayesian-trained anthropologist can tell me.

\n

\"\"

" } }, { "_id": "Zrg8zohTAJKGAo5if", "title": "\"Playing to Win\"", "pageUrl": "https://www.lesswrong.com/posts/Zrg8zohTAJKGAo5if/playing-to-win", "postedAt": "2009-04-09T15:45:07.922Z", "baseScore": 29, "voteCount": 20, "commentCount": 16, "url": null, "contents": { "documentId": "Zrg8zohTAJKGAo5if", "html": "

John F. Rizzo is an expert on losing. However, if you want to win, you would do better to seek advice from an expert on winning.

\n

David Sirlin is such an expert, a renowned Street Fighter player and game designer. He wrote a series of articles with the title \"Playing to Win\", about playing competitive games at a high level, which were so popular that he expanded them into a book. You can either read it for free online (donations are appreciated) or purchase a dead tree edition.

\n

Any further summary would be redundant when you could simply read Sirlin's own words, so here is the link:

\n

http://www.sirlin.net/ptw

" } }, { "_id": "LgavAYtzFQZKg95WC", "title": "Extreme Rationality: It's Not That Great", "pageUrl": "https://www.lesswrong.com/posts/LgavAYtzFQZKg95WC/extreme-rationality-it-s-not-that-great", "postedAt": "2009-04-09T02:44:20.056Z", "baseScore": 252, "voteCount": 244, "commentCount": 281, "url": null, "contents": { "documentId": "LgavAYtzFQZKg95WC", "html": "

Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities

Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

For this post, I will be using \"extreme rationality\" or \"x-rationality\" in the sense of \"techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training.\" It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.

And for this post, I use \"benefits\" or \"practical benefits\" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

So, what are these \"benefits\" of \"x-rationality\"?

A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:

\n
\n

I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.

\n
\n

There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact as far as I know the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?

This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.

\n

\n

One factor we have to once again come back to is akrasia2. I find akrasia in myself and others to be the most important limiting factor to our success. Think of that phrase \"limiting factor\" formally, the way you'd think of the limiting reagent in chemistry. When there's a limiting reagent, it doesn't matter how much more of the other reagents you add, the reaction's not going to make any more product. Rational decisions are practically useless without the willpower to carry them out. If our limiting reagent is willpower and not rationality, throwing truckloads of rationality into our brains isn't going to increase success very much.

This is a very large part of the story, but not the whole story. If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

And - when I was young, I used to watch The Journey of Allen Strange on Nickelodeon. It was a children's show about this alien who came to Earth and lived with these kids. I remember one scene where Allen the Alien was watching the kids play pool. \"That's amazing,\" Allen told them. \"I could never calculate differential equations in my head that quickly.\" The kids had to convince him that \"it's in the arm, not the head\" - that even though the movement of the balls is governed by differential equations, humans don't actually calculate the equations each time they play. They just move their arm in a way that feels right. If Allen had been smarter, he could have explained that the kids were doing some very impressive mathematics on a subconscious level that produced their arm's perception of \"feeling right\". But the kids' point still stands; even though in theory explicit mathematics will produce better results than eyeballing it, in practice you can't become a good pool player just by studying calculus.

A lot of human rationality follows the same pattern. Isaac Newton is frequently named as a guy who knew no formal theories of science or rationality, who was hopelessly irrational in his philosophical beliefs and his personal life, but who is still widely and justifiably considered the greatest scientist who ever lived. Would Newton have gone even further if he'd known Bayes theory? Probably it would've been like telling the world pool champion to try using more calculus in his shots: not a pretty sight.

\n

Yes, yes, beisutsukai should be able to develop quantum gravity in a month and so on. But until someone on Less Wrong actually goes and does it, that story sounds a lot like when Alfred Korzybski claimed that World War Two could have been prevented if everyone had just used more General Semantics.

\n

And then there's just plain noise. Your success in the world depends on things ranging from your hairstyle to your height to your social skills to your IQ score to cognitive constructs psychologists don't even have names for yet. X-Rationality can help you succeed. But so can excellent fashion sense. It's not clear in real-world terms that x-rationality has more of an effect than fashion. And don't dismiss that with \"A good x-rationalist will know if fashion is important, and study fashion.\" A good normal rationalist could do that too; it's not a specific advantage of x-rationalism, just of having a general rational outlook. And having a general rational outlook, as I mentioned before, is limited in its effectiveness by poor application and akrasia.

\n

I no longer believe mastering all these Overcoming Bias and Less Wrong techniques will turn me into Anasûrimbor Kellhus or John Galt. I no longer even believe mastering all these Overcoming Bias techniques will turn me into Eliezer Yudkowsky (who, as his writings from 2001 indicate, had developed his characteristic level of awesomeness before he became interested in x-rationality at all)3. I think it may help me succeed in life a little, but I think the correlation between x-rationality and success is probably closer to 0.1 than to 1. Maybe 0.2 in some businesses like finance, but people in finance tend to know this and use specially developed x-rationalist techniques on the job already without making it a lifestyle commitment. I think it was primarily a Happy Death Spiral around how wonderfully super-awesome x-rationality was that made me once think otherwise.

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

\n

...

\n

...

\n

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments. I propose one.

For the next time period - a week, a month, whatever - take special note of every decision you make. By \"decision\", I don't mean the decision to get up in the morning, I mean the sort that's made on a conscious level and requires at least a few seconds' serious thought. Make a tick mark, literal or mental, so you can count how many of these there are.

Then note whether you make that decision rationally. If yes, also record whether you made that decision x-rationally. I don't just mean you spent a brief second thinking about whether any biases might have affected your choice. I mean one where you think there's a serious (let's arbitrarily say 33%) chance that using x-rationality instead of normal rationality actually changed the result of your decision.

Finally, note whether, once you came to the rational conclusion, you actually followed it. This is not a trivial matter. For example, before writing this blog post I wondered briefly whether I should use the time studying instead, used normal (but not x-) rationality to determine that yes, I should, and then proceeded to write this anyway. And if you get that far, note whether your x-rational decisions tend to turn out particularly well.
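
If you want to run the proposed experiment a little more systematically, here is a minimal tally sketch (not part of the original post; the class and field names are invented) for logging each conscious decision and the three follow-up judgments described above:

    from dataclasses import dataclass

    @dataclass
    class DecisionLog:
        decisions: int = 0   # conscious decisions that took at least a few seconds' thought
        rational: int = 0    # decided using ordinary rationality
        x_rational: int = 0  # x-rationality plausibly (>= ~33% chance) changed the outcome
        followed: int = 0    # actually acted on the rational conclusion

        def record(self, was_rational=False, was_x_rational=False, was_followed=False):
            self.decisions += 1
            self.rational += was_rational
            self.x_rational += was_rational and was_x_rational
            self.followed += was_rational and was_followed

    log = DecisionLog()
    log.record(was_rational=True)  # a consciously rational decision, not followed through
    log.record()                   # a decision made on autopilot
    print(log)                     # DecisionLog(decisions=2, rational=1, x_rational=0, followed=0)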

This experiment seems easy to rig4; merely doing it should increase your level of conscious rational decisions quite a bit. And yet I have been trying it for the past few days, and the results have not been pretty. Not pretty at all. Not only do I make fewer conscious decisions than I thought, but the ones I do make I rarely apply even the slightest modicum of rationality to, and the ones I apply rationality to it's practically never x-rationality, and when I do apply everything I've got I don't seem to follow those decisions too consistently.

I'm not so great a rationalist anyway, and I may be especially bad at this. So I'm interested in hearing how different your results are. Just don't rig it. If you find yourself using x-rationality twenty times more often than you were when you weren't performing the experiment, you're rigging it, consciously or otherwise5.

Eliezer writes:

\n
\n

The novice goes astray and says, \"The Art failed me.\"
The master goes astray and says, \"I failed my Art.\"

\n
\n

Yet one way to fail your Art is to expect more of it than it can deliver. No matter how good a swimmer you are, you will not be able to cross the Pacific. This is not to say crossing the Pacific is impossible. It just means it will require a different sort of thinking than the one you've been using thus far. Perhaps there are developments of the Art of Rationality or its associated Arts that can turn us into a Kellhus or a Galt, but they will not be reached by trying to overcome biases really really hard.

\n

Footnotes:

\n

1: Specifically, reading Overcoming Bias convinced me to study evolutionary psychology in some depth, which has been useful in social situations. As far as I know. I'd probably be biased into thinking it had been even if it hadn't, because I like evo psych and it's very hard to measure.

\n

2: Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to \"kicking\" to our \"punching\". I'm not sure why he considers them to be the same Art rather than two related Arts.

\n

3: This is actually an important point. I think there are probably quite a few smart, successful people who develop an interest in x-rationality, but I can't think of any people who started out merely above-average, developed an interest in x-rationality, and then became smart and successful because of that x-rationality.

\n

4: This is a terribly controlled experiment, and the only way its data can be meaningfully interpreted at all is through what one of my professors called the \"ocular trauma test\" - when the data hits you between the eyes. If people claim they always follow their rational decisions, I think I will be more likely to interpret it as lack of enough cognitive self-consciousness to notice when they're doing something irrational than an honest lack of irrationality.

\n

5: In which case it will have ceased to be an experiment and become a technique instead. I've noticed this happening a lot over the past few days, and I may continue doing it.

" } }, { "_id": "FZaDFYbnRoHmde7F6", "title": "\"Stuck In The Middle With Bruce\"", "pageUrl": "https://www.lesswrong.com/posts/FZaDFYbnRoHmde7F6/stuck-in-the-middle-with-bruce", "postedAt": "2009-04-09T00:24:02.553Z", "baseScore": 105, "voteCount": 103, "commentCount": 101, "url": null, "contents": { "documentId": "FZaDFYbnRoHmde7F6", "html": "

I was somewhat disappointed to find a lack of Magic: the Gathering players on LessWrong when I asked about it in the off-topic thread. You see, competitive Magic is one of the best, most demanding rationality battlefields that I know about. Furthermore, Magic is discussed extensively on the Internet, and many articles in which people try to explain how to become a better Magic player are, essentially, describing how to become more rational: how to better learn from experience, make judgments from noisy data, and (yes) overcome biases that interfere with one's ability to make better decisions.

\n

Because people here don't play Magic, I can't simply link to those articles and say, \"Here. Go read.\" I have to put everything into context, because Magic jargon has become its own language, distinct from English. Think I'm kidding? I was able to follow match coverage written in French using nothing but my knowledge of Magic-ese and what I remembered from my high school Spanish classes. Instead of simply linking, in order to give you the full effect, I'd have to undertake a project equivalent to translating a work in a foreign language.

\n

So it is with great trepidation that I give you, untranslated, one of the \"classics\" of Magic literature.

\n

Stuck In The Middle With Bruce by John F. Rizzo.

\n

Now, John \"Friggin'\" Rizzo isn't one of the great Magic players. Far from it. He is, however, one of the great Magic writers, to the extent that the adjective \"great\" can be applied to someone who writes about Magic. His bizarre stream-of-consciousness writing style, personal stories, and strongly held opinions have made him a legend in the Magic community. \"Stuck in the Middle with Bruce\" is his most famous work, as incomprehensible as it may be to those who don't speak our language (and even to those that do).

\n

So, why am I choosing to direct you to this particular piece of writing? Well, although Rizzo doesn't know much about winning, he knows an awful lot about what causes people to lose, and that's the topic of this particular piece - people's need to lose.

\n

Does Bruce whisper into your ear, too?

" } }, { "_id": "4xEohME6vfXNmHmAz", "title": "Less Wrong IRC Meetup", "pageUrl": "https://www.lesswrong.com/posts/4xEohME6vfXNmHmAz/less-wrong-irc-meetup", "postedAt": "2009-04-08T22:16:28.907Z", "baseScore": 4, "voteCount": 8, "commentCount": 13, "url": null, "contents": { "documentId": "4xEohME6vfXNmHmAz", "html": "

Less Wrong will be having a meetup on Saturday at 7pm UTC (convert to other time zones), in the #lesswrong IRC channel on Freenode. If all goes well, this will be a recurring event. If you haven't used IRC before, Mibbit provides a web-based client you can use.

\n

We may do some Paranoid Debating. Discuss rules and procedures here. A few people should bring questions, but avoid looking at the answers if you can. Depending on how many people show up, we may need to break into multiple groups. Once we've finalized the rules and done it a few times, I (or someone else) can write a bot to assign roles and keep score.

\n

(Edit: Downgraded Paranoid Debating from being the purpose of the meetup to being a likely activity.)

" } }, { "_id": "dz3Mmr2Cykz6RRfhK", "title": "Rationality, Cryonics and Pascal's Wager", "pageUrl": "https://www.lesswrong.com/posts/dz3Mmr2Cykz6RRfhK/rationality-cryonics-and-pascal-s-wager", "postedAt": "2009-04-08T20:28:48.644Z", "baseScore": 18, "voteCount": 22, "commentCount": 58, "url": null, "contents": { "documentId": "dz3Mmr2Cykz6RRfhK", "html": " \n

This has been covered by Eliezer on OB, but I think the debate will work better with the LW voted commenting system, and I hope I can add something to the OB debate, which I feel left the spectre of Pascalian religious apology clinically dead but not quite information theoretically dead. Anna Salamon writes:

\n
\n

The “isn’t that like Pascal’s wager?” response is plausibly an instance of dark side epistemology, and one that affects many aspiring rationalists.

\n

Many of us came up against the Pascal’s wager argument at some point before we gained much rationality skill, disliked the conclusion, and hunted around for some means of disagreeing with its reasoning. The overcoming bias thread discussing Pascal’s wager strikes me as including a fair number of fallacious comments aimed at finding some rationale, any rationale, for dismissing Pascal’s wager.

\n
\n

This really got me worried: do I really rationally believe in the efficacy of cryonics and not of religion? Or did I write the bottom line first and then start thinking of justifications?

\n

Of course, it is easy to write a post justifying cryonics in a way that shuns religion. That's what everyone wants to hear on this forum! What is hard is doing it in a way that ensures you're not just writing even more justification with no chance of retracting the bottom line. I hope that with this post I have succeeded in burying the Pascalian attack on cryonics for good; and in removing a little more ignorance about my true values.

\n


\n

\n

To me, the justification for wanting to be cryopreserved is that there is, in fact, a good chance (more than the chance of rolling a 5 or a 6 on a six sided die)1 that I will be revived into a very nice world indeed, and that the chance of being revived into a hell I cannot escape from is less than or equal to this (I am a risk taker). How sensitive is this to the expected goodness and length-in-time of the utopia I wake up in? If the utopia is as good as Iain M Banks' culture, I'd still be interested in spending 5% of my income a year and 5% of my time getting frozen if the probability was around about the level of rolling two consecutive sixes.

\n

Does making the outcome better change things? Suppose we take the culture and \"upgrade it\" by fulfilling all of my fantasies:  The Banksian utopia I have described is analogous to the utopia of the tired peasant compared to what is possible. An even better utopia which appeals to me on an intellectual and subtly sentimental level would involve continued personal growth towards experiences beyond raw peak experience as I know it today. This perhaps pushes me to tolerating probabilities around the four sixes level, (1/(6*6*6*6) ~ 1/1000) but no further. For me this probability feels like \"just a little bit less unlikely than impossible\". 

\n

Now, how does this bear on Pascal's wager? Well, I just don't register long-term life outcomes that happen with a probability of less than one in a thousand. End of story! Heaven *could not be good enough* and hell *could not be bad enough* to make it matter to me, and I can be fairly sure about this because I have just visualized a plausible heaven that I actually \"believe in\".

\n

Now what is my actual probability estimate of Cryonics working? Robin talks about breaking it down into a series of events and estimating their conditional probabilities. My breakdown of the probability of a successful outcome if you die right now is:

\n
    \n
  1. The probability that human civilization will survive into the sufficiently far future (my estimate: 50%)
  2. The probability that you get cryopreserved rather than autopsied or shot in the head, and you get cooled down sufficiently quickly (my estimate: 80%, though this will improve)
  3. The probability that cryonics preserves appropriate brain structure (my estimate: 75%)
  4. The probability that you don't get destroyed whilst frozen, for example by incompetent financial management of cryonics companies (my estimate: 80%)
  5. The probability that someone will revive you into a pleasant society conditional upon the above (my estimate: 95%)
\n

Yielding a disappointingly low probability of 0.228. [I expect this to improve to ~0.4 by the time I get old enough for it to be a personal consideration.] I don't think that one could be any more optimistic than the above. But this probability is tantalizing: Enough to get me very excited about all those peak experiences and growth that I described above, though it probably won't happen. It is roughly the probability of tossing a coin twice and getting heads both times.
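
As a quick check on the arithmetic (a small sketch, not part of the original post; the labels are just shorthand for the five stages listed above), multiplying the estimates does give roughly 0.228, slightly below the one-in-four chance of two heads in a row:

    import math

    # The author's stage estimates, in the order listed above.
    stages = {
        'civilization survives': 0.50,
        'preserved and cooled quickly': 0.80,
        'brain structure preserved': 0.75,
        'not destroyed while frozen': 0.80,
        'revived into a pleasant society': 0.95,
    }

    print(round(math.prod(stages.values()), 3))  # 0.228
    print(round(0.5 ** 2, 3))                    # 0.25, two heads in a row, for comparison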

\n

It is also worth mentioning that the analyses I have seen relating to the future of humanity indicate that a Banksian almost-utopia is unlikely, that the positive scenarios are *very positive*, and that negative scenarios usually involve the destruction of human technological society. My criterion of personal identity will probably be the limiting factor in how good a future I can experience. If I am prepared to spend 5% of my time and effort pursuing a 1 in 100 chance of this maxed-out utopia, I should be prepared to put quite a lot of effort into making sure I \"make it\" given the probability I've just estimated.

\n

If someone were to convince me that the probability of cryonics working was, in fact, less than 1 in 1000, I would (quite rationally) give up on it.

\n

This relatively high probability I've estimated (two heads on a coin) has other consequences for cryonaughts alive today, if they believe it. We should be prepared to expend a non-negligible amount of effort moving somewhere where the probability of quick suspension is as high as possible. Making cryonics more popular will make probabilities 1, 2, and 4 increase. (2 will increase because people will have a stake in the future after their deanimation). The cryonics community should therefore spend some time and effort convincing more people to be cryopreserved, though this is a hard problem, intimately related to the purpose of Less Wrong, rationality and to secular ethics and secular \"religions\" such as secular humanism, h+ and the brights. Those who are pro-cryonics and have old relatives should be prepared to bite the social cost of attempting to persuade those relatives that cryonics is worth thinking about, at least to the extent that they care about their relatives. This is an actionable item that I intend to action with all of my 4 remaining grandparents in the next few months.

\n

I have seen (but cannot find the citation for, though see this) research that predicts that 50% of people will suffer from dementia for the 6 months before they die by 2020 (and that this will get worse over time as life expectancy increases). If we add to my list above a term for \"the probability that you won't be information theoretically dead before you're legally dead\", and set it to 50%, the overall probability takes a huge hit; in addition, a controlled deanimation improves the probability of being deanimated without damage. Any cryonaught who really shares my beliefs about the rewards and probabilities for cryonics should be prepared to deanimate themselves before they would naturally die, perhaps by a significant amount, say 10 years. (Yes, I know this is illegal, but it is a useful thought experiment, and it indicates that we should be campaigning hard for this.) If you really believe the probabilities I've given for cryonics, you should deanimate instead of retiring. At a sufficiently high probability of cryonics working, you should rationally attempt to deanimate immediately or within a few years, no matter how old you are, in order to maximize the amount of your personal development which occurs in a really good environment. It seems unlikely that this situation will come to pass, but it is an interesting thought experiment; if you would not be prepared, under sufficiently compelling circumstances, to prematurely deanimate, you may be in cryonics for nonrational reasons.

\n

 

\n

1 [The use of dice rather than numbers to represent probabilities in this article comes from my war-gaming days. I have a good emotional intuition as to how unlikely rolling a 6 is, it is more informative to me than 0.1666. I've won and lost battles based on 6+ saving throws. I recommend that readers play some game that involves dice to get a similarly good intuition]

" } }, { "_id": "gBewgmzcEiks2XdoQ", "title": "Mandatory Secret Identities", "pageUrl": "https://www.lesswrong.com/posts/gBewgmzcEiks2XdoQ/mandatory-secret-identities", "postedAt": "2009-04-08T18:10:20.193Z", "baseScore": 69, "voteCount": 86, "commentCount": 186, "url": null, "contents": { "documentId": "gBewgmzcEiks2XdoQ", "html": "

Previously in seriesWhining-Based Communities

\n
\n

\"But there is a reason why many of my students have achieved great things; and by that I do not mean high rank in the Bayesian Conspiracy.  I expected much of them, and they came to expect much of themselves.\" —Jeffreyssai

\n
\n

Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student will dream of...

\n

...becoming a teacher and having their own martial arts dojo someday.

\n

To see what's wrong with this, imagine going to a class on literary criticism, falling in love with it, and dreaming of someday becoming a famous literary critic just like your professor, but never actually writing anything.  Writers tend to look down on literary critics' understanding of the art form itself, for just this reason.  (Orson Scott Card uses the analogy of a wine critic who listens to a wine-taster saying \"This wine has a great bouquet\", and goes off to tell their students \"You've got to make sure your wine has a great bouquet\".  When the student asks, \"How?  Does it have anything to do with grapes?\" the critic replies disdainfully, \"That's for grape-growers!  I teach wine.\")

\n

Similarly, I propose, no student of rationality should study with the purpose of becoming a rationality instructor in turn.  You do that on Sundays, or full-time after you retire.

\n

And to place a go stone blocking this failure mode, I propose a requirement that all rationality instructors must have secret identities.  They must have a life outside the Bayesian Conspiracy, which would be worthy of respect even if they were not rationality instructors.  And to enforce this, I suggest the rule:

\n

  Rationality_Respect1(Instructor) = min(Rationality_Respect0(Instructor), Non_Rationality_Respect0(Instructor))

\n

That is, you can't respect someone as a rationality instructor, more than you would respect them if they were not rationality instructors.
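
Read literally (a toy sketch, not from the original post; the 0-to-10 scores are invented purely for illustration), the rule is just a min over the two prior respect scores, which makes the upper-bound behavior easy to see:

    def rationality_respect(respect_as_instructor, respect_outside_the_dojo):
        # Respect as an instructor is capped by respect earned outside the dojo:
        # an upper bound, not an equality.
        return min(respect_as_instructor, respect_outside_the_dojo)

    print(rationality_respect(9, 3))  # 3: a brilliant lecturer with a thin secret identity
    print(rationality_respect(6, 8))  # 6: capped by teaching ability, not by the secret identity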

\n

Some notes:

\n

• This doesn't set Rationality_Respect1 equal to Non_Rationality_Respect0.  It establishes an upper bound.  This doesn't mean you can find random awesome people and expect them to be able to teach you.  Explicit, abstract, cross-domain understanding of rationality and the ability to teach it to others is, unfortunately, an additional discipline on top of domain-specific life success.  Newton was a Christian etcetera.  I'd rather hear what Laplace had to say about rationality—Laplace wasn't as famous as Newton, but Laplace was a great mathematician, physicist, and astronomer in his own right, and he was the one who said \"I have no need of that hypothesis\" (when Napoleon asked why Laplace's works on celestial mechanics did not mention God).  So I would respect Laplace as a rationality instructor well above Newton, by the min() function given above.

\n

• We should be generous about what counts as a secret identity outside the Bayesian Conspiracy.  If it's something that outsiders do in fact see as impressive, then it's \"outside\" regardless of how much Bayesian content is in the job.  An experimental psychologist who writes good papers on heuristics and biases, a successful trader who uses Bayesian algorithms, a well-selling author of a general-audiences popular book on atheism—all of these have worthy secret identities.  None of this contradicts the spirit of being good at something besides rationality—no, not even the last, because writing books that sell is a further difficult skill!  At the same time, you don't want to be too lax and start respecting the instructor's ability to put up probability-theory equations on the blackboard—it has to be visibly outside the walls of the dojo and nothing that could be systematized within the Conspiracy as a token requirement.

\n

• Apart from this, I shall not try to specify what exactly is worthy of respect.  A creative mind may have good reason to depart from any criterion I care to describe.  I'll just stick with the idea that \"Nice rationality instructor\" should be bounded above by \"Nice secret identity\".

\n

But if the Bayesian Conspiracy is ever to populate itself with instructors, this criterion should not be too strict.  A simple test to see whether you live inside an elite bubble is to ask yourself whether the percentage of PhD-bearers in your apparent world exceeds the 0.25% rate at which they are found in the general population.  Being a math professor at a small university who has published a few original proofs, or a successful day trader who retired after five years to become an organic farmer, or a serial entrepreneur who lived through three failed startups before going back to a more ordinary job as a senior programmer—that's nothing to sneeze at.  The vast majority of people go through their whole lives without being that interesting.  Any of these three would have some tales to tell of real-world use, on Sundays at the small rationality dojo where they were instructors.  What I'm trying to say here is: don't demand that everyone be Robin Hanson in their secret identity, that is setting the bar too high.  Selective reporting makes it seem that fantastically high-achieving people have a far higher relative frequency than their real occurrence.  So if you ask for your rationality instructor to be as interesting as the sort of people you read about in the newspapers—and a master rationalist on top of that—and a good teacher on top of that—then you're going to have to join one of three famous dojos in New York, or something.  But you don't want to be too lax and start respecting things that others wouldn't respect if they weren't specially looking for reasons to praise the instructor.  \"Having a good secret identity\" should require way more effort than anything that could become a token requirement.

\n

Now I put to you:  If the instructors all have real-world anecdotes to tell of using their knowledge, and all of the students know that the desirable career path can't just be to become a rationality instructor, doesn't that sound healthier?

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"Beware of Other-Optimizing\"

\n

Previous post: \"Whining-Based Communities\"

" } }, { "_id": "K9mSWuKpZSk7t8FaH", "title": "E-Prime", "pageUrl": "https://www.lesswrong.com/posts/K9mSWuKpZSk7t8FaH/e-prime", "postedAt": "2009-04-08T13:01:01.604Z", "baseScore": 19, "voteCount": 31, "commentCount": 20, "url": null, "contents": { "documentId": "K9mSWuKpZSk7t8FaH", "html": "

I found this and thought we could find a use for it.

\n
\n

Wikipedia describes E-Prime, short for English-Prime, as a modified form of English. E-Prime uses a slightly simplified syntax and vocabulary that eliminates all forms of the verb \"to be\".

\n

Some people use E-Prime as a mental discipline to filter speech and translate the speech of others. For example, the sentence \"the movie was good\", translated into E-Prime, could become \"I liked the movie\". The translation communicates the speaker's subjective experience of the movie rather than the speaker's judgment of the movie. In this example, using E-Prime makes it harder for the writer or reader to confuse a statement of opinion with a statement of fact.
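
As a toy illustration of that filtering discipline, here is a minimal Python sketch that flags forms of the verb to be in a sentence.  The word list and function name are mine and only approximate E-Prime; contractions and other edge cases are ignored.

    import re

    # Forms of the verb to be that E-Prime excludes (contractions omitted for brevity).
    TO_BE = {'be', 'been', 'being', 'am', 'is', 'are', 'was', 'were'}

    def flag_to_be(sentence):
        # Return the forms of to be found in a sentence.
        words = re.findall(r'[a-z]+', sentence.lower())
        return [w for w in words if w in TO_BE]

    print(flag_to_be('The movie was good.'))  # ['was'] -- not E-Prime
    print(flag_to_be('I liked the movie.'))   # []      -- passes the filter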

\n
\n

Discuss! In E-Prime!

" } }, { "_id": "HnpWwo3iCbkRZZRhF", "title": "Obvious identity fail", "pageUrl": "https://www.lesswrong.com/posts/HnpWwo3iCbkRZZRhF/obvious-identity-fail", "postedAt": "2009-04-08T06:00:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "HnpWwo3iCbkRZZRhF", "html": "

Paul Graham points out something important: religion and politics are generally unfruitful topics of discussion because people have identities tied to them.

An implication:

The most intriguing thing about this theory, if it’s right, is that it explains not merely which kinds of discussions to avoid, but how to have better ideas. If people can’t think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible.

This seems obvious. For one thing, if you are loyal to anything that incorporates a particular view of the world rather than to truth per se, you have to tend away from believing true things. 

Ramana Kumar says this is not obvious, and (after discussion of this and other topics) that I shouldn’t care if things seem obvious, and should just point them out anyway, as they’re often not, to him at least (so probably to most). This seems a good idea, except that a microsecond’s introspection reveals that I really don’t want to say obvious things. Why? Because my identity fondly includes a bit about saying not-obvious things. Bother. 

Is it dangerous here? A tiny bit, but I don’t seem very compelled to change it. And nor, I doubt, would be many others with more important things. If you identify with being Left or Right more than being correct to begin with, what would make you want to give it up? 

Ramana suggests that if having an identity is inescapable but the specifics are flexible, then the best plan is perhaps to identify with some small set of things that impels you to kick a large set of other things out of your identity. 

What makes people identify with some things and use/believe/be associated with/consider probable/experience others without getting all funny about it anyway?

As a side note, I don’t fully get the concept. I just notice it happens, including in my head sometimes, and that it seems pretty pertinent to people insisting on being wrong. If you can explain how it works or what it means, I’m curious.

" } }, { "_id": "iNhRdRqwFqQHPRqE4", "title": "Zero-based karma coming through", "pageUrl": "https://www.lesswrong.com/posts/iNhRdRqwFqQHPRqE4/zero-based-karma-coming-through", "postedAt": "2009-04-08T01:19:18.700Z", "baseScore": 9, "voteCount": 12, "commentCount": 40, "url": null, "contents": { "documentId": "iNhRdRqwFqQHPRqE4", "html": "

Our friends at Tricycle will push through a zero-based karma system (no self-voting possible) sometime this evening.  At present this will only cover future posts/comments - they may go back and revise history some time in the indefinite future (or not), but apparently that would be overly complicated for now.  We'll see how this works.

" } }, { "_id": "PCpzG9NJeviXM5YSq", "title": "Help, help, I'm being oppressed!", "pageUrl": "https://www.lesswrong.com/posts/PCpzG9NJeviXM5YSq/help-help-i-m-being-oppressed", "postedAt": "2009-04-07T23:22:21.112Z", "baseScore": 43, "voteCount": 41, "commentCount": 145, "url": null, "contents": { "documentId": "PCpzG9NJeviXM5YSq", "html": "

Followup toWhy Support the Underdog?
Serendipitously related to: Whining-Based Communities

\n

Pity whatever U.N. official has to keep track of all the persecution going on. With two hundred plus countries in the world, there's just so much of it.

Some places persecute Christians. Here's a Christian writer from a nation we'll call Country A:

\n
\n

Global reports indicate that over 150,000 Christians were martyred last year, chiefly in foreign countries. However, statistics are changing: persecution of Christians is on the increase at home. What's happening to bring about this change? According to some experts a pattern is emerging reminiscent of Jewish persecution in post war Germany. \"Isolation of, and discrimination against Christians is growing almost geometrically\" says Don McAlvany in The Midnight Herald. \"This is the way it started in Germany against the Jews. As they became more isolated and marginalized by the Nazi propaganda machine, as popular hatred and prejudice against the Jews increased among the German people, wholesale persecution followed.  Could this be where the growing anti-Christian consensus in this country is taking us?\"

\n
\n

And some countries persecute atheists. Here's an atheist activist describing what we'll call Country B.

\n
\n

Godless atheists are the most despised and distrusted minority in our country. The growing attention to atheism and atheists has given rise to increased anti-atheist bigotry in the media. Circumstances for them can be difficult enough that they have to stay in the closet and hide their atheism from friends and family. Atheists have to fear discrimination on the job, in the community, and even in their own families if their atheism is made known. Some even have to contend with harassment and vandalism. Distrust and hatred of atheists is widespread enough through our society that they have plenty of reasons to be concerned.

\n
\n

Some countries persecute Muslims. A Muslim youth in Country C:

\n
\n

The government has continuously persecuted Arabs and Muslims with extremist and unpopular views, charging them with terrorism and criminal acts related to terrorism. I am proud of [Muslims] who stand up to this system of injustice and to our country's gulag. They may beat them, but they will continue to suffer because in this country, Arabs are never innocent, they are merely guilty of lesser crimes. Even if they are proven innocent, after years of suffering and being defamed, the gulag and the political persecution will continue.

\n
\n

And some countries persecute everyone except Muslims. A politician in Country D writes:

\n
\n

The gathering storm I have been warning of for years has now formed over us. Yet instead of fighting the gradual incursion of Sharia and the demands of an intolerant, even militant Islam, we are cowering and fatalistic.

\n
\n

Since countries A, B, C, and D are all America1, what's up with all these people claiming persecution?

\n

\n

I don't doubt that there are examples of Christians, atheists, Muslims, and non-Muslims all getting persecuted in the US. There's no rule that says only one group can be persecuted at a time, especially in a society as pluralistic as our own. But compare the claim \"There are a few incidents of people persecuting Christians\" with the claim \"Christians are a persecuted group in our society.\" The first reduces to an objectively true statement. The second is a sorta-meaningless \"dangling variable\" that can be declared either true or false depending on what connotation you want to send.

And people tend to take the liberty to call the is_persecuted variable \"true\" for their own group and \"false\" for groups they don't like. Why does everyone want to be persecuted so badly? Here are some reasons I can think of:

1. The tendency to support the underdog. Being persecuted is about as underdog as you can get, and underdog supporters everywhere are quick to leap to the support of persecuted groups.

2. To create an incentive for fair-minded people to \"level the playing field\" by raising their status. I read about a tribe in India involved in a media campaign to inform everyone just how persecuted they really were. Why? They wanted to be added to India's affirmative action program, which would give them a better chance at government jobs. Likewise, when Christians talk about persecution, they usually point out that one great way to stop this persecution would be to put up the Ten Commandments in all public places.

3. To self-handicap. If I'm unsuccessful, it's not because I'm lazy or unqualified, it's because they were persecuting me! Likewise, if I'm successful, then I managed to triumph in the face of adversity. I'm practically Martin Luther King or someone.

4. To build in-group cohesiveness. People come together in the face of a common enemy.

5. To explain away a lack of success. Let's say you're a fundamentalist Christian and you notice most of the rest of America dislikes you and thinks you're crazy. You might say \"Well, by Aumann's Agreement Theorem, they probably know something I don't, and I should moderate my religious views.\" But if your Revolutionary is AWOL, your Apologist could conclude that there is a sinister campaign going on to discredit Christianity, and everyone has fallen for this campaign but you and your friends.

I think these all play a role, with 1 and 2 the most important.

But one common thread in psychology is that the mind very frequently wants to have its cake and eat it too. Last week, we agreed that people like supporting the underdog, but we also agreed that there's a benefit to being on top; that when push comes to shove a lot of people are going to side with Zug instead of Urk. What would be really useful in winning converts would be to be a persecuted underdog who was also very powerful and certain to win out. But how would you do that?

Some Republicans have found a way. Whether they're in control of the government or not, the right-wing blogosphere invariably presents them as under siege, a rapidly dwindling holdout of Real American Values in a country utterly in the grip of liberalism.

But they don't say anything like \"Everyone's liberal, things are hopeless, might as well stay home.\" They believe in a silent majority. Liberals control all sorts of nefarious institutions that are currently exercising a stranglehold on power and hiding the truth, but most Americans, once you pull the wool off their eyes, are conservatives at heart and just as angry about this whole thing as they are. Any day now, they're going to throw off the yoke of liberal tyranny and take back their own country.

This is a great system. Think about it. Not only should you support the Republicans for support-the-underdog and level-the-playing-field reasons, you should also support them for majoritarian reasons and because their side has the best chance of winning. It's the best possible world short of coming out and saying \"Insofar as it makes you want to vote for us, we are in total control of the country, but insofar as that makes you not want to vote for us, we are a tiny persecuted minority who need your help\".

We're coming dangerously close to talking politics here, but this isn't just a Republican phenomenon. It underlies a lot of the uses of the word \"elite\" - this sense that there's a small minority of wrong-headed people who disagree with you in control of everything, even though the vast majority of people are secretly on your side. Whether it's the \"neoliberal capitalist elite\", the \"east coast intellectual elite\" or whatever, it's a one word Pavlovian trigger that activates this concept of your favorite group simultaneously being dominant and being persecuted by those darned elites.

There are branches of social science that consciously devote themselves solely to officially identifying the Powerful and the Powerless in every issue and conflict. They have their uses. But as rationalists, we need to devote ourselves to the separate task of disentangling the question at hand from the question of who is more powerful. Otherwise, we are at the mercy of the underdog bias, the support-the-winning-team bias, and any mutant combinations of them that may arise2.

\n

As is often the case, reduction of statements with objective truth-values can save your hide here. If every time Chris the Christian says \"Christians are persecuted,\" you hear \"Christians aren't allowed to stick the Ten Commandments up in schools,\" then you're no longer vulnerable to his appeal to pity.

\n

What other defenses are there against the human tendency to obsess over which side is more powerful, instead of which side is right?

\n

Footnotes:

\n

1: The first comment comes from Worthy News, the second from About Atheism, the third from Mideast Youth, and the fourth is Senator Rick Santorum

\n

2: Has anyone else ever watched two people in an argument completely abandon discussion over who is right, and instead turn to which person's side is persecuted worse, as if they were more or less the same question anyway? It's not a pretty sight.

" } }, { "_id": "rg7vPTtyLMfT6Qqud", "title": "Whining-Based Communities", "pageUrl": "https://www.lesswrong.com/posts/rg7vPTtyLMfT6Qqud/whining-based-communities", "postedAt": "2009-04-07T20:31:50.409Z", "baseScore": 87, "voteCount": 90, "commentCount": 99, "url": null, "contents": { "documentId": "rg7vPTtyLMfT6Qqud", "html": "

Previously in seriesSelecting Rationalist Groups
Followup toRationality is Systematized Winning, Extenuating Circumstances

\n

Why emphasize the connection between rationality and winning?  Well... that is what decision theory is for.  But also to place a Go stone to block becoming a whining-based community.

\n

Let's be fair to Ayn Rand:  There were legitimate messages in Atlas Shrugged that many readers had never heard before, and this lent the book a part of its compelling power over them.  The message that it's all right to excel—that it's okay to be, not just good, but better than others—of this the Competitive Conspiracy would approve.

\n

But this is only part of Rand's message, and the other part is the poison pill, a deadlier appeal:  It's those looters who don't approve of excellence who are keeping you down.  Surely you would be rich and famous and high-status like you deserve if not for them, those unappreciative bastards and their conspiracy of mediocrity.

\n

If you consider the reasonableness-based conception of rationality rather than the winning-based conception of rationality—well, you can easily imagine some community of people congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down.  Wrapping themselves up in their own bitterness for reality refusing to comply with the greatness they thought they should have.

\n

But this is not how decision theory works—the \"rational\" strategy adapts to the other players' strategies, it does not depend on the other players being rational.  If a rational agent believes the other players are irrational then it takes that expectation into account in maximizing expected utility.  Van Vogt got this one right: his rationalist protagonists are formidable from accepting reality swiftly and adapting to it swiftly, without reluctance or attachment.
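
A minimal sketch of that decision-theoretic point: the expected-utility maximizer simply conditions on whatever it believes the other player will do, rational or not.  The payoff numbers and the opponent strategy below are made up for illustration.

    # Row player payoffs in a 2x2 game, indexed by (my_action, their_action).
    # Made-up numbers with a prisoner's-dilemma-like structure.
    payoff = {('C', 'C'): 3, ('C', 'D'): 0,
              ('D', 'C'): 5, ('D', 'D'): 1}

    def best_response(their_strategy):
        # Maximize expected utility against a fixed, possibly irrational, opponent strategy.
        def expected_utility(my_action):
            return sum(p * payoff[(my_action, a)] for a, p in their_strategy.items())
        return max(('C', 'D'), key=expected_utility)

    # Against an opponent who cooperates 90% of the time, the rational strategy
    # adapts to that fact; it never requires the opponent to be rational.
    print(best_response({'C': 0.9, 'D': 0.1}))  # D, given these payoffs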

\n

Self-handicapping (hat-tip Yvain) is when people who have been made aware of their own incompetence or probable future failure, deliberately impose handicaps on themselves—on the standard model, in order to give themselves an excuse for failure.  To make sure they had an excuse, subjects reduced preparation times for athletic events, studied less, exerted less effort, gave opponents an advantage, lowered their own expectations, even took a drug they had been told was performance-inhibiting...

\n

So you can see how much people value having an excuse—how much they'll pay to make sure they have something outside themselves to blame, in case of failure.  And this is a need which many belief systems fill—they provide an excuse.

\n

It's the government's fault, that taxes you and suppresses the economy—if it weren't for that, you would be a great entrepreneur.  It's the fault of those less competent who envy your excellence and slander you—if not for that, the whole world would pilgrimage to admire you.  It's racism, or sexism, that keeps you down—if it weren't for that, you would have gotten so much further in life with the same effort.  Your rival Bob got the promotion by bootlicking.  Those you call sinners may be much wealthier than you, but that's because God set up the system to reward the good deeds of the wicked in this world and punish them for their sins in the next, vice versa for the virtuous:  \"A boor cannot know, nor can a fool understand this: when the wicked bloom like grass and all the doers of iniquity blossom—it is to destroy them till eternity.\"

\n

And maybe it's all true.  The government does impose taxes and barriers to new businesses.  There is racism and sexism.  Scientists don't run out and embrace new ideas without huge amounts of work to evangelize them.  Loyalty is a huge factor in promotions and flattery does signify loyalty.  I can't back religions on that divine plan thing, but still, those wealthier than you may have gotten there by means more vile than you care to use...

\n

And so what?  In other countries there are those with far greater obstacles and less opportunity than you.  There are those born with Down's Syndrome.  There's not a one of us in this world, even the luckiest, whose path is entirely straight and without obstacles.  In this unfair world, the test of your existence is how well you do in this unfair world.

\n

I earlier suggested that we view our parents and environment and genes as having determined which person makes a decision—plucking you out of Platonic person-space to agonize in front of the burning orphanage, rather than someone else—but you determine what that particular person decides.  If, counterfactually, your genes or environment had been different, then it would not so much change your decision as determine that someone else would make that decision.

\n

In the same sense, I would suggest that a baby with your genes, born into a universe entirely fair, would by now be such a different person that as to be nowhere close to \"you\", your point in Platonic person-space.  You are defined by the particular unfair challenges that you face; and the test of your existence is how well you do with them.

\n

And in that unfair challenge, the art of rationality (if you can find it) is there to help you deal with the horrible unfair challenge and by golly win anyway, not to provide fellow bitter losers to hang out with.  Even if the government does tax you and people do slander you and racists do discriminate against you and others smarm their way to success while you keep your ethics... still, this whole business of rationality is there to help you win anyway, if you can find the art you need.  Find the art together, win together, if we can.  And if we can't win, it means we weren't such good rationalists as we thought, and ought to try something different the next time around.  (If it's one of those challenges where you get more than one try.)

\n

From within that project—what good does a sense of violated entitlement do?  At all?  Ever?  What good does it do to tell ourselves that we did everything right and deserved better, and that someone or something else is to blame?  Is that the key thing we need to change, to do better next time?

\n

Immediate adaptation to the realities of the situation!  Followed by winning!

\n

That is how I would cast down the gauntlet, just to make really, really sure we don't go down the utterly, completely, pointlessly unhelpful, surprisingly common path of mutual bitterness and consolation.

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"Mandatory Secret Identities\"

\n

Previous post: \"Incremental Progress and the Valley\"

" } }, { "_id": "E7CKXxtGKPmdM9ZRc", "title": "Of Lies and Black Swan Blowups", "pageUrl": "https://www.lesswrong.com/posts/E7CKXxtGKPmdM9ZRc/of-lies-and-black-swan-blowups", "postedAt": "2009-04-07T18:26:41.675Z", "baseScore": 33, "voteCount": 41, "commentCount": 8, "url": null, "contents": { "documentId": "E7CKXxtGKPmdM9ZRc", "html": "\n\n\n\n \n\n \n\n

Judge Marcus Einfeld, age 70, Queen’s Counsel since 1977, Australian Living Treasure 1997, United Nations Peace Award 2002, founding president of Australia’s Human Rights and Equal Opportunities Commission, retired a few years back but routinely brought back to judge important cases . . .

\n\n

. . . went to jail for two years over a series of perjuries and lies that started with a $77, 6-mph-over speeding ticket.

\n\n

That whole suspiciously virtuous-sounding theory about honest people not being good at lying, and entangled traces being left somewhere, and the entire thing blowing up in a Black Swan epic fail, actually does have a certain number of exemplars in real life, though obvious selective reporting is at work in our hearing about this one.

\n\n" } }, { "_id": "KsZNM2aiSKNTXfhZm", "title": "Eternal Sunshine of the Rational Mind", "pageUrl": "https://www.lesswrong.com/posts/KsZNM2aiSKNTXfhZm/eternal-sunshine-of-the-rational-mind", "postedAt": "2009-04-07T15:10:27.725Z", "baseScore": 10, "voteCount": 15, "commentCount": 18, "url": null, "contents": { "documentId": "KsZNM2aiSKNTXfhZm", "html": "

What if you could choose which memories and associations to retain and which to discard? Using that capability rationally (whatever that word means to you) would be a significant challenge -- and that challenge has just come one step closer to being a reality.

\r\n
\r\n

Dr. Fenton had already devised a clever way to teach animals strong memories for where things are located. He teaches them to move around a small chamber to avoid a mild electric shock to their feet. Once the animals learn, they do not forget. Placed back in the chamber a day later, even a month later, they quickly remember how to avoid the shock and do so.

\r\n

But when injected — directly into their brain — with a drug called ZIP that interferes with PKMzeta, they are back to square one, almost immediately. “When we first saw this happen, I had grad students throwing their hands up in the air, yelling,” Dr. Fenton said. “Well, we needed a lot more than that one study.”

\r\n

They now have it. Dr. Fenton’s lab repeated the experiment, in various ways...

\r\n
" } }, { "_id": "p3TfgGvbAd3tfxYe3", "title": "What isn't the wiki for?", "pageUrl": "https://www.lesswrong.com/posts/p3TfgGvbAd3tfxYe3/what-isn-t-the-wiki-for", "postedAt": "2009-04-07T10:36:49.348Z", "baseScore": 9, "voteCount": 9, "commentCount": 25, "url": null, "contents": { "documentId": "p3TfgGvbAd3tfxYe3", "html": "

The new wiki is off to a flying start - it's less than 15 hours old, and already it has over 20 articles and five authors. It's probably about time we worked out what it was for.

\n

I created it because as things stand, I can't point my friends to Less Wrong and say \"come and learn about rationality and take part in these fascinating and potentially important discussions!\" The discussions we have here assume years of reading Overcoming Bias and close attention to what's been said there and here; it must be practically impenetrable to newcomers. So for me the primary goal is simply to provide a glossary, to give newcomers a fighting chance of understanding what on Earth we are talking about and why. I think it can do more than that, but before I come to that, let me say a little about what I think it's not for.

\n

The way I would currently like to see it, the wiki is not there to duplicate what is already done elsewhere. So it's not a place for discussion - that's what this site is for, and the features to support discussion here are far stronger than they are there, what with voting, threading and so forth. By the same token, it's not a place to advance your ideas - it's better to do that here, where people can comment on them and where it's clearly tagged as the work of one author rather than some sort of collective conclusion.

\n

I'd like to avoid duplication in other areas, too. Anything that can go in Wikipedia instead of our wiki should do: we will get better results if we and they are editing the same biography of Eliezer Yudkowsky, rather than creating a fork. To that end, I've created a {{wikilink}} template that can go at the top of an article, linking to the article with the same name in Wikipedia. Have a look at our current article on Newcomb's paradox - there is far more detail in the linked Wikipedia article, but there are some things we carry because they (rightly) won't: the sometimes non-standard vocabulary we tend to use around it (eg \"Omega\") and links to related articles in Overcoming Bias/Less Wrong on the subject, which they might not choose to keep (since Wikipedia is not a link farm).

\n

Similarly, we don't want to provide our own index of heuristics and biases, since there's one on Wikipedia and another on the Psychology wiki, and most of what they lack on the subject we can fix there rather than trying to address by duplication.

\n

It's often easier to say what a thing is not for than what it is for. What have I missed out here that we should be using the wiki for; am I right to discourage what I set out above; what else do we need to say about how best to use it? Because we could be discussing anything in a given week, but a wiki evolves more slowly, I'd like to hope that if in a year's time I meet someone who seems open to the ideas we discuss here and wants to learn more, it's the wiki I'd point them at rather than this website; it might eventually be the best starting point on how to become less wrong.

" } }, { "_id": "ceXpD4vjzfiNkNYTp", "title": "Newcomb's Problem vs. One-Shot Prisoner's Dilemma", "pageUrl": "https://www.lesswrong.com/posts/ceXpD4vjzfiNkNYTp/newcomb-s-problem-vs-one-shot-prisoner-s-dilemma", "postedAt": "2009-04-07T05:32:37.012Z", "baseScore": 14, "voteCount": 22, "commentCount": 16, "url": null, "contents": { "documentId": "ceXpD4vjzfiNkNYTp", "html": "

\n

Continuation of: http://lesswrong.com/lw/7/kinnairds_truels/i7#comments

\n

Eliezer has convinced me to one-box Newcomb's problem, but I'm not ready to Cooperate in one-shot PD yet. In http://www.overcomingbias.com/2008/09/iterated-tpd.html?cid=129270958#comment-129270958, Eliezer wrote:

\n
\n

PDF, on the 100th [i.e. final] move of the iterated dilemma, I cooperate if and only if I expect the paperclipper to cooperate if and only if I cooperate, that is:

\n

Eliezer.C <=> (Paperclipper.C <=> Eliezer.C)

\n
\n

The problem is, the paperclipper would like to deceive Eliezer into believing that Paperclipper.C <=> Eliezer.C, while actually playing D. This means Eliezer has to expend resources to verify that Paperclipper.C <=> Eliezer.C really is true with high probability. If the potential gain from cooperation in a one-shot PD is less than this cost, then cooperation isn't possible. In Newcomb’s Problem, the analogous issue can be assumed away, by stipulating that Omega will see through any deception. But in the standard game theory analysis of one-shot PD, the opposite assumption is made, namely that it's impossible or prohibitively costly for players to convince each other that Player1.C <=> Player2.C.
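
For reference, the quoted condition can be enumerated directly.  The short sketch below is mine, but the formula is exactly the one quoted above.

    from itertools import product

    def iff(a, b):
        # Material biconditional: true exactly when a and b agree.
        return a == b

    print('E.C     P.C     E.C <=> (P.C <=> E.C)')
    for e_c, p_c in product([True, False], repeat=2):
        holds = iff(e_c, iff(p_c, e_c))
        print(f'{e_c!s:8}{p_c!s:8}{holds}')

    # Read as a plain material biconditional, the condition comes out true exactly
    # when Paperclipper.C is true, so the real content lies in how each side comes
    # to expect the other side's choice -- which is what the verification problem
    # discussed above is about.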

\n

It seems likely that this assumption is false, at least for some types of agents and sufficiently high gains from cooperation. In http://www.nabble.com/-sl4--prove-your-source-code-td18454831.html, I asked how superintelligences can prove their source code to each other, and Tim Freeman responded with this suggestion:

\n
\n

Entity A could prove to entity B that it has source code S by consenting to be replaced by a new entity A' that was constructed by a manufacturing process jointly monitored by A and B.  During this process, both A and B observe that A' is constructed to run source code S.  After A' is constructed, A shuts down and gives all of its resources to A'.

\n
\n

But this process seems quite expensive, so even SIs may not be able to play Cooperate in one-shot PD, unless the stakes are pretty high. Are there cheaper solutions, perhaps ones that can be applied to humans as well, for players in one-shot PD to convince each other what decision systems they are using?

\n

On a related note, Eliezer has claimed that truly one-shot PD is very rare in real life. I would agree with this, except that the same issue also arises from indefinitely repeated games where the probability of the game ending after the current round is too high, or the time discount factor is too low, for a tit-for-tat strategy to work.

" } }, { "_id": "aoLncXJMFfTH9vL5a", "title": "On Comments, Voting, and Karma - Part I", "pageUrl": "https://www.lesswrong.com/posts/aoLncXJMFfTH9vL5a/on-comments-voting-and-karma-part-i", "postedAt": "2009-04-07T02:44:26.333Z", "baseScore": 10, "voteCount": 12, "commentCount": 47, "url": null, "contents": { "documentId": "aoLncXJMFfTH9vL5a", "html": "

There has been a great deal of discussion here about the proper methods of voting on comments and on how karma should be assigned.  I believe it's finally reached the point where a post is warranted that covers some of the issues involved.  (This may be just because I find myself frequently in disagreement with others about it.)

\n

The Automatic Upvote

\n

First, there is the question of whether one should be able to upvote one's own comment.  This actually breaks apart into two related concerns:

\n

(1) One is able to upvote one's own comments, and

\n

(2) One gains a point of karma just for posting a comment.

\n

These need not be tied.  We could have (2) without (1) by awarding a point of karma for commenting, without changing the comment's score.  We could also have (1) without (2) by simply not counting self-upvotes for karma.

\n

I am in favor of (2).  The main argument against (2) is that it rewards quantity over quality.  The main argument for (2) is that it offers an automatic incentive to post comments; that is, it rewards commenting over silence.  As we're community-building, I think the latter incentive is more important than the former.  But I'm not sure this is worth arguing further - it serves as a distraction from the benefits of (1).

\n

I am also in favor of (1).  As a default, all comments have a base rating of 0.  Since one is allowed to vote on one's own comments, and upvoting is the default for one's own comments, this makes comments effectively start at a rating of 1.  The argument against this is that it makes more sense for comments to start with a rating of 0, so that someone else liking a comment gives it a positive rating, while someone disliking it gives it a negative rating.  I disagree with this assessment.

\n

If I post a comment, it's because it's the best comment I could think of to add to the discussion.  I will usually not bother saying something if I don't think it's the sort of thing that I would upvote.  When I see someone else's comment that I don't think is very good, I downvote it.  Since they already upvoted it, I'm in effect disagreeing that this was something worth saying.  The score now reflects this - a score of 0 shows that one person thought it was a worthwhile comment, and one person did not.

\n

Furthermore, if I was not able to vote on my own comments, I would be much more reluctant to upvote.  Since I would not be able to upvote my comment, upvoting someone else's comment would suggest that I think their comment is better than my own. But by hypothesis, I thought my comment was nearly the best thing that could be said on the subject; thus, upvotes will be rare.

\n

And so I say that we implement a compromise - (1) and not (2).
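
A minimal sketch of that compromise, (1) and not (2): the self-vote counts toward the displayed comment score but not toward the author's karma.  The class and field names below are hypothetical, not a description of the actual site code.

    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        votes: dict = field(default_factory=dict)  # voter name -> +1 or -1

        def score(self):
            # (1): the author may vote, so the self-vote shows up in the displayed score.
            return sum(self.votes.values())

        def karma_contribution(self):
            # not (2): the self-vote is excluded when computing the author's karma.
            return sum(v for voter, v in self.votes.items() if voter != self.author)

    c = Comment(author='alice', votes={'alice': +1, 'bob': -1})
    print(c.score())               # 0  -- one person thought it worth saying, one did not
    print(c.karma_contribution())  # -1 -- no free karma point just for posting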

\n

What should upvote/downvote mean?

\n

I think it is established pretty well that upvote means \"High quality comment\" or \"I would like to see more comments like this one\", while downvote means \"Low quality comment\" or \"I would like to see fewer comments like this one\".  However, this definition still retains a good bit of ambiguity.

\n

It is too easy to think of upvote and downvote as 'agree' and 'disagree'.  Even guarding myself against this behavior, I find the cursor drifting to downvote as soon as I think, \"Well that's obviously wrong\".  But that's clearly not what the concept is there for.  Comments voted up appear higher on the page (on certain views), which allows casual readers to see the best comments and discussions on any particular post.  If we use upvote and downvote to mean 'agree' and 'disagree', then this is effectively an echo chamber, where the only comments to float to the top are the ones that jibe with the groupthink.

\n

Instead, upvote and downvote should reflect overall quality of a comment.  There are several criteria I tend to use to judge a good comment (this list is not all-inclusive):

\n
    \n
  1. Did the comment add any information, or did it just add to the noise? (+)
  2. Does the comment include references or links to relevant information? (+)
  3. Does the comment reflect a genuinely held point-of-view that adds to the discussion? (+)
  4. Is the comment part of a discussion that might lead somewhere interesting? (+)
  5. Is the comment obvious spam / trolling? (-)
  6. Is the comment off-topic? (-)
\n

Since we feel the need to voice whether we agree or disagree with comments, but 'I agree' and 'I disagree' comments are noisy, it's been suggested that there should be separate buttons to indicate agreement and disagreement.  Thus, someone posting a well-argued on-topic defense of theism can get the upvote and 'disagree', while someone posting an off-topic 'physicalism is true' can get the downvote and 'agree'.  Presumably, we'd only count upvotes and downvotes for karma, but we could use 'agree' and 'disagree' for \"most controversial\" or other views/metrics.
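
The separate agree/disagree buttons suggested above amount to recording two independent axes per vote, with only the quality axis feeding karma.  A minimal sketch, with hypothetical field names:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Vote:
        quality: Optional[int] = None    # +1 upvote / -1 downvote; counts toward karma
        agreement: Optional[int] = None  # +1 agree / -1 disagree; display-only

    # A well-argued, on-topic defense of theism, per the example above:
    votes = [Vote(quality=+1, agreement=-1), Vote(quality=+1, agreement=-1)]

    karma = sum(v.quality for v in votes if v.quality is not None)
    controversy = sum(v.agreement for v in votes if v.agreement is not None)
    print(karma, controversy)  # 2 -2 : high quality, widely disagreed with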

\n

Whether votes should require an explanation

\n

It has been suggested that votes, or downvotes specifically, should require an explanation.  I disagree with both sentiments.  First, requiring explanations for downvotes but not upvotes would bias the voting positively, which would have the effect of rewarding quantity over quality and decreasing the impact of downvotes.

\n

But requiring explanations for votes is in general a bad idea.  This site is already a burden to keep up with; for those of us that do a lot of voting, writing an explanation for each one would be too much time and effort.  Requiring an explanation for every vote would doubtless result in a lot less voting.  Also, explaining votes is almost always off-topic, so adds to the noise here without really contributing to the discussion.

\n

Note Yvain's more personal rationale:

\n
I'm not prepared to write an essay explaining exactly what was wrong with each of them, especially if the original commenter wasn't prepared to take three seconds to write a halfway decent response.
\n

Adding to the burden of those already performing the service of voting unduly penalizes those who are doing good, to the end of appeasing those who are contributing to the noise here.

\n

Relevant Comments

\n

For reference, some links to relevant posts and sections of comments.  I tried to be inclusive, since there have been a lot of discussions about these issues - more relevant ones hopefully near the top. (Please comment if you know of any other relevant discussions)

\n

1 2 3 what upvote and downvote should mean and whether there should be agree/disagree buttons

\n

4 whether karma should be the sum of individual post scores, or (perhaps) an average

\n

5 super-votes

\n

6 The utility of comment karma

\n

7 whether one should unselect the self-upvote

\n

8 9 whether downvotes should require explanation

\n

10 whether Eliezer Yudkowsky gets fewer upvotes than others

\n

11 whether karma can be used to gauge rationality

\n

12 whether people downvote for disagreeing with groupthink

\n

13 whether karma promotes a closed-garden effect

\n

14 whether administrators should delete comments entirely

\n

15 Lesswrong Antikibitzer: tool for hiding comment authors and vote counts

\n

ETA: I might concede that this post is possibly off-topic for Less Wrong - but the blog/community site about \"Less Wrong\" does not exist yet, so this seems like the best place to post it.

\n

ETA2: Public records of upvotes/downvotes might solve some of these problems; discuss.

" } }, { "_id": "XYrcTJFJoYKX2DxNL", "title": "Extenuating Circumstances", "pageUrl": "https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances", "postedAt": "2009-04-06T22:57:31.701Z", "baseScore": 59, "voteCount": 45, "commentCount": 42, "url": null, "contents": { "documentId": "XYrcTJFJoYKX2DxNL", "html": "

Followup toTsuyoku Naritai

\n
\n

\"Just remember, there but for a massive genetic difference, environmental factors, and conscious choices, go you or I.\" -- Justin Corwin

\n
\n

Failures don't have single causes.  We choose single causes to focus on, but nothing in the universe emerges from a single parent event.  Every assassination ever committed is the fault of every asteroid that wasn't in the right place to hit the assassin.

\n

What good, then, does it do to blame circumstances for your failure?  What good does it do? - to look over a huge causal lattice in which your own decisions played a part, and point to something you can't control, and say:  \"There is where it failed.\"  It might be that a surgical intervention on the past, altering some node outside yourself, would have let you succeed instead of fail.  But what good does this counterfactual do you?  Will you choose that outside reality be different on your next try?

\n

And yet... when I look at other people, not myself, I find myself taking \"extenuating circumstances\" into account a great deal.  I go to great lengths to \"save the world\" (as I believe from my epistemic vantage point).  When I consider doing less, I consider that this would make me a horrible awful unforgivable person.  And then I cheerfully shake hands with others who aren't trying at all to save the world.  I seem to want to have my cake and eat it too - to instantiate Goetz's Paradox:  \"Society tells you to work to make yourself more valuable.  Then it tells you that when you reason morally, you must assume that all lives are equally valuable.  You can't have it both ways.\"

\n

Is this an inherent subjective asymmetry - does morality just look different from the outside than inside?  If so, is that okay, or is it a sign of self-contradiction?  Or is it condescension on my part - that I think less of others and so hold them to lower standards?

\n

I've pondered this question for a while, and this is the main defense I can offer against the charge of condescension:

\n

I wouldn't tell others to take into account \"extenuating circumstances\" in judging themselves.

\n

Indeed, that would feel like an act of sabotage - like slashing their tires.  Too much of life consists of holding ourselves to a high enough standard.

\n

There are people who blame themselves too easily - people depressed, falling into despair and not moving forward, because they blame themselves for things they couldn't help.

\n

But you really want to be very careful with applying this kind of reasoning to yourself, because a whole whack of a lot of people who were successful in life got there by driving straight through problems that couldn't be helped.  I'm minded of a recent comment on Hacker News (not sure where) about someone who wanted to work at a certain game company, only there were no jobs available and no H.R. contact listed... so they looked at the numbers listed and deduced the corporate phone prefix, then systematically dialed telephone numbers until they got the CEO's office, and then pled their case to be hired.  They did not, in fact, get that job; but, not surprisingly, did eventually end up employed in their chosen industry.  Contrast to someone who reasons, \"I won't be able to get a job now - there's a recession!\"

\n

For yea, I have watched some people be stopped in their tracks without trying by \"obstacles\" that other people I know, Silicon Valley entrepreneur types, would roll over like a steamroller flattening out a speedbump.  That difference probably accounts for a lot of real life performance, and it probably has a great deal to do with what, exactly, you regard as a valid excuse - a condition that makes a failure not reflect badly on you.

\n

A lot of people would regard being 14 years old as a valid excuse for not starting your own company.  Not Ben Casnocha, though.

\n

If someone has advanced to the point of explicitly pleading some excuse, then that's probably the point at which I do begin to hold them accountable.  When someone says to me, \"I haven't signed up my teenage son for cryonics because I'm religious so I must not believe in that sort of thing,\" I think, Well, I can't blame them, they don't know anything about rationality.  But if they say, \"I haven't signed up my teenage son for cryonics, because I don't know anything about rationality, so you can't expect me to give the correct answer to this dilemma,\" then at that point I really might start blaming them.

\n

The way a real extenuating circumstance looks from the inside is not that you think \"I have an extenuating circumstance, so I can be excused for failing to do X\", but rather, that X just doesn't seem like an available option at all, or X seems like it would have so many penalties attached that it's not in fact the best option which you could and should perform but aren't performing.

\n

Similarly, if ignorance is your extenuating circumstance, you just don't realize that X is a good idea, rather than thinking to yourself, \"I am ignorant of the fact that X is a good idea, therefore I can be excused for failing to choose it.\"

\n

Like \"to believe falsely\", if there were a verb meaning \"to forgive due to extenuating circumstances\", it would have no first-person, present-tense indicative.

\n

So I would advise others, like myself, not to think in terms of \"extenuating circumstances\" at all; I would advise people to hold themselves accountable for every dilemma they have advanced to the point of explicitly perceiving as a dilemma - the same rule I use internally.  This is my defense against the charge of condescension / Goetz's Paradox - that at this sufficiently meta level, I would tell others to use the same rule as I.

\n

If I hold myself responsible for doing certain things, it is because I perceive them as morally-good, prosocially-obligatory options... which I may nonetheless have some difficulty in doing... but which I nonetheless could realistically do without overspending my mental energy budget.

\n

That's my own main limit, incidentally, my mental energy budget.  I am constantly wrestling with the fact of its reality, because it sounds like such a hideously wonderful excuse that my reflexes keep on doubting it.  On any given occasion I'm never sure if it's a valid justification.  And so when I do run up against my limits hard enough that there's no mistaking them, there's a certain guilty pleasure of validation, that I can feel less guilty about having done less on other occasions...

\n

Well, that's my own demon to wrestle with.  My intended point here is that I could be wrong about my mental energy budget.  I could be doing too little, relative to what I can do without overspending myself.  But I couldn't possibly say-in-the-moment, \"I'm failing to choose the right option, but as an extenuating circumstance, I underestimated my own mental energy.\"  If I am wrong, some forgiving other might look upon it as an \"extenuating circumstance\".  But I cannot myself say, \"And if I'm wrong, then that's an extenuating circumstance.\"  From my own perspective, rather, I am obliged to not be wrong.

\n

\"There are no outs.  Even if someone else would call it an extenuating circumstance and forgive me for giving up, I'll just get it done anyway.\"  I'm not a venture capitalist, but if I were, that's the attitude I'd want to see in a startup founder before I invested money.

\n

If after all that the project failed anyway, then I might really believe that everything that could be done, had been done...

\n

...so long as it wasn't my project, which is supposed by golly to succeed and not fail due to \"extenuating circumstances\".

\n

A true rationalist should win, after all.

" } }, { "_id": "vgxMj9X3KwXdNtTTS", "title": "What do fellow rationalists think about Mensa?", "pageUrl": "https://www.lesswrong.com/posts/vgxMj9X3KwXdNtTTS/what-do-fellow-rationalists-think-about-mensa", "postedAt": "2009-04-06T22:08:13.104Z", "baseScore": 6, "voteCount": 10, "commentCount": 33, "url": null, "contents": { "documentId": "vgxMj9X3KwXdNtTTS", "html": "

It's not a typical OB/LW subject, but Robin correctly pointed out that most rationalists are outside OB/LW, and so I'm asking about one of the organizations that might hold many of them.

\n

A couple of weeks ago I took a supervised IQ test by Mensa, out of curiosity and for some CV padding (cheap signaling is a perfectly rational thing to do). Now I've got a letter back from them saying that I'm in the top whatever %, and they'd like me to join. I wasn't really planning on joining Mensa, or anything else, so I'm wondering - do any fellow rationalists have any experience with them? Is it worth bothering?

\n

As a bonus here's a quick description of their supervised IQ testing process:

\n\n

They compute percentiles for both tests separately, and the higher of the two counts as the result. So you could score 0 points on one (if that's even possible) and 148 or 132, respectively, on the other, and you're in (2 standard deviations above the mean, or the top 2%). The tests obviously check knowledge of obscure English words and meanings, and the ability to deal with pressure, in addition to intelligence as such. Well, I guess no test is perfect.
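
As a quick check on the 2-standard-deviations-or-top-2% figure: 148 and 132 are both mean-plus-two-SD scores if the two scales use standard deviations of 24 and 16 respectively (an assumption on my part, not something stated above), and the upper tail of a normal distribution beyond +2 SD is about 2.3%.  A minimal sketch:

    from math import erf, sqrt

    def fraction_above(score, mean, sd):
        # Upper tail of a normal distribution: fraction of people scoring above this.
        z = (score - mean) / sd
        return 0.5 * (1 - erf(z / sqrt(2)))

    # Both cutoffs are mean + 2 SD on their respective scales
    # (the SD values 24 and 16 are assumed, not stated in the post).
    print(fraction_above(148, 100, 24))  # ~0.0228, roughly the top 2%
    print(fraction_above(132, 100, 16))  # the same ~0.0228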

\n

So Mensa - good or bad?

" } }, { "_id": "ZXLnFxLgpm3KtLo6q", "title": "Rationalist wiki, redux", "pageUrl": "https://www.lesswrong.com/posts/ZXLnFxLgpm3KtLo6q/rationalist-wiki-redux", "postedAt": "2009-04-06T19:24:43.022Z", "baseScore": 12, "voteCount": 17, "commentCount": 29, "url": null, "contents": { "documentId": "ZXLnFxLgpm3KtLo6q", "html": "

This site is very likely impenetrable to the newcomer. You one-box and defect on the True Prisoner's Dilemma, but is that just because of a cached thought, or is it your Tsuyoku Naritai?  So I've created the LessWrong Wiki on Wikia. I'd like this to become a repository of useful definitions and links: it can support our discussions here, and create something lasting from the ephemerality of a blog.

\n

badger already created a Wiki, but as you can see in the updates to that article badger and others pretty quickly concluded that TiddlyWiki wouldn't be up to the job. MediaWiki, the software Wikipedia and Wikia use, is the monster of them all, and will give us good support for practically anything we want to do, including mathematical notation. I've ported across a couple of articles from the old wiki onto the new, but many more are needed. The \"download\" link in TiddlyWiki and a text editor may help.

\n

EDIT: Usernames are global across all of Wikia, so you may not be able to use the same name there as here. Sorry.

" } }, { "_id": "px4nYEy3rDqeegJw3", "title": "Average utilitarianism must be correct?", "pageUrl": "https://www.lesswrong.com/posts/px4nYEy3rDqeegJw3/average-utilitarianism-must-be-correct", "postedAt": "2009-04-06T17:10:02.598Z", "baseScore": 5, "voteCount": 32, "commentCount": 168, "url": null, "contents": { "documentId": "px4nYEy3rDqeegJw3", "html": "

I said this in a comment on Real-life entropic weirdness, but it's getting off-topic there, so I'm posting it here.

\n

My original writeup was confusing, because I used some non-standard terminology, and because I wasn't familiar with the crucial theorem.  We cleared up the terminological confusion (thanks esp. to conchis and Vladimir Nesov), but the question remains.  I rewrote the title yet again, and have here a restatement that I hope is clearer.

\n\n

Some problems with average utilitarianism from the Stanford Encyclopedia of Philosophy:

\n
\n

Despite these advantages, average utilitarianism has not obtained much acceptance in the philosophical literature. This is due to the fact that the principle has implications generally regarded as highly counterintuitive. For instance, the principle implies that for any population consisting of very good lives there is a better population consisting of just one person leading a life at a slightly higher level of well-being (Parfit 1984 chapter 19). More dramatically, the principle also implies that for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being (Parfit 1984). That total well-being should not matter when we are considering lives worth ending is hard to accept. Moreover, average utilitarianism has implications very similar to the Repugnant Conclusion (see Sikora 1975; Anglin 1977).

\n
\n

(If you assign different weights to the utilities of different people, we could probably get the same result by considering a person with weight W to be equivalent to W copies of a person with weight 1.)
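
A tiny numeric sketch of the second implication quoted above, with arbitrary welfare numbers: one person at a very negative level versus a million people at a slightly less negative level.

    def average_utility(welfares):
        return sum(welfares) / len(welfares)

    def total_utility(welfares):
        return sum(welfares)

    lone_sufferer = [-100.0]              # one life of constant torture
    vast_suffering = [-99.0] * 1_000_000  # a million lives only slightly less bad

    print(average_utility(lone_sufferer), average_utility(vast_suffering))
    # -100.0 vs -99.0: the average view ranks the million suffering lives as better.
    print(total_utility(lone_sufferer), total_utility(vast_suffering))
    # -100.0 vs -99000000.0: the total view ranks them the other way around.

    # The parenthetical note above: a person with weight W counts like W copies at weight 1.
    def weighted_average(welfares, weights):
        return sum(w * u for u, w in zip(welfares, weights)) / sum(weights)

    print(weighted_average([-100.0, -99.0], [1, 3]))  # -99.25, same as averaging four copies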

" } }, { "_id": "WhzCbrxG4KzFz7W4d", "title": "Newcomb's Problem standard positions", "pageUrl": "https://www.lesswrong.com/posts/WhzCbrxG4KzFz7W4d/newcomb-s-problem-standard-positions", "postedAt": "2009-04-06T17:05:23.522Z", "baseScore": 7, "voteCount": 12, "commentCount": 22, "url": null, "contents": { "documentId": "WhzCbrxG4KzFz7W4d", "html": "

Marion Ledwig's dissertation summarizes much of the existing thinking that's gone into Newcomb's Problem.

\n

(For the record, I myself am neither an evidential decision theorist, nor a causal decision theorist in the current sense.  My view is not easily summarized, but it is reflectively consistent without need of precommitment or similar dodges; my agents see no need to modify their own source code or invoke abnormal decision procedures on Newcomblike problems.)

" } }, { "_id": "b88EtWvjyRQc89XzT", "title": "Rationalists should beware rationalism", "pageUrl": "https://www.lesswrong.com/posts/b88EtWvjyRQc89XzT/rationalists-should-beware-rationalism", "postedAt": "2009-04-06T14:16:30.733Z", "baseScore": 32, "voteCount": 39, "commentCount": 32, "url": null, "contents": { "documentId": "b88EtWvjyRQc89XzT", "html": "

Rationalism is most often characterized as an epistemological position. On this view, to be a rationalist requires at least one of the following: (1) a privileging of reason and intuition over sensation and experience, (2) regarding all or most ideas as innate rather than adventitious, (3) an emphasis on certain rather than merely probable knowledge as the goal of enquiry. -- The Stanford Encyclopedia of Philosophy on Continental Rationalism.

By now, there are some things which most Less Wrong readers will agree on. One of them is that beliefs must be fueled by evidence gathered from the environment. A belief must correlate with reality, and an important part of that is whether or not it can be tested - if a belief produces no anticipation of experience, it is nearly worthless. We can never try to confirm a theory, only test it.

\n

But yet, we seem to have no problem coming up with theories that are either untestable or that we have no intention of testing, such as evolutionary psychological explanations for the underdog effect.

\n

I'm being a bit unfair here. Those posts were well thought out and reasonably argued, and Roko's post actually made testable predictions. Yvain even made a good try at solving the puzzle, and when he couldn't, he reasonably concluded that he was stumped and asked for help. That sounds like a proper use of humility to me.

\n

But the way that ev-psych explanations get rapidly manufactured and carelessly flung around on OB and LW has always been a bit of a pet peeve for me, as that's exactly how bad evpsych gets done. The best evolutionary psychology takes biological and evolutionary facts, applies those to humans and then makes testable predictions, which it goes on to verify. It doesn't take existing behaviors and then try to come up with some nice-sounding rationalization for them, blind to whether or not the rationalization can be tested. Not every behavior needs to have an evolutionary explanation - it could have evolved via genetic drift, or be a pure side-effect from some actual adaptation. If we set out by trying to find an evolutionary reason for some behavior, we are assuming from the start that there must be one, when it isn't a given that there is. And even a good theory need not explain every observation.

\n

Obviously I'm not saying that we should never come up with such theories. Be wary of those who speak of being open-minded and modestly confess their ignorance. But we should avoid giving them excess weight, and instead assign them very broad confidence intervals. \"This seems to contradict the claim that the human mind is well adapted to its EEA. Is evolutionary psychology wrong? Maybe the creationists are correct after all\", writes Roko, implying that it is crucial for us to come up with an explanation (yes, I do know that this is probably just a dramatic exaggeration on Roko's part, but it made such a good example that I couldn't help but use it). But regardless of whether or not we do come up with an explanation, that explanation doesn't carry much weight if it doesn't provide testable predictions. And even if it did provide such predictions, we'd need to find confirming evidence first, before lending it much credence.

\n

I suspect that we rationalists may have a tendency towards rationalism, as in the meaning above. In order to learn how to think, we study math and probability theory. We consider different fallacies, and find out how to dismantle broken reasoning, both that of others and our own. We learn to downplay the role of our personal experiences, recognizing that those may be just the result of a random effect and a small sample size. But learning to think more like a mathematician, whose empiricism resides in the realm of pure thought, does not predispose us to more readily go collect evidence from the real world. Neither does the downplaying of our personal experiences. Many are computer science majors, used to being in the comfortable position of being capable of testing their hypotheses without needing to leave their office. It is, then, an easy temptation to come up with a nice-sounding theory which happens to explain the facts, and then consider the question solved. Reason must reign supreme, must it not?

\n

But if we really do so, we are endangering our ability to find the truth in the future. Our existing preconceptions constrain part of our creativity, and if we believe untested hypotheses too uncritically, the true ones may never even occur to us. If we believe in one falsehood, then everything that we build on top of it will also be flawed.

\n

This isn't to say that all tests would necessarily have to involve going out of your room to dig for fossils. A hypothesis does get some validation from simply being compatible with existing knowledge - that's how they pass the initial \"does this make sense\" test in the first place. Certainly, a scholarly article citing several studies and theories in its support is already drawing on considerable supporting evidence. It often happens that a conclusion, built on top of previous knowledge, is so obvious that you don't even need to test it. Roko's post, while not yet in this category, drew on already established arguments relating to the Near-Far distinction and other things, and I do in fact find it rather plausible. Unless contradictory evidence comes in, I'll consider it the best explanation of the underdog phenomenon, one which I can build further hypotheses on. But I do keep in mind that none of its predictions have been tested yet, and that it might still be wrong.

\n

That is why I say: by all means come up with all kinds of hypotheses, but if they haven't been tested, be careful not to believe in them too much.

" } }, { "_id": "kjua3pfeGiskAAac2", "title": "Heuristic is not a bad word", "pageUrl": "https://www.lesswrong.com/posts/kjua3pfeGiskAAac2/heuristic-is-not-a-bad-word", "postedAt": "2009-04-06T06:55:10.539Z", "baseScore": 12, "voteCount": 20, "commentCount": 13, "url": null, "contents": { "documentId": "kjua3pfeGiskAAac2", "html": "

An insect tries to escape through the windowpane, tries the same again and again, and does not try the next window which is open and through which it came into the room. A man is able, or at least should be able, to act more intelligently. —George Polya, How To Solve It

Intelligence makes humans capable of many impressive feats. Unlike flies and birds, we don't bang up against windows multiple times trying to get out of our houses. We can travel to the moon. We have taken over the planet. Why? Because intelligence enables us to solve problems.

All problems start the same way. They start unsolved. Each fact humans have figured out was initially unfigured out by us. Then we did something, which converted the unknown fact into a known fact, changed the state of a problem from unsolved to solved.

I emphasize the unknown starting state of problems to make a point: problem solving, the basis of human achievement, depends on a process of discovery, discovery of new facts, new possibilities, new methods, and new ways of thought.

Heuristic—the art and science of discovery—has been integral for human progress. The word \"heuristic\" is related to \"Eureka!\"

\n

Heuristics and biases

\n

Unfortunately, heuristic is a bad word. At least, that's the impression you might get, seeing it hand-in-hand with \"bias\" in the psychological literature. In Judgment under Uncertainty: Heuristics and Biases, Tversky and Kahneman acknowledge that \"in general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.\" On Overcoming Bias, heuristics seem primarily discussed as resulting in biases.

\n

Bias-reduction is a form of skepticism that is a critical part of rationality. Due to the uncertain nature of the territory of reality, many notions of the territory are wrong. Rational skepticism helps us identify false assumptions, areas where our map will depart from the territory.

While bias-reduction is necessary in the search for rationality, it is not sufficient. It's a mistake in cartography to have areas of your map that are filled in wrong, but it's also a mistake to have areas on your map blank that you could have filled in, at least with something approximate. A map with wrong patches will not take you to your destination, but neither will a map with blank patches. Believing things that are false is one error which will prevent us from finding the truth or winning in our endeavors, yet another error in rationality is failing to recognize or believe things that are true, probable, or useful.

\n

How can we draw our maps more accurately in the first place, so that they need fewer corrections? This is a job for heuristic.

\n

Heuristic as rational creativity

\n

Rationality depends on both bias-reduction and heuristic. Heuristic is the creative faculty, while overcoming bias and other skeptical techniques are the critical faculty. As Ben Kovitz proposes on the Heuristic Wiki, \"Heuristic is about how to steer your attention so that you find things that meet the criteria of logic.\" From the start, heuristic depends on avoiding bias, or else it will be based on false assumptions and spiral off in the wrong direction. The results of even well-calibrated heuristics require critical scrutiny. Yet no matter how good your ability to critique ideas may be—to separate the wheat from the chaff—you will never learn anything if your attention is wasted on ideas that are overwhelmingly chaff; heuristic is about growing better wheat in the first place, making your winnowing efforts more productive.

While granting and emphasizing the fallibility of heuristic and its danger of taking us away from truth, and that most applications of heuristic will be crap, I also want to explore the potential of heuristic to take us towards truth. I want to understand how heuristic works in practice, not just acknowledge the benefits of heuristic in principle. Heuristic enabled Tversky and Kahneman to make new discoveries about bias; it enabled Einstein to formulate General Relativity and arrogantly state his confidence in it regardless of future experiments.

\n

While many of the heuristics currently scrutinized for bias seem like quaint quirks of human psychology to which we condescendingly admit usefulness in some situations, we must recognize that all of human knowledge came from heuristic and started off as a guess. To the extent that we think that humans have solved any problems—albeit approximately or provisionally—we should value heuristic. Perhaps the best heuristics are so far unarticulated or undiscovered.

The varied results of heuristic lead me not to pessimism about heuristic, but rather to optimism about how we might identify the strong heuristics currently in use and develop even stronger ones. In future posts, I intend to delve deeper into what heuristic is, why we need it, and how to practice specific heuristics. I don't yet know a great deal about heuristic on a conscious level, but I want to figure it out.

" } }, { "_id": "oNwbEPrat8pyBrimk", "title": "Rationality Toughness Tests", "pageUrl": "https://www.lesswrong.com/posts/oNwbEPrat8pyBrimk/rationality-toughness-tests", "postedAt": "2009-04-06T01:12:31.928Z", "baseScore": 28, "voteCount": 27, "commentCount": 17, "url": null, "contents": { "documentId": "oNwbEPrat8pyBrimk", "html": "

(Epistemic) rationality has two major components:

\n\n

Attending takes time, energy, quiet, etc.  Circumstances where human rationality degrades include when:

\n\n

It seems relatively easy to test rationality smarts; repeatedly give folks info and time to work new problems and measure their accuracy, calibration, etc.  And I have an idea for testing for rationality toughness: compare performance on info-similar pairs of good/bad-circumstance problems. 

For example, assume people are better at evaluating if a spouse is cheating when considering an acquaintance in their social circle, relative to a stranger or their own spouse. If so, we could pose them a pair of problems with very similar info structure, one with an easy spouse and one with a hard spouse.  The closeness of their response in these two cases would then be a measure of their rationality toughness.

Of course this test may fail if the similarity is too obvious, or the pair are asked too closely in time.  But maybe we don't even need to ask the same person the two questions; perhaps we could usefully compare someone's answer on a hard question to answers from a pool of similar people on matched easy questions.
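
To make the pooled-comparison idea concrete, here is one hypothetical scoring sketch (the scoring rule and the numbers are my own illustration, not something from the post): score each person by the gap between their hard-circumstance answer and the pooled easy-circumstance answers from similar people; a smaller gap suggests more rationality toughness.

```python
from statistics import mean

def toughness_score(hard_answer, easy_pool_answers):
    """Gap between a probability estimate given under hard circumstances and
    the pooled estimate from matched easy-circumstance questions.
    A score of 0 means the circumstances didn't move the answer at all."""
    return abs(hard_answer - mean(easy_pool_answers))

print(toughness_score(0.35, [0.60, 0.55, 0.65]))  # ~0.25: large degradation
print(toughness_score(0.58, [0.60, 0.55, 0.65]))  # ~0.02: tough reasoner
```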

While I haven't thought this through, it already suggests a training technique: consider matched hard/easy circumstance problems and compare your answers, separated by enough time that you forget most of your previous analysis.

" } }, { "_id": "KPp6DZXAR9SumbzJz", "title": "Rationalist Wiki", "pageUrl": "https://www.lesswrong.com/posts/KPp6DZXAR9SumbzJz/rationalist-wiki", "postedAt": "2009-04-06T00:19:25.192Z", "baseScore": 11, "voteCount": 13, "commentCount": 24, "url": null, "contents": { "documentId": "KPp6DZXAR9SumbzJz", "html": "

Some (including myself) have suggested that a rationality wiki would be a useful supplement to this site. In the spirit of getting things done, I set one up here: http://rationality.tiddlyspot.com/ The password to save edits is omega.

\n

The TiddlyWiki framework it uses is very lightweight and won't be satisfactory as a long-term solution. I do think it has potential as a minimalist beginner's guide though, and could serve us well for the time being. I am not very knowledgeable about wiki software in general, but TiddlyWiki has served me well for multiple personal wikis. I planned on developing it a little further before revealing it to the community, but other commitments demand my attention. Please feel free to contribute.

\n

Comments, suggestions? Is it better to start with something that can handle a significant user base and future growth, or should it stay small and self-contained to remain accessible to beginners?

\n

Update: I think it's becoming clear this can't serve as more than a short-term hack, even for a minimalist beginner's guide. At least it is provoking discussion. I'm still hoping for contributions so we have a leg up once an official solution emerges. If you do contribute, please try to keep markup to a minimum to facilitate a future conversion.

" } }, { "_id": "kKAmxmQq9umJiMFSp", "title": "Real-Life Anthropic Weirdness", "pageUrl": "https://www.lesswrong.com/posts/kKAmxmQq9umJiMFSp/real-life-anthropic-weirdness", "postedAt": "2009-04-05T22:26:38.377Z", "baseScore": 33, "voteCount": 33, "commentCount": 90, "url": null, "contents": { "documentId": "kKAmxmQq9umJiMFSp", "html": "

In passing, I said:

\n
\n

From a statistical standpoint, lottery winners don't exist - you would never encounter one in your lifetime, if it weren't for the selective reporting.

\n
\n

And lo, CronoDAS said:

\n
\n

Well... one of my grandmothers' neighbors, whose son I played with as a child, did indeed win the lottery. (AFAIK, it was a relatively modest jackpot, but he did win!)

\n
\n

To which I replied:

\n
\n

Well, yes, some of the modest jackpots are statistically almost possible, in the sense that on a large enough web forum, someone else's grandmother's neighbor will have won it. Just not your own grandmother's neighbor.

\n

Sorry about your statistical anomalatude, CronoDAS - it had to happen to someone, just not me.

\n
\n

There's a certain resemblance here - though not an actual analogy - to the strange position your friend ends up in, after you test the Quantum Theory of Immortality.

\n

For those unfamiliar with QTI, it's a simple simultaneous test of many-worlds plus a particular interpretation of anthropic observer-selection effects:  You put a gun to your head and wire up the trigger to a quantum coinflipper.  After flipping a million coins, if the gun still hasn't gone off, you can be pretty sure of the simultaneous truth of MWI+QTI.

\n

But what is your watching friend supposed to think?  Though his predicament is perfectly predictable to you - that is, you expected before starting the experiment to see his confusion - from his perspective it is just a pure 100% unexplained miracle.  What you have reason to believe and what he has reason to believe would now seem separated by an uncrossable gap, which no amount of explanation can bridge.  This is the main plausible exception I know to Aumann's Agreement Theorem.

\n

Pity those poor folk who actually win the lottery!  If the hypothesis \"this world is a holodeck\" is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck.  (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)
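
As a rough sketch of the arithmetic behind that point (the specific numbers below are illustrative assumptions, not figures from the post): if your prior on the holodeck hypothesis is well above the chance of winning by honest luck, then actually winning pushes most of your posterior onto the holodeck.

```python
# Illustrative Bayes update for a lottery winner. All three numbers are
# assumptions for the sake of the example.
p_holodeck = 1e-6           # prior on "this world is a holodeck", well above 10^-8
p_win_given_holodeck = 0.5  # assume holodecks favor dramatic wins for the protagonist
p_win_given_normal = 1e-8   # roughly the odds of a big jackpot

posterior = (p_holodeck * p_win_given_holodeck) / (
    p_holodeck * p_win_given_holodeck
    + (1 - p_holodeck) * p_win_given_normal
)
print(posterior)  # ~0.98
```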

\n

It's a sad situation to be in - but don't worry: it will always happen to someone else, not you.

" } }, { "_id": "v9ouxk76FMDQ4Fir9", "title": "Supporting the underdog is explained by Hanson’s Near/Far distinction", "pageUrl": "https://www.lesswrong.com/posts/v9ouxk76FMDQ4Fir9/supporting-the-underdog-is-explained-by-hanson-s-near-far", "postedAt": "2009-04-05T20:22:02.593Z", "baseScore": 31, "voteCount": 31, "commentCount": 27, "url": null, "contents": { "documentId": "v9ouxk76FMDQ4Fir9", "html": "

Yvain can’t make heads or tails of the apparently near-universal human tendency to root for the underdog. [Read Yvain’s post before going any further.]

\n

He uses the following plausible-sounding story from a small hunter-gatherer tribe in our environment of evolutionary adaptedness to illustrate why support for the underdog seems to be an antiprediction of the standard theory of human evolutionary psychology:

\n
\n

Suppose Zug and Urk are battling it out for supremacy in the tribe. Urk comes up to you and says “my faction are hopelessly outnumbered and will probably be killed, and our property divided up amongst Zug’s supporters.” Those cave-men with genes that made them support the underdog would join Urk’s faction and be wiped out. Their genes would not make it very far in evolution’s ruthless race, unless we can think of some even stronger effect that might compensate for this.

\n
\n

Yvain cites an experiment where people supported either Israel or Palestine depending on who they saw as the underdog. This seems to contradict the claim that the human mind is well adapted to its EEA.

\n

\n

A lot of people tried to use the “truel” situation as an explanation: in a game of three players, it is rational for the weaker two to team up against the stronger one. But the choice of which faction to join is not a truel between three approximately equal players: as an individual you will have almost no impact upon which faction wins, and if you join the winning side you won’t necessarily be next on the menu: you will have about as much chance as anyone else in Zug’s faction of doing well if there is another mini-war. People who proffered this explanation are guilty of not being more surprised by fiction than reality. To start with, if this theory were correct, we would expect to see soldiers defecting away from the winning side in the closing stages of a war... which, to my knowledge, is the opposite of what happens. 

\n

SoulessAutomaton comes closest to the truth when he makes the following statement:

\n
\n

there may be a critical difference between voicing sympathy for the losing faction and actually joining it and sharing its misfortune.

\n
\n

Yes! Draw Distinctions!

\n

I thought about what the answer to Yvain’s puzzle was before reading the comments – and decided that Robin’s Near/Far distinction is the answer.

\n
\n

All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits. 

\n

Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits. 

\n
\n

When you put people in a social-science experiment room and tell them, in the abstract, about the Israel/Palestine conflict, they are in “far” mode. This situation is totally unlike having to choose which side to join in an actual fight – where your brain goes into “near” mode, and you quickly (I predict) join the likely victors. This explains the apparent contradiction between the Israel experiment and the situation in a real fight between Zug’s faction and Urk’s faction.

\n

In a situation where there is an extremely unbalanced conflict that you are “distant” from, there are various reasons I can think of for supporting the underdog: but the common theme is that when the mind is in “far” mode, its primary purpose is to signal how nice it is, rather than to actually acquire resources. Why do we want to signal to others that we are nice people? We do this because they are more likely to cooperate with us and trust us! If evolution built a cave-man who went around telling other cave-men what a selfish bastard he was... well, that cave-man wouldn't last long. 

\n

When people support, for example, Palestine, they don't say \"I support Palestine because it is the underdog\", they say \"I support Palestine because they are the party with the ethical high ground, they are in the right, Israel is in the wrong\". In doing so, they have signalled that they support people for ethical reasons rather than self-interested reasons. Someone who is guided by ethical principles rather than self-interest makes a better ally. Conversely, someone who supports the stronger side signals that they are more self-interested and less concerned with ethical considerations. Admittedly, this is a signal that you can fake to some extent: there is probably a tradeoff between the probability that the winning side will punish you, and the value that supporting someone for ethical reasons carries. When the conflict is very close, the probability of you becoming involved makes the signal too expensive. When the conflict is far, the signal is almost (but not quite) free.

\n

You also put yourself in a better bargaining position for when you meet the victorious side: you can complain that they don't really deserve all their conquest-acquired wealth because they stole it anyway. In a world where people genuinely think that they are nicer than they really are (which is, by the way, the world of humans), being able to frame someone as being the \"bad guy\" puts you in a position of strength when negotiating. They might make concessions to preserve their self-image. In a world where you can't lie perfectly, preserving your self-image as a nice person or a nice tribe is worth making some concessions for.

\n

All that remains to explain is what situation in our evolutionary past corresponds to hearing about a faraway conflict (Like Israel/Palestine for westerners who don’t live there or have any true interest). This I am not sure about: perhaps it would be like hearing of a distant battle between two tribes? Or a conflict between two factions of your tribe, which occurs in such a way that you cannot take sides?

\n

My explanation makes the prediction that if you performed a social-science experiment where people felt sufficiently close to the conflict to be personally involved, they would support the likely winner. This might involve making people very frightened and thus not pass ethics committee approval, though.

\n

The only good experience I have with “near” tribal conflicts is my experiences at school; whenever some poor underdog was being bullied, I felt compelled to join in with the bullying, in exactly the same “automatic” way that I feel compelled to support the underdog in Far situations. I just couldn’t help myself. 

\n

Hat-tip to Yvain for admitting he couldn’t explain this. The path to knowledge is paved with grudging admissions of your ignorance. 

\n


" } }, { "_id": "GZ8t3uJRPSQb2sAH3", "title": "Formalizing Newcomb's", "pageUrl": "https://www.lesswrong.com/posts/GZ8t3uJRPSQb2sAH3/formalizing-newcomb-s", "postedAt": "2009-04-05T15:39:03.228Z", "baseScore": 22, "voteCount": 39, "commentCount": 117, "url": null, "contents": { "documentId": "GZ8t3uJRPSQb2sAH3", "html": "

This post was inspired by taw urging us to mathematize Newcomb's problem and Eliezer telling me to post stuff I like instead of complaining.

\n

To make Newcomb's problem more concrete we need a workable model of Omega. Let me count the ways:

\n

1) Omega reads your decision from the future using a time loop. In this case the contents of the boxes are directly causally determined by your actions via the loop, and it's logical to one-box.

\n

2) Omega simulates your decision algorithm. In this case the decision algorithm has indexical uncertainty on whether it's being run inside Omega or in the real world, and it's logical to one-box thus making Omega give the \"real you\" the million.
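
A minimal sketch of case 2, assuming a deterministic decision procedure and setting aside the indexical-uncertainty subtlety: if Omega fills the boxes by literally running your procedure, then whatever the procedure outputs in simulation it also outputs for real, so one-boxing procedures walk away with the million.

```python
# Omega as a simulator: the same decision procedure is run once to fill the
# boxes and once "for real". This is a toy illustration, not a formalization.
def omega(decision_procedure):
    predicted = decision_procedure()               # the simulated run
    box_b = 1_000_000 if predicted == "one-box" else 0
    actual = decision_procedure()                  # the "real" run
    return box_b if actual == "one-box" else box_b + 1_000

print(omega(lambda: "one-box"))   # 1000000
print(omega(lambda: "two-box"))   # 1000
```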

\n

3) Omega \"scans your brain and predicts your decision\" without simulating you: calculates the FFT of your brainwaves or whatever. In this case you can intend to build an identical scanner, use it on yourself to determine what Omega predicted, and then do what you please. Hilarity ensues.

\n

(NB: if Omega prohibits agents from using mechanical aids for self-introspection, this is in effect a restriction on how rational you're allowed to be. If so, all bets are off - this wasn't the deal.)

\n

(Another NB: this case is distinct from 2 because it requires Omega, and thus your own scanner too, to terminate without simulating everything. A simulator Omega would go into infinite recursion if treated like this.)

\n

4) Same as 3, but the universe only has room for one Omega, e.g. the God Almighty. Then ipso facto it cannot ever be modelled mathematically, and let's talk no more.

\n

I guess this one is settled, folks. Any questions?

" } }, { "_id": "MBs78fg6JMTMatQZQ", "title": "Voting etiquette", "pageUrl": "https://www.lesswrong.com/posts/MBs78fg6JMTMatQZQ/voting-etiquette", "postedAt": "2009-04-05T14:28:31.031Z", "baseScore": 13, "voteCount": 17, "commentCount": 37, "url": null, "contents": { "documentId": "MBs78fg6JMTMatQZQ", "html": "

Not all that surprisingly, there's quite a lot of discussion on LW about questions like

\n\n

This generally happens in dribs and drabs, typically in response to more specific questions of the form

\n\n

and therefore tends to clutter up discussions that are meant to be about something else. So maybe it's worth seeing if we can arrive at some sort of consensus about the general issues, at which point maybe we can write that up and refer newcomers to it.

\n

(The outcome may be that we find that there's no consensus to be had. That would be useful information too.)

\n

I'll kick things off with a few unfocused thoughts.

\n

What voting is for: establishing the nearest thing we have to the consensus view of the LW community, so as to (1) help readers guess what might be most worth reading and (2) help writers adjust their writing (if they wish) to please the audience more. Note that these purposes are somewhat separate from ...

\n

What karma is for: motivating people to participate, motivating people to participate well, giving readers an indication of which writers are most worth reading.

\n

It seems to me that voting is working reasonably well -- I find a reasonable correlation between comment ratings and comment quality. I'm not convinced that karma is working so well; what's rewarded by the system is prolific posting at least as much as high-quality posting. Doing away with the auto-self-upvote (and making it impossible to upvote one's own comments) seems likely to be an improvement. Or maybe making each comment count for (say) 1/4 as much as an upvote.

\n

Explanations for votes: Lots of comments get voted up; quite a lot get voted down. The practice of explaining votes (even just downvotes) would make for cluttered threads. Also: upvotes and downvotes are anonymous, which is largely a good thing. So, here's one possibility. (It might just be unnecessary complication). When you vote something up or down, you get the chance (or the obligation?) to write a brief explanation of why; it doesn't go into the thread as a comment, but gets associated with the comment you voted on (without your name attached). Then hovering over a comment's score (or something) could pop up a list of votes each way and their explanations, if any. Still anonymous; out of the way when not specifically asked for; but gives some hope of finding why something was downvoted, and also a way of distinguishing between +1 -0 and +14 -13.

" } }, { "_id": "6BHcfSqNRjaYRoc2S", "title": "Off-Topic Discussion Thread: April 2009", "pageUrl": "https://www.lesswrong.com/posts/6BHcfSqNRjaYRoc2S/off-topic-discussion-thread-april-2009", "postedAt": "2009-04-05T03:23:33.076Z", "baseScore": 12, "voteCount": 15, "commentCount": 68, "url": null, "contents": { "documentId": "6BHcfSqNRjaYRoc2S", "html": "

Dale McGowan writes:

\n
\n

And it needs to go well beyond one greeter. EVERY MEMBER of EVERY GROUP should make it a point to chat up new folks—and each other, for that matter. And not just about the latest debunky book. Ask where he’s from, what she does for a living, whether he follows the Mets or the Yankees. You know, mammal talk.

\n
\n

In this spirit, I propose the creation of a fully off-topic discussion thread.

\n

Here is our monthly place to discuss topics entirely unrelated to Less Wrong that (of course) have not appeared in recent posts.

\n

ETA: There are two behaviors I would love to see associated with this thread. First of all, discussions often drift off-topic in the middle of a thread. In these cases \"let's take this to the off-topic thread\" would be an excellent response.  Secondly, given who's doing the discussing, I could easily see, say, a discussion about recent developments in some webcomic blossoming into a LW-worthy insight, in which case someone could spawn a new thread.

\n

" } }, { "_id": "A4MK9RQqSAJZjanQD", "title": "Why Support the Underdog?", "pageUrl": "https://www.lesswrong.com/posts/A4MK9RQqSAJZjanQD/why-support-the-underdog", "postedAt": "2009-04-05T00:01:29.756Z", "baseScore": 45, "voteCount": 47, "commentCount": 102, "url": null, "contents": { "documentId": "A4MK9RQqSAJZjanQD", "html": "

One of the strangest human biases is the almost universal tendency to support the underdog.

I say \"human\" because even though Americans like to identify themselves as particular friends of the underdog, you can find a little of it everywhere. Anyone who's watched anime knows the Japanese have it. Anyone who's read the Bible knows the Israelites had it (no one was rooting for Goliath!) From mythology to literature to politics to sports, it keeps coming up.

I say \"universal\" because it doesn't just affect silly things like sports teams. Some psychologists did a study where they showed participants two maps of Israel: one showing it as a large country surrounding the small Palestinian enclaves, and the other showing it as a tiny island in the middle of the hostile Arab world. In the \"Palestinians as underdogs\" condition, 55% said they supported Palestine. In the \"Israelis as underdogs\" condition, 75% said they supported Israel. Yes, you can change opinion thirty points by altering perceived underdog status. By comparison, my informal experiments trying to teach people relevant facts about the region's history changed opinion approximately zero percent.

\n

\n

(Oh, and the Israelis and Palestinians know this. That's why the propaganda handbooks they give to their respective supporters - of course they give their supporters propaganda handbooks! - specifically suggest the supporters portray their chosen cause as an underdog. It's also why every time BBC or someone shows a clip about the region, they get complaints from people who thought it didn't make their chosen side seem weak enough!)

\n

And there aren't many mitigating factors. Even when the underdog is obviously completely doomed, we still identify with them: witness Leonidas at Thermopylae. Even when the underdog is evil and the powerful faction is good, we can still feel a little sympathy for them; I remember some of my friends and I talking about bin Laden, and admitting that although he was clearly an evil terrorist scumbag, there was still something sort of awesome about a guy who could take on the entire western world from a cave somewhere.

I say \"strangest\" because I can't make heads or tails of why evolutionary psychology would allow it. Let's say Zug and Urk are battling it out for supremacy of your hunter-gatherer tribe. Urk comes to you and says \"Hey, my faction is really weak. We don't have a chance against Zug, who is much stronger than us. I think we will probably be defeated and humiliated, and our property divided up among Zug's supporters.\"

The purely rational response seems to be \"Wow, thanks for warning me, I'll go join Zug's side right now. Riches and high status as part of the winning faction, here I come!\"

Now, many of us probably would join Zug's side. But introspection would tell us we were opposing rational calculation on Zug's side to a native, preconscious support for Urk. Why? The native preconscious part of our brain is usually the one that's really good at ending up on top in tribal power struggles. This sort of thing goes against everything it usually stands for.

I can think of a few explanations, none of them satisfying. First, it could be a mechanism to prevent any one person from getting too powerful. Problem is, this sounds kind of like group selection. Maybe the group does best if there's no one dictator, but from an individual point of view, the best thing to do in a group with a powerful dictator is get on that dictator's good side. Any single individual who initiates the strategy of supporting the underdog gets crushed by all the other people who are still on the dictator's team.

Second, it could be a mechanism to go where the rewards are highest. If a hundred people support Zug, and only ten people support Urk, then you have a chance to become one of Urk's top lieutenants, with all the high status and reproductive opportunities that implies if Urk wins. But I don't like this explanation either. When there's a big disparity in faction sizes, you have no chance of winning, and when there's a small disparity in faction sizes, you don't gain much by siding with the smaller faction. And as size differential between groups increases, the smaller faction's chance of success should drop much more quickly than the opportunities for status with the smaller faction should rise.

So I admit it. I'm stumped. What does Less Wrong think?

" } }, { "_id": "DMmzrGnxaPs4mtFsH", "title": "The First London Rationalist Meetup", "pageUrl": "https://www.lesswrong.com/posts/DMmzrGnxaPs4mtFsH/the-first-london-rationalist-meetup", "postedAt": "2009-04-04T17:49:19.283Z", "baseScore": 15, "voteCount": 13, "commentCount": 14, "url": null, "contents": { "documentId": "DMmzrGnxaPs4mtFsH", "html": "

Here's a brief summary of the first meetup. It took place in a cafe on top of the Waterstones bookstore on Saturday 2009-04-04, starting at 14:00 and lasting until about 17:15. Six people showed up: Tomasz (me), Michael, Will, Julian, Shane, and Marc.

\n

We started with a game of estimation werewolf - there was 1 decider and 3 participants, one of them dishonest. 2 people showed up later, so they joined the game as openly honest. I don't think the decider position was really necessary; we could simply have randomly assigned people to the honest and dishonest sets. Nor was the dishonest guy really needed - we were confused enough without any active help. The subject was \"maize production of Mexico\". We did our estimate twice. First based on what Mexicans are likely to eat. Our estimates were:

\n\n

Most of the time we discussed it first, and then took the median of our guesses as the estimate. The discussion was really the fun bit; I'll give a few examples later. There was a lot of joking about the anchoring effect, but I don't think there was that much of it.

\n

Then we hid the first estimate, and tried the same question again, in a different way, estimating:

\n\n

That was almost two orders of magnitude off. I was so sure that the second estimate failed any sanity check that I took a 10000:1 bet against it (conditional on our arithmetic being correct), staking 1p against a 100 pound donation to the Singularity Institute. At this point we gave our point estimates, and checked them against reality. If my notes are correct, they were:

\n\n

So the Singularity Institute isn't getting their money; our first estimate was very accurate considering our lack of clue about the subject matter; the second was wildly off; and everybody except Marc gave more credence to the first. They might have been convinced by me taking that absurd bet, via Aumann's Theorem.
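
For anyone who wants to replay the exercise, here is a small sketch of the procedure we used - pool the guesses, take the median as the group estimate, and measure the error in orders of magnitude. The guesses and the \"true\" value below are made-up placeholders, not our actual figures.

```python
from statistics import median
from math import log10

def group_estimate_error(guesses, true_value):
    """Median-of-guesses estimate and its error in orders of magnitude."""
    estimate = median(guesses)
    return estimate, abs(log10(estimate / true_value))

# Placeholder numbers purely for illustration (tonnes per year).
print(group_estimate_error([5e6, 2e7, 4e7], true_value=2.2e7))
```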

\n

Errors on individual estimates were:

\n\n

After that, we just chatted about the Cabal on Wikipedia (which doesn't exist), tvtropes, early Internet sentimentalism, option pricing, the financial crisis, quantum computers, prospects of AGI, and other random subjects. We also invented a new way of paying the bill, \"by the bailout\" - everybody puts as much money on the plate, or takes as much from it, as they want, as long as the final sum comes out right.

\n

We're most likely going to do the next meeting in about a month, in the same place. Here's the picture. Please correct me if I misremembered anything important.

\n

\"\"

" } }, { "_id": "oZNXmHcdhb4m7vwsv", "title": "Incremental Progress and the Valley", "pageUrl": "https://www.lesswrong.com/posts/oZNXmHcdhb4m7vwsv/incremental-progress-and-the-valley", "postedAt": "2009-04-04T16:42:38.405Z", "baseScore": 99, "voteCount": 87, "commentCount": 114, "url": null, "contents": { "documentId": "oZNXmHcdhb4m7vwsv", "html": "

Yesterday I said:  \"Rationality is systematized winning\"

\"But,\" you protest, \"the reasonable person doesn't always win!\"

What do you mean by this?  Do you mean that every week or two, someone who bought a lottery ticket with negative expected value, wins the lottery and becomes much richer than you?  That is not a systematic loss; it is selective reporting by the media.  From a statistical standpoint, lottery winners don't exist—you would never encounter one in your lifetime, if it weren't for the selective reporting.

Even perfectly rational agents can lose.  They just can't know in advance that they'll lose.  They can't expect to underperform any other performable strategy, or they would simply perform it.

\"No,\" you say, \"I'm talking about how startup founders strike it rich by believing in themselves and their ideas more strongly than any reasonable person would.  I'm talking about how religious people are happier—\"

Ah.  Well, here's the thing:  An incremental step in the direction of rationality, if the result is still irrational in other ways, does not have to yield incrementally more winning.

The optimality theorems that we have for probability theory and decision theory, are for perfect probability theory and decision theory.  There is no companion theorem which says that, starting from some flawed initial form, every incremental modification of the algorithm that takes the structure closer to the ideal, must yield an incremental improvement in performance.  This has not yet been proven, because it is not, in fact, true.
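
Here is a toy illustration of that point - my own example, not Eliezer's: two errors that happen to cancel. Removing only one of them makes the algorithm structurally \"closer to the ideal\" while making its output worse.

```python
# Two offsetting flaws in a toy estimator: it double-counts, then halves.
true_value = 100

def estimate(overcount_factor, discount_factor):
    return true_value * overcount_factor * discount_factor

both_flaws = estimate(2.0, 0.5)       # 100.0 -- the errors cancel exactly
one_flaw_fixed = estimate(1.0, 0.5)   # 50.0  -- closer to the ideal form, worse answer
print(both_flaws, one_flaw_fixed)
```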

\"So,\" you say, \"what point is there then in striving to be more rational?  We won't reach the perfect ideal.  So we have no guarantee that our steps forward are helping.\"

You have no guarantee that a step backward will help you win, either.  Guarantees don't exist in the world of flesh; but contrary to popular misconceptions, judgment under uncertainty is what rationality is all about.

\"But we have several cases where, based on either vaguely plausible-sounding reasoning, or survey data, it looks like an incremental step forward in rationality is going to make us worse off.  If it's really all about winning—if you have something to protect more important than any ritual of cognition—then why take that step?\"

Ah, and now we come to the meat of it.

I can't necessarily answer for everyone, but...

My first reason is that, on a professional basis, I deal with deeply confused problems that make huge demands on precision of thought.  One small mistake can lead you astray for years, and there are worse penalties waiting in the wings.  An unimproved level of performance isn't enough; my choice is to try to do better, or give up and go home.

\"But that's just you.  Not all of us lead that kind of life.  What if you're just trying some ordinary human task like an Internet startup?\"

My second reason is that I am trying to push some aspects of my art further than I have seen done.  I don't know where these improvements lead.  The loss of failing to take a step forward is not that one step, it is all the other steps forward you could have taken, beyond that point.  Robin Hanson has a saying:  The problem with slipping on the stairs is not falling the height of the first step, it is that falling one step leads to falling another step.  In the same way, refusing to climb one step up forfeits not the height of that step but the height of the staircase.

\"But again—that's just you.  Not all of us are trying to push the art into uncharted territory.\"

My third reason is that once I realize I have been deceived, I can't just shut my eyes and pretend I haven't seen it.  I have already taken that step forward; what use to deny it to myself?  I couldn't believe in God if I tried, any more than I could believe the sky above me was green while looking straight at it.  If you know everything you need to know in order to know that you are better off deceiving yourself, it's much too late to deceive yourself.

\"But that realization is unusual; other people have an easier time of doublethink because they don't realize it's impossibleYou go around trying to actively sponsor the collapse of doublethink.  You, from a higher vantage point, may know enough to expect that this will make them unhappier.  So is this out of a sadistic desire to hurt your readers, or what?\"

Then I finally reply that my experience so far—even in this realm of merely human possibility—does seem to indicate that, once you sort yourself out a bit and you aren't doing quite so many other things wrong, striving for more rationality actually will make you better off.  The long road leads out of the valley and higher than before, even in the human lands.

The more I know about some particular facet of the Art, the more I can see this is so.  As I've previously remarked, my essays may be unreflective of what a true martial art of rationality would be like, because I have only focused on answering confusing questions—not fighting akrasia, coordinating groups, or being happy.  In the field of answering confusing questions—the area where I have most intensely practiced the Art—it now seems massively obvious that anyone who thought they were better off \"staying optimistic about solving the problem\" would get stomped into the ground.  By a casual student.

When it comes to keeping motivated, or being happy, I can't guarantee that someone who loses their illusions will be better off—because my knowledge of these facets of rationality is still crude.  If these parts of the Art have been developed systematically, I do not know of it.  But even here I have gone to some considerable pains to dispel half-rational half-mistaken ideas that could get in a beginner's way, like the idea that rationality opposes feeling, or the idea that rationality opposes value, or the idea that sophisticated thinkers should be angsty and cynical.

And if, as I hope, someone goes on to develop the art of fighting akrasia or achieving mental well-being as thoroughly as I have developed the art of answering impossible questions, I do fully expect that those who wrap themselves in their illusions will not begin to compete.  Meanwhile—others may do better than I, if happiness is their dearest desire, for I myself have invested little effort here.

I find it hard to believe that the optimally motivated individual, the strongest entrepreneur a human being can become, is still wrapped up in a blanket of comforting overconfidence.  I think they've probably thrown that blanket out the window and organized their mind a little differently.  I find it hard to believe that the happiest we can possibly live, even in the realms of human possibility, involves a tiny awareness lurking in the corner of your mind that it's all a lie.  I'd rather stake my hopes on neurofeedback or Zen meditation, though I've tried neither.

But it cannot be denied that this is a very real issue in very real life.  Consider this pair of comments from Less Wrong:

I'll be honest —my life has taken a sharp downturn since I deconverted. My theist girlfriend, with whom I was very much in love, couldn't deal with this change in me, and after six months of painful vacillation, she left me for a co-worker. That was another six months ago, and I have been heartbroken, miserable, unfocused, and extremely ineffective since.

Perhaps this is an example of the valley of bad rationality of which PhilGoetz spoke, but I still hold my current situation higher in my preference ranking than happiness with false beliefs.

And:

My empathies: that happened to me about 6 years ago (though thankfully without as much visible vacillation).

My sister, who had some Cognitive Behaviour Therapy training, reminded me that relationships are forming and breaking all the time, and given I wasn't unattractive and hadn't retreated into monastic seclusion, it wasn't rational to think I'd be alone for the rest of my life (she turned out to be right). That was helpful at the times when my feelings hadn't completely got the better of me.

So—in practice, in real life, in sober fact—those first steps can, in fact, be painful.  And then things can, in fact, get better.  And there is, in fact, no guarantee that you'll end up higher than before.  Even if in principle the path must go further, there is no guarantee that any given person will get that far.

If you don't prefer truth to happiness with false beliefs...

Well... and if you are not doing anything especially precarious or confusing... and if you are not buying lottery tickets... and if you're already signed up for cryonics, a sudden ultra-high-stakes confusing acid test of rationality that illustrates the Black Swan quality of trying to bet on ignorance in ignorance...

Then it's not guaranteed that taking all the incremental steps toward rationality that you can find, will leave you better off.  But the vaguely plausible-sounding arguments against losing your illusions, generally do consider just one single step, without postulating any further steps, without suggesting any attempt to regain everything that was lost and go it one better.  Even the surveys are comparing the average religious person to the average atheist, not the most advanced theologians to the most advanced rationalists.

But if you don't care about the truth—and you have nothing to protect—and you're not attracted to the thought of pushing your art as far as it can go—and your current life seems to be going fine—and you have a sense that your mental well-being depends on illusions you'd rather not think about—

Then you're probably not reading this.  But if you are, then, I guess... well... (a) sign up for cryonics, and then (b) stop reading Less Wrong before your illusions collapse!  RUN AWAY!

" } }, { "_id": "agFgSc8D7yn852QDN", "title": "On dollars, utility, and crack cocaine", "pageUrl": "https://www.lesswrong.com/posts/agFgSc8D7yn852QDN/on-dollars-utility-and-crack-cocaine", "postedAt": "2009-04-04T00:00:24.951Z", "baseScore": 16, "voteCount": 36, "commentCount": 100, "url": null, "contents": { "documentId": "agFgSc8D7yn852QDN", "html": "

The lottery came up in a recent comment, with the claim that the expected return is negative - and the implicit conclusion that it's irrational to play the lottery.  So I will explain why this is not the case.

\n

It's convenient to reason using units of equivalent value.  Dollars, for instance.  A utility function u(U) maps some bag of goods U (which might be dollars) into a value or ranking.  In general, u(kn) / u(n) < k.  This is because a utility function is (typically) defined in terms of marginal utility.  The marginal utility to you of your first dollar is much greater than the marginal utility to you of your 1,000,000th dollar.  It increases the possible actions available to you much more than your 1,000,000th dollar does.

\n

Utility functions are sigmoidal.  A serviceable utility function over one dimension might be u(U) = k * ([1 / (1 + e^(-U))] - .5).  It's steep around U=0, and shallow for U >> 0 and U << 0.

\n

Sounds like I'm making a dry, academic mathematical point, doesn't it?  But it's not academic.  It's crucial.  Because neglecting this point leads us to make elementary errors such as asserting that it isn't rational to play the lottery or become addicted to crack cocaine.

\n

For someone with $ << 0, the marginal utility of $5 to them is minimal.  They're probably never going to get out of debt; someone has a lien on their income and it's going to be taken from them anyway; and if they're $5 richer it might mean they'll lose $4 in government benefits.  It can be perfectly reasonable, in terms of expected utility, for them to play the lottery.
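
A minimal numerical sketch of that claim, using the sigmoid utility function above (the wealth scale, jackpot odds, and dollar figures are assumptions I've picked for illustration): for someone deep in debt the ticket costs almost no utility, while the jackpot would carry them across the whole steep region, so the expected utility of playing comes out positive even though the expected dollars are negative; for someone sitting on the steep part of the curve it comes out negative.

```python
import math

def u(wealth_k, k=1.0):
    """Sigmoid utility over wealth, measured here in thousands of dollars."""
    return k * (1.0 / (1.0 + math.exp(-wealth_k)) - 0.5)

def utility_gain_from_ticket(wealth_k, ticket_cost_k=0.001,
                             jackpot_k=1000.0, p_win=1e-7):
    """EU(buy a $1 ticket on a $1M jackpot) minus EU(don't buy)."""
    eu_play = (p_win * u(wealth_k - ticket_cost_k + jackpot_k)
               + (1 - p_win) * u(wealth_k - ticket_cost_k))
    return eu_play - u(wealth_k)

print(utility_gain_from_ticket(wealth_k=-50))  # deep in debt: positive
print(utility_gain_from_ticket(wealth_k=0))    # on the steep part: negative
```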

\n

Not in terms of expected dollars.  Dollars are the input to the utility function.

\n

Rationally, you might expect that u(U) = 0 for all U < 0.  Because you can always kill yourself.  Once your life is so bad that you'd like to kill yourself, it could make perfect sense to play the lottery, if you thought that winning it would help.  Or to take crack cocaine, if it gives you a few short intervals over the next year that are worth living.

\n

Why is this important?

\n

Because we look at poor folks playing the lottery, and taking crack cocaine, and we laugh at them and say, Those fools don't deserve our help if they're going to make such stupid decisions.

\n

When in reality, some of them may be making <EDITED> much more rational decisions than we think. </EDITED>

\n

If that doesn't give you a chill, you don't understand.

\n


\n

(I changed the penultimate line in response to numerous comments indicating that the commenters reserve the word \"rational\" for the unobtainable goal of perfect utility maximization.  I note that such a definition defines itself into being irrational, since it is almost certainly not the best possible definition.)

" } }, { "_id": "QSnybzF5WLAziXNw4", "title": "First London Rationalist Meeting upcoming", "pageUrl": "https://www.lesswrong.com/posts/QSnybzF5WLAziXNw4/first-london-rationalist-meeting-upcoming", "postedAt": "2009-04-03T22:28:05.105Z", "baseScore": 5, "voteCount": 8, "commentCount": 9, "url": null, "contents": { "documentId": "QSnybzF5WLAziXNw4", "html": "

It's extremely short notice, but we're going to have the first meeting tomorrow, that is - Saturday (2009-04-04) 14:00, in a cafe on top of the Waterstones bookstore near the Piccadilly Circus Tube station.

\n

If you want to know more, email me (Tomasz.Wegrzanowski@gmail.com) for details. Or just come straight away.

\n

Hopefully we can get it going, and the second meeting will be better organized.

" } }, { "_id": "dvky3MWgPYrMamoXz", "title": "Another Call to End Aid to Africa", "pageUrl": "https://www.lesswrong.com/posts/dvky3MWgPYrMamoXz/another-call-to-end-aid-to-africa", "postedAt": "2009-04-03T18:55:52.000Z", "baseScore": 11, "voteCount": 10, "commentCount": 38, "url": null, "contents": { "documentId": "dvky3MWgPYrMamoXz", "html": "

Dambisa Moyo, an African economist, has joined her voice to the other African economists [e.g. James Shikwati] calling for a full halt to Western aid.  Her book is called Dead Aid and it asserts a direct cause-and-effect relationship between $1 trillion of aid and the rise in African poverty rates from 11% to 66%.

Though it's an easy enough signal to fake, I find it noteworthy that Moyo - in this interview at least - repeatedly pleads for some attention to "logic and evidence":

"I think the whole aid model is couched in pity.  I don’t want to cast\naspersions as to where that pity comes from.  But I do think it’s based\non pity because based on logic and evidence, it is very clear that aid\ndoes not work.  And yet if you speak to some of the biggest supporters\nof aid, whether they are academics or policy makers or celebrities,\ntheir whole rationale for giving more aid to Africa is not couched in\nlogic or evidence; it’s based largely on emotion and pity."

I was just trying to think of when was the last time I heard a Western politician - or even a mainstream Western economist in any public venue - draw an outright battle line between logic and pity.  Oh, there are plenty of demagogues who claim the evidence is on their side, but they won't be so outright condemning of emotion - it's not a winning tactic.  Even I avoid drawing a battle line so stark.

Moyo says she's gotten a better reception in Africa than in the West.  Maybe you need to see your whole continent wrecked by emotion and pity before "logic and evidence" start to sound appealing.

" } }, { "_id": "7GWpbtRaJQ3HcamKo", "title": "Winning is Hard", "pageUrl": "https://www.lesswrong.com/posts/7GWpbtRaJQ3HcamKo/winning-is-hard", "postedAt": "2009-04-03T17:02:31.856Z", "baseScore": -10, "voteCount": 21, "commentCount": 11, "url": null, "contents": { "documentId": "7GWpbtRaJQ3HcamKo", "html": "

Let us say you are playing Steve Omohundro's meal-choosing game [1], though the negatives are a bit harsher and more realistic than just a dodgy soufflé. You are given two choices on the menu, oysters and fugu. Your goal: avoid death and sickness, and eat tasty food. You don't know much about either dish, although you do know that shellfish has made you ill in the past, so you give it a lower expected utility (pretend you don't know what fugu is).

Eating the poorly prepared fugu kills you dead every time - do not pass go, do not update the utility values of your options (although the utility of fugu would be 0, if you were allowed to update). Eating oysters gives you a utility of 1.

So how do we win in this situation? In a way it is easy: don't eat the fugu! But by what principled method should you choose not to eat the fugu? Microeconomics is not enough: with a negative expected utility assigned to shellfish, you would pick the fugu! And you do not get to update your utilities after eating the fugu, so your expected utilities can't converge with experience. So we are in a bit of a pickle.
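
To make the \"can't converge with experience\" point concrete, here is a toy sketch (the learner and the numbers are my own illustration; only the setup - fugu always fatal, oysters worth 1, a prior that dislikes shellfish - comes from the post): a learner that always picks the option with the highest estimated utility never survives its first mistake, so no amount of updating rescues a bad prior.

```python
def run_diner(prior_utilities, meals=100):
    """Greedy expected-utility eater; death is absorbing and allows no update."""
    estimates = dict(prior_utilities)
    for _ in range(meals):
        choice = max(estimates, key=estimates.get)
        if choice == "fugu":
            return "dead"              # no update ever happens
        estimates["oysters"] = 1.0     # survived: learn oysters are fine
    return "alive"

print(run_diner({"oysters": -0.5, "fugu": 0.0}))  # bad prior: picks fugu first -> dead
print(run_diner({"oysters": 0.5, "fugu": 0.0}))   # lucky prior -> alive
```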

Can humans solve these kinds of problems, and if so, how do we do it? The answer is: poorly, and in a patchwork fashion - we get information about fugu-type problems from our genome and our culture. For example, we avoid bitter things, are scared of snakes, and are careful when we are up high, because our ancestors had to have had these bits of information (and more) to avoid death. They got them by chance, which isn't exactly principled. But all of these are still needed for winning. We can also get the information culturally, but that can leave us open to taboos against harmless things such as eating pork, which we might be unwise to test for ourselves. That is hardly principled either.

So in this kind of scenario it is not sufficient to be economically rational in order to win; you also have to have a decent source of knowledge. And getting a decent source of knowledge is hard.

[1] See the appendix of The Nature of Self-Improving Artificial Intelligence, starting at page 37.

" } }, { "_id": "4ARtkT3EYox3THYjF", "title": "Rationality is Systematized Winning", "pageUrl": "https://www.lesswrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning", "postedAt": "2009-04-03T14:41:25.255Z", "baseScore": 109, "voteCount": 102, "commentCount": 267, "url": null, "contents": { "documentId": "4ARtkT3EYox3THYjF", "html": "

Followup to: Newcomb's Problem and Regret of Rationality

\n

\"Rationalists should win,\" I said, and I may have to stop saying it, for it seems to convey something other than what I meant by it.

\n

Where did the phrase come from originally?  From considering such cases as Newcomb's Problem:  The superbeing Omega sets forth before you two boxes, a transparent box A containing $1000 (or the equivalent in material wealth), and an opaque box B that contains either $1,000,000 or nothing.  Omega tells you that It has already put $1M in box B if and only if It predicts that you will take only box B, leaving box A behind.  Omega has played this game many times before, and has been right 99 times out of 100.  Do you take both boxes, or only box B?

\n

A common position - in fact, the mainstream/dominant position in modern philosophy and decision theory - is that the only reasonable course is to take both boxes; Omega has already made Its decision and gone, and so your action cannot affect the contents of the box in any way (they argue).  Now, it so happens that certain types of unreasonable individuals are rewarded by Omega - who moves even before they make their decisions - but this in no way changes the conclusion that the only reasonable course is to take both boxes, since taking both boxes makes you $1000 richer regardless of the unchanging and unchangeable contents of box B.
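
One way to see why \"rationalists should win\" pulls toward one-boxing is to run the straightforward expected-value arithmetic, treating Omega's 99% hit rate as holding regardless of which choice you make (this framing is mine; the dollar figures and accuracy are from the problem as stated above):

```python
# Expected payoffs if Omega's prediction matches your actual choice 99% of the time.
ACCURACY = 0.99
SMALL, BIG = 1_000, 1_000_000

ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(ev_one_box)  # about 990000
print(ev_two_box)  # about 11000
```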

\n

And this is the sort of thinking that I intended to reject by saying, \"Rationalists should win!\"

\n

Said Miyamoto Musashi:  \"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means.  Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement.  It is essential to attain this.  If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him.\"

\n

Said I:  \"If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.\"

\n

This is the distinction I had hoped to convey by saying, \"Rationalists should win!\"

\n

\n

There is a meme which says that a certain ritual of cognition is the paragon of reasonableness and so defines what the reasonable people do.  But alas, the reasonable people often get their butts handed to them by the unreasonable ones, because the universe isn't always reasonable.  Reason is just a way of doing things, not necessarily the most formidable; it is how professors talk to each other in debate halls, which sometimes works, and sometimes doesn't.  If a horde of barbarians attacks the debate hall, the truly prudent and flexible agent will abandon reasonableness.

\n

No.  If the \"irrational\" agent is outcompeting you on a systematic and predictable basis, then it is time to reconsider what you think is \"rational\".

\n

For I do fear that a \"rationalist\" will clutch to themselves the ritual of cognition they have been taught, as loss after loss piles up, consoling themselves:  \"I have behaved virtuously, I have been so reasonable, it's just this awful unfair universe that doesn't give me what I deserve.  The others are cheating by not doing it the rational way, that's how they got ahead of me.\"

\n

It is this that I intended to guard against by saying:  \"Rationalists should win!\"  Not whine, win.  If you keep on losing, perhaps you are doing something wrong.  Do not console yourself about how you were so wonderfully rational in the course of losing.  That is not how things are supposed to go.  It is not the Art that fails, but you who fails to grasp the Art.

\n

Likewise in the realm of epistemic rationality, if you find yourself thinking that the reasonable belief is X (because a majority of modern humans seem to believe X, or something that sounds similarly appealing) and yet the world itself is obviously Y.

\n

But people do seem to be taking this in some other sense than I meant it - as though any person who declared themselves a rationalist would in that moment be invested with an invincible spirit that enabled them to obtain all things without effort and without overcoming disadvantages, or something, I don't know.

\n

Maybe there is an alternative phrase to be found again in Musashi, who said:  \"The Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size.\"

\n

\"Rationality is the spirit of winning\"?  \"Rationality is the Way of winning\"?  \"Rationality is systematized winning\"?  If you have a better suggestion, post it in the comments.

" } }, { "_id": "tQFttLYwXyyjgNGPy", "title": "Open Thread: April 2009", "pageUrl": "https://www.lesswrong.com/posts/tQFttLYwXyyjgNGPy/open-thread-april-2009", "postedAt": "2009-04-03T13:57:49.099Z", "baseScore": 7, "voteCount": 6, "commentCount": 134, "url": null, "contents": { "documentId": "tQFttLYwXyyjgNGPy", "html": "

Here is our monthly place to discuss Less Wrong topics that have not appeared in recent posts.

\n

(Carl's open thread for March was only a week ago or thereabouts, but if we're having these monthly then I think it's better for them to appear near -- ideally at -- the start of each month, to make it that little bit easier to find something when you can remember roughly when it was posted. The fact that that open thread has had 69 comments in that time seems like good evidence that \"almost anyone can post articles\" is sufficient reason for not bothering with open threads.)

\n

[EDIT, 2009-04-04: oops, I meant \"is NOT sufficient reason\" in that last sentence. D'oh.]

" } }, { "_id": "CcmN333Nm6Hmze6Cs", "title": "The Brooklyn Society For Ethical Culture", "pageUrl": "https://www.lesswrong.com/posts/CcmN333Nm6Hmze6Cs/the-brooklyn-society-for-ethical-culture", "postedAt": "2009-04-03T08:06:51.628Z", "baseScore": 19, "voteCount": 20, "commentCount": 28, "url": null, "contents": { "documentId": "CcmN333Nm6Hmze6Cs", "html": "

Dale McGowan writes:

\n
\n

In the past seven years or so, I’ve seen quite a few humanistic organizations from the inside — freethought groups, Ethical Societies, Congregations for Humanistic Judaism, UUs, etc. Met a lot of wonderful people working hard to make their groups succeed. All of the groups have different strengths, and all are struggling with One Big Problem: creating a genuine sense of community.

\n

I’ve written before about community and the difficulty freethought groups generally have creating it. Some get closer than others, but it always seems to fall a bit short of the sense of community that churches so often create. And I don’t think it has a thing to do with God.

\n

The question I hear more and more from freethought groups is, “How can we bring people in the door and keep them coming back?” The answer is to make our groups more humanistic — something churches, ironically, often do better than we do.

\n

Now I’ve met an organization founded on freethought principles that seems to get humanistic community precisely right. It’s the Brooklyn Society for Ethical Culture [...], host of my seminar and talk last weekend, and the single most effective humanistic community I have ever seen.

\n

So what do they have going for them? My top ten list:

\n
\n

Read on at Meming of Life

" } }, { "_id": "gRkFrDmuE82difQRH", "title": "Where are we?", "pageUrl": "https://www.lesswrong.com/posts/gRkFrDmuE82difQRH/where-are-we", "postedAt": "2009-04-02T21:51:24.646Z", "baseScore": 24, "voteCount": 25, "commentCount": 313, "url": null, "contents": { "documentId": "gRkFrDmuE82difQRH", "html": "

I'm enjoying lesswrong.com a lot so far, and it sounds like the last LW/OB meetup was a lot of fun. MBlume asks:

\n
So far there've only been LW/OB meetups in the Bay area -- is there any way we could plot the geographic distribution of LW members and determine whether there are other spots where we could get a good meetup going?
\n

I don't think that there are so many of us that we need an automated system for this; the threading system should be enough. I'll post a few top-level comments for various parts of the world, and encourage you all to follow up and tell us where you are. Ideally, find a comment that already mentions where you live and add \"me too\".

\n

I'll try to keep this post updated with useful things. I can't wait to play Paranoid Debating!

\n

Edit: Please don't post where you live in a new top-level comment! Try to find a comment referring to the rough geographic region you live in and post under that; it'll make this post easier to navigate.  I've divided the world into three (North America, Europe, everywhere else); posting under those comments will help.  Thanks!

" } }, { "_id": "joGBPZPM6hcT9iuTS", "title": "\"Robot scientists can think for themselves\"", "pageUrl": "https://www.lesswrong.com/posts/joGBPZPM6hcT9iuTS/robot-scientists-can-think-for-themselves", "postedAt": "2009-04-02T21:16:22.682Z", "baseScore": -1, "voteCount": 4, "commentCount": 11, "url": null, "contents": { "documentId": "joGBPZPM6hcT9iuTS", "html": "

I recently saw this Reuters article on Yahoo News. In typical science reporting fashion, the headline seems to be pure hyperbole - does anyone here know enough to clarify what the groups referenced have actually achieved?

\n

These links represent what I could find:

\n

Homepage of the \"Robot Scientist\" project:http://www.aber.ac.uk/compsci/Research/bio/robotsci/ 

\n

Homepage of Hod Lipson: http://www.mae.cornell.edu/lipson/

\n

Hod Lipson's 2007 paper \"Automated reverse engineering of nonlinear dynamical systems\" (pdf)

" } }, { "_id": "ddAEkE7F4cywqsHRq", "title": "Aumann voting; or, How to vote when you're ignorant", "pageUrl": "https://www.lesswrong.com/posts/ddAEkE7F4cywqsHRq/aumann-voting-or-how-to-vote-when-you-re-ignorant", "postedAt": "2009-04-02T18:54:15.828Z", "baseScore": 12, "voteCount": 27, "commentCount": 37, "url": null, "contents": { "documentId": "ddAEkE7F4cywqsHRq", "html": "

As Robin Hanson is fond of pointing out, people would often get better answers by taking other people's answers more into account.  See Aumann's Agreement Theorem.

\n

The application is obvious if you're computing an answer for your personal use.  But how do you apply it when voting?

\n

Political debates are tug-of-wars.  Say a bill is being voted on to introduce a 7-day waiting period for handguns.  You might think that you should vote on the merits of a 7-day waiting period.  This isn't what we usually do.  Instead, we've chosen our side on the larger issue (gun control: for or against) ahead of time; and we vote whichever way is pulling in our direction.

\n

To use the tug-of-war analogy:  There's a knot tied in the middle of the rope, and you have some line in the sand where you believe the knot should end up.  But you don't stop pulling when the knot reaches that point; you keep pulling, because the other team is still pulling.  So, if you're anti-gun-control, you vote against the 7-day waiting period, even if you think it would be a good idea; because passing it would move the knot back towards the other side of your line.

\n

Tug-of-war voting makes intuitive sense if you believe that an irrational extremist is usually more politically effective than a reasonable person is.  (It sounds plausible to me.)  If you've watched a debate long enough to see that the \"knot\" does a bit of a random walk around some equilibrium that's on the other side of your line, it can make sense to vote this way.

\n

How do you apply Aumann's theorem to tug-of-war voting?

\n

I think the answer is that you try to identify which side has more idiots, and vote on the other side.

\n

I was thinking of this because of the current online debate between Arthur Caplan and Craig Venter on DNA privacy.  I don't have a strong opinion which way to vote, largely because it's nowhere stated clearly what it is that you're voting for or against.

\n

So I can't tell what the right answer is myself.  But I can identify idiots.  Applying Aumann's theorem, I take it on faith that the non-idiot population can eventually work out a good solution to the problem.  My job is to cancel out an idiot.

\n

My impression is that there is a large class of irrational people who are generally \"against\" biotechnology because they're against evolution or science.  (This doesn't come out in the comments on economist.com, which are surprisingly good for this sort of online debate, and unfortunately don't supply enough idiots to be statistically significant.)  I have enough experience with this group and their opposite number to conclude that they are not counterbalanced by a sufficient number of uncritically pro-science people.

\n

So I vote against the proposition, even though the vague statement \"People's DNA sequences are their business, and nobody else's\" sounds good to me.  I am picking sides not based on the specific issue at hand, but on what I perceive as being the larger tug-of-war, and pulling for the side with fewer idiots.

\n

Do you think this is a good heuristic?

\n

You might break your answer into separate parts for \"tug-of-war voting\" (which means to choose sides on larger debates rather than on particular issues) and \"cancel out an idiot\" (which can be used without adopting tug-of-war voting).

\n

EDIT: Really, please do say if your comment refers to \"tug-of-war\" voting or \"cancelling out an idiot\".  Perhaps I should have broken them into separate posts.

" } }, { "_id": "ZEj9ATpv3P22LSmnC", "title": "Selecting Rationalist Groups", "pageUrl": "https://www.lesswrong.com/posts/ZEj9ATpv3P22LSmnC/selecting-rationalist-groups", "postedAt": "2009-04-02T16:21:11.355Z", "baseScore": 42, "voteCount": 43, "commentCount": 34, "url": null, "contents": { "documentId": "ZEj9ATpv3P22LSmnC", "html": "

Previously in series: Purchase Fuzzies and Utilons Separately
Followup to: Conjuring an Evolution To Serve You

\n

GreyThumb.blog offered an interesting comparison of poor animal breeding practices and the fall of Enron, which I previously posted on in some detail.  The essential theme was that individual selection on chickens, breeding in each generation from the chicken who laid the most eggs, produced highly competitive chickens—the most dominant chickens that pecked their way to the top of the pecking order at the expense of other chickens.  The chickens subjected to this individual selection for egg-laying prowess needed their beaks clipped, or housing in individual cages, or they would peck each other to death.

\n

Which is to say: individual selection is selecting on the wrong criterion, because what the farmer actually wants is high egg production from groups of chickens.

\n

While group selection is nearly impossible in ordinary biology, it is easy to impose in the laboratory: and breeding the best groups, rather than the best individuals, increased average days of hen survival from 160 to 348, and egg mass per bird from 5.3 to 13.3 kg.

\n

The analogy being to the way that Enron evaluated its employees every year, fired the bottom 10%, and gave the top individual performers huge raises and bonuses.  Jeff Skilling fancied himself as exploiting the wondrous power of evolution, it seems.

\n

If you look over my accumulated essays, you will observe that the art contained therein is almost entirely individual in nature... for around the same reason that it all focuses on confronting impossibly tricky questions:  That's what I was doing when I thought up all this stuff, and for the most part I worked in solitude.  But this is not inherent in the Art, not reflective of what a true martial art of rationality would be like if many people had contributed to its development along many facets.

\n

Case in point:  At the recent LW / OB meetup, we played Paranoid Debating, a game that tests group rationality.  As is only appropriate, this game was not the invention of any single person, but was collectively thought up in a series of suggestions by Nick Bostrom, Black Belt Bayesian, Tom McCabe, and steven0461.

\n

In the game's final form, Robin Gane-McCalla asked us questions like \"How many Rhode Islands would fit into Alaska?\" and a group of (in this case) four rationalists tried to pool their knowledge and figure out the answer... except that before the round started, we each drew facedown from a set of four cards, containing one spade card and one red card.  Whoever drew the red card got the job of trying to mislead the group.  Whoever drew the spade showed the card and became the spokesperson, who had to select the final answer.  It was interesting, trying to play this game, and realizing how little I'd practiced basic skills like trying to measure the appropriateness of another's confidence or figure out who was lying.

\n

A bit further along, at the suggestion of Steve Rayhawk, and slightly simplified by myself, we named 60% confidence intervals for the quantity with lower and upper bounds; Steve fit a Cauchy distribution to the interval (\"because it has a fatter tail than a Gaussian\") and we were scored according to the log of our probability density on the true answer, except for the red-card drawer, who got the negative of this number.
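For concreteness, here is a minimal sketch of that scoring rule in Python. The description above specifies only a 60% interval, a fitted Cauchy, and a log-density score (negated for the red-card drawer); centring the Cauchy on the interval's midpoint, and the example numbers below, are my own assumptions for illustration.

```python
import math

def fit_cauchy(lo, hi, mass=0.60):
    # Return (location, scale) of a Cauchy whose central `mass` lies in [lo, hi].
    # Assumes the distribution is centred on the interval's midpoint.
    x0 = (lo + hi) / 2.0
    # For a Cauchy, P(|X - x0| < w) = (2 / pi) * atan(w / gamma),
    # so requiring that probability to equal `mass` gives gamma = w / tan(mass * pi / 2).
    gamma = (hi - x0) / math.tan(mass * math.pi / 2.0)
    return x0, gamma

def log_density(x, x0, gamma):
    # Log of the Cauchy pdf at x: the group's score (the red-card drawer gets its negative).
    return -math.log(math.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))

# Hypothetical round: the group's 60% interval is [400, 500] and the true answer is 430.
x0, gamma = fit_cauchy(400, 500)
group_score = log_density(430, x0, gamma)
saboteur_score = -group_score
```

One nice property of the fat Cauchy tails: a badly missed answer costs roughly twice the log of the miss distance rather than its square, so a single wild question does not dominate the evening's score the way it would under a Gaussian fit.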

\n

The Paranoid Debating game worked surprisingly well—at least I had fun, despite somehow managing to draw the red card three out of four times.  I can totally visualize doing this at some corporate training event or even at parties.  The red player is technically acting as an individual and learning to practice deception, but perhaps practicing deception (in this controlled, ethically approved setting) might help you be a little less gullible in turn.  As Zelazny observes, there is a difference in the arts of discovering lies and finding truth.

\n

In a real institution... you would probably want to optimize less for fun, and more for work-relevance: something more like Black Belt Bayesian's original suggestion of The Aumann Game, no red cards.  But where both B3 and Tom McCabe originally thought in terms of scoring individuals, I would suggest forming people into groups and scoring the groups.  An institution's performance is the sum of its groups more directly than it is the sum of its individuals—though of course there are interactions between groups as well.  Find people who, in general, seem to have a statistical tendency to belong to high-performing groups—these are the ones who contribute much to the group, who are persuasive with good arguments.

\n

I wonder if there are any hedge funds that practice \"trio trading\", by analogy with pair programming?

\n

Hal Finney called Aumann's Agreement Theorem \"the most interesting, surprising, and challenging result in the field of human bias: that mutually respectful, honest, and rational debaters cannot disagree on any factual matter once they know each other's opinions\".  It is not just my own essays that are skewed toward individual application; the whole trope of Traditional Rationality seems to me skewed the same way.  It's the individual heretic who is the hero, and Authority the untrustworthy villain whose main job is not to resist the heretic too much, to be properly defeated.  Science is cast as a competition between theories in an arena with rules designed to let the strongest contender win.  Of course, it may be that I am selective in my memory, and that if I went back and read my childhood books again, I would notice more on group tactics that originally slipped my attention... but really, Aumann's Agreement Theorem doesn't get enough attention.

\n

Of course most Bayesian math is not widely known—the Agreement Theorem is no exception here.  But even the intuitively obvious counterpart of the Agreement Theorem, the treatment of others' beliefs as evidence, receives short shrift in Traditional Rationality.  This may have something to do with Science developing in the midst of insanity and in defiance of Authority; that is a historical fact about how Science developed.  But if the high performers of a rationality dojo need to practice the same sort of lonely dissent... well, that must not be a very effective rationality dojo.

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"Incremental Progress and the Valley\"

\n

Previous post: \"Purchase Fuzzies and Utilons Separately\"

" } }, { "_id": "XpxbQLofRHvRq3jkA", "title": "Constrained talk on free speech", "pageUrl": "https://www.lesswrong.com/posts/XpxbQLofRHvRq3jkA/constrained-talk-on-free-speech", "postedAt": "2009-04-02T13:57:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "XpxbQLofRHvRq3jkA", "html": "

I went to a public lecture last night on the question ‘How do we balance freedom of speech and religious sensitivity?’. It featured four distinguished academics ‘exploring legal, philosophical and cultural perspectives’. I was interested to go because I couldn’t think of any reason the ‘balance’ should be a jot away from free speech on this one, and I thought if smart people thought it worth discussing, there might be arguments I haven’t heard.

\n
\nThe most interesting thing I discovered in the evening was that something pretty mysterious to me is going on. The speakers implicitly assumed there was some middle of the road ‘balance’, without addressing why there should be at all. So they talked about how to assign literary merit to The Satanic Verses, how globalization might mean that we could offend more people by accident, whether it is consistent with other rights to give rights to groups, what the law can do about it now, etc. That these are the pertinent issues in answering the question wasn’t questioned. Jeremy Shearmur looked like he might at one point, but his argument was basically ‘I think I’d find Piss Christ pretty offensive if I were a Christian – it’s disgusting to me that anyone would make it anyway – and so ignorant of Christianity’. More interesting discussion of the question could be found in any bar (some of it was interesting, it just wasn’t about the question).

\n
\n
What am I missing here? Is it seriously the consensus (in Australia?) that censorship is in order for items especially offensive to religious people? Is there some argument for this I’m missing? What makes the situation special compared to other free speech issues? The offense? Then why not ban other things offensive to some observers? Ugly houses, swearing, public displays of homosexual affection.. The religion? Is there some reason especially unlikely beliefs are to be protected, or just any beliefs that claim their own sacredness? Are these academics afraid of something I don’t know about? Is it much more controversial than I thought to support free speech in general? Or is the question just a matter of balancing the political correctness of saying ‘yay free speech’ and of ‘yay religious tolerance’?
\n
\n

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "BLtyodE8X222tr2kf", "title": "Romance is magical", "pageUrl": "https://www.lesswrong.com/posts/BLtyodE8X222tr2kf/romance-is-magical", "postedAt": "2009-04-02T09:51:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "BLtyodE8X222tr2kf", "html": "

People seem to generally believe they have high romantic standards, and that they aren’t strongly influenced by things like looks, status and money. Research says our standards aren’t that high, that they drop if the standard available drops for a single evening, and that superficial factors make more of a difference than we think. Our beliefs about what we want are wrong. It’s not an obscure topic though; the evidence should be in front of us. How do we avoid noticing? We’re pretty good at not noticing things we don’t want to – we can probably do it unaided. Here there is a consistent pattern though.

\n

Consider the hypothesis that there is approximately one man in the world for me. I meet someone who appears to be him within a month of looking. This is not uncommon, though it has a one-in-many-million chance of happening under my hypothesis, even if I look insanely hard. This should make me doubt my hypothesis in favor of one where there are several, or many million men in the world for me. What do I really do? Feel that since something so unlikely (under the usual laws of chance) occurred it must be a sign that we were really meant for each other, that the universe is looking out for us, that fate found us deserving, or whatever. Magic is a nice addition to the theory, as it was what we wanted in the relationship anyway. Romantic magic and there being a Mr Right are complementary beliefs, so meeting someone nice confirms the idea that there was exactly one perfect man in the world rather than suggesting it's absurd.
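Here is a toy version of that update, written out as a likelihood-ratio calculation. The population size, the number of candidates met in a month, and the random-draw model are all invented for illustration; only the rough order of magnitude matters.

```python
# Toy likelihood-ratio calculation for the 'one man in the world for me' hypothesis.
# All numbers are made up for illustration; only the rough order of magnitude matters.

M = 2_000_000_000   # adult men in the world (rough)
k = 100             # new candidates seriously evaluated in a month of looking

def p_match_within_month(n_compatible):
    # P(at least one of the k candidates is compatible), treating them as random draws.
    return 1 - (1 - n_compatible / M) ** k

p_if_exactly_one = p_match_within_month(1)          # roughly 5e-8
p_if_a_million   = p_match_within_month(1_000_000)  # roughly 0.05

print(p_if_a_million / p_if_exactly_one)  # likelihood ratio of about a million
```

Under this crude model, finding a plausible match within a month is about a million times more likely if there are a million men who would do than if there is exactly one, which is the sense in which the observation should push belief away from the one-true-match hypothesis rather than confirm it.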

\n

I can’t tell how serious anyone is about this, but ubiquitously when people happen to meet the girl of their dreams on a bus where they were the only English speaking people they put it down to fate, rather than radically lowered expectations. When they marry someone from the same small town they say they were put there for each other. When their partner, chosen on grounds of intellectual qualities, happens to also be rich and handsome their friends remark at how fortune has smiled on them. When people hook up with anyone at all they tell everyone around how unlikely it was that they should both have been at that bus stop on that day, and how since somehow they did they think it’s a sign.

\n

We see huge evidence against our hypothesis, invoke magic/friendly-chance as an explanation, then see this as confirmation that the original magic-friendly hypothesis was right.

\n

Does this occur in other forms of delusion? I think so. We often use the semi-supernatural to explain gaps caused by impaired affective forecasting. As far as I remember we overestimate strength of future emotional responses, tend to think whatever happens was the best outcome, and whatever we own is better than what we could have owned (e.g. you like the children you’ve got more than potential ones you could have had if you had done it another day). We explain these with ‘every cloud has a silver lining’, or ‘everything happens for a reason’, or ‘it turns out it was meant to happen – now I’ve realised how wonderful it is to spend more time at home’, ‘I was guided to take that option – see how well it turned out!’ or as happens often to Mother; ‘the universe told me to go into that shop today, and uncannily enough, there was a sale there and I found this absolutely wonderful pair of pants!’.

\n

Supernatural explanations aren’t just for gaps in our understanding. They are also for gaps between what we want to believe and are forced by proximity to almost notice.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "CnF9wZzn7aLvWe4zy", "title": "Wrong Tomorrow", "pageUrl": "https://www.lesswrong.com/posts/CnF9wZzn7aLvWe4zy/wrong-tomorrow", "postedAt": "2009-04-02T08:18:07.000Z", "baseScore": 10, "voteCount": 7, "commentCount": 11, "url": null, "contents": { "documentId": "CnF9wZzn7aLvWe4zy", "html": "

Wrong Tomorrow by Maciej Cegłowski is a very simple site for listing pundit predictions and tracking them [FAQ].  It doesn't come with prices and active betting... but a simple registry of this kind can scale much faster than a market, and right now we're in a situation where no one is bothering to track pundit predictions or report on pundit track records.  Predictions are produced as simple entertainment or as simple political theater, without the slightest fear of accountability.

This site is missing some features, but it looks to me like a starting attempt at what's needed - a Wikipedia-like, user-contributed, low-barrier-to-entry database of all pundit predictions, past and present.

" } }, { "_id": "6KDxoSMQuRR6gwrna", "title": "Accuracy Versus Winning", "pageUrl": "https://www.lesswrong.com/posts/6KDxoSMQuRR6gwrna/accuracy-versus-winning", "postedAt": "2009-04-02T04:47:37.156Z", "baseScore": 12, "voteCount": 21, "commentCount": 77, "url": null, "contents": { "documentId": "6KDxoSMQuRR6gwrna", "html": "

Consider the problem of an agent who is offered a chance to improve their epistemic rationality for a price.  What is such an agent's optimal strategy?

\n

A complete answer to this problem would involve a mathematical model to estimate the expected increase in utility associated with having more correct beliefs.  I don't have a complete answer, but I'm pretty sure about one thing: From an instrumental rationalist's point of view, to always accept or always refuse such offers is downright irrational.

\n

And now for the kicker: You might be such an agent.

\n

\n

One technique that humans can use to work towards epistemic rationality is to doubt themselves, since most people think they are above average in a wide variety of areas (and it's reasonable to assume that merit in at least some of these areas is normally distributed.)  But having a negative explanatory style, which is one way to doubt yourself, has been linked with sickness and depression.

\n

And the inverse is also true.  Humans also seem to be rewarded for a certain set of beliefs: those that help them maintain a somewhat-good assessment of themselves.  Having an optimistic explanatory style (in a nutshell, explaining good events in a way that makes you feel good, and explaining bad events in a way that doesn't make you feel bad) has been linked with success in sports, sales and school.

\n

If you're unswayed by my empirical arguments, here's a theoretical one.  If you're a human and you want to have correct beliefs, you must make a special effort to seek evidence that your beliefs are wrong.  One of our known defects is our tendency to stick with our beliefs for too long.  But if you do this successfully, you will become less certain and therefore less determined.

\n

In some circumstances, it's good to be less determined.  But in others, it's not.  And to say that one should always look for disconfirming evidence, or that one should always avoid looking for disconfirming evidence, is ideological according to the instrumental rationalist.

\n

Who do you think is going to be more motivated to think about math: someone who feels it is their duty to become smarter, or a naive student who believes he or she has the answer to some mathematical problem and is only lacking a proof?

\n

You rarely see a self-help book, entrepreneurship guide, or personal development blog telling people how to be less confident.  But that's what an advocate of rationalism does.  The question is, do the benefits outweigh the costs?

" } }, { "_id": "WmAKcY22T4kdfppkr", "title": "You don't need Kant", "pageUrl": "https://www.lesswrong.com/posts/WmAKcY22T4kdfppkr/you-don-t-need-kant", "postedAt": "2009-04-01T18:09:02.148Z", "baseScore": 2, "voteCount": 1, "commentCount": 59, "url": null, "contents": { "documentId": "WmAKcY22T4kdfppkr", "html": "

Related to: Comments on Degrees of Radical Honesty, OB: Belief in Belief, Cached Thoughts.

\n
\n

\"Nothing worse could happen to these labours than that anyone should make the unexpected discovery that there neither is, nor can be, any a priori knowledge at all.... This would be the same thing as if one sought to prove by reason that there is no reason\" (Critique of Practical Reason, Introduction).

\n
\n

You don't need Kant to demonstrate the value of honesty. In fact, summoning his revenant can be a dangerous thing to do. You end up in the somewhat undesirable situation of having almost the right conclusion, but having it for the wrong reasons. Reasons you weren't even aware of, because they were all collapsed into the belief, \"I believe in person X\".

\n

One of the annoying things about philosophy is that the dead simply don't die. Once a philosopher or philosophical doctrine gains some celebrity in the community, it's very difficult to convince anyone afterward that said philosopher or doctrine was flawed. In other words, the philosophical community tends to have problems with relinquishment. Therefore, there are still many philosophers that spend their careers studying, for example, Plato, apparently not with the intent to determine what parts of what Plato wrote are correct or still applicable, but rather with the intent to defend Plato from criticism. To prove Plato was right.

\n

Since the community doesn't value relinquishment, the cost of writing a flawed criticism is very low. Therefore, journals are glutted with so-called \"negative results\": \"Kant was wrong\", \"Hegel was wrong\", etc. No one seriously believes otherwise, but writing positive philosophical results is hard, and not writing at all isn't a viable career option for a professional philosopher.

\n

To its credit, MBlume refrains from bringing up Kant in his article on radical honesty, where he cites other, more feasible variants of radical honesty. However, in the comments, Kant rears his ugly head.

\n

Demosthenes writes:

\n
\n

\"Kant disagrees and seems to warn that the principle of truth telling is universal; you can't go around deciding who has a right to truth and who does not. Furthermore, he suggests that your lie could have terrible unforeseen consequences.

\n

...

\n

I am more utilitarian than Kant, but it is not hard to ignore \"proximity\" and come up with a cost/benefit calculation that agrees with him.\"

\n
\n

mdcaton writes:

\n
\n

\"Is this question really so hard? Remind me never to hide from Nazis at your house!

\n

First off, Kant's philosophy was criticized on exactly these grounds, i.e. that by his system, when the authorities come to your door to look for a friend you're harboring, you should turn him in. I briefly scanned for clever Kant references (e.g. \"introduce the brownshirts to your strangely-named cat, Egorial Imperative\") but found none. Kant clarified that he did not think it immoral to lie to authorities looking to execute your friend.\"

\n
\n

The problem with bringing up Kant here is that he simply doesn't belong. \"Don’t [lie] to anyone unless you’d also slash their tires, because they’re Nazis or whatever,\" is very different from Kant saying (paraphrasing), \"Never lie, ever, or else you're a bad person.\" An argument against the former by conflating it with the latter doesn't accomplish anything. Further, there's no mention of all the stuff Kant has to assume in order to argue for the Categorical Imperative and, finally, the value of radical honesty.

\n

Luckily, we only need the first couple pages of the Critique of Practical Reason to get to the Categorical Imperative. I want to flag three very large assumptions that Kant needs, which I believe few rationalists would want to espouse. First, let me fill in the latter part of the inferential chain: given the existence of freedom, God, the immortality of the soul, and a supernatural consciousness, Kant will argue that any mind with a \"morally determined willpower\" will conclude that it should act in accordance with subjective principles that in principle could be universally applicable (i.e., the Categorical Imperative). I don't want to get into what that actually means for Kant, as it's not really relevant, but suffice it to say that the Categorical Imperative implies that lying is always, anywhere, and for anyone ethically wrong.

\n

Freedom, God, and the Immortality of the Soul

\n

Skip this section if you don't care about Kant.

\n

Freedom here means completely acausal, metaphysical freedom from a Mind Projection Fallacy that treats our mind as somehow different from the body. Kant uses the concept of metaphysical freedom (and not, for example, merely our everyday experience of determining our course of action) to argue that there are such things as moral laws.

\n
\n

\"Inasmuch as the reality of the concept of freedom is proved by an apodeictic law of practical reason, it is the keystone of the whole system of pure reason, even the speculative, and all other concepts (those of God and immortality) which, as being mere ideas, remain in it unsupported, now attach themselves to this concept, and by it obtain consistence and objective reality; that is to say, their possibility is proved by the fact that
freedom actually exists, for this idea is revealed by the moral law.\" (CoPrR, Introduction)

\n
\n

I think in a perverse way Kant knew he was becoming Escher-headed by believing in metaphysical freedom.

\n
\n

\"Lest any one should imagine that he finds an inconsistency here when I call freedom the condition of the moral law, and hereafter maintain in the treatise itself that the moral law is the condition under which we can first become conscious of freedom, I will merely remark that freedom is the ratio essendi of the moral law, while the moral law is the ratio cognoscendi of freedom.\" (CoPrR, Introduction)

\n
\n

If one doesn't assume completely acausal, metaphysical freedom and tries to follow Kant's argument, the whole thing falls apart. There's no longer (for Kant) any reason to believe in moral laws, and therefore in the Categorical Imperative, and therefore in radical honesty.

\n

God here is, strangely enough, not necessarily the Christian God, though presumably Kant had the Christian God in mind. Both it and an eternal soul are necessary to realize the goodness of the Categorical Imperative described above. Without either of these, there's no reason to obey the Categorical Imperative, as being \"Good\" would then simply be impossible.

\n
\n

\"The realization of the summum bonum [the Greatest Good] in the world is the necessary object of a will determinable by the moral law. But in this will the perfect accordance of the mind with the moral law is the supreme condition of the summum bonum. This then must be possible, as well as its object, since it is contained in the command to promote the latter. Now, the perfect accordance of the will with the moral law is holiness, a perfection of which no rational being of the sensible world is capable at any moment of his existence. Since, nevertheless, it is required as practically necessary, it can only be found in a progress in infinitum towards that perfect accordance, and on the principles of pure practical reason it is necessary to assume such a practical progress as the real object of our will.\" (CoPrR, Chapter Two, Part IV)

\n
\n

Moral of the Story

\n

What we have, then, is a very powerful theme that has woven its way into our list of cached thoughts. Whenever someone mentions the value of being honest, some proportion of the population is primed to think of Kant and his variant of radical honesty to the exclusion of other variants. Some proportion of that proportion is then primed with various anti-philosophy memes which immediately attack Kantian radical honesty, conflating it with other things. What is lost is the realization that Kantian radical honesty is, in this era, a straw man; everyone already knows that it (along with attempts to fix it while remaining authentic to Kant, i.e., Kantian Studies) is inherently flawed, because it rests on a set of irrational assumptions.

\n

My suggested strategy to avoid this in the future is this: whenever you find yourself citing the beliefs of another person, try to avoid referring to them as \"the beliefs of X\" unless you are actually talking about their beliefs (or the beliefs recorded in their writings, etc.). Be aware of creating straw men by comparing your interlocutor's beliefs with the beliefs of a famous philosopher, and certainly don't knock your straw man down by citing the beliefs of one of that philosopher's critics.

\n

EDIT: Made it more obvious that MBlume proposed more than one variant of radical honesty.

" } }, { "_id": "jpFk49CHMcQf7e5L7", "title": "Proverbs and Cached Judgments: the Rolling Stone", "pageUrl": "https://www.lesswrong.com/posts/jpFk49CHMcQf7e5L7/proverbs-and-cached-judgments-the-rolling-stone", "postedAt": "2009-04-01T15:40:15.528Z", "baseScore": 18, "voteCount": 29, "commentCount": 30, "url": null, "contents": { "documentId": "jpFk49CHMcQf7e5L7", "html": "

People have long noted that individuals diagnosed as schizophrenic usually manifest disturbances of language, communication, and abstract thought.  One way to examine that disturbance is to ask patients to interpret various common proverbs, as psychiatrists have done since before the turn of the century.  (Interested readers can find a layperson-suitable discussion of this method's utility in the modern day at the following link: AAPL newsletter.)

\n

Originally, patients' responses were evaluated by their correctness.  Now they're graded on their degree of abstraction.  Responses that understand the sayings literally or in simplistically concrete terms are generally considered to be signs of a failure to abstract, although illiterate or mentally challenged individuals also tend to respond that way, and individuals encountering a proverb for the first time are less likely to recognize its symbolic meaning.  It seems clear that cultural exposure to proverbial forms, to the idiomatic usage of phrases and scenarios, affects how we recognize such methods of communication.

\n

But why was the 'correctness' criterion dropped?  Because perfectly normal people, whom no one would consider schizophrenic, often gave interpretations that wildly conflicted with what the interviewer considered to be the correct one.  Which interpretations were 'correct' depended heavily on the traditions and cultures that the listeners came from.

\n

Let's consider a classic example of a proverb often given divergent interpretations:

\n
\n

The rolling stone gathers no moss.

\n
\n

People from societies where stability and slowly-developed connections are valued consider this saying to be a warning of the dangers of activity and change.  Without staying still, beautiful moss won't grow. People from societies where activity and change are valued, however, consider it to be a prescription for how to avoid decay and degeneration.  If you don't keep moving, you'll be covered by moss!

\n

When asked to explain their interpretation, the value of moss growth is typically presented as desirable or undesirable, depending on the defended meaning.  But if you start out by asking people whether moss is something to seek or avoid, there's no clear preference outside of specific contexts.  People generally don't have aesthetic preferences either way; overall, people don't care.

\n

So the symbolic meaning of the mossy growth doesn't determine how people interpret the saying; people invest the moss with meaning to justify the judgment they had already reached.  This may be an example of what people at this site would call a 'cached thought'.  Rather than giving a reason for their judgment, people reply with rationalizations that have nothing to do with why they reached their conclusion.  Rather than thinking about why they decided as they did, people bring out a ready smokescreen.

\n

What's the actual logical structure of the saying? Rational analysis sheds a great deal of light on the question.  The meaning can be stated in various ways, all equivalent.

\n
\n

Stability is required for the development of certain states.  Activity is incompatible with the development of certain states.  (Desirable/undesirable) states can be (encouraged/prevented) by (engaging in/avoiding) (necessary precursors/incompatible conditions).

\n
\n

The saying encodes a pattern that expresses a relationship, but the pattern is devoid of evaluation.  It's a blank screen upon which people project their pre-existing values and judgments.  To truly understand the proverb, it's necessary to recognize which aspects of our perception are the saying itself, and which are our own ideas projected onto it.

" } }, { "_id": "3p3CYauiX8oLjmwRF", "title": "Purchase Fuzzies and Utilons Separately", "pageUrl": "https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/purchase-fuzzies-and-utilons-separately", "postedAt": "2009-04-01T09:51:01.855Z", "baseScore": 228, "voteCount": 172, "commentCount": 88, "url": null, "contents": { "documentId": "3p3CYauiX8oLjmwRF", "html": "

Yesterday:

\n
\n

There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone...

\n

If the lawyer needs to work an hour at the soup kitchen to keep himself motivated and remind himself why he's doing what he's doing, that's fine.  But he should also be donating some of the hours he worked at the office, because that is the power of professional specialization and it is how grownups really get things done.  One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen.

\n
\n

I hold open doors for little old ladies.  I can't actually remember the last time this happened literally (though I'm sure it has, sometime in the last year or so).  But within the last month, say, I was out on a walk and discovered a station wagon parked in a driveway with its trunk completely open, giving full access to the car's interior.  I looked in to see if there were packages being taken out, but this was not so.  I looked around to see if anyone was doing anything with the car.  And finally I went up to the house and knocked, then rang the bell.  And yes, the trunk had been accidentally left open.

\n

Under other circumstances, this would be a simple act of altruism, which might signify true concern for another's welfare, or fear of guilt for inaction, or a desire to signal trustworthiness to oneself or others, or finding altruism pleasurable.  I think that these are all perfectly legitimate motives, by the way; I might give bonus points for the first, but I wouldn't deduct any penalty points for the others.  Just so long as people get helped.

\n

But in my own case, since I already work in the nonprofit sector, the further question arises as to whether I could have better employed the same sixty seconds in a more specialized way, to bring greater benefit to others.  That is: can I really defend this as the best use of my time, given the other things I claim to believe?

\n

The obvious defense—or perhaps, obvious rationalization—is that an act of altruism like this one acts as a willpower restorer, much more efficiently than, say, listening to music.  I also mistrust my ability to be an altruist only in theory; I suspect that if I walk past problems, my altruism will start to fade.  I've never pushed that far enough to test it; it doesn't seem worth the risk.

\n

But if that's the defense, then my act can't be defended as a good deed, can it?  For these are self-directed benefits that I list.

\n

Well—who said that I was defending the act as a selfless good deed?  It's a selfish good deed.  If it restores my willpower, or if it keeps me altruistic, then there are indirect other-directed benefits from that (or so I believe).  You could, of course, reply that you don't trust selfish acts that are supposed to be other-benefiting as an \"ulterior motive\"; but then I could just as easily respond that, by the same principle, you should just look directly at the original good deed rather than its supposed ulterior motive.

\n

Can I get away with that?  That is, can I really get away with calling it a \"selfish good deed\", and still derive willpower restoration therefrom, rather than feeling guilt about it being selfish?  Apparently I can.  I'm surprised it works out that way, but it does.  So long as I knock to tell them about the open trunk, and so long as the one says \"Thank you!\", my brain feels like it's done its wonderful good deed for the day.

\n

Your mileage may vary, of course.  The problem with trying to work out an art of willpower restoration is that different things seem to work for different people.  (That is:  We're probing around on the level of surface phenomena without understanding the deeper rules that would also predict the variations.)

\n

But if you find that you are like me in this aspect—that selfish good deeds still work—then I recommend that you purchase warm fuzzies and utilons separately.  Not at the same time.  Trying to do both at the same time just means that neither ends up done well.  If status matters to you, purchase status separately too!

\n

If I had to give advice to some new-minted billionaire entering the realm of charity, my advice would go something like this:

\n\n

I would furthermore advise the billionaire that what they spend on utilons should be at least, say, 20 times what they spend on warm fuzzies—5% overhead on keeping yourself altruistic seems reasonable, and I, your dispassionate judge, would have no trouble validating the warm fuzzies against a multiplier that large.  Save that the original, fuzzy act really should be helpful rather than actively harmful.
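As a quick check of that arithmetic, with a hypothetical split:

```latex
\text{fuzzies} \;\le\; \frac{\text{utilons}}{20},
\qquad\text{e.g.}\qquad
\$10{,}000{,}000 \text{ on utilons} \;\Rightarrow\; \text{at most } \$500{,}000 \text{ on warm fuzzies,}
```

i.e. the fuzzy budget is 5% of the utilon budget (about 4.8% of the combined total), which is the \"5% overhead\" being validated.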

\n

(Purchasing status seems to me essentially unrelated to altruism.  If giving money to the X-Prize gets you more awe from your friends than an equivalently priced speedboat, then there's really no reason to buy the speedboat.  Just put the money under the \"impressing friends\" column, and be aware that this is not the \"altruism\" column.)

\n

But the main lesson is that all three of these things—warm fuzzies, status, and expected utilons—can be bought far more efficiently when you buy separately, optimizing for only one thing at a time.  Writing a check for $10,000,000 to a breast-cancer charity—while far more laudable than spending the same $10,000,000 on, I don't know, parties or something—won't give you the concentrated euphoria of being present in person when you turn a single human's life around, probably not anywhere close.  It won't give you as much to talk about at parties as donating to something sexy like an X-Prize—maybe a short nod from the other rich.  And if you threw away all concern for warm fuzzies and status, there are probably at least a thousand underserved existing charities that could produce orders of magnitude more utilons with ten million dollars.  Trying to optimize for all three criteria in one go only ensures that none of them end up optimized very well—just vague pushes along all three dimensions.

\n

Of course, if you're not a millionaire or even a billionaire—then you can't be quite as efficient about things, can't so easily purchase in bulk.  But I would still say—for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries.  Volunteer at a soup kitchen.  Or just get your warm fuzzies from holding open doors for little old ladies.  Let that be validated by your other efforts to purchase utilons, but don't confuse it with purchasing utilons.  Status is probably cheaper to purchase by buying nice clothes.

\n

And when it comes to purchasing expected utilons—then, of course, shut up and multiply.

" } }, { "_id": "L7pzhjx2HuJeuuwea", "title": "Introducing CADIE", "pageUrl": "https://www.lesswrong.com/posts/L7pzhjx2HuJeuuwea/introducing-cadie", "postedAt": "2009-04-01T07:32:42.274Z", "baseScore": 0, "voteCount": 15, "commentCount": 8, "url": null, "contents": { "documentId": "L7pzhjx2HuJeuuwea", "html": "

Apparently there is no need to worry about the topic that must not be named anymore, for Google has taken care of everything. Behold the dawning of a new age!

\n

Introducing CADIE

" } }, { "_id": "N5NPyjeFTNak5YqtZ", "title": "Degrees of Radical Honesty", "pageUrl": "https://www.lesswrong.com/posts/N5NPyjeFTNak5YqtZ/degrees-of-radical-honesty", "postedAt": "2009-03-31T20:36:10.497Z", "baseScore": 34, "voteCount": 33, "commentCount": 51, "url": null, "contents": { "documentId": "N5NPyjeFTNak5YqtZ", "html": "

The Black Belt Bayesian writes:

\n
\n

Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever.

\n
\n

Eliezer adds:

\n
\n

If you'll lie when the fate of the world is at stake, and others can guess that fact about you, then, at the moment when the fate of the world is at stake, that's the moment when your words become the whistling of the wind.

\n
\n

These are both radically high standards of honesty. Thus, it is easy to miss the fact that they are radically different standards of honesty. Let us look at a boundary case.

\n

Thomblake puts the matter vividly:

\n
\n

Suppose that Anne Frank is hiding in the attic, and the Nazis come asking if she's there. Harry doesn't want to tell them, but Stan insists he mustn't deceive the Nazis, regardless of his commitment to save Anne's life.

\n
\n

So, let us say that you are living in Nazi Germany, during WWII, and you have a Jewish family hiding upstairs. There's a couple of brownshirts with rifles knocking on your door. What do you do?

\n

I see four obvious responses to this problem (though there may be more):

\n
    \n
  1. \"Yes, there are Jews living upstairs, third door on the left\" -- you have promoted maximally accurate beliefs in the Nazi soldiers. Outcome: The family you are sheltering will die horribly.
  2. \"I cannot tell you the answer to that question\" -- you have not deceived the Nazis. They spend a few minutes searching the house. Outcome: The family you are sheltering will die horribly.
  3. \"No, there are no Jews here\" -- your words are like unto the whistling of the wind. The Nazis expect individuals without Jews in their homes to utter these words with near certainty. They expect individuals with Jews in their homes to utter these words with near certainty. These words make no change in P(there are Jews here) as measured by the Nazis. Even a couple of teenaged brownshirts will possess this much rationality. Outcome: The family you are sheltering will die horribly.
  4. Practice the Dark Arts. Heil Hitler enthusiastically, and embrace the soldiers warmly. Thank them for the work they are doing in defending your fatherland from the Jewish menace. Bring them into your home, and have your wife bring them strong beer, and her best sausages. Over dinner, tell every filthy joke you know about rolling pennies through ghettos. Talk about the Jewish-owned shop that used to be down the street, and how you refused to go there, but walked three miles to patronize a German establishment. Tell of the Jewish moneylender who ruined your cousin. Sing patriotic songs while your beautiful adolescent daughter plays the piano. Finally, tell the soldiers that your daughter's room is upstairs, that she is shy, and bashful, and would be disturbed by two strange young men looking through her things. Appeal to their sense of chivalry. Make them feel that respecting your daughter's privacy is the German thing to do -- is what the Führer himself would want them to do.  Before they have time to process this, clasp their hands warmly, thank them for their company, and politely but firmly show them out.  Outcome: far from certain, but there is a significant chance that the family you are sheltering live long, happy lives.
\n

I am certain that YVain could have a field day with the myriad ways in which response 4 does not represent rational discourse. Nonetheless, in this limited problem, it wins.

\n

(It should also be noted that response 4 came to me in about 15 minutes of thinking about the problem. If I actually had Jews in my attic, and lived in Nazi Germany, I might have thought of something better).

\n

However:

\n

What if you live in the impossible possible world in which a nuclear blast could ignite the atmosphere of the entire earth? What if you are yourself a nuclear scientist, and have proven this to yourself beyond any doubt, but cannot convey the whole of the argument to a layman? The fate of the whole world could depend on your superiors believing you to be the sort of man who will not tell a lie.  And, of course, in order to be the sort of man who would not tell a lie, you must not tell lies.

\n

Do we have wiggle room here? Neither your superior officer nor the two teenaged brownshirts is Omega, but your superior bears a far greater resemblance. The brownshirts are young and ruled by hormones. It is easy to practice the Dark Arts against them, and get away with it. Is it possible to grab the low-hanging fruit to be had by deceiving fools (at least, those who are evil and whose tires you would willingly slash), while retaining the benefits of being believed by the wise?

\n

I am honestly unsure, and so I put the question to you all.

\n


ETA: I have of course forgotten about the unrealistically optimistic option:

\n

5: Really, truly, promote maximally accurate beliefs. Teach the soldiers rationality from the ground up. Explain to them about affective death spirals, and make them see that they are involved in one.  Help them to understand that their own morality assigns value to the lives hidden upstairs.  Convince them to stop being Nazis, and to help you protect your charges.

\n

If you can pull this off without winding up in a concentration camp yourself (along with the family you've been sheltering) you are a vastly better rationalist than I, or (I suspect) anyone else on this forum.

" } }, { "_id": "MFHFjtNjHchDaoERJ", "title": "Building Communities vs. Being Rational", "pageUrl": "https://www.lesswrong.com/posts/MFHFjtNjHchDaoERJ/building-communities-vs-being-rational", "postedAt": "2009-03-31T18:35:35.268Z", "baseScore": 21, "voteCount": 26, "commentCount": 18, "url": null, "contents": { "documentId": "MFHFjtNjHchDaoERJ", "html": "

I've noticed a distinct trend lately in that I've been commenting less and less on posts as time goes by. I've been wondering if it's just that the new car smell of lesswrong has been wearing off, or if it is something else.

\n

Well, I think I've identified it. I just don't care for discussions about how to go about building communities. It may, in the long run, be beneficial to work out how to build communities of rationalists, but in the meantime I find these discussions are making this less and less a community I want to be a part of, and (if I am not unique) may be having the opposite effect that they intend.

\n

Don't get me wrong. I am not saying these discussions are unimportant or are not germane to the building of this site. I am saying that if a new person comes here and reads the last posts, are they going to want to stay? For myself, I find I am willing to be part of a community of enthusiastic rationalists (which is why I started reading this blog in the first place), but  I have NO interest in being part of a community that spends all its time debating on how to build the community.

\n

Lately, to me, this place has seemed more of the latter and less of the former.

" } }, { "_id": "ZpDnRCeef2CLEFeKM", "title": "Money: The Unit of Caring", "pageUrl": "https://www.lesswrong.com/posts/ZpDnRCeef2CLEFeKM/money-the-unit-of-caring", "postedAt": "2009-03-31T12:35:48.366Z", "baseScore": 223, "voteCount": 191, "commentCount": 132, "url": null, "contents": { "documentId": "ZpDnRCeef2CLEFeKM", "html": "

Steve Omohundro has suggested a folk theorem to the effect that, within the interior of any approximately rational, self-modifying agent, the marginal benefit of investing additional resources in anything ought to be about equal.  Or, to put it a bit more exactly, shifting a unit of resource between any two tasks should produce no increase in expected utility, relative to the agent's utility function and its probabilistic expectations about its own algorithms.
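Written as a standard constrained-optimization condition (a sketch in my own notation, not Omohundro's): for an agent allocating a fixed budget R of some resource across tasks,

```latex
\max_{r_1,\dots,r_n} \; \mathbb{E}\,U(r_1,\dots,r_n)
\quad \text{subject to} \quad \sum_i r_i = R
\qquad\Longrightarrow\qquad
\frac{\partial\,\mathbb{E}\,U}{\partial r_i} \;=\; \lambda \quad \text{for every task } i,
```

so at an interior optimum the marginal expected utility of one more unit of resource is the same wherever it is spent, and moving a unit between any two tasks changes expected utility by zero to first order. The multiplier λ is the common \"price\" of a marginal expected utilon.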

\n

This resource balance principle implies that—over a very wide range of approximately rational systems, including even the interior of a self-modifying mind—there will exist some common currency of expected utilons, by which everything worth doing can be measured.

\n

In our society, this common currency of expected utilons is called \"money\".  It is the measure of how much society cares about something.

\n

This is a brutal yet obvious point, which many are motivated to deny.

\n

With this audience, I hope, I can simply state it and move on.  It's not as if you thought \"society\" was intelligent, benevolent, and sane up until this point, right?

\n

I say this to make a certain point held in common across many good causes.  Any charitable institution you've ever had a kind word for, certainly wishes you would appreciate this point, whether or not they've ever said anything out loud.  For I have listened to others in the nonprofit world, and I know that I am not speaking only for myself here...

\n

\n

Many people, when they see something that they think is worth doing, would like to volunteer a few hours of spare time, or maybe mail in a five-year-old laptop and some canned goods, or walk in a march somewhere, but at any rate, not spend money.

\n

Believe me, I understand the feeling.  Every time I spend money I feel like I'm losing hit points.  That's the problem with having a unified quantity describing your net worth:  Seeing that number go down is not a pleasant feeling, even though it has to fluctuate in the ordinary course of your existence.  There ought to be a fun-theoretic principle against it.

\n

But, well...

\n

There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone to work for five hours at the soup kitchen.

\n

There's this thing called \"Ricardo's Law of Comparative Advantage\".  There's this idea called \"professional specialization\".  There's this notion of \"economies of scale\".  There's this concept of \"gains from trade\".  The whole reason why we have money is to realize the tremendous gains possible from each of us doing what we do best.

\n

This is what grownups do.  This is what you do when you want something to actually get done.  You use money to employ full-time specialists.

\n

Yes, people are sometimes limited in their ability to trade time for money (underemployed), so that it is better for them if they can directly donate that which they would usually trade for money.  If the soup kitchen needed a lawyer, and the lawyer donated a large contiguous high-priority block of lawyering, then that sort of volunteering makes sense—that's the same specialized capability the lawyer ordinarily trades for money.  But \"volunteering\" just one hour of legal work, constantly delayed, spread across three weeks in casual minutes between other jobs?  This is not the way something gets done when anyone actually cares about it, or to state it near-equivalently, when money is involved.

\n

To the extent that individuals fail to grasp this principle on a gut level, they may think that the use of money is somehow optional in the pursuit of things that merely seem morally desirable—as opposed to tasks like feeding ourselves, whose desirability seems to be treated oddly differently.  This factor may be sufficient by itself to prevent us from pursuing our collective common interest in groups larger than 40 people.

\n

Economies of scale, gains from trade, and professional specialization are not just vaguely good yet unnatural-sounding ideas, they are the only way that anything ever gets done in this world.  Money is not pieces of paper, it is the common currency of caring.

\n

Hence the old saying:  \"Money makes the world go 'round, love barely keeps it from blowing up.\"

\n

Now, we do have the problem of akrasia—of not being able to do what we've decided to do—which is a part of the art of rationality that I hope someone else will develop; I specialize more in the impossible questions business.  And yes, spending money is more painful than volunteering, because you can see the bank account number go down, whereas the remaining hours of our span are not visibly numbered.  But when it comes time to feed yourself, do you think, \"Hm, maybe I should try raising my own cattle, that's less painful than spending money on beef?\"  Not everything can get done without invoking Ricardo's Law; and on the other end of that trade are people who feel just the same pain at the thought of having less money.

\n

It does seem to me offhand that there ought to be things doable to diminish the pain of losing hit points, and to increase the felt strength of the connection from donating money to \"I did a good thing!\"  Some of that I am trying to accomplish right now, by emphasizing the true nature and power of money; and by inveighing against the poisonous meme saying that someone who gives mere money must not care enough to get personally involved.  This is a mere reflection of a mind that doesn't understand the post-hunter-gatherer concept of a market economy.  The act of donating money is not the momentary act of writing the check, it is the act of every hour you spent to earn the money to write that check—just as though you worked at the charity itself in your professional capacity, at maximum, grownup efficiency.

\n

If the lawyer needs to work an hour at the soup kitchen to keep himself motivated and remind himself why he's doing what he's doing, that's fine.  But he should also be donating some of the hours he worked at the office, because that is the power of professional specialization and it is how grownups really get things done.  One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen.  I may post more about this later.

\n

To a first approximation, money is the unit of caring up to a positive scalar factor—the unit of relative caring.  Some people are frugal and spend less money on everything; but if you would, in fact, spend $5 on a burrito, then whatever you will not spend $5 on, you care about less than you care about the burrito.  If you don't spend two months salary on a diamond ring, it doesn't mean you don't love your Significant Other.  (\"De Beers: It's Just A Rock.\")  But conversely, if you're always reluctant to spend any money on your SO, and yet seem to have no emotional problems with spending $1000 on a flat-screen TV, then yes, this does say something about your relative values.
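A minimal formalization of the positive-scalar-factor point (my notation, not the post's): suppose person i's willingness to pay for X is proportional to how much they care about X,

\[
\mathrm{WTP}_i(X) \;=\; c_i \cdot \mathrm{caring}_i(X), \qquad c_i > 0.
\]

The personal scale factor \( c_i \) cancels in within-person ratios, \( \mathrm{WTP}_i(X)/\mathrm{WTP}_i(Y) = \mathrm{caring}_i(X)/\mathrm{caring}_i(Y) \), so one person's spending pattern reveals their relative caring, while raw dollar amounts are not directly comparable between a frugal person and a spendthrift.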

\n

Yes, frugality is a virtue.  Yes, spending money hurts.  But in the end, if you are never willing to spend any units of caring, it means you don't care.

" } }, { "_id": "DXBbQBHACYwAdKKyx", "title": "The Benefits of Rationality?", "pageUrl": "https://www.lesswrong.com/posts/DXBbQBHACYwAdKKyx/the-benefits-of-rationality", "postedAt": "2009-03-31T11:17:39.503Z", "baseScore": 21, "voteCount": 34, "commentCount": 80, "url": null, "contents": { "documentId": "DXBbQBHACYwAdKKyx", "html": "

Robin wrote how being rational can harm you. Let's look at the other side: what significant benefits does rationality give?

\n

The community here seems to agree that rationality is beneficial. Well, obviously people need common sense to survive, but does an additional dose of LessWrong-style rationality help us appreciably in our personal and communal endeavors?

\n

Does LessWrong make us WIN?

\n

(If we don't WIN, our evangelism rings a little hollow. Science didn't spread due to evangelism, science spread because it works. Art spreads because people love it. I want to hold my Art to this standard. Push-selling a solution while it's still inferior might be the locally optimal decision but it corrupts long-term, as many of us have seen in the IT industry. That's if the example of all religions and political movements isn't enough for you. Beware the Evangelism Death Spiral!)

\n

We may claim internal benefits such as improved clarity of thought from each new blog insight. But religious people claim similar internal benefits that actually spill out into the measurable world, such as happiness and charitability. This makes us envious, and we attempt to use our internal changes to group together for world-benefiting tasks. To my mind this looks like putting the cart before the horse: why compete with religion on its own terms, when we have utility functions of our own to satisfy?

\n

No, feelings won't do. If feelings turn you on, do drugs or get religious. Rationalism needs to verifiably bring external benefit. Don't help me become pure from racism or somesuch. Help me WIN, and the world will beat a path to our door.

\n

Okay, interpersonal relationships are out. Then the most obvious area where rationalism could help is business. And the most obvious community-beneficial application (riffing on some recent posts here) would be scientists banding together and making a profitable part-time business to fund their own research. I can see how many techniques taught here could help, e.g. PD cooperation techniques. If a \"rationalism case study\" of this sort ever gets launched, I for one will gladly offer my effort. Of course this is just one suggestion; everything's possible.

\n

One thing's definite for me: rationalism needs to be grounded in real-world victories for each one of us. Otherwise what's the point?

" } }, { "_id": "f42BHX7rMw2dyFJfT", "title": "Helpless Individuals", "pageUrl": "https://www.lesswrong.com/posts/f42BHX7rMw2dyFJfT/helpless-individuals", "postedAt": "2009-03-30T11:10:37.791Z", "baseScore": 92, "voteCount": 96, "commentCount": 245, "url": null, "contents": { "documentId": "f42BHX7rMw2dyFJfT", "html": "

When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all.

\n

Well—there are governments with specialized militaries and police, which can extract taxes.  That's a non-ancestral idiom which dates back to the invention of sedentary agriculture and extractible surpluses; humanity is still struggling to deal with it.

\n

There are corporations in which the flow of money is controlled by centralized management, a non-ancestral idiom dating back to the invention of large-scale trade and professional specialization.

\n

And in a world with large populations and close contact, memes evolve far more virulent than the average case of the ancestral environment; memes that wield threats of damnation, promises of heaven, and professional priest classes to transmit them.

\n

But by and large, the answer to the question \"How do large institutions survive?\" is \"They don't!\"  The vast majority of large modern-day institutions—some of them extremely vital to the functioning of our complex civilization—simply fail to exist in the first place.

\n

I first realized this as a result of grasping how Science gets funded: namely, not by individual donations.

\n

Science traditionally gets funded by governments, corporations, and large foundations.  I've had the opportunity to discover firsthand that it's amazingly difficult to raise money for Science from individuals.  Not unless it's science about a disease with gruesome victims, and maybe not even then.

\n

Why?  People are, in fact, prosocial; they give money to, say, puppy pounds.  Science is one of the great social interests, and people are even widely aware of this—why not Science, then?

\n

Any particular science project—say, studying the genetics of trypanotolerance in cattle—is not a good emotional fit for individual charity.  Science has a long time horizon that requires continual support.  The interim or even final press releases may not sound all that emotionally arousing.  You can't volunteer; it's a job for specialists.  Being shown a picture of the scientist you're supporting at or somewhat below the market price for their salary, lacks the impact of being shown the wide-eyed puppy that you helped usher to a new home.  You don't get the immediate feedback and the sense of immediate accomplishment that's required to keep an individual spending their own money.

\n

Ironically, I finally realized this, not from my own work, but from thinking \"Why don't Seth Roberts's readers come together to support experimental tests of Roberts's hypothesis about obesity?  Why aren't individual philanthropists paying to test Bussard's polywell fusor?\"  These are examples of obviously ridiculously underfunded science, with applications (if true) that would be relevant to many, many individuals.  That was when it occurred to me that, in full generality, Science is not a good emotional fit for people spending their own money.

\n

In fact very few things are, with the individuals we have now.  It seems to me that this is key to understanding how the world works the way it does—why so many individual interests are poorly protected—why 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress, for example.

\n

So how does Science actually get funded?  By governments that think they ought to spend some amount of money on Science, with legislatures or executives deciding to do so—it's not quite their own money they're spending.  Sufficiently large corporations decide to throw some amount of money at blue-sky R&D.  Large grassroots organizations built around affective death spirals may look at science that suits their ideals.  Large private foundations, based on money block-allocated by wealthy individuals to their reputations, spend money on Science which promises to sound very charitable, sort of like allocating money to orchestras or modern art.  And then the individual scientists (or individual scientific task-forces) fight it out for control of that pre-allocated money supply, given into the hands of grant committee members who seem like the sort of people who ought to be judging scientists.

\n

You rarely see a scientific project making a direct bid for some portion of society's resource flow; rather, it first gets allocated to Science, and then scientists fight over who actually gets it.  Even the exceptions to this rule are more likely to be driven by politicians (moonshot) or military purposes (Manhattan project) than by the appeal of scientists to the public.

\n

Now I'm sure that if the general public were in the habit of funding particular science by individual donations, a whole lotta money would be wasted on e.g. quantum gibberish—assuming that the general public somehow acquired the habit of funding science without changing any other facts about the people or the society.

\n

But it's still an interesting point that Science manages to survive not because it is in our collective individual interest to see Science get done, but rather, because Science has fastened itself as a parasite onto the few forms of large organization that can exist in our world.  There are plenty of other projects that simply fail to exist in the first place.

\n

It seems to me that modern humanity manages to put forth very little in the way of coordinated effort to serve collective individual interests.  It's just too non-ancestral a problem when you scale to more than 50 people.  There are only big taxers, big traders, supermemes, occasional individuals of great power; and a few other organizations, like Science, that can fasten parasitically onto them.

" } }, { "_id": "nhNqtgmYDPSPoQP26", "title": "Kling, Probability, and Economics", "pageUrl": "https://www.lesswrong.com/posts/nhNqtgmYDPSPoQP26/kling-probability-and-economics", "postedAt": "2009-03-30T05:15:24.400Z", "baseScore": 1, "voteCount": 10, "commentCount": 3, "url": null, "contents": { "documentId": "nhNqtgmYDPSPoQP26", "html": "

Related to: Beautiful Probability, Probability is in the Mind

\n

Arnold Kling ponders probability:

\n
\n

How one thinks about probability affects how one thinks about economics. Consider the use of the word \"probability\" in each of the following sentences:

\n

1. What is the probability that when a fair coin is flipped it will come up heads?
2. What is the probability that exactly two number-one seeds will make it to the final four in the March Madness basketball tournament?
3. What is the probability that New York City will rank higher relative to other cities five years from now in terms of college graduates?

\n

We would answer the first question by saying that the probability is 50 percent, based on the very definition of a fair coin. This is an axiomatic interpretation of probability. The axiomatic view treats probability as a matter of pure logic, with statements that do not require any empirical testing.

\n

We would answer the second question by looking up historical records for the NCAA basketball tournament. This is the frequentist account of probability, which treats probability as counting outcomes from repeated trials. A frequentist would claim that the only way we can know that a coin has a 50 percent probability of coming up heads is by actually flipping a coin enough times to verify this empirically.

\n

The third question cannot be answered on the basis of axioms or observed frequencies. The probability estimate is purely subjective. The subjective account of probability is that it reflects an individual belief that cannot be proven either logically or empirically.

\n
\n

In the tradition of Reddit, and a little inspired by Robin, this is a simple link to an interesting page somewhere else - I leave comment and discussion to the very awesome Less Wrong community.

\n

Edit: Eliezer has in the past been uncomplimentary of the \"accursèd frequentists\". In at least Beautiful Probability and Probability is in the Mind, he has characterized (for at least some problems) the \"frequentist\" approach as being wrong, and the \"Bayesian\" approach as being right. Kling suggests different problems for which different approaches are appropriate.

" } }, { "_id": "655TmdcwAgryPGPWS", "title": "Framing Effects in Anthropology", "pageUrl": "https://www.lesswrong.com/posts/655TmdcwAgryPGPWS/framing-effects-in-anthropology", "postedAt": "2009-03-29T22:05:59.514Z", "baseScore": 11, "voteCount": 22, "commentCount": 15, "url": null, "contents": { "documentId": "655TmdcwAgryPGPWS", "html": "

A large number of cognitive errors are grouped under \"framing effects\", the tendency of a fact to sound different when presented in different ways. Economists discuss framing effects in terms of changed decisions: for example, a patient will be more likely to agree to a treatment with a \"ninety percent survival rate\" than a \"ten percent death rate\", even though these are denotatively the same. Other social sciences use \"framing\" more broadly. For them, a frame is similar to a cultural filter through which we interpret and evaluate data.

\n

Anthropologists are particularly wary of framing effects. The thought \"primitive culture\" immediately summons a set of associations - medicine men, chiefs, thatched huts, festivals, superstitions - and anthropologists risk interpreting new information about a tribe in light of what they think tribal cultures should be like. The problem is only compounded by the difficulty anthropologists have getting complete and accurate information from potentially reclusive societies.

\n

One especially well-known anthropological work is Horace Miner's description of the Nacirema, a North American tribe centered around the northwest Chesapeake Bay area. He was especially interested in their purification customs, which he described as \"an extreme of human behavior\". Below the cut is Miner's essay, Body Ritual among the Nacirema. Do you think Miner is affected by a framing bias? Where does the bias manifest itself?

\n

\n
\n

The anthropologist has become so familiar with the diversity of ways in which different peoples behave in similar situations that he is not apt to be surprised by even the most exotic customs. In fact, if all of the logically possible combinations of behavior have not been found somewhere in the world, he is apt to suspect that they must be present in some yet undescribed tribe.  This point has, in fact, been expressed with respect to clan organization by Murdock.  In this light, the magical beliefs and practices of the Nacirema present such unusual aspects that it seems desirable to describe them as an example of the extremes to which human behavior can go.

\n

Nacirema culture is characterized by a highly developed economy which has evolved in a rich natural habitat. While much of the people's time is devoted to economic pursuits, a large part of the fruits of these labors and a considerable portion of the day are spent in ritual activity. The focus of this activity is the human body, the appearance and health of which loom as a dominant concern in the ethos of the people. While such a concern is certainly not unusual, its ceremonial aspects and associated philosophy are unique.

\n

The fundamental belief underlying the whole system appears to be that the human body is ugly and that its natural tendency is to debility and disease. Incarcerated in such a body, man's only hope is to avert these characteristics through the use of the powerful influences of ritual and ceremony. Every household has one or more shrines devoted to this purpose. The more powerful individuals in the society have several shrines in their houses and, in fact, the opulence of a house is often referred to in terms of the number of such ritual centers it possesses. Most houses are of wattle and daub construction, but the shrine rooms of the more wealthy are walled with stone. Poorer families imitate the rich by applying pottery plaques to their shrine walls.  While each family has at least one such shrine, the rituals associated with it are not family ceremonies but are private and secret. The rites are normally only discussed with children, and then only during the period when they are being initiated into these mysteries. I was able, however, to establish sufficient rapport with the natives to examine these shrines and to have the rituals described to me.

\n

The focal point of the shrine is a box or chest which is built into the wall. In this chest are kept the many charms and magical potions without which no native believes he could live. These preparations are secured from a variety of specialized practitioners. The most powerful of these are the medicine men, whose assistance must be rewarded with substantial gifts.  However, the medicine men do not provide the curative potions for their clients, but decide what the ingredients should be and then write them down in ancient and secret symbols. This writing is understood only by the medicine men and by the herbalists who, for another gift, provide the required charm.

\n

The charm is not disposed of after it has served its purpose, but is placed in the charmbox of the household shrine. As these magical materials are specific for certain ills, and the real or imagined maladies of the people are many, the charm-box is usually full to overflowing. The magical packets are so numerous that people forget what their purposes were and fear to use them again. While the natives are very vague on this point, we can only assume that the idea in retaining all the old magical materials is that their presence in the charm-box, before which the body rituals are conducted, will in some way protect the worshipper.

\n

Beneath the charm-box is a small font. Each day every member of the family, in succession, enters the shrine room, bows his head before the charm-box, mingles different sorts of holy water in the font, and proceeds with a brief rite of ablution. The holy waters are secured from the Water Temple of the community, where the priests conduct elaborate ceremonies to make the liquid ritually pure.

\n

In the hierarchy of magical practitioners, and below the medicine men in prestige, are specialists whose designation is best translated \"holy-mouth-men.\" The Nacirema have an almost pathological horror of and fascination with the mouth, the condition of which is believed to have a supernatural influence on all social relationships. Were it not for the rituals of the mouth, they believe that their teeth would fall out, their gums bleed, their jaws shrink, their friends desert them, and their lovers  reject them. They also believe that a strong relationship exists between oral and moral characteristics. For example, there is a ritual ablution of the mouth for children which is supposed to improve their moral fiber.

\n

The daily body ritual performed by everyone includes a mouth-rite. Despite the fact that these people are so punctilious about care of the mouth, this rite involves a practice which strikes the uninitiated stranger as revolting. It was reported to me that the ritual consists of inserting a small bundle of hog hairs into the mouth, along with certain magical powders, and then moving the bundle in a highly formalized series of gestures.

\n

In addition to the private mouth-rite, the people seek out a holy-mouth-man once or twice a year. These practitioners have an impressive set of paraphernalia, consisting of a variety of augers, awls, probes, and prods. The use of these objects in the exorcism of the evils of the mouth involves almost unbelievable ritual torture of the client. The holy-mouth-man opens the client's mouth and, using the above-mentioned tools, enlarges any holes which decay may have created in the teeth. Magical materials are put into these holes. If there are no naturally occurring holes in the teeth, large sections of one or more teeth are gouged out so that the supernatural substance can be applied. In the client's view, the purpose of these ministrations is to arrest decay and to draw friends. The extremely sacred and traditional character of the rite is evident in the fact that the natives return to the holy-mouth-men year after year, despite the fact that their teeth continue to decay.

\n

It is to be hoped that, when a thorough study of the Nacirema is made, there will be careful inquiry into the personality structure of these people. One has but to watch the gleam in the eye of a holy-mouth-man, as he jabs an awl into an exposed nerve, to suspect that a certain amount of sadism is involved. If this can be established, a very interesting pattern emerges, for most of the population shows definite masochistic tendencies. It was to these that Professor Linton referred in discussing a distinctive part of the daily body ritual which is performed only by men. This part of the rite involves scraping and lacerating the surface of the face with a sharp instrument. Special women's rites are performed only four times during each lunar month, but what they lack in frequency is made up in barbarity. As part of this ceremony, women bake their heads in small ovens for about an hour. The theoretically interesting point is that what seems to be a preponderantly masochistic people have developed sadistic specialists.

\n

The medicine men have an imposing temple, or latipso, in every community of any size. The more elaborate ceremonies required to treat very sick patients can only be performed at this temple. These ceremonies involve not only the thaumaturge but a permanent group of vestal maidens who move sedately about the temple chambers in distinctive costume and head-dress.

\n

The latipso ceremonies are so harsh that it is phenomenal that a fair proportion of the really sick natives who enter the temple ever recover. Small children whose indoctrination is still incomplete have been known to resist attempts to take them to the temple because \"that is where you go to die.\" Despite this fact, sick adults are not only willing but eager to undergo the protracted ritual purification, if they can afford to do so. No matter how ill the supplicant or how grave the emergency, the guardians of many temples will not admit a client if he cannot give a rich gift to the custodian. Even after one has gained admission and survived the ceremonies, the guardians will not permit the neophyte to leave until he makes still another gift.

\n

The supplicant entering the temple is first stripped of all his or her clothes. In everyday life the Nacirema avoids exposure of his body and its natural functions. Bathing and excretory acts are performed only in the secrecy of the household shrine, where they are ritualized as part of the body-rites. Psychological shock results from the fact that body secrecy is suddenly lost upon entry into the latipso. A man, whose own wife has never seen him in an excretory act, suddenly finds himself naked and assisted by a vestal maiden while he performs his natural functions into a sacred vessel. This sort of ceremonial treatment is necessitated by the fact that the excreta are used by a diviner to ascertain the course and nature of the client's sickness. Female clients, on the other hand, find their naked bodies are subjected to the scrutiny, manipulation and prodding of the medicine men.

\n

Few supplicants in the temple are well enough to do anything but lie on their hard beds. The daily ceremonies, like the rites of the holy-mouth-men, involve discomfort and torture. With ritual precision, the vestals awaken their miserable charges each dawn and roll them about on their beds of pain while performing ablutions, in the formal movements of which the maidens are highly trained. At other times they insert magic wands in the supplicant's mouth or force him to eat substances which are supposed to be healing. From time to time the medicine men come to their clients and jab magically treated needles into their flesh. The fact that these temple ceremonies may not cure, and may even kill the neophyte, in no way decreases the people's faith in the medicine men.

\n

There remains one other kind of practitioner, known as a \"listener.\" This witchdoctor has the power to exorcise the devils that lodge in the heads of people who have been bewitched. The Nacirema believe that parents bewitch their own children. Mothers are particularly suspected of putting a curse on children while teaching them the secret body rituals. The counter-magic of the witchdoctor is unusual in its lack of ritual. The patient simply tells the \"listener\" all his troubles and fears, beginning with the earliest difficulties he can remember. The memory displayed by the Nacirema in these exorcism sessions is truly remarkable. It is not uncommon for the patient to bemoan the rejection he felt upon being weaned as a babe, and a few individuals even see their troubles going back to the traumatic effects of their own birth.

\n

In conclusion, mention must be made of certain practices which have their base in native esthetics but which depend upon the pervasive aversion to the natural body and its functions. There are ritual fasts to make fat people thin and ceremonial feasts to make thin people fat. Still other rites are used to make women's breasts larger if they are small, and smaller if they are large. General dissatisfaction with breast shape is symbolized in the fact that the ideal form is virtually outside the range of human variation. A few women afflicted with almost inhuman hyper-mammary development are so idolized that they make a handsome living by simply going from village to village and permitting the natives to stare at them for a fee.

\n

Reference has already been made to the fact that excretory functions are ritualized, routinized, and relegated to secrecy. Natural reproductive functions are similarly distorted. Intercourse is taboo as a topic and scheduled as an act. Efforts are made to avoid pregnancy by the use of magical materials or by limiting intercourse to certain phases of the moon. Conception is actually very infrequent. When pregnant, women dress so as to hide their condition. Parturition takes place in secret, without friends or relatives to assist, and the majority of women do not nurse their infants.

\n

Our review of the ritual life of the Nacirema has certainly shown them to be a magic-ridden people. It is hard to understand how they have managed to exist so long under the burdens which they have imposed upon themselves. But even such exotic customs as these take on real meaning when they are viewed with the insight provided by Malinowski when he wrote:

\n

\"Looking from far and above, from our  high places of safety in the developed civilization, it is easy to see all the crudity and irrelevance of magic. But without its power and guidance early man could not   have mastered his practical difficulties as he has done, nor could man have advanced to the higher stages of civilization.\"

\n
\n

Now, spell \"Nacirema\" backwards and read it again.

" } }, { "_id": "zKiLtGJjw2erQ7eE3", "title": "Most Rationalists Are Elsewhere", "pageUrl": "https://www.lesswrong.com/posts/zKiLtGJjw2erQ7eE3/most-rationalists-are-elsewhere", "postedAt": "2009-03-29T21:46:49.307Z", "baseScore": 69, "voteCount": 70, "commentCount": 34, "url": null, "contents": { "documentId": "zKiLtGJjw2erQ7eE3", "html": "

Most healthy intellectual blogs/forums participate in conversations among larger communities of blogs and forums.  Rather than just \"preaching to a choir\" of readers, such blogs often quote and respond to posts on other blogs.  Such responses sometimes support, and sometimes criticize, but either way can contribute to a healthy conversation.

If folks at Less Wrong saw themselves as a part of a larger community of rationalists, they would realize that most rationalist authors and readers are not at Less Wrong.  To participate in a healthy conversation among the wider community of rationalists, they would often respond to posts at other sites, and expect other sites to respond often to them.  In contrast, an insular group defined by something other than its rationality would be internally focused, rarely participating in such larger conversations.

Today at Overcoming Bias I respond to a post by Eliezer here at Less Wrong. Though I post occasionally here at Less Wrong, I will continue to post primarily at Overcoming Bias.  I consider myself part of a larger rationalist community, and will continue to riff off relevant posts here and elsewhere.  I hope you will continue to see me as a part of your relevant world. 

\n

I worry a little that Less Wrong karma score incentives may encourage an inward focus, since karma is so far only scored for internal site activity.

" } }, { "_id": "5aaPPRAM6JdLqceqX", "title": "Deliberate and spontaneous creativity", "pageUrl": "https://www.lesswrong.com/posts/5aaPPRAM6JdLqceqX/deliberate-and-spontaneous-creativity", "postedAt": "2009-03-29T19:45:07.798Z", "baseScore": 28, "voteCount": 25, "commentCount": 30, "url": null, "contents": { "documentId": "5aaPPRAM6JdLqceqX", "html": "

Related to: Spock's Dirty Little Secret, Does Blind Review Slow Down Science?

\n

After finding out that old scientists don't actually resist change, I decided to do a literature search to find out if the related assumption was true. Is it mainly just the young scientists who are productive? (This should be very relevant for rationalists, since we and scientists in general have the same goal - to find the truth.)

\n

The answer was a pretty resounding no. Study after study after study found that the most productive scientists were those in middle age, not youth. Productivity is better predicted by career age than chronological age. One study suggested that middle-aged scientists aren't more productive as such, but have access to better resources, and that the age-productivity connection disappears once supervisory position is controlled for. Another argued that it was the need for social networking that led the middle-aged to be the most productive. So age, by itself, doesn't seem to affect scientific productivity much, right?

\n

Well, there is one exception. Dietrich and Srinivasan found that paradigm-busting discoveries come primarily from relatively young scientists. They looked at different Nobel Prize winners, finding out the age at which each winner had first had the idea that led to the discovery. In total, 60% of the discoveries were made by people aged below 35 and around 30% were made by people aged between 35 and 45. The data is strongest for theoretical physics, which shows that 90% of all theoretical contributions occurred before the age of 40 and that no theoretician over the age of 50 had ever had an idea that was deemed worthy of the Nobel prize. Old scientists are certainly capable of expanding and building on an existing paradigm, but they are very unlikely to revolutionize the whole paradigm. Why is this so?

\n

Actually, this wasn't something that Dietrich just happened to randomly stumble on - he was testing a prediction stemming from an earlier hypothesis of his. In \"the cognitive neuroscience of creativity\", he presents a view of two kinds of systems for creativity: deliberate and spontaneous (actually four - deliberate/cognitive, deliberate/emotional, spontaneous/cognitive and spontaneous/emotional, but the cognitive-emotional difference doesn't seem relevant for our purposes). Summarizing the differences relevant to the aging/creativity question:

\n
\n

According to this framework, insights can occur during two modes of processing, deliberate and spontaneous. Deliberate searches for insights are instigated by circuits in the prefrontal cortex and thus tend to be structured, rational, and conforming to internalized values and belief systems. Spontaneous insights occur when the frontal attentional system does not actively select the content of consciousness, allowing unconscious thoughts that are more random, unfiltered, and bizarre to be represented in working memory. Several lines of evidence corroborate the notion that deliberate insights are different from spontaneous insights. For instance, the prefrontal cortex is recruited in long-term memory retrieval (for reviews, see Cabeza & Nyberg, 2000; Hasegawa, Hayashi, & Miyashita, 1999) and thus can be said to have a search engine that can “pull” task-relevant information from long-term storage in the posterior cortices, momentarily representing it in the working memory buffer. Once online, the prefrontal cortex can use its capacity for cognitive flexibility to superimpose the retrieved information to form new ideas. (...)

\n

...suggesting that solutions that would violate what is known about the world are not readily considered in deliberate creativity. Moreover, the prefrontal cortex houses a person’s cultural values and belief system (Damasio, 1994). Thus, the processes of effortful retrieval and recombination of knowledge yield results that are highly consistent with a person’s world view and past experiences (see Dietrich, 2004). Another critical limitation of the deliberate processing mode is due to the fact that any information that is retrieved deliberately and is thus explicitly available for conscious manipulation is subject to the capacity limit of working memory. (...)

\n

In contrast, the spontaneous processing mode produces insights that are different qualitatively because they are not initiated by prefrontal database searches that are limited to preconceived mental paradigms as well as quantitatively because information is not subject to the capacity limit of working memory. During the inevitable times when the frontal attentional system is downregulated, for instance in daydreaming, thoughts that are unguided by societal norms and unfiltered by conventional rationality become represented in working memory (Dietrich, 2003). In such a mental state, conscious thinking is characterized by unsystematic drifting, and the sequence of thoughts manifesting itself in consciousness is more chaotic, permitting more loosely connected associations to emerge. (...)

\n

The prefrontal cortex is the last structure to develop phylogenically and ontogenically (Fuster, 2000, 2002). In humans, it is not fully matured until the early 20’s, which is likely the reason why the creativity of children is less structured and appropriate. Likewise, evidence suggests that prefrontal functions are among the first to deteriorate with age. Data from humans and other animals show that aging individuals are less able to inhibit well-learned rules and have less independence from immediate environmental cues or long-term memories (e.g., Axelrod, Jiron, & Henry, 1993; Means & Holstein, 1992). This tendency to adhere to outdated rules might be compounded by the fact that mental states that enable the spontaneous processing mode, such as REM sleep or daydreaming, go dramatically down with age (Hobson et al., 2000; Singer, 1975). Thus, in addition to perseveration, the deliberate processing mode, which favors solutions that tend to be consistent with a person’s belief system, becomes the more dominant problem solving mode of thought in the elderly.

\n
\n

So, it seems like the older we get, the more likely it is that our thinking is dominated by pre-conceived ideas. This isn't automatically a bad thing, of course - those \"pre-conceived ideas\" are the ones we've been building for our whole lives. But it isn't good if that prevents us from coming up with entirely new yet good ideas. The empirical evidence seems to suggest it does.

\n

What can we do to combat this? Different cognition-affecting drugs are one answer that automatically springs to mind, but many of those are both illegal and unsafe. Maybe we should try to spend more time daydreaming the older we get, or explicitly use our cognitive creativity to try to generate ideas which strike us as senseless at first? But there are far more ideas that both seem and are senseless to us than there are ideas which seem senseless and actually aren't, so the low hit ratio may be pretty exhausting.

" } }, { "_id": "geNZ6ZpfFce5intER", "title": "Akrasia, hyperbolic discounting, and picoeconomics", "pageUrl": "https://www.lesswrong.com/posts/geNZ6ZpfFce5intER/akrasia-hyperbolic-discounting-and-picoeconomics", "postedAt": "2009-03-29T18:26:11.914Z", "baseScore": 86, "voteCount": 78, "commentCount": 86, "url": null, "contents": { "documentId": "geNZ6ZpfFce5intER", "html": "

Akrasia is the tendency to act against your own long-term interests, and is a problem doubtless only too familiar to us all. In his book \"Breakdown of Will\", psychologist George C Ainslie sets out a theory of how akrasia arises and why we do the things we do to fight it. His extraordinary proposal takes insights given us by economics into how conflict is resolved and extends them to conflicts of different agencies within a single person, an approach he terms \"picoeconomics\". The foundation is a curious discovery from experiments on animals and people: the phenomenon of hyperbolic discounting.

\n

We all instinctively assign a lower weight to a reward further in the future than one close at hand; this is \"discounting the future\". We don't just account for a slightly lower probability of receiving a more distant reward, we value it as inherently less for being further away. It's been an active debate on overcomingbias.com whether such discounting can be rational at all. However, even if we allow that discounting can be rational, the way that we and other animals do it has a structure which is inherently irrational: the weighting we give to a future event is, roughly, inversely proportional to how far away it is. This is hyperbolic discounting, and it is an empirically very well confirmed result.
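In the usual formulation of this empirical result (standard notation, not a quotation from Ainslie), the present value V of a reward of size A delayed by time D is

\[
V_{\text{hyperbolic}}(D) \;=\; \frac{A}{1 + kD}
\qquad \text{versus} \qquad
V_{\text{exponential}}(D) \;=\; A\,e^{-kD},
\]

where k is a fitted impatience parameter.  Only the exponential curve yields preferences that are consistent over time; the hyperbolic curve, which is what the experiments keep finding, produces the reversals described next.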

\n

I say \"inherently irrational\" because it is inconsistent over time: the relative cost of a day's wait is considered differently whether that day's wait is near or far. Looking at a day a month from now, I'd sooner feel awake and alive in the morning than stay up all night reading comments on lesswrong.com. But when that evening comes, it's likely my preferences will reverse; the distance to the morning will be relatively greater, and so my happiness then will be discounted more strongly compared to my present enjoyment, and another groggy morning will await me. To my horror, my future self has different interests to my present self, as surely as if I knew the day a murder pill would be forced upon me.

\n

If I knew that a murder pill really would be forced upon me on a certain date, after which I would want nothing more than to kill as many people as possible as gruesomely as possible, I could not sit idly by waiting for that day to come; I would want to do something now to prevent future carnage, because it is not what the me of today desires. I might attempt to frame myself for a crime, hoping that in prison my ability to go on a killing spree would be contained. And this is exactly the behaviour we see in people fighting akrasia: consider the alcoholic who moves to a town in which alcohol is not sold, anticipating a change in desires and deliberately constraining their own future self. Ainslie describes this as \"a relationship of limited warfare among successive selves\".

\n

And it is this warfare which Ainslie analyses with the tools of behavioural economics. His analysis accounts for the importance of making resolutions in defeating akrasia, and the reasons why a resolution is easier to keep when it represents a \"bright clear line\" that we cannot fool ourselves into thinking we haven't crossed when we have. It also discusses the dangers of willpower, and the ways in which our intertemporal bargaining can leave us acting against both our short-term and our long-term interests.

\n

I can't really do more than scratch the surface on how this analysis works in this short article; you can read more about the analysis and the book on Ainslie's website, picoeconomics.org. I have the impression that defeating akrasia is the number one priority for many lesswrong.com readers, and this work is the first I've read that really sets out a mechanism that underlies the strange battles that go on between our shorter and longer term interests.

" } }, { "_id": "qQpktQnjS9rAoXiDy", "title": "Recognizing the Candlelight as Fire: Joshu Washes the Bowl", "pageUrl": "https://www.lesswrong.com/posts/qQpktQnjS9rAoXiDy/recognizing-the-candlelight-as-fire-joshu-washes-the-bowl", "postedAt": "2009-03-29T18:13:00.412Z", "baseScore": -14, "voteCount": 21, "commentCount": 21, "url": null, "contents": { "documentId": "qQpktQnjS9rAoXiDy", "html": "

Joshu Washes the Bowl

\n

A monk told Joshu: `I have just entered the monastery. Please teach me.'

\n

Joshu asked: `Have you eaten your rice porridge?'

\n

The monk replied: `I have eaten.'

\n

Joshu said: `Then you had better wash your bowl.'

\n

At that moment the monk was enlightened.

\n
\n

Mumon's Comment: Joshu is the man who opens his mouth and shows his heart. I doubt if this monk really saw Joshu's heart. I hope he did not mistake the bell for a pitcher.

\n

It is too clear and so it is hard to see.
A dunce once searched for fire with a lighted lantern.
Had he known what fire was,
He could have cooked his rice much sooner.

" } }, { "_id": "2eaYboiek2WoCya2P", "title": "Bay area OB/LW meetup, today, Sunday, March 29, at 5pm", "pageUrl": "https://www.lesswrong.com/posts/2eaYboiek2WoCya2P/bay-area-ob-lw-meetup-today-sunday-march-29-at-5pm", "postedAt": "2009-03-29T12:10:52.862Z", "baseScore": 1, "voteCount": 6, "commentCount": 3, "url": null, "contents": { "documentId": "2eaYboiek2WoCya2P", "html": "

Eliezer and Michael Vassar will be there, as will many other exciting LW-ers.  Robin Gane-McCalla will be leading us in some rationality-related games.  More information here.

\n

Whether or not you can come today, you may want to sign up on our meet-up page, if you're local.

" } }, { "_id": "p7BPWw3kmGdbJeH5p", "title": "Ask LW: What questions to test in our rationality questionnaire?", "pageUrl": "https://www.lesswrong.com/posts/p7BPWw3kmGdbJeH5p/ask-lw-what-questions-to-test-in-our-rationality", "postedAt": "2009-03-29T12:03:50.210Z", "baseScore": 17, "voteCount": 26, "commentCount": 47, "url": null, "contents": { "documentId": "p7BPWw3kmGdbJeH5p", "html": "

 

\n

We’ve had quite a bit of discussion around LW, and OB, on the questions:

\n\n

Rationalists that we are, it’s time to put our experiments where our mouths are.  So here’s my plan:

\n

Step 1: Assemble a set of questions that might possibly help us understand: (a) how rational people are; (b) where they got that rationality from; and (c) what effects their rationality has on their lives.  Include any questions that might help in the formulation of useful conjectures.  After collecting the data, look for correlations, spaghetti-at-the-wall style.  Try factor analysis.  
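For concreteness, here is a minimal sketch of what that Step 1 pass might look like, assuming the responses get exported to a CSV with one numeric column per question; the file name, the missing-data handling, and the choice of three factors are placeholders rather than part of the plan above.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical export of the questionnaire; real column names will differ.
responses = pd.read_csv('responses.csv')
numeric = responses.select_dtypes('number')

# Spaghetti-at-the-wall correlations: every question against every other.
print(numeric.corr().round(2))

# Exploratory factor analysis: do the answers cluster into a few latent factors?
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(numeric.fillna(numeric.mean()))
loadings = pd.DataFrame(fa.components_.T, index=numeric.columns,
                        columns=['factor_1', 'factor_2', 'factor_3'])
print(loadings.round(2))
```

Questions that load heavily on the same factor are the candidate clusters worth following up with the more careful Step 2 experiments.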

\n

Step 2 [Perhaps after iterating the quick-and-dirty Step 1 correlational approach a bit, to develop better candidate metrics]:  Run some more careful experimental tests of various sorts, both with a “rationality training group” that meets for extended periods of time, and, if LW is willing, with shorter training experiments with randomized LW subgroups.  Try to build an atmosphere and knowledge base on LW where more people go out and do useful experiments.

I have an initial questionnaire draft below, although I skipped the answer-choices for brevity.  Please post your suggestions for informative questions to include and/or to drop.  As good suggestions come in, I’ll edit the questionnaire draft to include them.  It would be nice if the questionnaire we actually use draws on the combined background of the LW community.

Please also post hypotheses for what kinds of correlations you expect to see and/or to not see, when the questionnaire is actually run.  If you note your hypotheses now, before the data comes in, we’ll know we should increase our credence in your theory instead of just accusing you of hindsight bias.

Once we have a good questionnaire draft, I’ll put the questionnaire on the web and call for LW readers to fill out the questionnaire.  I’ll also try to get people to fill out the questionnaire from some non-LW groups, e.g. Stanford students.  Then I’ll post the questionnaire data, and we can all have fun interpreting it.

Section A.  Demographic information.  Possible confounders, i.e. variables other than “rationality” that may influence correct beliefs.

\n
    \n
  1. Age (from a multiple choice list, so we don’t identify individuals)
  2. Sex  [Why: everyone else asks for these, and they might have good reason.]
  3. SAT, ACT, and GRE scores, if any.  [Why: as a proxy for IQ.  IQ helps with many cognitive tasks, probably including rationality questions.  We want to be able to tell the difference between “IQ helps people earn money” and “rationality helps people earn money, even after controlling for IQ”.]
\n

Section B.  Educational variables that may help cause rationality.

\n
    \n
  1. Parents’ education.
  2. Parents’ scientific literacy.
  3. Parents’ religious views.
  4. Whether your parents were crazier than average, and/or more rational than average.
  5. Amount of formal education.  College major.
  6. Occupation.
  7. How many non-fiction books did you read in the last month?  How many fiction books? [Why: people are probably more likely to give accurate data if we ask about e.g. “the last month” than if we ask a vaguer question like “how much do you usually read?”]
  8. How many self-help or business books did you read in the last month?
  9. When is the last time you sought out someone who was better than you at some skill you wanted to learn, and asked them questions to try to figure out what you should be doing?
  10. Have you read any books about heuristics and biases?
  11. Have you read OB or LW at all?
  12. Which of the following activities have you trained in:  mathematics, programming, engineering or practical tinkering, music, meditation, martial arts, debate, strategy games (go, chess, backgammon, etc.)?
\n

Section C.  Indicators of real-world success.

\n
    \n
  1. Income.
  2. [Marriage and divorce history?  Whether you’re in a stable relationship?  Whether you’re happy with your relationship?  Whether you have an easy time getting dates?  How do people usually test for “success” here?]
  3. Number of best friends, for some operationalizations of “best friends” (e.g., people you could borrow $500 from; people with whom you can talk about nearly anything; ?)  [What questions are standard here?]
  4. Whether you’ve ever been in a car accident.
  5. Happiness.
  6. Whether you’ve been overall “more successful”, “less successful”, or “about as successful” as most people in your high school graduating class, and in your college graduating class.
  7. Whether you’ve “learned more”, “learned less”, or “learned about as much” since graduating {high school / college} as most people in your {high school / college} graduating class.
  8. How often did you exercise in the last week?
  9. Do you smoke?
  10. High school and college GPAs.
  11. Do you have a current driver’s license?
  12. Are there any late bills, bounced checks, bad debts, etc. on your credit record?
  13. How many dental cavities did you get in the last two years?
\n

Section D.  Standard heuristics and biases questions

\n

[Several standard questions, and variations on standard questions, that I’d rather not give details on so I don’t cause LW readers to get them right.  The goal here is to find ways of testing for standard biases among people who have read the standard articles.  If anyone has clever ideas for how to disguise the questions, please do email your ideas to annasalamon at gmail, and please don’t post your ideas in the comments.]

Section E.  Current beliefs

\n
    \n
  1. Religious views.
  2. Are you signed up for cryonics?  Views on cryonics.
  3. Views on group (gender, race) differences in IQ.  (Not the origins of the differences; just whether there are group differences in today’s adults).
  4. Views on the odds of nuclear war over the next few decades.
  5. How good-looking are you, relative to other people of your age and gender?
  6. Views on Pascal’s wager.
  7. Views on consciousness.
  8. Views on evolution.
  9. Views on whether global warming is happening, and whether it is significant.
\n

[Why: to see how good people are at forming accurate beliefs.  And to get a bit of information on whether the above beliefs are accurate, by seeing whether the beliefs correlate with other rationality-indicators.]

Section F.  Value placed on truth

\n
    \n
  1. Is it better to have accurate beliefs, or beliefs that give you morale or meaning?
  2. Do you try to believe good things about your friends?
  3. Do you try to believe good things about people who are different from you (e.g., people of different ethnic or religious backgrounds, people from different countries, people with different sexual orientations)?  Why?
  4. How important is it to you to have accurate beliefs?
  5. Imagine a scale from 1 to 10 that measures the process by which you form beliefs.  Let “1” mean “there may be emotional or other non-rational pressures, but those pressures have little impact on my resulting conclusions”.  Let “10” mean “I basically just made up these beliefs because they felt good, seemed socially useful, matched my fears, or had some other non-truth-related property”.  On this scale, how did you form your beliefs concerning:
\n

Section G.  Attempts to seek information

\n
    \n
  1. In the last week, how much time did you spend trying to understand:
  2. How well do you know your friends and family?  How do you know how well you know them?
  3. How well do you know yourself?  How do you know?
  4. How well do you understand those aspects of the world that enable real-world measurable success, e.g. income?  How do you know?
  5. Have you experimented with different ways to do your job effectively?
  6. How well do you understand the larger world?  How do you know?
  7. Think about the last time you had a fight or conflict with someone.  How much time did you spend rehearsing the evidence for your side?  How much time did you spend trying with honest curiosity to figure out what happened?
  8. How often do you notice that one of your pieces of knowledge conflicts with your model of some other part of the world (e.g., that you don't understand why the floating toy in the pool bops to the top at the angle it does, or why
\n

Section H:  Models of one's own thinking skill [This is the only section with open-ended rather than multiple-choice questions.  Respondents can skip this section while filling out the rest]

\n
    \n
  1. What’s the worst mistake you made in the last year?  What did you do about it?
  2. What are the largest gaps in your current thinking skills?
  3. What are your greatest strengths as a thinker?
  4. On what topics are you most prone to self-deception?
  5. What is the biggest improvement you’ve made in your ability to form accurate beliefs over the last year?
  6. What safeguards do you use, to try to notice flaws in your own beliefs?
\n

 

\n

ADDED: The idea here is not to generate an actual, first-round test of individuals' rationality.  The idea is to take a bunch of questions that might plausibly correlate with that nebulous mix of concepts, \"rationality\", and to see how well those questions correlate with one another.  We won't get a \"you're more rational than 70% of the population\" out of this questionnaire: no way, no how.  We may well get some suggestive data about clusters of questions where respondents' answers tend to correlate with one another, and so suggest possible underlying factors worth more careful investigation.

\n

Psychologists often do cheap, bad studies before they do slow, careful, expensive studies, to get an initial look at what might be true.

" } }, { "_id": "4PPE6D635iBcGPGRy", "title": "Rationality: Common Interest of Many Causes", "pageUrl": "https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes", "postedAt": "2009-03-29T10:49:08.001Z", "baseScore": 89, "voteCount": 73, "commentCount": 53, "url": null, "contents": { "documentId": "4PPE6D635iBcGPGRy", "html": "

It is a not-so-hidden agenda of this site, Less Wrong, that there are many causes which benefit from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander.  Not just the obvious causes like atheism, but things like marijuana legalization—where you could wish that people were a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts.  The Institute Which May Not Be Named was merely an unusually extreme case of this, wherein it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.

\n

But of course, not all the rationalists I create will be interested in my own project—and that's fine.  You can't capture all the value you create, and trying can have poor side effects.

\n

If the supporters of other causes are enlightened enough to think similarly...

\n

Then all the causes which benefit from spreading rationality, can, perhaps, have something in the way of standardized material to which to point their supporters—a common task, centralized to save effort—and think of themselves as spreading a little rationality on the side.  They won't capture all the value they create.  And that's fine.  They'll capture some of the value others create.  Atheism has very little to do directly with marijuana legalization, but if both atheists and anti-Prohibitionists are willing to step back a bit and say a bit about the general, abstract principle of confronting a discomforting truth that interferes with a fine righteous tirade, then both atheism and marijuana legalization pick up some of the benefit from both efforts.

\n

But this requires—I know I'm repeating myself here, but it's important—that you be willing not to capture all the value you create.  It requires that, in the course of talking about rationality, you maintain an ability to temporarily shut up about your own cause even though it is the best cause ever.  It requires that you don't regard those other causes, and they do not regard you, as competing for a limited supply of rationalists with a limited capacity for support; but, rather, creating more rationalists and increasing their capacity for support.  You only reap some of your own efforts, but you reap some of others' efforts as well.

\n

If you and they don't agree on everything—especially priorities—you have to be willing to agree to shut up about the disagreement.  (Except possibly in specialized venues, out of the way of the mainstream discourse, where such disagreements are explicitly prosecuted.)

\n

A certain person who was taking over as the president of a certain organization once pointed out that the organization had not enjoyed much luck with its message of \"This is the best thing you can do\", as compared to e.g. the X-Prize Foundation's tremendous success at conveying to rich individuals the message \"Here is a cool thing you can do.\"

\n

This is one of those insights where you blink incredulously and then grasp how much sense it makes.  The human brain can't grasp large stakes and people are not anything remotely like expected utility maximizers, and we are generally altruistic akrasics.  Saying, \"This is the best thing\" doesn't add much motivation beyond \"This is a cool thing\".  It just establishes a much higher burden of proof.  And invites invidious motivation-sapping comparison to all other good things you know (perhaps threatening to diminish moral satisfaction already purchased).

\n

If we're operating under the assumption that everyone by default is an altruistic akrasic (someone who wishes they could choose to do more)—or at least, that most potential supporters of interest fit this description—then fighting it out over which cause is the best to support, may have the effect of decreasing the overall supply of altruism.

\n

\"But,\" you say, \"dollars are fungible; a dollar you use for one thing indeed cannot be used for anything else!\"  To which I reply:  But human beings really aren't expected utility maximizers, as cognitive systems.  Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances, people want to spread their donations around as an act of mental accounting to minimize the regret if a single cause fails, and telling someone about an additional cause may increase the total amount they're willing to help.

\n

There are, of course, limits to this principle of benign tolerance.  If someone has a project to help stray puppies get warm homes, then it's probably best to regard them as trying to exploit bugs in human psychology for their personal gain, rather than a worthy sub-task of the great common Neo-Enlightenment project of human progress.

\n

But to the extent that something really is a task you would wish to see done on behalf of humanity... then invidious comparisons of that project to Your-Favorite-Project, may not help your own project as much as you might think.  We may need to learn to say, by habit and in nearly all forums, \"Here is a cool rationalist project\", not, \"Mine alone is the highest-return in expected utilons per marginal dollar project.\"  If someone cold-blooded enough to maximize expected utility of fungible money without regard to emotional side effects explicitly asks, we could perhaps steer them to a specialized subforum where anyone willing to make the claim of top priority fights it out.  Though if all goes well, those projects that have a strong claim to this kind of underserved-ness will get more investment and their marginal returns will go down, and the winner of the competing claims will no longer be clear.

\n

If there are many rationalist projects that benefit from raising the sanity waterline, then their mutual tolerance and common investment in spreading rationality could conceivably exhibit a commons problem.  But this doesn't seem too hard to deal with: if there's a group that's not willing to share the rationalists they create or mention to them that other Neo-Enlightenment projects might exist, then any common, centralized rationalist resources could remove the mention of their project as a cool thing to do.

\n

Though all this is an idealistic and future-facing thought, the benefits—for all of us—could be finding some important things we're missing right now.  So many rationalist projects have supporters who are few and far-flung; if we could all identify as elements of the Common Project of human progress, the Neo-Enlightenment, there would be a substantially higher probability of finding ten of us in any given city.  Right now, a lot of these projects are just a little lonely for their supporters.  Rationality may not be the most important thing in the world—that, of course, is the thing that we protect—but it is a cool thing that more of us have in common.  We might gain much from identifying ourselves also as rationalists.

" } }, { "_id": "gBjAJvSEg6zn9eMya", "title": "Hygienic Anecdotes", "pageUrl": "https://www.lesswrong.com/posts/gBjAJvSEg6zn9eMya/hygienic-anecdotes", "postedAt": "2009-03-29T05:46:07.838Z", "baseScore": 10, "voteCount": 11, "commentCount": 11, "url": null, "contents": { "documentId": "gBjAJvSEg6zn9eMya", "html": "

Bayesians must condition their beliefs on all available evidence; it is not cheating to use less than ideal sources of information. However, this process also requires conditioning on the evidence for your evidence. Outside of academic journals, evidence is often difficult to trace back to the source and is dependent on our notoriously faulty memory. Given the consequences of low-fidelity copying, should rationalists trust evidence they can't remember the source of, even if they remember reading the primary source themselves? Should community members be expected to produce citations on demand?

\n

This issue came to mind while trying to find a study I vaguely remembered about how the increased happiness of the religious could be explained by increased community involvement, and while trying to fact-check PhilGoetz's now infamous anecdote about Steve Jobs. I started contemplating the standards for relaying highly relevant, but potentially wrong or distorted information.

\n

Luckily, fact-checking is much easier in the age of the internet. Wikipedia serves as a universally accessible standard reference, and Google serves well for everything else. But sometimes my google-fu is not strong enough. So, I'll put this to the community: how should rationalists balance the tradeoff between neglecting evidence and propagating bad information?

\n

Hygienic practices have been touched on before, but I haven't seen any consensus on this issue. Are the standards for what you personally condition on and what you share in discussion different? What needs a citation and what doesn't? Does anyone have recommendations for ways to better track the sources of evidence, e.g. reference management software?

\n

 

" } }, { "_id": "Fy2b55mLtghd4fQpx", "title": "The Zombie Preacher of Somerset", "pageUrl": "https://www.lesswrong.com/posts/Fy2b55mLtghd4fQpx/the-zombie-preacher-of-somerset", "postedAt": "2009-03-28T22:29:34.968Z", "baseScore": 101, "voteCount": 86, "commentCount": 36, "url": null, "contents": { "documentId": "Fy2b55mLtghd4fQpx", "html": "

Related to: Zombies? Zombies!, Zombie Responses, Zombies: The Movie, The Apologist and the Revolutionary

\n

All disabling accidents are tragic, but some are especially bitter. The high school sports star paralyzed in a car crash. The beautiful actress horribly disfigured in a fire. The pious preacher who loses his soul during a highway robbery.

As far as I know, this last one only happened once, but once was enough. Simon Browne was an early eighteenth century pastor of a large Dissident church. The community loved him for his deep faith and his remarkable intelligence, and his career seemed assured.

One fateful night in 1723, he was travelling from his birthplace in Somerset to his congregation in London when a highway robber accosted the coach carrying him and his friend. With quick reflexes and the element of surprise, Browne and his friend were able to disarm the startled highway robber and throw him to the ground. Browne tried to pin him down while the friend went for help, but in the heat of the moment he used excessive force and choked the man to death. This horrified the poor preacher, who was normally the sort never to hurt a fly.

Whether it was the shock, the guilt, or some unnoticed injury taken in the fight, something strange began to happen to Simon Browne. In his own words, he gradually became:

\n
\n

...perfectly empty of all thought, reflection, conscience, and consideration, entirely destitute of the knowledge of God and Christ, unable to look backward or forward, or inward or outward, having no conviction of sin or duty, no capacity of reviewing his conduct, and, in a word, without any principles of religion or even of reason, and without the common sentiments or affections of human nature, insensible even to the good things of life, incapable of tasting any present enjoyments, or expecting future ones...all body, without so much as the remembrance of the ruins of that mind I was once a tenant in...and the thinking being that was in me is, by a consumption continual, now wholly perished and come to nothing.

\n
\n

Simon Browne had become a p-zombie.

\n

\n

Needless to say, Browne's friends and congregation didn't believe him. Browne seemed as much in possession of his wits as ever. His writing, mostly on abstruse theological topics and ecumenism, if anything accelerated. According to a friend:

\n
\n

What was most extraordinary in his case was this; that, excepting the single point I have mentioned, on which the distraction turned, his imagination was not only more lively, but his judgment was even improved. And it has been observed that, at the very time that he himself imagined he had no rational soul, he was so acute a disputant (his friends said) that he could reason as if he had two souls.

\n
\n

Despite everyone's insistence that he was fine, Simon Browne would have none of it. His soul had gone missing, and no one without a soul was qualified to lead a religious organization. Despite pleas to remain, he quit his job as pastor and retired to the country. After a brief period spent bemoaning his fate, he learned to take it in stride and began writing prodigiously, authoring dictionaries, textbooks on grammar, essays on theology, and even several beautiful hymns still sung in churches today. Did his success convince him he was ensouled after all? No. He claimed:

\n
\n

...only an animal life, in common with brutes, so that though he retained the faculty of speaking in a manner that appeared rational to others, he had all the while no more notion of what he said than a parrot, being utterly divested of consciousness.

\n
\n

And, appreciating the absurdity of his conundrum, asked:

\n
\n

Who, by the most unreasonable and ill-founded conceit in the world, [could] have imagined that a thinking being could, for seven years together, live a stranger to its own powers, exercises, operations, and state?

\n
\n

Considering it pointless to exercise or to protect his own health, he died prematurely in his Somerset house in 1732. His friends mourned a potentially brilliant pastor driven to an early death by an inexplicable insanity.

But was his delusion really inexplicable?

David Berman is probably the top expert on the Simon Browne case, and the author of the only journal article dedicated specifically to the topic: Simon Browne: the soul-murdered theologian (other books that devote some space to Browne can be read here and here). I've been unable to access Berman's paper (if anyone can get it free, please send it to me) but I had the good fortune to be in his Philosophy of Mind class several years ago. If I remember correctly, Dr. Berman had a complex Freudian theory involving repression of erotic feelings. I don't remember enough to do it justice and I'm not going to try. But with all due respect to my former professor, I think he's barking up the wrong tree.

Simon Browne's problem seems strangely similar to neurological illness.

You remember anosognosia, when patients with left-arm paralysis thought their left arms were working just fine? Somatoparaphrenia is a closely related disorder. Your arm is working just fine, but you deny you have an arm at all. It must be someone else's. Some scientists link somatoparaphrenia to Body Integrity Identity Disorder, a condition in which people are desperate to amputate their working limbs for no apparent reason. BIID sufferers are sane enough to recognize that they do currently have a left arm, but it feels alien and unwelcome, and they want it gone.

(according to Wikipedia, one cure being investigated for BIID is squirting cold water in the patient's right ear...)

Somatoparaphrenia is an identity problem - people lose identity with their limbs. That arm might work, but it doesn't seem like it's working for me. Every other rational process remains intact in somatoparaphrenics. A somatoparaphrenic physicist could do quantum calculations while still insisting that someone else's leg was attached to his hip for some reason.

Cotard's Delusion is an even worse condition where the patient insists she is dead or nonexistent. Tantalizingly, patients with Cotard's occasionally use religious language, claiming to have been eternally damned or without a soul - a symptom shared by Simon Browne. Unlike anosognosia and somatoparaphrenia, it is not necessarily caused by stroke - all sorts of things, neurological or psychological, can bring it on. V. S. Ramachandran (yes, him again) theorizes that Cotard's may be a disconnect between certain recognition circuits and certain emotional circuits, preventing the patient from feeling an emotional connection with himself.

Browne reminds me also of \"blindsight\", the phenomenon where a patient is capable of seeing but not consciously aware of doing so. Ask a patient what she sees, and she'll swear she sees nothing - she is, after all, totally blind. Ask a patient to guess which of four quarters of the visual field a light is in, and she'll look at you like an idiot. How should she know? She's blind! Harass the patient until she finally guesses, and she'll get it right, at odds phenomenally greater than chance. Ask her how she knew, and she'll say it was a lucky guess.

Simon Browne sits somewhere in between all of these. Like the Cotard patient, he denied having a self, and considered himself eternally damned. Like the somatoparaphreniac, he completely lost identification with a certain part of himself (in this case, the mind!) and insisted it didn't exist while retaining the ability to use it and to reason accurately in other domains. And like the blindsight patient, he was able to process information at a level usually restricted to conscious experience without any awareness of doing so.

I don't know any diagnosis that exactly fits Browne's symptoms (Cotard's comes close but falls a little short). But the symptoms seem so reminiscent of neurological injury that I would be surprised if Dr. Berman's psychoanalysis was the full story.

So, what does Simon Browne add to the p-zombie debate?

Either nothing or everything. We can easily dismiss him as a complete nutcase, no more accurate in describing his mental states than a schizophrenic is accurate in describing his conversations with angels. Or we can come up with a neurological explanation in which he has conscious experience, but considers it alien to himself.

I acknowledge the possibility, but it rings hollow. Browne's friends were unanimous in describing him as rational and intelligent. And Browne himself was very clear that he had no mental experience whatsoever, not that he had some mental experience that didn't seem like his own.

But if we accept Browne as mostly truthful, it demonstrates consciousness is not an inseparable byproduct of normal mental operation. It is possible to take consciousness, remove it, and have a p-zombie left. Not a perfect Chalmerian p-zombie - Browne made it very clear that he noticed and cared deeply about his loss of consciousness, and didn't go around claiming he was still fully aware or any nonsense like that - but a p-zombie nonetheless.

That is a heck of a conclusion to draw from one poorly studied case (there is rumored to be a second similar case, one Lewis Kennedy, but I can't find information on this one). However, Simon Browne at the very least deserves to be shelved alongside the other scarce and contradictory evidence on this topic. Let's give the poor preacher the last word:

\n
\n

God should still have left me the power of speech, [that] I may at last convince [you] that my case has not been a delusion of fancy, but the most tremendous reality.

\n
\n

 

" } }, { "_id": "aeWRzgJMt2ASQHHiw", "title": "When It's Not Right to be Rational", "pageUrl": "https://www.lesswrong.com/posts/aeWRzgJMt2ASQHHiw/when-it-s-not-right-to-be-rational", "postedAt": "2009-03-28T16:15:15.367Z", "baseScore": 5, "voteCount": 25, "commentCount": 22, "url": null, "contents": { "documentId": "aeWRzgJMt2ASQHHiw", "html": "

By now I expect most of us have acknowledged the importance of being rational.  But as vital as it is to know what principles generally work, it can be even more important to know the exceptions.

\n

As a process of constant self-evaluation and -modification, rationality is capable of adopting new techniques and methodologies even if we don't know how they work.  An 'irrational' action can be rational if we recognize that it functions.  So in an ultimate sense, there are no exceptions to rationality's usefulness.

\n

In a more proximate sense, though, does it have limits?  Are there ever times when it's better *not* to explicitly understand your reasons for acting, when it's better *not* to actively correlate and integrate all your knowledge?

\n

I can think of one such case:  It's often better not to look down.

\n

People who don't spend a lot of time living precariously at the edge of long drops don't develop methods of coping.  When they're unexpectedly forced to such heights, they often look down.  Looking down, subcortical instincts are activated that cause them to freeze and panic, overriding their conscious intentions.  This tends to prevent them from accomplishing whatever goals brought them to that location, and in situations where balance is required for safety, the panic instinct can even cause them to fall.

\n

If you don't look down, you may know intellectually that you're above a great height, but at some level your emotions and instincts aren't as triggered.  You don't *appreciate* the height on a subconscious level, and so while you may know you're in danger and be appropriately nervous, your conscious intentions aren't overridden.  You don't freeze.  You can keep your conscious understanding compartmentalized, not bringing to mind information which you possess but don't wish to be aware of.

\n

The general principle seems to be that it is useful to avoid fully integrated awareness of relevant data if acknowledging that data dissolves your ability to regulate your emotions and instincts.  If they run amok, your reason will be unseated.  Careful application of doublethink, and avoiding confronting emotionally-charged facts that aren't absolutely necessary to respond appropriately to the situation, is probably the best course of action.

\n

If you expect that you're going to be dealing with heights in the future, you can train yourself not to fall into vertigo.  But if you don't have opportunities for training down your reactions, not looking down is the next best thing.

" } }, { "_id": "p5DmraxDmhvMoZx8J", "title": "Church vs. Taskforce", "pageUrl": "https://www.lesswrong.com/posts/p5DmraxDmhvMoZx8J/church-vs-taskforce", "postedAt": "2009-03-28T09:23:25.560Z", "baseScore": 87, "voteCount": 84, "commentCount": 88, "url": null, "contents": { "documentId": "p5DmraxDmhvMoZx8J", "html": "

I am generally suspicious of envying crazy groups or trying to blindly copycat the rhythm of religion—what I called \"hymns to the nonexistence of God\", replying, \"A good 'atheistic hymn' is simply a song about anything worth singing about that doesn't happen to be religious.\"

\n

But religion does fill certain holes in people's minds, some of which are even worth filling.  If you eliminate religion, you have to be aware of what gaps are left behind.

\n

If you suddenly deleted religion from the world, the largest gap left would not be anything of ideals or morals; it would be the church, the community.  Among those who now stay religious without quite really believing in God—how many are just sticking to it from wanting to stay with their neighbors at the church, and their family and friends?  How many would convert to atheism, if all those others deconverted, and that were the price of staying in the community and keeping its respect?  I would guess... probably quite a lot.

\n

In truth... this is probably something I don't understand all that well, myself.  \"Brownies and babysitting\" were the first two things that came to mind.  Do churches lend helping hands in emergencies?  Or just a shoulder to cry on?  How strong is a church community?  It probably depends on the church, and in any case, that's not the correct question.  One should start by considering what a hunter-gatherer band gives its people, and ask what's missing in modern life—if a modern First World church fills only some of that, then by all means let us try to do better.

\n

So without copycatting religion—without assuming that we must gather every Sunday morning in a building with stained-glass windows while the children dress up in formal clothes and listen to someone sing—let's consider how to fill the emotional gap, after religion stops being an option.

\n

To help break the mold to start with—the straitjacket of cached thoughts on how to do this sort of thing—consider that some modern offices may also fill the same role as a church.  By which I mean that some people are fortunate to receive community from their workplaces: friendly coworkers who bake brownies for the office, whose teenagers can be safely hired for babysitting, and maybe even help in times of catastrophe...?  But certainly not everyone is lucky enough to find a community at the office.

\n

Consider further—a church is ostensibly about worship, and a workplace is ostensibly about the commercial purpose of the organization.  Neither has been carefully optimized to serve as a community.

\n

Looking at a typical religious church, for example, you could suspect—although all of these things would be better tested experimentally, than just suspected—

\n\n

By using the word \"optimal\" above, I mean \"optimal under the criteria you would use if you were explicitly building a community qua community\".  Spending lots of money on a fancy church with stained-glass windows and a full-time pastor makes sense if you actually want to spend money on religion qua religion.

\n

I do confess that when walking past the churches of my city, my main thought is \"These buildings look really, really expensive, and there are too many of them.\"  If you were doing it over from scratch... then you might have a big building that could be used for the occasional wedding, but it would be time-shared for different communities meeting at different times on the weekend, and it would also have a nice large video display that could be used for speakers giving presentations, lecturers teaching something, or maybe even showing movies.  Stained glass?  Not so high a priority.

\n

Or to the extent that the church membership lends a helping hand in times of trouble—could that be improved by an explicit rainy-day fund or contracting with an insurer, once you realized that this was an important function?  Possibly not; dragging explicit finance into things changes their character oddly.  Conversely, maybe keeping current on some insurance policies should be a requirement for membership, lest you rely too much on the community...  But again, to the extent that churches provide community, they're trying to do it without actually admitting that this is nearly all of what people get out of it.  Same thing with the corporations whose workplaces are friendly enough to serve as communities; it's still something of an accidental function.

\n

Once you start thinking explicitly about how to give people a hunter-gatherer band to belong to, you can see all sorts of things that sound like good ideas.  Should you welcome the newcomer in your midst?  The pastor may give a sermon on that sometime, if you think church is about religion.  But if you're explicitly setting out to build community—then right after a move is when someone most lacks community, when they most need your help.  It's also an opportunity for the band to grow.  If anything, tribes ought to be competing at quarterly exhibitions to capture newcomers.

\n

But can you really have a community that's just a community—that isn't also an office or a religion?  A community with no purpose beyond itself?

\n

Maybe you can.  After all, did hunter-gatherer tribes have any purposes beyond themselves?—well, there was survival and feeding yourselves; that was a purpose.

\n

But anything that people have in common, especially any goal they have in common, tends to want to define a community.  Why not take advantage of that?

\n

Though in this age of the Internet, alas, too many binding factors have supporters too widely distributed to form a decent band—if you're the only member of the Church of the Subgenius in your city, it may not really help much.  It really is different without the physical presence; the Internet does not seem to be an acceptable substitute at the current stage of the technology.

\n

So to skip right to the point—

\n

Should the Earth last so long, I would like to see, as the form of rationalist communities, taskforces focused on all the work that needs doing to fix up this world.  Communities in any geographic area would form around the most specific cluster that could support a decent-sized band.  If your city doesn't have enough people in it for you to find 50 fellow Linux programmers, you might have to settle for 15 fellow open-source programmers... or in the days when all of this is only getting started, 15 fellow rationalists trying to spruce up the Earth in their assorted ways.

\n

That's what I think would be a fitting direction for the energies of communities, and a common purpose that would bind them together.  Tasks like that need communities anyway, and this Earth has plenty of work that needs doing, so there's no point in waste.  We have so much that needs doing—let the energy that was once wasted into the void of religious institutions, find an outlet there.  And let purposes admirable without need for delusion, fill any void in the community structure left by deleting religion and its illusionary higher purposes.

\n

Strong communities built around worthwhile purposes:  That would be the shape I would like to see for the post-religious age, or whatever fraction of humanity has then gotten so far in their lives.

\n

Although... as long as you've got a building with a nice large high-resolution screen anyway, I wouldn't mind challenging the idea that all post-adulthood learning has to take place in distant expensive university campuses with teachers who would rather be doing something else.  And it's empirically the case that colleges seem to support communities quite well.  So in all fairness, there are other possibilities for things you could build a post-theistic community around.

\n

Is all of this just a dream?  Maybe.  Probably.  It's not completely devoid of incremental implementability, if you've got enough rationalists in a sufficiently large city who have heard of the idea.  But on the off-chance that rationality should catch on so widely, or the Earth should last so long, and that my voice should be heard, then that is the direction I would like to see things moving in—as the churches fade, we don't need artificial churches, but we do need new idioms of community.

" } }, { "_id": "p7WXmG6Fbo3eaSwm3", "title": "Defense Against The Dark Arts: Case Study #1", "pageUrl": "https://www.lesswrong.com/posts/p7WXmG6Fbo3eaSwm3/defense-against-the-dark-arts-case-study-1", "postedAt": "2009-03-28T02:31:18.306Z", "baseScore": 149, "voteCount": 149, "commentCount": 54, "url": null, "contents": { "documentId": "p7WXmG6Fbo3eaSwm3", "html": "

Related to: The Power of Positivist Thinking, On Seeking a Shortening of the Way, Crowley on Religious Experience

\n

Annoyance wants us to stop talking about fancy techniques and get back to basics. I disagree with the philosophy behind his statement, but the principle is sound. In many areas of life - I'm thinking mostly of sports, but not for lack of alternatives - mastery of the basics beats poorly-grounded fancy techniques every time.

One basic of rationality is paying close attention to an argument. Dissecting it to avoid rhetorical tricks, hidden fallacies, and other Dark Arts.  I've been working on this for years, and I still fall short on a regular basis.

Medical educators have started emphasizing case studies in their curricula. Instead of studying arcane principles of disease, student doctors cooperate to analyze a particular patient in detail, enumerate the principles needed to diagnose her illness, and pay special attention to any errors the patient's doctors made during the treatment. The cases may be rare tropical infections, but they're more often the same everyday diseases common in the general population, forcing the student doctors to always keep the basics in mind. We could do with a tradition of case studies in rationality, though we'd need safeguards to prevent degeneration into political discussion.

Case studies in medicine are most interesting when all the student doctors disagree with each other. To that end, I've chosen as the first case a statement that received sixteen upvotes on Less Wrong, maybe the highest I've ever seen for a comment. I don't mean to insult or embarrass everyone who liked it. I liked it too. My cursor was already hovering above the \"Vote Up\" button by the time I started having second thoughts. But it deserves dissection, and its popularity gives me a ready response when someone says this material is too basic for 'master rationalists' like ourselves:

\n
\n

In his youth, Steve Jobs went to India to be enlightened. After seeing that the nation claiming to be the source of this great spiritual knowledge was full of hunger, ignorance, squalor, poverty, prejudice, and disease, he came back and said that the East should look to the West for enlightenment.

\n
\n

This anecdote is short, witty, flattering, and utterly opaque to reason. It bears all the hallmarks of the Dark Arts.

\n


I admit I am not a disinterested party here. The statement was in response to my claim that Indian yoga was a successful technique for inducing exotic and occasionally useful mental states. I don't like being told I'm wrong any more than anyone else does. But here I don't think I am. I see at least five fallacies.

First, a hidden assumption: if A is superior to B, A cannot learn anything from B. This assumption is clearly false. I know brilliant scientists whose spelling is atrocious. I acknowledge that these people are much smarter than I am, but I still correct their spelling. Anyone who said \"Dr. A should not be learning spelling from Yvain, Yvain should be learning science from Dr. A\" would be missing the point. If Dr. A wants to learn spelling, he might as well learn it from me. And best of all if we both learn from each other!

A related fallacy would be that Dr. A is so much smarter than the rest of us that he should not care about spelling. But if spelling is important to his work (perhaps he's writing a journal article) he needs to do everything he can to perfect it. If he could spell correctly, he would be even further ahead of the rest of us than he already is. The goal isn't to become a bit better than your peers and then rest on your laurels. The goal is to become as skilled as necessary.

The error is an interesting variant of the halo effect: that anyone superior at most things must be superior at all things.

Second, the statement assumes that India is a single monolithic entity with or without spiritual wisdom. But even the most gushing Orientalist would not study at the feet of a call-centre worker in Bangalore. Whatever spiritual wisdom may exist in India, it will be believed by a small fraction of Indian religions, be practiced by a small fraction of the believers, and be mastered by a small number of the practitioners. And if Crowley is to be believed, it will be understood by a small fraction of the masters.

Compare the question: if America is so good at science, why does it have so many creationists? Well, because the people who are good at science aren't the same ones believing in creationism, that's why. And the people who are good at science don't have enough power in society to do anything about the creationism issue. This does not reflect poorly on the truth-value of scientific theories discovered by Americans.

I'm not one of those fallacy classification nuts, but for completeness' sake, this is a fallacy of composition.

Third, the statement assumes that spiritual wisdom makes people less poor and squalid. The converse of this statement certainly isn't true - being rich and sanitary doesn't give you any spiritual value, as large segments of western civilization have spent the past three hundred years amply demonstrating. People commonly interpret spiritual wisdom as conferring a disdain for material goods. So we wouldn't necessarily expect to see a lot of material well-being in a spiritually wise society.

Part of this is a problem with the definition of \"spiritual wisdom\". It can mean anything from \"being a moral person who cares about others\" to \"being wise and able to make good decisions\" to \"having mastery of certain mental techniques that produce awe-inspiring experiences\". Under the first and second definition, a spiritually attained country should be a nice place to live. Under the third definition, not so much. Crowley endorses the third definition, and believes that most spiritually wise people dismiss the mundane world as unworthy of their attention anyway. But this contradicts our usual intuitions about \"spirituality\" and \"wisdom\".

This is a failure of definition, and it's why I prefer \"high level of mystical attainment\" to \"spiritually wise\" when discussing Crowley's theories.

Fourth, this is hardly a controlled experiment. India is historically, geographically, racially, religiously, climatologically, and culturally different from the West. Attributing a certain failure to religious causes alone is highly dubious. In fact, when we think about it for a while, cramming a billion plus people into a sweltering malarial flood plain, dividing them evenly between two religions that hate each other's guts, then splitting off the northwest corner and turning it into a large populous nuclear-armed arch-enemy that declares war on them every couple of decades is probably not a recipe for success no matter what your spirituality. All we can say for certain is that India's spirituality is not sufficiently wonderful to overcome its other disadvantages.

People who like Latin call this cum hoc ergo propter hoc.

Fifth, this equivocates the heck out of the word \"enlightenment\". Compare \"enlightenment\" meaning the set of rational values associated with Newton, Descartes, and Hume, to \"enlightenment\", meaning gaining important knowledge, to \"enlightenment\", meaning achieving a state of nirvana free from worldly desire. The West is the acknowledged master of the first definition, and India the acknowledged master of the third definition. The anecdote's claim seems to be that since the West is the acknowledged master of the first type of enlightenment, and could teach India some useful things about politics and economics in the second sense of enlightenment, India can't teach the West about the third sense of enlightenment...which would make sense, if the types of enlightenment were at all related instead of being three different things called by the same name.

This is a fallacy of equivocation.

Just because I can point out a few fallacies in a statement doesn't make it worthless. Spiritual wisdom doesn't always correlate with decent living conditions, but the lack of decent living conditions is some evidence against the presence of spiritual wisdom. Likewise, a country's success or failure doesn't always depend on its religion, but religion is one of many contributing factors that does make a difference.

Still, five fallacies is a lot for a two sentence anecdote.

I don't think we all liked this anecdote so much because of whatever tiny core of usefulness managed to withstand those five fallacies. I think we liked it because it makes a good way to shut up hippies.

Hippies are always going on about how superior India is to the West in every way because of its \"spirituality\" and such, and how many problems are caused by \"spiritually bankrupt\" Western science. And here we are, people who quite like Western science, rolling our eyes at how stupid the hippie is being. Doesn't she realize that Western science gives her all of the comforts that make her life bearable, from drinkable water to lice-free clothing? And this anecdote - it strikes a blow for our team. It makes us feel good. We don't need to look to India for enlightenment! India should look to us! Take that, hippie!

But reversed stupidity is not intelligence. Just because the hippie is wrong about India, doesn't mean we have to be wrong in the opposite direction. It might be useful to share it with this hypothetical hippie, just to start her thinking. But it's not something we can seriously endorse.

Nor do I accept the defense that it was not specifically posted with the conclusion \"Therefore, ignore Crowley's views on yoga.\" Merely placing it directly below an article on enlightenment from India is a declaration of war and a hijack attempt on the train of thought. Saying \"I hear people of African descent have a higher violent crime rate\" is not a neutral act when spoken right before a job interview with a black person.

Defense Against the Dark Arts needs to become total and automatic, because it is the foundation upon which the complicated rationalist techniques are built. There's no point studying some complex Bayesian evidence-summing maneuver that could determine the expected utility of studying yoga if an anecdote about Steve Jobs can keep you from even considering it.

How do you know you have mastered this art? When the statements

\n
\n

In his youth, Steve Jobs went to India to be enlightened. After seeing that the nation claiming to be the source of this great spiritual knowledge was full of hunger, ignorance, squalor, poverty, prejudice, and disease, he came back and said that the East should look to the West for enlightenment.

\n
\n

and

\n
\n

For complex historical reasons, the average Westerner is richer than the average Indian. Therefore, there is minimal possibility that any Indian people ever discovered interesting mental techniques.

\n
\n

sound exactly alike.

" } }, { "_id": "DxBHRkpkTZuAxBE66", "title": "The Hidden Origins of Ideas", "pageUrl": "https://www.lesswrong.com/posts/DxBHRkpkTZuAxBE66/the-hidden-origins-of-ideas", "postedAt": "2009-03-28T02:27:57.531Z", "baseScore": 5, "voteCount": 15, "commentCount": 7, "url": null, "contents": { "documentId": "DxBHRkpkTZuAxBE66", "html": "

 

\r\n

It is well known that people tend to inherit their world view together with their genes.  Buddhists are born to the Buddhists, Muslims are born to the Muslims and Republicans are born to the Republicans. While rejecting Predestination, a sixteenth-century Catholic could be fairly certain that, unlike hell-bound pagans in the Amazonian forests, most of his descendants would also be Catholics.

\r\n

Naturally independent minds can occasionally break with the tradition. A Catholic, finding the Pope’s stance on Predestination inconsistent with the Scriptures, might turn to Protestantism. Hence, the invention of the printing press that made Bibles widely available may have been the root cause of the Reformation. Similarly, the spread of literacy to the lower classes may have eroded the influence of the church and popularized the secular ideologies, such as Marxism.

\r\n

But could it be that when we break with the traditional mode of thinking, we are driven not by superior intellects or newly acquired knowledge, but rather by something we are not even aware of? Let’s take as an example the spread of the seemingly unrelated ideologies of Protestantism and Marxism.

\r\n

 

\r\n

 \"\"

\r\n

 

\r\n

From left to right: The European countries painted blue are those with a Germanic majority, those with large numbers of Protestants (>45% of all believers), and those where the communist electoral vote failed to rise above 10% within the last 60 years.

\r\n

While the maps are not identical, there seems to be a strong correlation between peoples’ ethnic origins, their religious histories and their openness to communist ideas. Of course, correlation does not imply causation. However, a strong correlation between our views and those of people with a similar background may suggest that factors other than logic are responsible for them. Unless, as in my case, a similar background means smarter / more virtuous / more rational / getting secret revelations from Omega / … (circle the right answer).

\r\n

 

\r\n

 

" } }, { "_id": "tSsQzfwSxSpmKTAWJ", "title": "Less Wrong Facebook Page", "pageUrl": "https://www.lesswrong.com/posts/tSsQzfwSxSpmKTAWJ/less-wrong-facebook-page", "postedAt": "2009-03-27T22:35:38.661Z", "baseScore": 10, "voteCount": 17, "commentCount": 42, "url": null, "contents": { "documentId": "tSsQzfwSxSpmKTAWJ", "html": "

At Tom Talbot's suggestion, I have created a Less Wrong Facebook group, in hopes that being able to see one another's faces will improve group bonding.

" } }, { "_id": "S38KtkKeotQrepR8G", "title": "Altruist Coordination -- Central Station", "pageUrl": "https://www.lesswrong.com/posts/S38KtkKeotQrepR8G/altruist-coordination-central-station", "postedAt": "2009-03-27T22:24:58.196Z", "baseScore": 7, "voteCount": 10, "commentCount": 13, "url": null, "contents": { "documentId": "S38KtkKeotQrepR8G", "html": "

Related to: Can Humanism Match Religion's Output?

\n

I thought it would be helpful for us to have a central space to pool information about various organizations to which we might give our money and/or time.  Honestly, a wiki would be ideal, but it seems this should do nicely.

\n

Comment to this post with the name of an organization, and a direct link to where we can donate to them.  Provide a summary of the group's goals, and their plans for reaching them.  If you can link to outside confirmation of the group's efficiency and effectiveness, please do so.

\n

Respond to these comments adding information about the named group, whether to criticize or praise it.

\n

Hopefully with the voting system, we should be able to collect the most relevant information we have available reasonably quickly.

\n

If you choose to contribute to a group, respond to that group's comment with a dollar amount, so that we can all see how much we have raised for each organization.

\n

Feel free to replace \"dollar amount\" with \"dollar amount/month\" in the above, if you wish to make such a commitment.  Please do not do this unless you are (>95%) confident that said commitment will last at least a year.

\n

If possible, mention this page, or this site, while donating.

" } }, { "_id": "AvTmLRperBRXyeqL9", "title": "On Seeking a Shortening of the Way", "pageUrl": "https://www.lesswrong.com/posts/AvTmLRperBRXyeqL9/on-seeking-a-shortening-of-the-way", "postedAt": "2009-03-27T17:11:52.039Z", "baseScore": 12, "voteCount": 42, "commentCount": 42, "url": null, "contents": { "documentId": "AvTmLRperBRXyeqL9", "html": "

\"The most instructive experiences are those of everyday life.\"  - Friedrich Nietzsche

\n

What is it that the readers of lesswrong are looking for?  One claim that's been repeated frequently is that we're looking for rationality tricks, shortcuts and clever methods for being rational.  Problem is:  there aren't any.

\n

People generally want novelty and gimmicks.  They're exciting and interesting!  Useful advice tends to be dull, tedious, and familiar.  We've heard it all before, and it sounded like a lot of hard work and self-discipline.  If we want to lose weight, we don't do the sensible and quite difficult thing and eat a balanced diet while increasing our levels of exercise.  We try fad diets and eat nothing but grapefruits for a week, or we gorge ourselves on meats and abhor carbohydrates so that our metabolisms malfunction.  We lose weight that way, so clearly it's just as good as exercising and eating properly, right?

\n

We cite Zen stories but don't take the time and effort to research their contexts, while at the same time sniggering at the actual beliefs inherent in that system.  We wax rhapsodic about psychedelics and dismiss the value of everyday experiences as trivial - and handwave away praise of the mundane as utilization of \"applause lights\".

\n

We talk about the importance of being rational, but don't determine what's necessary to do to become so.

\n

Some of the greatest thinkers of the past had profound insights after paying attention to parts of everyday life that most people don't give a second thought.  Archimedes realized how to determine the volume of a complex solid while lounging in a bath.  Galileo recognized that pendulums could be used to reliably measure time while letting his mind drift in a cathedral.

\n

Sure, we're not geniuses, so why try to pay attention to ordinary things?  Shouldn't we concern ourselves with the novel and extraordinary instead?

\n

Maybe we're not geniuses because we don't bother paying attention to ordinary things.

" } }, { "_id": "3fNL2ssfvRzpApvdN", "title": "Can Humanism Match Religion's Output?", "pageUrl": "https://www.lesswrong.com/posts/3fNL2ssfvRzpApvdN/can-humanism-match-religion-s-output", "postedAt": "2009-03-27T11:32:29.359Z", "baseScore": 84, "voteCount": 84, "commentCount": 116, "url": null, "contents": { "documentId": "3fNL2ssfvRzpApvdN", "html": "

Perhaps the single largest voluntary institution of our modern world—bound together not by police and taxation, not by salaries and managers, but by voluntary donations flowing from its members—is the Catholic Church.

\n

It's too large to be held together by individual negotiations, like a group task in a hunter-gatherer band.  But in a larger world with more people to be infected and faster transmission, we can expect more virulent memes.  The Old Testament doesn't talk about Hell, but the New Testament does.  The Catholic Church is held together by affective death spirals—around the ideas, the institutions, and the leaders.  By promises of eternal happiness and eternal damnation—theologians don't really believe that stuff, but many ordinary Catholics do.  By simple conformity of people meeting in person at a Church and being subjected to peer pressure.  &c.

\n

We who have the temerity to call ourselves \"rationalists\", think ourselves too good for such communal bindings.

\n

And so anyone with a simple and obvious charitable project—responding with food and shelter to a tidal wave in Thailand, say—would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists.

\n

For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits.

\n

True, the Catholic Church also goes around opposing the use of condoms in AIDS-ravaged Africa.  True, they waste huge amounts of the money they raise on all that religious stuff.  Indulging in unclear thinking is not harmless, prayer comes with a price.

\n

To refrain from doing damaging things, is a true victory for a rationalist...

\n

Unless it is your only victory, in which case it seems a little empty.

\n

If you discount all harm done by the Catholic Church, and look only at the good... then does the average Catholic do more gross good than the average atheist, just by virtue of being more active?

\n

Perhaps if you are wiser but less motivated, you can search out interventions of high efficiency and purchase utilons on the cheap...  But there are few of us who really do that, as opposed to planning to do it someday.

\n

Now you might at this point throw up your hands, saying:  \"For so long as we don't have direct control over our brain's motivational circuitry, it's not realistic to expect a rationalist to be as strongly motivated as someone who genuinely believes that they'll burn eternally in hell if they don't obey.\"

\n

This is a fair point.  Any folk theorem to the effect that a rational agent should do at least as well as a non-rational agent will rely on the assumption that the rational agent can always just implement whatever \"irrational\" policy is observed to win.  But if you can't choose to have unlimited mental energy, then it may be that some false beliefs are, in cold fact, more strongly motivating than any available true beliefs.  And if we all generally suffer from altruistic akrasia, being unable to bring ourselves to help as much as we think we should, then it is possible for the God-fearing to win the contest of altruistic output.

\n

But though it is a motivated continuation, let us consider this question a little further.

\n

Even the fear of hell is not a perfect motivator.  Human beings are not given so much slack on evolution's leash; we can resist motivation for a short time, but then we run out of mental energy (HT: infotropism).  Even believing that you'll go to hell does not change this brute fact about brain circuitry.  So the religious sin, and then are tormented by thoughts of going to hell, in much the same way that smokers reproach themselves for being unable to quit.

\n

If a group of rationalists cared a lot about something... who says they wouldn't be able to match the real, de-facto output of a believing Catholic?  The stakes might not be \"infinite\" happiness or \"eternal\" damnation, but of course the brain can't visualize 3^^^3, let alone infinity.  Who says that the actual quantity of caring neurotransmitters discharged by the brain (as 'twere) has to be so much less for \"the growth and flowering of humankind\" or even \"tidal-wave-stricken Thais\", than for \"eternal happiness in Heaven\"?  Anything involving more than 100 people is going to involve utilities too large to visualize.  And there are all sorts of other standard biases at work here; knowing about them might be good for a bonus as well, one hopes?

\n

Cognitive-behavioral therapy and Zen meditation are two mental disciplines experimentally shown to yield real improvements.  It is not the area of the art I've focused on developing, but then I don't have a real martial art of rationality in back of me.  If you combine a purpose genuinely worth caring about, with discipline extracted from CBT and Zen meditation, then who says rationalists can't keep up?  Or even more generally: if we have an evidence-based art of fighting akrasia, with experiments to see what actually works, then who says we've got to be less motivated than some disorganized mind that fears God's wrath?

\n

Still... that's a further-future speculation that it might be possible to develop an art that doesn't presently exist.  It's not a technique I can use right now.  I present it just to illustrate the idea of not giving up so fast on rationality:  Understanding what's going wrong, trying intelligently to fix it, and gathering evidence on whether it worked—this is a powerful idiom, not to be lightly dismissed upon sighting the first disadvantage.

\n

Really, I suspect that what's going on here has less to do with the motivating power of eternal damnation, and a lot more to do with the motivating power of physically meeting other people who share your cause.  The power, in other words, of being physically present at church and having religious neighbors.

\n

This is a problem for the rationalist community in its present stage of growth, because we are rare and geographically distributed way the hell all over the place.  If all the readers of this blog lived within a 5-mile radius of each other, I bet we'd get a lot more done, not for reasons of coordination but just sheer motivation.

\n

I'll post tomorrow about some long-term, starry-eyed, idealistic thoughts on this particular problem.  Shorter-term solutions that don't rely on our increasing our numbers by a factor of 100 would be better.  I wonder in particular whether the best modern videoconferencing software would provide some of the motivating effect of meeting someone in person; I suspect the answer is \"no\" but it might be worth trying.

\n

Meanwhile... in the short-term, we're stuck fighting akrasia mostly without the reinforcing physical presence of other people who care.  I want to say something like \"This is difficult, but it can be done\" except I'm not sure that's even true.

\n

I suspect that the largest step rationalists could take toward matching the per-capita power output of the Catholic Church would be to have regular physical meetings of people contributing to the same task—not for purposes of coordination, just for purposes of motivation.

\n

In the absence of that...

\n

We could try for a group norm of being openly allowed—nay, applauded—for caring strongly about something.  And a group norm of being expected to do something useful with your life—contribute your part to cleaning up this world.  Religion doesn't really emphasize the getting-things-done aspect as much.

\n

And if rationalists could match just half the average altruistic effort output per Catholic, then I don't think it's remotely unrealistic to suppose that with better targeting on more efficient causes, the modal rationalist could get twice as much done.

\n

How much of its earnings does the Catholic Church spend on all that useless religious stuff instead of actually helping people?  More than 50%, I would venture.  So then we could say—with a certain irony, though that's not quite the spirit in which we should be doing things—that we should try to propagate a group norm of donating a minimum of 5% of income to real causes.  (10% being the usual suggested minimum religious tithe.)  And then there's the art of picking causes for which expected utilons are orders of magnitude cheaper (for so long as the inefficient market in utilons lasts).
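As a back-of-envelope illustration of the arithmetic being gestured at here (a sketch with made-up numbers, not the author's figures): treat per-capita output as effort, times the fraction of that effort reaching real causes, times the cost-effectiveness of the causes chosen.

```python
# Back-of-envelope sketch of the comparison above; every number is an illustrative
# assumption, not data.
catholic_effort = 1.0           # normalize the average Catholic's altruistic effort to 1
catholic_fraction_useful = 0.5  # 'more than 50%' spent on religious overhead
catholic_efficiency = 1.0       # baseline cost-effectiveness of the chosen causes

rationalist_effort = 0.5        # 'just half the average altruistic effort output per Catholic'
rationalist_fraction_useful = 1.0
rationalist_efficiency = 2.0    # modest stand-in for better targeting of efficient causes

catholic_output = catholic_effort * catholic_fraction_useful * catholic_efficiency
rationalist_output = rationalist_effort * rationalist_fraction_useful * rationalist_efficiency

print(catholic_output, rationalist_output)  # 0.5 vs. 1.0 under these assumptions
```

Under those made-up numbers the rationalist ends up with twice the effective output, which is all the paragraph above claims; the real question is whether the assumed effort and efficiency figures can actually be achieved.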

\n

But long before we can begin to dream of any such boast, we secular humanists need to work on at least matching the per capita benevolent output of the worshippers.

" } }, { "_id": "vhxywjnBH6ioRnnt3", "title": "Crowley on Religious Experience", "pageUrl": "https://www.lesswrong.com/posts/vhxywjnBH6ioRnnt3/crowley-on-religious-experience", "postedAt": "2009-03-26T22:59:39.625Z", "baseScore": 41, "voteCount": 46, "commentCount": 89, "url": null, "contents": { "documentId": "vhxywjnBH6ioRnnt3", "html": "

Reply to: The Sacred Mundane, BHTV: Yudkowsky vs. Frank on \"Religious Experience\"

\n

Edward Crowley was a man of many talents. He studied chemistry at Cambridge - a period to which he later attributed his skeptical scientific outlook - but he soon abandoned the idea of a career in science and turned to his other passions. For a while he played competitive chess at the national level. He took to mountain-climbing, and became one of the early 20th century's premier mountaineers, co-leading the first expedition to attempt K2 in the Himalayas. He also enjoyed writing poetry and travelling the world, making it as far as Nepal and Burma in an era when steamship was still the fastest mode of transportation and British colonialism was still a thin veneer over dangerous and poorly-explored areas.

But his real interest was mysticism. He travelled to Sri Lanka, where he studied meditation and yoga under some of the great Hindu yogis. After spending several years there, he achieved a state of mystical attainment the Hindus call dhyana, and set about trying to describe and promote yoga to the West.

\n

He was not the first person to make the attempt, but he was certainly the most interesting. Although his parents were religious fanatics and his father a fundamentalist preacher, he himself had been an atheist since childhood, and he considered the vast majority of yoga to be superstitious claptrap. He set about eliminating all the gods and chants and taboos and mysterian language, ending up with a short system of what he considered empirically validated principles for gaining enlightenment in the most efficient possible way.

\n

Reading Crowley's essay on mysticism and yoga at age seventeen rewrote my view of religion. I had always wondered about eastern religions like Buddhism and Hinduism, which seemed to have some underlying truth to all their talk of \"enlightenment\" and \"meditation\" but which seemed too vague and mysterious for my liking. Crowley stripped the mystery away in one fell swoop.

\n

\n

When listening to Eliezer debate Adam Frank on \"religious experience\", I was disappointed but not surprised to hear just how little they had to say. Even Frank, who was fascinated enough to write a book about it, considered it little more than a sense that something was inspiring or especially impressive. I quoted a bit of Crowley's essay on the thread, and people seemed to like it and want to know more.

But I am very reluctant to share, and do so now only after being specifically requested by a few people. You see, I have been trying to paint a sympathetic view of Crowley over the past few paragraphs. With the unsympathetic view you are familiar already. Under his nickname \"Aleister\", he wrote some of history's most influential occultist works. Even in this domain, he held himself to a high rationalist standard, recording that he tested each spell and ritual beforehand and passed on only the ones that actually worked as advertised.

...I don't know what that means either. Either he was one of those psychopaths gifted with the ability to lie perfectly and absolutely, or a psychotic genius able to induce hallucinations in himself at will. Crowley himself occasionally endorsed this latter explanation, but after pondering it a while decided he didn't care. The important thing, he wrote, was to determine what techniques produced what results. After that, the philosophers could determine whether they were physically or mentally mediated. Besides, he said, the entities he summoned were so different from himself that if they represented faculties of his mind, they were ones to which he had no conscious access.

My point is that I am going to link you to Crowley's essay on mysticism, yoga, and religious experience, and that you might get more out of it if you tried to avoid any bias upon seeing the name \"Aleister Crowley\" on the title page. Yes, I feel properly guilty posting this on a rationalism site, but if we're going to talk about religious experience we might as well listen to the people who have had some.

Although it is Less Wrong tradition to rewrite a theory rather than simply link to it, it would be inappropriate in this case. Getting Crowley filtered would be like having someone summarize Gödel, Escher, Bach to you - you might learn a few things, but you'd lose the chance to enjoy the superb writing. It's a long essay, but not so long you can't read it in one sitting. Even just reading the Preface gives an idea of the theory. Without further ado: Crowley on Religious Experience.

I post this essay to clarify why I believe three things. First, that both Eliezer and Adam miss the point of religious experience. Second, that certain seemingly supernatural or silly beliefs can be more reasonable than they appear (see for example Crowley's explanation of religious laws on \"virtue\" and \"purity\"). Third, that some mystics'  work is of sufficient relevance to rationalists to be worth study.

" } }, { "_id": "37XbA3ybyLSzMHZm9", "title": "The Mind Is Not Designed For Thinking", "pageUrl": "https://www.lesswrong.com/posts/37XbA3ybyLSzMHZm9/the-mind-is-not-designed-for-thinking", "postedAt": "2009-03-26T21:57:50.014Z", "baseScore": 9, "voteCount": 10, "commentCount": 7, "url": null, "contents": { "documentId": "37XbA3ybyLSzMHZm9", "html": "

There's an interesting article in the latest issue of American Educator: \"Why Don't Students Like School? Because the Mind Is Not Designed For Thinking\" (pdf).

\n

The general subject is cached thoughts.

" } }, { "_id": "CcjcCYYEB5KNHCpEZ", "title": "Sleeping Beauty gets counterfactually mugged", "pageUrl": "https://www.lesswrong.com/posts/CcjcCYYEB5KNHCpEZ/sleeping-beauty-gets-counterfactually-mugged", "postedAt": "2009-03-26T11:44:04.156Z", "baseScore": 6, "voteCount": 19, "commentCount": 34, "url": null, "contents": { "documentId": "CcjcCYYEB5KNHCpEZ", "html": "

Related to: Counterfactual Mugging, Newcomb's Problem and Regret of Rationality

\n

Omega is continuing his eternal mission: To explore strange new philosophical systems... To seek out new paradoxes and new counterfactuals... To boldly go where no decision theory has gone before.

\n

In his usual totally honest, quasi-omniscient, slightly sadistic incarnation, Omega has a new puzzle for you, and it involves the Sleeping Beauty problem as a bonus.

\n

He will offer a similar deal to that in the counterfactual mugging: he will flip a coin, and if it comes up tails, he will come round and ask you to give him £100.

\n

If it comes up heads, instead he will simulate you, and check whether you would give him the £100 if asked (as usual, the use of a randomising device in the decision is interpreted as a refusal). From this counterfactual, if you would give him the cash, he’ll send you £260; if you wouldn’t, he’ll give you nothing.

\n

Two things are different from the original setup, both triggered if the coin toss comes up tails: first of all, if you refuse to hand over any cash, he will give you an extra £50 compensation. Second of all, if you do give him the £100, he will force you to take a sedative and an amnesia drug, so that when you wake up the next day, you will have forgotten about the current day. He will then ask you to give him the £100 again.

\n

To keep everything fair and balanced, he will feed you the sedative and the amnesia drug whatever happens (but will only ask you for the £100 a second time if you agreed to give it to him the first time).

\n

Would you want to precommit to giving Omega the cash, if he explained everything to you? The odds say yes: precommitting to handing over the £100 will give you an expected return of 0.5 x £260 + 0.5 x (-£200) = £30, while precommitting to a refusal gives you an expected return of 0.5 x £0 + 0.5 x £50 = £25.
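
For readers who want to check the arithmetic, here is a minimal sketch in Python (mine, not part of the original setup; the variable names are invented) that reproduces the two precommitment expectations:

```python
# Precommitment expected values for the modified counterfactual mugging.
# Heads: Omega simulates you and pays £260 only if you would hand over the cash.
# Tails: you are asked for £100 on two successive days if you keep giving,
#        or paid £50 compensation if you refuse.
P_HEADS = P_TAILS = 0.5

ev_precommit_give = P_HEADS * 260 + P_TAILS * (-100 - 100)   # 30.0
ev_precommit_refuse = P_HEADS * 0 + P_TAILS * 50             # 25.0

print(ev_precommit_give, ev_precommit_refuse)
```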

\n

But now consider what happens at the moment when he actually asks you for the cash.

\n

A standard way to approach these types of problems is to act as if you didn’t know whether you were the real you or the simulated you. This avoids a lot of complications and gets you to the heart of the problem. Here, if you decide to give Omega the cash, there are three situations you can be in: the simulation, reality on the first day, or reality on the second day. The Dutch book odds of being in any of these three situations are the same, 1/3. So the expected return is 1/3(£260-£100-£100) = £20, twenty of her majesty’s finest English pounds.

\n

However, if you decide to refuse the hand-over, then you are in one of two situations: the simulation, or reality on the first day (as you will not get asked on the second day). The Dutch book odds are even, so the expected return is 1/2(£0+£50) = £25, a net profit of £5 over accepting.
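
The same check for the moment of decision, using the equal odds over indistinguishable situations described above (again just an illustrative sketch):

```python
# Expected returns at the moment Omega actually asks.
# Giving: you could be the simulation (+260), reality day one (-100),
# or reality day two (-100), each with Dutch book odds of 1/3.
give_outcomes = [260, -100, -100]
ev_give_now = sum(give_outcomes) / len(give_outcomes)        # 20.0

# Refusing: only the simulation (0) or reality day one (+50), odds 1/2 each.
refuse_outcomes = [0, 50]
ev_refuse_now = sum(refuse_outcomes) / len(refuse_outcomes)  # 25.0

print(ev_give_now, ev_refuse_now)
```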

\n

So even adding ‘simulated you’ as an extra option, a hack that solves most Omega type problems, does not solve this paradox: the option you precommit to has the lower expected returns when you actually have to decide.

\n

Note that if you depart from the Dutch book odds (what did the Dutch do to deserve to be immortalised in that way, incidentally?), then Omega can put you in situations where you lose money with certainty.

\n

So, what do you do?

\n

 

" } }, { "_id": "Q8evewZW5SeidLdbA", "title": "Your Price for Joining", "pageUrl": "https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining", "postedAt": "2009-03-26T07:16:21.397Z", "baseScore": 97, "voteCount": 89, "commentCount": 58, "url": null, "contents": { "documentId": "Q8evewZW5SeidLdbA", "html": "

In the Ultimatum Game, the first player chooses how to split $10 between themselves and the second player, and the second player decides whether to accept the split or reject it—in the latter case, both parties get nothing.  So far as conventional causal decision theory goes (two-box on Newcomb's Problem, defect in Prisoner's Dilemma), the second player should prefer any non-zero amount to nothing.  But if the first player expects this behavior—accept any non-zero offer—then they have no motive to offer more than a penny.  As I assume you all know by now, I am no fan of conventional causal decision theory.  Those of us who remain interested in cooperating on the Prisoner's Dilemma, either because it's iterated, or because we have a term in our utility function for fairness, or because we use an unconventional decision theory, may also not accept an offer of one penny.
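
As a toy illustration of the payoff structure just described (a sketch of mine, not part of the post), here is the game in a few lines of Python; a responder who accepts any non-zero amount leaves the proposer free to offer a single penny:

```python
# Toy Ultimatum Game: the proposer splits a $10 pot, the responder accepts or rejects.
def payoffs(offer_to_responder, accept):
    """Return (proposer, responder) payoffs; rejection destroys the whole pot."""
    if not accept:
        return (0.0, 0.0)
    return (10.0 - offer_to_responder, offer_to_responder)

def cdt_responder(offer):
    """Accept any non-zero amount, as conventional causal decision theory suggests."""
    return offer > 0

# Given that response rule, the proposer's best move is the smallest positive offer.
offers = [cents / 100 for cents in range(1, 1001)]
best = min(o for o in offers if cdt_responder(o))
print(best, payoffs(best, cdt_responder(best)))   # 0.01 (9.99, 0.01)
```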

\n

And in fact, most Ultimatum \"deciders\" offer an even split; and most Ultimatum \"accepters\" reject any offer less than 20%.  A 100 USD game played in Indonesia (average per capita income at the time: 670 USD) showed offers of 30 USD being turned down, although this equates to two weeks' wages.  We can probably also assume that the players in Indonesia were not thinking about the academic debate over Newcomblike problems—this is just the way people feel about Ultimatum Games, even ones played for real money.

\n

There's an analogue of the Ultimatum Game in group coordination.  (Has it been studied?  I'd hope so...)  Let's say there's a common project—in fact, let's say that it's an altruistic common project, aimed at helping mugging victims in Canada, or something.  If you join this group project, you'll get more done than you could on your own, relative to your utility function.  So, obviously, you should join.

\n

But wait!  The anti-mugging project keeps their funds invested in a money market fund!  That's ridiculous; it won't earn even as much interest as US Treasuries, let alone a dividend-paying index fund.

\n

Clearly, this project is run by morons, and you shouldn't join until they change their malinvesting ways.

\n

Now you might realize—if you stopped to think about it—that all things considered, you would still do better by working with the common anti-mugging project, than striking out on your own to fight crime.  But then—you might perhaps also realize—if you too easily assent to joining the group, why, what motive would they have to change their malinvesting ways?

\n

Well...  Okay, look.  Possibly because we're out of the ancestral environment where everyone knows everyone else... and possibly because the nonconformist crowd tries to repudiate normal group-cohering forces like conformity and leader-worship...

\n

...It seems to me that people in the atheist/libertarian/technophile/sf-fan/etcetera cluster often set their joining prices way way way too high.  Like a 50-way split Ultimatum game, where every one of 50 players demands at least 20% of the money.

\n

If you think how often situations like this would have arisen in the ancestral environment, then it's almost certainly a matter of evolutionary psychology.  System 1 emotions, not System 2 calculation.  Our intuitions for when to join groups, versus when to hold out for more concessions to our own preferred way of doing things, would have been honed for hunter-gatherer environments of, e.g., 40 people all of whom you knew personally.

\n

And if the group is made up of 1000 people?  Then your hunter-gatherer instincts will underestimate the inertia of a group so large, and demand an unrealistically high price (in strategic shifts) for you to join.  There's a limited amount of organizational effort, and a limited number of degrees of freedom, that can go into doing things any one person's way.

\n

And if the strategy is large and complex, the sort of thing that takes e.g. ten people doing paperwork for a week, rather than being hammered out over a half-hour of negotiation around a campfire?  Then your hunter-gatherer instincts will underestimate the inertia of the group, relative to your own demands.

\n

And if you live in a wider world than a single hunter-gatherer tribe, so that you only see the one group representative who negotiates with you, and not the hundred other negotiations that have taken place already?  Then your instincts will tell you that it is just one person, a stranger at that, and the two of you are equals; whatever ideas they bring to the table are equal with whatever ideas you bring to the table, and the meeting point ought to be about even.

\n

And if you suffer from any weakness of will or akrasia, or if you are influenced by motives other than those you would admit to yourself that you are influenced by, then any group-altruistic project which does not offer you the rewards of status and control, may perhaps find itself underserved by your attentions.

\n

Now I do admit that I speak here primarily from the perspective of someone who goes around trying to herd cats; and not from the other side as someone who spends most of their time withholding their energies in order to blackmail those damned morons already on the project.  Perhaps I am a little prejudiced.

\n

But it seems to me that a reasonable rule of thumb might be as follows:

\n

If, on the whole, joining your efforts to a group project would still have a net positive effect according to your utility function—

\n

(or a larger positive effect than any other marginal use to which you could otherwise put those resources, although this latter mode of thinking seems little-used and humanly-unrealistic, for reasons I may post about some other time)

\n

—and the awful horrible annoying issue is not so important that you personally will get involved deeply enough to put in however many hours, weeks, or years may be required to get it fixed up—

\n

—then the issue is not worth you withholding your energies from the project; either instinctively until you see that people are paying attention to you and respecting you, or by conscious intent to blackmail the group into getting it done.

\n

And if the issue is worth that much to you... then by all means, join the group and do whatever it takes to get things fixed up.

\n

Now, if the existing contributors refuse to let you do this, and a reasonable third party would be expected to conclude that you were competent enough to do it, and there is no one else whose ox is being gored thereby, then, perhaps, we have a problem on our hands.  And it may be time for a little blackmail, if the resources you can conditionally commit are large enough to get their attention.

\n

Is this rule a little extreme?  Oh, maybe.  There should be a motive for the decision-making mechanism of a project to be responsible to its supporters; unconditional support would create its own problems.

\n

But usually... I observe that people underestimate the costs of what they ask for, or perhaps just act on instinct, and set their prices way way way too high.  If the nonconformist crowd ever wants to get anything done together, we need to move in the direction of joining groups and staying there at least a little more easily.  Even in the face of annoyances and imperfections!  Even in the face of unresponsiveness to our own better ideas!

\n

In the age of the Internet and in the company of nonconformists, it does get a little tiring reading the 451st public email from someone saying that the Common Project isn't worth their resources until the website has a sans-serif font.

\n

Of course this often isn't really about fonts.  It may be about laziness, akrasia, or hidden rejections.  But in terms of group norms... in terms of what sort of public statements we respect, and which excuses we publicly scorn... we probably do want to encourage a group norm of:

\n

If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile.

" } }, { "_id": "v69oPvxe6c9nNFiBa", "title": "Two Blegs", "pageUrl": "https://www.lesswrong.com/posts/v69oPvxe6c9nNFiBa/two-blegs", "postedAt": "2009-03-26T04:42:32.223Z", "baseScore": 4, "voteCount": 5, "commentCount": 5, "url": null, "contents": { "documentId": "v69oPvxe6c9nNFiBa", "html": "
I'm not sure where to post this, so, using this comment thread as cover, I will hereby bleg for the following: \n\n
" } }, { "_id": "cm9FG45yasfijeeQc", "title": "Open Thread: March 2009", "pageUrl": "https://www.lesswrong.com/posts/cm9FG45yasfijeeQc/open-thread-march-2009", "postedAt": "2009-03-26T04:04:07.047Z", "baseScore": 6, "voteCount": 11, "commentCount": 72, "url": null, "contents": { "documentId": "cm9FG45yasfijeeQc", "html": "

Here is our monthly place to discuss Less Wrong topics that have not appeared in recent posts.

" } }, { "_id": "uLhrHa5Q4PqwLz2P6", "title": "Why *I* fail to act rationally", "pageUrl": "https://www.lesswrong.com/posts/uLhrHa5Q4PqwLz2P6/why-i-fail-to-act-rationally", "postedAt": "2009-03-26T03:56:57.416Z", "baseScore": 15, "voteCount": 27, "commentCount": 23, "url": null, "contents": { "documentId": "uLhrHa5Q4PqwLz2P6", "html": "

There is a lot of talk here about sophisticated rationality failures - priming, overconfidence, etc. etc. There is much less talk about what I think is the more common reason for people failing to act rationally in the real world - something that I think most people outside this community would agree is the most common rationality failure mode - acting emotionally (pjeby has just begun to discuss this, but I don't think it's the main thrust of his post...).

\n

While there can be sound evolutionary reasons for having emotions (the thirst for revenge as a Doomsday Machine being the easiest to understand), and while we certainly don't want to succumb to the fallacy that rationalists are emotionless Spock-clones, I think overcoming (or at least being able to control) emotions would, for most people, be a more important first step to acting rationally than overcoming biases.

\n

If I could avoid saying things I'll regret later when angry, avoid putting down colleagues through jealousy, avoid procrastinating because of laziness and avoid refusing to make correct decisions because of fear, I think this would do a lot more to make me into a winner than if I could figure out how to correctly calibrate my beliefs about trivia questions, or even get rid of my unwanted Implicit Associations.

\n

So the question - do we have good techniques for preventing our emotions from making bad decisions for us? Something as simple as \"count to ten before you say anything when angry\" is useful if it works. Something as sophisticated as \"become a Zen Master\" is probably unattainable, but might at least point us in the right direction - and then there's everything in between.

" } }, { "_id": "wvcxAuhmj4DJw7gho", "title": "Open Thread", "pageUrl": "https://www.lesswrong.com/posts/wvcxAuhmj4DJw7gho/open-thread-1", "postedAt": "2009-03-26T02:48:49.944Z", "baseScore": 3, "voteCount": 2, "commentCount": 3, "url": null, "contents": { "documentId": "wvcxAuhmj4DJw7gho", "html": "

For Yvain.

\r\n

I can commit to posting regularly scheduled open threads, if that appeals. Discuss scheduling in comments.

" } }, { "_id": "pyNPXST7feDX45ygt", "title": "Fight Biases, or Route Around Them?", "pageUrl": "https://www.lesswrong.com/posts/pyNPXST7feDX45ygt/fight-biases-or-route-around-them", "postedAt": "2009-03-25T22:23:28.281Z", "baseScore": 29, "voteCount": 30, "commentCount": 9, "url": null, "contents": { "documentId": "pyNPXST7feDX45ygt", "html": "

Continuation of: The Implicit Association Test
Response to: 3 Levels of Rationality Verification

\n

I've not seen it pointed out before that we use \"bias\" to mean two different things.

Sometimes we use \"bias\" to mean a hard-coded cognitive process that results in faulty beliefs. Take as examples the in-group bias, the recall bias, the bad guy bias, and various other things discovered by Tversky and Kahneman.

Other times, we use \"bias\" to mean a specific faulty belief generated by such a process, especially one that itself results in other faulty beliefs. For example, Jews are sometimes accused of having a pro-Israel bias. By this we mean that they have a higher opinion of Israel than the evidence justifies; this is a specific belief created by the in-group bias. This belief may itself generate other faulty beliefs; for example, they may have a more negative opinion of Palestinians than the evidence justifies. It is both the effect of a bias, and the cause of other biases.

Let's be clear about this \"more than the evidence justifies\" bit. Hating Hitler doesn't mean you're biased against Hitler. Likewise, having a belief about a particular ethnic group doesn't mean you're biased for or against them. My Asian friends hate it when people sheepishly admit in a guilty whisper that they've heard Asians are good at academics. Asians are good at academics. Just say \"55% chance an average Asian has a GPA above the American population mean\" and leave it at that. This is one of Tetlock's critiques of the Implicit Association Test, and it's a good one. I'd probably link Asians to high achievement on an IAT, but it wouldn't be a bias or anything to get upset about.

And let's also be clear about this faulty belief thing. You don't have to believe something for it to be a belief; consider again the skeptic who flees the haunted house. She claims she doesn't believe in ghosts, and she's telling the truth one hundred percent. She's still going to be influenced by her belief in ghosts. She's not secretly supernaturalist any more than someone who gets \"strongly biased\" on the IAT is secretly racist. But she needs to know she's still going to run screaming from haunted houses, and IAT-takers should be aware they're still probably going to discriminate against black people in some tiny imperceptible way.

\n

\n

Okay, back to the example. So the President appoints Isaac, a synagogue-going Jew, as the new Middle East peace envoy. Due to some amazing breakthrough in the region, both the Israelis and Palestinians agree to accept whatever plan Isaac develops. Isaac's only job is to decide what long-term plan is best for both sides. And he's a good man: he has an honest desire to choose the maximum-utility solution.

Isaac legitimately worries that he has a bias for the Israelis and against the Palestinians. How can he test the hypothesis? He can take a hypothetical souped-up version of the Implicit Association Test1. He finds that yes, he has a strong pro-Israel anti-Palestine bias. Now what does he do?

He can try to route around the bias. This is the approach implicitly endorsed by Overcoming Bias and by rationalism in general. He can take the Outside View and look at successful approaches in other world conflicts. He can use some objective metric to calculate the utility of everything in Israel, and check to make sure neither group is getting an amount disproportionate to their numbers. He can open a prediction market on metrics of success, and implement whatever policies trades at the highest value. All of these will probably improve Isaac's solution a fair bit. But none of them are perfect. In the end, Isaac's the one who has to make a decision that will be underdetermined by all these clever methods, and Isaac is still biased against the Palestinians.

Or he can try to fight the bias.

Diversity workshops try to fight biases directly. These don't work, and that's no surprise. Diversity workshops are telling you, on a conscious level, that minorities really are great people, aren't they? Well, yes. On a conscious level, you already believe that. Isaac already knows, on a conscious level, that the Palestinians deserve a fair solution that protects their interests just as much as the Israelis do. A diversity workshop would be a flashy video in which a condescending narrator explains that point again and again.

We don't have a lot of literature on what does work here, but I predict a few things would help. Make some Palestinian friends, to build mental connections between Palestinians and positive feelings. Learn to distinguish between Palestinian faces. Read works of fiction with sympathetic Palestinian characters. I would say \"live in Palestine\" but by all accounts Palestine is a pretty grim place; he might do better to live in a Palestinian community in America for a while.

Those techniques aren't especially good, but I don't care. We know how to improve them. By making a group take the Implicit Association Test, applying a technique to them, giving them the test again, and seeing how their score changed, we gain the ability to test bias-fighting techniques. I wouldn't want to do this on one person, because the test only has moderate reliability at the individual level. But a group of a few dozen, all practicing the same technique, would be quite sufficient. If another group learns a different technique, we can compare their IAT score improvement and see which technique is better, or if different techniques are better in different circumstances.
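
A minimal sketch of the kind of group comparison being proposed, with entirely made-up pre/post scores and hypothetical group names, just to show the shape of the test:

```python
# Compare two bias-fighting techniques by the change in group IAT scores.
# All numbers below are invented for illustration only.
import random
from statistics import mean

def mean_change(before, after):
    return mean(a - b for b, a in zip(before, after))

random.seed(0)
group_a_before = [random.gauss(0.6, 0.2) for _ in range(40)]
group_a_after = [s - random.gauss(0.15, 0.1) for s in group_a_before]  # technique A
group_b_before = [random.gauss(0.6, 0.2) for _ in range(40)]
group_b_after = [s - random.gauss(0.05, 0.1) for s in group_b_before]  # technique B

print('technique A mean change:', round(mean_change(group_a_before, group_a_after), 3))
print('technique B mean change:', round(mean_change(group_b_before, group_b_after), 3))
```

A more negative mean change means the group's measured bias dropped further; comparing the two changes across a few dozen subjects per group is exactly the test described above.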

Again, there's no reason why this method should be limited to racial biases. No matter how hard I try to evaluate policies on their merits rather than their politics, I am biased towards the US Democratic Party and I know it. This ought to be visible on an IAT, and there ought to be techniques to cure it. I don't know what they are, but I'd like to find them and start testing them.

What about the second method of overcoming bias, routing around it? The IAT is less directly valuable here, but it's not without a role.

In one of the IAT experiments, subjects evaluated essays written by black or white students. This is a fiendishly difficult task upon which to avoid bias. A sneaky researcher can deliberately select essays graded as superior by a blind observer and designate them \"white essays\", so anyone trying to take the easy way out by giving all essays the same grade can be caught immediately. I like this essay task. It's utterly open to any technique you want to use to reduce bias.

So give someone IATs until you find a group they're especially biased against - black people, Palestinians, Korean-Americans, frequentists; any will do. Then make them grade essays by the control group and the disliked group. Collect statistics correlating IAT bias with essay grading bias. If a person using a special technique to route around mental bias can grade essays more accurately than other people with the same level of IAT bias, that person has routed around their bias successfully.
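
One way to make the comparison with other people at the same level of IAT bias concrete is to fit grading bias against IAT score in a control sample, then see whether technique users fall below that fit. A sketch with invented numbers (nothing here is from the original post):

```python
# Does a routing technique let people grade essays more fairly than their IAT score predicts?
# Fit grading gap against IAT score on controls, then inspect technique users' residuals.
from statistics import mean

def linear_fit(xs, ys):
    """Ordinary least squares slope and intercept."""
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Invented data: IAT bias score vs. average grading gap between control-group
# and disliked-group essays.
control_iat = [0.2, 0.4, 0.5, 0.7, 0.9, 1.0]
control_gap = [0.1, 0.3, 0.3, 0.5, 0.6, 0.7]
technique_iat = [0.5, 0.8, 1.0]
technique_gap = [0.1, 0.2, 0.3]

slope, intercept = linear_fit(control_iat, control_gap)
residuals = [g - (slope * i + intercept) for i, g in zip(technique_iat, technique_gap)]
print('negative residuals mean less grading bias than the IAT predicts:', residuals)
```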

So: How do we tell if a technique for routing around bias works? Test whether people are better able to conduct a rating task than their IAT scores would predict. How do we test a technique for fighting bias directly? See if it lowers IAT scores. All terribly inconvenient because of the IAT's low effect size and reliability, but with a large enough sample size or enough test-retest cycles the thing could be done. And the psychologists who transformed the Bona Fide Pipeline into the IAT may yet transform the IAT into something even more powerful.

This, then, is one solution to schools proliferating without evidence. With enough research, it could be turned into one of the missing techniques of rationality verification.

\n

 

\n

Footnotes

\n

1: Remember, the IAT is only moderately good at evaluating individuals, and has a bad habit of changing its mind each time someone takes it. Much of what is in this essay would work poorly (though probably still better than nothing) with a simple IAT. But having someone take the IAT ten times over ten days and averaging the results might give a more accurate picture (I don't know of any studies on this). And in any case the IAT is quite good at comparing groups of people with sample size >1. And I expect that souped-up versions of the IAT will be out within a few years; these tests have gotten better and better as time goes on.

" } }, { "_id": "x7LMf4foW22wuW7Cz", "title": "The Good Bayesian", "pageUrl": "https://www.lesswrong.com/posts/x7LMf4foW22wuW7Cz/the-good-bayesian", "postedAt": "2009-03-25T21:39:18.934Z", "baseScore": 1, "voteCount": 16, "commentCount": 16, "url": null, "contents": { "documentId": "x7LMf4foW22wuW7Cz", "html": "

    I've talked religion with people of many different ages and creeds, and none of them have ever been content to practice their religion in private.  All belong to a religious community; many contribute money and time above and beyond the minimum requirement.  And in all the religious discussions I've ever had, I've never heard anyone decline to participate because their religion is \"intensely private and individual.\"

\n

    So Eliezer's quote from William James by way of Adam Frank left me scratching my head, as well.  I think of religion first and foremost as a social behavior rather than an individual one.  It's not just that religious people use the claim of private revelation as a defense against reason; it's that they can attend sermons, sing in a choir, recite prayers in unison--and then make that claim of \"solitary\" experience with a straight face!

    Eliezer's post listed some of the ways that theodicy warps rational thought on the individual level, as a sort of warning label on the \"poisoned chalice.\"  Religious people are liable to respond that religion may not make people rational, but it makes them altruistic \"good Samaritans.\"  (Never mind that in the parable, the Samaritan is more altruistic than a high priest!)  They claim that any harm religion does to the individual is outweighed by its benefits to the group.

    Rationalists should make the case that religion is harmful to society as a whole, as well as individuals:

\n\n

    I left out at least one obvious argument for the benefit of commenters.

" } }, { "_id": "ru536oPGPJsEkA3Ee", "title": "Spock's Dirty Little Secret", "pageUrl": "https://www.lesswrong.com/posts/ru536oPGPJsEkA3Ee/spock-s-dirty-little-secret", "postedAt": "2009-03-25T19:07:21.908Z", "baseScore": 62, "voteCount": 70, "commentCount": 71, "url": null, "contents": { "documentId": "ru536oPGPJsEkA3Ee", "html": "

Related on OB: Priming and Contamination
Related on LW: When Truth Isn't Enough

\n

When I was a kid, I wanted to be like Mr. Spock on Star Trek.  He was smart, he could kick ass, and he usually saved the day while Kirk was too busy pontificating or womanizing.

\n

And since Spock loved logic, I tried to learn something about it myself.  But by the time I was 13 or 14, having grasped the basics of boolean algebra (from borrowed computer science textbooks) and propositional logic (through a game of \"Wff'n'Proof\" I picked up at a garage sale), I began to get a little dissatisfied with it.

\n

Spock had made it seem like logic was some sort of \"formidable\" thing, with which you could do all kinds of awesomeness.  But real logic didn't seem to work the same way.

\n

I mean, sure, it was neat that you could apply all these algebraic transforms and dissect things in interesting ways, but none of it seemed to go anywhere.

\n

Logic didn't say, \"thou shalt perform this sequence of transformations and thereby produce an Answer\".  Instead, it said something more like, \"do whatever you want, as long as it's well-formed\"...  and left the very real question of what it was you wanted, as an exercise for the logician.

\n

And it was at that point that I realized something that Spock hadn't mentioned (yet): that logic was only the beginning of wisdom, not the end.

\n

Of course, I didn't phrase it exactly that way myself...  but I did see that logic could only be used to check things...  not to generate them.  The ideas to be checked, still had to come from somewhere.

\n

But where?

\n

When I was 17, in college philosophy class, I learned another limitation of logic: or more precisely, of the brains with which we do logic.

\n

Because, although I'd already learned to work with formalisms -- i.e., meaningless symbols -- working with actual syllogisms about Socrates and mortals and whatnot was actually a good bit harder.

\n

We were supposed to determine the validity of the syllogisms, but sometimes an invalid syllogism had a true conclusion, while a valid syllogism might have a false one.  And, until I learned to mentally substitute symbols like A and B for the included facts, I found my brain automatically jumping to the wrong conclusions about validity.
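
The substitution trick can even be mechanized. Here is a small sketch (mine, not from the original post) that brute-forces categorical syllogisms over tiny set models, so that validity no longer depends on whether the conclusion happens to be true:

```python
# A syllogism is valid only if no model makes both premises true and the conclusion false.
from itertools import product

UNIVERSE = range(3)

def all_subsets(universe):
    for bits in product([False, True], repeat=len(universe)):
        yield {x for x, keep in zip(universe, bits) if keep}

def valid(premise1, premise2, conclusion):
    for A, B, C in product(list(all_subsets(UNIVERSE)), repeat=3):
        if premise1(A, B, C) and premise2(A, B, C) and not conclusion(A, B, C):
            return False
    return True

def all_in(X, Y):
    """The categorical statement: all X are Y."""
    return X <= Y

# Valid form: all A are B, all B are C, therefore all A are C.
print(valid(lambda A, B, C: all_in(A, B),
            lambda A, B, C: all_in(B, C),
            lambda A, B, C: all_in(A, C)))   # True

# Invalid form (even though particular instances can have true conclusions):
# all A are C, all B are C, therefore all A are B.
print(valid(lambda A, B, C: all_in(A, C),
            lambda A, B, C: all_in(B, C),
            lambda A, B, C: all_in(A, B)))   # False
```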

\n

So \"logic\", then -- or rationality -- seemed to require three things to actually work:

\n\n

But it wasn't until my late thirties and early forties -- just in the last couple of years -- that I realized a fourth piece, implicit in the first.

\n

And Spock, ironically enough, is the reason I found it so difficult to grasp that last, vital piece:

\n

That to generate possibly-useful ideas in the first place, you must have some notion of what \"useful\" is!

\n

And that for humans at least, \"useful\" can only be defined emotionally.

\n

Sure, Spock was supposed to be immune to emotion -- even though in retrospect, everything he does is clearly motivated by emotion, whether it's his obvious love for Kirk, or his desire to be accepted as a \"real\" rationalis... er, Vulcan.  (In other words, he disdains emotion merely because that's what he's supposed to do, not because he doesn't actually have any.)

\n

And although this is all still fictional evidence, one might compare Spock's version of \"unemotional\" with the character of the undead assassin Kai, from a different science-fiction series.

\n

Kai, played by Michael McManus, shows us a slightly more accurate version of what true emotionlessness might be like: complete and utter apathy.

\n

Kai has no goals or cares of his own, frequently making such comments as \"the dead do not want anything\", and \"the dead do not have opinions\".  He mostly does as he's asked, but for the most part, he just doesn't care about anything one way or another.

\n

(He'll sleep in his freezer or go on a killing spree, it's all the same to him, though he'll probably tell you the likely consequences of whatever action you see fit to request of him.)

\n

And scientifically speaking, that's a lot closer to what you actually get, if you don't have any emotions.

\n

Not a \"formidable rationalist\" and idealist, like Spock or Eliezer...

\n

But an apathetic zombie, like Kai.

\n

As Temple Grandin puts it (in her book, Animals In Translation):

\n
\n

Everyone uses emotion to make decisions. People with brain damage to their emotional systems have a hard time making any decision at all, and when they do make a decision it's usually bad.

\n
\n

She is, of course, summarizing Antonio Damasio's work in relation to the somatic marker hypothesis and decision coherence.  From the linked article:

\n
\n

Somatic markers explain how goals can be efficiently prioritized by a cognitive system, without having to evaluate the propositional content of existing goals. After somatic markers are incorporated, what is compared by the deliberator is not the goal as such, but its emotional tag. [Emphasis added]

\n

The biasing function of somatic markers explains how irrelevant information can be excluded from coherence considerations. With Damasio's thesis, choice activation can be seen as involving emotion at the most basic computational level. [Emphasis added]
...
This sketch shows how emotions help to prevent our decision calculations from becoming so complex and cumbersome that decisions would be impossible. Emotions function to reduce and limit our reasoning, and thereby make reasoning possible. [Emphasis added]

\n
\n

Now, we can get into all sorts of argument about what constitutes \"emotion\", exactly.  I personally like the term \"somatic marker\", though, because it ties in nicely with concepts such as facial micro-expressions and gestural accessing cues.  It also emphasizes the fact that an emotion doesn't actually need to be conscious or persistent, in order to act as a decision influencer and a source of bias.

\n

But I didn't find out about somatic markers or emotional decisions because I was trying to find out more about logic or rationalism.  I was studying akrasia1, and writing about it on my blog.

\n

That is, I was trying to find out why I didn't always do what I \"decided to do\"... and what I could do to fix that.

\n

And in the process, I discovered what somatic markers have to do with akrasia, and with motivated reasoning...  long before I read any of the theories about the underlying machinery.  (After all, until I knew what they did, I didn't know what papers would've been relevant.  And in any case, I was looking for practice, not theory.)

\n

Now, in future posts in this series, I'll tie somatic markers, affective synchrony, and Robin Hanson's \"near/far\" hypothesis together into something I call the \"Akrasian Orchestra\"...  a fairly ambitious explanation of why/how we \"don't do what we decide to\" , and for that matter, don't even think the way we decide to.

\n

But for this post, I just want to start by introducing the idea of somatic markers in decision-making, and give a little preview of what that means for rationality.

\n

Somatic markers are effectively a kind of cached thought.  They are, in essence, the \"tiny XML tags of the mind\", that label things \"good\" or \"bad\", or even \"rational\" and \"irrational\". (Which of course are just disguised versions of \"good\" and \"bad\", if you're a rationalist.)

\n

And it's important to understand that you cannot escape this labeling, even if you wanted to.  (After all, the only reason you're able to want to, is because this labeling system exists!)

\n

See, it's not even that only strong emotions do this: weak or momentary emotional responses will do just fine for tagging purposes.  Even momentary pairing of positive or negative words with nonsense syllables can carry over into the perception of the taste of otherwise-identical sodas, branded with made-up names using the nonsense syllables!

\n

As you can see, this idea ties in rather nicely with things like priming and the IAT: your brain is always, always, always tagging things for later retrieval.

\n

Not only that, but it's also frequently  replaying these tags -- in somatic, body movement form -- as you think about things.

\n

For example, let's say that you're working on an equation or a computer program...  and you get that feeling that something's not quite right.

\n

As I wrote the preceding sentence, my face twisted into a slight frown, my brow wrinkling slightly as well -- my somatic marker for that feeling of \"not quite right-ness\".  And, if you actually recall a situation like that for yourself, you may feel it too.

\n

Now, some people would claim that this marker isn't \"really\" an emotion: that they just \"logically\" or \"rationally\" decided that something wasn't right with the equation or program or spaceship or whatever.

\n

But if we were to put those same people on a brain scanner and a polygraph, and observe what happens to their brain and body as they \"logically\" think through various possibilities, we would see somatic markers flying everywhere, as hypotheses are being considered and discarded.

\n

It's simply that, while your conscious attention is focused on your logic, you have little interest in attending directly to the emotions that are guiding you.  When you get the \"information scent\" of a good or a bad hypothesis, you simply direct your attention to either following the hypothesis, or discarding it and finding a replacement.

\n

Then, when you stop reasoning, and experience the frustration or elation of your results (or lack thereof), you finally have attention to spare for the emotion itself...  leading to the common illusion that emotion and reasoning don't mix.  (When what actually doesn't mix, at least without practice, is reasoning and paying conscious attention to your emotions/somatic markers at the same time.)

\n

Now, some somatic markers are shared by all humans, such as the universal facial expressions, or the salivation and mouth-pursing that happens when you recall (or imagine) eating something sour.  Others may be more individual.

\n

Some markers persist for longer periods than others -- that \"not quite right\" feeling might just flicker for a moment while you're recalling a situation, but persist until you find an answer, when it's a response to the actual situation.

\n

But it's not even necessary for a somatic marker to be expressed, in order for it to influence your thinking, since emotional associations and speed of recall are tightly linked.  In effect, recall is prioritized by emotional affect...  meaning that your memories are sorted by what makes you feel better.

\n

(Or what makes you feel  less bad ... which is not the same thing, as we'll see later in this series!)

\n

What this means is that all reasoning is in some sense \"motivated\", but it's not always consciously motivated, because your memories are pre-sorted for retrieval in an emotionally biased fashion.

\n

In other words, the search engine of your mind...

\n

Returns paid results first.

\n

This means that, strictly speaking, you don't know your own motivations for thinking or acting as you do, unless you explicitly perform the necessary steps to examine them in the moment.  Even if you previously believed yourself to have worked out those motivations, you cannot strictly know that your analysis still stands, since priming and other forms of conditioning can change those motivations on the fly.

\n

This is the real reason it's important to make beliefs pay rent, and to ground your thinking as much as possible in \"near\" hypotheses: keeping your reasoning tied closely to physical reality represents the only possible \"independent fact check\" on your biased \"search engine\".

\n

Okay, that's enough of the \"emotional decisions are bad and scary\" frame.  Let's take the opposite side now:

\n

Without emotions, we couldn't reason at all.

\n

Spock's dirty little secret is that logic doesn't go anywhere, without emotion.  Without emotion, you have no way to narrow down the field of \"all possible hypotheses\" to \"potentially useful hypotheses\" or \"likely to be true\" hypotheses...

\n

Nor would you have any reason to do so in the first place!

\n

Because the hidden meaning of the word \"reason\", is that it doesn't just mean logical, sensible, or rational...

\n

It also means \"purpose\".

\n

And you can't have a purpose, without an emotion.

\n

If Spock didn't make me feel something good, I might never have studied logic.  If stupid people hadn't made me feel something bad, I might never have looked up to Spock for being smart.  If procrastination hadn't made me feel bad, I never would've studied it.  If writing and finding answers to provocative questions didn't make me feel good, I never would've written as much as I have.

\n

The truth is, we can't do anything -- be it good or bad -- without some emotion playing a key part.

\n

And that fact itself, is neither good nor bad: it's just a fact.

\n

And as Spock himself might say, it's \"highly illogical\" to worry about it.

\n

No matter what your somatic markers might be telling you.

\n

 

\n

Footnotes:

\n

1. I actually didn't know I was studying \"akrasia\"...  in fact, I'd never even heard the term akrasia before, until I saw it in a thread on LessWrong discussing my work.  As far as I was concerned, I was working on \"procrastination\", or \"willpower\", or maybe even \"self-help\" or \"productivity\".  But akrasia is a nice catch-all term, so I'll use it here.

" } }, { "_id": "bCxGjo5PqZt2gXdMQ", "title": "Extreme updating: The devil is in the missing details", "pageUrl": "https://www.lesswrong.com/posts/bCxGjo5PqZt2gXdMQ/extreme-updating-the-devil-is-in-the-missing-details", "postedAt": "2009-03-25T17:55:15.999Z", "baseScore": 7, "voteCount": 10, "commentCount": 17, "url": null, "contents": { "documentId": "bCxGjo5PqZt2gXdMQ", "html": "

Today Ed Yong has a post on Not Exactly Rocket Science that is about updating - actually, the most extreme case in updating, where a person gets to choose between relying completely on their own judgement, or completely on the judgement of others.  He describes 2 experiments by Daniel Gilbert of Harvard in which subjects are given information about experience X, and asked to predict how they would feel (on a linear scale) on experiencing X; they then experience X and rate what they felt on that linear scale.

\n

In both cases, the correlation between post-experience judgements of different subjects is much higher than the correlation between the prediction and the post-experience judgement of each subject.  This isn't surprising - the experiments are designed so that the experience provides much more information than the given pre-experience information does.
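
To make the two correlations concrete, here is a sketch with invented ratings (nothing below comes from the study itself): the prediction-to-own-experience correlation comes out weak, while the experience-to-experience correlation between subjects comes out strong.

```python
# Illustrative only: compare the two correlations discussed above on made-up ratings.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

predictions = [80, 45, 50, 70, 85, 55, 40, 75]    # what each subject predicted
post_ratings = [52, 58, 60, 47, 55, 63, 49, 57]   # what they reported afterwards
other_ratings = [50, 60, 58, 45, 57, 65, 47, 55]  # other subjects, same experiences

print('prediction vs. own experience:', round(pearson(predictions, post_ratings), 2))
print('one subject vs. another:', round(pearson(post_ratings, other_ratings), 2))
```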

\n

What might be surprising is that the subjects believe the opposite: that they can predict their response from information better than from the responses of others.

\n

Whether these experiments are interesting depends on how the subjects were asked the question.  If they were asked, before being given information or being told what that information would be, whether they could predict their response to an experience better by making their own judgement based on information, or from the responses of others, then the result is not interesting.  The subjects in that case did not know that they would be given only a trivial amount of information relative to those who had the experience.

\n

The result is only interesting if the subjects were given the information first, and then asked whether they could predict their response better from that information than from someone else's experience.  Yong's post doesn't say which of these things happened, and doesn't cite the original article, so I can't look it up.  Does anyone know?

\n

I've heard studies like this cited as strong evidence that we should update more; but never heard that critical detail given for any such studies.  Are there any studies which actually show what this study purports to show?

\n

EDIT: Robin posted the citation.  The original paper does not contain the crucial information.  Details in my response to Robin.

\n

EDIT:  The original paper DOES contain the crucial info for the first experiment.  I missed it the first time.  It says:

\n
\n

.. a woman was escorted to the speed-dating room and left to have a 5-min private conversation with the man. Next, the experimenter escorted the woman to another room where she reported how much she had enjoyed the speed date by marking a 100-mm continuous “enjoyment scale” whose end points were marked not at all and very much. This report is hereinafter referred to as her affective report.

\n

Next, a second woman was given one of two kinds of information: simulation information (which consisted of the man’s personal profile and photograph) or surrogation information (which consisted of the affective report provided by the first woman). The second woman was then asked to predict (on the enjoyment scale) how much she would enjoy her speed date with the man. This prediction is hereinafter referred to as her affective forecast.

\n

After making her prediction, the second woman was shown the kind of information (simulation or surrogation) that she had not already received. We did this to ensure that each woman had the same information about the man before the actual speed date. The only difference between the two conditions, then, was whether the second woman had surrogation information or simulation information when she made her forecast.

\n

Next, the second woman was escorted to the dating room, had a speed date, and then reported how much she enjoyed it (on the enjoyment scale). This report is hereinafter referred to as her affective report. The second woman also reported whether she believed that simulation information or surrogation information would have allowed her to make the more accurate prediction about the speed date she had and about a speed date that she might have in the future.

\n
" } }, { "_id": "Fwt4sDDacko8Sh5iR", "title": "The Sacred Mundane", "pageUrl": "https://www.lesswrong.com/posts/Fwt4sDDacko8Sh5iR/the-sacred-mundane", "postedAt": "2009-03-25T09:53:33.583Z", "baseScore": 73, "voteCount": 84, "commentCount": 117, "url": null, "contents": { "documentId": "Fwt4sDDacko8Sh5iR", "html": "

So I was reading (around the first half of) Adam Frank's The Constant Fire, in preparation for my Bloggingheads dialogue with him.  Adam Frank's book is about the experience of the sacred.  I might not usually call it that, but of course I know the experience Frank is talking about.  It's what I feel when I watch a video of a space shuttle launch; or what I feel—to a lesser extent, because in this world it is too common—when I look up at the stars at night, and think about what they mean.  Or the birth of a child, say.  That which is significant in the Unfolding Story.

\n

Adam Frank holds that this experience is something that science holds deeply in common with religion.  As opposed to e.g. being a basic human quality which religion corrupts.

\n

The Constant Fire quotes William James's The Varieties of Religious Experience as saying:

\n
\n

Religion... shall mean for us the feelings, acts, and experiences of individual men in their solitude; so far as they apprehend themselves to stand in relation to whatever they may consider the divine.

\n
\n

And this theme is developed further:  Sacredness is something intensely private and individual.

\n

Which completely nonplussed me.  Am I supposed to not have any feeling of sacredness if I'm one of many people watching the video of SpaceShipOne winning the X-Prize?  Why not?  Am I supposed to think that my experience of sacredness has to be somehow different from that of all the other people watching?  Why, when we all have the same brain design?  Indeed, why would I need to believe I was unique?  (But \"unique\" is another word Adam Frank uses; so-and-so's \"unique experience of the sacred\".)  Is the feeling private in the same sense that we have difficulty communicating any experience?  Then why emphasize this of sacredness, rather than sneezing?

\n

The light came on when I realized that I was looking at a trick of Dark Side Epistemology—if you make something private, that shields it from criticism.  You can say, \"You can't criticize me, because this is my private, inner experience that you can never access to question it.\"

\n

But the price of shielding yourself from criticism is that you are cast into solitude—the solitude that William James admired as the core of religious experience, as if loneliness were a good thing.

\n

Such relics of Dark Side Epistemology are key to understanding the many ways that religion twists the experience of sacredness:

\n

Mysteriousness—why should the sacred have to be mysterious?  A space shuttle launch gets by just fine without being mysterious.  How much less would I appreciate the stars if I did not know what they were, if they were just little points in the night sky?  But if your religious beliefs are questioned—if someone asks, \"Why doesn't God heal amputees?\"—then you take refuge and say, in a tone of deep profundity, \"It is a sacred mystery!\"  There are questions that must not be asked, and answers that must not be acknowledged, to defend the lie.  Thus unanswerability comes to be associated with sacredness.  And the price of shielding yourself from criticism is giving up the true curiosity that truly wishes to find answers.  You will worship your own ignorance of the temporarily unanswered questions of your own generation—probably including ones that are already answered.

\n

Faith—in the early days of religion, when people were more naive, when even intelligent folk actually believed that stuff, religions staked their reputation upon the testimony of miracles in their scriptures.  And Christian archaeologists set forth truly expecting to find the ruins of Noah's Ark.  But when no such evidence was forthcoming, then religion executed what William Bartley called the retreat to commitment, \"I believe because I believe!\"  Thus belief without good evidence came to be associated with the experience of the sacred.  And the price of shielding yourself from criticism is that you sacrifice your ability to think clearly about that which is sacred, and to progress in your understanding of the sacred, and relinquish mistakes.

\n

Experientialism—if before you thought that the rainbow was a sacred contract of God with humanity, and then you begin to realize that God doesn't exist, then you may execute a retreat to pure experience—to praise yourself just for feeling such wonderful sensations when you think about God, whether or not God actually exists.  And the price of shielding yourself from criticism is solipsism: your experience is stripped of its referents.  What a terrible hollow feeling it would be to watch a space shuttle rising on a pillar of flame, and say to yourself, \"But it doesn't really matter whether the space shuttle actually exists, so long as I feel.\"

\n

Separation—if the sacred realm is not subject to ordinary rules of evidence or investigable by ordinary means, then it must be different in kind from the world of mundane matter: and so we are less likely to think of a space shuttle as a candidate for sacredness, because it is a work of merely human hands.  Keats lost his admiration of the rainbow and demoted it to the \"dull catalogue of mundane things\" for the crime of its woof and texture being known.  And the price of shielding yourself from all ordinary criticism is that you lose the sacredness of all merely real things.

\n

Privacy—of this I have already spoken.

\n

Such distortions are why we had best not to try to salvage religion.  No, not even in the form of \"spirituality\".  Take away the institutions and the factual mistakes, subtract the churches and the scriptures, and you're left with... all this nonsense about mysteriousness, faith, solipsistic experience, private solitude, and discontinuity.

\n

The original lie is only the beginning of the problem.  Then you have all the ill habits of thought that have evolved to defend it.  Religion is a poisoned chalice, from which we had best not even sip.  Spirituality is the same cup after the original pellet of poison has been taken out, and only the dissolved portion remains—a little less directly lethal, but still not good for you.

\n

When a lie has been defended for ages upon ages, the true origin of the inherited habits lost in the mists, with layer after layer of undocumented sickness; then the wise, I think, will start over from scratch, rather than trying to selectively discard the original lie while keeping the habits of thought that protected it.  Just admit you were wrong, give up entirely on the mistake, stop defending it at all, stop trying to say you were even a little right, stop trying to save face, just say \"Oops!\" and throw out the whole thing and begin again.

\n

That capacity—to really, really, without defense, admit you were entirely wrong—is why religious experience will never be like scientific experience.  No religion can absorb that capacity without losing itself entirely and becoming simple humanity...

\n

...to just look up at the distant stars.  Believable without strain, without a constant distracting struggle to fend off your awareness of the counterevidence.  Truly there in the world, the experience united with the referent, a solid part of that unfolding story.  Knowable without threat, offering true meat for curiosity.  Shared in togetherness with the many other onlookers, no need to retreat to privacy.  Made of the same fabric as yourself and all other things.  Most holy and beautiful, the sacred mundane.

" } }, { "_id": "QwjGKQ4uhGTC5gAnp", "title": "Contests vs. Real World Problems", "pageUrl": "https://www.lesswrong.com/posts/QwjGKQ4uhGTC5gAnp/contests-vs-real-world-problems", "postedAt": "2009-03-25T01:29:02.264Z", "baseScore": 17, "voteCount": 18, "commentCount": 34, "url": null, "contents": { "documentId": "QwjGKQ4uhGTC5gAnp", "html": "

John Cook draws on the movie Redbelt to highlight the difference between staged contests and real-world fights. The main character of the movie is a Jiu Jitsu instructor who is willing to fight if necessary, but will not compete under arbitrary rules. Cook analogizes this to the distinction between academic and real-world problem solving. Academics and students are often bound by restrictions that are useful in their own contexts, but are detrimental to someone who is more concerned with having a solution than with where the solution came from.

\n

Robin pointed out arbitrary restrictions in academia to us before, but his question then was about topics neglected for being silly. Following Cook's line of reasoning, are there any arbitrary restrictions we have picked up in school or other contexts that are holding us back? Are there rationalist \"cheats\" that are being underused?

" } }, { "_id": "iYJo382hY28K7eCrP", "title": "The Implicit Association Test", "pageUrl": "https://www.lesswrong.com/posts/iYJo382hY28K7eCrP/the-implicit-association-test", "postedAt": "2009-03-25T00:11:25.076Z", "baseScore": 31, "voteCount": 35, "commentCount": 31, "url": null, "contents": { "documentId": "iYJo382hY28K7eCrP", "html": "

Continuation of: Bogus Pipeline, Bona Fide Pipeline
Related to: The Cluster Structure of Thingspace

\n

If you've never taken the Implicit Association Test before, try it now.

Any will do. The one on race is the \"classic\", but the one on gender and careers is a bit easier to watch \"in action\", since the effect is so clear.

The overwhelming feeling I get when taking an Implicit Association Test is that of feeling my cognitive algorithms at work. All this time talking about thingspace and bias and categorization, and all of a sudden I have this feeling to attach the words to...

...which could be completely self-delusional. What is the evidence? Does the Implicit Association Test work?

Let the defense speak first1. The Implicit Association Test correctly picks up control associations. An IAT about attitudes towards insects and flowers found generally positive attitudes to the flowers and generally negative attitudes to the insects (p = .001), just as anyone with their head screwed on properly would expect. People's self-reports were also positively correlated with their IAT results (ie, someone who reported loving flowers and hating insects more than average also had a stronger than average IAT) although these correlations did not meet the 95% significance criterion. The study was repeated with a different subject (musical instruments vs. weapons) and similar results were obtained.

In the next study, the experimenters recruited Japanese-Americans and Korean-Americans. Japan has been threatening, invading, or oppressing  Korea for large chunks of the past five hundred years, and there's no love lost between the two countries. This time, the Japanese-Americans were able to quickly match Japanese names to \"good\" stimuli and Korean names to \"bad\" stimuli, but took much longer to perform the opposite matching. The Korean-Americans had precisely the opposite problem, p < .0001.  People's self-reports were also positively correlated with their IAT results (ie, a Korean who expressed especially negative feelings towards the Japanese on average also had a stronger than average IAT result) to a significant level.

There's been some evidence that the IAT is pretty robust. Most trivial matters, like the position of items, don't make much of a difference. People who were asked to convincingly fake an IAT effect couldn't do it. If the same person takes the test twice, there's a correlation of about .6 between the two attempts2. There's a correlation of .55 between the Bona Fide Pipeline and the IAT (the IAT wins all competitions between the two; it produces twice as big an effect size). There's about a .24 correlation between explicit attitude and IAT score, which is significant at the 90% but not the 95% level; removing certain tests where people seem especially likely to lie about their explicit attitude takes it up to 95%. When the two conflict, the IAT occasionally wins. In one study, subjects were asked to evaluate male and female applicants for a job. Their observed bias against women correlated more strongly with their scores on a gender bias IAT than with their own self-report (in other experiments in the same study, explicit self-report was a better predictor; the experimenters concluded both methods were valuable in different areas).
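
To make these effect sizes concrete, here is a minimal sketch of how an IAT-style score can be computed from raw reaction times. The data and the scoring rule (mean latency difference between the two pairings, divided by the pooled standard deviation) are illustrative assumptions, not the exact procedure used in the studies above.

```python
import statistics

def iat_score(congruent_rts, incongruent_rts):
    # Crude IAT-style effect size: how much slower the incongruent pairing
    # is than the congruent one, in pooled-standard-deviation units.
    mean_diff = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
    pooled_sd = statistics.stdev(congruent_rts + incongruent_rts)
    return mean_diff / pooled_sd

# Hypothetical reaction times (ms) for one subject sorting
# 'flower + good / insect + bad' (congruent) vs. the reversed pairing.
congruent = [620, 580, 640, 600, 590, 610]
incongruent = [780, 820, 760, 800, 790, 810]
print(round(iat_score(congruent, incongruent), 2))  # positive = congruent pairing is easier
```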

Now comes the prosecution. A common critique of the test is that the same individual often gets two completely different scores taking the same test twice. As far as re-test reliability goes, .6 correlation is pretty good from a theoretical point of view, but more than enough to be frequently embarrassing. It must be admitted: this test, while giving consistent results for populations, is of less use for individuals wondering how much bias they personally have.

\n

Carl Shulman would be heartbroken if I didn't mention Philip Tetlock, so here goes. This is from Would Jesse Jackson Fail the Implicit Association Test?, by Tetlock and Arkes (2004):

\n
\n

Measures of implicit prejudice are based on associations between race-related stimuli and valenced words. Reaction time (RT) data have been characterized as showing implicit prejudice when White names or faces are associated with positive concepts and African-American names or faces with negative concepts, compared to the reverse pairings. We offer three objections to the inferential leap from the comparative RT of different associations to the attribution of implicit prejudice: (a) The data may reflect shared cultural stereotypes rather than personal animus, (b) the affective negativity attributed to participants may be due to cognitions and emotions that are not necessarily prejudiced, and (c) the patterns of judgment deemed to be indicative of prejudice pass tests deemed to be diagnostic of rational behavior.

\n
\n

In other words, there are a bunch of legitimate reasons people might get negative IAT scores. Any connection whatsoever between black people and negative affect will do. It could be the connection formed by black people generally having low status in our society. It could be that a person knows of all the prejudices against black people without believing them. It could be that a person has perfectly rational negative feelings about black people because of their higher poverty rate, higher crime rate, and so on. Or it could be something as simple as that, for whites, black people are the out-group.

...this actually isn't much of a prosecution at all. I consider myself a moderate believer in the IAT, and I think it all sounds pretty reasonable.

What most IAT detractors I've read want to make exquisitely clear is that you can't hand someone an IAT, find an anti-black bias, and say \"Aha! He's a racist! Shame on him!\"3

I think this is pretty obvious4. You can hold beliefs on more than one level. A person may believe there is a dragon in his garage, yet not expect an experiment to detect it. A skeptic may disbelieve in ghosts, but be afraid of haunted houses. A stroke victim may deny an arm is hers while admitting it is attached to her body. And it's supposed to be news that you can give black people some sort of vague negative connotation on a nonconscious level without being Ku Klux Klan material?

There is a certain segment of society which interprets the sun rising in the morning as evidence of racism. It is not surprising that this segment of society also interprets the IAT as evidence for racism. I myself think racism is a bad word. Not in the way \"shit\" is a bad word, but in the way \"wiggin\" is a bad word. It divides experience in a perverse way, drawing a boundary such that Adolf Hitler ends up in the same category as the guy who feels a pang of guilty fear late at night when he sees a big muscular black guy walking towards him5. Taboo the words \"racism\" and \"prejudice\", and any other anti-applause-light6, and a lot of the IAT debate loses its meaning.

Which is good, because I think the IAT is about much more than who is or isn't racist. The IAT is a tool for measuring distances in thingspace.

Thingspace, remember, is the sort of space in which we draw categories7. \"Chair\" is a useful category because it describes a cluster of things that are close together in concept-space in a certain way: stools, rocking chairs, office chairs, desk chairs, et cetera. \"Furniture\" is another useful word because it describes another cluster, one that includes the chair cluster and other concepts nearby. Quok, where a \"quok\" is defined as either a chair or Vladimir Lenin, is a useless category, because Lenin isn't anywhere near all the other members.

Speaking of communists, remember back when East and West Germany got reunited? And remember a little further back, when North and South Vietnam got reunited too? Those reunifications, no matter how you feel about them politically, were natural links between culturally and historically similar regions. But imagine trying to unite East Germany with South Vietnam, and West Germany with North Vietnam. The resulting countries would be ungovernable and collapse in a matter of weeks.

If you associate white people with good things, and black people with bad things, then forming the categories \"white and good\" and \"black and bad\" is like reuniting East and West Germany. You're drawing a natural border around a compact area of the map. But being forced into the categories \"white and bad\" and \"black and good\" is about as natural as trying to merge East Germany and South Vietnam into the new country \"Southeast Vietnermany\". You're drawing an arbitrary boundary around two completely unrelated parts of the map and then begging in vain for the disgruntled inhabitants to cooperate with each other.

If you provoke a war between the reunified Germany and Southeast Vietnermany, and watch which side coordinates its forces better, you get the Implicit Association Test.

Why would we want to measure distance in thingspace? Loads of reasons. Take a set of pictures of famous cult leaders, mix them with a set of pictures of famous scientists, and test Less Wrong readers' reaction times associating a picture of Eliezer Yudkowsky's face with either set8. If it's easier to place him with the scientists, or there's no difference, that's some evidence we haven't become a cult yet. If it's easier to place him with the cult leaders, we should start worrying.

Tomorrow: some more serious applications to rationality.

\n

 

\n

Footnotes:

\n

1: Most of these results taken from this, this, and this study.

\n

2: There's some evidence that priming can change your IAT score. For example, subjects shown a picture of a happy black family enjoying a picnic just before an IAT got lower bias scores than a control group who didn't see the picture. And before condemning the test too much for its tendency to give different scores on different occasions, remember back to your school days when you'd have to take endless quizzes on the same subject. Occasionally just by chance you'd get a spread of ten points or so, and if you were on the borderline between passing and failing, you might very well pass one test and fail another test on the exact same material. This doesn't mean grade school tests don't really measure your knowledge, just that there's always a bit of noise. The IAT noise is greater, but not overwhelmingly so.

\n

3: There's also a fear someone might use it for, say, evaluating applicants for a job. Due to its weakness as an individual measurement and the uncertainty about how well it predicts behavior, this would be a terrible idea.

\n

4: Full disclosure: Despite strongly opposing prejudice on a conscious level and generally getting along well with minorities in my personal life, I get assessed as moderately biased on the racism IAT. I had some memorable bad experiences with certain black people in my formative years, so this doesn't much surprise me.

\n

5: In fact, Jesse Jackson (note for non-Americans: a well-known black minister and politician who speaks out against racism) himself admits to occasionally having these pangs of guilty fear - hence the name of Tetlock's article.

\n

6: I think Eliezer once coined a term for the opposite of \"applause light\", for things like \"racism\" and \"scientism\" invoked only so people can feel good about hating them, but I can't seem to find it. Can someone refresh my memory?

\n

7: I was split on whether to use the term thing-space or concept-space here. Eliezer uses concept-space in a very particular way, but \"good\" and \"black\" seem much more concepts than things. I eventually went with thing-space, but I'm not happy about it.

\n

8: This is a facetious example. It's possible in theory, but there would be so much to control for that any result would be practically meaningless.

" } }, { "_id": "aNzLGn6s62uRZvAp2", "title": "Terrorism is not about Terror", "pageUrl": "https://www.lesswrong.com/posts/aNzLGn6s62uRZvAp2/terrorism-is-not-about-terror", "postedAt": "2009-03-24T17:08:52.494Z", "baseScore": 49, "voteCount": 40, "commentCount": 25, "url": null, "contents": { "documentId": "aNzLGn6s62uRZvAp2", "html": "
Statistical analysis of terrorist groups' longevity, aims, methods and successes reveals that groups are self-contradictory, self-sabotaging, and generally ineffective; common stereotypes like terrorists being poor or ultra-skilled are false. Superficially appealing counter-examples are discussed and rejected. Data on motivations and the dissolution of terrorist groups are brought into play and the surprising conclusion reached: terrorism is a form of socialization or status-seeking.
\n

 http://www.gwern.net/Terrorism%20is%20not%20about%20Terror

\n

 

" } }, { "_id": "zqNaYn3dmhCg8wzcx", "title": "Levels of Power", "pageUrl": "https://www.lesswrong.com/posts/zqNaYn3dmhCg8wzcx/levels-of-power", "postedAt": "2009-03-24T15:54:08.431Z", "baseScore": -9, "voteCount": 16, "commentCount": 31, "url": null, "contents": { "documentId": "zqNaYn3dmhCg8wzcx", "html": "

Intended for Levels 0-1.9. Related to: Playing Video Games In Shuffle Mode

\n

Taking the advice of talisman, I thought it would be useful to compile a list of levels of general rationalist skill. I think these levels loosely correlate with how effective one is as a rationalist, but they are not the same thing; a level 2 might still compartmentalize. This list is more a specification of the difficulty of material that a person will be able to follow, as well as a list of levels above mine that I can aspire to. Of course, skill is more of a continuum than a discrete set of levels. I've leveled up once before, but I don't remember any single moment in which this occurred, just as I don't remember any single moment where I suddenly matured.

\n

The first level, level 0, is the rationality skill of an average 20-year-old with internet access, which is the skill of the vast majority of people in the world, if not higher. They've probably forgotten what math they knew beyond a little algebra and arithmetic, and haven't gotten into the habit of checking facts. These are the kind of people who think of Spock when you mention logic. This level of skill is something any rationalist has to go through, so the majority of people who arrive here without coming from OB will probably fall into this category. With any luck, these people might discover debate forums, where seeing someone shred an argument to pieces introduces them to the idea of a logical fallacy and begins their transition into level 1.

\n
\n

The transition from level 0 to level 1 is fairly painless. With a good reading list, it should probably take about three months. Level 1 is the stage where people start self-identifying as rationalists. The usual transition to this stage comes from thinking about rationality as rhetoric: many rationalists become rationalists after a long time spent arguing with creationists and other crackpots, slowly developing a list of silly arguments and misconceptions and why they're so silly, until they've got a big enough list that it becomes a matter of turning on the \"bullshit detector\". After a bit of practice doing this, people become more willing to think hard about their beliefs after their arguments are torn to shreds in the same way that they tore other people's arguments down.

\n
\n
\n

Knowing (to the core of one's being) the fundamentals of logic - that a good argument should go from shared assumptions to a conclusion with clear and explicit reasoning every step of the way - marks a level 1 or higher. People at level 1 also generally know about the scientific method and the scientific ideas that crackpots talk about, know some standard fallacies and cognitive biases, and understand the law of truly large numbers: that with a big enough sample space, all sorts of weird coincidences can happen. It's much easier to apply the techniques of rationality to propositions than to empirical expectations, which is why most books on the subject cover the former, and why you should try to translate the latter into the former when possible. Learning how to be rational about empirical expectations is the driving force behind transitioning to level 2.
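
As a quick illustration of the law of truly large numbers, here is a small sketch; the one-in-a-million event and the million-person sample are made-up numbers, chosen only to show that an outcome which is vanishingly unlikely for any one person is nearly certain to happen to someone.

```python
# Chance that a 1-in-a-million coincidence happens to at least one person
# when a million independent people are each exposed to it once (assumed numbers).
p_single = 1e-6
n_people = 1_000_000

p_someone = 1 - (1 - p_single) ** n_people
print(f'P(at least one coincidence) = {p_someone:.3f}')  # about 0.632
```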

\n
\n

The transition from level 1 to 2 is a rarer and lengthier process. Level 2 is also much more specialized. Whereas level 1 consists mainly of reasoning about propositions, level 2 involves higher levels of abstraction, i.e. thinking about the rules that are required to reason about propositions in a particular domain. To do this without committing serious mistakes requires a great deal of care and rigor, as well as knowledge of the relevant sciences and philosophical principles. When you realize that all those papers on cognitive biases and common fallacies also apply to you, you become immediately suspicious of anything that isn't absolutely explicit and precise. People at level 2 are also much more likely to admit when they don't know something, because the pattern of defining and using a precise and explicit model is nigh impossible when you don't know what you're talking about. Hence formalization is a very important technique at this level, which requires that people at level 2 know advanced math.

\n

Becoming a level 2 isn't easy: it requires learning an advanced model of how to think about problems in a particular domain, a model that may clash with your naive intuitions. Being able to trust the math, and more importantly to do the math, is an essential skill, and one that is difficult to master. Programming and teaching are good practice techniques at this level, because they both require being explicit about your unconscious reasoning. The process of using a formal model to predict and explain things tends to give people at level 2 much better intuitions about problems in their domain of knowledge, as well as a sense of aesthetics about abstracted models. This sense of aesthetics is probably very important in helping a person who is trying to reach level 3 and come up with models of their own.

\n

Level 3 is the level where you are able to produce the awesome: you not only have an intuitive understanding of the mathematical models in your field, you also created them. At level 3 you don't have a guiding textbook to help you; you are the one who writes them. Very, very few people reach this level, and those who do are heralded as geniuses, such as Newton, Darwin, Gödel, Russell or Einstein. I don't know how to become a level 3 (I know Eliezer has had some experience trying), but these are the people who change the world.

" } }, { "_id": "a5Afzce6Ny8oo9p7L", "title": "Hyakujo's Fox", "pageUrl": "https://www.lesswrong.com/posts/a5Afzce6Ny8oo9p7L/hyakujo-s-fox", "postedAt": "2009-03-24T10:14:33.346Z", "baseScore": 16, "voteCount": 26, "commentCount": 31, "url": null, "contents": { "documentId": "a5Afzce6Ny8oo9p7L", "html": "

From \"Hyakujo's Fox\", #2 of the 49 koans in The Gateless Gate:

\n
\n

Once when Hyakujo delivered some Zen lectures an old man attended them, unseen by the monks. At the end of each talk when the monks left so did he. But one day he remained after they had gone, and Hyakujo asked him: `Who are you?'

\n

The old man replied: `I am not a human being, but I was a human being when the Kashapa Buddha preached in this world. I was a Zen master and lived on this mountain. At that time one of my students asked me whether the enlightened man is subject to the law of causation. I answered him: \"The enlightened man is not subject to the law of causation.\" For this answer evidencing a clinging to absoluteness I became a fox for five hundred rebirths, and I am still a fox. Will you save me from this condition with your Zen words and let me get out of a fox's body? Now may I ask you: Is the enlightened man subject to the law of causation?'

\n

Hyakujo said: `The enlightened man is one with the law of causation.'

\n

At the words of Hyakujo the old man was enlightened.

\n
\n

Mumon's poem:

\n
\n

Controlled or not controlled?
The same dice shows two faces.
Not controlled or controlled,
Both are a grievous error.

\n
\n

It really makes you wonder how the hell they got that far while still believing that the wrong answer could turn you into a fox.

" } }, { "_id": "YC3ArwKM8xhNjYqQK", "title": "On Things that are Awesome", "pageUrl": "https://www.lesswrong.com/posts/YC3ArwKM8xhNjYqQK/on-things-that-are-awesome", "postedAt": "2009-03-24T03:24:07.108Z", "baseScore": 26, "voteCount": 36, "commentCount": 25, "url": null, "contents": { "documentId": "YC3ArwKM8xhNjYqQK", "html": "

This post, which touched on the allowedness of admiration, started me thinking about the nature of things that are awesome.

\n

The first thing one does in such a situation is generate examples.  And my brain, asked to enumerate things that are awesome, said:  \"Douglas Hofstadter, E. T. Jaynes, Greg Egan...\"

\n

Upon that initial output of my brain, I had many other thoughts:

\n

(1)  My brain was able to list more than one thing that is awesome.  I am not going to dwell on this, because I think it needless to go around saying, \"Douglas Hofstadter is awesome, but E. T. Jaynes is awesome too,\" as though to deliberately moderate or subtract from the admiration of Hofstadter.  The enjoyment of things that are awesome is an important part of life, and I don't think a healthy mind should have to hold back.  But the more things you know that are awesome, the more there is to enjoy—this doesn't mean you should artificially inflate your estimations of awesomeness, but it does mean that if you can think of only one awesome thing, you must be missing out on a lot of life.  And some awesome things, but not all, are compatible enough with yourself that you can draw upon the awesome—Hofstadter and Jaynes are both like this for me, but Greg Egan is not.  So even leaving aside certain mental health risks from having only one awesome thing—it is both enjoyable, and strengthening, to know of many things that are awesome.

\n

(2)  I can think of many places where I disagree with statements emitted by Douglas Hofstadter and Greg Egan, and even one or two places where I would want to pencil in a correction to Jaynes (his interpretation of quantum mechanics being the most obvious).  In fact, when my brain says \"Greg Egan\" it is really referring to two novels, Permutation City and Quarantine, which overshadow all his other works in my book.  And when my brain says \"Hofstadter\" it is referring to Gödel, Escher, Bach with a small side order of some essays in Metamagical Themas.  For most people their truly awesome work is usually only a slice of their total output, from some particular years (I find that scary as hell, by the way).

\n

(3)  Once you realize that you're only admiring someone's peak work, you also realize that the work is not the person:  I don't actually know Hofstadter, or Greg Egan, or E. T. Jaynes.  I have no idea what they are (were) like in their personal lives, or whether their daily deeds had any trace of the awesome that is in their books.  If you start thinking that a person is supposed to be as universally and consistently awesome as their best work, so that every word from their lips is supposed to be as good as the best book they ever wrote, that's probably some kind of failure mode.  This is not to try to moderate or diminish the awesomeness: for their best work is that awesome, and so there must have been a moment of their life, a time-slice out of their worldline, which was also that awesome.  But what the symbol \"Douglas Hofstadter\" stands for, in my mind, is not all his works, or all his life.

\n

(4)  This made me realize a strange thing:  Whenever someone compliments \"Eliezer Yudkowsky\", they are really complimenting \"Eliezer Yudkowsky's writing\" or \"Eliezer Yudkowsky's best writing that stands out most in my mind\".  People who met me in person were often shocked at how much my in-person impression departed from the picture they had in their minds.  I think this mostly had to do with imagining me as being the sort of actor who would be chosen to play me in the movie version of my life—they imagined way too much dignity.  That forms a large part of the reason why I occasionally toss in the deliberate anime reference, which does seem to have fixed the divergence a bit.  And these days I have videos of myself online.  But then the inside of my head is something different again.  It's an odd thought to realize that everyone else who uses the symbol 'Eliezer Yudkowsky' uses it to refer to a quite different thing than you do.

\n

(5)  What chiefly conveys to me the experience of the awesome is to see someone—pardon me, see someone's work —that is way above me.  My most recent experience of the awesome was reading the third book in Jacqueline Carey's Kushiel series, and realizing that although I want to write with that kind of emotional depth, I can't, and may never be able to in this world.  I looked back at all my own tries in (unpublished) fiction, and it paled to grey by comparison.  It was the same way with reading Hofstadter the first time, and thinking that I could never, ever write as well as Gödel, Escher, Bach; or reading Permutation City, and seeing how far above me Greg Egan was as an idea-based science fiction writer.  And it would have been the same way with Jaynes, if that time I hadn't been thinking to myself, \"No, I must become this good.\"  This is also something of a reply to Carl's comment that we may feel freer to admire those who do not compete with us—for me, the experience of the awesome is most strongly created by seeing someone (or rather their work) outdoing me overwhelmingly, in some place where I have tried my hand.  I don't think there's anything unhealthy about making this a basis of admiration.

\n

(6)  My brain did not immediately enumerate all sorts of things that are too much a part of my background world to be salient:  Science, space travel, the human brain, and the universe are all awesome.  But the latter two are not human works, and you can't draw power from them the same way you can from a human work that is awesome and at least partly imitable.  And the virtue of narrowness seems to play an important part here: an awesome thing that can be viewed in one small chunk and understood in detail will seem more awesome than something big and diffusely awesome.  I would probably admire the space shuttle far more if I knew about it in more detail!

\n

(7)  One of the reasons why I object to Adam Frank's attempt to salvage the concept of \"sacredness\" from religion, instead of reinventing it from scratch, is that e.g. being contaminated by religious experience makes you more likely to think that sacredness should only be about stars or something—those works that were once thought to be of God—whereas there is often a lot more awesomeness stored up in a human work that you know is human.  If I want to canonize something as sacred, I'll take Gödel, Escher, Bach over a mountain any day.

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"Your Price for Joining\"

\n

Previous post: \"You're Calling *Who* A Cult Leader?\"

" } }, { "_id": "8KhThQXzsAEZ59iko", "title": "Bogus Pipeline, Bona Fide Pipeline", "pageUrl": "https://www.lesswrong.com/posts/8KhThQXzsAEZ59iko/bogus-pipeline-bona-fide-pipeline", "postedAt": "2009-03-24T00:10:44.198Z", "baseScore": 30, "voteCount": 31, "commentCount": 16, "url": null, "contents": { "documentId": "8KhThQXzsAEZ59iko", "html": "

Related to: Never Leave Your Room

\n

Perhaps you are a psychologist, and you wish to do a study on racism. Maybe you want to know whether racists drink more coffee than non-racists. Sounds easy. Find a group of people and ask them how racist they are, then ask them how much coffee they drink.

Problem: everyone in your study says they're completely non-racist and some of their best friends are black and all races are equally part of this vast multicolored tapestry we call humanity. Maybe some of them are stretching the truth here a bit. Until you figure out which ones, you're never going to find out anything interesting about coffee.

So you build a foreboding looking machine out of gleaming steel, covered with wires and blinking lights. You sit your subjects down in front of the machine, connect them to its electrodes, and say as convincingly as possible that it is a lie detector and they must speak the truth. Your subjects look doubtful. Didn't they hear on TV that lie detectors don't really work? They'll stick to their vehement assertions of tolerance until you get a more impressive-looking machine, thank you.

You get smarter. Before your experiment, you make the subjects fill in a survey, which you secretly copy while they're not looking. Then you bring them in front of the gleaming metal lie detector, and dare them to try to thwart it. Every time they give an answer different from the one on the survey, you frown and tell them that the machine has detected their fabrication. When the subject is suitably impressed, you start asking them about racism.

The subjects start grudgingly admitting they have some racist attitudes. You have invented the Bogus Pipeline.

\n

\n

The Bogus Pipeline is quite powerful. Since its invention in the 70s, several different studies demonstrate that its victims will give significantly less self-enhancing answers to a wide variety of questions than will subjects not connected to the machinery. In cases where facts can be checked, Pipeline subjects' answers tend to be more factually correct than normal subjects'.

In one of the more interesting Bogus Pipeline experiments, Millham and Kellogg wanted to know how much of a person's average self-enhancement is due to self-deception biases, and how much is due to simple lying. They asked people some questions about themselves under normal and Pipeline conditions, using the Marlowe-Crowne scale. This scale really deserves a post of its own, but the short version is that it asks you some loaded questions, and if you take them as an opportunity to say nice things about yourself, you get marked down as a self-enhancer. There was a correlation of .68 between Marlowe-Crowne scores in normal and Pipeline conditions. If we accept that no one deliberately lies under the Pipeline, that means we now know how much self-enhancement is, on average, self-deception rather than deliberate falsehood (tendency towards deliberate falsehoods correlated .37 with Marlowe-Crowne.1)
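
For readers who want to see what a correlation like the .68 above cashes out to, here is a minimal sketch of the calculation; the score lists are invented stand-ins for Marlowe-Crowne scores under the two conditions, not data from the study.

```python
import statistics

def pearson_r(xs, ys):
    # Pearson correlation between two equal-length lists of scores.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical Marlowe-Crowne scores for six subjects
normal_condition   = [22, 18, 25, 15, 20, 17]
pipeline_condition = [17, 14, 19, 13, 15, 16]
print(round(pearson_r(normal_condition, pipeline_condition), 2))
```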

\n

Interesting stuff. But you still don't know whether racists drink more coffee! Your Bogus Pipeline only eliminates part of the self-enhancement in your subjects' answers. If you want to solve the coffee question once and for all, you can't count on a fake mind-reading device. You need a real mind-reading device. And in the mid 90s, psychology finally developed one.

The Bona Fide Pipeline is far less impressive-looking than the Bogus Pipeline. Though the Bogus Pipeline tries as hard as it can to scream \"mind-reading device\", the Bona Fide Pipeline has a vested interest in preventing its victims from realizing their minds are being read. It is a simple computer terminal.

The Pipeline uses a complicated process to disguise itself as an ordinary study on distraction or face recognition or somesuch, but the active ingredient is this: the subjects play a game where they must hit one key (perhaps \"A\") if the screen displays a good word (for example \"wonderful\"), and a different key (perhaps \"L\") if the screen displays a bad word (for example \"ugly\").

But before it gives you the word, it shows you a picture of a white person or a black person. Remember priming? That picture of a black person is going to prime your brain's concept of \"black person\" and any concepts you associate with \"black person\". If you have racist attitudes, \"bad\" is one concept you associate with \"black person\". You're going to have a very easy time recognizing \"ugly\" as a bad word, because your \"bad\" concept is already activated. But you're going to have a harder time recognizing \"wonderful\" as a good concept, because your brain is already skewed in the opposite direction. It's not impossible, it's just going to take a few hundred more milliseconds. Each of which the Bona Fide Pipeline is recording and processing. At the end, it spits out a score telling you that you took an average of three hundred milliseconds longer to recognize good words when primed with black people's pictures than white people's pictures.
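
Here is a minimal sketch of how the priming effect might be scored; the trial data and the variable names are hypothetical, and the real software records far more conditions and applies more careful statistics.

```python
from statistics import mean

# Each trial: (prime shown, valence of the target word, reaction time in ms).
# Hypothetical data for one subject.
trials = [
    ('white_face', 'good', 540), ('white_face', 'good', 560),
    ('black_face', 'good', 850), ('black_face', 'good', 870),
    ('white_face', 'bad', 700), ('black_face', 'bad', 520),
]

def mean_rt(prime, valence):
    return mean(rt for p, v, rt in trials if p == prime and v == valence)

# Extra time needed to recognize a good word after a black prime,
# relative to a white prime.
bias_ms = mean_rt('black_face', 'good') - mean_rt('white_face', 'good')
print(f'Good-word slowdown after black primes: {bias_ms:.0f} ms')
```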

Does this actually work? The original study (Fazio et al, 1995) tested both whites and blacks, and found the whites were more likely to be prejudiced against blacks than the blacks were, which makes sense. In the same study, a black experimenter conversed with the subjects for a while, and rated the quality of the interaction by a typically rigorous rubric. This fuzzy unscientific measure of racist behavior correlated well with the Pipeline's data for the individuals involved. A study by Jackson (1997) found that people who score high on prejudice by Pipeline measures on average give lower scores to an essay written by a student known to be black.

The Bona Fide Pipeline has lately been superseded by its younger, sexier, Harvard-educated cousin, the IAT. More on that, the associated controversy, and the relevance to rationality tomorrow.

\n

Footnotes:

\n

1: I doubt that deceptions can be separated cleanly into self-deception and deliberate falsehood like this. More likely there are many different shades of grey, and the Bogus Pipeline captures some but not all of them.

" } }, { "_id": "BdS9TaSJPZZpq2Zxn", "title": "Thoughts on status signals", "pageUrl": "https://www.lesswrong.com/posts/BdS9TaSJPZZpq2Zxn/thoughts-on-status-signals", "postedAt": "2009-03-23T21:25:26.871Z", "baseScore": 8, "voteCount": 20, "commentCount": 24, "url": null, "contents": { "documentId": "BdS9TaSJPZZpq2Zxn", "html": "

The LW community knows all too well about the status-seeking tendencies everyone has, not excluding themselves. However, the discussion on status signaling needs to be developed further. Here are some questions I don’t think have been addressed: what can we conclude about people who are blatantly signaling higher status? Should we or can we stop people from signaling?

\n

First, let me clarify what I believe to be the nature of status signals. A status signal only exists in certain contexts. A signal in one community may not be effective in another simply because the other community has a different value system. Driving up to a Singularity Summit with 24-inch spinning rims on your car will signal low status, if anything.

\n

An interesting property of status signals is that they expire. If everybody knows that everybody knows that a certain behavior has been used as a status signal in the past, it no longer works. One example of a status signal that is nearing expiration is buying an unacquainted woman a drink at the bar (note the context I am referring to; buying someone a drink may signal high status in other contexts). There is nothing inherently wrong with this act; it’s just that women know that most men are just trying to signal for high status—therefore, the signal won’t work. Some men know that women know about this signal and, thus, stop using the signal.

\n

On LW, one signal on the verge of expiring is being a contrarian about everything or always finding faults with another’s arguments. This, however, could lead to a new anti-signal signal: agreeing too much.  

\n

Signals that have completely expired are infinitely more numerous. For example, showing your resume or college transcript in most contexts is unacceptable. Even when applying for a job, the resume is no longer sufficient—several interviews are now necessary. Of course, in the interviews, the interviewer is just looking for unexpired signals i.e. signals they don’t know are signals.  

\n

This discussion on the expiration of signals raises this question: why do signals expire?

\n

When A realizes that B is signaling, B’s incentive scheme is exposed. A knows that B is trying to make himself appear higher status in the eyes of A or anyone else he is signaling to. Furthermore, A knows that B thinks A doesn’t know the signal is, in fact, a signal. Otherwise, B wouldn’t have done the signal. A now knows that B is trying to impress (a low status behavior by the way) and therefore has the incentive to lie. Since A knows that B doesn’t know that A knows he is signaling, A figures B thinks he can get away with lying or exaggerating the truth. Since A knows that B has the incentive to lie, A will find the signal not credible. In short, a signal expires once it’s common knowledge that the signal is a signal.

\n

In an ideal world, we would all just cooperate and tell the truth about ourselves and we wouldn’t have to play this silly signal game. Unfortunately, if people start cooperating, the incentive to defect just gets higher. As you see, this is a classic Prisoner’s Dilemma game.
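
To make the Prisoner's Dilemma framing explicit, here is a tiny sketch with made-up payoffs, where 'honest' means truthfully reporting your status and 'signal' means inflating it; the numbers are assumptions chosen only to satisfy the usual dilemma ordering.

```python
# Payoffs (to the row player) with invented numbers that satisfy the
# Prisoner's Dilemma ordering T > R > P > S.
payoff = {
    ('honest', 'honest'): 3,  # R: everyone reports truthfully, signals stay informative
    ('signal', 'honest'): 5,  # T: you inflate your status while others stay honest
    ('honest', 'signal'): 0,  # S: you stay honest while others inflate
    ('signal', 'signal'): 1,  # P: everyone signals, so signals expire and lose value
}

for others in ('honest', 'signal'):
    best = max(('honest', 'signal'), key=lambda me: payoff[(me, others)])
    print(f'If others play {others}, your best response is {best}')
# Signaling dominates, even though mutual honesty pays everyone more.
```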

\n

How can we get people to tell the truth?

\n

Easy, everyone needs to learn about status-seeking behavior in order to weed out unreliable signals. The signal game may never end, but with everyone’s knowledge of status-seeking behaviors, the signals that aren’t yet weeded out will correspond more accurately to one’s true status.

" } }, { "_id": "K588BN7RNpsrnjK2X", "title": "Book: Psychiatry and the Human Condition", "pageUrl": "https://www.lesswrong.com/posts/K588BN7RNpsrnjK2X/book-psychiatry-and-the-human-condition", "postedAt": "2009-03-23T19:14:41.401Z", "baseScore": 10, "voteCount": 12, "commentCount": 15, "url": null, "contents": { "documentId": "K588BN7RNpsrnjK2X", "html": "

I'm about half-way through this fascinating book, conveniently available for free online, which is at the intersection of psychiatry and evolutionary psychology.  I don't have the time to do it justice, so I'm going to post a few choice excerpts here in the hope that those who are more prolific and insightful than I am will add further analysis.

\n

Just to make sure it's clear how this all ties in to bias, I'll start with a bias-relevant section.  The book ties delusional behavior in with the theory of consciousness as primarily existing for social intelligence purposes, and thus malfunctions in our reading of the social facts such as human intention are what cause delusions:

\n

But some people with delusions are entirely ‘normal’ except for the false belief, and the belief itself is neither impossible nor outlandish. Any other unusual behaviors can be traced back to that false belief. For instance, a man may have the fixed, false and dominating belief that his wife is having an affair with a neighbour. This belief may be so dominating as to lead to a large program of surveillance  - spying on his wife, searching her handbag, examining her clothes etc. Yet the same man may show no evidence of irrationality in other areas of his life, being able to function normally at work and socializing easily with acquaintances, so that only close friends and family are aware of the existence of the delusion. In such instances the delusion is said to be ‘encapsulated’, ie. sealed-off from other aspects of mental life, and these people are said to have a delusional disorder.

\n

...

\n

Delusions are typically stated to have three major defining characteristics. Firstly that a delusional belief is false, secondly that this false belief is behaviorally dominant, and thirdly that the false belief is resistant to counter-argument. All these characteristics are shown by delusional disorders, yet they occur in a context of generally non-pathological cognitive functioning.

\n


Humans are extremely prone to ‘false’ beliefs, or at least beliefs that strike many or most other people as false. Some of these false beliefs are strongly held and dominate behavior. It is trivially obvious that humans are imperfect logicians operating for most of the time on incomplete information, so mistakes are inevitable. But it is striking that although everyone would acknowledge the imperfections of human reasoning, many of these false beliefs are not susceptible to argument. For example, deeply cherished religious and political beliefs are nonetheless based on little or no hard evidence, vary widely, yet may dominate a person’s life, and are sometimes held with unshakeable intensity. And religious and political beliefs may strike the vast majority of other people as obviously false.

\n

...

\n

On reflection, we all harbor beliefs that may strike other people as false, even abhorrent, yet they could not persuade us out of them, at least not over a short timescale. Deeply felt beliefs do sometimes change over a lifetime but not necessarily as a consequence of compelling evidence - people sometimes change their political views, convert to a new religion or to agnosticism, and in their personal lives go through several revisions of their opinion about who is the most beautiful and desirable woman/ man in the world.

\n


In other words, delusions are a part of everyday life - but all these everyday delusions are of a particular sort. They are all delusions in relation to social intelligence. At root, all these false, or at least unjustifiable, beliefs are based upon interpretations of the human world. Even some of the more strange beliefs people have about cosmology and metaphysics often boil down to beliefs about agency - the power and influence of powerful and influential agents - whether human or supernatural.

\n

The book is hosted on HedWeb, and you can see why - it has a DIY transhumanism ethos that is happy to leap from diagnosis to ideas about treatment:

\n

Psychiatry and the Human Condition provides an optimistic vision of a superior alternative approach to psychiatric illness and its treatment, drawing upon modern neuroscience and evolutionary theory. Psychiatric signs and symptoms - such as anxiety, insomnia, malaise, fatigue - are part of life for most people, for much of the time. This is the human condition. But psychiatry has the potential to help. In particular, psychotropic drugs could enable more people to lead lives that are more creative and fulfilled. Current classifications and treatments derive from a century-old framework which now requires replacement. Available psychotropic drugs are typically being used crudely, and without sufficient attention to their psychological effects.

We can do better. This book argues that obsolete categories of diseases and drugs should be scrapped. The new framework of understanding implies that clinical management should focus on the treatment of biologically-valid symptoms and signs, and include a much larger role for self-treatment.

\n

It discusses the economics of hunter-gatherer societies, which provides a clue about several biases:

\n

Most people’s ideas of ‘primitive’ or ‘tribal’ life is based on agricultural or herding modes of production. In such societies there is invariably domination of the mass of people by a ‘chief’ (plus henchmen) who appropriate a large share of resources. But in an ‘immediate return’ or ‘simple hunter-gatherer’ economy there is an extremely egalitarian social system, with very little in the way of wealth differentials. Food is gathered on a roughly daily basis for rapid consumption, and tools or other artifacts were made as required. There was no surplus of food or material goods, no significant storage of accumulated food or other resources, and the constraints of nomadic life meant that artifacts can not be accumulated.

One of the most distinctive features of foraging societies, as contrasted with human societies that currently exist, was that ancestral societies were to a high degree egalitarian and without significant or sustained differentials in resources among men of the same age. There were indeed differentials in resource allocation according to age and sex (eg. adults ate more than children, men ate more than women) - but there was not a class or caste system, society was not stratified into rich and poor people who tended to pass their condition on to their children.

This equality of outcome is achieved in immediate-return economies by a continual process of redistribution through the sharing of food on a daily basis, and through continual equalizing redistribution of other goods. The sharing may be accomplished in various ways in different societies, including gambling games of chance or the continual circulation of artifacts as gifts. But the important common feature is that sharing is enforced by a powerful egalitarian ethos which acts to prevent a concentration of power in few hands, and in which participants are ‘vigilant’ in favour of making sure that no-one else takes more than themselves. If each individual person ensures that no-one else gets more than they do, the outcome is equality.

\n

You can see here the roots of pessimistic bias (the belief that material wealth is increasing far more slowly than it is), since in the HG society there was no ability to leverage capital to exponentially grow wealth over time.  Also, this means no intuitive understanding of how small differences in personal or national productivity can lead over time to huge differences in wealth.  The progressive passion for equality through redistribution, and the accompanying blind spot about the growth-choking effect of these policies, makes sense too - in the HG environment there was no capital growth, so redistribution didn't choke growth, it just helped everyone stay fed.  Suspicion of the very wealthy makes sense, because there was no way for a HG to become very wealthy, as there was no storage of important resources.

\n

Of the three kinds of society as described by Gellner: hunter-gatherer, agrarian, and mercantile, it is probable that hunter-gatherers had the best life, overall. Hunter gatherer societies are the happiest and peasant societies are the most miserable - while industrial-mercantile societies such as our own lie somewhere in between.

That, at any rate, is the conclusion of anthropologist Jerome Barkow - and his opinion is widely confirmed by the reports of many independent anthropologists who have experienced the alternatives of foraging, agrarian and industrial society...

Another line of evidence is patterns of voluntary migration. When industrial mercantile societies develop, they are popular with the miserable peasantry of agrarian societies who flee the land and crowd the cities, if given the chance. Not so the happier hunter gatherers who typically must be coerced into joining industrial life. My great grandparents left their lives as rural peasants and converged from hundreds of miles and several countries to work the coal mines of Northumberland. They swapped the open sky, fields and trees for a life underground and inhabiting dingy rows of colliery houses. Being a miner in the early twentieth century must have been grim, but apparently it was not so bad as being an agricultural laborer.

\n

My hypothesis is that when most people think about people in the third world moving to factory jobs, they model the current state of those people as happy hunter-gatherers. Our idealized vision of the happy past is our instinct about happy hunter-gatherers applied incorrectly to agrarian societies. In practice, there are very few hunter-gatherers left, and the reason people go to sweatshop jobs is because those jobs are far better than the miserable toil of subsistence farming.

(Which rather raises the question of why people move to subsistence farming. Perhaps it's a group selection thing - agrarian societies are so much more productive (they accumulate capital, albeit slowly, and can support much larger population bases, which means more ideas and gains from trade) that those who choose them outcompete those who don't.)

\n

It then moves on to meatier psychiatric topics, like the crapitude of the current taxonomy for psychiatric disorders:

\n

These diagnostic systems employ a syndromal system of classification that derives ultimately from the work of the psychiatrist Emil Kraepelin about a hundred years ago, and is therefore termed the ‘neo-Kraeplinian’ nosology. Whether or not a psychiatrist uses the formal diagnostic criteria, the neo-Kraeplinian nosology has now become ossified in the DSM and ICD manuals. Over the past few decades the mass of published commentary and research based on this nosology has created a climate of opinion to challenge which is seen as not so much mistaken as absurd.

\n


Yet the prevailing neo-Kraeplinian nosology is a mish-mash of syndromes that have widely varying plausibility and coherence. Some diagnoses are probably indeed biologically valid - having perhaps a single cause, occurring in a single psychological functional system, or having a unified pathology (some of the anxiety disorders, for instance, such as generalized anxiety, panic and simple phobias) But from the perspective of providing a sound basis for scientific research, especially for the core diagnoses of the ‘functional psychoses’, the whole thing is a terrible, misleading mess.
 
It might be thought that the current diagnostic schemes are supported by a wealth of scientific research. But almost the opposite is the case. Despite widespread skepticism in the research literature about the validity of the current diagnostic categories, it is still the case that almost all biological research is based upon neo-Kraeplinian diagnoses, them rather than neo-Kraeplinian diagnoses being based on research.

\n

There is lots more to be read and said, hopefully this has piqued your interest.

" } }, { "_id": "mY5SaNnugfEcj6957", "title": "Playing Video Games In Shuffle Mode", "pageUrl": "https://www.lesswrong.com/posts/mY5SaNnugfEcj6957/playing-video-games-in-shuffle-mode", "postedAt": "2009-03-23T11:59:33.469Z", "baseScore": 20, "voteCount": 19, "commentCount": 34, "url": null, "contents": { "documentId": "mY5SaNnugfEcj6957", "html": "
\n

One of the missions of OB/LW is to attract new learners, and it's clear that they are succeeding.  But the format feels like a very difficult one for those new to these ideas, with beginner-level ideas interspersed with advanced or unsettled theory and meta-level discussions.    You wouldn't play <insert cool-sounding, anime-ish video game here> with the levels on shuffle mode, but reading Less Wrong must feel like doing so for initiates.

\n

How do we make the site better for learners?  Provide a \"syllabus\" that shows a series of OB and LW posts which should be read in order?  Have a separate beginner site or feed or header?  Put labels on posts that designate them with a level?

\n
" } }, { "_id": "bWPog7DDNAhaqyaKW", "title": "I'm confused. Could someone help?", "pageUrl": "https://www.lesswrong.com/posts/bWPog7DDNAhaqyaKW/i-m-confused-could-someone-help", "postedAt": "2009-03-23T05:26:24.617Z", "baseScore": 1, "voteCount": 14, "commentCount": 12, "url": null, "contents": { "documentId": "bWPog7DDNAhaqyaKW", "html": "

Imagine that I'm offering a bet that costs 1 dollar to accept. The prize is X + 5 dollars, and the odds of winning are 1 in X. Accepting this bet, therefore, has a positive expected value, and offering it has a negative expected value. It seems like a good idea to accept the bet, and a bad idea for me to offer it, for any reasonably sized value of X.

\n

Does this still hold for unreasonably sized values of X? Specifically, what if I make X really, really, big? If X is big enough, I can reasonably assume that, basically, nobody's ever going to win. I could offer a bet with odds of 1 in 10^100 once every second until the Sun goes out, and still expect, with near certainty, that I'll never have to make good on my promise to pay. So I can offer the bet without caring about its negative expected value, and take free money from all the expected value maximizers out there.
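
A quick back-of-the-envelope check of the numbers, as a minimal sketch under the bet as described (cost 1 dollar, prize X + 5, win probability 1/X); the particular values of X and the figure for the Sun's remaining lifetime are just illustrative.

```python
# EV of accepting: (1/X) * (X + 5) - 1, which simplifies to 5/X dollars.
def accept_ev(x):
    return 5 / x

for x in (10, 1_000, 1e100):
    print(f'X = {x:g}: accepter EV = {accept_ev(x):.3g}, offerer EV = {-accept_ev(x):.3g}')

# Offering once per second for the Sun's remaining ~5 billion years (~1.6e17 seconds),
# the chance of ever having to pay out at X = 1e100 is roughly 1.6e17 * 1e-100.
print(f'P(ever paying out) = {1.6e17 * 1e-100:.1e}')
```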

\n

What's wrong with this picture?

\n

See also: Taleb Distribution, Nick Bostrom's version of Pascal's Mugging

\n

(Now, in the real world, I obviously don't have 10^100 + 5 dollars to cover my end of the bet, but does that really matter?)

\n

\n


Edit: I should have actually done the math. :(

" } }, { "_id": "xDroHJ3AzWwJ45ufJ", "title": "BHTV: Yudkowsky & Adam Frank on \"religious experience\"", "pageUrl": "https://www.lesswrong.com/posts/xDroHJ3AzWwJ45ufJ/bhtv-yudkowsky-and-adam-frank-on-religious-experience", "postedAt": "2009-03-23T01:33:30.012Z", "baseScore": 16, "voteCount": 19, "commentCount": 37, "url": null, "contents": { "documentId": "xDroHJ3AzWwJ45ufJ", "html": "

BHTV episode with myself and Adam Frank, author of \"The Constant Fire\", on whether or not religious experience is compatible with the scientific experience, or worth trying to salvage.

\n

\n

" } }, { "_id": "9hR2RmpJmxT8dyPo4", "title": "When Truth Isn't Enough", "pageUrl": "https://www.lesswrong.com/posts/9hR2RmpJmxT8dyPo4/when-truth-isn-t-enough", "postedAt": "2009-03-22T20:23:50.562Z", "baseScore": 136, "voteCount": 127, "commentCount": 59, "url": null, "contents": { "documentId": "9hR2RmpJmxT8dyPo4", "html": "

Continuation of: The Power of Positivist Thinking

\n

Consider this statement:

\n
\n

The ultra-rich, who control the majority of our planet's wealth, spend their time at cocktail parties and salons while millions of decent hard-working people starve.

\n
\n

A soft positivist would be quite happy with this proposition. If we define \"the ultra-rich\" as, say, the richest two percent of people, then a quick look at the economic data shows they do control the majority of our planet's wealth. Checking up on the guest lists for cocktail parties and customer data for salons, we find that these two activities are indeed disproportionately enjoyed by the rich, so that part of the statement also seems true enough. And as anyone who's been to India or Africa knows, millions of decent hard-working people do starve, and there's no particular reason to think this isn't happening at the same time as some of these rich people attend their cocktail parties. The positivist scribbles some quick calculations on the back of a napkin and certifies the statement as TRUE. She hands it the Official Positivist Seal of Approval and moves on to her next task.

But the truth isn't always enough. Whoever's making this statement has a much deeper agenda than a simple observation on the distribution of wealth and preferred recreational activities of the upper class, one that the reduction doesn't capture.

\n

\n


Philosophers like to speak of the denotation and the connotation of a word. Denotations (not to be confused with dennettations, which are much more fun) are simple and reducible. To capture the denotation of \"old\", we might reduce it to something testable like \"over 65\". Is Methusaleh old? He's over 65, so yes, he is. End of story.

Connotations[0] are whatever's left of a word when you subtract the denotation. Is Methusaleh old? How dare you use that word! He's a \"senior citizen!\" He's \"elderly!\" He's \"in his golden years.\" Each of these may share the same denotation as \"old\", but the connotation is quite different.

There is, oddly enough, a children's game about connotations and denotations[1]. It goes something like this:

\n
\n

I am intelligent. You are clever. He's an egghead.
I am proud. You are arrogant. He's full of himself.
I have perseverance. You are stubborn. He is pig-headed.
I am patriotic. You're a nationalist. He is jingoistic.

\n
\n

Politicians like this game too. Their version goes:

\n
\n

I care about the poor. You are pro-welfare. He's a bleeding-heart.
I'll protect national security. You'll expand the military. He's a warmonger.
I'll slash red tape. You'll decrease bureaucracy. He'll destroy safeguards.
I am eloquent. You're a good speaker. He's a demagogue.
I support free health care. You support national health care. He supports socialized health care.

\n
\n

All three statements in a sentence have the same denotation, but very different connotations. The Connotation Game would probably be good for after-hours parties at the Rationality Dojo[2], playing on and on until all three statements in a trio have mentally collapsed together.

Let's return to our original statement: \"The ultra-rich, who control the majority of our planet's wealth, spend their time at cocktail parties and salons while millions of decent hard-working people starve.\" The denotation is a certain (true) statement about distribution of wealth and social activities of the rich. The connotation is hard to say exactly, but it's something about how the rich are evil and capitalism is unjust.

There is a serious risk here, and that is to start using this statement to build your belief system. Yesterday, I suggested that saying \"Islam is a religion of peace\" is meaningless but affects you anyway. Place an overly large amount of importance on the \"ultra-rich\" statement, and it can play backup to any other communist beliefs you hear, even though it's trivially true and everyone from Milton Friedman on down agrees with it. The associated Defense Against The Dark Arts technique is to think like a positivist, so that this statement and its reduced version sound equivalent[3].

...which works fine, until you get in an argument. Most capitalists I hear encounter this statement flounder around a bit. Maybe they'll try to disprove it by saying something very questionable, like \"If people in India are starving, then they're just not working hard enough!\" or \"All rich people deserve their wealth!\"[4]

Let us take a moment to feel some sympathy for them. The statement sounds like a devastating blow against capitalism, but the capitalists cannot shoot it down because it's technically correct. They are forced to either resort to peddling falsehoods of the type described above, or to sink to the same level with replies like \"That sounds like the sort of thing Stalin would say!\" - which is, of course, denotatively true.

What would I do in their position? I would stand tall and say \"Your statement is technically true, but I disagree with the connotations. If you state them explicitly, I will explain why I think they are wrong.\"

YSITTBIDWTCIYSTEIWEWITTAW is a little long for an acronym, but ADBOC, for \"Agree Denotationally But Object Connotationally\", could work. [EDIT: Changed acronym to better suggestion by badger]

\n

Footnotes

\n

0: Anatoly Vorobey says in the comments that I'm using the word connotation too broadly. He suggests \"subtext\".

\n

1: I feel like I might have seen this game on Overcoming Bias before, but I can't find it there. If I did, apologies to the original poster.

\n

2: Comment with any other good ones you know.

\n

3: Playing the Connotation Game a lot might also give you partial immunity to this.

\n

4: This is a great example of a hotly-debated statement that is desperately in need of reduction.

" } }, { "_id": "Ndtb22KYBxpBsagpj", "title": "Eliezer Yudkowsky Facts", "pageUrl": "https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts", "postedAt": "2009-03-22T20:17:21.220Z", "baseScore": 247, "voteCount": 299, "commentCount": 324, "url": null, "contents": { "documentId": "Ndtb22KYBxpBsagpj", "html": "\n

If you know more Eliezer Yudkowsky facts, post them in the comments.

" } }, { "_id": "BHYBdijDcAKQ6e45Z", "title": "Cached Selves", "pageUrl": "https://www.lesswrong.com/posts/BHYBdijDcAKQ6e45Z/cached-selves", "postedAt": "2009-03-22T19:34:19.719Z", "baseScore": 229, "voteCount": 206, "commentCount": 81, "url": null, "contents": { "documentId": "BHYBdijDcAKQ6e45Z", "html": "

by Anna Salamon and Steve Rayhawk (joint authorship)

\n

Related to: Beware identity

\n

Update, 2021: I believe a large majority of the priming studies failed replication, though I haven't looked into it in depth. I still personally do a great many of the \"possible strategies\" listed at the bottom; and they subjectively seem useful to me; but if you end up believing that, it should not be on the basis of the claimed studies.

\n

A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, \"any random thing that happens to you can hijack your judgment and personality for the next few minutes.\"

\n

Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do, in the absence of obvious outside pressure, can hijack your self-concept for the medium- to long-term future.

\n

To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing.  So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards.  Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.

\n

For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.  If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends.  If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.

\n

All familiar phenomena, right?  You probably already discount other peoples’ views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas.  But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena.  And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence.  (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)

\n

Consider the following research.

\n

In the classic 1959 study by Festinger and Carlsmith, test subjects were paid to tell others that a tedious experiment had been interesting.  Those who were paid $20 to tell the lie continued to believe the experiment boring; those paid a mere $1 to tell the lie were liable later to report the experiment interesting.  The theory is that the test subjects remembered calling the experiment interesting, and either:

\n
    \n
  1. Honestly figured they must have found the experiment interesting -- why else would they have said so for only $1?  (This interpretation is called self-perception theory.), or
  2. Didn’t want to think they were the type to lie for just $1, and so deceived themselves into thinking their lie had been true.  (This interpretation is one strand within cognitive dissonance theory.)
\n

In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year old boys not to play with an attractive, battery-operated robot.  He also told each boy that such play was “wrong”.  Some boys were given big threats, or were kept carefully supervised while they played -- the equivalents of Festinger’s $20 bribe.  Others were given mild threats, and left unsupervised -- the equivalent of Festinger’s $1 bribe.  Later, instead of asking the boys about their verbal beliefs, Freedman arranged to test their actions.  He had an apparently unrelated researcher leave the boys alone with the robot, this time giving them explicit permission to play.  The results were as predicted.  Boys who’d been given big threats or had been supervised, on the first round, mostly played happily away.  Boys who’d been given only the mild threat mostly refrained.  Apparently, their brains had looked at their earlier restraint, seen no harsh threat and no experimenter supervision, and figured that not playing with the attractive, battery-operated robot was the way they wanted to act.

\n

One interesting take-away from Freedman’s experiment is that consistency effects change what we do -- they change the “near thinking” beliefs that drive our decisions -- and not just our verbal/propositional claims about our beliefs.  A second interesting take-away is that this belief-change happens even if we aren’t thinking much -- Freedman’s subjects were children, and a related “forbidden toy” experiment found a similar effect even in pre-schoolers, who just barely have propositional reasoning at all.

\n

Okay, so how large can such “consistency effects” be?  And how obvious are these effects -- now that you know the concept, are you likely to notice when consistency pressures change your beliefs or actions?

\n

In what is perhaps the most unsettling study I’ve heard along these lines, Freedman and Fraser had an ostensible “volunteer” go door-to-door, asking homeowners to put a big, ugly “Drive Safely” sign in their yard.  In the control group, homeowners were just asked, straight-off, to put up the sign.  Only 19% said yes.  With this baseline established, Freedman and Fraser tested out some commitment and consistency effects.  First, they chose a similar group of homeowners, and they got a new “volunteer” to ask these new homeowners to put up a tiny three inch “Drive safely” sign; nearly everyone said yes.  Two weeks later, the original volunteer came along to ask about the big, badly lettered signs -- and 76% of the group said yes, perhaps moved by their new self-image as people who cared about safe driving.  Consistency effects were working.

\n

The unsettling part comes next; Freedman and Fraser wanted to know how apparently unrelated the consistency prompt could be.  So, with a third group of homeowners, they had a “volunteer” for an ostensibly unrelated non-profit ask the homeowners to sign a petition to “keep America beautiful”.  The petition was innocuous enough that nearly everyone signed it.  And two weeks later, when the original guy came by with the big, ugly signs, nearly half of the homeowners said yes -- a significant boost above the 19% baseline rate.  Notice that the “keep America beautiful” petition that prompted these effects was: (a) a tiny and un-memorable choice; (b) on an apparently unrelated issue (“keeping America beautiful” vs. “driving safely”); and (c) two weeks before the second “volunteer”’s sign request (so we are observing medium-term attitude change from a single, brief interaction).

\n

These consistency effects are reminiscent of Yvain’s large, unnoticed priming effects -- except that they’re based on your actions rather than your sense-perceptions, and the influences last over longer periods of time.  Consistency effects make us likely to stick to our past ideas, good or bad.  They make it easy to freeze ourselves into our initial postures of disagreement, or agreement.  They leave us vulnerable to a variety of sales tactics.  They mean that if I’m working on a cause, even a “rationalist” cause, and I say things to try to engage new people, befriend potential donors, or get core group members to collaborate with me, my beliefs are liable to move toward whatever my allies want to hear.

\n

What to do?

\n

Some possible strategies (I’m not recommending these, just putting them out there for consideration):

\n
    \n
  1. Reduce external pressures on your speech and actions, so that you won’t make so many pressured decisions, and your brain won’t cache those pressure-distorted decisions as indicators of your real beliefs or preferences.  For example:
  2. Only say things you don’t mind being consistent with.  For example:
  3. Change or weaken your brain’s notion of “consistent”.  Your brain has to be using prediction and classification methods in order to generate “consistent” behavior, and these can be hacked.
  4. Make a list of the most important consistency pressures on your beliefs, and consciously compensate for them.  You might either consciously move in the opposite direction (I know I’ve been hanging out with singularitarians, so I somewhat distrust my singularitarian impressions) or take extra pains to apply rationalist tools to any opinions you’re under consistency pressure to have.  Perhaps write public or private critiques of your consistency-reinforced views (though Eliezer notes reasons for caution with this one).
  5. Build more reliably truth-indicative types of thought.  Ultimately, both priming and consistency effects suggest that our baseline sanity level is low; if small interactions can have large, arbitrary effects, our thinking is likely pretty arbitrary to begin with.  Some avenues of approach:
\n" } }, { "_id": "cyzXoCv7nagDWCMNS", "title": "You're Calling *Who* A Cult Leader?", "pageUrl": "https://www.lesswrong.com/posts/cyzXoCv7nagDWCMNS/you-re-calling-who-a-cult-leader", "postedAt": "2009-03-22T06:57:46.809Z", "baseScore": 60, "voteCount": 85, "commentCount": 121, "url": null, "contents": { "documentId": "cyzXoCv7nagDWCMNS", "html": "

Followup to: Why Our Kind Can't Cooperate, Cultish Countercultishness

\n

I used to be a lot more worried that I was a cult leader before I started reading Hacker News.  (WARNING:  Do not click that link if you do not want another addictive Internet habit.)

\n

From time to time, on a mailing list or IRC channel or blog which I ran, someone would start talking about \"cults\" and \"echo chambers\" and \"coteries\".  And it was a scary accusation, because no matter what kind of epistemic hygiene I try to practice myself, I can't look into other people's minds.  I don't know if my long-time readers are agreeing with me because I'm making sense, or because I've developed creepy mind-control powers.  My readers are drawn from the nonconformist crowd—the atheist/libertarian/technophile/sf-reader/Silicon-Valley/early-adopter cluster—and so they certainly wouldn't admit to worshipping me even if they were.

\n

And then I ran into Hacker News, where accusations in exactly the same tone were aimed at the site owner, Paul Graham.

\n

Hold on.  Paul Graham gets the same flak I do?

\n\n

I've never heard of Paul Graham saying or doing a single thing that smacks of cultishness.  Not one.

\n

He just wrote some great essays (that appeal especially to the nonconformist crowd), and started an online forum where some people who liked those essays hang out (among others who just wandered into that corner of the Internet).

\n

So when I read someone:

\n
    \n
  1. Comparing the long hours worked by Y Combinator startup founders to the sleep-deprivation tactic used in cults;
  2. Claiming that founders were asked to move to the Bay Area startup hub as a cult tactic of separation from friends and family;
\n

...well, that outright broke my suspension of disbelief.

\n

\n

Something is going on here which has more to do with the behavior of nonconformists in packs than whether or not you can make a plausible case for cultishness or even cultishness risk factors.

\n

But there are aspects of this phenomenon that I don't understand, because I'm not feeling what they're feeling.

\n

Behold the following, which is my true opinion:

\n

\"Gödel, Escher, Bach\" by Douglas R. Hofstadter is the most awesome book that I have ever read.  If there is one book that emphasizes the tragedy of Death, it is this book, because it's terrible that so many people have died without reading it.

\n

I know people who would never say anything like that, or even think it: admiring anything that much would mean they'd joined a cult (note: Hofstadter does not have a cult).  And I'm pretty sure that this negative reaction to strong admiration is what's going on with Paul Graham and his essays, and I begin to suspect that not a single thing more is going on with me.

\n

But I'm having trouble understanding this phenomenon, because I myself feel no barrier against admiring Gödel, Escher, Bach that highly.

\n

In fact, I would say that by far the most cultish-looking behavior on Hacker News is people trying to show off how willing they are to disagree with Paul Graham.  Let me try to explain how this feels when you're the target of it:

\n

It's like going to a library, and when you walk in the doors, everyone looks at you, staring.  Then you walk over to a certain row of bookcases—say, you're looking for books on writing—and at once several others, walking with stiff, exaggerated movements, select a different stack to read in.  When you reach the bookshelves for Dewey decimal 808, there are several other people present, taking quick glances out of the corner of their eye while pretending not to look at you.  You take out a copy of The Poem's Heartbeat: A Manual of Prosody.

\n

At once one of the others present reaches toward a different bookcase and proclaims, \"I'm not reading The Poem's Heartbeat!  In fact, I'm not reading anything about poetry!  I'm reading The Elements of Style, which is much more widely recommended by many mainstream writers.\"  Another steps in your direction and nonchalantly takes out a second copy of The Poem's Heartbeat, saying, \"I'm not reading this book just because you're reading it, you know; I think it's a genuinely good book, myself.\"

\n

Meanwhile, a teenager who just happens to be there, glances over at the book.  \"Oh, poetry,\" he says.

\n

\"Not exactly,\" you say.  \"I just thought that if I knew more about how words sound—the rhythm—it might make me a better writer.\"

\n

\"Oh!\" he says, \"You're a writer?\"

\n

You pause, trying to calculate whether the term does you too much credit, and finally say, \"Well, I have a lot of readers, so I must be a writer.\"

\n

\"I plan on being a writer,\" he says.  \"Got any tips?\"

\n

\"Start writing now,\" you say immediately.  \"I once read that every writer has a million words of bad writing inside them, and you have to get it out before you can write anything good.  Yes, one million.  The sooner you start, the sooner you finish.\"

\n

The teenager nods, looking very serious.  \"Any of these books,\" gesturing around, \"that you'd recommend?\"

\n

\"If you're interested in fiction, then definitely Jack Bickham's Scene and Structure,\" you say, \"though I'm still struggling with the form myself.  I need to get better at description.\"

\n

\"Thanks,\" he says, and takes a copy of Scene and Structure.

\n

\"Hold on!\" says the holder of The Elements of Style in a tone of shock.  \"You're going to read that book just because he told you to?\"

\n

The teenager furrows his brow.  \"Well, sure.\"

\n

There's an audible gasp, coming not just from the local stacks but from several other stacks nearby.

\n

\"Well,\" says the one who took the other copy of The Poem's Heartbeat, \"of course you mean that you're taking into account his advice about which books to read, but really, you're perfectly capable of deciding for yourself which books to read, and would never allow yourself to be swayed by arguments without adequate support.  Why, I bet you can think of several book recommendations that you've rejected, thus showing your independence.  Certainly, you would never go so far as to lose yourself in following someone else's book recommendations—\"

\n

\"What?\" says the teenager.

\n

If there's an aspect of the whole thing that annoys me, it's that it's hard to get that innocence back, once you even start thinking about whether you're independent of someone.  I recently downvoted one of PG's comments on HN (for the first time—a respondent had pointed out that the comment was wrong, and it was).  And I couldn't help thinking, \"Gosh, I'm downvoting one of PG's comments\"—no matter how silly that is in context—because the cached thought had been planted in my mind from reading other people arguing over whether or not HN was a \"cult\" and defending their own freedom to disagree with PG.

\n

You know, there might be some other things that I admire highly besides Gödel, Escher, Bach, and I might or might not disagree with some things Douglas Hofstadter once said, but I'm not even going to list them, because GEB doesn't need that kind of moderation.  It is okay for GEB to be awesome.  In this world there are people who have created awesome things and it is okay to admire them highly!  Let this Earth have at least a little of its pride!

\n

I've been flipping through ideas that might explain the anti-admiration phenomenon.  One of my first thoughts was that I evaluate my own potential so highly (rightly or wrongly is not relevant here) that praising Gödel, Escher, Bach to the stars doesn't feel like making myself inferior to Douglas Hofstadter.  But upon reflection, I strongly suspect that I would feel no barrier to praising GEB even if I weren't doing anything much interesting with my life.  There's some fear I don't feel, or some norm I haven't acquired.

\n

So rather than guess any further, I'm going to turn this over to my readers.  I'm hoping in particular that someone used to feel this way—shutting down an impulse to praise someone else highly, or feeling that it was cultish to praise someone else highly—and then had some kind of epiphany after which it felt, not allowed, but rather, quite normal.

\n

 

\n

Part of the sequence The Craft and the Community

\n

Next post: \"On Things that are Awesome\"

\n

Previous post: \"Tolerate Tolerance\"

" } }, { "_id": "tSgcorrgBnrCH8nL3", "title": "Don't Revere The Bearer Of Good Info", "pageUrl": "https://www.lesswrong.com/posts/tSgcorrgBnrCH8nL3/don-t-revere-the-bearer-of-good-info", "postedAt": "2009-03-21T23:22:50.348Z", "baseScore": 126, "voteCount": 108, "commentCount": 72, "url": null, "contents": { "documentId": "tSgcorrgBnrCH8nL3", "html": "
\n

Follow-up to: Every Cause Wants To Be A Cult, Cultish Countercultishness

\n

One of the classic demonstrations of the Fundamental Attribution Error is the 'quiz study' of Ross, Amabile, and Steinmetz (1977). In the study, subjects were randomly assigned to either ask or answer questions in quiz show style, and were observed by other subjects who were asked to rate them for competence/knowledge. Even knowing that the assignments were random did not prevent the raters from rating the questioners higher than the answerers. Of course, when we rate individuals highly the affect heuristic comes into play, and if we're not careful that can lead to a super-happy death spiral of reverence. Students can revere teachers or science popularizers (even devotion to Richard Dawkins can get a bit extreme at his busy web forum) simply because the former only interact with the latter in domains where the students know less. This is certainly a problem with blogging, where the blogger chooses to post in domains of expertise.

\n

Specifically, Eliezer's writing at Overcoming Bias has provided nice introductions to many standard concepts and arguments from philosophy, economics, and psychology: the philosophical compatibilist account of free will, utility functions, standard biases, and much more. These are great concepts, and many commenters report that they have been greatly influenced by their introductions to them at Overcoming Bias, but the psychological default will be to overrate the messenger. This danger is particularly great in light of his writing style, and when the fact that a point is already extant in the literature, and is either being relayed or reinvented, isn't noted. To address a few cases of the latter: Gary Drescher covered much of the content of Eliezer's Overcoming Bias posts (mostly very well), from timeless physics to Newcomb's problems to quantum mechanics, in a book back in May 2006, while Eliezer's irrealist meta-ethics would be very familiar to modern philosophers like Don Loeb or Josh Greene, and isn't so far from the 18th century philosopher David Hume.

\n

If you're feeling a tendency to cultish hero-worship, reading such independent prior analyses is a noncultish way to diffuse it, and the history of science suggests that this procedure will be applicable to almost anyone you're tempted to revere. Wallace invented the idea of evolution through natural selection independently of Darwin, and Leibniz and Newton independently developed calculus. With respect to our other host, Hans Moravec came up with the probabilistic Simulation Argument long before Nick Bostrom became known for reinventing it (possibly with forgotten influence from reading the book, or its influence on interlocutors). When we post here we can make an effort to find and explicitly acknowledge such influences or independent discoveries, to recognize the contributions of Rational We, as well as Me.

\n

\n

Even if you resist revering the messenger, a well-written piece that purports to summarize a field can leave you ignorant of your ignorance. If you only read the National Review or The Nation you will pick up a lot of political knowledge, including knowledge about the other party/ideology, at least enough to score well on political science surveys. However, that very knowledge means that missing pieces favoring the other side can be more easily ignored: someone might not believe that the other side is made up of Evil Mutants with no reasons at all, and might be tempted to investigate, but ideological media can provide reasons that are plausible yet not so plausible as to be tempting to their audience. For a truth-seeker, beware of explanations of the speaker's opponents.

\n

This sort of intentional slanting and misplaced trust is less common in more academic sources, but it does occur. For instance, top philosophers of science have been caught failing to beware of Stephen J. Gould, copying his citations and misrepresentations of work by Arthur Jensen without having read either the work in question or the more scrupulous treatments in the writings of Jensen's leading scientific opponents, the excellent James Flynn and Richard Nisbett. More often, space constraints mean that a work will spend more words and detail on the view being advanced (Near) than on those rejected (Far), and limited knowledge of the rejected views will lead to omissions. Without reading the major alternative views to those of the one who introduced you to a field in their own words or, even better, neutral textbooks, you will underrate opposing views.

\n

What do LW contributors recommend as the best articulations of alternative views to OB/LW majorities or received wisdom, or neutral sources to put them in context? I'll offer David Chalmers' The Conscious Mind for reductionism, this article on theistic modal realism for the theistic (not Biblical) Problem of Evil, and David Cutler's Your Money or Your Life for the average (not marginal) value of medical spending. Across the board, the Stanford Encyclopedia of Philosophy is a great neutral resource for philosophical posts.

\n

Offline Reference:

\n

Ross, L. D., Amabile, T. M. & Steinmetz, J. L. (1977). Social roles, social control, and biases in social-perceptual processes. Journal of Personality and Social Psychology, 35, 485-494.

\n
" } }, { "_id": "azoP7WeKYYfgCozoh", "title": "The Power of Positivist Thinking", "pageUrl": "https://www.lesswrong.com/posts/azoP7WeKYYfgCozoh/the-power-of-positivist-thinking", "postedAt": "2009-03-21T20:55:37.146Z", "baseScore": 95, "voteCount": 94, "commentCount": 57, "url": null, "contents": { "documentId": "azoP7WeKYYfgCozoh", "html": "

Related to: No Logical Positivist I, Making Beliefs Pay Rent, How An Algorithm Feels From Inside, Disguised Queries

\n

Call me non-conformist, call me one man against the world, but...I kinda like logical positivism.

The logical positivists were a dour, no-nonsense group of early 20th-century European philosophers. Indeed, the phrase \"no-nonsense\" seems almost invented to describe the Positivists. They liked nothing better than to reject the pet topics of other philosophers as being untestable and therefore meaningless. Is the true also the beautiful? Meaningless! Is there a destiny to the affairs of humankind? Meaningless! What is justice? Meaningless! Are rights inalienable? Meaningless!

Positivism became stricter and stricter, defining more and more things as meaningless, until someone finally pointed out that positivism itself was meaningless by the positivists' definitions, at which point the entire system vanished in a puff of logic. Okay, it wasn't that simple. It took several decades and Popper's falsifiabilism to seal its coffin. But vanish it did. It remains one of the least lamented theories in the history of philosophy, because if there is one thing philosophers hate it's people telling them they can't argue about meaningless stuff.

But if we've learned anything from fantasy books, it is that any cabal of ancient wise men destroyed by their own hubris at the height of their glory must leave behind a single ridiculously powerful artifact, which in the right hands gains the power to dispel darkness and annihilate the forces of evil.

The positivists left us the idea of verifiability, and it's time we started using it more.

\n



Eliezer, in No Logical Positivist I, condemns the positivist notion of verifiability for excluding some perfectly meaningful propositions. For example, he says, it may be that a chocolate cake formed in the center of the sun on 8/1/2008, then disappeared after one second. This statement seems to be meaningful; that is, there seems to be a difference between it being true or false. But there's no way to test it (at least without time machines and sundiver ships, which we can't prove are possible) so the logical positivists would dismiss it as nonsense.

I am not an expert in logical positivism; I have two weeks studying positivism in an undergrad philosophy class under my belt, and little more. If Eliezer says that is how the positivists interpreted their verifiability criterion, I believe him. But it's not the way I would have done things, if I'd been in 1930s Vienna. I would have said that any statement corresponding to a state of the material universe, reducible in theory to things like quarks and photons, testable by a being who has access to the machine running the universe[1] and who can check the logs at will - such a statement is meaningful[2]. In this case the chocolate cake example passes: it corresponds to a state of the material world, and is clearly visible on the universe's logs. \"Rights are inalienable\" remains meaningless, however. At the risk of reinventing the wheel[3], I will call this interpretation \"soft positivism\".

My positivism gets even softer, though. Consider the statement \"Google is a successful company.\" Though my knowledge of positivism is shaky, I believe that most positivists would reject this as meaningless; \"success\" is too fuzzy to be reduced to anything objective. But if positivism is true, it should add up to normality: we shouldn't find that an obviously useful statement like \"Google is a successful company\" is total nonsense. I interpret the statement to mean certain objectively true propositions like \"The average yearly growth rate for Google has been greater than the average yearly growth rate for the average company\", which itself reduces down to a question of how much money Google made each year, which is something that can be easily and objectively determined by anyone with the universe's logs.

I'm not claiming that \"Google is a successful company\" has an absolute one-to-one identity with a statement about average growth rates. But the \"successful company\" statement is clearly allied with many testable statements. Average growth rate, average profits per year, change in the net worth of its founders, numbers of employees, et cetera. Two people arguing about whether Google was a successful company could in theory agree to create a formula that captures as much as possible of their own meaning of the word \"successful\", apply that formula to Google, and see whether it passed. To say \"Google is a successful company\" reduces to \"I'll bet if we established a test for success, which we are not going to do, Google would pass it.\"

(Compare this to Eliezer's meta-ethics, where he says \"X is good\" reduces to \"I'll bet if we calculated out this gigantic human morality computation, which we are not going to do, X would satisfy it.\")

This can be a very powerful method for resolving debates. I remember getting into an argument with my uncle, who believed that Obama's election would hurt America because having a Democratic president is bad for the economy. We were doing the normal back and forth, him saying that Democrats raised taxes which discouraged growth, me saying that Democrats tended to be more economically responsible and less ideologically driven, and we both gave lots of examples and we never would have gotten anywhere if I hadn't said \"You know what? Can we both agree that this whole thing is basically asking whether average GDP is lower under Democratic than Republican presidents?\" And he said \"Yes, that's pretty much what we're arguing about.\" So I went and got the GDP statistics, sure enough they were higher under Democrats, and he admitted I had a point[4].
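To make that kind of reduction concrete, here is a minimal sketch in Python; the party labels and growth figures are made-up placeholders, not the actual statistics from that argument:

# Reducing the claim that Democratic presidents hurt the economy to a single
# computable test: average growth under each party.  The growth figures below
# are hypothetical placeholders, not real GDP data.
hypothetical_years = [
    ('D', 3.1), ('D', 2.4), ('R', 1.8), ('R', 2.0),
    ('D', 2.9), ('R', 1.5), ('R', 2.2), ('D', 3.3),
]

def average_growth(party):
    rates = [growth for p, growth in hypothetical_years if p == party]
    return sum(rates) / len(rates)

print('Average growth under Democratic presidents:', average_growth('D'))
print('Average growth under Republican presidents:', average_growth('R'))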

But people aren't always as responsible as my uncle, and debates aren't always reducible to anything as simple as GDP. Consider: Zahra approaches Aaron and says: \"Islam is a religion of peace.\"[5]

Perhaps Aaron disagrees with this statement. Perhaps he begins debating. There are many things he could say. He could recall all the instances of Islamic terrorism, he could recite seemingly violent verses from the Quran, he could appeal to wars throughout history that have involved Muslims. I've heard people try all of these.

And Zahra will respond to Aaron in the same vein. She will recite Quranic verses praising peace, and talk about all the peaceful Muslims who never engage in terrorism at all, and all of the wars started by Christians in which Muslims were innocent victims. I have heard all these too.

Then Paula the Positivist comes by. \"Hey,\" she says, \"We should reduce this statement to testable propositions, and then there will be no room for disagreement.\"

But maybe, if asked to estimate the percentage of Muslims who are active in terrorist groups, Aaron and Zahra will give the exact same number. Perhaps they are both equally aware of all the wars in history in which Muslims were either aggressors or peacemakers. They may both have the entire Quran memorized and be fully aware of all appropriate verses. But even after Paula has checked to make sure they agree on every actual real world fact, there is no guarantee that they will agree on whether Islam is a religion of peace or not.

What if we ask Aaron and Zahra to reduce \"Islam is a religion of peace\" to an empirical proposition? In the best case, they will agree on something easy, like \"Muslims on average don't commit any more violent crimes than non-Muslims.\" Then you just go find some crime statistics and the problem is solved. In the second-best case, the two of them reduce it to completely different statements, like \"No Muslim has ever committed a violent act\" versus \"Not all Muslims are violent people.\" This is still a resolution to the argument; both Aaron and Zahra may agree that the first proposition is false and the second proposition is true, and they both agree the original statement was too vague to go around professing.

In the worst-case scenario, they refuse to reduce the statement at all, or they deliberately reduce it to something untestable, or they reduce it to two different propositions but are outraged that their opponent is using a different proposition than they are and think their opponent's proposition is clearly not equivalent to the original statement.

How are they continuing to disagree, when they agree on all of the relevant empirical facts and they fully understand the concept of reducing a proposition?

In How an Algorithm Feels From the Inside, Eliezer writes about disagreement on definitions. \"We know where Pluto is, and where it's going; we know Pluto's shape, and Pluto's mass - but is it a planet?\" The question, he says, is meaningless. It's a spandrel from our cognitive algorithm, which works more efficiently if it assigns a separate central variable is_a_planet apart from all the actual tests that determine whether something is a planet or not.

Aaron and Zahra seem to be making the same sort of mistake. They have a separate variable is_a_religion_of_peace that's sitting there completely separate from all of the things you might normally use to decide whether one group of people is generally more violent than another.

But things get much worse than they do in the Pluto problem. Whether or not Pluto is a planet feels like a factual issue, but turns out to be underdetermined by the facts. Whether or not Islam is a religion of peace feels like a factual issue, but is really a false front for a whole horde of beliefs that have no relationship to the facts at all.

When Zahra says \"Islam is a religion of peace,\" she is very likely saying something along the lines of \"I like Islam!\" or \"I like tolerance!\" or \"I identify with an in-group who say things like 'Islam is a religion of peace'\" or \"People who hate Islam are mean!\" or even \"I don't like Republicans.\". She may be covertly pushing policy decisions like \"End the war on terror\" or \"Raise awareness of unfair discrimination against Muslims.\"

When Aaron says \"Islam is not a religion of peace,\" he is probably saying something like \"I don't like Islam,\" or \"I think excessive tolerance is harmful\", or \"I identify with an in-group who would never say things like 'Islam is a religion of peace'\" or even \"I don't like Democrats.\" He may be covertly pushing policy decisions like \"Continue the war on terror\" or \"Expel radical Muslims from society.\"

Eliezer's solution to the Pluto problem is to uncover the disguised query that made you care in the first place. If you want to know whether Pluto is spherical under its own gravity, then without worrying about the planet issue you can simply answer yes. And you're wondering whether to worry about your co-worker Abdullah bombing your office, you can simply answer no. Islam is peaceful enough for your purposes.

But although uncovering the disguised query is a complete answer to the Pluto problem, it's only a partial answer to the religion of peace problem. It's unlikely that someone is going to misuse the definition of Pluto as a planet or an asteroid to completely misunderstand what Pluto is or what it's likely to do (although it can happen). But the entire point of caring about the \"Islam is a religion of peace\" issue is so you can misuse it as much as possible.

Israel is evil, because it opposes Muslims, and Islam is a religion of peace. The Democrats are tolerating Islam, and Islam is not a religion of peace, so the Democrats must have sold out the country. The War on Terror is racist, because Islam is a religion of peace. We need to ban headscarves in our schools, because Islam is not a religion of peace.

I'm not sure how the chain of causation goes here. It could be (emotional attitude to Islam) -> (Islam [is/isn't] a religion of peace) -> (poorly supported beliefs about Islam). Or it could just be (emotional attitude to Islam) -> (poorly supported beliefs about Islam). But even in the second case, that \"Islam [is/isn't] a religion of peace\" gives the poorly supported beliefs a dignity that they would not otherwise have, and allows the person who holds them to justify themselves in an argument. Basically, that one phrase holes itself up in your brain and takes pot shots at any train of thought that passes by.

The presence of that extra is_a_religion_of_peace variable is not a benign feature of your cognitive process anymore. It's a malevolent mental smuggler transporting prejudices and strong emotions into seemingly reasonable thought processes.

Which brings us back to soft positivism. If we find ourselves debating statements that we refuse to reduce to empirical data[6], or using statements in ways their reductions don't justify, we need to be extremely careful. I am not positivist enough to say we should never be doing it. But I think it raises one heck of a red flag.

Agree with me? If so, which of the following statements do you think are reducible, and how would you begin reducing them? Which are completely meaningless and need to be scrapped? Which ones raise a red flag but you'd keep them anyway?

1. All men are created equal.
2. The lottery is a waste of hope.
3. Religious people are intolerant.
4. Government is not the solution; government is the problem.
5. George Washington was a better president than James Buchanan.
6. The economy is doing worse today than it was ten years ago.
7. God exists.
8. One impulse from a vernal wood can teach you more of man, of moral evil, and of good than all the sages can.
9. Imagination is more important than knowledge.
10. Rationalists should win.

\n

 

\n

Footnotes:

\n

1: More properly the machine running the multiverse, since this would allow counterfactuals to be meaningful. It would also simplify making a statement like \"The patient survived because of the medicine\", since it would allow quick comparison of worlds where the patient did and didn't receive it. But if the machine is running the multiverse, where's the machine?

\n

2: One thing I learned from the comments on Eliezer's post is that this criterion is often very hard to apply in theory. However, it's usually not nearly as hard in practice.

\n

3: This sounds like the sort of thing there should already be a name for, but I don't know what it is. Verificationism is too broad, and empiricism is something else. I should point out that I am probably misrepresenting the positivist position here quite badly, and that several dead Austrians are either spinning in their graves or (more likely) thinking that this whole essay is meaningless. I am using \"positivist\" only as a pointer to a certain style of thinking.

\n

4: Before this issue dominates the comments thread: yes, I realize that the president having any impact on the economy is highly debatable, that there's not nearly enough data here to make a generalization, et cetera. But my uncle's statement - that Democratic presidents hurt the economy, is clearly not supported.

\n

5: If your interpretation of anything in the following example offends you, please don't interpret it that way.

\n

6: Where morality fits into this deserves a separate post.

" } }, { "_id": "WzMJRQBN3ryxiAbhi", "title": "Individual Rationality Is a Matter of Life and Death", "pageUrl": "https://www.lesswrong.com/posts/WzMJRQBN3ryxiAbhi/individual-rationality-is-a-matter-of-life-and-death", "postedAt": "2009-03-21T19:22:02.606Z", "baseScore": 26, "voteCount": 41, "commentCount": 44, "url": null, "contents": { "documentId": "WzMJRQBN3ryxiAbhi", "html": "

On at least two occasions - one only a year past - my life was at serious risk because I was not thinking clearly.  Both times, I was lucky (and once, the car even survived!).  As a gambler I don't like counting on luck, and I'd much rather be rational enough to avoid serious mistakes.  So when I checked the top-ranked posts here and saw Robin's Rational Me or We? arguing against rationality as a martial art I was dumbfounded.  To me, individual rationality is a matter of life and death[1].

\n

In poker, much attention is given to the sexy art of reading your opponent, but the true veteran knows that far more important is the art of reading and controlling yourself.  It is very rare that a situation comes up where a \"tell\" matters, and each of my opponents is only in an occasional hand.  I and my irrationalities, however, are in every decision in every hand.  This is why self-knowledge and self-discipline are first-order concerns in poker, while opponent reading is second or perhaps even third.

\n

And this is why Robin's post is so wrong[2].  Our minds and their irrationalities are part of every second of our lives, every moment we experience, and every decision that we make.  And contra to Robin's security metaphor, few of our decisions can be outsourced.  My two bad decisions regarding motor vehicles, for example, could not have easily been outsourced to a group rationality mechanism[3].  Only a tiny percentage of the choices I make every day can be punted to experts.

\n

We have long since left the Hobbesian world where physical security depends on individual skills, but when it comes to rationality, we are all \"isolated survivalist Einsteins\".  We are in a world where our individual mental skills are constantly put to the test.  And even when we can rely on experts, it is our individual choices (influenced by the quality of our minds) that determine our success in life.  (How long would a professor's reputation last if he never did any original work?)

\n

So while I respect and admire Robin's interest in improving institutions, I believe that his characterization of the relative merits of individual and collective mechanisms is horridly wrong.  To have more and better rational collective institutions is a speculative, long-term goal with limited scope (albeit in some very important areas).  Learning the martial art of rationality is something that all of us can do now to improve the quality of our decisions and thus positively influence every part of our lives.  By making us more effective as individuals (hell, just keeping us from stupidly getting ourselves killed), it will help us work on all of our goals - like getting society to accept ambitious new social institutions.

\n

In the modern world, karate is unlikely to save your life.  But rationality can.  For example, if one believes that cryonics is a good gamble at immortality, and people don't do it because of irrationality, then improved individual rationality can give people a shot at immortality instead of certain death.  And that's only one of the myriad decisions we each face in optimizing our life!

\n

Which is why, while I spend my days working on better institutions, I also practice my rationality katas, so that I will survive to reach the new future our institutions will bring.

\n

[1] I have a post about the more recent incident that's been written in my mind for months, and just hasn't fallen out onto the screen yet.

\n

[2] Or at least, this is related - I freely admit to liking poker metaphors enough that I'm willing to stretch to make them!

\n

[3] Yes, I'm sure a clever person can come up with markets to keep young men from doing stupid things with cars.  That's not the point.  Markets have significant overhead, and it takes high public interest for it to be worth opening, funding, trading in, and running a market.  They may have great value for large decisions, but they are never going to replace the majority of decisions in our day to day lives.

" } }, { "_id": "KMoXBt4QX7XvEoBgS", "title": "Mind Control and Me", "pageUrl": "https://www.lesswrong.com/posts/KMoXBt4QX7XvEoBgS/mind-control-and-me", "postedAt": "2009-03-21T17:31:33.340Z", "baseScore": 12, "voteCount": 20, "commentCount": 34, "url": null, "contents": { "documentId": "KMoXBt4QX7XvEoBgS", "html": "

Reading Eliezer Yudkowsky's works has always inspired an insidious feeling in me, sort of a cross between righteousness, contempt, the fun you get from understanding something new, and gravitas. It's a feeling that I have found to be pleasurable, or at least addictive enough to go through all of his OB posts,  and the feeling makes me less skeptical and more obedient than I normally would be. For instance, in an act of uncharacteristic generosity, I decided to make a charitable donation on Eliezer's advice.

\n

Now this is probably a good idea, because the charity is probably going to help guys like me later on in life and of course it's the Right Thing to Do. But the bottom line is that I did something I normally wouldn't have because Eliezer told me to. My sociopathic selfishness was acting as canary in the mine of my psyche.

\n

Now this could be because Eliezer has creepy mind control powers, but I get similar feelings when reading other people, such as George Orwell, Richard Stallman or Paul Graham. I even have a friend who can inspire that insidious feeling in me. So it's a personal problem, one that I'm not sure I want to remove, but I would like to understand it better.

\n

There are probably buttons being pushed by the style and the sort of ideas in the work that help to create the feeling, and I'll probably try to go over an essay or two and dissect it. However, I'd like to know who and at what times, if anyone at all, I should let create such feelings in me. Can I trust anyone that much, even if they aren't aware that they're doing it?

\n

I don't know if anyone else here has similar brain overrides, or if I'm just crazy, but it's possible that such brain overrides could be understood much more thoroughly and induced in more people.  So what are the ethics of mind control (for want of a better term) and how much effort should we put into stopping such feelings from occurring?

\n

 

\n

 

\n

Edit Mar 22: Decided to remove the cryonics example due to factual inaccuracies.

" } }, { "_id": "JKxxFseBWz8SHkTgt", "title": "Tolerate Tolerance", "pageUrl": "https://www.lesswrong.com/posts/JKxxFseBWz8SHkTgt/tolerate-tolerance", "postedAt": "2009-03-21T07:34:12.259Z", "baseScore": 150, "voteCount": 108, "commentCount": 87, "url": null, "contents": { "documentId": "JKxxFseBWz8SHkTgt", "html": "

One of the likely characteristics of someone who sets out to be a \"rationalist\" is a lower-than-usual tolerance for flaws in reasoning.  This doesn't strictly follow.  You could end up, say, rejecting your religion, just because you spotted more or deeper flaws in the reasoning, not because you were, by your nature, more annoyed at a flaw of fixed size.  But realistically speaking, a lot of us probably have our level of \"annoyance at all these flaws we're spotting\" set a bit higher than average.

\n

That's why it's so important for us to tolerate others' tolerance if we want to get anything done together.

\n

For me, the poster case of tolerance I need to tolerate is Ben Goertzel, who among other things runs an annual AI conference, and who has something nice to say about everyone.  Ben even complimented the ideas of M*nt*f*x, the most legendary of all AI crackpots.  (M*nt*f*x apparently started adding a link to Ben's compliment in his email signatures, presumably because it was the only compliment he'd ever gotten from a bona fide AI academic.)  (Please do not pronounce his True Name correctly or he will be summoned here.)

\n

But I've come to understand that this is one of Ben's strengths—that he's nice to lots of people that others might ignore, including, say, me—and every now and then this pays off for him.

\n

And if I subtract points off Ben's reputation for finding something nice to say about people and projects that I think are hopeless—even M*nt*f*x—then what I'm doing is insisting that Ben dislike everyone I dislike before I can work with him.

\n

Is that a realistic standard?  Especially if different people are annoyed in different amounts by different things?

\n

But it's hard to remember that when Ben is being nice to so many idiots.

\n

Cooperation is unstable, in both game theory and evolutionary biology, without some kind of punishment for defection.  So it's one thing to subtract points off someone's reputation for mistakes they make themselves, directly.  But if you also look askance at someone for refusing to castigate a person or idea, then that is punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.

\n

The danger of punishing nonpunishers is something I remind myself of, say, every time Robin Hanson points out a flaw in some academic trope and yet modestly confesses he could be wrong (and he's not wrong).  Or every time I see Michael Vassar still considering the potential of someone who I wrote off as hopeless within 30 seconds of being introduced to them.  I have to remind myself, \"Tolerate tolerance!  Don't demand that your allies be equally extreme in their negative judgments of everything you dislike!\"

\n

By my nature, I do get annoyed when someone else seems to be giving too much credit.  I don't know if everyone's like that, but I suspect that at least some of my fellow aspiring rationalists are.  I wouldn't be surprised to find it a human universal; it does have an obvious evolutionary rationale—one which would make it a very unpleasant and dangerous adaptation.

\n

I am not generally a fan of \"tolerance\".  I certainly don't believe in being \"intolerant of intolerance\", as some inconsistently hold.  But I shall go on trying to tolerate people who are more tolerant than I am, and judge them only for their own un-borrowed mistakes.

\n

Oh, and it goes without saying that if the people of Group X are staring at you demandingly, waiting for you to hate the right enemies with the right intensity, and ready to castigate you if you fail to castigate loudly enough, you may be hanging around the wrong group.

\n

Just don't demand that everyone you work with be equally intolerant of behavior like that.  Forgive your friends if some of them suggest that maybe Group X wasn't so awful after all...

" } }, { "_id": "vHCetv8tx6LbRtfyc", "title": "Support That Sounds Like Dissent", "pageUrl": "https://www.lesswrong.com/posts/vHCetv8tx6LbRtfyc/support-that-sounds-like-dissent", "postedAt": "2009-03-20T22:28:03.357Z", "baseScore": 89, "voteCount": 81, "commentCount": 35, "url": null, "contents": { "documentId": "vHCetv8tx6LbRtfyc", "html": "

Related to: Why Our Kind Can't Cooperate

\n

Eliezer described a scene that's familiar to all of us:

\n
\n

Imagine that you're at a conference, and the speaker gives a 30-minute talk.  Afterward, people line up at the microphones for questions.  The first questioner objects to the graph used in slide 14 using a logarithmic scale; he quotes Tufte on The Visual Display of Quantitative Information.  The second questioner disputes a claim made in slide 3.  The third questioner suggests an alternative hypothesis that seems to explain the same data...

\n
\n

An outsider might conclude that this presentation went poorly, because all of the people who spoke afterwards seemed to disagree. Someone who had been to a few conferences would understand that this is normal; only the people who disagree speak up, while the rest stay silent, because taking the mic to say \"me too!\" isn't a productive use of everyone's time. If you polled the audience, you might expect to find a few vocal dissenters against a silent majority. This is not what you would find.

\n

Consider this situation more closely. A series of people step up, and say things which sound like disagreement. But do they really disagree? The first questioner is only quibbling with a bit of the presentation; he hasn't actually disagreed with the main point. The second questioner has challenged one of the claims from the presentation, but ignored the rest. The third questioner has proposed an alternative hypothesis which might be true, but that doesn't mean the alternative hypothesis is true, or even that the questioner thinks it's likely. If you stopped and asked these questioners whether they agreed with the main thrust of the presentation, they would probably say that they do. Why, then, does it sound like everyone disagrees?

\n

\n

In our community, we hold writing and arguments to a high standard, so when we see something that's imperfect, we speak up. Posting merely to indicate agreement (\"me too\") is strongly discouraged. In practice, this often translates into nit-picking: pointing out minor issues in reasoning or presentation that are easy to fix. We have lots of practice nit-picking, because we do it all the time with our own writings. We remove or rework weak arguments, expand on points that need clarification and tweak explanations with every draft. Revising a draft, however, does not mean questioning its premise; usually, by the time the first draft is finished, your mind is set, so the fact that you agree with your own paper is a given. When reviewing someone else's work, we transfer this skill, and something strange happens. If we agree, we want to help the author make it stronger, so we treat it as though we were revising our own draft, point out the sections which are weak, and explain why. If we disagree, we want to change the author's mind, so we point out the sections which caused us to disagree, and explain why. These two cases are hard to distinguish, and we usually forget to say which we're doing.

\n

Discussions on the internet are usually dominated by dissent. Conventional wisdom states that this is because critics speak louder, but I think this is amplified by posts that are meant to be supportive but sound too much like dissent. In order to combat this, I propose the following social norm:

\n
\n

When criticizing something in a post other than the main point, authors should explicitly state whether they agree, disagree, or are unsure of the post as a whole.

\n
\n

Imagine the same conference as earlier, except that each questioner starts by saying whether or not he agreed with the presentation. \"I agree with your conclusion. That said, the graph on slide 14 shouldn't use a logarithmic scale.\" \"I agree with most of what you said, but there's a claim on slide 3 I disagree with.\" \"I'm not sure whether I agree with your conclusion, because there's an alternative hypothesis that could explain the data.\" The content of these responses is the same, but the overall impression generated is very different; before, it seemed like everyone disagreed, but now it sounds like there's a consensus and they're resolving details. Since the impression people got of speakers' positions disagreed with what they would have said their positions were, that impression was false. That which can be destroyed by the truth, should be; therefore, if enforcing this rule really does change the tone of discussions, then rationalists should enforce it by asking people to clarify their positions.

" } }, { "_id": "BDPBL9HDr9NT9ZXqv", "title": "Just a reminder: Scientists are, technically, people.", "pageUrl": "https://www.lesswrong.com/posts/BDPBL9HDr9NT9ZXqv/just-a-reminder-scientists-are-technically-people", "postedAt": "2009-03-20T20:33:52.486Z", "baseScore": 8, "voteCount": 14, "commentCount": 35, "url": null, "contents": { "documentId": "BDPBL9HDr9NT9ZXqv", "html": "

From Michael Eisen's blog:

\n

Yuval Levin, former Executive Director of the President's Council on Bioethics, has an op-ed in Tuesday's Washington Post arguing that Obama's new stem cell policy is dangerous. Levin does not argue that stem cell research is bad. Rather he is upset that Obama did not dictate which uses of stem cells are appropriate, but rather asked the National Institutes of Health to draft a policy on which uses of stem cells are appropriate:

\n
\n

It [Obama's policy] argues not for an ethical judgment regarding the moral worth of human embryos but, rather, that no ethical judgment is called for: that it is all a matter of science.

\n

This is a dangerous misunderstanding. Science policy questions do often require a grasp of complex details, which scientists can help to clarify. But at their core they are questions of priorities and worldviews, just like other difficult policy judgments.

\n
\n

Lost in this superficially unobjectionable - if banal - assertion of the complexity of ethical issues involving science is Levin's (and many other bioethicists') credo: that the moral complexity of scientific issues means that scientists should not make decisions about them.

" } }, { "_id": "7FzD7pNm9X68Gp5ZC", "title": "Why Our Kind Can't Cooperate", "pageUrl": "https://www.lesswrong.com/posts/7FzD7pNm9X68Gp5ZC/why-our-kind-can-t-cooperate", "postedAt": "2009-03-20T08:37:22.001Z", "baseScore": 301, "voteCount": 247, "commentCount": 211, "url": null, "contents": { "documentId": "7FzD7pNm9X68Gp5ZC", "html": "

From when I was still forced to attend, I remember our synagogue's annual fundraising appeal.  It was a simple enough format, if I recall correctly.  The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraise was, and then the synagogue's members called out their pledges from their seats.

\n

Straightforward, yes?

\n

Let me tell you about a different annual fundraising appeal.  One that I ran, in fact; during the early years of a nonprofit organization that may not be named.  One difference was that the appeal was conducted over the Internet.  And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd.  (To point in the rough direction of an empirical cluster in personspace.  If you understood the phrase \"empirical cluster in personspace\" then you know who I'm talking about.)

\n

I crafted the fundraising appeal with care.  By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years.  The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal.  I sent it out to several mailing lists that covered most of our potential support base.

\n

And almost immediately, people started posting to the mailing lists about why they weren't going to donate.  Some of them raised basic questions about the nonprofit's philosophy and mission.  Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them.  (They didn't volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)

\n

Now you might say, \"Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?\"

\n

Hold on to that thought.

\n

Because people were donating.  We started getting donations right away, via Paypal.  We even got congratulatory notes saying how the appeal had finally gotten them to start moving.  A donation of $111.11 was accompanied by a message saying, \"I decided to give **** a little bit more.  One more hundred, one more ten, one more single, one more dime, and one more penny.  All may not be for one, but this one is trying to be for all.\"

\n

But none of those donors posted their agreement to the mailing list.  Not one.

\n

\n

So far as any of those donors knew, they were alone.  And when they tuned in the next day, they discovered not thanks, but arguments for why they shouldn't have donated.  The criticisms, the justifications for not donating—only those were displayed proudly in the open.

\n

As though the treasurer had finished his annual appeal, and everyone not making a pledge had proudly stood up to call out justifications for refusing; while those making pledges whispered them quietly, so that no one could hear.

\n

I know someone with a rationalist cause who goes around plaintively asking, \"How come the Raelian flying-saucer cult can get tens of thousands of members [probably around 40,000] interested in complete nonsense, but we can't even get a thousand people working on this?\"

\n

The obvious wrong way to finish this thought is to say, \"Let's do what the Raelians do!  Let's add some nonsense to this meme!\"  For the benefit of those not immediately stopped by their ethical inhibitions, I will observe that there may be a hundred failed flying-saucer cults for every one that becomes famous.  And the Dark Side may require non-obvious skills, which you, yes you, do not have:  Not everyone can be a Sith Lord.  In particular, if you talk about your planned lies on the public Internet, you fail.  I'm no master criminal, but even I can tell certain people are not cut out to be crooks.

\n

So it's probably not a good idea to cultivate a sense of violated entitlement at the thought that some other group, who you think ought to be inferior to you, has more money and followers.  That path leads to—pardon the expression—the Dark Side.

\n

But it probably does make sense to start asking ourselves some pointed questions, if supposed \"rationalists\" can't manage to coordinate as well as a flying-saucer cult.

\n

How do things work on the Dark Side?

\n

The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves.  So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented—even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.

\n

(\"Pluralistic ignorance\" is the standard label for this.)

\n

If anyone is still unpersuaded after that, they leave the group (or in some places, are executed)—and the remainder are more in agreement, and reinforce each other with less interference.

\n

(I call that \"evaporative cooling of groups\".)

\n

The ideas themselves, not just the leader, generate unbounded enthusiasm and praise.  The halo effect is that perceptions of all positive qualities correlate—e.g. telling subjects about the benefits of a food preservative made them judge it as lower-risk, even though the quantities were logically uncorrelated.  This can create a positive feedback effect that makes an idea seem better and better and better, especially if criticism is perceived as traitorous or sinful.

\n

(Which I term the \"affective death spiral\".)

\n

So these are all examples of strong Dark Side forces that can bind groups together.

\n

And presumably we would not go so far as to dirty our hands with such...

\n

Therefore, as a group, the Light Side will always be divided and weak.  Atheists, libertarians, technophiles, nerds, science-fiction fans, scientists, or even non-fundamentalist religions, will never be capable of acting with the fanatic unity that animates radical Islam.  Technological advantage can only go so far; your tools can be copied or stolen, and used against you.  In the end the Light Side will always lose in any group conflict, and the future inevitably belongs to the Dark.

\n

I think that one's reaction to this prospect says a lot about their attitude towards \"rationality\".

\n

Some \"Clash of Civilizations\" writers seem to accept that the Enlightenment is destined to lose out in the long run to radical Islam, and sigh, and shake their heads sadly.  I suppose they're trying to signal their cynical sophistication or something.

\n

For myself, I always thought—call me loony—that a true rationalist ought to be effective in the real world.

\n

So I have a problem with the idea that the Dark Side, thanks to their pluralistic ignorance and affective death spirals, will always win because they are better coordinated than us.

\n

You would think, perhaps, that real rationalists ought to be more coordinated?  Surely all that unreason must have its disadvantages?  That mode can't be optimal, can it?

\n

And if current \"rationalist\" groups cannot coordinate—if they can't support group projects so well as a single synagogue draws donations from its members—well, I leave it to you to finish that syllogism.

\n

There's a saying I sometimes use:  \"It is dangerous to be half a rationalist.\"

\n

For example, I can think of ways to sabotage someone's intelligence by selectively teaching them certain methods of rationality.  Suppose you taught someone a long list of logical fallacies and cognitive biases, and trained them to spot those fallacies and biases in other people's arguments.  But you are careful to pick those fallacies and biases that are easiest to accuse others of, the most general ones that can easily be misapplied.  And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws.  So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers who they don't like.  This, I suspect, is one of the primary ways that smart people end up stupid.  (And note, by the way, that I have just given you another Fully General Counterargument against smart people whose arguments you don't like.)

\n

Similarly, if you wanted to ensure that a group of \"rationalists\" never accomplished any task requiring more than one person, you could teach them only techniques of individual rationality, without mentioning anything about techniques of coordinated group rationality.

\n

I'll write more later (tomorrow?) on how I think rationalists might be able to coordinate better.  But today I want to focus on what you might call the culture of disagreement, or even, the culture of objections, which is one of the two major forces preventing the atheist/libertarian/technophile crowd from coordinating.

\n

Imagine that you're at a conference, and the speaker gives a 30-minute talk.  Afterward, people line up at the microphones for questions.  The first questioner objects to the graph used in slide 14 using a logarithmic scale; he quotes Tufte on The Visual Display of Quantitative Information.  The second questioner disputes a claim made in slide 3.  The third questioner suggests an alternative hypothesis that seems to explain the same data...

\n

Perfectly normal, right?  Now imagine that you're at a conference, and the speaker gives a 30-minute talk.  People line up at the microphone.

\n

The first person says, \"I agree with everything you said in your talk, and I think you're brilliant.\"  Then steps aside.

\n

The second person says, \"Slide 14 was beautiful, I learned a lot from it.  You're awesome.\"  Steps aside.

\n

The third person—

\n

Well, you'll never know what the third person at the microphone had to say, because by this time, you've fled screaming out of the room, propelled by a bone-deep terror as if Cthulhu had erupted from the podium, the fear of the impossibly unnatural phenomenon that has invaded your conference.

\n

Yes, a group which can't tolerate disagreement is not rational.  But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational.  You're only willing to hear some honest thoughts, but not others.  You are a dangerous half-a-rationalist.

\n

We are as uncomfortable together as flying-saucer cult members are uncomfortable apart.  That can't be right either.  Reversed stupidity is not intelligence.

\n

Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

\n

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

\n

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

\n

Doing worse with more knowledge means you are doing something very wrong.  You should always be able to at least implement the same strategy you would use if you were ignorant, and preferably do better.  You definitely should not do worse.  If you find yourself regretting your \"rationality\" then you should reconsider what is rational.

\n

On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge.  I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.

\n

We would seem to be stuck in an awful valley of partial rationality where we end up more poorly coordinated than religious fundamentalists, able to put forth less effort than flying-saucer cultists.  True, what little effort we do manage to put forth may be better-targeted at helping people rather than the reverse—but that is not an acceptable excuse.

\n

If I were setting forth to systematically train rationalists, there would be lessons on how to disagree and lessons on how to agree, lessons intended to make the trainee more comfortable with dissent, and lessons intended to make them more comfortable with conformity.  One day everyone shows up dressed differently, another day they all show up in uniform.  You've got to cover both sides, or you're only half a rationalist.

\n

Can you imagine training prospective rationalists to wear a uniform and march in lockstep, and practice sessions where they agree with each other and applaud everything a speaker on a podium says?  It sounds like unspeakable horror, doesn't it, like the whole thing has admitted outright to being an evil cult?  But why is it not okay to practice that, while it is okay to practice disagreeing with everyone else in the crowd?  Are you never going to have to agree with the majority?

\n

Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus.  We signal our superior intelligence and our membership in the nonconformist community by inventing clever objections to others' arguments.  Perhaps that is why the atheist/libertarian/technophile/sf-fan/Silicon-Valley/programmer/early-adopter crowd stays marginalized, losing battles with less nonconformist factions in larger society.  No, we're not losing because we're so superior, we're losing because our exclusively individualist traditions sabotage our ability to cooperate.

\n

The other major component that I think sabotages group efforts in the atheist/libertarian/technophile/etcetera community, is being ashamed of strong feelings.  We still have the Spock archetype of rationality stuck in our heads, rationality as dispassion.  Or perhaps a related mistake, rationality as cynicism—trying to signal your superior world-weary sophistication by showing that you care less than others.  Being careful to ostentatiously, publicly look down on those so naive as to show they care strongly about anything.

\n

Wouldn't it make you feel uncomfortable if the speaker at the podium said that he cared so strongly about, say, fighting aging, that he would willingly die for the cause?

\n

But it is nowhere written in either probability theory or decision theory that a rationalist should not care.  I've looked over those equations and, really, it's not in there.

\n

The best informal definition I've ever heard of rationality is \"That which can be destroyed by the truth should be.\"  We should aspire to feel the emotions that fit the facts, not aspire to feel no emotion.  If an emotion can be destroyed by truth, we should relinquish it.  But if a cause is worth striving for, then let us by all means feel fully its importance.

\n

Some things are worth dying for.  Yes, really!  And if we can't get comfortable with admitting it and hearing others say it, then we're going to have trouble caring enough—as well as coordinating enough—to put some effort into group projects.  You've got to teach both sides of it, \"That which can be destroyed by the truth should be,\" and \"That which the truth nourishes should thrive.\"

\n

I've heard it argued that the taboo against emotional language in, say, science papers, is an important part of letting the facts fight it out without distraction.  That doesn't mean the taboo should apply everywhere.  I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry.  When there's something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.

\n

We need to keep our efforts to expose counterproductive causes and unjustified appeals, from stomping on tasks that genuinely need doing.  You need both sides of it—the willingness to turn away from counterproductive causes, and the willingness to praise productive ones; the strength to be unswayed by ungrounded appeals, and the strength to be swayed by grounded ones.

\n

I think the synagogue at their annual appeal had it right, really.  They weren't going down row by row and putting individuals on the spot, staring at them and saying, \"How much will you donate, Mr. Schwartz?\"  People simply announced their pledges—not with grand drama and pride, just simple announcements—and that encouraged others to do the same.  Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them.  That's probably about the way things should be in a sane human community—taking into account that people often have trouble getting as motivated as they wish they were, and can be helped by social encouragement to overcome this weakness of will.

\n

But even if you disagree with that part, then let us say that both supporting and countersupporting opinions should have been publicly voiced.  Supporters being faced by an apparently solid wall of objections and disagreements—even if it resulted from their own uncomfortable self-censorship—is not group rationality.  It is the mere mirror image of what Dark Side groups do to keep their followers.  Reversed stupidity is not intelligence.

" } }, { "_id": "beqq5Nm3EsJihHvK7", "title": "Precommitting to paying Omega.", "pageUrl": "https://www.lesswrong.com/posts/beqq5Nm3EsJihHvK7/precommitting-to-paying-omega", "postedAt": "2009-03-20T04:33:35.511Z", "baseScore": 5, "voteCount": 18, "commentCount": 33, "url": null, "contents": { "documentId": "beqq5Nm3EsJihHvK7", "html": "

Related to: Counterfactual Mugging, The Least Convenient Possible World

\n

MBlume said:

\n
\n

What would you do in situation X?\" and \"What would you like to pre-commit to doing, should you ever encounter situation X?\" should, to a rational agent, be one and the same question.

\n
\n

Applied to Vladimir Nesov's counterfactual mugging, the reasoning is then:

\n

Precommitting to paying $100 to Omega has expected utility of $4950 · p(Omega appears). Not precommitting has strictly less utility; therefore I should precommit to paying. Therefore I should, in fact, pay $100 in the event (Omega appears, coin is tails).

To combat the argument that it is more likely that one is insane than that Omega has appeared, Eliezer said:

\n
\n

So imagine yourself in the most inconvenient possible world where Omega is a known feature of the environment and has long been seen to follow through on promises of this type; it does not particularly occur to you or anyone that believing this fact makes you insane.

\n
\n

My first reaction was that it is simply not rational to give $100 away when nothing can possibly happen in consequence. I still believe that, with a small modification: I believe, with moderately high probability, that it will not be instrumentally rational for my future self to do so. Read on for the explanation.

\n

Suppose we lived in Eliezer's most inconvenient possible world:

\n\n

Did you see a trap? Direct brain simulation instantiates precisely what Omega says does not exist, a \"you\" whose decision has consequences. So forget that. Suppose Omega privately performs some action for you (for instance, a hypercomputation) that is not simulable. Then direct brain simulation of this circumstance cannot occur. So just assume that you find Omega trustworthy in this world, and assume it does not itself simulate you to make its decisions. Other objections exist: numerous ones, actually. Forget them. If you find that a certain set of circumstances makes it easier for you to decide not to pay the $100, or to pay it, change the circumstances. For myself, I had to imagine knowing that the Tegmark ensemble didn't exist*. If, under the MWI of quantum mechanics, you find reasons (not) to pay, then assume MWI is disproven. If the converse, then assume MWI is true. If you find that both suppositions give you reasons (not) to pay, then assume some missing argument invalidates those reasons.

\n

Under these circumstances, should everyone pay the $100?

\n

No. Well, it depends what you mean by \"should\".

\n

Suppose I live in the Omega world. Then prior to the coin flip, I assign equal value to my future self in the event that it is heads, and my future self in the event that it is tails. My utility function is, very roughly, the expected utility function of my future self, weighted by the probabilities I assign that I will actually become some given future self. Therefore if I can precommit to paying $100, my utility function will possess the term $4950 · p(Omega appears), and if I can only partially precommit, in other words I can arrange that with probability q I will pay $100, then my utility function will possess the term $4950 · q · p(Omega appears). So the dominant strategy is to precommit with probability one. I can in fact do this if Omega guarantees to contact me via email, or a trusted intermediary, and to take instructions thereby received as \"my response\", but I may have a slight difficulty if Omega chooses to appear to me in bed late one night.
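
\n

The dominance claim is easy to check numerically. Below is a minimal sketch (a toy model of mine, not part of the original post): omega_term computes the expected-utility contribution of the deal as a function of the commitment probability q and of p_omega, the probability that Omega appears at all, under the post's assumption that Omega rewards a mixed strategy in proportion to q.

```python
def omega_term(q: float, p_omega: float) -> float:
    '''Expected-utility term contributed by the Omega deal, evaluated before the
    coin toss, when I will pay the $100 with probability q (partial precommitment).'''
    # Heads branch: Omega pays $10000 with the probability q that it predicts I would pay.
    # Tails branch: I hand over $100 with probability q.
    return p_omega * 0.5 * (10000 * q) + p_omega * 0.5 * (-100 * q)

for q in (0.0, 0.5, 1.0):
    print(q, omega_term(q, p_omega=0.1))   # 0.0, 247.5, 495.0 -- increasing in q, so q = 1 dominates
```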

\n

On the principle of the least convenient world, I'm going to suppose that is in fact how Omega chooses to appear to me. I'm also going to suppose that I have no tools available to me in Omega world that I do not in fact possess right now. Here comes Omega:

\n
\n

Hello Nathan. Tails, I'm afraid. Care to pay up?

\n

\"Before I make my decision: Tell me the shortest proof that P = NP, or the converse.\"

\n

Omega obliges (it will not, of course, let me remember this proof - but I knew that when I asked).

\n

\"Do you have any way of proving that you can hypercompute to me?\"

\n

Yes. (Omega proves it.)

\n

\"So, you're really Omega. And my choice will have no other consequences?\"

\n

None. Had heads appeared, I would have predicted precisely this current sequence of events and used it to make a decision. But heads has not appeared. No consequences will ensue.

\n

\"So you would have simulated my brain performing these actions? No, you don't do that, do you? Can you prove that's possible?\"

\n

Yes. (Omega proves it.)

\n

\"Right. No, I don't want to give you $100.\"

\n
\n

 

\n

What the hell just happened? Before Omega appeared, I wanted this sequence of events to play out quite differently. In fact this was my wish right up to the 't' of \"tails\". But now I've decided to keep the $100 after all!

\n

The answer is that there is no equivalence between my utility function at time t, where t < timeOmega, and my utility function at time T, where timeOmega < T. Before timeOmega, my utility function contains terms from states of the world where Omega appears and the coin turns up heads; after, it doesn't. Add to that the fact that my utility function is increasing in money possessed, and my preferred action at time T changes (predictably so) at timeOmega. To formalise:

\n

Suppose we index possible worlds with a time, t, and a state, S: a world state is then (S,t). Now let the utility function of 'myself' at time t and in world state S be denoted U_{S,t}: A_S → R, where A_S is my set of actions and R the real numbers. Then in the limit of a small time differential Δt, we can use the Bellman equation to pick an optimal policy π*: S → A_S such that we maximise U_{S,t} as U_{S,t}(π*(S)).
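
\n

Stripped of the continuous-time machinery, the object being defined here is just an argmax over the actions available in each state. Below is a toy sketch of mine (a plain dict standing in for the utility function at the decision time T, ignoring transition dynamics and discounting, which the full Bellman setup would include); the state and action names are illustrative, not from the post.

```python
from typing import Dict

# Toy stand-in for U_{S,T}: for each state, the utility of each available action.
U: Dict[str, Dict[str, float]] = {
    'omega_announced_tails': {'pay_100': -100.0, 'keep_100': 0.0},
}

def optimal_policy(utilities: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    '''pi*(S): the action maximising U_{S,T} in each state S.'''
    return {state: max(actions, key=actions.get) for state, actions in utilities.items()}

print(optimal_policy(U))   # {'omega_announced_tails': 'keep_100'} -- which is exactly the problem
```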

\n

Before Omega appears, I am in (S,t). Suppose that the action \"paying $100 to Omega if tails appears\" is denoted a_100. Then, obviously, a_100 is not in my action set A_S. Let \"not paying $100 to Omega if tails appears\" be denoted a_0. a_0 isn't in A_S either. If we suppose Omega is guaranteed to appear shortly before time T (not a particularly restricting assumption for our purposes), then precommitting to paying is represented in our formalism by taking an action a_p at (S,t) such that either:

\n
    \n
  1. The probability of being in a state § in which tails has appeared and for which a_0 ∈ A_§ at time T is 0, or
  2. For all states § with tails having appeared, with a_0 ∈ A_§ and with non-zero probability at time T, U_{§,T}(a_0) < U_{§,T}(a_100), i.e. a_100 = π*(§). Note that a 'world state' S includes my brain.
\n

Then if Omega uses a trusted intermediary, I can easily carry out an action a_p = \"give bank account access to intermediary and tell intermediary to pay $100 from my account to Omega under all circumstances\". This counts as taking option 1 above. But suppose that option 1 is closed to us. Suppose we must take an action such that 2 is satisfied. What does such an action look like?

\n

Firstly, brain hacks. If my utility function in state § at time T is increasing in money, then U_{§,T}(a_0) > U_{§,T}(a_100), contra the desired property of a_p. Therefore I must arrange for my brain in world-state § to be such that my utility function is not so fashioned. But by supposition my utility function cannot \"change\"; it is simply a mapping from world-states × possible actions to real numbers. In fact the function itself is an abstraction describing the behaviour of a particular brain in a particular world state**. If, in addition, we desire that the Bellman equation actually holds, then we cannot simply abolish the process of determining an optimal policy at some arbitrary point in time T. I propose one more desired property: the general principle of more money being better than less should not cease to operate due to a_p, as this is sure to decrease U_{S,t}(a_p) below optimum (would we really lose less than $4950?). So the modification I make to my brain should be minimal in some sense. This is, after all, a highly exceptional circumstance. What one could do is arrange for my brain to experience strong reward for a short time period after taking action a_100. The actual amount chosen should be such that the reward outweighs the time-discounted future loss in utility from surrendering the $100 (it follows that the shorter the duration of reward, the stronger its magnitude must be). I must also guarantee that I am not simply attaching a label called \"reward\" to something that does not actually represent reward as defined in the Bellman equation. This would, I believe, require some pretty deep knowledge of the nature of my brain which I do not possess. Add to that the fact that I do not know how to hack my brain, and in a least convenient world, this option is closed to me also***.
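
\n

To make that last point concrete, here is a minimal sketch under toy assumptions of mine: geometric discounting at rate gamma, and a single number, loss, standing in for the time-discounted utility cost of surrendering the $100. It shows how the required per-step reward grows as the reward period shrinks.

```python
def min_per_step_reward(loss: float, gamma: float, duration: int) -> float:
    '''Smallest constant per-step reward, delivered for `duration` steps after paying,
    whose discounted sum outweighs the discounted utility cost `loss` of paying.'''
    discounted_weight = sum(gamma ** k for k in range(1, duration + 1))
    return loss / discounted_weight

for d in (1, 5, 20):
    print(d, round(min_per_step_reward(loss=100.0, gamma=0.99, duration=d), 2))
# 1 101.01
# 5 20.61
# 20 5.55   -- the shorter the reward burst, the stronger it must be
```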

\n

It's looking pretty grim for my expected utility. But wait: we do not simply have to increase U_{§,T}(a_100). We can also decrease U_{§,T}(a_0). Now we could implement a brain hack for this also, but the same arguments against apply. A simple solution might be to use a trusted intermediary for another purpose: give him $1000, and tell him not to give it back unless I do a_100. This would, in fact, motivate me, but it reintroduces the factor of how probable it is Omega will appear, which we were previously able to neglect, by altering the utility from time t to time timeOmega. Suppose we give the intermediary our account details instead. This solves the probability issue, but there is a potential either for me to frustrate him, a solvable problem, or for Omega to frustrate him in order to satisfy the \"no further consequences\" requirement. And so on: the requirements of the problem are such that only our own utility function is sacrosanct to Omega. It is through that mechanism only that we can win.

\n

This is my real difficulty: that the problem appears to require cognitive understanding and technology that we do not possess. Eliezer may very well give $100 whenever he meets this problem; so may Cameron; but I wouldn't, probably not, anyway. It wouldn't be instrumentally rational for me, given my utility function under those circumstances, at least not unless something happens that can put the concepts they carry around with them into my head, and stop me - or rather, make it instrumentally irrational for me, in the sense of being part of a suboptimal policy - from removing those concepts after Omega appears.

\n

However, on the off-chance that Omega~, a slightly less inconvenient version of Omega, appears before me: I hereby pledge one beer to every member of Less Wrong, if I fail to surrender my $100 when asked. Take that, obnoxious omniscient being!

\n

 

\n
\n

*It's faintly amusing, though only faintly, that despite knowing full well that I was supposed to consider the least convenient possible world, I neglected to think of my least convenient possible world when I first tried to tackle the problem. Ask yourself the question.

\n

**There are issues with identifying what it means for a brain/agent to persist from one world-state to another, but if such a persisting agent cannot be identified, then the whole problem is nonsense. It is more inconvenient for the problem to be coherent, as we must then answer it. I've also decided to use the Bellman equations with discrete time steps, rather than the time-continuous HJB equation, simply because I've never used the latter and don't trust myself to explain it correctly.

\n

***There is the question: would one not simply dehack after Omega arrives announcing 'tails'? If that is of higher utility than other alternatives: but then we must have defined \"reward\" inappropriately while making the hack, as the reward for being in each state, together with the discounting factor, serves to fully determine the utility function in the Bellman equation.

\n

(I've made a few small post-submission edits, the largest to clarify my conclusion)

" } }, { "_id": "iNCg6mjw584r9BWZK", "title": "Rationalist Poetry Fans, Unite!", "pageUrl": "https://www.lesswrong.com/posts/iNCg6mjw584r9BWZK/rationalist-poetry-fans-unite", "postedAt": "2009-03-20T01:58:56.188Z", "baseScore": 38, "voteCount": 41, "commentCount": 35, "url": null, "contents": { "documentId": "iNCg6mjw584r9BWZK", "html": "

Related to: Little Johnny Bayesian, Savanna Poets

\n

There are certain stereotypes about what rationalists can talk about versus what's really beyond the pale. So far, Less Wrong has pretty consistently exploded those stereotypes. In the past three weeks, we've discussed everything from Atlantis to chaos magick to \"9-11 Truth\". But I don't think anything surprised me quite as much as learning that there are a couple of rationalists here with a genuine interest in poetry.

Poetry has not been very friendly to the rational worldview over the past few centuries. What with all the 19th century's talk of unweaving rainbows and the 20th century's talk of quadrupeds swooning into billiard balls, it's tempting to think it reflects some natural order of things, some eternal conflict between Art and Science.

But for most of human history, science and art were considered natural allies. Lucretius' De Rerum Natura, an argument for atheism and atomic theory famous for being the ancient Roman equivalent of The God Delusion, was written in poetry. All through the Middle Ages, artists worked to a philosophy of trying to depict and celebrate natural truth. And the eighteenth century saw a golden age of what was sometimes called \"rationalist poetry\", a versified celebration of Enlightenment principles.

When William Wordsworth launched his poetic jihad against rationalism, he called his declaration of war The Tables Turned. On a mundane level, the title referred to an argument he was having with his friend, but on a grander scale he was consciously inverting the previous order of Reason as the virtue of poetry. Thus:

\n
\n

Enough of Science and of Art;
Close up these barren leaves;
Come forth, and bring with you a heart
That watches and receives.

\n
\n

Over the next few years, he and fellow jihadis John Keats and Percy Bysshe Shelley were wildly successful in completely changing the poetic ideal. I can't begrudge them their little movement; their poetry ranks among the greatest art ever produced by humankind. But it bears repeating that there was a strong rationalist tradition in poetry before, during, and after the Romantic Era. In its honor, I thought I would share some of my favorite rationalist poems. I make no claims that this is exhaustive, representative, or anything else besides my personal choices.

\n

\n

The most famous rationalist poet is probably Alexander Pope (1688-1744), perhaps best known for writing Isaac Newton's epitaph:

\n
\n

Nature and nature's laws lay hid in night.
God said, \"Let Newton be!\" and all was light.1

\n
\n

Indeed, Pope spent much of his career praising science and human reason, while also simultaneously lampooning human stupidity:

\n
\n

Go, wondrous creature! mount where Science guides;
Go measure earth, weigh air, and state the tides;
Instruct the planets in what orbs to run,
Correct old Time, and regulate the sun;
Go, soar with Plato to th'empyreal sphere,
To the first good, first perfect, and first fair;
Or tread the mazy round his followers trod,
And quitting sense call imitating God;
As eastern priests in giddy circles run,
And turn their heads to imitate the sun.
Go, teach Eternal Wisdom how to rule--
Then drop into thyself, and be a fool!

\n
\n

I can't claim this as a complete victory for rationalism, since it was in the context of Essay on Man, a mysterianist work declaring that humans should never overreach their pathetic mental powers and question God's supremacy. Even the quoted passage is a little ironic, intended to convey that humankind, with such amazing science, had a tendency to shoot itself in the foot when it tried to overstep its bounds.

\n

But Pope's appreciation for scientific progress was genuine, and he was also deeply interested in overcoming bias (which he, in his pre-Samuel Johnson way, called \"byass\"). His Essay on Criticism sometimes reads like a strangely spelled, classical-allusion-laden rationalists' manual:

\n
\n

Of all the Causes which conspire to blind
Man's erring Judgment, and misguide the Mind,
What the weak Head with strongest Byass rules,
Is Pride, the never-failing Vice of Fools [...]
If once right Reason drives that Cloud away,
Truth breaks upon us with resistless Day;
Trust not your self; but your Defects to know,
Make use of ev'ry Friend--and ev'ry Foe.

\n
\n

He exhorts us to think for ourselves, rather than take things on faith or blindly accept authority:

\n
\n

Some ne'er advance a Judgment of their own,
But catch the spreading Notion of the Town;
They reason and conclude by Precedent,
And own stale Nonsense which they ne'er invent.

\n
\n

But equally he reminds us that reversed stupidity is not intelligence:

\n
\n

The Vulgar thus through Imitation err;
As oft the Learn'd by being Singular;
So much they scorn the Crowd, that if the Throng
By Chance go right, they purposely go wrong;

\n
\n

And tells us to admit our errors, learn from them, and move on:

\n
\n

Some positive persisting Fops we know,
Who, if once wrong, will needs be always so;
But you, with Pleasure own your Errors past,
And make each Day a Critick on the last.

\n
\n

At the end, he describes the person he wants judging his poetry: someone who sounds rather like the ideal rationalist.

\n
\n

Unbiass'd, or by Favour or by Spite;
Not dully prepossest, nor blindly right;
Tho' Learn'd well-bred; and tho' well-bred, sincere;
Modestly bold, and Humanly severe
Who to a Friend his Faults can freely show,
And gladly praise the Merit of a Foe
Blest with a Taste exact, yet unconfin'd;
A Knowledge both of Books and Humankind;
Gen'rous Converse; a Sound exempt from Pride;
And Love to Praise, with Reason on his Side.

\n
\n

Pope came from a time when any person of good breeding was expected to be learned and able to converse about the scientific discoveries going on around them; an age when reason was actually trendy. There have been few such ages, and hence few such poets as Pope. But other rationalist poetry has come from people who were mathematicians or scientists in their day jobs, and poets only in their spare time.

Such a man was Omar Khayyam, the eleventh century Persian mathematician and astronomer. He did some work on cubic equations, wrote the Islamic world's most influential treatise on algebra, reformed the Persian calendar, and developed a partial heliocentric theory centuries before Copernicus. But he is most beloved for his rubaiyat, or quatrains, which recommend ignoring religion, accepting the deterministic material universe, and abandoning moral prudery in favor of having fun.

There are some beautiful translations and some accurate translations of Khayyam's works, but the rumor among those who speak Persian is that the beautiful translations are not accurate and the accurate translations are not beautiful, and that capturing the true spirit of the original may be hopeless. FitzGerald in particular, the most famous English translator, is accused of playing up the hedonism and playing down the rationalism. I've tried to select from a few different translations for this essay.

On determinism:

\n
\n

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.

And that inverted Bowl we call The Sky,
Whereunder crawling coop't we live and die,
Lift not thy hands to It for help - for It
Rolls impotently on as Thou or I.

With Earth's first Clay They did the Last Man knead,
And there of the Last Harvest sow'd the Seed:
And the first Morning of Creation wrote
What the Last Dawn of Reckoning shall read.

\n
\n

On atheism:

\n
\n

What! out of senseless Nothing to provoke
A conscious Something to resent the yoke
Of unpermitted Pleasure, under pain
Of Everlasting Penalties, if broke!

What! from his helpless Creature be repaid
Pure Gold for what he lent him dross-allay'd--
Sue for a Debt he never did contract,
And cannot answer--Oh, the sorry trade!

In every step I take Thou sett'st a snare,
Saying, \"I will entrap thee, so beware!\"
And, while all things are under Thy command,
I am a rebel - as Thou dost declare.

\n
\n

On Joy in the Merely Real:

\n
\n

If in the Spring, she whom I love so well
Meet me by some green bank - the truth I tell -
Bringing my thirsty soul a cup of wine,
I want no better Heaven, nor fear a Hell.

\n
\n

And unlike Alexander Pope, who is horrified, HORRIFIED at the thought that mankind might challenge God's divine plan, Omar Khayyam thinks he could do better:

\n
\n

Ah, Love! could you and I with Him conspire
To grasp this sorry Scheme of Things entire,
Would not we shatter it to bits--and then
Re-mould it nearer to the Heart's Desire!

\n
\n

Needless to say, his contemporaries shunned him for such blasphemies. What would he say, they ask, when called before the throne of Allah to account for his beliefs? Well, he told them, he would say this:

\n
\n

Although I have not served Thee from my youth,
And though my face is mask'd with Sin uncouth,
In Thine Eternal Justice I confide,
As one who ever sought to follow Truth.

\n
\n

Compare the clarity of Khayyam, who is prepared to stand before God and justify himself without fear, to Pascal, who insists that we abandon our own intellectual integrity on the imperceptibly tiny chance that we might accrue some material gain. I find this quatrain - \"in thy eternal justice I confide, as one who ever sought to follow Truth\" - the only fully satisfying answer to Pascal's Wager.

Piet Hein (whom I've quoted here before) was another scientist who turned to poetry. During his career as a theoretical physicist and mathematician, he developed the superellipse and the game Hex (later studied by John Nash). His career as a poet began when the Nazis invaded his native Denmark. The censors would have prohibited any obviously rebellious literature, so he turned to writing odd little poems that seemed innocuous until you thought about them long enough, at which point they became obvious critiques of dictatorship. He continued writing after the war, usually on the theme of keeping things simple and avoiding stupidity.

This, for example, seems appropriate to a site called Less Wrong:

\n
\n

The road to wisdom? -- Well, it's plain
and simple to express:
      Err
      and err
      and err again
      but less
      and less
      and less.

\n
\n

If you've read Are You A Solar Deity or Schools Proliferating Without Evidence, you may see the humor in this quatrain about fitting the data to the theory:

\n
\n

Everything's either
concave or convex,
so whatever you dream
will be something with sex.

\n
\n

On the first virtue:

\n
\n

I'd like to know
what this whole show
is about
before it's out.

\n
\n

On the fifth virtue:

\n
\n

Truth shall emerge from the interplay
   of attitudes freely debated.
Don't be misled by fanatics who say
   that only one truth should be stated:
truth is constructed in such a way
   that it can't be exaggerated.

\n
\n

On making an extraordinary effort:

\n
\n

Our so-called limitations, I believe,
apply to faculties we don't apply.
We don't discover what we can't achieve
until we make an effort not to try.

\n
\n

On fake justifications:

\n
\n

In view of your manner
    of spending your days
I hope you may learn,
    before ending them,
that the effort you spend
    on defending your ways
could better be spent
    on amending them.

\n
\n

Appropriate to the Singularity or to any of a number of fields:

\n
\n

Eradicate the optimist
 who takes the easy view
that human values will persist
 no matter what we do.

Annihilate the pessimist
 whose ineffectual cry
is that the goal's already missed
 however hard we try.

\n
\n

On reversed stupidity:

\n
\n

For many system shoppers it's
 a good-for-nothing system
that classifies as opposites
 stupidity and wisdom.

because by logic-choppers it's
 accepted with avidity:
stupidity's true opposite's
 the opposite stupidity.

\n
\n

On shutting up and doing the impossible:

\n
\n

'Impossibilities' are good
 not to attach that label to;
since, correctly understood,
if we wanted to, we would
 be able to be able to.

\n
\n

And even some poets who had no such formal acquaintance with science considered their poetry allied with its goals: an attempt to explore the universe and celebrate its wonders. This one's from Don Juan by Lord Byron, commonly (but, according to his own protestations, erroneously) classed with Wordsworth as a Romantic. I won't say there's not sarcasm in there, but Byron has a way of being sarcastic even when saying things he believes:

\n
\n

When Newton saw an apple fall, he found
In that slight startle from his contemplation --
'Tis said (for I'll not answer above ground
For any sage's creed or calculation) --
A mode of proving that the earth turn'd round
In a most natural whirl, called \"gravitation;\"
And this is the sole mortal who could grapple,
Since Adam, with a fall or with an apple.

Man fell with apples, and with apples rose,
If this be true; for we must deem the mode
In which Sir Isaac Newton could disclose
Through the then unpaved stars the turnpike road,
A thing to counterbalance human woes:
For ever since immortal man hath glow'd
With all kinds of mechanics, and full soon
Steam-engines will conduct him to the moon.

And wherefore this exordium? -- Why, just now,
In taking up this paltry sheet of paper,
My bosom underwent a glorious glow,
And my internal spirit cut a caper:
And though so much inferior, as I know,
To those who, by the dint of glass and vapour,
Discover stars and sail in the wind's eye,
I wish to do as much by poesy.

\n
\n

There's no sarcasm at all in this next declaration of Byron's, where he vows hostility to everything from despotism to religion to mob rule to fuzzy thinking to the Blue vs. Green two-party swindle:

\n
\n

And I will war, at least in words (and -- should
My chance so happen -- deeds), with all who war
With Thought; -- and of Thought's foes by far most rude,
Tyrants and sycophants have been and are.
I know not who may conquer: if I could
Have such a prescience, it should be no bar
To this my plain, sworn, downright detestation
Of every despotism in every nation.

It is not that I adulate the people:
Without me, there are demagogues enough,
And infidels, to pull down every steeple,
And set up in their stead some proper stuff.
Whether they may sow scepticism to reap hell,
As is the Christian dogma rather rough,
I do not know; -- I wish men to be free
As much from mobs as kings -- from you as me.

The consequence is, being of no party,
I shall offend all parties: never mind!
My words, at least, are more sincere and hearty
Than if I sought to sail before the wind.
He who has nought to gain can have small art: he
Who neither wishes to be bound nor bind,
May still expatiate freely, as will I,
Nor give my voice to slavery's jackal cry.

\n
\n

Byron on the progress of science, and on rejecting disproven theories:

\n
\n

If from great nature's or our own abyss
Of thought we could but snatch a certainty,
Perhaps mankind might find the path they miss --
But then 't would spoil much good philosophy.
One system eats another up, and this
Much as old Saturn ate his progeny;
For when his pious consort gave him stones
In lieu of sons, of these he made no bones.

But System doth reverse the Titan's breakfast,
And eats her parents, albeit the digestion
Is difficult. Pray tell me, can you make fast,
After due search, your faith to any question?
Look back o'er ages, ere unto the stake fast
You bind yourself, and call some mode the best one.
Nothing more true than not to trust your senses;
And yet what are your other evidences?

\n
\n

Again on the same topic (and some thoughts on the \"wisdom of crowds\"):

\n
\n

There is a common-place book argument,
Which glibly glides from every tongue;
When any dare a new light to present,
\"If you are right, then everybody 's wrong\"!
Suppose the converse of this precedent
So often urged, so loudly and so long;
\"If you are wrong, then everybody's right\"!
Was ever everybody yet so quite?

\n
\n

This is Byron at his sarcastic best on the value accorded truth in society:

\n
\n

The antique Persians taught three useful things,
To draw the bow, to ride, and speak the truth.
This was the mode of Cyrus, best of kings --
A mode adopted since by modern youth.
Bows have they, generally with two strings;
Horses they ride without remorse or ruth;
At speaking truth perhaps they are less clever,
But draw the long bow better now than ever.

\n
\n

I can't help ending this by saying a word in praise of the Romantics. Yes, they may have gotten their rainbows in a tangle, and they may have hurled every curse they could at \"Reason\", but I think they were less opposed than they let on. Consider as anecdotal evidence Percy Shelley, who was expelled from Oxford after refusing to recant his atheism. What the Romantics hated was anyone telling them how to think, and their quarrel with a science they did not understand was less with its methods and more that it seemed an authority. Thus John Keats, in the same year he wrote Lamia, also penned perhaps the greatest statement of the Joy in the Merely Real ideal ever, writing:

\n
\n

Beauty is Truth, Truth Beauty, that is all
Ye know on Earth, and all ye need to know.

\n
\n

I think given an hour to talk to him and set him straight I could've convinced him there is no loss of beauty in accepting Newton's optics. It is true, after all.

I end with Shelley's description from Mont Blanc of the godless yet ordered essence of the universe that he worshipped:

\n
\n

The secret Strength of things
Which governs thought, and to the infinite dome
Of Heaven is as a law.

\n
\n

The laws that govern our own thought processes are the same laws that bind the infinite dome of Heaven. What better statement of the rationalist worldview could you ask for?

\n

Now, what are your favorite rationalist poems?

\n

Footnotes:

\n

1: I have since spotted the following addition to Pope's couplet:

\n
\n

It did not last; the Devil, howling \"Ho!
Let Einstein be!\" restored the status quo.

\n
" } }, { "_id": "q79vYjHAE9KHcAjSs", "title": "Rationalist Fiction", "pageUrl": "https://www.lesswrong.com/posts/q79vYjHAE9KHcAjSs/rationalist-fiction", "postedAt": "2009-03-19T08:22:43.093Z", "baseScore": 47, "voteCount": 47, "commentCount": 193, "url": null, "contents": { "documentId": "q79vYjHAE9KHcAjSs", "html": "

Followup to: Lawrence Watt-Evans's Fiction
Reply to: On Juvenile Fiction

\n

MBlume asked us to remember what childhood stories might have influenced us toward rationality; and this was given such excellent answers as Norton Juster's The Phantom Tollbooth.  So now I'd like to ask a related question, expanding the purview to all novels (adult or child, SF&F or literary):  Where can we find explicitly rationalist fiction?

\n

Now of course there are a great many characters who claim to be using logic.  The whole genre of mystery stories with seemingly logical detectives, starting from Sherlock Holmes, would stand in witness of that.

\n

But when you look at what Sherlock Holmes does - you can't go out and do it at home.  Sherlock Holmes is not really operating by any sort of reproducible method.  He is operating by magically finding the right clues and carrying out magically correct complicated chains of deduction.  Maybe it's just me, but it seems to me that reading Sherlock Holmes does not inspire you to go and do likewise.  Holmes is a mutant superhero.  And even if you did try to imitate him, it would never work in real life.

\n

Contrast to A. E. van Vogt's Null-A novels, starting with The World of Null-A.  Now let it first be admitted that Van Vogt had a number of flaws as an author.  With that said, it is probably a historical fact about my causal origins, that the Null-A books had an impact on my mind that I didn't even realize until years later.  It's not the sort of book that I read over and over again, I read it and then put it down, but -

\n

- but this is where I was first exposed to such concepts as \"The map is not the territory\" and \"rose_1 is not rose_2\".

\n

Null-A stands for \"Non-Aristotelian\", and the premise of the ficton is that studying Korzybski's General Semantics makes you a superhero.  Let's not really go into that part.  But in the Null-A ficton:

\n

1)  The protagonist, Gilbert Gosseyn, is not a mutant.  He has studied rationality techniques that have been systematized and are used by other members of his society, not just him.

\n

2)  Van Vogt tells us what (some of) these principles are, rather than leaving them mysteriously blank - we can't be Gilbert Gosseyn, but we can at least use some of this stuff.

\n

3)  Van Vogt conveys the experience, shows Gosseyn in action using the principles, rather than leaving them to triumphant explanation afterward.  We are put into Gosseyn's shoes at the moment of his e.g. making a conscious distinction between two different things referred to by the same name.

\n

This is a high standard to meet.

\n

But Marc Stiegler's David's Sling (quoted in e.g. this post) meets this same standard:  The Zetetics derive their abilities from training in a systematized tradition; we get to see the actual principles the Zetetics are using, and they're ones we could try to apply in real life; and we're put into their shoes at the moments of their use.

\n

I mention this to show that it isn't only van Vogt who's ever done this.

\n

However...

\n

...those two examples actually exhaust my knowledge of the science fiction and fantasy literature, so far as I can remember.

\n

It really is a very high standard we're setting here.  To realistically show your characters using an interesting technique of rationality, you have to know an interesting technique of rationality.  Van Vogt was inspired by Korzybski, who - I discovered when I looked this up, just now - actually invented the phrase \"The map is not the territory\".  Marc Stiegler was inspired by, among other sources, Eric Drexler and Robin Hanson.  (Stiegler has another novel called Earthweb about using prediction markets to defend the Earth from invading aliens, which was my introduction to the concept of prediction markets.)

\n

If I relax the standard to focus mainly on item (3), fiction that transmits a powerful experience of using rationality, then I could add in Greg Egan's Distress, some of Lawrence Watt-Evans's strange little novels, the travails of Salvor Hardin in the first Foundation novel, and probably any number of others.

\n

But what I'm really interested in is whether there's any full-blown Rationalist Fiction that I've missed - or maybe just haven't remembered.  Failing that, I'm interested in stories that merely do a good job of conveying a rationalist experience.  (Please specify which of these cases is true, if you make a recommendation.)

" } }, { "_id": "mg6jDEuQEjBGtibX7", "title": "Counterfactual Mugging", "pageUrl": "https://www.lesswrong.com/posts/mg6jDEuQEjBGtibX7/counterfactual-mugging", "postedAt": "2009-03-19T06:08:37.769Z", "baseScore": 84, "voteCount": 92, "commentCount": 299, "url": null, "contents": { "documentId": "mg6jDEuQEjBGtibX7", "html": "

Related to: Can Counterfactuals Be True?, Newcomb's Problem and Regret of Rationality.

\n

Imagine that one day, Omega comes to you and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But see, Omega tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.

\n

Omega can predict what your decision would be if it asked you to give it $100; even though that hasn't actually happened, it can compute the counterfactual truth. Omega is also known to be absolutely honest and trustworthy, with no word-twisting, so the facts are really as it says: it really tossed a coin and really would've given you $10000.

\n

From your current position, it seems absurd to give up your $100. Nothing good happens if you do that: the coin has already landed tails up, and you'll never see the counterfactual $10000. But look at this situation from your point of view before Omega tossed the coin. There, you have two possible branches ahead of you, of equal probability. On one branch, you are asked to part with $100, and on the other branch, you are conditionally given $10000. If you decide to keep $100, the expected gain from this decision is $0: there is no exchange of money, you don't give Omega anything on the first branch, and as a result Omega doesn't give you anything on the second branch. If you decide to give $100 on the first branch, then Omega gives you $10000 on the second branch, so the expected gain from this decision is

\n

-$100 * 0.5 + $10000 * 0.5 = $4950
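(A minimal sketch, added for illustration and not part of the original post: the same pre-toss expected-value comparison in code. The payoff numbers are the ones given above; the function name is hypothetical.)

```python
# Minimal sketch (not from the original post): expected value of the two
# policies described above, evaluated from the position before the coin toss.

def expected_gain(pay_if_tails: bool) -> float:
    """Expected gain of a policy, averaged over the fair coin."""
    p_heads = p_tails = 0.5
    gain_if_tails = -100 if pay_if_tails else 0    # you hand over $100, or refuse
    gain_if_heads = 10000 if pay_if_tails else 0   # Omega pays only if you would have paid
    return p_tails * gain_if_tails + p_heads * gain_if_heads

print(expected_gain(pay_if_tails=True))   # 4950.0
print(expected_gain(pay_if_tails=False))  # 0.0
```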

\n

So, this straightforward calculation tells you that you ought to give up your $100. It looks like a good idea before the coin toss, but it starts to look like a bad idea after the coin comes up tails. Had you known about the deal in advance, one possible course of action would have been to set up a precommitment: you contract with a third party, agreeing that you'll forfeit $1000 if you don't give $100 to Omega when it asks. In that case, you leave yourself no other choice.

\n

But in this game, explicit precommitment is not an option: you didn't know about Omega's little game until the coin was already tossed and the outcome of the toss was given to you. The only thing that stands between Omega and your $100 is your ritual of cognition. And so I ask you all: is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?

\n

P.S. Let's assume that the coin is deterministic - that in the overwhelming measure of the MWI worlds it gives the same outcome. You don't care about the fraction that sees a different result; in all reality, the result is that Omega won't even consider giving you $10000, it only asks for your $100. Also, the deal is unique: you won't see Omega ever again.

" } }, { "_id": "6yTShbTdtATxKonY5", "title": "How to Not Lose an Argument", "pageUrl": "https://www.lesswrong.com/posts/6yTShbTdtATxKonY5/how-to-not-lose-an-argument", "postedAt": "2009-03-19T01:07:52.097Z", "baseScore": 165, "voteCount": 152, "commentCount": 416, "url": null, "contents": { "documentId": "6yTShbTdtATxKonY5", "html": "

Related to: Leave a Line of Retreat

\n

Followup to: Talking Snakes: A Cautionary Tale, The Skeptic's Trilemma

\n

\"I argue very well. Ask any of my remaining friends. I can win an argument on any topic, against any opponent. People know this, and steer clear of me at parties. Often, as a sign of their great respect, they don't even invite me.\"

\n

        --Dave Barry

\n

The science of winning arguments is called Rhetoric, and it is one of the Dark Arts. Its study is forbidden to rationalists, and its tomes and treatises are kept under lock and key in a particularly dark corner of the Miskatonic University library. More than this it is not lawful to speak.

But I do want to talk about a very closely related skill: not losing arguments.

Rationalists probably find themselves in more arguments than the average person. And if we're doing it right, the truth is hopefully on our side and the argument is ours to lose. And far too often, we do lose arguments, even when we're right. Sometimes it's because of biases or inferential distances or other things that can't be helped. But all too often it's because we're shooting ourselves in the foot.

How does one avoid shooting one's self in the foot? In rationalist language, the technique is called Leaving a Social Line of Retreat. In normal language, it's called being nice.

First, what does it mean to win or lose an argument? There is an unspoken belief in some quarters that the point of an argument is to gain social status by utterly demolishing your opponent's position, thus proving yourself the better thinker. That can be fun sometimes, and if it's really all you want, go for it.

But the most important reason to argue with someone is to change his mind. If you want a world without fundamentalist religion, you're never going to get there just by making cutting and incisive critiques of fundamentalism that all your friends agree sound really smart. You've got to deconvert some actual fundamentalists. In the absence of changing someone's mind, you can at least get them to see your point of view. Getting fundamentalists to understand the real reasons people find atheism attractive is a nice consolation prize.

I make the anecdotal observation that a lot of smart people are very good at winning arguments in the first sense, and very bad at winning arguments in the second sense. Does that correspond to your experience?

Back in 2008, Eliezer described how to Leave a Line of Retreat. If you believe morality is impossible without God, you have a strong disincentive to become an atheist. Even after you've realized which way the evidence points, you'll activate every possible defense mechanism for your religious beliefs. If all the defense mechanisms fail, you'll take God on utter faith or just believe in belief, rather than surrender to the unbearable position of an immoral universe.

The correct procedure for dealing with such a person, Eliezer suggests, isn't to show them yet another reason why God doesn't exist. They'll just reject it along with all the others. The correct procedure is to convince them, on a gut level, that morality is possible even in a godless universe. When disbelief in God is no longer so terrifying, people won't fight it quite so hard and may even deconvert themselves.

But there's another line of retreat to worry about, one I experienced firsthand in a very strange way. I had a dream once where God came down to Earth; I can't remember exactly why. In the borderlands between waking and sleep, I remember thinking: I feel like a total moron. Here I am, someone who goes to atheist groups and posts on atheist blogs and has told all his friends they should be atheists and so on, and now it turns out God exists. All of my religious friends whom I won all those arguments against are going to be secretly looking at me, trying as hard as they can to be nice and understanding, but secretly laughing about how I got my comeuppance. I can never show my face in public again. Wouldn't you feel the same?

And then I woke up, and shook it off. I am an aspiring rationalist: if God existed, I would desire to believe that God existed. But I realized at that point the importance of the social line of retreat. The psychological resistance I felt to admitting God's existence, even after having seen Him descend to Earth, was immense. And, I realized, it was exactly the amount of resistance that every vocally religious person must experience towards God's non-existence.

There's not much we can do about this sort of high-grade long-term resistance. Either a person has enough of the rationalist virtues to overcome it, or he doesn't. But there is a less ingrained, more immediate form of social resistance generated with every heated discussion.

Let's say you approach a theist (let's call him Theo) and say \"How can you, a grown man, still believe in something stupid like talking snakes and magic sky kings? Don't you know you people are responsible for the Crusades and the Thirty Years' War and the Spanish Inquisition? You should be ashamed of yourself!\"

This suggests the following dichotomy in Theo's mind: EITHER God exists, OR I am an idiot who believes in stupid childish  things and am in some way partly responsible for millions of deaths and I should have lower status and this arrogant person who's just accosted me and whom I already hate should have higher status at my expense.

Unless Theo has attained a level of rationality far beyond any of us, guess which side of that dichotomy he's going to choose? In fact, guess which side of that dichotomy he's now going to support with renewed vigor, even if he was only a lukewarm theist before? His social line of retreat has been completely closed off, and it's your fault.

Here the two definitions of \"winning an argument\" I suggested before come into conflict. If your goal is to absolutely demolish the other person's position, to make him feel awful and worthless - then you are also very unlikely to change his mind or win his understanding. And because our culture of debates and mock trials and real trials and flaming people on Usenet encourages the first type of \"winning an argument\", there's precious little genuine mind-changing going on.

Really adjusting to the second type of argument, where you try to convince people, takes a lot more than just not insulting people outright1. You've got to completely rethink your entire strategy. For example, anyone used to the Standard Debates may already have a cached pattern of how they work. Activate the whole Standard Debate concept, and you activate a whole bunch of related thoughts like Atheists As The Enemy, Defending The Faith, and even in some cases (I've seen it happen) persecution of Christians by atheists in Communist Russia. To such a person, ceding an inch of ground in a Standard Debate may well be equivalent to saying all the Christians martyred by the Communists died in vain, or something similarly dreadful.

So try to show you're not just starting Standard Debate #4457. I remember once, during the middle of a discussion with a Christian, when I admitted I really didn't like Christopher Hitchens. Richard Dawkins, brilliant. Daniel Dennett, brilliant. But Christopher Hitchens always struck me as too black-and-white and just plain irritating. This one little revelation completely changed the entire tone of the conversation. I was no longer Angry Nonbeliever #116. I was no longer the living incarnation of All Things Atheist. I was just a person who happened to have a whole bunch of atheist ideas, along with a couple of ideas that weren't typical of atheists. I got the same sort of response by admitting I loved religious music. All of a sudden my friend was falling over himself to mention some scientific theory he found especially elegant in order to reciprocate2. I didn't end up deconverting him on the spot, but I think he left with a much better appreciation of my position.

All of these techniques fall dangerously close to the Dark Arts, so let me be clear: I'm not suggesting you misrepresent yourself just to win arguments. I don't think misrepresenting yourself would even work; evolutionary psychology tells us humans are notoriously bad liars. Don't fake an appreciation for the other person's point of view, actually develop an appreciation for the other person's point of view. Realize that your points probably seem as absurd to others as their points seem to you. Understand that many false beliefs don't come from simple lying or stupidity, but from complex mixtures of truth and falsehood filtered by complex cognitive biases. Don't stop believing that you are right and they are wrong, unless the evidence points that way. But leave it at them being wrong, not them being wrong and stupid and evil.

I think most people intuitively understand this. But considering how many smart people I see shooting their own foot off when they're trying to convince someone3, some of them clearly need a reminder.

\n

 

\n

Footnotes

\n

1: An excellent collection of the deeper and more subtle forms of this practice can be found in Dale Carnegie's How to Win Friends and Influence People, one of the only self-help books I've read that was truly useful and not a regurgitation of cliches and applause lights. Carnegie's thesis is basically that being nice is the most powerful of the Dark Arts, and that a master of the Art of Niceness can use it to take over the world. It works better than you'd think.

\n

2: The following technique is definitely one of the Dark Arts, but I mention it because it reveals a lot about the way we think: when engaged in a really heated, angry debate, one where the insults are flying, suddenly stop and admit the other person is one hundred percent right and you're sorry for not realizing it earlier. Do it properly, and the other person will be flabbergasted, and feel deeply guilty at all the names and bad feelings they piled on top of you. Not only will you ruin their whole day, but for the rest of time, this person will secretly feel indebted to you, and you will be able to play with their mind in all sorts of little ways.

\n

3: Libertarians, you have a particular problem with this. If I wanted to know why I'm a Stalin-worshipper who has betrayed the Founding Fathers for personal gain and is controlled by his base emotions and wants to dominate others by force to hide his own worthlessness et cetera, I'd ask Ann Coulter. You're better than that. Come on. And then you wonder why people never vote for you.

" } }, { "_id": "6r2t5qTxjHWxG5DTj", "title": "Little Johny Bayesian", "pageUrl": "https://www.lesswrong.com/posts/6r2t5qTxjHWxG5DTj/little-johny-bayesian", "postedAt": "2009-03-18T21:30:59.049Z", "baseScore": 16, "voteCount": 41, "commentCount": 18, "url": null, "contents": { "documentId": "6r2t5qTxjHWxG5DTj", "html": "

Followup to: Rationalist Storybooks: A Challenge

\n

This was originally a comment in response to a challenge to create a nursery rhyme conveying rationality concepts, but, at the suggestion of Eliezer, I've made it into its own post.

\n

 

\n

Little Johny thought he was very bright,
But the schoolkids did not -- they would laugh when he came in sight.
He could count, sing, and guess the weather.
Then one day, Big Bill said \"Real bright boys will grow a feather.\"

\n

\"Ach!\" he cried, \"Could it be true?\"
\"Then I'm not bright, which makes me blue.\"
So he went home, and searched all over.
And then found growth on his head, clear as a clover.

\n

\"It is true, feathers are sprouting!\"
\"It's proof that I'm bright!\" So he stopped pouting.
He ran to show his mom, nearly tripping over some eggs,
When he saw on TV \"Bright boys will grow long legs.\"

\n

So he waited for weeks and weeks for to find proof,
Worried over his brightness, and staying quite aloof,
Until one day, feeling in a pinch,
He grabbed a tape measure, and found his legs had grown a whole inch!

\n

So he leaped off to school, but a scientist walked by,
And Johny overheard him say, that real bright boys could fly.
\"The hair, the legs, from these I know
Of my brightness. The flying thus follows, so...\"

\n

Little Johny plotted of his grand display,
Standing high on a wall, he would proudly say
\"Behold, I have proof that I'm bright!\"
And he would deftly leap off, and soar into flight.

\n

So he climbed up the wall, and made his speech,
But there his plan stopped with a screech,
For he hit the ground hard with a smack,
Leaving his leg all bloody and black.

\n

As the other children laughed, he tried to explain,
Of the things that he heard, and why he had taken it to his brain,
\"They came from on high, from people who knew
I looked at myself, and saw they were true.\"

\n

They laughed, \"You're too eager to believe, you fool.
Your feathers are just hair, all boys grow long legs as a rule.
Yes, if all you heard were true, you'd fly, but you'll find out,
That if you do logic with garbage, then you'll get garbage out.\"

\n

So Johny thought wrongly, and got his leg in a cast,
He had sought fame in the schoolyard, but now that's all past.
He's taken the lesson to heart, no longer believes all he hears.
So he doesn't believe them when they say he's not bright -- brightness doesn't come from peers.

" } }, { "_id": "LxqwJyH2BQBYgap7N", "title": "A corpus of our community's knowledge", "pageUrl": "https://www.lesswrong.com/posts/LxqwJyH2BQBYgap7N/a-corpus-of-our-community-s-knowledge", "postedAt": "2009-03-18T20:18:43.240Z", "baseScore": 8, "voteCount": 11, "commentCount": 14, "url": null, "contents": { "documentId": "LxqwJyH2BQBYgap7N", "html": "

The purpose of this site is to help build a rationalist community, and to help individuals fulfill their potential in that domain.

\n

We have a lot of discussions going on, and a lot of material is being, and will continue to be, generated. At some point it may become difficult for any single individual to follow all of it. Even taking the karma system into account, interesting contributions may be missed by any particular individual. Furthermore, the sum of what gets elaborated here will not be as concise or as easily accessible as we might wish.

\n

To the point: would it be a good idea to try to summarize the most important, relevant ideas upon which we will be building our edifice? That way, a future student of rationality could come upon a concise, easy-to-digest introduction to our results and ideas, and less active members could still manage to follow this ongoing process.

\n

If so, how would we proceed? What is being discussed here may not have the quality we'd expect of, say, a scientific publication, though I think that such quality would be necessary, if not sufficient, for what would eventually become our own corpus of knowledge. How would we elaborate it, layer upon layer of work and discussion? A starting point would be to refer to, or summarize, the relevant existing scientific results that we would lay our base upon. We'd then move on to summarizing our most important achievements, however that word is to be taken, seamlessly upon that foundation.

\n

Any thoughts on how, or whether, to organize this?

" } }, { "_id": "tHJ43CyPdEkQ9Dfup", "title": "Rationalist Storybooks: A Challenge", "pageUrl": "https://www.lesswrong.com/posts/tHJ43CyPdEkQ9Dfup/rationalist-storybooks-a-challenge", "postedAt": "2009-03-18T02:25:30.475Z", "baseScore": 39, "voteCount": 37, "commentCount": 38, "url": null, "contents": { "documentId": "tHJ43CyPdEkQ9Dfup", "html": "

Follow-Up to: On Juvenile Fiction

\n

Related to: The Simple Truth

\n

I quote again from JulianMorrison, who writes:

\n
\n

If you want people to repeat this back, write it in a test, maybe even apply it in an academic context, a four-credit undergrad course will work.

\n

If you want them to have it as the ground state of their mind in everyday life, you probably need to have taught them songs about it in kindergarten.

\n
\n

Anonym adds:

\n
\n

Imagine a world in which 8-year olds grok things like confirmation bias and the base-rate fallacy on an intuitive level because they are reminded of their favorite childhood stories and the lessons they internalized after having the story read to them again and again. What a wonderful foundation to build upon.

\n
\n

With this in mind, here is my challenge:

\n

Look through Eliezer's early standard bias posts.  Can you convey the essential content of one of these posts in a 16-page picture book, or in a nursery rhyme children could sing while they skip rope?

\n

Write the story, and post it here.  Let's see what we can come up with.

\n

This is not, by any means, intended to be a simple challenge.  On the one hand, we are compressing a lot of information into a small space.  On the other, good fiction is not easy, and children's fiction is no exception.

\n

We have two options.  We can humbly admit that we are not skilled writers of children's fiction and walk away, or we can determine that this is a task which needs to be completed, produce lots of really bad fiction, and begin the process of criticizing one another, learning from our mistakes, and growing stronger.

\n

When I was a boy, I had a thick book of 365 short stories, some not even taking up a full page.  Each was self-contained, and I could flip open the book at random and find a story I hadn't read before.

\n

How quickly would our community grow, both in strength and in numbers, if we could crowdsource a Rationalist's Book of Tales?

\n

I know, I know.  It's optimistic. It's ambitious. Most of all, it seems really silly.

\n

Let's do it anyway.

" } }, { "_id": "ZmQv4DFx6y4jFbhLy", "title": "Never Leave Your Room", "pageUrl": "https://www.lesswrong.com/posts/ZmQv4DFx6y4jFbhLy/never-leave-your-room", "postedAt": "2009-03-18T00:30:27.152Z", "baseScore": 83, "voteCount": 88, "commentCount": 65, "url": null, "contents": { "documentId": "ZmQv4DFx6y4jFbhLy", "html": "

Related to: Priming and Contamination

\n

Psychologists define \"priming\" as the ability of a stimulus to activate the brain in such a way as to affect responses to later stimuli. If that doesn't sound sufficiently ominous, feel free to re-word it as \"any random thing that happens to you can hijack your judgment and personality for the next few minutes.\"

For example, let's say you walk into a room and notice a briefcase in the corner. Your brain is now the proud owner of the activated concept \"briefcase\". It is \"primed\" to think about briefcases, and by extension about offices, business, competition, and ambition. For the next few minutes, you will shift ever so slightly towards perceiving all social interactions as competitive, and towards behaving competitively yourself. These slight shifts will be large enough to be measured by, for example, how much money you offer during the Ultimatum Game. If that sounds too much like some sort of weird New Age sympathetic magic to believe, all I can say is Kay, Wheeler, Bargh, and Ross, 2004.1

We've been discussing the costs and benefits of Santa Claus recently. Well, here's one benefit: show Dutch children an image of St. Nicholas' hat, and they'll be more likely to share candy with others. Why? The researchers hypothesize that the hat activates the concept of St. Nicholas, and St. Nicholas activates an idealized concept of sharing and giving. The child is now primed to view sharing positively. Of course, the same effect can be used for evil. In the same study, kids shown the Toys 'R' Us logo refused to share their precious candy with anyone.

But this effect is limited to a few psych laboratories, right? It hasn't done anything like, you know, determine the outcome of a bunch of major elections?

\n



I am aware of two good studies on the effect of priming in politics. In the first, subjects were subliminally2 primed with either alphanumeric combinations that recalled the 9/11 WTC attacks (i.e. \"911\" or \"WTC\"), or random alphanumeric combinations. Then they were asked to rate the Bush administration's policies. Those who saw the random strings rated Bush at an unenthusiastic 42% (2.1/5). Those who were primed to be thinking about the War on Terror gave him an astounding 75% (3.75/5). That's a dramatic change, even though none of them could consciously recall seeing terrorism-related stimuli.

In the second study, scientists analyzed data from the 2000 election in Arizona, and found that polling location had a moderate effect on voting results. That is, people who voted in a school were more likely to support education-friendly policies, people who voted in a church were more likely to support socially conservative policies, et cetera. The effect seems to have shifted results by about three percentage points. Think about all the elections that were won or lost by less than three percent...

Objection: correlation is not causation! Religious people probably live closer to churches, and are more likely to know where their local church is, and so on. So the scientists performed an impressive battery of regression analyses and adjustments on their data. Same response.

Objection: maybe their adjustments weren't good enough! The same scientists then called voters into their laboratory, showed them pictures of buildings, and asked them to cast a mock vote on the education initiatives. Voters who saw pictures of schools were more likely to vote yes on the pro-education initiatives than voters who saw control buildings.

What techniques do these studies suggest for rationalists? I'm tempted to say the optimal technique is to never leave your room, but there are still a few less extreme things you can do. First, avoid exposure to any salient stimuli in the few minutes before making an important decision. Everyone knows about the 9-11 terrorist attacks, but the War on Terror only hijacked the decision-making process when the subjects were exposed to the related stimuli directly before performing the rating task3.

Second, try to make decisions in a neutral environment and then stick to them. The easiest way to avoid having your vote hijacked by the location of your polling place is to decide how to vote while you're at home, and then stick to that decision unless you have some amazing revelation on your way to the voting booth. Instead of never leaving your room, you can make decisions in your room and then carry them out later in the stimulus-laden world.

I can't help but think of the long tradition of master rationalists \"blanking their mind\" to make an important decision. Jeffreyssai's brain \"carefully put in idle\" as he descends to a bare white room to stage his crisis of faith. Anasûrimbor Kellhus withdrawing into himself and entering a probability trance before he finds the Shortest Path. Your grandmother telling you to \"sleep on it\" before you make an important life choice.

Whether or not you try anything as formal as that, waiting a few minutes in a stimulus-free environment before a big decision might be a good idea.

\n

 

\n

Footnotes

\n

1: I bet that sympathetic magic probably does have strong placebo-type effects for exactly these reasons, though.

\n

2: Priming is one of the phenomena behind all the hype about subliminal advertising and other subliminal effects. The bad news is that it's real: a picture of popcorn flashed subliminally on a movie screen can make you think of popcorn. The good news is that it's not particularly dangerous: your thoughts of popcorn aren't any stronger or any different than they'd be if you just saw a normal picture of popcorn.

\n

3: The obvious objection is that if you're evaluating George Bush, it would be very strange if you didn't think of the 9-11 terror attacks yourself in the course of the evaluation. I haven't seen any research addressing this possibility, but maybe hearing an external reference to it outside the context of your own thought processes is a stronger activation than the one you would get by coming up with the idea yourself.

" } }, { "_id": "TQSb4wd6v5C3p6HX2", "title": "The Pascal's Wager Fallacy Fallacy", "pageUrl": "https://www.lesswrong.com/posts/TQSb4wd6v5C3p6HX2/the-pascal-s-wager-fallacy-fallacy", "postedAt": "2009-03-18T00:30:00.000Z", "baseScore": 45, "voteCount": 39, "commentCount": 128, "url": null, "contents": { "documentId": "TQSb4wd6v5C3p6HX2", "html": "

Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models. The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.

So I observed that:

  1. Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken. (I go into some detail on this possibility below the cutoff.)
  2. If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

And the one said, "Isn't that a form of Pascal's Wager?"

I'm going to call this the Pascal's Wager Fallacy Fallacy.

You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"

The original problem with Pascal's Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).

However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someone mentions a big payoff - without any attention being given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.

And then, once the reasoning is perceptually recognized as an instance of "the Pascal's Wager fallacy", the other characteristics of the fallacy are automatically inferred: they assume that the probability is tiny and that the scenario has no specific support apart from the payoff.

But infinite physics and cryonics are both possibilities that, leaving their payoffs entirely aside, get significant chunks of probability mass purely on merit.

Yet instead we have reasoning that runs like this:

  1. Cryonics has a large payoff;
  2. Therefore, the argument carries even if the probability is tiny;
  3. Therefore, the probability is tiny;
  4. Therefore, why bother thinking about it?

(Posted here instead of Less Wrong, at least for now, because of the Hanson/Cowen debate on cryonics.)

Further details:

Pascal's Wager is actually a serious problem for those of us who want to use Kolmogorov complexity as an Occam prior, because the size of even the finite computations blows up much faster than their probability diminishes (see here).
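A hedged sketch of the divergence this paragraph alludes to (the notation U_max(n) is mine, not the post's or the linked paper's): under a length-based prior, a hypothesis describable by a program of length $n$ receives weight on the order of $2^{-n}$, while the largest finite payoff such a program can specify, $U_{\max}(n)$, grows like a busy-beaver function of $n$, which eventually outgrows every computable function, $2^n$ included. So the terms of

$$\sum_{n} 2^{-n}\, U_{\max}(n)$$

do not even shrink to zero, and the expected-utility sum fails to converge.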

See Bostrom on infinite ethics for how much worse things get if you allow non-halting Turing machines.

In our current model of physics, time is infinite, and so the collection of real things is infinite. Each time state has a successor state, and there's no particular assertion that time returns to the starting point. Considering time's continuity just makes it worse - now we have an uncountable set of real things!

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.

On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.

And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

" } }, { "_id": "qJiSvhGyvbgwQcNXn", "title": "Tarski Statements as Rationalist Exercise", "pageUrl": "https://www.lesswrong.com/posts/qJiSvhGyvbgwQcNXn/tarski-statements-as-rationalist-exercise", "postedAt": "2009-03-17T19:47:16.021Z", "baseScore": 12, "voteCount": 21, "commentCount": 10, "url": null, "contents": { "documentId": "qJiSvhGyvbgwQcNXn", "html": "

Related to: Dissolving the Question, The Second Law of Thermodynamics, and Engines of Cognition, The Meditation on Curiosity.

\n
\n

The sentence \"snow is white\" is true if, and only if, snow is white.

\n
\n

-- A. Tarski

\n

Several days ago I spent a couple of hours trying to teach my 15-year-old brother how to properly construct Tarski statements. It's quite nontrivial to get right. Learning to place facts and representations in separate mental buckets is one of the fundamental tools for a rationalist. In our model of the world, information propagates from object to object, from mind to mind. To ascertain the validity of your belief, you need to research the whole network of factors that led you to attain the belief. The simplest relation is between a fact and its representation, idealized to represent correctness or incorrectness only, without yet worrying about probabilities. The same object or the same property can be interpreted to mean different things in different relations and contexts, indicating the truth of one statement or another, and it's important not to conflate those.

\n

Let's say you are watching news on TV and the next item is an interview with a sasquatch. The sasquatch answers the questions about his family in decent English, with a slight British accent.

\n

What do you actually observe, how should you interpret the data? Did you \"see a sasquatch\"? Did you learn the facts about sasquatch's family? Is there a fact of the matter, as to whether the sasquatch's daughter is 5 years old, as opposed to 4 or 6?

\n

The meaningfulness of these questions is conditional on their context, as in the notorious \"when did you stop beating your wife?\". These examples seem unnaturally convoluted, but in fact every statement suffers from the same problem: you must cross the levels of indirection and not lose track of the question in order to go from a statement of fact, from a belief in your mind, to the fact that belief is about. First, you must relate the \"sasquatch\" on the TV screen to an actual sasquatch, which fails if there isn't one, and then you need to relate the sasquatch to his daughter, which again fails if he lies and doesn't have one.

\n

By contemplating the plausibility of the surface interpretation of a statement that doesn't yield to that interpretation, you fail before you even start. If the surface interpretation of the data doesn't apply, the data doesn't say anything about your interpretation. To form accurate beliefs about something, you really do have to observe it. When you misinterpret the data, the misinterpretation comes from you, not from the data, and so the data doesn't say anything about the thing you misinterpreted it to mean.

\n

Every piece of evidence tells something about the world; every lie speaks a hidden truth. By correctly interpreting the data, you may learn about the process that generated it, and further your expertise in correctly interpreting similar data. If your mind lies to you, it is an opportunity to learn the algorithms that constructed the lie, to right a wrong question and come out stronger from the experience.

\n

Consider a setting with a note lying near an apple stating \"the apple is poisonous\". There are two objects in this scene, the apple and the note. We associate an interpretation with the note, a statement of fact \"the apple is poisonous\". This statement corresponds to the note pretty much unambiguously. The statement implies that we should associate a non-obvious interpretation with the apple, \"poisonous thing\". The object (apple) and a newly introduced interpretation (\"poisonous thing\") are related to each other by means of our interpretation of the note. The interpretation of the apple as being poisonous is valid if and only if the statement of fact we read in the note is true. And this is what's stated by the Tarski statement for this situation. The Tarski statement relates two things: the truth of the interpretation of the data, and the interpretation of the fact this data is about. It commutes two pathways by which we arrive at the fact stated by the data. First, the fact is an interpretation of the state of the world. We start from the apple, and go to the associated \"poisonous thing\" interpretation. And second, the fact is referred to by the interpretation of the data. You start from the note, proceed to interpreting it to say \"the apple is poisonous\", and interpret the statement a second time to extract from it the relation between the apple and the property \"poisonous thing\". And so you state: the sentence \"the apple is poisonous\" is true if and only if the apple is poisonous. Or, in other words: the sentence \"the apple is poisonous\" is true if and only if the interpretation \"poisonous thing\" applies to the apple.
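(A toy sketch of this two-pathway structure, added for illustration and not from the post; the class and function names are hypothetical.)

```python
# Toy sketch (not from the original post) of the apple-and-note example:
# the Tarski biconditional ties together two routes to the same fact.

from dataclasses import dataclass

@dataclass
class Apple:
    poisonous: bool   # the actual state of the world


def interpret(note_text: str):
    """Surface interpretation of the note: the property it claims of the apple."""
    if note_text == 'the apple is poisonous':
        return lambda apple: apple.poisonous   # the property 'poisonous thing'
    raise ValueError('no interpretation available for this note')


claimed_property = interpret('the apple is poisonous')  # route 1: note -> statement -> property

for apple in (Apple(poisonous=True), Apple(poisonous=False)):
    statement_is_true = claimed_property(apple)          # route 2: check the apple itself
    # Tarski statement for this scene:
    # the sentence is true if and only if the apple is poisonous.
    assert statement_is_true == apple.poisonous
```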

\n

Each of the transitions between the levels of indirection may turn out to be wrong. Of reality, we only see the note and the apple; in this model we are pretty sure they are present. All the rest are constructions we associated with them, following interpretation of the evidence. First, we construct an interpretation of the scribbles on the note. To do that, we must understand that the note is intended to be read literally, in the language in which it's written, not in some code, so you won't see \"Tom is a spy\" when you read the note. Second, you interpret the statement that you read from the note to refer to this particular apple, and to a particular property, \"poisonous thing\". The note could have been left on the scene by mistake, referring to a different apple. There is no poison in the scene as you see it; the poison exists only in your interpretation of the statement read from the note. And then you decide whether the statement is true, and if you decide that it is, you make a new connection between the property \"poisonous thing\" and the apple: you start interpreting the apple itself as a \"poisonous thing\".

\n

We build our knowledge about the world step by step, and every step hides a potential error. Consciously inspecting the suspect steps requires constructing a model of those steps, with all the necessary parts. Rational analysis of beliefs and decisions in the real world requires not just an understanding of the mathematics of probability theory and decision theory, but also the skill of correctly applying those theories to real-life situations. Learning to explore the relations between the truth of statements of fact and the perception of the facts referred to by those statements may be a valuable exercise. Constructing Tarski statements requires mathematical thinking: keeping in mind many objects and the relations between them, interpreting the objects according to the intention of the current inquiry, constructing new relations based on those already present, and integrating them into the understanding of the problem. At the same time, the domain of Tarski statements is reality itself, \"obvious\" common-sense knowledge.

" } }, { "_id": "z2EDCcQuknWTpc5Fi", "title": "Dead Aid", "pageUrl": "https://www.lesswrong.com/posts/z2EDCcQuknWTpc5Fi/dead-aid", "postedAt": "2009-03-17T14:51:11.304Z", "baseScore": 2, "voteCount": 22, "commentCount": 18, "url": null, "contents": { "documentId": "z2EDCcQuknWTpc5Fi", "html": "

Followup to: So You Say You're an Altruist

\n

Today Dambisa Moyo's book \"Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa\" was released.

\n

From the book's website:

\n
\n

In the past fifty years, more than $1 trillion in development-related aid has been transferred from rich countries to Africa. Has this assistance improved the lives of Africans? No. In fact, across the continent, the recipients of this aid are not better off as a result of it, but worse—much worse.

\n

In Dead Aid, Dambisa Moyo describes the state of postwar development policy in Africa today and unflinchingly confronts one of the greatest myths of our time: that billions of dollars in aid sent from wealthy countries to developing African nations has helped to reduce poverty and increase growth.

\n

In fact, poverty levels continue to escalate and growth rates have steadily declined—and millions continue to suffer. Provocatively drawing a sharp contrast between African countries that have rejected the aid route and prospered and others that have become aid-dependent and seen poverty increase, Moyo illuminates the way in which overreliance on aid has trapped developing nations in a vicious circle of aid dependency, corruption, market distortion, and further poverty, leaving them with nothing but the “need” for more aid.

\n
\n

From the Global Investor Bookshop:

\n
\n

Dead Aid analyses the history of economic development over the last fifty years and shows how Aid crowds out financial and social capital and directly causes corruption; the countries that have caught up did so despite rather than because of Aid. There is, however, an alternative. Extreme poverty is not inevitable. Dambisa Moyo also shows how, with improved access to capital and markets and with the right policies, even the poorest nations could be allowed to prosper. If we really do want to help, we have to do more than just appease our consciences, hoping for the best, expecting the worst. We need first to understand the problem.

\n
" } }, { "_id": "w9kwayt5SWqBQe8Nx", "title": "Rational Me or We?", "pageUrl": "https://www.lesswrong.com/posts/w9kwayt5SWqBQe8Nx/rational-me-or-we", "postedAt": "2009-03-17T13:39:29.073Z", "baseScore": 164, "voteCount": 160, "commentCount": 156, "url": null, "contents": { "documentId": "w9kwayt5SWqBQe8Nx", "html": "

Martial arts can be a good training to ensure your personal security, if you assume the worst about your tools and environment.  If you expect to find yourself unarmed in a dark alley, or fighting hand to hand in a war, it makes sense.  But most people do a lot better at ensuring their personal security by coordinating to live in peaceful societies and neighborhoods; they pay someone else to learn martial arts.  Similarly, while \"survivalists\" plan and train to stay warm, dry, and fed given worst case assumptions about the world around them, most people achieve these goals by participating in a modern economy.

The martial arts metaphor for rationality training seems popular at this website, and most discussions here about how to believe the truth seem to assume an environmental worst case: how to figure out everything for yourself given fixed info and assuming the worst about other folks.  In this context, a good rationality test is a publicly-visible personal test, applied to your personal beliefs when you are isolated from others' assistance and info.  

I'm much more interested in how we can join together to believe truth, and it actually seems easier to design institutions which achieve this end than to design institutions to test individual isolated general tendencies to discern truth.  For example, with subsidized prediction markets, we can each specialize on the topics where we contribute best, relying on market consensus on all other topics.  We don't each need to train to identify and fix each possible kind of bias; each bias can instead have specialists who look for where that bias appears and then correct it. 

Perhaps martial-art-style rationality makes sense for isolated survivalist Einsteins forced by humanity's vast stunning cluelessness to single-handedly block the coming robot rampage.  But for those of us who respect the opinions of enough others to want to work with them to find truth, it makes more sense to design and field institutions which give each person better incentives to update a common consensus.

" } }, { "_id": "4GeE83592epCErQse", "title": "On Juvenile Fiction", "pageUrl": "https://www.lesswrong.com/posts/4GeE83592epCErQse/on-juvenile-fiction", "postedAt": "2009-03-17T08:53:06.300Z", "baseScore": 31, "voteCount": 27, "commentCount": 135, "url": null, "contents": { "documentId": "4GeE83592epCErQse", "html": "

Follow-up To: On the Care and Feeding of Young Rationalists

\n

Related on OB: Formative Youth

\n

Eliezer suspects he may have chosen an altruistic life because of Thundercats.

\n

Nominull thinks his path to truth-seeking might have been lit by Asimov's Robot stories.

\n

PhilGoetz suggests that Ender's Game has warped the psyches of many intelligent people.

\n

For good or ill, we seem to agree that fiction strongly influences the way we grow up, and the people we come to be.

\n

So for those of us with the tremendous task of bringing new sentience into the world, it seems sensible to spend some time thinking about what fictions our charges will be exposed to.

\n

\n

The natural counter-part to this question is, of course, are there any particular fictions, or types of fiction, to which we should avoid exposing our children?

\n

Again, this is a pattern we see more commonly in the religious community -- and the rest of us tend to look on and laugh at the prudery on display. Still, the general idea doesn't seem to be something we can reject out of hand. So far as we can tell, all (currently existing) minds are vulnerable to being hacked, young minds more than others. If we determine that a particular piece of fiction, or a particular kind of fiction, tends to reliably and destructively hack vulnerable minds, that seems a disproportionate consequence for pulling the wrong book off the shelf.

\n

So, what books, what films, what stories would you say affected your childhood for the better?  What stories do you wish you had encountered earlier? If there are any members of the Bardic Conspiracy present, what sorts of stories should we start telling? Finally, what stories (if any) should young minds not encounter until they have developed some additional robustness?

\n

ETA: If there are particular stories which you think the (adult) members of the community would benefit from, please feel free to share these as well.

\n

ETA2: My wildly optimistic best-case scenario for this post would be someone actually writing a rationalist children's story in the comments thread.

\n

ETA3: On second thought, this edit has become its own post.

" } }, { "_id": "KLjQedNYNEP4tW73W", "title": "The \"Spot the Fakes\" Test", "pageUrl": "https://www.lesswrong.com/posts/KLjQedNYNEP4tW73W/the-spot-the-fakes-test", "postedAt": "2009-03-17T00:52:54.095Z", "baseScore": 66, "voteCount": 63, "commentCount": 18, "url": null, "contents": { "documentId": "KLjQedNYNEP4tW73W", "html": "

Followup to: Are You a Solar Deity?

\n

James McAuley and Harold Stewart were mid-20th century Australian poets, and they were not happy. After having society ignore their poetry in favor of \"experimental\" styles they considered fashionable nonsense, they wanted to show everyone what they already knew: the Australian literary world was full of empty poseurs.

They began by selecting random phrases from random books. Then they linked them together into something sort of like poetry. Then they invented the most fashionable possible story: Ern Malley, a loner working a thankless job as an insurance salesman, writing sad poetry in his spare time and hiding it away until his death at an early age. Posing as Malley's sister, who had recently discovered the hidden collection, they sent the works to Angry Penguins, one of Australia's top experimental poetry magazines.

You wouldn't be reading this if the magazine hadn't rushed a special issue to print in honor of \"a poet in the same class as W.H. Auden or Dylan Thomas\".

The hoax was later revealed1, everyone involved ended up with egg on their faces, and modernism in Australia received a serious blow. But as I am reminded every time I look through a modern poetry anthology, one Ern Malley every fifty years just isn't enough. I daydream about an alternate dimension where people are genuinely interested in keeping literary criticism honest. In this universe, any would-be literary critic would have to distinguish between ten poems generally recognized as brilliant that he'd never seen before, and ten pieces of nonsense invented on the spot by drunk college students, in order to keep his critic's license.

Can we refine this test? And could it help Max Muller with his solar deity problem?

\n

\n

In the Malley hoax, McAuley and Steward suspected that a certain school of modernist poetry was without value. Because its supporters were too biased to admit this directly, they submitted a control poem they knew was without value, and found the modernists couldn't tell the difference. This suggests a powerful technique for determining when something otherwise untestable might be, as Neal Stephenson calls it, bulshytte.

Perhaps Max Muller thinks Hercules is a solar deity. He will write up an argument for this proposition, and submit it for consideration before all the great mythologists of the world. Even if these mythologists want to be unbiased, they will have a difficult time of it: Muller has a prestigious reputation, and they may not have any set conception of what does and doesn't qualify as a solar deity.

What if, instead of submitting one argument, Muller submitted ten? One sincere argument for why Hercules is a solar deity, and other bogus arguments for why Perseus, Bellerophon, Theseus, et cetera are solar myths (which he has nevertheless constructed to the best of his ability). Then he instructs the mythologists: \"Please independently determine which of these arguments is true, and which ones I have just come up with by writing 'X is a solar deity' as my bottom line and then inventing fake justifications for the fact.\" If every mythologist finds the Hercules argument most convincing, then that doesn't prove anything about Hercules but it at least shows Muller has a strong case. On the other hand, if they're all convinced by different arguments, or find none of the arguments convincing, or worst of all they all settle on Bellerophon, then Dr. Muller knows his beliefs about Hercules are quite probably wishful thinking.

This method hinges on Dr. Muller's personal honesty: a dishonest man could simply do a bad job arguing for Theseus and Bellerophon. What if we thought Dr. Muller was dishonest? We might find another mythologist whom independent observers rate as equally persuasive as Dr. Muller, and ask her to come up with the bogus arguments.

The rationalists I know sometimes take a dim view of the humanities as academic disciplines. Part of the problem is the seeming untestability of their conclusions through good, blinded experimental methods. I don't think most humanities professors are really looking all that hard for such methods. But for those who are, I consider this technique a little better than nothing2.

\n

Footnotes

\n

1: The Sokal Affair is another related hoax. Wikipedia's Sokal Hoax page has some other excellent examples of this sort of test.

\n

2: One more example where this method could prove useful. I remember debating a very smart Christian on the subject of Biblical atrocities. You know, stuff about death by stoning for minor crimes, or God ordering the Israelites to murder women and enslave children - that sort of thing. My friend, who was quite smart, was always able to come up with a superficially plausible excuse, and it was getting on my nerves. But having just read Your Strength as a Rationalist, I knew that being able to explain anything wasn't always a virtue. I proposed the following experiment: I'd give my friend ten atrocities commanded by random Bronze Age kings generally agreed by historical consensus to be jerks, and ten commanded by God in the Bible. His job would be to determine which ten, for whatever reason, really weren't all that bad. If he identified the ten Bible passages, that would be strong evidence that Biblical commandments only seemed atrocious when misunderstood. But if he couldn't tell the difference between God and Ashurbanipal, that would prove God wasn't really that great. To my disgust, my friend knew his Bible so well that I couldn't find any atrocities he wasn't already familiar with. So much for that technique. I offer it to anyone who debates theists with less comprehensive knowledge of Scripture.

" } }, { "_id": "8XjwhnM9Guvyaf9Cj", "title": "Comments for \"Rationality\"", "pageUrl": "https://www.lesswrong.com/posts/8XjwhnM9Guvyaf9Cj/comments-for-rationality", "postedAt": "2009-03-16T22:34:51.045Z", "baseScore": 2, "voteCount": 5, "commentCount": 42, "url": null, "contents": { "documentId": "8XjwhnM9Guvyaf9Cj", "html": "

I wrote an Admin page for \"What do we mean by 'Rationality'?\" since this has risen to the status of a FAQ.  Comments can go here.

" } }, { "_id": "RcZCwxFiZzE6X7nsv", "title": "What Do We Mean By \"Rationality\"?", "pageUrl": "https://www.lesswrong.com/posts/RcZCwxFiZzE6X7nsv/what-do-we-mean-by-rationality-1", "postedAt": "2009-03-16T22:33:55.765Z", "baseScore": 414, "voteCount": 418, "commentCount": 20, "url": null, "contents": { "documentId": "RcZCwxFiZzE6X7nsv", "html": "

I mean two things:

1. Epistemic rationality: systematically improving the accuracy of your beliefs.

2. Instrumental rationality: systematically achieving your values.

The first concept is simple enough. When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.

This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.1

Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

So rationality is about forming true beliefs and making decisions that help you win.

(Where truth doesn't mean “certainty,” since we can do plenty to increase the probability that our beliefs are accurate even though we're uncertain; and winning doesn't mean “winning at others' expense,” since our values include everything we care about, including other people.)

When people say “X is rational!” it’s usually just a more strident way of saying “I think X is true” or “I think X is good.” So why have an additional word for “rational” as well as “true” and “good”?

An analogous argument can be given against using “true.” There is no need to say “it is true that snow is white” when you could just say “snow is white.” What makes the idea of truth useful is that it allows us to talk about the general features of map-territory correspondence. “True models usually produce better experimental predictions than false models” is a useful generalization, and it’s not one you can make without using a concept like “true” or “accurate.”

Similarly, “Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function” is the kind of thought that depends on a concept of (instrumental) rationality, whereas “It’s rational to eat vegetables” can probably be replaced with “It’s useful to eat vegetables” or “It’s in your interest to eat vegetables.” We need a concept like “rational” in order to note general facts about those ways of thinking that systematically produce truth or value—and the systematic ways in which we fall short of those standards.

As we’ve observed in the previous essays, experimental psychologists sometimes uncover human reasoning that seems very strange. For example, someone rates the probability “Bill plays jazz” as less than the probability “Bill is an accountant who plays jazz.” This seems like an odd judgment, since any particular jazz-playing accountant is obviously a jazz player. But to what higher vantage point do we appeal in saying that the judgment is wrong?

Experimental psychologists use two gold standards: probability theory, and decision theory.

Probability theory is the set of laws underlying rational belief. The mathematics of probability applies equally to “figuring out where your bookcase is” and “estimating how many hairs were on Julius Caesar’s head,” even though our evidence for the claim “Julius Caesar was bald” is likely to be more complicated and indirect than our evidence for the claim “there’s a bookcase in my room.” It’s all the same problem of how to process the evidence and observations to update one’s beliefs. Similarly, decision theory is the set of laws underlying rational action, and is equally applicable regardless of what one’s goals and available options are.

Let “P(such-and-such)” stand for “the probability that such-and-such happens,” and “P(A,B)” for “the probability that both A and B happen.” Since it is a universal law of probability theory that P(A) ≥ P(A,B), the judgment that P(Bill plays jazz) is less than P(Bill plays jazz, Bill is an accountant) is labeled incorrect.
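
A quick way to see why this law holds: every possible world in which A and B both happen is also a world in which A happens, so the conjunction can never be the more probable event. The sketch below is only an illustration; the joint distribution for “Bill” is invented, but any valid joint distribution would give the same inequality.

```python
# Illustrative check of the conjunction rule P(A) >= P(A, B).
# The numbers for the "Bill" example are made up; the inequality
# holds for every valid joint distribution.

# Joint probabilities over (plays_jazz, is_accountant), summing to 1.
joint = {
    (True, True): 0.03,   # jazz-playing accountant
    (True, False): 0.07,  # jazz player, not an accountant
    (False, True): 0.40,  # accountant, no jazz
    (False, False): 0.50, # neither
}

p_jazz = sum(p for (jazz, _), p in joint.items() if jazz)
p_jazz_and_accountant = joint[(True, True)]

print(f"P(jazz)             = {p_jazz:.2f}")
print(f"P(jazz, accountant) = {p_jazz_and_accountant:.2f}")
assert p_jazz >= p_jazz_and_accountant  # true no matter how the mass is assigned
```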

To keep it technical, you would say that this probability judgment is non-Bayesian. Beliefs that conform to a coherent probability distribution, and decisions that maximize the probabilistic expectation of a coherent utility function, are called “Bayesian.”
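
For a concrete sense of what “Bayesian” means here, the toy sketch below updates a belief by Bayes’ rule and then picks the action with the highest expected utility under the updated belief. All of the numbers (the prior, the likelihoods, the utilities) are arbitrary placeholders chosen only for illustration.

```python
# A minimal sketch of Bayesian belief updating followed by
# expected-utility maximization. All numbers are arbitrary.

prior_rain = 0.3               # P(rain)
p_clouds_given_rain = 0.9      # P(clouds | rain)
p_clouds_given_dry = 0.2       # P(clouds | no rain)

# Bayes' rule: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds)
p_clouds = p_clouds_given_rain * prior_rain + p_clouds_given_dry * (1 - prior_rain)
posterior_rain = p_clouds_given_rain * prior_rain / p_clouds

# Utility of each (action, weather) outcome.
utility = {
    ("umbrella", "rain"): 0,      ("umbrella", "dry"): -1,
    ("no umbrella", "rain"): -10, ("no umbrella", "dry"): 1,
}

def expected_utility(action: str) -> float:
    return (posterior_rain * utility[(action, "rain")]
            + (1 - posterior_rain) * utility[(action, "dry")])

best = max(["umbrella", "no umbrella"], key=expected_utility)
print(f"P(rain | clouds) = {posterior_rain:.2f}, best action: {best}")
```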

I should emphasize that this isn't the notion of rationality that's common in popular culture. People may use the same string of sounds, “ra-tio-nal,” to refer to “acting like Mr. Spock of Star Trek” and “acting like a Bayesian”; but this doesn't mean that acting Spock-like helps one hair with epistemic or instrumental rationality.2

All of this does not quite exhaust the problem of what is meant in practice by “rationality,” for two major reasons:

First, the Bayesian formalisms in their full form are computationally intractable on most real-world problems. No one can actually calculate and obey the math, any more than you can predict the stock market by calculating the movements of quarks.

This is why there is a whole site called “Less Wrong,” rather than a single page that simply states the formal axioms and calls it a day. There’s a whole further art to finding the truth and accomplishing value from inside a human mind: we have to learn our own flaws, overcome our biases, prevent ourselves from self-deceiving, get ourselves into good emotional shape to confront the truth and do what needs doing, et cetera, et cetera.

Second, sometimes the meaning of the math itself is called into question. The exact rules of probability theory are called into question by, e.g., anthropic problems in which the number of observers is uncertain. The exact rules of decision theory are called into question by, e.g., Newcomblike problems in which other agents may predict your decision before it happens.3

In cases where our best formalizations still come up short, we can return to simpler ideas like “truth” and “winning.” If you are a scientist just beginning to investigate fire, it might be a lot wiser to point to a campfire and say “Fire is that orangey-bright hot stuff over there,” rather than saying “I define fire as an alchemical transmutation of substances which releases phlogiston.” You certainly shouldn’t ignore something just because you can’t define it. I can't quote the equations of General Relativity from memory, but nonetheless if I walk off a cliff, I'll fall. And we can say the same of cognitive biases and other obstacles to truth—they won't hit any less hard if it turns out we can't define compactly what “irrationality” is.

In cases like these, it is futile to try to settle the problem by coming up with some new definition of the word “rational” and saying, “Therefore my preferred answer, by definition, is what is meant by the word ‘rational.’ ” This simply raises the question of why anyone should pay attention to your definition. I’m not interested in probability theory because it is the holy word handed down from Laplace. I’m interested in Bayesian-style belief-updating (with Occam priors) because I expect that this style of thinking gets us systematically closer to, you know, accuracy, the map that reflects the territory.

And then there are questions of how to think that seem not quite answered by either probability theory or decision theory—like the question of how to feel about the truth once you have it. Here, again, trying to define “rationality” a particular way doesn’t support an answer, but merely presumes one.

I am not here to argue the meaning of a word, not even if that word is “rationality.” The point of attaching sequences of letters to particular concepts is to let two people communicate—to help transport thoughts from one mind to another. You cannot change reality, or prove the thought, by manipulating which meanings go with which words.

So if you understand what concept I am generally getting at with this word “rationality,” and with the sub-terms “epistemic rationality” and “instrumental rationality,” we have communicated: we have accomplished everything there is to accomplish by talking about how to define “rationality.” What’s left to discuss is not what meaning to attach to the syllables “ra-tio-na-li-ty”; what’s left to discuss is what is a good way to think.

If you say, “It’s (epistemically) rational for me to believe X, but the truth is Y,” then you are probably using the word “rational” to mean something other than what I have in mind. (E.g., “rationality” should be consistent under reflection—“rationally” looking at the evidence, and “rationally” considering how your mind processes the evidence, shouldn’t lead to two different conclusions.)

Similarly, if you find yourself saying, “The (instrumentally) rational thing for me to do is X, but the right thing for me to do is Y,” then you are almost certainly using some other meaning for the word “rational” or the word “right.” I use the term “rationality” normatively, to pick out desirable patterns of thought.

In this case—or in any other case where people disagree about word meanings—you should substitute more specific language in place of “rational”: “The self-benefiting thing to do is to run away, but I hope I would at least try to drag the child off the railroad tracks,” or “Causal decision theory as usually formulated says you should two-box on Newcomb’s Problem, but I’d rather have a million dollars.”

In fact, I recommend reading back through this essay, replacing every instance of “rational” with “foozal,” and seeing if that changes the connotations of what I’m saying any. If so, I say: strive not for rationality, but for foozality.

The word “rational” has potential pitfalls, but there are plenty of non-borderline cases where “rational” works fine to communicate what I’m getting at. Likewise “irrational.” In these cases I’m not afraid to use it.

Yet one should be careful not to overuse that word. One receives no points merely for pronouncing it loudly. If you speak overmuch of the Way, you will not attain it.


1 For a longer discussion of truth, see “The Simple Truth” at the very end of this volume.

2 The idea that rationality is about strictly privileging verbal reasoning over feelings is a case in point. Bayesian rationality applies to urges, hunches, perceptions, and wordless intuitions, not just to assertions.

I gave the example of opening your eyes, looking around you, and building a mental model of a room containing a bookcase against the wall. The modern idea of rationality is general enough to include your eyes and your brain’s visual areas as things-that-map, and to include instincts and emotions in the belief-and-goal calculus.

3 For an informal statement of Newcomb’s Problem, see Jim Holt, “Thinking Inside the Boxes,” Slate, 2002, http://www.slate.com/articles/arts/egghead/2002/02/thinkinginside_the_boxes.single.html.

" } }, { "_id": "BnoFcEkKE9syNCqWw", "title": "Science vs. art", "pageUrl": "https://www.lesswrong.com/posts/BnoFcEkKE9syNCqWw/science-vs-art", "postedAt": "2009-03-16T15:48:21.121Z", "baseScore": 7, "voteCount": 22, "commentCount": 36, "url": null, "contents": { "documentId": "BnoFcEkKE9syNCqWw", "html": "

In the comments on Soulless Morality, a few people mentioned contributing to humanity's knowledge as an ultimate value.  I used to place a high value on this myself.


Now, though, I doubt whether making scientific advances would give me satisfaction on my deathbed.  All you can do in science is discover something before someone else discovers it.  (It's a lot like the race to the north pole, which struck me as stupid when I was a child; yet I never transferred that judgement to scientific races.)  The short-term effects of your discovering something sooner might be good, and might not.  The long-term effects are likely to be to bring about apocalypse a little sooner.


Art is different.  There's not much downside to art.  There are some exceptions - romance novels perpetuate destructive views of love; 20th-century developments in orchestral music killed orchestral music; and Ender's Game has warped the psyches of many intelligent people.  But artists seldom worry that their art might destroy the world.  And if you write a great song, you've really contributed, because no one else would have written that song.


EDIT: What is above is instrumental talk.  I find that, as I get older, science fails to satisfy me as much.  I don't assign it the high intrinsic value I used to.  But it's hard for me to tell whether this is really an intrinsic valuation, or the result of diminishing faith in its instrumental value.


I think that people who value rationality tend to place an unusually high value on knowledge.  Rationality requires knowledge; but that gives knowledge only instrumental value.  It doesn't (can't, by definition) justify giving knowledge intrinsic value.


What do the rest of you think?  Is there a strong correlation between rationalism, giving knowledge high intrinsic value, and giving art low intrinsic value?  If so, why?  And which would you rather be - a great scientist, or a great artist of some type?  (Pretend that great scientists and great artists are equally well-paid and sexually attractive.)


(I originally wrote this as over-valuing knowledge and under-valuing art, but Roko pointed out that that's incoherent.)


Under a theory that intrinsic and instrumental values are separate things, there's no reason why giving science a high instrumental value should correlate with giving it a high intrinsic value, or vice-versa.  Yet the people here seem to be doing one of those things.


My theory is that we can't keep intrinsic and instrumental values separate from each other.  We attach positive valences to both, and then operate on the positive valences.  Or, we can't distinguish our intrinsic values from our instrumental values by introspection.  (You may have noticed that I started using examples that refer to both intrinsic and instrumental values.  I don't think I can separate them, except retrospectively; and with about as much accuracy as a courtroom witness asked to testify about an event that took place 20 years ago.)


It's tempting to mention friends and family in here too, as another competing fundamental value.  But that would demand solving the relationship between personal values that you yourself take, and the valuations you would want a society or a singleton AI to make.  That's too much to take on here.  I want to talk just about intrinsic value given to science vs. art.


Oh, and saying science is an art is a dodge.  You then have to say whether you value the knowledge, or the artistic endeavor.  Also, ignore the possibility that your scientific work can make a safe Singularity.  That would be science as instrumental value.  I'm asking about science vs. art as intrinsic values.


EDIT:  An obvious explanation:  I was assuming that people here want to be rational as an instrumental value, and that we should find the distribution of intrinsic values to be the same as in the general populace.  But of course some people are drawn here because rationality is an intrinsic value to them, and this heavily biases the distribution of intrinsic values found here.

" } }, { "_id": "9cEKk7naZaCggiXsg", "title": "Taboo \"rationality,\" please.", "pageUrl": "https://www.lesswrong.com/posts/9cEKk7naZaCggiXsg/taboo-rationality-please", "postedAt": "2009-03-15T22:44:13.162Z", "baseScore": 28, "voteCount": 41, "commentCount": 54, "url": null, "contents": { "documentId": "9cEKk7naZaCggiXsg", "html": "

Related on OB: Taboo Your Words


I realize this seems odd on a blog about rationality, but I'd like to strongly suggest that commenters make an effort to avoid using the words \"rational,\" \"rationality,\" or \"rationalist\" when other phrases will do.  I think we've been stretching the words to cover too much meaning, and it's starting to show.

Here are some suggested substitutions to start you off.

Rationality:


Rationalist:


Are there any others?

" } }, { "_id": "EsGuKGNC9gerguLv9", "title": "In What Ways Have You Become Stronger?", "pageUrl": "https://www.lesswrong.com/posts/EsGuKGNC9gerguLv9/in-what-ways-have-you-become-stronger", "postedAt": "2009-03-15T20:44:47.697Z", "baseScore": 32, "voteCount": 30, "commentCount": 40, "url": null, "contents": { "documentId": "EsGuKGNC9gerguLv9", "html": "

Related to: Tsuyoku Naritai! (I Want To Become Stronger), Test Your Rationality, 3 Levels of Rationality Verification.


Robin and Eliezer ask about the ways to test rationality skills, for each of the many important purposes such testing might have. Depending on what's possible, you may want to test yourself to learn how well you are doing at your studies, at least to some extent check the sanity of the teaching that you follow, estimate the effectiveness of specific techniques, or even force a rationality test on a person whose position depends on the outcome.


Verification procedures have various weaknesses, making them admissible for one purpose and not for another. But however rigorous the verification methods are, one must first find the specific properties to test for. These properties or skills may come naturally with the art, or they may be cultivated specifically for the testing, in which case they need to be good signals, hard to demonstrate without also becoming more rational.


So, my question is this - what have you become reliably stronger at, after you walked the path of an aspiring rationalist for considerable time? Maybe you have noticeably improved at something, or maybe you haven't learned a certain skill yet, but you are reasonably sure that because of your study of rationality you'll be able to do that considerably better than other people.


This is a significantly different question from the ones Eliezer and Robin ask. Some of the skills you obtained may be virtually unverifiable, some of them may be easy to fake, some of them may be easy to learn without becoming sufficiently rational, and some of them may be standard in other disciplines. But I think it's useful to step back, and write a list of skills before selecting ones more suitable for the testing.

" } }, { "_id": "T5McDuWDeCvDZKeSj", "title": "Are You a Solar Deity?", "pageUrl": "https://www.lesswrong.com/posts/T5McDuWDeCvDZKeSj/are-you-a-solar-deity", "postedAt": "2009-03-15T19:30:09.265Z", "baseScore": 53, "voteCount": 54, "commentCount": 24, "url": null, "contents": { "documentId": "T5McDuWDeCvDZKeSj", "html": "

Max Muller was one of the greatest religious scholars of the 19th century. Born in Germany, he became fascinated with Eastern religion, and moved to England to be closer to the center of Indian scholarship in Europe. There he mastered English and Sanskrit alike to come out with the first English translation of the Rig Veda, the holiest book of Hinduism.

One of Muller's most controversial projects was his attempt to interpret all pagan mythologies as linked to one another, deriving from a common ur-mythology and ultimately from the celestial cycle. His tools were exhaustive knowledge of the myths of all European cultures combined with a belief in the interpretive power of linguistics.

What is the significance of Orpheus' descent into the underworld to reclaim his wife's soul? The sun sets beneath the Earth each evening, and returns with renewed brightness. Why does Apollo love Daphne? Daphne is cognate with Sanskrit Dahana, the maiden of the dawn. The death of Hercules? It occurs after he's completed twelve labors (cf. the twelve signs of the zodiac) and while he's travelling west (like the sun); he is killed by Deianeira (compare Sanskrit dasya-nari, a demon of darkness) and his body is cremated (fire = the sun). His followers extended the method to Jesus - who was clearly based on a lunar deity, since he spent three days dead and then returned to life, just as the new moon goes dark for three days and then reappears.

Muller's work was massively influential during his time, and many 19th century mythographers tried to critique his paradigm and poke holes in it. Some accused him of trying to destroy the mystery of religion, and others accused him of shoddy scholarship.

R.F. Littledale, an Anglican apologist, took a completely different route. He claimed that there was, in fact, no such person as Professor Max Muller, holder of the Taylorian Chair in Modern European Languages. All these stories about \"Max Muller\" were nothing but a thinly disguised solar myth.


Littledale begins his argument by noting Muller's heritage. He was supposedly born in Germany, only to travel to England when he came of age. This looks suspiciously like the classic Journey of the Sun, which is born in the east but travels to the west. Muller's origin in Germany is a clear reference to Germanus Apollo, one of the old appellations of the Greek sun god.

His Christian name must be related to Latin \"maximus\" or Sanskrit \"maha\", meaning great, a suitable description of the King of Gods, and his surname is cognate with Mjolnir, the mighty hammer of the sky god Thor. His claim to fame is bringing the ancient wisdom of the East to the people of the West - that is, illuminating them with eastern light.

Muller teaches at Oxford for the same reason that Genesis describes the sky as \"the waters above\" and the Egyptians gave Ra a solar barge: ancient people interpreted the sky as a river, and the sun as crossing that river upon his chariot (perhaps an ox-drawn chariot, fording the river?). His chair at Oxford is the throne of the sky, his status as Taylorian Professor because \"he cuts away with his glittering shears the ragged edges of cloud; he allows the...cuttings from his workshop, to descend in fertilizing showers upon the earth.\"

I could go on; instead I recommend you read the original essay. The take-home lesson is that any technique powerful enough to prove that Hercules is a solar myth is also powerful enough to prove that anyone is a solar myth. Muller lacked the strength of a rationalist: the ability to be more confused by fiction than by reality. This makes the Hercules theory useless, but that is not immediately apparent on a first or even a second reading of Muller's work. When reading Muller's work, the primary impression one gets is \"Wow, this man has gathered a lot of supporting evidence.\"

This is a problem encountered in many fields of scholarship, especially \"comparative\" anything. In comparative linguistics, for example, it's usually possible to make a case for two languages being related that is good enough to convince a layman, no matter which two languages you pick or how distant they may be. In comparative religion, we get cases like this blog's recent discussion over the possible derivation of Esther and Mordechai defeating Haman from Ishtar and Marduk defeating Humbaba. The less said about comparative literature, the better, although I can't help but quote humor writer Dave Barry:


Suppose you are studying Moby-Dick. Anybody with any common sense would say that Moby-Dick is a big white whale, since the characters in the book refer to it as a big white whale roughly eleven thousand times. So in your paper, you say Moby-Dick is actually the Republic of Ireland. Your professor, who is sick to death of reading papers and never liked Moby-Dick anyway, will think you are enormously creative. If you can regularly come up with lunatic interpretations of simple stories, you should major in English.


The worst (but most fun to read!) are in pseudoscience, where plausible-sounding comparisons can prove almost anything. Did you know the Aztecs believed in a lost homeland called Aztlan, the Indonesians believed in a lost island called Atala, and the Greeks believed in a lost continent called Atlantis? Likewise, did you know that Nostradamus predicted a great battle involving Germany and \"Hister\", which sounds almost like \"Hitler\"?

Yet it would be a mistake to reject all such comparisons. In fact, I have thus far been enormously unfair to Professor Muller, whose work established several correspondences still viewed as valid today. Virtually all modern mythologists accept that the Hindu Varuna is the Greek Uranus, and that the Greek sky god Zeus equals the Hindu sky god Dyaus Pita and the Roman Jupiter (compare to Latin deus pater, meaning God the Father). Likewise, comparative linguists are quite certain that most modern European languages and Sanskrit derive from a common Indo-European root, and in my opinion even the Nostratic project - an ambitious attempt to link Semitic, Indo-European, Uralic, and a bunch of other languages - is at least worth consideration.

We need a test to distinguish between true and false correspondences. But the standard method, making and testing predictions, is useless here. A good mythologist already knows the stories of Varuna and Uranus. The chance of discovering a new fact that either confirms or overturns the Varuna-Uranus correspondence is not even worth considering.

Mark Rosenfelder has an excellent article on chance resemblances between languages which offers a semi-formal model for spotting dubious comparisons. But such precision may not be possible when comparing two deities.

I have what might be a general strategy for approaching this sort of problem, which I will present tomorrow. But how would you go about it?

" } }, { "_id": "RAftfkp3NqDDR2o79", "title": "The Tragedy of the Anticommons", "pageUrl": "https://www.lesswrong.com/posts/RAftfkp3NqDDR2o79/the-tragedy-of-the-anticommons", "postedAt": "2009-03-15T17:32:43.186Z", "baseScore": 43, "voteCount": 48, "commentCount": 47, "url": null, "contents": { "documentId": "RAftfkp3NqDDR2o79", "html": "

I assume that most of you are familiar with the concept of the Tragedy of the Commons. If you aren't, well, that was a Wikipedia link right there.

However, fewer are familiar with the Tragedy of the Anticommons, a term coined by Michael Heller. Where the Tragedy of the Commons is created by too little ownership, the Tragedy of the Anticommons is created by too much.


For instance, the classical solution to the TotC is to divide up the commons between the herders using it, giving each of them ownership for a particular part. This gives each owner an incentive to enforce its sustainability. But what would happen if the commons were divided up to thousands of miniature pieces, say one square inch each? In order to herd your cattle, you'd have to acquire permission from hundreds of different owners. Not only would this be a massive undertaking by itself, any one of them could say no, potentially ruining your entire attempt.

This isn't just a theoretical issue. In his book, Heller offers numerous examples, such as this one:


...gridlock prevents a promising treatment for Alzheimer's disease from being tested. The head of research at a \"Big Pharma\" drugmaker told me that his lab scientists developed the potential cure (call it Compound X) years ago, but biotech competitors blocked its development. ... the company developing Compound X needed to pay every owner of a patent relevant to its testing. Ignoring even one would invite an expensive and crippling lawsuit. Each patent holder viewed its own discovery as the crucial one and demanded a corresponding fee, until the demands exceeded the drug's expected profits. None of the patent owners would yield first. ...
This story does not have a happy ending. No valiant patent bundler came along. Because the head of research could not figure out how to pay off all the patent owners and still have a good chance of earning a profit, he shifted his priorities to less ambitious options. Funding went to spin-offs of existing drugs for which his firm already controlled the underlying patents. His lab reluctantly shelved Compound X even though he was certain the science was solid, the market huge, and the potential for easing human suffering beyond measure.


Patents aren't the only field affected by this tragedy. America's airports are unnecessarily congested because land owners block all attempts to build new airports. 90% of the US broadcast spectrum goes unused, because in order to build national coverage, you'd need to apply for permission in 734 separate areas. Re-releasing an important documentary required a $600,000 donation and negotiations that stretched over 20 years, because there were so many different copyright owners for all the pictures and music used in the documentary.

So, what does all of this have to do with rationality, and why am I bringing it up here?


The interesting thing about the tragedy of the anticommons is that most people will be entirely blind to it. Patent owners block drug development, and the only people who'll know are the owners in question, as well as the people who tried to develop the drug. A documentary doesn't get re-released? A few people might wonder why the documentary they saw 20 years ago isn't available on DVD anywhere, but aside from that, nobody'll know. If you're not even aware that a problem exists, then you can't fix it.

In general, something not happening is much harder to spot than something happening. Heller remarks that even the term \"underuse\" hasn't existed for very long:


According to the OED, underuse is a recent coinage. In its first recorded appearance, in 1960, the word was hedged about with an anxious hyphen and scare quotes: \"There might, in some places, be considerable 'under-use' of [parking] meters.\" By 1970, copy editors felt sufficiently comfortable to cast aside the quotes: \"A country can never recover by persistently under-using its resources, as Britain has done for too long.\" The hyphen began to disappear around 1975.


This gives an interesting case study for rationality. If 'underuse' didn't become a word until 1960, that implies that people have been blind to its damages until then. Heller speculates:


In the OED, this new word means \"to use something below the optimum\" and \"insufficient use\". The reference to an \"optimum\" suggests to me how underuse entered English. It was, I think, an unintended consequence of the increasing role of cost-benefit analysis in public policy debates. ...In the old world of overuse versus ordinary use, our choices were binary and clear-cut: injury or health, waste or efficiency, bad or good. In the new world, we are looking for something more subtle - an \"optimum\" along a continuum. Looking for an optimal level of use has a surprising twist: it requires a concept of underuse and surreptitiously changes the long-standing meaning of overuse. Like Goldilocks, we are looking for something not too hot, not too cold, not too much or too little - just right. ...
How can we know whether we are overusing, underusing, or optimally using resources? It's not easy, and not just a matter of economic analysis. Consider, for example, the public health push to increase the use of \"statins\", drugs such as Lipitor that help lower cholesterol. Underuse of statins may mean too many heart attacks and strokes. But no one suggests that everyone should take statins: putting the drug in the water supply would be overuse. So what is the optimal level of use? ... We estimate the cost of the drugs, we assign a dollar value to death and disease averted, and quantify the negative effects of increased use.
Driving faster gets you home sooner but increases your chance of crashing. Is the trade-off worthwhile? To answer that question, you need to know how to value life. If life were beyond value, we would require perfect auto safety, cars would be infinitely expensive, and car use would drop to nothing. But if there is too little safety regulation, too many will die. With auto safety, society faces another Goldilocks' quest: we strive to ensure that, all things considered, cars kill the optimal amount of people. It sounds callous, but that's what an optimum is all about.
... The possibility of underuse reorients policymaking from relatively simple either-or choices to the more contentious trade-offs that make up modern regulation of risk.


There are several lessons one could draw here.

One is that we're biased to be blind to underuse, whereas overuse is much easier to spot.

Another is the more general case: we should be careful to look for hidden effects in any policies we institute or actions we take. Even if there seems to be no damage, there may actually be. How can we detect such hidden effects? Cost-benefit calculations looking for the optimal level of use are one way, though that's time-consuming and will only help detect under/overuse.
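
To make the idea of an optimum along a continuum concrete, here is a toy calculation in the spirit of Heller's driving example. The cost functions and numbers are invented purely for illustration; the point is only that once both the benefit of more use and the harm it causes are priced, the optimum falls somewhere in the middle rather than at zero use or maximum use.

```python
# Toy cost-benefit sketch: find the level of use that minimizes total cost.
# The functional forms and numbers are made up purely for illustration.

def total_cost(speed_kmh: float) -> float:
    time_cost = 3000.0 / speed_kmh       # cost of time spent driving
    crash_cost = 0.002 * speed_kmh ** 2  # expected cost of accidents
    return time_cost + crash_cost

# Scan a range of speeds and pick the one with the lowest total cost.
candidates = range(30, 151, 5)
optimum = min(candidates, key=total_cost)
print(f"optimal speed is about {optimum} km/h (total cost {total_cost(optimum):.1f})")
```

The toy numbers only illustrate the shape of the problem: both extremes lose, and the interesting question is where in between the optimum sits.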

Not to mention that it's hard, requiring us to set a value on lives and to answer other hard moral questions. That brings us to the second lesson. Heller's analysis implies that for a long time, people were actually blind to the possibility of underuse because they were reluctant to really tackle the hard problems. Refuse to assign a value to life? Then you can't engage in cost-benefit analysis... and as a result, you'll stay blind to the whole concept of underuse. If you'd looked at things objectively, you'd have seen that we need to give lives a value in order to make decisions in society. By refusing to do so, you'll stay blind to a huge class of problems ever after, simply because you didn't want to objectively ponder hard questions.


That reinforces a message Eliezer's been talking about. I doubt anybody could have foreseen that by refusing to put a value on life, they'd fail to discover the concept of underuse, and thereby have difficulty noticing the risk of patents blocking the development of new drugs. If you let your beliefs get in the way of your rationality in even one thing, it may end up hurting you in entirely unexpected ways. Don't do it.


(There are also some other lessons, which I realized after typing out this post... can you come up with them yourself?)

" } }, { "_id": "5K7CMa6dEL7TN7sae", "title": "3 Levels of Rationality Verification", "pageUrl": "https://www.lesswrong.com/posts/5K7CMa6dEL7TN7sae/3-levels-of-rationality-verification", "postedAt": "2009-03-15T17:19:14.736Z", "baseScore": 87, "voteCount": 74, "commentCount": 247, "url": null, "contents": { "documentId": "5K7CMa6dEL7TN7sae", "html": "

I strongly suspect that there is a possible art of rationality (attaining the map that reflects the territory, choosing so as to direct reality into regions high in your preference ordering) which goes beyond the skills that are standard, and beyond what any single practitioner singly knows.  I have a sense that more is possible.


The degree to which a group of people can do anything useful about this, will depend overwhelmingly on what methods we can devise to verify our many amazing good ideas.


I suggest stratifying verification methods into 3 levels of usefulness:

- Reputational
- Experimental
- Organizational

If your martial arts master occasionally fights realistic duels (ideally, real duels) against the masters of other schools, and wins or at least doesn't lose too often, then you know that the master's reputation is grounded in reality; you know that your master is not a complete poseur.  The same would go if your school regularly competed against other schools.  You'd be keepin' it real.


Some martial arts fail to compete realistically enough, and their students go down in seconds against real streetfighters.  Other martial arts schools fail to compete at all—except based on charisma and good stories—and their masters decide they have chi powers.  In this latter class we can also place the splintered schools of psychoanalysis.


So even just the basic step of trying to ground reputations in some realistic trial other than charisma and good stories, has tremendous positive effects on a whole field of endeavor.


But that doesn't yet get you a science.  A science requires that you be able to test 100 applications of method A against 100 applications of method B and run statistics on the results.  Experiments have to be replicable and replicated.  This requires standard measurements that can be run on students who've been taught using randomly-assigned alternative methods, not just realistic duels fought between masters using all of their accumulated techniques and strength.


The field of happiness studies was created, more or less, by realizing that asking people \"On a scale of 1 to 10, how good do you feel right now?\" was a measure that statistically validated well against other ideas for measuring happiness.  And this, despite all skepticism, looks like it's actually a pretty useful measure of some things, if you ask 100 people and average the results.


But suppose you wanted to put happier people in positions of power—pay happy people to train other people to be happier, or employ the happiest at a hedge fund?  Then you're going to need some test that's harder to game than just asking someone \"How happy are you?\"


This question of verification methods good enough to build organizations, is a huge problem at all levels of modern human society.  If you're going to use the SAT to control admissions to elite colleges, then can the SAT be defeated by studying just for the SAT in a way that ends up not correlating to other scholastic potential?  If you give colleges the power to grant degrees, then do they have an incentive not to fail people?  (I consider it drop-dead obvious that the task of verifying acquired skills and hence the power to grant degrees should be separated from the institutions that do the teaching, but let's not go into that.)  If a hedge fund posts 20% returns, are they really that much better than the indices, or are they selling puts that will blow up in a down market?


If you have a verification method that can be gamed, the whole field adapts to game it, and loses its purpose.  Colleges turn into tests of whether you can endure the classes.  High schools do nothing but teach to statewide tests.  Hedge funds sell puts to boost their returns.


On the other hand—we still manage to teach engineers, even though our organizational verification methods aren't perfect.  So what perfect or imperfect methods could you use for verifying rationality skills, that would be at least a little resistant to gaming?


(Added:  Measurements with high noise can still be used experimentally, if you randomly assign enough subjects to have an expectation of washing out the variance.  But for the organizational purpose of verifying particular individuals, you need low-noise measurements.)
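
As an illustrative sketch of that point (with entirely made-up numbers): randomly assign subjects to two training methods whose true difference is much smaller than the noise in any single test score, and the group means still separate once the groups are large enough.

```python
import random

# Illustrative simulation (all numbers invented): a noisy test can still
# detect a small true difference between two randomly assigned groups,
# provided the groups are large enough.

random.seed(0)

def measure(true_skill: float, noise: float = 10.0) -> float:
    """One noisy test score for one subject."""
    return true_skill + random.gauss(0.0, noise)

def group_mean(true_skill: float, n: int) -> float:
    """Average score of n randomly assigned subjects with the same true skill."""
    return sum(measure(true_skill) for _ in range(n)) / n

for n in (10, 100, 10_000):
    mean_a = group_mean(true_skill=50.0, n=n)  # taught with method A
    mean_b = group_mean(true_skill=52.0, n=n)  # taught with method B (slightly better)
    print(f"n={n:>6}: method A ~ {mean_a:.1f}, method B ~ {mean_b:.1f}")
```

With ten subjects per group the ordering can easily come out backwards; with ten thousand it almost never does, which is why a noisy measurement can settle experimental questions while remaining useless for certifying any particular individual.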


So I now put to you the question—how do you verify rationality skills?  At any of the three levels?  Brainstorm, I beg you; even a difficult and expensive measurement can become a gold standard to verify other metrics.  Feel free to email me at sentience@pobox.com to suggest any measurements that are better off not being publicly known (though this is of course a major disadvantage of that method).  Stupid ideas can suggest good ideas, so if you can't come up with a good idea, come up with a stupid one.


Reputational, experimental, organizational:


Finding good solutions at each level determines what a whole field of study can be useful for—how much it can hope to accomplish.  This is one of the Big Important Foundational Questions, so—


Think!


(PS:  And ponder on your own before you look at the other comments; we need breadth of coverage here.)

" } }, { "_id": "Z8vv4F2vQbXsQiXpW", "title": "Storm by Tim Minchin", "pageUrl": "https://www.lesswrong.com/posts/Z8vv4F2vQbXsQiXpW/storm-by-tim-minchin", "postedAt": "2009-03-15T14:48:29.060Z", "baseScore": 17, "voteCount": 23, "commentCount": 7, "url": null, "contents": { "documentId": "Z8vv4F2vQbXsQiXpW", "html": "

I'm sure many of you have already seen this performance. Tim Minchin's beat poem \"Storm\" is about the sceptical, secular understanding of the world, stupidity of quackery and supernatural, weight of dishonesty, and joy in the merely real. Contains strong language.

" } }, { "_id": "W3XpQDTEkaPQAuvHz", "title": "Really Extreme Altruism", "pageUrl": "https://www.lesswrong.com/posts/W3XpQDTEkaPQAuvHz/really-extreme-altruism", "postedAt": "2009-03-15T06:51:34.773Z", "baseScore": 17, "voteCount": 42, "commentCount": 98, "url": null, "contents": { "documentId": "W3XpQDTEkaPQAuvHz", "html": "

In secret, an unemployed man with poor job prospects uses his savings to buy a large term life insurance policy, and designates a charity as the beneficiary. Two years after the policy is purchased, it will pay out in the event of suicide. The man waits the required two years, and then kills himself, much to the dismay of his surviving relatives. The charity receives the money and saves the lives of many people who would otherwise have died.


Are the actions of this man admirable or shameful?

" } }, { "_id": "JnKCaGcgZL4Rsep8m", "title": "Schools Proliferating Without Evidence", "pageUrl": "https://www.lesswrong.com/posts/JnKCaGcgZL4Rsep8m/schools-proliferating-without-evidence", "postedAt": "2009-03-15T06:43:28.509Z", "baseScore": 68, "voteCount": 73, "commentCount": 58, "url": null, "contents": { "documentId": "JnKCaGcgZL4Rsep8m", "html": "

Robyn Dawes, author of one of the original papers from Judgment Under Uncertainty and of the book Rational Choice in an Uncertain World—one of the few who tries really hard to import the results to real life—is also the author of House of Cards: Psychology and Psychotherapy Built on Myth.


From House of Cards, chapter 1:


The ability of these professionals has been subjected to empirical scrutiny—for example, their effectiveness as therapists (Chapter 2), their insight about people (Chapter 3), and the relationship between how well they function and the amount of experience they have had in their field (Chapter 4).  Virtually all the research—and this book will reference more than three hundred empirical investigations and summaries of investigations—has found that these professionals' claims to superior intuitive insight, understanding, and skill as therapists are simply invalid...


Remember Rorschach ink-blot tests?  It's such an appealing argument: the patient looks at the ink-blot and says what he sees, the psychotherapist interprets their psychological state based on this.  There've been hundreds of experiments looking for some evidence that it actually works.  Since you're reading this, you can guess the answer is simply \"No.\"  Yet the Rorschach is still in use.  It's just such a good story that psychotherapists just can't bring themselves to believe the vast mounds of experimental evidence saying it doesn't work—


—which tells you what sort of field we're dealing with here.


And the experimental results on the field as a whole are commensurate.  Yes, patients who see psychotherapists have been known to get better faster than patients who simply do nothing.  But there is no statistically discernible difference between the many schools of psychotherapy.  There is no discernible gain from years of expertise.


And there's also no discernible difference between seeing a psychotherapist and spending the same amount of time talking to a randomly selected college professor from another field.  It's just talking to anyone that helps you get better, apparently.


In the entire absence of the slightest experimental evidence for their effectiveness, psychotherapists became licensed by states, their testimony accepted in court, their teaching schools accredited, and their bills paid by health insurance.


And there was also a huge proliferation of \"schools\", of traditions of practice, in psychotherapy; despite—or perhaps because of—the lack of any experiments showing that one school was better than another...


I should really post more some other time on all the sad things this says about our world; about how the essence of medicine, as recognized by society and the courts, is not a repertoire of procedures with statistical evidence for their healing effectiveness; but, rather, the right air of authority.


But the subject today is the proliferation of traditions in psychotherapy.  So far as I can discern, this was the way you picked up prestige in the field—not by discovering an amazing new technique whose effectiveness could be experimentally verified and adopted by all; but, rather, by splitting off your own \"school\", supported by your charisma as founder, and by the good stories you told about all the reasons your techniques should work.


This was probably, to no small extent, responsible for the existence and continuation of psychotherapy in the first place—the promise of making yourself a Master, like Freud who'd done it first (also without the slightest scrap of experimental evidence).  That's the brass ring of success to chase—the prospect of being a guru and having your own adherents.  It's the struggle for adherents that keeps the clergy vital.


That's what happens to a field when it unbinds itself from the experimental evidence—though there were other factors that also placed psychotherapists at risk, such as the deference shown them by their patients, the wish of society to believe that mental healing was possible, and, of course, the general dangers of telling people how to think.


The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness—that there was a family of measures that by golly did validate well against each other.


The act of creating a new measurement creates new science; if it's a good measurement, you get good science.


If you're going to create an organized practice of anything, you really do need some way of telling how well you're doing, and a practice of doing serious testing—that means a control group, an experimental group, and statistics—on plausible-sounding techniques that people come up with.  You really need it.


Added:  Dawes wrote in the 80s and I know that the Rorschach was still in use as recently as the 90s, but it's possible matters have improved since then (as one commenter states).  I do remember hearing that there was positive evidence for the greater effectiveness of cognitive-behavioral therapy.

" } }, { "_id": "M7rwT264CSYY6EdR3", "title": "The Skeptic's Trilemma", "pageUrl": "https://www.lesswrong.com/posts/M7rwT264CSYY6EdR3/the-skeptic-s-trilemma", "postedAt": "2009-03-15T00:12:23.184Z", "baseScore": 86, "voteCount": 84, "commentCount": 15, "url": null, "contents": { "documentId": "M7rwT264CSYY6EdR3", "html": "

Followup to: Talking Snakes: A Cautionary Tale


Related to: Explain, Worship, Ignore


Skepticism is like sex and pizza: when it's good, it's very very good, and when it's bad, it's still pretty good.

It really is hard to dislike skeptics. Whether or not their rational justifications are perfect, they are doing society a service by raising the social cost of holding false beliefs. But there is a failure mode for skepticism. It's the same as the failure mode for so many other things: it becomes a blue vs. green style tribe, demands support of all 'friendly' arguments, enters an affective death spiral, and collapses into a cult.

What does it look like when skepticism becomes a cult? Skeptics become more interested in supporting their \"team\" and insulting the \"enemy\" than in finding the truth or convincing others. They begin to think \"If assigning a .001% probability to Atlantis and not accepting its existence without extraordinarily compelling evidence is good, then assigning a 0% probability to Atlantis and refusing to even consider any evidence for its existence must be great!\" They begin to deny any evidence that seems pro-Atlantis, and cast aspersions on the character of anyone who produces it. They become anti-Atlantis fanatics.


Wait a second. There is no lost continent of Atlantis. How do I know what a skeptic would do when confronted with evidence for it? For that matter, why do I care?


Way back in 2007, Eliezer described the rationalist equivalent of Abort, Retry, Fail: the trilemma of Explain, Worship, Ignore. Don't understand where rain comes from? You can try to explain it as part of the water cycle, although it might take a while. You can worship it as the sacred mystery of the rain god. Or you can ignore it and go on with your everyday life.

So someone tells you that Plato, normally a pretty smart guy, wrote a long account of a lost continent called Atlantis complete with a bunch of really specific geographic details that seem a bit excessive for a meaningless allegory. Plato claims to have gotten most of the details from a guy called Solon, legendary for his honesty, who got them from the Egyptians, who are known for their obsessive record-keeping. This seems interesting. But there's no evidence for a lost continent anywhere near the Atlantic Ocean, and geology tells us continents can't just go missing.

One option is to hit Worship. Between the Theosophists, Edgar Cayce, the Nazis, and a bunch of well-intentioned but crazy amateurs including a U.S. Congressman, we get a supercontinent with technology far beyond our wildest dreams, littered with glowing crystal pyramids and powered by the peaceful and eco-friendly mystical wisdom of the ancients, source of all modern civilization and destined to rise again to herald the dawning of the Age of Aquarius.

Or you could hit Ignore. I accuse the less pleasant variety of skeptic of taking this option. Atlantis is stupid. Anyone who believes it is stupid. Plato was a dirty rotten liar. Any scientist who finds anomalous historical evidence suggesting a missing piece to the early history of the Mediterranean region is also a dirty rotten liar, motivated by crazy New Age beliefs, and should be fired. Anyone who talks about Atlantis is the Enemy, and anyone who denies Atlantis gains immediate access to our in-group and official Good Rational Scientific Person status.

Spyridon Marinatos, a Greek archaeologist who really deserves more fame than he received, was a man who hit Explain. The geography of Plato's Atlantis, a series of concentric circles of land and sea, had been derided as fanciful; Marinatos noted1 that it matched the geography of the Mediterranean island of Santorini quite closely. He also noted that Santorini had a big volcano right in the middle and seemed somehow linked to the Minoan civilization, a glorious race of seafarers who had mysteriously collapsed a thousand years before Plato. So he decided to go digging in Santorini. And he found...

...the lost city of Atlantis. Well, I'm making an assumption here. But the city he found was over four thousand years old, had a population of over ten thousand people at its peak, boasted three-story buildings and astounding works of art, and had hot and cold running water - an unheard-of convenience that it shared with the city in Plato's story. For the Early Bronze Age, that's darned impressive. And like Plato's Atlantis, it was destroyed in a single day. The volcano that loomed only a few miles from its center went off around 1600 BC, utterly burying it and destroying its associated civilization. No one knows what happened to the survivors, but the most popular theory is that some fled to Egypt2, with which the city had flourishing trade routes at its peak.

The Atlantis = Santorini equivalence is still controversial, and the point of this post isn't to advocate for it. But just look at the difference between Joe Q. Skeptic and Dr. Marinatos. Both were rightly skeptical of the crystal pyramid story erected by the Atlantis-worshippers. But Joe Q. Skeptic considered the whole issue a nuisance, or at best a way of proving his intellectual superiority over the believers. Dr. Marinatos saw an honest mystery, developed a theory that made testable predictions, then went out and started digging.

The fanatical skeptic, when confronted with some evidence for a seemingly paranormal claim, says \"Wow, that's stupid.\" It's a soldier on the opposing side, and the only thing to be done with it is kill it as quickly as possible. The wise skeptic, when confronted with the same evidence, says \"Hmmm, that's interesting.\"

Did people at Roswell discover the debris of a strange craft made of seemingly otherworldly material lying in a field, only to be silenced by the government later? You can worship the mighty aliens who are cosmic bringers of peace. You can ignore it, because UFOs don't exist so the people are clearly lying. Or you can search for an explanation until you find that the government was conducting tests of Project Mogul in that very spot.

Do thousands of people claim that therapies with no scientific basis are working? You can worship alternative medicine as a natural and holistic alternative to stupid evil materialism. You can ignore all the evidence for their effectiveness. Or you can shut up and discover the placebo effect, explaining the lot of them in one fell swoop.

Does someone claim to see tiny people, perhaps elves, running around and doing elvish things? You can call them lares and worship them as household deities. You can ignore the person because he's an obvious crank. Or you can go to a neurologist, and he'll explain that the person's probably suffering from Charles Bonnet Syndrome.

All unexplained phenomena are real. That is, they're real unexplained phenomena. The explanation may be prosaic, like that people are gullible. Or it may be an entire four thousand year old lost city of astounding sophistication. But even \"people are gullible\" can be an interesting explanation if you're smart enough to make it one. There's a big difference between \"people are gullible, so they believe in stupid things like religion, let's move on\" and a complete list of the cognitive biases that make explanations involving agency and intention more attractive than naturalistic explanations to a naive human mind. A sufficiently intelligent thinker could probably reason from the mere existence of religion all the way back to the fundamentals of evolutionary psychology.

This I consider a specific application of a more general rationalist technique: not prematurely dismissing things that go against your worldview. There's a big difference between dismissing that whole Lost Continent of Atlantis story, and prematurely dismissing it. It's the difference between discovering an ancient city and resting smugly satisfied that you don't have to.


Footnotes


1: I may be unintentionally sexing up the story here. I read a book on Dr. Marinatos a few years ago, and I know he did make the Santorini-Atlantis connection, but I don't remember whether he made it before starting his excavation, or whether it only clicked during the dig (and the Internet is silent on the matter). If it was the latter, all of my moralizing about how wonderful it was that he made a testable prediction falls a bit flat. I should have used another example where I knew for sure, but this story was too perfect. Mea culpa.


2: I don't include it in the main article because it is highly controversial and you have to fudge some dates for it to really work out, but here is a Special Bonus Scientific Explanation of a Paranormal Claim: the eruption of this same supervolcano in 1600 BC caused the series of geologic and climatological catastrophes recorded in the Bible as the Ten Plagues of Egypt. However, I specify that I'm including this because it's fun to think about rather than because there's an especially large amount of evidence for it.

" } }, { "_id": "E5QXf3tCeE7fZGq4t", "title": "Soulless morality", "pageUrl": "https://www.lesswrong.com/posts/E5QXf3tCeE7fZGq4t/soulless-morality", "postedAt": "2009-03-14T21:48:49.463Z", "baseScore": 28, "voteCount": 47, "commentCount": 39, "url": null, "contents": { "documentId": "E5QXf3tCeE7fZGq4t", "html": "

Follow-up to: So you say you're an altruist


The responses to So you say you're an altruist indicate that people have split their values into two categories:

1. values they use to decide what they want
2. values that are admissible for moral reasoning

(where 2 is probably a subset of 1 for atheists, and probably nearly disjoint from 1 for Presbyterians).


You're reading Less Wrong.  You're a rationalist.  You've put a lot of effort into education, and learning the truth about the world.  You value knowledge and rationality and truth a lot.


Someone says you should send all your money to Africa, because this will result in more human lives.


What happened to the value you placed on knowledge and rationality?


There is little chance that any of the people you save in Africa will get a good post-graduate education and then follow that up by rejecting religion, embracing rationality, and writing Less Wrong posts.


Here you are, spending a part of your precious life reading Less Wrong.  If you spend 10% of your life on the Web, you are saying that that activity is worth at least 1/10th of a life, and that lives with no access to the Web are worth less than lives with access.  If you value rationality, then lives lived rationally are more valuable than lives lived irrationally.  If you think something has a value, you have to give it the same value in every equation.  Not doing so is immoral.  You can't use different value scales for everyday and moral reasoning.

\n

Society tells you to work to make yourself more valuable.  Then it tells you that when you reason morally, you must assume that all lives are equally valuable.  You can't have it both ways.  If all lives have equal value, we shouldn't criticize someone who decides to become a drug addict on welfare.  Value is value, regardless of which equation it's in at the moment.

\n

How do you weigh rationality, and your other qualities and activities, relative to life itself?  I would say that life itself has zero value; the value of a life is the sum of the values of things done and experienced during that life.  But society teaches the opposite: that mere life has a tremendous value, and anything you do with your life has negligible additional value.  That's why it's controversial to execute criminals, but not controversial to lock them up in a bare room for 20 years.  We have a death-penalty debate in the US, which has consequences for less than 100 people per year.  We have a few hundred thousand people serving sentences of 20 years and up, but no debate about it.  That shows that most Americans place a huge value on life itself, and almost no value on what happens to that life.

\n

I think this comes from believing in the soul, and binary thought in general.  People want a simple moral system that classifies things as good or bad, allowable or not allowable, valuable or not valuable.  We use real values in deciding what to do on Saturday, but we discretize them on Sunday.  Killing people is not allowable; locking them up forever is.  Killing enemy soldiers is allowable; killing enemy civilians is not.  Killing enemy soldiers is allowable; torturing them is not.  Losing a pilot is not acceptable; losing a $360,000,000 plane is.  The results of this binarized thought include millions of lives wasted in prison; and hundreds of thousands of lives lost or ruined, and economies wrecked, because we fight wars in a way intended to avoid violating boundary constraints of a binarized value system rather than in a way intended to maximize our values.

\n

The idea of the soul is the ultimate discretizer.  Saving souls is good.  Losing souls is bad.  That is the sum total of Christian pragmatic morality.

\n

The religious conception is that personal values that you use for deciding what to do on Saturday are selfish, whereas moral values are unselfish.  It teaches that people need religion to be moral, because their natural inclination is to be selfish.  Rather than having a single set of values that you can plug into your equations, you have two completely different systems of logic which counterbalance each other.  No wonder people act schizophrenic on moral questions.

\n

What that worldview is really saying is that people are at the wrong level of rationality.  Rationality is a win for the rational agent.  But in many prisoners-dilemma and tragedy-of-the-commons scenarios, having rational agents is not a win for society.  Religion teaches people to replace rational morality with an irrational dual-system morality under the (hidden) theory that rational morality leads to worse outcomes.

\n

That teaching isn't obviously wrong.  It isn't obviously irrational.  But it is opposed to rationalism, the dogma that rationality always wins.  I use the term \"rationalism\" to mean not just the reasonable assertion that rationality is the best policy for an agent, but also the dogmatic belief that rational agents are the best thing for society.  And I think this blog is about giving fanatical rationalism a chance.

\n

So, if you really want to be rational, you should throw away your specialized moral logic, and use just one logic and one set of values for all decisions.  If you decide to be a fanatic, you should tell other people to do so, too.

\n

EDIT: This is not an argument for or against aid to Africa.  It's an observation on an error that I think people made in reasoning about aid to Africa.

" } }, { "_id": "gPdM553fhdZuNmDxn", "title": "Closet survey #1", "pageUrl": "https://www.lesswrong.com/posts/gPdM553fhdZuNmDxn/closet-survey-1", "postedAt": "2009-03-14T07:51:04.675Z", "baseScore": 72, "voteCount": 58, "commentCount": 670, "url": null, "contents": { "documentId": "gPdM553fhdZuNmDxn", "html": "

What do you believe that most people on this site don't?

\n

I'm especially looking for things that you wouldn't even mention if someone wasn't explicitly asking for them. Stuff you're not even comfortable writing under your own name. Making a one-shot account here is very easy, go ahead and do that if you don't want to tarnish your image.

\n

I think a big problem with a \"community\" dedicated to being less wrong is that it will make people more concerned about APPEARING less wrong. The biggest part of my intellectual journey so far has been the acquisition of new and startling knowledge, and that knowledge doesn't seem likely to turn up here in the conditions that currently exist.

\n

So please, tell me the crazy things you're otherwise afraid to say. I want to know them, because they might be true.

" } }, { "_id": "neQ7eXuaXpiYw7SBy", "title": "The Least Convenient Possible World", "pageUrl": "https://www.lesswrong.com/posts/neQ7eXuaXpiYw7SBy/the-least-convenient-possible-world", "postedAt": "2009-03-14T02:11:15.177Z", "baseScore": 309, "voteCount": 265, "commentCount": 204, "url": null, "contents": { "documentId": "neQ7eXuaXpiYw7SBy", "html": "

Related to: Is That Your True Rejection?

\n

\"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments.  But if you’re interested in producing truth, you will fix your opponents’ arguments for them.  To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.\"

\n

   -- Black Belt Bayesian, via Rationality Quotes 13

\n

Yesterday John Maxwell's post wondered how much the average person would do to save ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related question as part of an investigation of the Trolley Problems:

\n
\n

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

\n
\n

I don't want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:

\n
\n

It wouldn't be moral. After all, people often reject organs from random donors. The traveller would probably be a genetic mismatch for your patients, and the transplantees would have to spend the rest of their lives on immunosuppressants, only to die within a few years when the drugs failed.

\n
\n

On the one hand, I have to give my friend credit: his answer is biologically accurate, and beyond a doubt the technically correct answer to the question I asked. On the other hand, I don't have to give him very much credit: he completely missed the point and lost a valuable opportunity to examine the nature of morality.

So I asked him, \"In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?\"

He mumbled something about counterfactuals and refused to answer. But I learned something very important from him, and that is to always ask this question of myself. Sometimes the least convenient possible world is the only place where I can figure out my true motivations, or which step to take next. I offer three examples:

\n

\n

 

\n

1:  Pascal's Wager. Upon being presented with Pascal's Wager, one of the first things most atheists think of is this:

\n

Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will punish anyone who practices a religion he does not truly believe simply for personal gain. Or perhaps, as the Discordians claim, \"Hell is reserved for people who believe in it, and the hottest levels of Hell are reserved for people who believe in it on the principle that they'll go there if they don't.\"

This is a good argument against Pascal's Wager, but it isn't the least convenient possible world. The least convenient possible world is the one where Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.

Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?

\n

2: The God-Shaped Hole. Christians claim there is one in every atheist, keeping him from spiritual fulfillment.

\n

Some commenters on Raising the Sanity Waterline don't deny the existence of such a hole, if it is interpreted as a desire for purpose or connection to something greater than one's self. But, some commenters say, science and rationality can fill this hole even better than God can.

What luck! Evolution has by a wild coincidence created us with a big rationality-shaped hole in our brains! Good thing we happen to be rationalists, so we can fill this hole in the best possible way! I don't know - despite my sarcasm this may even be true. But in the least convenient possible world, Omega comes along and tells you that sorry, the hole is exactly God-shaped, and anyone without a religion will lead a less-than-optimally-happy life. Do you head down to the nearest church for a baptism? Or do you admit that even if believing something makes you happier, you still don't want to believe it unless it's true?

\n

3: Extreme Altruism. John Maxwell mentions the utilitarian argument for donating almost everything to charity.

\n

Some commenters object that many forms of charity, especially the classic \"give to starving African orphans,\" are counterproductive, either because they enable dictators or thwart the free market. This is quite true.

But in the least convenient possible world, here comes Omega again and tells you that Charity X has been proven to do exactly what it claims: help the poor without any counterproductive effects. So is your real objection the corruption, or do you just not believe that you're morally obligated to give everything you own to starving Africans?

\n

 

\n

You may argue that this citing of convenient facts is at worst a venial sin. If you still get to the correct answer, and you do it by a correct method, what does it matter if this method isn't really the one that's convinced you personally?

One easy answer is that it saves you from embarrassment later. If some scientist does a study and finds that people really do have a god-shaped hole that can't be filled by anything else, no one can come up to you and say \"Hey, didn't you say the reason you didn't convert to religion was because rationality filled the god-shaped hole better than God did? Well, I have some bad news for you...\"

Another easy answer is that your real answer teaches you something about yourself. My friend may have successfully avoided making a distasteful moral judgment, but he didn't learn anything about morality. My refusal to take the easy way out on the transplant question helped me develop the form of precedent-utilitarianism I use today.

But more than either of these, it matters because it seriously influences where you go next.

Say \"I accept the argument that I need to donate almost all my money to poor African countries, but my only objection is that corrupt warlords might get it instead\", and the obvious next step is to see if there's a poor African country without corrupt warlords (see: Ghana, Botswana, etc.) and donate almost all your money to them. Another acceptable answer would be to donate to another warlord-free charitable cause like the Singularity Institute.

If you just say \"Nope, corrupt dictators might get it,\" you may go off and spend the money on a new TV. Which is fine, if a new TV is what you really want. But if you're the sort of person who would have been convinced by John Maxwell's argument, but you dismissed it by saying \"Nope, corrupt dictators,\" then you've lost an opportunity to change your mind.

So I recommend: limit yourself to responses of the form \"I completely reject the entire basis of your argument\" or \"I accept the basis of your argument, but it doesn't apply to the real world because of contingent fact X.\" If you just say \"Yeah, well, contingent fact X!\" and walk away, you've left yourself too much wiggle room.

In other words: always have a plan for what you would do in the least convenient possible world.

" } }, { "_id": "SWraogEDJ6gocpvwa", "title": "On the Care and Feeding of Young Rationalists", "pageUrl": "https://www.lesswrong.com/posts/SWraogEDJ6gocpvwa/on-the-care-and-feeding-of-young-rationalists", "postedAt": "2009-03-13T23:48:50.617Z", "baseScore": 32, "voteCount": 34, "commentCount": 22, "url": null, "contents": { "documentId": "SWraogEDJ6gocpvwa", "html": "

Related to: Is Santa Real?, Raising the Sanity Waterline

\n

Related on OB: Formative Youth

\n

JulianMorrison writes:

\n
\n

If you want people to repeat this back, write it in a test, maybe even apply it in an academic context, a four-credit undergrad course will work.

\n

If you want them to have it as the ground state of their mind in everyday life, you probably need to have taught them songs about it in kindergarten.

\n
\n

There's been some discussion here on the formation of rationalist communities.

\n

If you look at any large, well-established religious community, you will see an extraordinary amount of attention being paid to the way children are raised. This is hardly surprising, since upbringing is how new members enter the community. Far more people become Mormons because they were raised by Mormon parents than because they had a long talk with two guys in white shirts.

\n

This doesn't seem to be the case in the rationalist community. Looking at how we all got here, I don't see all that many who were simply raised by rationalist parents, and had a leg up. Maybe this is a sampling bias -- a straightforward, enlightened upbringing is not as dramatic as a break from fundamentalist religion, and perhaps people don't think it's a story worth telling.

\n

But I don't think we can count out the possibility that we're just not doing the job of passing on our memes. I can even see a few reasons why this might be the case. In mixed marriages, it is usually the more devout parent, and especially the parent from the more demanding religion, who has the say in raising the children. Thus, we see spouses convert to Catholicism, to Orthodox Judaism, to Mormonism, but rarely to Congregationalism, or Unitarianism. On this totem pole, atheism seems to be pretty much at the bottom.

\n

And perhaps this also has something to do with the shyness of the atheist parent. Even in unmixed marriages, the unspoken thought seems to be \"so, our children will become atheists just because their parents were atheists, just as Baptist children become Baptist because their parents were Baptist -- are we really any better?\"  We resent the idea of the religionists forcing their foolish memes on their children, and want ours to choose their own paths. We think if we just get out of the way, our children will do whatever is right for them, as though by doing this we could make them the ultimate source of their lives.

\n

We have to make choices -- doing nothing is still a choice. There is nothing wrong with attempting to light our children's way to wisdom which took us time and effort to locate on our own.

\n

So, best practices, resources, websites, books, songs, nursery rhymes; anything that will help us to raise rationalist children.

\n

(As always, please post one suggestion per comment so that voting can represent an individual judgment of that suggestion.)

\n

(Disclaimer: I am not a parent, and am not remotely ready to become one. Nonetheless, I feel this could be a fruitful topic for discussion)

\n

ETA: I hope it doesn't look like I'm simply asking how to raise atheists -- I'm setting the bar a bit higher than that. I speak of atheist parents in the latter part of the post simply because I have no data on avowedly rationalist parents. 

" } }, { "_id": "T8ddXNtmNSHexhQh8", "title": "Epistemic Viciousness", "pageUrl": "https://www.lesswrong.com/posts/T8ddXNtmNSHexhQh8/epistemic-viciousness", "postedAt": "2009-03-13T23:33:48.785Z", "baseScore": 108, "voteCount": 87, "commentCount": 93, "url": null, "contents": { "documentId": "T8ddXNtmNSHexhQh8", "html": "

Someone deserves a large hattip for this, but I'm having trouble remembering who; my records don't seem to show any email or OB comment which told me of this 12-page essay, \"Epistemic Viciousness in the Martial Arts\" by Gillian Russell.  Maybe Anna Salamon?

\n
\n

      We all lined up in our ties and sensible shoes (this was England) and copied him—left, right, left, right—and afterwards he told us that if we practised in the air with sufficient devotion for three years, then we would be able to use our punches to kill a bull with one blow.
      I worshipped Mr Howard (though I would sooner have died than told him that) and so, as a skinny, eleven-year-old girl, I came to believe that if I practised, I would be able to kill a bull with one blow by the time I was fourteen.
      This essay is about epistemic viciousness in the martial arts, and this story illustrates just that. Though the word ‘viciousness’ normally suggests deliberate cruelty and violence, I will be using it here with the more old-fashioned meaning, possessing of vices.

\n
\n

It all generalizes amazingly.  To summarize some of the key observations for how epistemic viciousness arises:

\n\n

One thing that I remembered being in this essay, but, on a second reading, wasn't actually there, was the degeneration of martial arts after the decline of real fights—by which I mean, fights where people were really trying to hurt each other and someone occasionally got killed.

\n

In those days, you had some idea of who the real masters were, and which school could defeat others.

\n

And then things got all civilized.  And so things went downhill to the point that we have videos on Youtube of supposed Nth-dan black belts being pounded into the ground by someone with real fighting experience.

\n

I had one case of this bookmarked somewhere (but now I can't find the bookmark) that was really sad; it was a master of a school who was convinced he could use ki techniques.  His students would actually fall over when he used ki attacks, a strange and remarkable and frightening case of self-hypnosis or something... and the master goes up against a skeptic and of course gets pounded completely into the floor.  Feel free to comment this link if you know where it is.

\n

Truly is it said that \"how to not lose\" is more broadly applicable information than \"how to win\".  Every single one of these risk factors transfers straight over to any attempt to start a \"rationality dojo\".  I put to you the question:  What can be done about it?

" } }, { "_id": "iQRA6mMrxrs3xPhGz", "title": "Is Santa Real?", "pageUrl": "https://www.lesswrong.com/posts/iQRA6mMrxrs3xPhGz/is-santa-real", "postedAt": "2009-03-13T20:45:41.691Z", "baseScore": 20, "voteCount": 22, "commentCount": 78, "url": null, "contents": { "documentId": "iQRA6mMrxrs3xPhGz", "html": "

Related on OB: Lying to Kids, The Third Alternative

\n

My wife and I are planning to have kids, so of course we've been going through the usual sorts of debates regarding upbringing. We wondered briefly, will we raise our children as atheists? It's kind of a cruel experiment, as folks tend to use their own experiences to guide raising children, and both of us were raised Catholic. Nonetheless, it was fairly well settled after about 5 minutes of dialogue that atheism was the way to go.

\n

Then we had the related discussion of whether to teach our children about Santa Claus. After hours of debate, we decided we'd both have to think on the question some more. It's still been an open question for years now.

\n

Should we teach kids that Santa Claus exists? This isn't a new question, by any means. But it's now motivated by this thread about rationalist origin stories. Note that many of the posters mark the 'rationalist awakening' as the time they realized God doesn't exist. The shock that everybody, including their parents, was wrong and/or lying to them was enough to motivate them to pursue rationality and truth.

\n

If those same children were never taught about God, Santa Claus, and other falsehoods, would they have become rationalists, or would they have contented themselves with playing better video games?  If the child never realized there's no Santa Claus, would we have a reason to say, \"You're growing up and I'm proud of you\"?

" } }, { "_id": "jeenaghE46m7Tttch", "title": "Dialectical Bootstrapping", "pageUrl": "https://www.lesswrong.com/posts/jeenaghE46m7Tttch/dialectical-bootstrapping", "postedAt": "2009-03-13T17:10:20.436Z", "baseScore": 22, "voteCount": 23, "commentCount": 8, "url": null, "contents": { "documentId": "jeenaghE46m7Tttch", "html": "

\"Dialectical Bootstrapping\" is a simple procedure that may improve your estimates. This is how it works:

\n
    \n
  1. Estimate the number in whatever manner you usually would estimate. Write that down.
  2. \n
  3. Assume your first estimate is off the mark.
  4. \n
  5. Think about a few reasons why that could be. Which assumptions and considerations could have been wrong?
  6. \n
  7. What do these new considerations imply? Was the first estimate rather too high or too low?
  8. \n
  9. Based on this new perspective, make a second, alternative estimate.
  10. \n
\n

Herzog and Hertwig find that the average of the two estimates (in a historical-date estimation task) is more accurate than the first estimate alone (Edit: or than the average of two estimates made without the \"assume you're wrong\" manipulation). To put the finding in an OB/LW-centric manner, this procedure (sometimes, partially) avoids Cached Thoughts.
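
For concreteness, here is a minimal sketch of the procedure in code (my own illustration; the function name and the date-estimation numbers are made up, not from Herzog and Hertwig):

```python
# Minimal sketch of dialectical bootstrapping as described above.
# Both estimates come from the same person; the second is produced after
# deliberately assuming the first is off the mark. Combination is a plain
# average. The numbers below are a made-up historical-date example.

def dialectical_bootstrap(first_estimate: float, second_estimate: float) -> float:
    # Average the initial estimate with the self-generated 'dialectical' one.
    return (first_estimate + second_estimate) / 2.0

first = 1470.0   # initial gut estimate of the year an event happened
second = 1430.0  # revised estimate after assuming the first was too high
print(dialectical_bootstrap(first, second))  # 1450.0
```

The combination step is deliberately nothing fancier than an average; that is all the procedure requires.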

" } }, { "_id": "NcFFzMExY5DDJHRAg", "title": "Reality vs Virtual Reality", "pageUrl": "https://www.lesswrong.com/posts/NcFFzMExY5DDJHRAg/reality-vs-virtual-reality", "postedAt": "2009-03-13T15:17:19.701Z", "baseScore": -12, "voteCount": 20, "commentCount": 8, "url": null, "contents": { "documentId": "NcFFzMExY5DDJHRAg", "html": "

Since the first days of civilization, humans have been known to entertain themselves with virtual reality games. The 6th century saw the birth of chess, a game where a few carved figures placed on a checkered board are supposed to mimic human social hierarchy.  Later, the technological breakthroughs of the 20th century allowed the creation of games which were significantly more sophisticated. For instance, the highly addictive “Civilization” allowed players to create a history for an entire nation, guiding it from the initial troubles of inventing the wheel to the headaches of global warming. Here is a quick summary of the features of virtual reality games.

\r\n

1)     The “reality” of the game, while being superficially similar to the reality of the player, must at the same time be much simpler. Hence three-dimensional humans play in the two-dimensional world.

\r\n

2)     The laws of the game must be largely deterministic to allow a meaningful intervention by the player. Yet, in order not to make it too predictable and hence boring, an element of chance must be introduced.

\r\n

3)     The game protagonists must appear to have freedom of movement and yet be limited to the borders of the screen/allocated memory size. The limits of this virtual freedom are usually low at the early stages of the game, but grow as the scenario develops.

\r\n

4)     The game scenario must end before it reaches the limit of the allocated resources.

\r\n

I now propose a little Gedanken experiment. Imagine the existence of a four-dimensional world hosting a civilization whose technology is way ahead of ours. Is there a strong reason to think that such a civilization is impossible, or that members of this civilization would not play virtual reality games?  If the answer is no, what might these games look like? Using the analogy with our own games, we might expect the following.

\r\n

1)     The game protagonists would resemble the players. Yet, the need for simplification would require them to be three-dimensional.

\r\n

2)     To satisfy the second rule we need to combine determinism and chance. In the three-dimensional universe quantum mechanics is known to do the trick.

\r\n

3)     At the early stages of the game, protagonists’ freedom of movement is constrained by low technological development. At later stages a physical limit may be required (speed of light?).

\r\n

4)     This point may have something to do with the Fermi Paradox.

\r\n

\r\n

I’m interested in other possible analogies. If somebody can suggest a way to rule out the whole idea, that would be even greater.

\r\n

" } }, { "_id": "96EhxPsNfr5FQYKsf", "title": "Boxxy and Reagan", "pageUrl": "https://www.lesswrong.com/posts/96EhxPsNfr5FQYKsf/boxxy-and-reagan", "postedAt": "2009-03-13T06:36:54.862Z", "baseScore": -5, "voteCount": 17, "commentCount": 1, "url": null, "contents": { "documentId": "96EhxPsNfr5FQYKsf", "html": "

An interesting article on forgetomori about Boxxy and Reagan and mirror neurons.

" } }, { "_id": "atcJqdhCxTZiJSxo2", "title": "Talking Snakes: A Cautionary Tale", "pageUrl": "https://www.lesswrong.com/posts/atcJqdhCxTZiJSxo2/talking-snakes-a-cautionary-tale", "postedAt": "2009-03-13T01:41:28.925Z", "baseScore": 191, "voteCount": 163, "commentCount": 235, "url": null, "contents": { "documentId": "atcJqdhCxTZiJSxo2", "html": "

I particularly remember one scene from Bill Maher's \"Religulous\". I can't find the exact quote, but I will try to sum up his argument as best I remember.

\n
\n

Christians believe that sin is caused by a talking snake. They may have billions of believers, thousands of years of tradition behind them, and a vast literature of apologetics justifying their faith - but when all is said and done, they're adults who believe in a talking snake.

\n
\n

I have read of the absurdity heuristic. I know that it is not carte blanche to go around rejecting beliefs that seem silly. But I was still sympathetic to the talking snake argument. After all...a talking snake?

\n

\n

I changed my mind in a Cairo cafe, talking to a young Muslim woman. I let it slip during the conversation that I was an atheist, and she seemed genuinely curious why. You've all probably been in such a situation, and you probably know how hard it is to choose just one reason, but I'd been reading about Biblical contradictions at the time and I mentioned the myriad errors and atrocities and contradictions in all the Holy Books.

Her response? \"Oh, thank goodness it's that. I was afraid you were one of those crazies who believed that monkeys transformed into humans.\"

I admitted that um, well, maybe I sorta kinda might in fact believe that.

It is hard for me to describe exactly the look of shock on her face, but I have no doubt that her horror was genuine. I may have been the first flesh-and-blood evolutionist she ever met. \"But...\" she looked at me as if I was an idiot. \"Monkeys don't change into humans. What on Earth makes you think monkeys can change into humans?\"

I admitted that the whole process was rather complicated. I suggested that it wasn't exactly an Optimus Prime-style transformation so much as a gradual change over eons and eons. I recommended a few books on evolution that might explain it better than I could.

She said that she respected me as a person but that quite frankly I could save my breath because there was no way any book could possibly convince her that monkeys have human babies or whatever sort of balderdash I was preaching. She accused me and other evolution believers of being too willing to accept absurdities, motivated by our atheism and our fear of the self-esteem hit we'd take by accepting Allah was greater than ourselves.

It is not clear to me that this woman did anything differently than Bill Maher. Both heard statements that sounded so crazy as to not even merit further argument. Both recognized that there was a large group of people who found these statements plausible and had written extensive literature justifying them. Both decided that the statements were so absurd as to not merit examining that literature more closely. Both came up with reasons why they could discount the large number of believers because those believers must be biased.

I post this as a cautionary tale as we discuss the logic or illogic of theism. I propose taking from it the following lessons:

- The absurdity heuristic doesn't work very well.

\n

- Even on things that sound really, really absurd.

\n

- If a large number of intelligent people believe something, it deserves your attention. After you've studied it on its own terms, then you have a right to reject it. You could still be wrong, though.

\n

- Even if you can think of a good reason why people might be biased towards the silly idea, thus explaining it away, your good reason may still be false.

\n

- If someone cannot explain why something is not stupid to you over twenty minutes at a cafe, that doesn't mean it's stupid. It just means it's complicated, or they're not very good at explaining things.

\n

- There is no royal road.

\n

(special note to those prone to fundamental attribution errors: I do not accept theism. I think theism is wrong. I think it can be demonstrated to be wrong on logical grounds. I think the nonexistence of talking snakes is evidence against theism and can be worked into a general argument against theism. I just don't think it's as easy as saying \"talking snakes are silly, therefore theism is false.\" And I find it embarrassing when atheists say things like that, and then get called on it by intelligent religious people.)

" } }, { "_id": "Nu3wa6npK4Ry66vFp", "title": "A Sense That More Is Possible", "pageUrl": "https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible", "postedAt": "2009-03-13T01:15:30.208Z", "baseScore": 176, "voteCount": 142, "commentCount": 220, "url": null, "contents": { "documentId": "Nu3wa6npK4Ry66vFp", "html": "

To teach people about a topic you've labeled \"rationality\", it helps for them to be interested in \"rationality\".  (There are less direct ways to teach people how to attain the map that reflects the territory, or optimize reality according to their values; but the explicit method is the course I tend to take.)

\n

And when people explain why they're not interested in rationality, one of the most commonly proffered reasons tends to be something like:  \"Oh, I've known a couple of rational people and they didn't seem any happier.\"

\n

Who are they thinking of?  Probably an Objectivist or some such.  Maybe someone they know who's an ordinary scientist.  Or an ordinary atheist.

\n

That's really not a whole lot of rationality, as I have previously said.

\n

Even if you limit yourself to people who can derive Bayes's Theorem—which is going to eliminate, what, 98% of the above personnel?—that's still not a whole lot of rationality.  I mean, it's a pretty basic theorem.

\n

Since the beginning I've had a sense that there ought to be some discipline of cognition, some art of thinking, the studying of which would make its students visibly more competent, more formidable: the equivalent of Taking a Level in Awesome.

\n

But when I look around me in the real world, I don't see that.  Sometimes I see a hint, an echo, of what I think should be possible, when I read the writings of folks like Robyn Dawes, Daniel Gilbert, Tooby & Cosmides.  A few very rare and very senior researchers in psychological sciences, who visibly care a lot about rationality—to the point, I suspect, of making their colleagues feel uncomfortable, because it's not cool to care that much.  I can see that they've found a rhythm, a unity that begins to pervade their arguments—

\n

Yet even that... isn't really a whole lot of rationality either.

\n

Even among those few who impress me with a hint of dawning formidability—I don't think that their mastery of rationality could compare to, say, John Conway's mastery of math.  The base knowledge that we drew upon to build our understanding—if you extracted only the parts we used, and not everything we had to study to find it—it's probably not comparable to what a professional nuclear engineer knows about nuclear engineering.  It may not even be comparable to what a construction engineer knows about bridges.  We practice our skills, we do, in the ad-hoc ways we taught ourselves; but that practice probably doesn't compare to the training regimen an Olympic runner goes through, or maybe even an ordinary professional tennis player.

\n

And the root of this problem, I do suspect, is that we haven't really gotten together and systematized our skills.  We've had to create all of this for ourselves, ad-hoc, and there's a limit to how much one mind can do, even if it can manage to draw upon work done in outside fields.

\n

The chief obstacle to doing this the way it really should be done, is the difficulty of testing the results of rationality training programs, so you can have evidence-based training methods.  I will write more about this, because I think that recognizing successful training and distinguishing it from failure is the essential, blocking obstacle.

\n

There are experiments done now and again on debiasing interventions for particular biases, but it tends to be something like, \"Make the students practice this for an hour, then test them two weeks later.\"  Not, \"Run half the signups through version A of the three-month summer training program, and half through version B, and survey them five years later.\"  You can see, here, the implied amount of effort that I think would go into a training program for people who were Really Serious about rationality, as opposed to the attitude of taking Casual Potshots That Require Like An Hour Of Effort Or Something.

\n

Daniel Burfoot brilliantly suggests that this is why intelligence seems to be such a big factor in rationality—that when you're improvising everything ad-hoc with very little training or systematic practice, intelligence ends up being the most important factor in what's left.

\n

Why aren't \"rationalists\" surrounded by a visible aura of formidability?  Why aren't they found at the top level of every elite selected on any basis that has anything to do with thought?  Why do most \"rationalists\" just seem like ordinary people, perhaps of moderately above-average intelligence, with one more hobbyhorse to ride?

\n

Of this there are several answers; but one of them, surely, is that they have received less systematic training of rationality in a less systematic context than a first-dan black belt gets in hitting people.

\n

I do not except myself from this criticism.  I am no beisutsukai, because there are limits to how much Art you can create on your own, and how well you can guess without evidence-based statistics on the results.  I know about a single use of rationality, which might be termed \"reduction of confusing cognitions\".  This I asked of my brain, this it has given me.  There are other arts, I think, that a mature rationality training program would not neglect to teach, which would make me stronger and happier and more effective—if I could just go through a standardized training program using the cream of teaching methods experimentally demonstrated to be effective.  But the kind of tremendous, focused effort that I put into creating my single sub-art of rationality from scratch—my life doesn't have room for more than one of those.

\n

I consider myself something more than a first-dan black belt, and less.  I can punch through brick and I'm working on steel along my way to adamantine, but I have a mere casual street-fighter's grasp of how to kick or throw or block.

\n

Why are there schools of martial arts, but not rationality dojos?  (This was the first question I asked in my first blog post.)  Is it more important to hit people than to think?

\n

No, but it's easier to verify when you have hit someone.  That's part of it, a highly central part.

\n

But maybe even more importantly—there are people out there who want to hit, and who have the idea that there ought to be a systematic art of hitting that makes you into a visibly more formidable fighter, with a speed and grace and strength beyond the struggles of the unpracticed.  So they go to a school that promises to teach that.  And that school exists because, long ago, some people had the sense that more was possible.  And they got together and shared their techniques and practiced and formalized and practiced and developed the Systematic Art of Hitting.  They pushed themselves that far because they thought they should be awesome and they were willing to put some back into it.

\n

Now—they got somewhere with that aspiration, unlike a thousand other aspirations of awesomeness that failed, because they could tell when they had hit someone; and the schools competed against each other regularly in realistic contests with clearly-defined winners.

\n

But before even that—there was first the aspiration, the wish to become stronger, a sense that more was possible.  A vision of a speed and grace and strength that they did not already possess, but could possess, if they were willing to put in a lot of work, that drove them to systematize and train and test.

\n

Why don't we have an Art of Rationality?

\n

Third, because current \"rationalists\" have trouble working in groups: of this I shall speak more.

\n

Second, because it is hard to verify success in training, or which of two schools is the stronger.

\n

But first, because people lack the sense that rationality is something that should be systematized and trained and tested like a martial art, that should have as much knowledge behind it as nuclear engineering, whose superstars should practice as hard as chess grandmasters, whose successful practitioners should be surrounded by an evident aura of awesome.

\n

And conversely they don't look at the lack of visibly greater formidability, and say, \"We must be doing something wrong.\"

\n

\"Rationality\" just seems like one more hobby or hobbyhorse, that people talk about at parties; an adopted mode of conversational attire with few or no real consequences; and it doesn't seem like there's anything wrong about that, either.

" } }, { "_id": "Dc3bwCjM9HzZcq9M8", "title": "So you say you're an altruist...", "pageUrl": "https://www.lesswrong.com/posts/Dc3bwCjM9HzZcq9M8/so-you-say-you-re-an-altruist", "postedAt": "2009-03-12T22:15:59.935Z", "baseScore": 12, "voteCount": 36, "commentCount": 107, "url": null, "contents": { "documentId": "Dc3bwCjM9HzZcq9M8", "html": "

I'd be really interested to hear what the Less Wrong community thinks of this.  Don't spoil it by reading the comments first.

\n

.

\n

.

\n

.

\n

.

\n

.

\n

.

\n

.

\n

.

" } }, { "_id": "bjxo24nCFByCzkjdP", "title": "Rational Defense of Irrational Beliefs", "pageUrl": "https://www.lesswrong.com/posts/bjxo24nCFByCzkjdP/rational-defense-of-irrational-beliefs", "postedAt": "2009-03-12T18:48:28.967Z", "baseScore": 6, "voteCount": 32, "commentCount": 33, "url": null, "contents": { "documentId": "bjxo24nCFByCzkjdP", "html": "

“Everyone complains of his memory, but nobody of his judgment.” This maxim of La Rochefoucauld rings as true today as it did back in the seventeenth century. People tend to overestimate their reasoning abilities even when this overconfidence has a direct monetary cost. For instance, multiple studies have shown that investors who are more confident of their ability to beat the market receive lower returns on their investments. This overconfidence penalty applies even to supposed experts, such as fund managers.

\r\n

So what should an expert rationalist do to avoid this overconfidence trap? The seeming answer is that we should rely less on our own reasoning and more on the “wisdom of the crowds”. To a certain extent this is already achieved by societal pressure to conform, which acts as an internal policeman in our minds. Yet those of us who deem ourselves not very susceptible to such pressures (overconfidence, here we go again) might need to shift our views even further.

\r\n

I invite you now to experiment on how this will work in practice. Quite a few of the recent posts and comments were speaking with derision about religion and the supernatural phenomena in general. Did the authors of these comments fully consider the fact that the existence of God is firmly believed by the majority?  Or that this belief is not restricted to the uneducated but shared by many famous scientists, including Newton and Einstein?  Would they be willing to shift their views to accommodate the chance that their own reasoning powers are insufficient to get the right answer?

\r\n

Let the stone throwing begin.

\r\n

 

" } }, { "_id": "Yy9NB3kZKF76CR5XM", "title": "The origins of virtue", "pageUrl": "https://www.lesswrong.com/posts/Yy9NB3kZKF76CR5XM/the-origins-of-virtue", "postedAt": "2009-03-12T12:30:00.000Z", "baseScore": 1, "voteCount": 1, "commentCount": 0, "url": null, "contents": { "documentId": "Yy9NB3kZKF76CR5XM", "html": "

\n

I read Matt Ridley’s ‘The origins of virtue’ just now. It was full of engaging anecdotes and irrelevant details, which I don’t find that useful for understanding, so I wrote down the interesting points. On the off chance anyone else would like a summary, I publish it here. I recommend reading it properly. Things written in [here] are my comments.

\n


\n

***

\n


\n

Prologue

\n

The aim of this book: How did all this cooperation and niceness, especially amongst humans, come about evolutionarily?

\n


\n

Chapter 1

\n

There are benefits to cooperation: can do many things at once, [can avoid costs of conflict, can enjoy other prisoners’ dilemmas, can be safer in groups]

\n

Cooperation occurs on many levels: allegiances, social groups, organisms, cells, organelles, chromosomes, genomes, genes.

\n

Selfish genes explain everything.

\n

Which means it’s possible for humans to be unselfish.

\n

There are ubiquitous conflicts of interest to be controlled in coalitions at every level.

\n


\n

2

\n

Relatedness explains most groupishness ( = like selfishness, but pro-group). e.g. ants, naked mole rats.

\n

Humans distribute reproduction, so aren’t closely related to their societies. They try to suppress nepotism even. So why all the cooperation?

\n

Division of labour has huge benefits (trade isn’t zero sum)

\n

[cells are cool because they have the same genes, so don’t mutiny, but different characters so benefit from division of labour]

\n

Division of labor is greater in larger groups, and with better transport.

\n

There is a trade-off between division of labour and benefits of competition.

\n

By specialising at individual level a group can generalise at group level: efficiently exploit many niches.

\n

Division of labour between males and females is huge and old.

\n


\n

3

\n

Prisoners’ dilemmas are ubiquitous.

\n

Evolutionarily stable strategies = Nash equilibria found by evolution.

\n

Tit-for-tat and related strategies are good in iterated prisoners’ dilemmas.

\n

This is because they are nice, retaliatory, forgiving, and clear.

\n

If a combination of strategies play against one another repeatedly, increasing in number according to payoffs, the always-defectors thrive as they beat the always-cooperators, then the tit-for-taters take over as the defectors kill each other.
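
That dynamic can be seen in a small simulation. The sketch below is my own illustration, not from the book: the payoff matrix, round count and starting shares are arbitrary assumptions, and population shares are updated in proportion to each strategy's average payoff (a simple replicator rule).

```python
# Toy replicator dynamics for the story above: defectors first exploit the
# cooperators and spread, then starve once only each other (and tit-for-tat)
# remain, and tit-for-tat takes over. All numbers are illustrative.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def all_c(my_hist, their_hist): return 'C'
def all_d(my_hist, their_hist): return 'D'
def tit_for_tat(my_hist, their_hist): return their_hist[-1] if their_hist else 'C'

STRATEGIES = {'AllC': all_c, 'AllD': all_d, 'TFT': tit_for_tat}

def play(s1, s2, rounds=50):
    # Iterated prisoners' dilemma; returns player 1's total score.
    h1, h2, score = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        score += PAYOFF[(m1, m2)][0]
        h1.append(m1)
        h2.append(m2)
    return score

shares = {'AllC': 0.6, 'AllD': 0.3, 'TFT': 0.1}  # start with mostly cooperators
for generation in range(30):
    fitness = {name: sum(shares[other] * play(strat, STRATEGIES[other])
                         for other in STRATEGIES)
               for name, strat in STRATEGIES.items()}
    mean_fitness = sum(shares[n] * fitness[n] for n in shares)
    shares = {n: shares[n] * fitness[n] / mean_fitness for n in shares}

print(shares)  # tit-for-tat ends up with most of the population
```

With these particular numbers the defectors' share rises at first and then collapses as tit-for-tat spreads.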

\n

Reciprocity is ubiquitous in our society.

\n

Hypothesis: it’s an evolutionarily stable strategy. It allowed us to benefit from cooperation without being related. This has been a major win for our species.

\n

Reciprocity isn’t as prevalent between related individuals (in ours and other species).

\n

Tit-for-tat can lead to endless revenge :(

\n


\n

4

\n

Reciprocity requires remembering many other individuals and their previous behavior. This requires a large brain.

\n

Reciprocity requires meeting the same people continually. Which is why people are nastier in big anonymous places.

\n

Other strategies beat tit-for-tat once tit-for-tat has removed nastier strategies. The best of these is Pavlov, or win-stay/lose-shift, especially with learned probabilities.
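
For reference, the win-stay/lose-shift rule can be written in the same style as the tournament sketch above (again my own rendering; "a good payoff" here just means the opponent cooperated last round):

```python
# Win-stay/lose-shift ('Pavlov'): a good payoff last round (the opponent
# cooperated, so I scored 3 or 5) means I repeat my move; a bad payoff
# (they defected, so I scored 0 or 1) means I switch.

def pavlov(my_hist, their_hist):
    if not my_hist:
        return 'C'
    if their_hist[-1] == 'C':   # last round went well: stay
        return my_hist[-1]
    return 'C' if my_hist[-1] == 'D' else 'D'   # last round went badly: shift
```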

\n

In asynchronous games ‘firm-but-fair’ is better – similar to pavlov, but cooperates [once presumably] after being defected against as a cooperator in the last round.

\n

In larger populations reciprocity should be less beneficial – most interactions are with those you won’t see again.

\n

Boyd’s suggestion: this is the reason for moralistic behaviour, i.e. punishing those who don’t punish defection.

\n

Another solution: social ostracism: make choosing who to play with an option.

\n

A strategy available to humans is prediction of cooperativeness in advance. [Why can we do this? Why don’t we evolve to not demonstrate our lack of cooperativeness? Because others evolve to show their cooperativeness if they have it? There are behaviours that only make sense if you intend do be cooperative.]

\n


\n

5.

\n

We share food socially a lot, with strangers and friends. Not so much other possessions. Sex is private and coveted.

\n

Meat is especially important in shared meals.

\n

Hypothesis: meat hunting is where division of labour was first manifested.

\n

Monkey males share meat with females to get sex, consequently hunting meat more than would be worth it for such small successes otherwise.

\n

Hypothesis: humans do this too (some evidence that promiscuous natives hunt more), and the habit evolved into a sexual division of labour amongst married couples (long term relationships are usual in our species, but not in chimps). Males then benefit from division of labour, and also feeding their children.

\n

Hypothesis: sexual division of labour fundamental to our early success as a species – neither hunting or gathering would have done alone, but together with cooking it worked.

\n

Hypotheses: food sharing amongst non-relatives could have descended from when males of a tribe were mostly related, or from the more recent division of labour in couples.

\n

Chimps share and show reciprocity behaviour, but do not offer food voluntarily [doesn’t that suggest that in humans it’s not a result of marriage-related sexual division?]

\n

Why do hunter-gatherers share meat more, and share more on trips?

\n

Hypotheses: 1. meat is cooperatively caught, so have to share to continue cooperation. 2. High variance in meat catching – sharing gives stable supply.

\n

What stops free-riding then?

\n


\n

6.

\n

Mammoth hunting introduced humans to significant public goods. You can’t not share a mammoth, especially if others have spear throwers. [mammoth hunting should have started then when it became easier to kill a mammoth than to successfully threaten to kill a tribesman who killed a mammoth]

\n

Tolerated theft: the idea that people must share things where they can’t use all of them, and to prevent others from taking parts is an effort. That is, TT is what happens once you’ve caught a public good (e.g. mammoth). Evidence that this isn’t what happens in reality; division seems to be controlled. Probably reciprocity of some sort (argument over whether this is in the form of material goods or prestige and sex). Evidence against this too; idle men are allowed to share (if the trade is in sex, they aren’t the ones the trading is aimed at, and miss out on the sex trade).

\n

Alternative hypothesis: the mammoth is treated as a public good, but it is so big that it’s possible to sneak the best bits to girls and get sex.

\n

Trade across time (e.g. in large game) reduces exposure to fluctuations in meat.

\n

Hypothesis: hunter-gatherers are relatively idle because they have to share what they get, so stop getting things after their needs are fulfilled.

\n

Hypotheses: when people hoard money they are punished by their neighbours because they are defecting in the reciprocal sharing that usually takes place, yet they have no incentive to share if they have an improbably large windfall – the returned favours won’t be as good. Alternatively can be seen as tolerated theft: punishment for not sharing is an attempt to steal from huge good.

\n

When instincts for reciprocity are in place, gifts can be given ‘as weapons’. That is, to force future generosity from the recipient.

\n

Gift giving is less reciprocal (still prevalent, just not carefully equal) amongst human families than amongst human allies.

\n

Gifts can then also signal status; ostentatious generosity demands reciprocity – those who can’t lose face. The relative benefit of buying status this way depends on the goods – perishables may as well be grandly wasted. In this case reciprocity is zero sum: no benefits from division of labour, status cycle is zero sum.

\n


\n

7.

\n

Humans are better at solving the Wason selection task when it is framed in terms of noticing cheating than when it is framed in terms of other social contexts, or in abstract terms.

\n

Hypothesis: humans have an ‘exchange organ’ in their brains, which deals with calculating related to social contracts. This is unique amongst animals. Evidence: brain damage victims and patients who fail all other tests of intelligence except these, anthropomorphic attitudes to nature heavily involve exchange, anthropomorphizing of objects heavily involves exchange related social emotions (anger, praise).

\n

Moral sentiments appear irrational, but overcome short term personal interests for long term genetic gains.

\n

Commitment problem: when at least one side in a game has no credible threat if the other defects, how can cooperation occur? The other can’t prove they will commit. e.g. kidnap victim can’t prove she won’t go to police, so kidnapper must kill her even if both would be better off if he let her go in return for her silence.

\n

Various games have bad equilibria for rational players in one off situations, but emotions can change things. e.g. prisoners’ dilemma is solved if players have guilt and shame. Where player would be irrational to punish other for defection (punishment costly to implement, loss already occurred), anger produces credible threat (will punish in spite of self).
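
A toy calculation (my own, not from the book; the payoff numbers and the size of the guilt penalty are arbitrary) makes the point concrete: once defecting against a cooperator carries a psychic cost, cooperating becomes the best reply to cooperation, so mutual cooperation becomes an equilibrium.

```python
# Standard prisoners' dilemma payoffs, then the same game with a guilt
# penalty charged to anyone who defects against a cooperator.

RAW = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
       ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def with_guilt(payoffs, guilt=3):
    adjusted = {}
    for (m1, m2), (p1, p2) in payoffs.items():
        if m1 == 'D' and m2 == 'C':
            p1 -= guilt
        if m2 == 'D' and m1 == 'C':
            p2 -= guilt
        adjusted[(m1, m2)] = (p1, p2)
    return adjusted

def best_reply(payoffs, their_move):
    # Player 1's best response to a fixed move by player 2.
    return max(['C', 'D'], key=lambda m: payoffs[(m, their_move)][0])

print(best_reply(RAW, 'C'), best_reply(RAW, 'D'))        # D D: defection dominates
guilty = with_guilt(RAW)
print(best_reply(guilty, 'C'), best_reply(guilty, 'D'))  # C D: mutual cooperation is now an equilibrium
```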

\n

Many emotions serve to alter the rewards of commitment problems, by bringing forward costs and benefits.

\n

For this to work, emotions have to be hard to fake. Shouldn't defectors who are good at faking emotions invade a population of people who can't? No, because in the long run the people who can't fake emotions find each other and cooperate together. [that's what would happen anyway – you would cooperate the first time, then don't go back if the other defects. Commitment should be a problem largely in one-off games – are more emotions shown in those things? In one-off games you can't have the long run to find people and make good liars pay].

\n

Emotions make interests common, which stops prisoners’ dilemmas. Interests of genes are not common, so emotions must be shared with other emotional ones.

\n

Ultimatum game variations suggest that people are motivated more by reciprocity than by absolute fairness.

\n

People lacking social emotions due to brain damage are paralyzed by indecision as they try to rationally weigh information.

\n

We like and praise altruism much more than we practice it. Others' altruism and our looking altruistic are useful to us; so is our own selfishness. [why aren't people who behave like this invaded by slightly more altruistic ones who don't cooperate with them? Why is the equilibrium at being exactly as selfish as we are? Signaling means that everyone looks more altruistic than they are, so everyone is less altruistic than they would be if others were maximally altruistic?]

\n

Hypothesis: economics and evolutionary biology are held in distrust because talking about them doesn’t signal belief in altruism etc. Claiming that people or genes are selfish suggests that you are selfish.

\n


\n

8

\n

Cooperation began (or is used primarily in monkey society) in competition and aggression.

\n

The same ‘tricks’ will be discovered by evolution as by thought [if their different aims don’t matter], so if we share a behaviour with animals it’s not obvious that it’s evolved in our case, though often it is.

\n

Our ancestors were: social, hierarchical (especially amongst males), more egalitarian and with less rigid hierarchies than monkeys.

\n

Differences between primates:

\n

Monkey hierarchies rely on physical strength more than chimp ones, which rely on social manipulation.

\n

Baboons use cooperation to steal females from higher ranking males, chimps use it to change the social hierarchy.

\n

Chimp coalitions are reciprocal, unlike monkeys.

\n

Power and sexual success are had by coalitions of weaker individuals in chimps and humans.

\n

Bottlenose dolphins (the only species other than us with brain:body ratio bigger than chimps): males have coalitions of 2-3 which they use to kidnap females. All mate with her. These coalitions join to form super-coalitions to steal females from other coalitions. This is reciprocal (on winning, one coalition will leave the female with the other coalition in the super-coalition, in return for the favor next time)

\n

Second order alliances seem unique to dolphins and humans.

\n

Chimp males stay in a troop while females leave, with monkeys it is the other way around. Could be related to aggressive xenophobia of chimp males. Seems so in human societies: matrilineal societies are less fighty.

\n

Chimp groups, rather than individuals, possess territory (rare, but not unique: e.g. wolves).

\n

Hypothesis: this is an extension of the coalition building that occurs for gaining power in a group. Alpha males prevent conflict within the group, making it stable, which is good for all as they are safer from other groups if they stick together.

\n

Humans pursue status through fighting between groups, whereas chimps only do it within groups [how do you know?]

\n


\n

9

\n

Group selection can almost never happen.

\n

Large groups cooperating are often being directly selfish (safer in a shoal than alone).

\n

The 50:50 sex ratio is because individual selection is stronger than group selection. A group would do better by having far more females, yet a gene to produce males would make you replicate much faster, bringing the ratio back.
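
The arithmetic behind that line is Fisher's classic argument, and a tiny worked example (the population numbers are made up) shows it: in a female-biased population every child still has exactly one father, so each male averages more offspring than each female, and a parent biased toward sons gets more grandchildren.

```python
# Expected grandchildren for a parent, given the population-wide sex ratio.
# Each child in the next generation has one father and one mother, so the
# reproductive 'budget' is split evenly between the sexes as classes.

def expected_grandchildren(sons, daughters, pop_males, pop_females, next_gen_size):
    per_male = next_gen_size / pop_males       # average offspring per male
    per_female = next_gen_size / pop_females   # average offspring per female
    return sons * per_male + daughters * per_female

# A female-biased population: 300 males, 700 females, 1000 grandchildren in total.
print(expected_grandchildren(4, 0, 300, 700, 1000))  # ~13.3: the all-sons parent wins
print(expected_grandchildren(0, 4, 300, 700, 1000))  # ~5.7: the all-daughters parent loses
```

The advantage disappears only when the ratio reaches 50:50, which is why the ratio is pulled back there.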

\n

Humans appear to be an exception: groups that differ culturally, not genetically, compete.

\n

Conformism would allow group characteristics to persist long enough that there would be group selection before groups dissolved or were invaded by others’ ideas.

\n

Why would conformism evolve?

\n

Hypothesis: we have many niches which require different behaviors. If you move it’s beneficial to copy your behavior from your neighbors.

\n

Imitation should be more beneficial if there are more people doing it; better to copy something tried by many than the behavior of one other. How did it get started then?

\n

Hypothesis: seeing what is popular amongst many gives you information.

\n

Hypothesis: keeps groups together. If receptive to indoctrination about altruism we will find ourselves in more successful groups [I don’t follow this].

\n

Humans don’t actually live in groups; they just perceive everything in terms of them.

\n

A person's fate isn't tied to that of their group. They don't put the group's wellbeing first. They are groupish out of selfishness – it's not group selection.

\n

Ritual is universal, but details of it are particular.

\n

Hypothesis: Ritual is a means to conformity, which keeps groups together in conflict, so they survive [How would this begin? Why ritual? Why do they have to be different? Why is conformity necessary to keep groups together? Seems just that because we are used to conformity being linked to staying together, we assume one leads to the other].

\n

Music and religious belief seem to have similar group-binding properties.

\n

Cooperation within groups seems linked to xenophobia outside them [cooperation for safety in conflict is of course. What about cooperation for trade? Has that given us non-xenophobia induced cooperative feelings? Earlier chapters seemed to imply so].

\n


\n

10

\n

Weak evidence of trade 200,000 years ago – not clear when it started.

\n

Trade between groups is unique to humans.

\n

Trade is the glue of alliances between groups; it appears that some trade is just an excuse for this.

\n

Trading rules predate governments. Governments nationalize preexisting trading systems. e.g. 11th C Europe merchant courts [is this a general trend? why is everything in anecdotes? aargh].

\n

Speculation isn’t beneficial because there is no division of labour [?].

\n


\n

11

\n

Natives are not ecologically nice. They do not conserve game. They drove many species extinct.

\n

We tie environmentalism up with other morality [is it pro-social morality, as the book has been about, or purity?].

\n

As with other morality, we are more programmed to preach than to practice.

\n

It doesn’t look like people have an instinctive environmental ethic [it’s a big prisoners’ dilemma – can’t we make use of something else in our repertoire?].

\n


\n

12

\n

Property rights emerge unaided where it is possible to defend them [if you see a tragedy of the commons coming, best to draw up property rights – no reason you will be the free rider].

\n

Nationalization often turns property-divided ‘commons’ into a free for all, as the govt can’t defend it and nobody has reason to protect what they are stealing from.

\n

Ordered and successful systems can emerge without design. e.g. Bali subak traditions could have resulted from all copying any neighbour who did better than them.

\n

Lab experiment suggests that communication encourages a lot of cooperation in tragedy-of-the-commons games (better than the ability to fine defectors).

\n

If humans can arrange property rights unaided, why all the extinctions last chapter?

\n

Hypothesis: property rights can’t be enforced on moving things. Animals that could have property rights asserted on them did have in some cases. e.g. Beavers.

\n

Hoarding taboo (as a result of reciprocity instinct) is to blame for environmentalist dislike of privatisation as a solution.

\n

Hoarding isn’t allowed in primitive tribes, but as soon as more reliable lifestyle allows powerful individuals to do better by hoarding than relying on social insurance, they do. Yet we retain an aversion to it.

\n


\n

13

\n

Humans are born wanting to cooperate, discriminate trustworthiness, commit to trustworthiness, gain a reputation, exchange goods and info, and divide labour.

\n

There was morality before the church, trade before the state, exchange before money, social contracts before Hobbes, welfare before rights, culture before Babylon, society before Greece, self interest before Adam Smith and greed before capitalism.

\n

Also, the tendency toward xenophobic groups is well inbuilt.

\n

How can we make use of our instincts in designing institutions?

\n

Trust is fundamental to cooperative parts of human nature being used.

\n

This has been part of an endless argument about the perfectibility of man, famously between Hobbes and Rousseau. Also about how malleable human nature is. [The book goes into detail about the argument over the centuries, but it’s an irrelevant story to me].

\n

To say that humans are selfish, especially that their virtue is selfish, is unpopular, supposedly because saying so encourages it.

\n

Big state doesn’t make bargains with the individual, engendering responsibility, reciprocity, duty, pride – it uses authority. How do people respond to authority?

\n

The welfare state replaces existing community institutions that were based on reciprocity, encouraged useful feelings, and had built up trust over the years, with centralised replacements like the National Health Service. Mandatory donation → reluctance, resentment. Client feelings changed from gratitude to apathy, anger, and a drive to exploit the system.

\n

∴ Government makes people more selfish, not less.

\n

We must encourage material and social exchange between people, for that is what breeds trust, and trust is what breeds virtue.

\n

Our institutions are largely upshots of human nature, not serious attempts to make the best of it.

\n


" } }, { "_id": "XqmjdBKa4ZaXJtNmf", "title": "Raising the Sanity Waterline", "pageUrl": "https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline", "postedAt": "2009-03-12T04:28:49.168Z", "baseScore": 247, "voteCount": 217, "commentCount": 233, "url": null, "contents": { "documentId": "XqmjdBKa4ZaXJtNmf", "html": "

To paraphrase the Black Belt Bayesian:  Behind every exciting, dramatic failure, there is a more important story about a larger and less dramatic failure that made the first failure possible.

\n

If every trace of religion was magically eliminated from the world tomorrow, then—however much improved the lives of many people would be—we would not even have come close to solving the larger failures of sanity that made religion possible in the first place.

\n

We have good cause to spend some of our efforts on trying to eliminate religion directly, because it is a direct problem.  But religion also serves the function of an asphyxiated canary in a coal mine—religion is a sign, a symptom, of larger problems that don't go away just because someone loses their religion.

\n

Consider this thought experiment—what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, which would cause them to lose their religions?  In fact—imagine that we're going to go and survey all your students five years later, and see how many of them have lost their religions compared to a control group; if you make the slightest move at fighting religion directly, you will invalidate the experiment.  You may not make a single mention of religion or any religious belief in your classroom, you may not even hint at it in any obvious way.  All your examples must center about real-world cases that have nothing to do with religion.

\n

If you can't fight religion directly, what do you teach that raises the general waterline of sanity to the point that religion goes underwater?

\n

Here are some such topics I've already covered—not avoiding all mention of religion, but it could be done:

\n\n

But to look at it another way—

\n

Suppose we have a scientist who's still religious, either full-blown scriptural-religion, or in the sense of tossing around vague casual endorsements of \"spirituality\".

\n

We now know this person is not applying any technical, explicit understanding of...

\n\n

When you consider it—these are all rather basic matters of study, as such things go.  A quick introduction to all of them (well, except naturalistic metaethics) would be... a four-credit undergraduate course with no prerequisites?

\n

But there are Nobel laureates who haven't taken that course!  Richard Smalley if you're looking for a cheap shot, or Robert Aumann if you're looking for a scary shot.

\n

And they can't be isolated exceptions.  If all of their professional compatriots had taken that course, then Smalley or Aumann would either have been corrected (as their colleagues kindly took them aside and explained the bare fundamentals) or else regarded with too much pity and concern to win a Nobel Prize.  Could you—realistically speaking, regardless of fairness—win a Nobel while advocating the existence of Santa Claus?

\n

That's what the dead canary, religion, is telling us: that the general sanity waterline is currently really ridiculously low.  Even in the highest halls of science.

\n

If we throw out that dead and rotting canary, then our mine may stink a bit less, but the sanity waterline may not rise much higher.

\n

This is not to criticize the neo-atheist movement.  The harm done by religion is a clear and present danger, or rather, a current and ongoing disaster.  Fighting religion's directly harmful effects takes precedence over its use as a canary or experimental indicator.  But even if Dawkins, and Dennett, and Harris, and Hitchens should somehow win utterly and absolutely to the last corner of the human sphere, the real work of rationalists will be only just beginning.

" } }, { "_id": "ZiQqsgGX6a42Sfpii", "title": "The Apologist and the Revolutionary", "pageUrl": "https://www.lesswrong.com/posts/ZiQqsgGX6a42Sfpii/the-apologist-and-the-revolutionary", "postedAt": "2009-03-11T21:39:47.614Z", "baseScore": 267, "voteCount": 228, "commentCount": 101, "url": null, "contents": { "documentId": "ZiQqsgGX6a42Sfpii", "html": "

Rationalists complain that most people are too willing to make excuses for their positions, and too unwilling to abandon those positions for ones that better fit the evidence. And most people really are pretty bad at this. But certain stroke victims called anosognosiacs are much, much worse.

Anosognosia is the condition of not being aware of your own disabilities. To be clear, we're not talking minor disabilities here, the sort that only show up during a comprehensive clinical exam. We're talking paralysis or even blindness[1]. Things that should be pretty hard to miss.

Take the example of the woman discussed in Lishman's Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient \"turned her head and searched in a bemused way over her left shoulder\".

Why won't these patients admit they're paralyzed, and what are the implications for neurotypical humans? Dr. Vilayanur Ramachandran, leading neuroscientist and current holder of the world land-speed record for hypothesis generation, has a theory.

\n

\n

One immediately plausible hypothesis: the patient is unable to cope psychologically with the possibility of being paralyzed, so he responds with denial. Plausible, but according to Dr. Ramachandran, wrong. He notes that patients with left-side strokes almost never suffer anosognosia, even though the left side controls the right half of the body in about the same way the right side controls the left half. There must be something special about the right hemisphere.

Another plausible hypothesis: the part of the brain responsible for thinking about the affected area was damaged in the stroke. Therefore, the patient has lost access to the area, so to speak. Dr. Ramachandran doesn't like this idea either. The lack of right-sided anosognosia in left-hemisphere stroke victims argues against it as well. But how can we disconfirm it?

Dr. Ramachandran performed an experiment[2] where he \"paralyzed\" an anosognosiac's good right arm. He placed it in a clever system of mirrors that caused a research assistant's arm to look as if it was attached to the patient's shoulder. Ramachandran told the patient to move his own right arm, and the false arm didn't move. What happened? The patient claimed he could see the arm moving - a classic anosognosiac response. This suggests that the anosognosia is not specifically a deficit of the brain's left-arm monitoring system, but rather some sort of failure of rationality.

\n

Says Dr. Ramachandran:

\n
\n

The reason anosognosia is so puzzling is that we have come to regard the 'intellect' as primarily propositional in character and one ordinarily expects propositional logic to be internally consistent. To listen to a patient deny ownership of her arm and yet, in the same breath, admit that it is attached to her shoulder is one of the most perplexing phenomena that one can encounter as a neurologist.

\n
\n

So what's Dr. Ramachandran's solution? He posits two different reasoning modules located in the two different hemispheres. The left brain tries to fit the data to the theory to preserve a coherent internal narrative and prevent a person from jumping back and forth between conclusions upon each new data point. It is primarily an apologist, there to explain why any experience is exactly what its own theory would have predicted. The right brain is the seat of the second virtue. When it's had enough of the left-brain's confabulating, it initiates a Kuhnian paradigm shift to a completely new narrative. Ramachandran describes it as \"a left-wing revolutionary\".

Normally these two systems work in balance. But if a stroke takes the revolutionary offline, the brain loses its ability to change its mind about anything significant. If your left arm was working before your stroke, the little voice that ought to tell you it might be time to reject the \"left arm works fine\" theory goes silent. The only one left is the poor apologist, who must tirelessly invent stranger and stranger excuses for why all the facts really fit the \"left arm works fine\" theory perfectly well.

It gets weirder. For some reason, squirting cold water into the left ear canal wakes up the revolutionary. Maybe the intense sensory input from an unexpected source makes the right hemisphere unusually aroused. Maybe distorting the balance sense causes the eyes to move rapidly, activating a latent system for inter-hemisphere co-ordination usually restricted to REM sleep[3]. In any case, a patient who has been denying paralysis for weeks or months will, upon having cold water placed in the ear, admit to paralysis, admit to having been paralyzed the past few weeks or months, and express bewilderment at having ever denied such an obvious fact. And then the effect wears off, and the patient not only denies the paralysis but denies ever having admitted to it.

This divorce between the apologist and the revolutionary might also explain some of the odd behavior of split-brain patients. Consider the following experiment: a split-brain patient was shown two images, one in each visual field. The left hemisphere received the image of a chicken claw, and the right hemisphere received the image of a snowed-in house. The patient was asked verbally to describe what he saw, activating the left (more verbal) hemisphere. The patient said he saw a chicken claw, as expected. Then the patient was asked to point with his left hand (controlled by the right hemisphere) to a picture related to the scene. Among the pictures available were a shovel and a chicken. He pointed to the shovel. So far, no crazier than what we've come to expect from neuroscience.

Now the doctor verbally asked the patient to describe why he just pointed to the shovel. The patient verbally (left hemisphere!) answered that he saw a chicken claw, and of course shovels are necessary to clean out chicken sheds, so he pointed to the shovel to indicate chickens. The apologist in the left-brain is helpless to do anything besides explain why the data fits its own theory, and its own theory is that whatever happened had something to do with chickens, dammit!

The logical follow-up experiment would be to ask the right hemisphere to explain the left hemisphere's actions. Unfortunately, the right hemisphere is either non-linguistic or as close as to make no difference. Whatever its thoughts, it's keeping them to itself.

...you know, my mouth is still agape at that whole cold-water-in-the-ear trick. I have this fantasy of gathering all the leading creationists together and squirting ice cold water in each of their left ears. All of a sudden, one and all, they admit their mistakes, and express bafflement at ever having believed such nonsense. And then ten minutes later the effect wears off, and they're all back to talking about irreducible complexity or whatever. I don't mind. I've already run off to upload the video to YouTube.

This is surely so great an exaggeration of Dr. Ramachandran's theory as to be a parody of it. And in any case I don't know how much to believe all this about different reasoning modules, or how closely the intuitive understanding of it I take from his paper matches the way a neuroscientist would think of it. Are the apologist and the revolutionary active in normal thought? Do anosognosiacs demonstrate the same pathological inability to change their mind on issues other than their disabilities? What of the argument that confabulation is a rather common failure mode of the brain, shared by some conditions that have little to do with right-hemisphere failure? Why does the effect of the cold water wear off so quickly? I've yet to see any really satisfying answers to any of these questions.

\n

But whether Ramachandran is right or wrong, I give him enormous credit for doing serious research into the neural correlates of human rationality. I can think of few other fields that offer so many potential benefits.

\n

 

\n

Footnotes

\n

1: See Anton-Babinski syndrome

\n

2: See Ramachandran's \"The Evolutionary Biology of Self-Deception\", the link from \"posits two different reasoning modules\" in this article.

\n

3: For Ramachandran's thoughts on REM, again see \"The Evolutionary Biology of Self Deception\"

" } }, { "_id": "wPbxcq6MdexECaXvM", "title": "Beginning at the Beginning", "pageUrl": "https://www.lesswrong.com/posts/wPbxcq6MdexECaXvM/beginning-at-the-beginning", "postedAt": "2009-03-11T19:23:23.651Z", "baseScore": 5, "voteCount": 27, "commentCount": 60, "url": null, "contents": { "documentId": "wPbxcq6MdexECaXvM", "html": "

I can't help but notice that some people are utilizing some very peculiar and idiosyncratic meanings for the word 'rational' in their posts and comments.  In many instances, the correctness of rationality is taken for granted; in others, the process of being rational is not only ignored, but dispensed with altogether, and rational is defined as 'that which makes you win'.

\n

That's not a very useful definition.  If I went to someone looking for help selecting between options, and was told to choose \"the best one\", or \"the right one\", or \"the one that gives you the greatest chance of winning\", what help would I have received?  If I had clear ideas about how to determine the best, the right, or the one that would win, I wouldn't have come looking for help in the first place.  The responses provide no operational assistance.

\n

There is a definite lack of understanding here of what rationality is, much less why it is correct, and this general incomprehension can only cripple attempts to discuss its nature or how to apply it. We might think that this site would try to dispel the fog surrounding the concept.  Remarkably, a blog established to help with \"refining the art of human rationality\" neither explains nor defines rationality.

\n

Those are absolutely critical goals if lesswrong is to accomplish what it advertises itself as attempting.  So let's try to reach them.

\n
\n

The human mind is at the same time both extremely sophisticated and shockingly primitive.  Most of its operations take place beneath the level of explicit awareness; we don't know how we reach conclusions and make decisions, we're merely presented with the results along with an emotional sense of rightness or confidence.

\n

Despite these emotional assurances, we sometimes suspect that such feelings are unfounded.  Careful examination shows that to be precisely the case.  We can and do develop confidence in results, not because they are reliable, but for a host of other reasons. 

\n

Our approval or disapproval of some properties can cross over into our evaluation of others.  We can fall prey to shortcuts while believing that we've been thorough.  We tend to interpret evidence in terms of our preferences, perceiving what we want to perceive and screening out evidence we find inconvenient or uncomfortable.  Sometimes, we even construct evidence out of whole cloth to support something we want to be true.

\n

It's very difficult to detect these flaws in ourselves as we make them.  It is somewhat easier to detect them in others, or in hindsight while reflecting upon past decisions which we are no longer strongly emotionally involved in.  Without knowing how our decisions are reached, though, we're helpless in the face of impulses and feelings of the moment, even while we're ultimately skeptical about how our judgment functions.

\n

So how can we try to improve our judgment if we don't even know what it's doing?

\n

How did Aristotle establish the earliest-known examination of the principles of justification?  If he originated the foundation of the systems we know as *logic*, how could that be accomplished without the use of logic?

\n

As Aristotle noted, the principles he made into a set of formal rules already existed.  He observed the arguments of others, noting how people defended positions and attacked the positions of others, and how certain arguments had flaws that could be pointed out while others seemed to possess no counters.  His attempts to organize people's implicit understandings of the validity of arguments led to an explicit, formal system.  The principles of logic were implicit before they were understood explicitly.

\n

The brain is capable of performing astounding feats of computation, but our conscious grasp of mathematics is emulated and lacks the power of the system that creates it.  We can intuitively comprehend how a projectile will move from just a glimpse of its trajectory, although solving the explicit differential equations that describe that motion is terrifically difficult, and virtually impossible to accomplish in real-time.  Yet our explicit grasp of mathematics makes it possible for us to solve problems and comprehend ideas completely beyond the capacity of our hunter-gatherer ancestors, even though the processing power of our brains does not appear to have changed from those early days.

\n

In the same way, our models of what proper thought means give us options and opportunities far beyond what our intuitive, unconscious reasoning makes possible, even though the conscious understanding works with far fewer resources than the unconscious.

\n

When we consciously and deliberately model the evolution of one statement into another according to elementary rules that make up the foundation of logical consistency, something new and exciting happens.  The self-referential aspects of that modeling permit us to compare the decisions presented to us by the parts of our minds beneath the threshold of our awareness and override them.  We can evaluate our own evaluations, reaching conclusions that our emotions don't lead us to and rejecting some of those that they do.

\n

That's what rationality is:  having explicit and conscious standards of validity, and applying them in a systematic way.  It doesn't matter if we possess an inner conviction that something is true - if we can't demonstrate that it can be generated from basic principles according to well-defined rules, it's not valid.

\n

What makes this so interesting is that it's self-correcting.  If we observe an empirical relationship that our understanding doesn't predict, we can treat it as a new fact.  For example, let's say that we find that certain manipulations of tarot decks permit us to predict the weather, even though we have no idea of why the two should be correlated at all.  With rationality, we don't need to know why.  Once we've recognized that the relationship exists, it becomes rational for us to use it.  Likewise, if a previously-useful relationship suddenly ceases to be, even though we have no theoretical grounds for expecting that to happen, we simply acknowledge the fact.  Once we've done so, we can justify ignoring that which we previously considered to be evidence.

\n

Human reasoning is especially plagued by superstitions, because it's easy for us to accept contradictory principles without acknowledging the inconsistency.  But when we're forced to construct step-by-step justifications for our beliefs, contradiction is thrown into sharp relief, and can't be ignored.

\n

Arguments that are not made explicitly, with conscious awareness of how each point is derived from fundamental principles and empirical observations, may or may not be correct.  But they're never rational.  Rational reasoning does not guarantee correctness; rational choice does not guarantee victory.  What rationality offers is self-knowledge of validity.  If rational standards are maintained when thinking, the best choice as defined by the knowledge we possess will be made.  Whether it will be best when we gain new knowledge, or in some absolute sense, is unknown and unknowable until that moment comes.

\n

Yet those who speak here often of the value of human rationality frequently don't do so by rational means.  They make implicit arguments with hidden assumptions and do not acknowledge or clarify them.  They emphasize the potential for rationality to bootstrap itself to greater and greater levels of understanding, yet don't concern themselves with demonstrating that their arguments arise from the most basic elements of reason.  Rationality starts when we make a conscious attempt to understand and apply those basic elements, to emulate in our minds the principles that make the existence of our minds possible.

\n

Are we doing so?

" } }, { "_id": "qmufiasd6cevHRcr3", "title": "Adversarial System Hats", "pageUrl": "https://www.lesswrong.com/posts/qmufiasd6cevHRcr3/adversarial-system-hats", "postedAt": "2009-03-11T16:56:05.745Z", "baseScore": 8, "voteCount": 15, "commentCount": 15, "url": null, "contents": { "documentId": "qmufiasd6cevHRcr3", "html": "

In Reply to: Rationalization, Epistemic Handwashing, Selective Processes

\n

Eliezer Yudkowsky wrote about scientists defending pet hypotheses, and prosecutors and defenders as examples of clever rationalization. His primary focus was advice to the well-intentioned individual rationalist, which is excellent as far as it goes. But Anna Salamon and Steve Rayhawk ask how a social system should be structured for group rationality.

\n

The adversarial system is widely used in criminal justice. In the legal world, roles such as Prosecution, Defense, and Judge are all guaranteed to be filled, with roughly the same amount of human effort applied to each side. Suppose individuals choose their own roles. It is possible that one role turns out to be more popular. Because different amounts of effort are applied to different sides, selecting for the positions with the strongest arguments will no longer much select for positions that are true.

\n

\n

One role might be more popular because of an information cascade: individuals read the extant arguments and then choose a role, striving to align themselves with the truth, and create arguments for that position. Alternately, a role may be popular due to status-based affiliation, or striving to be on the \"winning\" side.

\n

I'm well aware that there are vastly more than two sides to most questions. Imagine a list of rationalist roles something like IDEO's \"Ten Faces\".

\n

Example rationalist roles, leaving the obvious ones for last:

\n\n

Due to natural group phenomena (cascades, affiliation), in order to achieve group rationality, there need to be social structures that strive to prevent those natural phenomena. Roles might help.

" } }, { "_id": "HThGHPeLhe7uy8FEx", "title": "Selective processes bring tag-alongs (but not always!)", "pageUrl": "https://www.lesswrong.com/posts/HThGHPeLhe7uy8FEx/selective-processes-bring-tag-alongs-but-not-always", "postedAt": "2009-03-11T08:17:12.726Z", "baseScore": 39, "voteCount": 39, "commentCount": 5, "url": null, "contents": { "documentId": "HThGHPeLhe7uy8FEx", "html": "

by Anna Salamon and Steve Rayhawk (joint authorship)

\n

Related to: Conjuring An Evolution To Serve You, Disguised Queries 

\n

Let’s say you have a bucket full of “instances” (e.g., genes, hypotheses, students, foods), and you want to choose a good one.  You fish around in the bucket, draw out the first 10 instances you find, and pick the instance that scores highest on some selection criterion.

\n

For example, perhaps your selection criterion is “number of polka dots”, and you reach into the bucket pictured below, and you draw out 10 instances.  What do you get?  Assuming some instances have more polka dots than others, you get instances with an above-average expected number of polka dots.  The point I want to dwell on, though -- which is obvious when you think about it, but which sheds significant light on everyday phenomena -- is that you don’t get instances that are just high in polka dots.  You get instances that are also high in every trait that correlates with having the most polka dots.

\n

[Figure: a bucket of instances with varying numbers of polka dots; the instances with the most dots are also red.]

\n

For example, in the bucket above, selecting for instances that have many polka dots implies inadvertently selecting for instances that are red.  Selective processes bring tag-alongs, and the specific tag-alongs that you get (redness, in this case) depend on both the trait you’re selecting for, and the bucket from which you’re selecting.
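A minimal simulation sketch of this point (illustrative only; the distributions and the 0.8 correlation are assumptions, not from the post): generate instances whose redness correlates with their dot count, keep the dottiest of each sample of 10, and compare the redness of the selected instances with the bucket's baseline.

    import random

    def make_instance():
        dots = random.gauss(10, 3)                 # trait we select on
        redness = 0.8 * dots + random.gauss(0, 1)  # correlated tag-along trait
        return {"dots": dots, "redness": redness}

    def best_of(n=10):
        sample = [make_instance() for _ in range(n)]
        return max(sample, key=lambda inst: inst["dots"])

    def mean(xs):
        return sum(xs) / len(xs)

    baseline = [make_instance() for _ in range(10_000)]
    selected = [best_of() for _ in range(10_000)]

    print("baseline redness:", mean([i["redness"] for i in baseline]))
    print("selected-for-dots redness:", mean([i["redness"] for i in selected]))
    # Selecting only on dot count also raises redness, because the bucket
    # correlates the two traits.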

\n

Nearly all cases of useful selection (e.g., evolution, science) would be unable to produce the cool properties they produce (complex order in organisms, truth in theories) if they didn’t have particular, selection-friendly types of buckets, in addition to good selection criteria.  Zoom in carefully enough, and nearly all of the traits one gets by selection can be considered tag-alongs.  Conversely, if you are consciously selecting entities from buckets with a particular aim in view, you may want to consciously safeguard the “selection-friendliness” of the buckets you are using.

\n

Some examples:

\n

Algebra test:

\n

Let’s say you’re trying to select for students who know algebra.  So you write out an algebra exam, E, find ten students at random from your pool, and see which one does best on exam E.  For many sorts of buckets of students, this procedure should work: selecting for the criterion “does well on algebra exam E” will give you more than just that criterion.  It’ll give you the tag-along property “does well on other algebra problems” or “understands algebra”.

\n

But not for all buckets.  If, for example, you release the test questions ahead of time, you’re liable to end up with a bucket of students for which the tag-along property which “does well on algebra exam E” gives you is only “memorized the answers to exam E”, and not “understands algebra”.

\n

Taste and nutrition:

\n

Or, again, perhaps you’re designing a test for healthy foods.  In a hunter-gatherer environment, human tastes are perhaps not a bad indicator.  Grab 10 foodstuffs at random from your bucket, select for “best tasting”, and you’re liable to get “above average nutrition” as a tag-along.

\n

But for the buckets of food-choices created by modern manufacturing (now that we know chemistry, and we can create compounds on purpose that trigger just those sensory mechanisms that signal “good taste”), selecting for taste no longer selects for nutrition (at least, not as much).

\n

Defensive mimicry:

\n

Selecting against prey items that have warning coloration (particular patterns of bright color, as on poison arrow frogs) will select against poisonous prey items, among some buckets of possible prey.  But as other species evolve to mimic the warning coloration pattern, the bucket of prey items changes, and “poisonous” becomes less tied to the indicator trait “such-and-such a coloration pattern”.

\n

Choosing coins that come up heads:

\n

If you have a bucket of fair coins, and you flip 10 of them several times and choose the one that comes up heads the most, this one will be no likelier than average to come up heads in later rounds.  If you have a bucket where half the coins are two-headed and the other half are two-tailed, then selecting for even a single head is enough to give you a guarantee of heads on every future coin-toss.  And if you have a bucket of mixed fair and unfair coins, then selecting for heads helps, but more weakly.
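The three buckets are easy to simulate; here is a rough sketch (the bucket compositions are my own toy choices), with each coin represented by its probability of landing heads.

    import random

    def best_of_ten(bucket, flips=5):
        # Draw 10 coins from the bucket, flip each a few times,
        # and keep the one that showed the most heads.
        coins = [random.choice(bucket) for _ in range(10)]
        return max(coins, key=lambda p: sum(random.random() < p for _ in range(flips)))

    def future_heads_rate(bucket, trials=20_000):
        # How often does a coin selected this way come up heads on a later flip?
        return sum(random.random() < best_of_ten(bucket) for _ in range(trials)) / trials

    fair_bucket  = [0.5]            # all fair: selection buys nothing (~0.5)
    two_sided    = [0.0, 1.0]       # half two-headed, half two-tailed (~1.0)
    mixed_bucket = [0.3, 0.5, 0.7]  # a spread of biases: selection helps, weakly

    for bucket in (fair_bucket, two_sided, mixed_bucket):
        print(bucket, round(future_heads_rate(bucket), 3))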

\n

Nassim Taleb argues that the financial success of particular traders is similar to the success of particular coins from the fair coins bucket -- selecting for “traders who were successful in the past” doesn’t give you traders who are particularly likely to be successful in the future.  Most traders who obtain above-market returns in a particular timespan are, on Taleb’s analysis, traders with ordinary judgment who put their money on a short string of lucky strategies (“buy US real estate”).  The bucket of traders doesn’t have the type of structure that allows “future success” to be predicted from “past success”.

\n

In the “Perfect Prediction Scam”, crooks send out all possible prediction-sequences for a small number of e.g. sports outcomes, with each prediction-sequence going to a different set of people.  Some string of predictions inevitably does well, allowing the crooks to make money off the proven success of their “psychic powers” or “sophisticated computer models” -- but of course, with a bucket like this, the recipients of the correct prediction-sequence are no likelier than average to receive accurate predictions in the next iteration.
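The arithmetic of the scam is simple (the starting mailing-list size below is made up for illustration): covering every sequence of n binary outcomes takes 2**n letters, and each round halves the set of recipients who have seen only correct "predictions".

    recipients = 1_000_000   # hypothetical initial mailing list
    for week in range(1, 11):
        recipients //= 2     # only the half sent the correct prediction stays impressed
        print(f"week {week}: {recipients:,} recipients with a perfect record so far")
    # After 10 weekly predictions, roughly a thousand people have seen the
    # "psychic" get everything right, purely by exhaustion of the 2**10 sequences.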

\n

Selecting alleles, within biological evolution:

\n

Alleles are selected based on their differential reproductive impact on one generation -- whether they help a given animal flee from this particular tiger.  But it turns out that alleles that help animals flee from this tiger often also help animals flee from other, future tigers.  One can imagine a (weird, physically implausible) biology in which alleles that help one flee this particular tiger are not more likely to help one flee other tigers.  Technically speaking, the ability to “flee from other, future tigers” is a tag-along, given to animals by the trait-correlations in evolution’s buckets.

\n

With traits as closely linked as “fleeing this tiger” and “fleeing other, future tigers”, it may seem odd to give credit to the bucket of alleles for the fact that selecting for the one trait often also gives you the other trait.  One may be tempted to explain our modern tendency to flee tigers by talking solely about “selection”, with no mention of  what sort of buckets evolution was pulling traits from.  But there are plenty of recurring biological traits that one does intuitively regard as due to co-incidences in evolution’s buckets.  Whiteness of bones does not in itself much impact fitness, but it so happens that when evolution selected for bones with advantageous structural properties, it ended up also selecting for bones with a particular, fairly uniform, color.

\n

Moreover, the traits that tag along with particular selection criteria vary from species to species.  Depending on evolution’s starting-point, selecting for particular properties will sometimes simultaneously select for particular other traits, and sometimes not.  If you take a group of migratory moths, and select for “ability to find the egg-laying site”, you will improve their ability to find a single, particular location.  If you take a group of hominids and select for “ability to find their way home”, you may instead improve their general navigational ability.  (Similarly, selective pressures for bees to communicate with other bees produced a fixed, species-universal communication system, while selective pressures on hominids to communicate with other hominids produced a system for learning any language that fits a particular, species-universal language template.  Species such as hominids that have large brains, and that can therefore have their brains tweaked in different directions by particular chance alleles, create different types of variation-buckets for evolution to select within -- perhaps buckets whose tag-alongs more often include “general” abilities.)

\n

So... it is naively tempting to view traits as “due to selection” or “due to chance correlations in evolution’s buckets”.  One might imagine “running from tigers” as a single, unified property that of course you should get “from selection”, when you select for running from past tigers.  One might imagine bones’ whiteness as a “mere chance” property that you get “from the correlations in the bucket”, based on the accident that candidate-bones with good structural properties also happen to have a certain color.  But I suspect that a person who solidly understood evolution would see all biological traits (white bones; navigational ability in moths and hominids; tiger fleeing) as due to a sort of tagging-along process that depends on both the traits being selected for, and the buckets of alleles selected among.  The effective categories available to evolution (like “tiger-fleeing”, or “finding your way to the nest site” vs. “finding your way anywhere”) are somehow in the buckets... so how did they get in there?  And what kind of categories land in naturally occurring buckets?  More on this in later posts.

\n

Reason vs. rationalization:

\n

If you search out the best arguments for each position and exert equal strength on each search, then selecting for the position for which you found the strongest arguments will often also select for positions that are true, as a tag-along.

\n

If you instead find the best arguments you can for your initial opinions, and only allow yourself to notice weak evidence for your opponents’ positions -- if you avoid your beliefs' real weak points -- then selecting for the positions for which you found the strongest arguments will no longer much select for positions that are true.

\n

Reasoning produces more “selection-friendly” buckets of arguments than does rationalization (for a person seeking truth).

\n

 

\n

(The above might not seem as though it has that much to do with rationality.  But it lays groundwork for a couple other posts I want to do, that help explain where categories come from and what kind of use we do and can make of categories, and of generalization from past to future data-sets, in science.)

" } }, { "_id": "J2sr4tdC4ThrbJRR3", "title": "Wanted: Python open source volunteers", "pageUrl": "https://www.lesswrong.com/posts/J2sr4tdC4ThrbJRR3/wanted-python-open-source-volunteers", "postedAt": "2009-03-11T04:59:40.459Z", "baseScore": 17, "voteCount": 17, "commentCount": 9, "url": null, "contents": { "documentId": "J2sr4tdC4ThrbJRR3", "html": "

Less Wrong is a fork of the open Reddit codebase, written in Python.  Less Wrong's code is online at Github (look in r2/r2 for the meat), the issues tracker is at code.google.com.  See Contributing to Less Wrong for a gentle introduction to getting started.

\n

According to Reddit's blog, we are the coolest use of reddit source code they've seen!  But we've still got a long way to go before we're as cool as we want to be.

\n

If anyone out there is fluent in Python and willing to donate a noticeable amount of time to a good cause, an extra hand or two might help us implement many Less Wrong features a lot sooner.

\n

The Reddit codebase does not have unit tests or a whole lot of documentation.  Contributors need to be able to wade through Reddit's code to grok it, and write unit tests for what they do (or better yet, write unit tests for existing code).

\n

Items on the issues tracker marked \"Contributions-Welcome\" are those that look relatively easy to contribute.  Items marked \"Contributions-LeaveItToUs\" are those that look big and complicated, or that Tricycle (the main developers) have strong opinions about how to design and implement.  You could hack the big ones, but the developers might need to spend time talking to you - so please don't step up unless you're fluent in Python, have the necessary time, and are serious about it.

\n

An example of a Welcome contribution would be having the registration page explain what is a valid username, or making sure that any HTML generated automatically in comments is also legal to enter directly (like <a href></a>).
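For a sense of what the unit-testing side of such a contribution might look like, here is a minimal sketch. Both `is_valid_username` and the rules it enforces are hypothetical stand-ins, not the actual functions or policies of the Reddit/Less Wrong codebase.

    import unittest

    def is_valid_username(name):
        # Hypothetical rules for illustration: 3-20 characters,
        # letters, digits, underscores, and hyphens only.
        stripped = name.replace("_", "").replace("-", "")
        return 3 <= len(name) <= 20 and stripped.isalnum()

    class TestUsernameValidation(unittest.TestCase):
        def test_accepts_reasonable_names(self):
            self.assertTrue(is_valid_username("less_wrong-reader"))

        def test_rejects_bad_names(self):
            self.assertFalse(is_valid_username("ab"))          # too short
            self.assertFalse(is_valid_username("has spaces"))  # illegal character

    if __name__ == "__main__":
        unittest.main()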

\n

An example of a LeaveItToUs contribution would be the future Tag and Sequence system - a next/prev tracking system for tags (so that you can navigate through via \"next in self_deception\" / \"prev in self_deception\" arrows); a system that lets authors create sequences which behave like tags but are owned by that author; and RSS feeds for tags and sequences that feed a specified number of posts per day to new users trying to catch up.  An experienced volunteer Python dev with enough block hours free to run out and do this would be very welcome.

\n

Items like making the site fluid-width or importing old posts from Overcoming Bias are probably best left to the current site designers.

\n

Hopefully this gives you an idea of what kind of work needs doing.  See the issues tracker for more.

" } }, { "_id": "MYXrsBzXNPqTNarqQ", "title": "Software tools for community truth-seeking", "pageUrl": "https://www.lesswrong.com/posts/MYXrsBzXNPqTNarqQ/software-tools-for-community-truth-seeking", "postedAt": "2009-03-10T13:20:32.081Z", "baseScore": 2, "voteCount": 14, "commentCount": 14, "url": null, "contents": { "documentId": "MYXrsBzXNPqTNarqQ", "html": "

In reply to: Community Epistemic Practice

\n

There are software tools, possibly helpful for community truth-seeking. For example, truthmapping.com is described very well here. Also, debategraph.org, and I'm sure there are others.

\n

 

" } }, { "_id": "Cxcormwz6jb98gGzW", "title": "Striving to Accept", "pageUrl": "https://www.lesswrong.com/posts/Cxcormwz6jb98gGzW/striving-to-accept", "postedAt": "2009-03-09T23:29:46.758Z", "baseScore": 49, "voteCount": 44, "commentCount": 38, "url": null, "contents": { "documentId": "Cxcormwz6jb98gGzW", "html": "

Reply to: The Mystery of the Haunted Rationalist
Followup to: Don't Believe You'll Self-Deceive

\n

Should a rationalist ever find themselves trying hard to believe something?

\n

You may be tempted to answer \"No\", because \"trying to believe\" sounds so stereotypical of Dark Side Epistemology.  You may be tempted to reply, \"Surely, if you have to try hard to believe something, it isn't worth believing.\"

\n

But Yvain tells us that - even though he knows damn well, on one level, that spirits and other supernatural things are not to be found in the causal closure we name \"reality\" - and even though he'd bet $100 against $10,000 that an examination would find no spirits in a haunted house - he's pretty sure he's still scared of haunted houses.

\n

Maybe it's okay for Yvain to try a little harder to accept that there are no ghosts, since he already knows that there are no ghosts?

\n

In my very early childhood I was lucky enough to read a book from the children's section of a branch library, called \"The Mystery of Something Hill\" or something.  In which one of the characters says, roughly:  \"There are two ways to believe in ghosts.  One way is to fully believe in ghosts, to look for them and talk about them.  But the other way is to half-believe - to make fun of the idea of ghosts, and talk scornfully of ghosts; but to break into a cold sweat when you hear a bump in the night, or be afraid to enter a graveyard.\"

\n

I wish I remembered the book's name, or the exact quote, because this was one of those statements that sinks in during childhood and remains a part of you for the rest of your life.  But all I remember was that the solution to the mystery had to do with hoofbeats echoing from a nearby road.

\n

So whenever I found something that I knew I shouldn't believe, I also tried to avoid half-believing; and I soon noticed that this was the harder part of the problem.  In my childhood, I cured myself of the fear of the dark by thinking:  If I'm going to think magically anyway, then I'll pretend that all the black and shapeless, dark and shadowy things are my friends.  Not quite the way I would do it nowadays, but it worked to counteract the half-belief, and in not much time I wasn't thinking about it at all.

\n

Considerably later in my life, I realized that I was having a problem with half-believing in magical thinking - that I would sometimes try to avoid visualizing unpleasant things, from half-fear that they would happen.  If, before walking through a door, I visualized that a maniac had chosen that exact moment to sneak into the room on the other side, and was armed and waiting with a knife - then I would be that little bit more scared, and look around more nervously, when entering the room.

\n

So - being, at this point, a bit more sophisticated - I visualized a spread of probable worlds, in which - in some tiny fraction - a knife-wielding maniac had indeed chosen that moment to lurk behind the door; and I visualized the fact that my visualizing the knife-wielding maniac did not make him the tiniest bit more likely to be there - did not increase the total number of maniacs across the worlds.  And that did cure me, and it was done; along with a good deal of other half-superstitions of the same pattern, like not thinking too loudly about other people in case they heard me.

\n

Enforcing reflectivity - making ourselves accept what we already know - is, in general, an ongoing challenge for rationalists.  I cite the example above because it's a very direct illustration of the genre:  I actually went so far as to visualize the (non-)correlation of map to territory across possible worlds, in order to get my object-level map to realize that the maniac really really wasn't there.

\n

It wouldn't be unusual for a rationalist to find themselves struggling to rid themselves of attachment to an unwanted belief.  If we can get out of sync in that direction, why not the other direction?  If it's okay to make ourselves try to disbelieve, why not make ourselves try to believe?

\n

Well, because it really is one of the classic warning signs of Dark Side Epistemology that you have to struggle to make yourself believe something.

\n

So let us then delimit, and draw sharp boundaries around the particular and rationalist version of striving for acceptance, as follows:

\n

First, you should only find yourself doing this when you find yourself thinking, \"Wait a minute, that really is actually true - why can't I get my mind to accept that?\"  Not Gloriously and Everlastingly True, mind you, but plain old mundanely true.  This will be gameable in verbal arguments between people - \"Wait, but I do believe it's really actually true!\" - but if you're honestly trying, you should be able to tell the difference internally.  If you can't find that feeling of frustration at your own inability to accept the obvious, then you should back up and ask whether or not it really is obvious, before trying to make your mind do anything.  Can the fool say, \"But I do think it's completely true and obvious\" about random silly beliefs?  Yes, they can.  But as for you, just don't do that.  This is to be understood as a technique for not shooting off your own foot, not as a way of proving anything to anyone.

\n

Second, I call it \"striving to accept\", not \"striving to believe\", following billswift's suggestion.  Why?  Consider the difference between \"I believe people are nicer than they are\" and \"I accept people are nicer than they are\".  You shouldn't be trying to raise desperate enthusiasm for a belief - if it doesn't seem like a plain old reality that you need to accept, then you're using the wrong technique.

\n

Third and I think most importantly - you should always be striving to accept some particular argument that you feel isn't sinking in.  Strive to accept \"X implies Y\", not just \"Y\".  Strive to accept that there are no ghosts because spirits are only made of material neurons, or because the supernatural is incoherent.  Strive to accept that there's no maniac behind the door because your thoughts don't change reality.  Strive to accept that you won't win the lottery because you could make one distinct statement every second for a year with every one of them wrong, and not be so wrong as you would be by saying \"I will win the lottery.\"
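The lottery arithmetic here is easy to check; a quick sketch (the post names no particular lottery, so a Powerball-style 5-of-69 plus 1-of-26 draw is assumed for concreteness):

    from math import comb

    statements_per_year = 365 * 24 * 60 * 60      # one per second: 31,536,000
    powerball_tickets   = comb(69, 5) * 26        # 292,201,338 equally likely tickets

    print(statements_per_year)                      # 31536000
    print(powerball_tickets)                        # 292201338
    print(powerball_tickets / statements_per_year)  # ~9.3

Even a full year of once-per-second statements falls an order of magnitude short of the number of equally likely tickets, which is the comparison the paragraph above gestures at.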

\n

So there is my attempt to draw a line between the Dark Side and the Light Side versions of \"trying to believe\".  Of course the Light Side also tends to be aimed a bit more heavily at accepting negative beliefs than positive beliefs, as is seen from the three examples.  Trying to think of a positive belief-to-accept was difficult; the best I could come up with offhand was, \"Strive to accept that your personal identity is preserved by disassembly and reassembly, because deep down, there just aren't any billiard balls down there.\"  And even that, I suspect, is more a negative belief, that identity is not disrupted.

\n

But to summarize - there should always be some particular argument, that has the feeling of being plain old actually true, and that you are only trying to accept, and are frustrated at your own trouble in accepting.  Not a belief that you feel obligated to believe in more strongly and enthusiastically, apart from any particular argument.

" } }, { "_id": "Cq45AuedYnzekp3LX", "title": "You May Already Be A Sinner", "pageUrl": "https://www.lesswrong.com/posts/Cq45AuedYnzekp3LX/you-may-already-be-a-sinner", "postedAt": "2009-03-09T23:18:35.876Z", "baseScore": 54, "voteCount": 52, "commentCount": 37, "url": null, "contents": { "documentId": "Cq45AuedYnzekp3LX", "html": "

Followup to: Simultaneously Right and Wrong

\n

Related to: Augustine's Paradox of Optimal Repentance

\n

\"When they inquire into predestination, they are penetrating the sacred precincts of divine wisdom. If anyone with carefree assurance breaks into this place, he will not succeed in satisfying his curiosity and he will enter a labyrinth from which he can find no exit.\"

\n

            -- John Calvin

John Calvin preached the doctrine of predestination: that God irreversibly decreed each man's eternal fate at the moment of Creation. Calvinists separate mankind into two groups: the elect, whom God predestined for Heaven, and the reprobate, whom God predestined for eternal punishment in Hell.

If you had the bad luck to be born a sinner, there is nothing you can do about it. You are too corrupted by original sin to even have the slightest urge to seek out the true faith. Conversely, if you were born one of the elect, you've got it pretty good; no matter what your actions on Earth, it is impossible for God to revoke your birthright to eternal bliss.

However, it is believed that the elect always live pious, virtuous lives full of faith and hard work. Also, the reprobate always commit heinous sins like greed and sloth and commenting on anti-theist blogs. This isn't what causes God to damn them. It's just what happens to them after they've been damned: their soul has no connection with God and so it tends in the opposite direction.

Consider two Calvinists, Aaron and Zachary, each interested only in maximizing his own happiness. Aaron thinks to himself \"Whether or not I go to Heaven has already been decided, regardless of my actions on Earth. Therefore, I might as well try to have as much fun as possible, knowing it won't affect the afterlife either way.\" He spends his days in sex, debauchery, and anti-theist blog comments.

Zachary sees Aaron and thinks \"That sinful man is thus proven one of the reprobate, and damned to Hell. I will avoid his fate by living a pious life.\" Zachary becomes a great minister, famous for his virtue, and when he dies his entire congregation concludes he must have been one of the elect.

Before the cut: If you were a Calvinist, which path would you take?

\n

Amos Tversky, Stanford psychology professor by day, bias-fighting superhero by night, thinks you should live a life of sin. He bases his analysis of the issue on the famous maxim that correlation is not causation. Your virtue during life is correlated to your eternal reward, but only because they're both correlated to a hidden third variable, your status as one of the elect, which causes both.

Just to make that more concrete: people who own more cars live longer. Why? Rich people buy more cars, and rich people have higher life expectancies. Both cars and long life are caused by a hidden third variable, wealth. Trying to increase your chances of getting into Heaven by being virtuous is as futile as trying to increase your life expectancy by buying another car.

Some people would stop there, but not Amos Tversky, bias-fighting superhero. He and George Quattrone conducted a study that both illuminated a flaw in human reasoning about causation and demonstrated yet another way people can be simultaneously right and wrong.

Subjects came in thinking it was a study on cardiovascular health. First, experimenters tested their pain tolerance by making them stick their hands in a bucket of freezing water until they couldn't bear it any longer. However long they kept it there was their baseline pain tolerance score.

Then experimenters described two supposed types of human heart: Type I hearts, which work poorly and are prone to heart attack and will kill you at a young age, and Type II hearts, which work well and will bless you with a long life. You can tell a Type I heart from a Type II heart because...and here the subjects split into two groups. Group A learned that people with Type II hearts, the good hearts, had higher pain tolerance after exercise. Group B learned that Type II hearts had lower pain tolerance after exercise.

Then the subjects exercised for a while and stuck their hands in the bucket of ice water again. Sure enough, the subjects who thought increased pain tolerance meant a healthier heart kept their hands in longer. And then when the researchers went and asked them, they said they must have a Type II heart because the ice water test went so well!

The subjects seem to have believed on some level that keeping their hand in the water longer could give them a different kind of heart. Dr. Tversky declared that people have a cognitive blind spot to \"hidden variable\" causation, and this explains the Calvinists who made such an effort to live virtuously.

But this study is also interesting as an example of self-deception. One level of the mind made the (irrational) choice to leave the hand in the ice water longer. Another level of the mind that wasn't consciously aware of this choice interpreted it as evidence for the Type II heart. There are two cognitive flaws here: the subject's choice to try harder on the ice water test, and his lack of realization that he'd done so.

I don't know of any literature explicitly connecting this study to self-handicapping, but the surface similarities are striking. In both, a person takes an action intended to protect his self-image that will work if and only if he doesn't realize this intent. In both, the action is apparently successful, self-image is protected, and the conscious mind remains unaware of the true motives.

Despite all this, and with all due respect to Dr. Tversky I think he might be wrong about the whole predestination issue. If I were a Calvinist, I'd live a life of sin if and only if I would two-box on Newcomb's Problem.

" } }, { "_id": "XbfdLQrAWTRfpggRM", "title": "LessWrong anti-kibitzer (hides comment authors and vote counts)", "pageUrl": "https://www.lesswrong.com/posts/XbfdLQrAWTRfpggRM/lesswrong-anti-kibitzer-hides-comment-authors-and-vote", "postedAt": "2009-03-09T19:18:44.923Z", "baseScore": 66, "voteCount": 67, "commentCount": 62, "url": null, "contents": { "documentId": "XbfdLQrAWTRfpggRM", "html": "

Related to Information Cascades

Information Cascades has implied that people's votes are being biased by the number of votes already cast. Similarly, some commenters express a perception that higher status posters are being upvoted too much.

EDIT: the UserScript below no longer works because it is a very old version of the site. LessWrong v2.0 Anti-Kibitzer is for the new version of the site (working as of May 2020). It has the added feature that each user is assigned a color and style of censor-bar to represent their username, which makes threaded conversations easier to follow.


If, like me, you suspect that you might be prone to these biases, you can correct for them by installing the LessWrong anti-kibitzer, which I hacked together yesterday morning. You will need Firefox with the greasemonkey extension installed. Once you have greasemonkey installed, clicking on the link to the script will pop up a dialog box asking if you want to enable the script. Once you enable it, a button which you can use to toggle the visibility of author and point count information should appear in the upper right corner of any page on LessWrong. (On any page you load, the authors and point counts are automatically hidden until you show them.) Let me know if it doesn't work for any of you.

Already, I've had some interesting experiences. There were a few comments that I thought were written by Eliezer that turned out not to be (though perhaps people are copying his writing style). There were also comments that I thought contained good arguments which were written by people I was apparently too quick to dismiss as trolls. What are your experiences?

" } }, { "_id": "GDKX7S9mqL46qWct2", "title": "The Mistake Script", "pageUrl": "https://www.lesswrong.com/posts/GDKX7S9mqL46qWct2/the-mistake-script", "postedAt": "2009-03-09T17:35:42.390Z", "baseScore": 12, "voteCount": 23, "commentCount": 14, "url": null, "contents": { "documentId": "GDKX7S9mqL46qWct2", "html": "

Here on Less Wrong, we have hopefully developed our ability to spot mistaken arguments. Suppose you're reading an article and you encounter a fallacy. What do you do? Consider the following script:

1. Reread the argument to determine whether it's really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don't.

This script seems intuitively correct, and many people follow a close approximation of it. However, following this script is very bad, because the judgement in step (3) is tainted: you are more likely to continue reading the article if you agree with its conclusion than if you don't. If you disagreed with the article, then you were also more likely to have spotted the mistake in the first place. These two biases can cause you to unknowingly avoid reading anything you disagree with, which makes you strongly resist changing your beliefs. Long articles almost always include some bad arguments, even when their conclusion is correct. We can greatly improve this script with an explicit countermeasure:

1. Reread the argument to determine whether it's really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Decide whether you agree with the article's conclusion. If you are sure you do, stop reading. If you aren't sure what the conclusion is or aren't sure you agree with it, continue.
4. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don't.

This extra step protects us from confirmation bias and the \"echo chamber\" effect. We might try adding more steps, to reduce bias even further:

1. Reread the argument to determine whether it's really an error. (If not, resume reading.)
2. Verify that the error is relevant to the point of the article. (If not, resume reading.)
3. Attempt to generate other arguments which could substitute for the faulty one. If you produce a valid one, resume reading.
4. Decide whether you agree with the article's conclusion. (If you are sure you do, stop reading. If you aren't sure what the conclusion is or aren't sure you agree with it, continue.)
5. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don't.

While seemingly valid, this extra step would be bad, because the associated cost is too high. Generating arguments takes much more time and mental effort than evaluating someone else's, so you will always be tempted to skip this step. If you were to use any means to force yourself to include it when you invoke the script, then you would instead bias yourself against invoking the script in the first place, and let errors slide.
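
For concreteness, here is a minimal sketch of the recommended (second) script as a decision procedure. The boolean arguments are hypothetical stand-ins for the reader's own judgment calls; it is only meant to illustrate the ordering of the checks.

```python
def reading_decision(really_an_error, error_is_relevant,
                     sure_you_agree, remainder_worth_reading):
    # Each argument is a bool standing in for one of the reader's own
    # judgment calls; the function only encodes the order of the checks.
    if not really_an_error:
        return 'resume reading'   # step 1: not actually an error
    if not error_is_relevant:
        return 'resume reading'   # step 2: error doesn't touch the point
    if sure_you_agree:
        return 'stop reading'     # step 3: the explicit debiasing step
    if remainder_worth_reading:
        return 'resume reading'   # step 4: ordinary cost-benefit call
    return 'stop reading'

# Example: a relevant error in an article whose conclusion you already
# accept -- the script says stop, so agreement cannot rescue the article.
print(reading_decision(True, True, True, True))   # -> stop reading
```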

\n

Finding an error in someone else's argument shouldn't cost you much time. Dealing with a mistake in an argument you wrote yourself, on the other hand, is more involved. Suppose you catch yourself writing, saying, or thinking an argument that you know is invalid. What do you do about it? Here is my script:

1. If you caught the problem immediately when you first generated the bad argument, things are working as they should, so skip this script.
2. Check your emotional reaction to the conclusion of the bad argument. If you want it to be true, then you have caught yourself rationalizing. Run a script for that.
3. Give yourself an imaginary gold star for having recognized your mistake. If you feel bad about having made the mistake in the first place, give yourself enough additional gold stars to counter this feeling.
4. Name the heuristic or fallacy you used (surface similarity, overgeneralization, ad hominem, non sequitur, etc.).
5. Estimate how often the named heuristic or fallacy has led you astray. If the answer is more often than you think is acceptable, note it, so you can think about how to counter that bias later.
6. Generate other conclusions which you have used this same argument to support in the past, if any. Note them, to reevaluate later.

A good script provides a checklist of things to think about, plus guidance on how long to think about each, and what state to be in while doing so. When evaluating our own mistakes, emotional state is important; if acknowledging that we've made a mistake causes us to feel bad, then we simply won't acknowledge our mistakes, hence step (3) in this procedure.

\n

Thinking accurately is more complicated than just following scripts, but script-following is a major part of how the mind works. If left alone, the mind will generate its own scripts for common occurrences, but they probably won't be optimal. The scripts we use for error handling filter the information we receive and regulate all other beliefs; they are too important to leave to chance. What other anti-bias countermeasures could we add? What other scripts do we follow that could be improved?

" } }, { "_id": "YGPzzqqpYcAoyzF4d", "title": "The Wrath of Kahneman", "pageUrl": "https://www.lesswrong.com/posts/YGPzzqqpYcAoyzF4d/the-wrath-of-kahneman", "postedAt": "2009-03-09T12:52:41.695Z", "baseScore": 30, "voteCount": 27, "commentCount": 20, "url": null, "contents": { "documentId": "YGPzzqqpYcAoyzF4d", "html": "

Cass Sunstein, David Schkade, and Daniel Kahneman, in a 1999 paper named \"Do People Want Optimal Deterrence?\", write:

\n
\n

Previous research suggests that people’s judgments about punitive damage awards are a reflection of outrage at the defendant’s actions rather than of deterrence. This is not to say that people do not care about deterrence; of course they do. Our hypothesis here is that they do not attempt to promote optimal deterrence; for this reason they do not make the kinds of distinctions that are obvious, even second nature, for those who study deterrence questions. Above all, they may not believe that in order to ensure optimal deterrence, the amount that a given defendant is required to pay should be increased or decreased depending on the probability of detection, a central claim in the economic analysis of law.

\n
\n

If we're after optimal deterrence, we should punish potentially harmful actions more if they're hard to detect, or else the expected disutility of the punishment is too small. But apparently this does not accord with people's sense of justice.

\n

Does this mean we should change our sense of justice? And should we apply optimal deterrence theory to informal social rewards and punishments, such as by getting angrier at antisocial behaviors that we learned of by (what the wrongdoer thought was) a freak coincidence?

" } }, { "_id": "W7LcN9gmdnaAk9K52", "title": "Don't Believe You'll Self-Deceive", "pageUrl": "https://www.lesswrong.com/posts/W7LcN9gmdnaAk9K52/don-t-believe-you-ll-self-deceive", "postedAt": "2009-03-09T08:03:20.329Z", "baseScore": 47, "voteCount": 65, "commentCount": 72, "url": null, "contents": { "documentId": "W7LcN9gmdnaAk9K52", "html": "

I don't mean to seem like I'm picking on Kurige, but I think you have to expect a certain amount of questioning if you show up on Less Wrong and say:

\n
\n

One thing I've come to realize that helps to explain the disparity I feel when I talk with most other Christians is the fact that somewhere along the way my world-view took a major shift away from blind faith and landed somewhere in the vicinity of Orwellian double-think.

\n
\n

\"If you know it's double-think...

\n

...how can you still believe it?\" I helplessly want to say.

\n

Or:

\n
\n

I chose to believe in the existence of God—deliberately and consciously. This decision, however, has absolutely zero effect on the actual existence of God.

\n
\n

If you know your belief isn't correlated to reality, how can you still believe it?

\n

Shouldn't the gut-level realization, \"Oh, wait, the sky really isn't green\" follow from the realization \"My map that says 'the sky is green' has no reason to be correlated with the territory\"?

\n

Well... apparently not.

\n

One part of this puzzle may be my explanation of Moore's Paradox (\"It's raining, but I don't believe it is\")—that people introspectively mistake positive affect attached to a quoted belief, for actual credulity.

\n

But another part of it may just be that—contrary to the indignation I initially wanted to put forward—it's actually quite easy not to make the jump from \"The map that reflects the territory would say 'X'\" to actually believing \"X\".  It takes some work to explain the ideas of minds as map-territory correspondence builders, and even then, it may take more work to get the implications on a gut level.

\n

I realize now that when I wrote \"You cannot make yourself believe the sky is green by an act of will\", I wasn't just a dispassionate reporter of the existing facts.  I was also trying to instill a self-fulfilling prophecy.

\n

It may be wise to go around deliberately repeating \"I can't get away with double-thinking!  Deep down, I'll know it's not true!  If I know my map has no reason to be correlated with the territory, that means I don't believe it!\"

\n

Because that way—if you're ever tempted to try—the thoughts \"But I know this isn't really true!\" and \"I can't fool myself!\" will always rise readily to mind; and that way, you will indeed be less likely to fool yourself successfully.  You're more likely to get, on a gut level, that telling yourself X doesn't make X true: and therefore, really truly not-X.

\n

If you keep telling yourself that you can't just deliberately choose to believe the sky is green—then you're less likely to succeed in fooling yourself on one level or another; either in the sense of really believing it, or of falling into Moore's Paradox, belief in belief, or belief in self-deception.

\n

If you keep telling yourself that deep down you'll know—

\n

If you keep telling yourself that you'd just look at your elaborately constructed false map, and just know that it was a false map without any expected correlation to the territory, and therefore, despite all its elaborate construction, you wouldn't be able to invest any credulity in it—

\n

If you keep telling yourself that reflective consistency will take over and make you stop believing on the object level, once you come to the meta-level realization that the map is not reflecting—

\n

Then when push comes to shove—you may, indeed, fail.

\n

When it comes to deliberate self-deception, you must believe in your own inability!

\n

Tell yourself the effort is doomed—and it will be!

\n

Is that the power of positive thinking, or the power of negative thinking?  Either way, it seems like a wise precaution.

" } }, { "_id": "mja6jZ6k9gAwki9Nu", "title": "The Mystery of the Haunted Rationalist", "pageUrl": "https://www.lesswrong.com/posts/mja6jZ6k9gAwki9Nu/the-mystery-of-the-haunted-rationalist", "postedAt": "2009-03-08T20:39:39.139Z", "baseScore": 114, "voteCount": 107, "commentCount": 70, "url": null, "contents": { "documentId": "mja6jZ6k9gAwki9Nu", "html": "

Followup to: Simultaneously Right and Wrong

\n

    \"The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents.\"

\n

          - H.P. Lovecraft, The Call of Cthulhu

There is an old yarn about two skeptics who stayed overnight in a supposedly haunted mansion, just to prove they weren't superstitious. At first, they laughed and joked with each other in the well-lit master bedroom. But around eleven, there was a thunderstorm - hardly a rare event in those parts - and all the lights went off. As it got later and later, the skeptics grew more and more nervous, until finally around midnight, the stairs leading up to their room started to creak. The two of them shot out of there and didn't stop running until they were in their car and driving away.

So the skeptics' emotions overwhelmed their rationality. That happens all the time. Is there any reason to think this story proves anything more interesting than that some skeptics are cowards?

\n

\n

The Koreans have a superstition called \"fan death\": if you sleep in a closed room with a fan on all night, you will die. Something about the fan blades shredding the oxygen molecules or something. It all sounds pretty far-fetched, but in Korea it's endorsed by everyone from doctors to the government's official consumer safety board.

I don't believe in ghosts, and I don't believe in fan death. But my reactions to spending the night in a haunted mansion and spending the night with a fan are completely different. Put me in a haunted mansion, and I'll probably run out screaming the first time something goes bump in the night1. Put me in a closed room with a fan and I'll shrug and sleep like a baby. Not because my superior rationality has conquered my fear. Because fans just plain don't kill people by chopping up oxygen, and to think otherwise is just stupid.

\n

So although it's correct to say that the skeptics' emotions overwhelmed their rationality, they wouldn't have those emotions unless they thought on some level that ghosts were worth getting scared about.

A psychologist armed with the theory of belief-profession versus anticipation-control would conclude that I profess disbelief in ghosts to fit in with my rationalist friends, but that I anticipate being killed by a ghost if I remain in the haunted mansion. He'd dismiss my skepticism about ghosts as exactly the same sort of belief in belief afflicting the man who thinks his dragon is permeable to flour.

If this psychologist were really interested in investigating my beliefs, he might offer me X dollars to stay in the haunted mansion. This is all a thought experiment, so I can't say for certain what I would do. But when I imagine the scenario, I visualize myself still running away when X = 10, but fighting my fear and staying around when X = 1000000.

This looks suspiciously like I'm making an expected utility calculation. Probability of being killed by ghost * value of my life, compared to a million dollars. It also looks like I'm using a rather high number for (probability of being killed by ghost): certainly still less than .5, but much greater than the <.001 I would consciously assign it. Is my mind haunted by an invisible probability of ghosts, ready to jump out and terrify me into making irrational decisions?

How can I defend myself against the psychologist's accusation that I merely profess a disbelief in ghosts? Well, while I am running in terror out of the mansion, a bookie runs up beside me. He offers me a bet: he will go back in and check to see if there is a ghost. If there isn't, he owes me $100. If there is, I owe him $10,000 (payable to his next of kin). Do I take the bet?

Thought experiments don't always work, but I imagine myself taking the bet. I assign a less than 1/100 chance to the existence of ghosts, so it's probably a good deal. The fact that I am running away from a ghost as I do the calculation changes the odds not at all.
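
As a rough numerical sketch of the bet (the $100 and $10,000 stakes are the ones in the scenario above; the probabilities in the example calls are only illustrative stand-ins for the conscious estimate and the fear-level estimate):

```python
def bet_expected_value(p_ghost, win=100.0, loss=10_000.0):
    # Expected dollar value of taking the bookie's bet at a given
    # probability of ghosts: win if no ghost, pay the loss if there is one.
    return (1 - p_ghost) * win - p_ghost * loss

print(bet_expected_value(0.001))   # conscious estimate: about +89.9
print(bet_expected_value(0.3))     # fear-level estimate: -2930.0

# Break-even probability: the bet is worth taking only below this.
print(100.0 / (100.0 + 10_000.0))  # about 0.0099
```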

But if that's true, we're now up to three different levels of belief. The one I profess to my friends, the one that controls my anticipation, and the one that influences my emotions.

There are no ghosts, profess skepticism.
There are no ghosts, take the bet.
There are ghosts, run for your life!

\n

Footnote

\n

1: I worry when writing this that I may be alone among Less Wrong community members, and that the rest of the community would remain in the mansion with minimal discomfort. If \"run screaming out of the mansion\" is too dramatic for you, will you agree that you might, after the floorboards get especially creaky, feel a tiny urge to light a candle or turn on a flashlight? Even that is enough to preserve the point I am trying to make here.

" } }, { "_id": "uXJQekmFRKDGCfXph", "title": "Lies and Secrets", "pageUrl": "https://www.lesswrong.com/posts/uXJQekmFRKDGCfXph/lies-and-secrets", "postedAt": "2009-03-08T14:43:22.152Z", "baseScore": 19, "voteCount": 26, "commentCount": 21, "url": null, "contents": { "documentId": "uXJQekmFRKDGCfXph", "html": "

My intuition says presenting bad facts or pieces of reasoning is wrong, but withholding good facts or pieces of reasoning is less wrong. I assume most of you agree.

This is a puzzle, because on the face of it, the effect is the same.

Suppose the Walrus and the Carpenter are talking of whether pigs have wings.

Scenario 1: The Carpenter is 80% sure that pigs have wings, but the Walrus wants him to believe that they don't. So the Walrus claims that it's a deep principle of evolution theory that no animal can have wings, and the Carpenter updates to 60%.

Scenario 2: The Carpenter is 60% sure that pigs have wings, and the Walrus wants him to believe that they don't. So the Walrus neglects to mention that he once saw a picture of a winged pig in a book. Learning this would cause the Carpenter to update to 80%, but he doesn't learn this, so he stays at 60%.

In both scenarios, the Walrus chose for the Carpenter's probability to be 60% when he could have chosen for it to be 80%. So what's the difference?

If there isn't any, then we're forced to claim bias (maybe omission bias), which we can then try to overcome.

But in this post I want to try rationalizing the asymmetry. I don't feel that my thinking here is clear, so this is very tentative.

If a man is starving, not giving him a loaf of bread is as deadly as giving him cyanide. But if there are a lot of random objects lying around in the neighborhood, the former deed is less deadly: it's far more likely that one of the random objects is a loaf of bread than that it is an antidote to cyanide.

I believe that, likewise, it is more probable that you'll randomly find a good argument duplicated (conditioning on it makes some future evidence redundant), than that you'll randomly find a bad argument debunked (conditioning on it makes some future counter-evidence relevant). In other words, whether you're uninformed or misinformed, you're equally mistaken; but in an environment where evidence is not independent, it's normally easier to recover from being uninformed than from being misinformed.

The case becomes stronger when you think of it in terms of boundedly rational agents fishing from a common meme pool. If agents can remember or hold in mind fewer pieces of information than they are likely to encounter, pieces of disinformation floating in the pool not only do damage by themselves, but do further damage by displacing pieces of good information.

These are not the only asymmetries. A banal one is that misinforming takes effort and not informing saves effort. And if you're caught misinforming, that makes you look far worse than if you're caught not informing. (But the question is why this should be so. Part of it is that, usually, there are plausible explanations other than bad faith for why one might not inform -- if not, it's called \"lying by omission\" -- but no such explanations for why one might misinform.) And no doubt there are yet others.

\n

But I think a major part of it has to be that ignorance heals better than confusion when placed in a bigger pool of evidence. Do you agree? Do you think \"lies\" are worse than \"secrets\", and if so, why?

" } }, { "_id": "JZRtfjG48xNR3GKeo", "title": "It's the Same Five Dollars!", "pageUrl": "https://www.lesswrong.com/posts/JZRtfjG48xNR3GKeo/it-s-the-same-five-dollars", "postedAt": "2009-03-08T07:23:41.621Z", "baseScore": 27, "voteCount": 30, "commentCount": 32, "url": null, "contents": { "documentId": "JZRtfjG48xNR3GKeo", "html": "

\n

\n

From Tversky and Kahneman's \"The Framing of Decisions and the Psychology of Choice\" (Science, Vol. 211, No. 4481, 1981):

\n
\n

The following problem [...] illustrates the effect of embedding an option in different accounts. Two versions of this problem were presented to different groups of subjects. One group (N = 93) was given the values that appear in parentheses, and the other group (N = 88) the values shown in brackets.

\n

[...] Imagine that you are about to purchase a jacket for ($125) [$15], and a calculator for ($15) [$125]. The calculator salesman informs you that the calculator you wish to buy is on sale for ($10) [$120] at the other branch of the store, located 20 minutes drive away. Would you make the trip to the other store?

\n

The response to the two versions of [the problem] were markedly different: 68 percent of the respondents were willing to make an extra trip to save $5 on a $15 calculator; only 29 percent were willing to exert the same effort when the price of the calculator was $125. [...] A closely related observation has been reported [...] that the variability of the prices at which a given product is sold by different stores is roughly proportional to the mean price of that product. The same pattern was observed for both frequently and infrequently purchased items. Overall, a ratio of 2:1 in the mean price of two products is associated with a ratio of 1.86:1 in the standard deviation of the respective quoted prices. If the effort that consumers exert to save each dollar on a purchase [...] were independent of price, the dispersion of quoted prices should be about the same for all products.

\n
\n

This one's a killer. Money is supposed to be fungible, but these observations really highlight how difficult it is to really behave as if you believed that. So, aspiring rationalists, how might we combat this in ourselves? Maybe it would help to consciously convert between money and time: if you value your time at 25 $/hr, then the cost of a twenty-minute drive is 25 $/hr * (1/3) hr = $8.33 > $5, so you buy the calculator in front of you in either case. So this heuristic at least takes care of the calculator problem, although I would guess it fails miserably in other contexts, I currently know not which.
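
Here is a minimal sketch of that conversion, using the post's example figures ($25/hr, a twenty-minute drive, a $5 saving); the function name and the second example call are my own illustration:

```python
def trip_is_worth_it(savings, drive_minutes, hourly_value=25.0):
    # Convert the drive into dollars and compare it to the saving.
    time_cost = hourly_value * (drive_minutes / 60.0)
    return savings > time_cost

print(trip_is_worth_it(5.0, 20))    # False: 8.33 > 5, skip the drive
print(trip_is_worth_it(110.0, 20))  # True: a large enough saving flips the answer
```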

\n

Another takeaway lesson is to ignore advertisements boasting that a product is currently such-and-such percent off. We don't care about the percentage! How many minutes are you saving? 

\n
\n

" } }, { "_id": "ERRk4thxxYNcScqR4", "title": "Moore's Paradox", "pageUrl": "https://www.lesswrong.com/posts/ERRk4thxxYNcScqR4/moore-s-paradox", "postedAt": "2009-03-08T02:27:39.732Z", "baseScore": 93, "voteCount": 85, "commentCount": 24, "url": null, "contents": { "documentId": "ERRk4thxxYNcScqR4", "html": "

I think I understand Moore's Paradox a bit better now, after reading some of the comments on Less Wrong.  Jimrandomh suggests:

\n
\n

Many people cannot distinguish between levels of indirection. To them, \"I believe X\" and \"X\" are the same thing, and therefore, reasons why it is beneficial to believe X are also reasons why X is true.

\n
\n

I don't think this is correct—relatively young children can understand the concept of having a false belief, which requires separate mental buckets for the map and the territory.  But it points in the direction of a similar idea:

\n

Many people may not consciously distinguish between believing something and endorsing it.

\n

After all—\"I believe in democracy\" means, colloquially, that you endorse the concept of democracy, not that you believe democracy exists.  The word \"belief\", then, has more than one meaning.  We could be looking at a confused word that causes confused thinking (or maybe it just reflects pre-existing confusion).

\n

So: in the original example, \"I believe people are nicer than they are\", she came up with some reasons why it would be good to believe people are nice—health benefits and such—and since she now had some warm affect on \"believing people are nice\", she introspected on this warm affect and concluded, \"I believe people are nice\".  That is, she mistook the positive affect attached to the quoted belief, as signaling her belief in the proposition.  At the same time, the world itself seemed like people weren't so nice.  So she said, \"I believe people are nicer than they are.\"

\n

And that verges on being an honest mistake—sort of—since people are not taught explicitly how to know when they believe something.  As in the parable of the dragon in the garage; the one who says \"There is a dragon in my garage—but it's invisible\", does not recognize his anticipation of seeing no dragon, as indicating that he possesses an (accurate) model with no dragon in it.

\n

It's not as if people are trained to recognize when they believe something.  It's not like they're ever taught in high school:  \"What it feels like to actually believe something—to have that statement in your belief pool—is that it just seems like the way the world is.  You should recognize this feeling, which is actual (unquoted) belief, and distinguish it from having good feelings about a belief that you recognize as a belief (which means that it's in quote marks).\"

\n

This goes a long way toward making this real-life case of Moore's Paradox seem less alien, and providing another mechanism whereby people can be simultaneously right and wrong.

\n

Likewise Kurige who wrote:

\n
\n

I believe that there is a God—and that He has instilled a sense of right and wrong in us by which we are able to evaluate the world around us.  I also believe a sense of morality has been evolutionarily programmed into us—a sense of morality that is most likely a result of the formation of meta-political coalitions in Bonobo communities a very, very long time ago.  These two beliefs are not contradictory, but the complexity lies in reconciling the two.

\n
\n

I suspect, Kurige, you have decided that you have reasons to endorse the quoted belief that God has instilled a sense of right and wrong in us.  And also that you have reasons to endorse the verdict of science.  They both seem like good communities to join, right?  There are benefits to both sets of beliefs?  You introspect and find that you feel good about both beliefs?

\n

But you did not say:

\n

\"God instilled a sense of right and wrong in us, and also a sense of morality has been evolutionarily programmed into us.  The two states of reality are not inconsistent, but the complexity lies in reconciling the two.\"

\n

If you're reading this, Kurige, you should very quickly say the above out loud, so you can notice that it seems at least slightly harder to swallow—notice the subjective difference—before you go to the trouble of rerationalizing.

\n

This is the subjective difference between having reasons to endorse two different beliefs, and your mental model of a single world, a single way-things-are.

" } }, { "_id": "P3uavjFmZD5RopJKk", "title": "Simultaneously Right and Wrong", "pageUrl": "https://www.lesswrong.com/posts/P3uavjFmZD5RopJKk/simultaneously-right-and-wrong", "postedAt": "2009-03-07T22:55:33.476Z", "baseScore": 120, "voteCount": 114, "commentCount": 63, "url": null, "contents": { "documentId": "P3uavjFmZD5RopJKk", "html": "

Related to: Belief in Belief, Convenient Overconfidence

     \"You've no idea of what a poor opinion I have of myself, and how little I deserve it.\"

      -- W.S. Gilbert 

In 1978, Steven Berglas and Edward Jones performed a study on voluntary use of performance inhibiting drugs. They asked subjects to solve certain problems. The control group received simple problems, the experimental group impossible problems. The researchers then told all subjects they'd solved the problems successfully, leaving the controls confident in their own abilities and the experimental group privately aware they'd just made a very lucky guess.

Then they offered the subjects a choice of two drugs to test. One drug supposedly enhanced performance, the other supposedly handicapped it.

There's a cut here in case you want to predict what happened.


Males in the lucky guesser group chose the performance-inhibiting drug significantly more than those in the control group1.

The researchers conjectured that the lucky guessers felt good about acing the first test. They anticipated failing the second whether they chose the helpful or the harmful drug. So they chose the harmful drug as an excuse: \"Oh, I would have passed the test, only the drug was making me stupid.\" As the study points out, this is a win-win situation: if they fail, the drug excuses their failure, and if they succeed it's doubly impressive that they passed even with a handicap.

Since this study, psychologists have applied self-handicapping to interpret common phenomena like procrastination, hypochondria, and minority underachievement; they've also discovered factors that increase or decrease self-handicapping tendencies (for example, you can increase it by making the test more relevant to your self-esteem: just say \"this is a proven test of intellectual ability in general\").

But some especially interesting studies investigated the effect of privacy on self-handicapping. For example, Hobden conducted an experiment similar to Berglas and Jones', albeit with tapes of performance-enhancing or -handicapping music instead of drugs. The twist: half the subjects' choice of tapes and test scores were public, and half the subjects believed their choice of tapes and test scores were anonymous. What happens when no one but the subject himself will ever know his test score? He self-handicaps just as often as everyone else. And it seems to *work*. The same set of studies showed that subjects who self-handicap on a test are less likely to attribute their failure on the test to their own incompetence.

In order to handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there's no self-esteem to protect. If I believe my IQ is 80, and I get 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the results. The only time I would want to explain away the results as based on some external factor was if I'd been going around thinking my real IQ was 100.

But subjects also must have an accurate assessment of their own abilities. Subjects who take an easy pre-test and expect an easy test do not self-handicap. Only subjects who understand their low chances of success can think \"I will probably fail this test, so I will need an excuse.\"2

If this sounds familiar, it's because it's another form of the dragon problem from Belief in Belief. The believer says there is a dragon in his garage, but expects all attempts to detect the dragon's presence to fail. Eliezer writes: \"The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse.\" 

Should we say that the subject believes he will get an 80, but believes in believing that he will get a 100? This doesn't quite capture the spirit of the situation. Classic belief in belief seems to involve value judgments and complex belief systems, but self-handicapping seems more like simple overconfidence bias3. Is there any other evidence that overconfidence has a belief-in-belief aspect to it?

Last November, Robin described a study where subjects were less overconfident if asked to predict their performance on tasks they will actually be expected to complete. He ended by noting that \"It is almost as if we at some level realize that our overconfidence is unrealistic.\"

Belief in belief in religious faith, and self-confidence, seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level. The specifics are different in each case, but perhaps the same general mechanism may underlie both. How many other biases use this same mechanism?

Footnotes

1: In most studies on this effect, it's most commonly observed among males. The reasons are too complicated and controversial to be discussed in this post, but are left as an exercise for the reader with a background in evolutionary psychology.

2: Compare the ideal Bayesian, for whom expected future expectation is always the same as the current expectation, and investors in an ideal stock market, who must always expect a stock's price tomorrow to be on average the same as its price today - to this poor creature, who accurately predicts that he will lower his estimate of his intelligence after taking the test, but who doesn't use that prediction to change his pre-test estimates.

3: I have seen \"overconfidence bias\" used in two different ways: to mean poor calibration on guesses (ie predictions made with 99% certainty that are only right 70% of the time) and to mean the tendency to overestimate one's own good qualities and chance of success. I am using the latter definition here to remain consistent with the common usage on Overcoming Bias; other people may call this same error \"optimism bias\".

" } }, { "_id": "d4pyYJ8Xdi5o6GJYT", "title": "The Golem", "pageUrl": "https://www.lesswrong.com/posts/d4pyYJ8Xdi5o6GJYT/the-golem", "postedAt": "2009-03-07T18:32:13.264Z", "baseScore": 17, "voteCount": 27, "commentCount": 9, "url": null, "contents": { "documentId": "d4pyYJ8Xdi5o6GJYT", "html": "

Anthony Ravenscroft writing on why it is important, in a relationship, to honestly communicate your grievances to the other person:

\r\n
\r\n

If you don't present your gripes to the responsible party, you cannot humanly bury those complaints - it's just not possible to \"forget\" about something that has hurt or stung you. Actually, you are probably \"testing\" these complaints against your experience of the person, trying to figure out what they would say, how they would react. You create a simulacrum in order to argue this all out in your head, and thus to avoid unpleasantness. Certain conclusions are made, which you file away. When another problem comes up, you then test this against your estimates of the person, which have been expanded by your previous guesswork.

Eventually, you will have created this huge guesswork of assumptions, which are so far removed from the actual person that they likely have no bearing on the reality. I call this \"a golem made of boxes\", a warehouse-sized beast that has nothing to do with the simple small human being from which it is supposedly modeled.

When I have had such a golem used against me, I was told by my lover that she had kept a rather ugly situation from me \"because I know how you'd react.\" I described to her exactly what the situation was, as I'd pieced it together very accurately (you can do this with the actions of humans, not the humans themselves). She was stunned. When I described for her how the root assumptions she had made were very largely off the mark, she actually became very angry with me, defending the golem as though it represented the truth, and therefore I must be lying! In the end, she could have better determined my reaction from writing down the possibilities on slips of paper and choosing one out of a hat. ...

The golem is handy, but almost entirely dishonest. It begins from faulty (incomplete, biased) data, and runs rapidly downhill from there.

\r\n
\r\n

The map and the territory. How have you had the golem used against you? When have you, yourselves, made the mistake of resorting to a golem and had it blow up in your face?

" } }, { "_id": "HnzB46zsL8ehdpLcd", "title": "Checklists", "pageUrl": "https://www.lesswrong.com/posts/HnzB46zsL8ehdpLcd/checklists", "postedAt": "2009-03-07T15:47:14.673Z", "baseScore": 17, "voteCount": 19, "commentCount": 2, "url": null, "contents": { "documentId": "HnzB46zsL8ehdpLcd", "html": "

Checklists are a rationality technique, mentioned previously on OB. Everyone knows this, but we don't hear about them as often as we should, possibly because they seem prosaic and boring.

\n

In the context of doing something over and over, there is a checklist improvement cycle.

\n\n

There are many caveats to this description: Some checklists are not primarily lists of errors, but primarily ordered procedures. You may want to complicate the cycle to track the cost and benefit of the items on the checklist. We're assuming that errors are eventually discovered. I want to pass over those caveats and claim that this kind of checklist-of-errors is very successful. If you agree, my question is: What feature or features of our minds are checklists compensating for? If we understood that, then we would be able to use checklists even more effectively.

\n

The act of considering the current checklist item and the post/decision/proof simultaneously reminds me of \"Forced Association\", a creativity technique. So one idea is that by putting one's mind into several different states via forced association with the items on the checklist, we gain more independent chances to detect an error.

\n

Even if you haven't made any errors yet, and so your checklist is empty, conducting several searches for errors while wearing De Bono's hats (or another forced association list) might be a way to make fewer errors.

\n

If you're consciously looking for a red minivan, then you will notice more red minivans. This \"noticing\" seems surprisingly spontaneous, unlike deliberately scanning a scene and considering \"Is that a red minivan? Is that a red minivan?\". Possibly this is because the noticing is being done by unconscious modules of our minds. A checklist breaks our search for errors into a sequence of searches for more specific kinds of errors. Possibly checklists are effective because the general concept \"error\" is too vague for those modules. By breaking it into easier chunks (e.g. \"ad hominem fallacy\", \"missing semicolon\"), we can start using those modules to get a more thorough search.

\n

I admit, I'm not sure how to use this \"modules\" idea to use checklists more effectively.

" } }, { "_id": "GTzBTtkZH8KxNNfxA", "title": "Slow down a little... maybe?", "pageUrl": "https://www.lesswrong.com/posts/GTzBTtkZH8KxNNfxA/slow-down-a-little-maybe", "postedAt": "2009-03-07T01:34:03.994Z", "baseScore": -4, "voteCount": 26, "commentCount": 24, "url": null, "contents": { "documentId": "GTzBTtkZH8KxNNfxA", "html": "

I think that three posts a day over and above Yudkowsky and/or Hanson posts might be enough.  Where anything that gets voted to 0 or below doesn't count, nor do quick links.

\n

Say you differently, readers?  I'm just trying to space things out so we don't get overloaded with everything, all at once... if it turns out that people just have more to say than this, sustainably in the long term, then we can raise the posting speed.

" } }, { "_id": "gM4XxTNe7ChfZgpkS", "title": "Formalization is a rationality technique", "pageUrl": "https://www.lesswrong.com/posts/gM4XxTNe7ChfZgpkS/formalization-is-a-rationality-technique", "postedAt": "2009-03-06T20:22:39.032Z", "baseScore": 3, "voteCount": 25, "commentCount": 27, "url": null, "contents": { "documentId": "gM4XxTNe7ChfZgpkS", "html": "

We are interested in developing practical techniques of rationality. One practical technique, used widely and successfully in science and technology is formalization, transforming a less-formal argument into a more-formal one. Despite its successes, formalization isn't trivial to learn, and schools rarely try to teach general techniques of thinking and deciding. Instead, schools generally only teach domain-specific reasoning. We end up with graduates who can apply formalization skillfully inside of specific domains (e.g. electrical engineering or biology), but fail to apply, or misapply, their skills to other domains (e.g. politics or religion).

\n

A side excursion, to be used as an example:

\n
\n

Is it true that a real-world decision can be perfectly justified via a mathematical proof? No.

\n

Here is one reason: Any real-world \"proof\", whether it is a publication in a math journal or the output of a computer proof-checker, might contain a mistake. Human mathematicians routinely make mistakes in publications. The computer running the proof-checker is physical, and might be struck by a cosmic ray. Even if the computer were perfect, the proof-checker might have a bug in it.

\n

Here is another reason: Even if you had a proof known to be mistake free, mathematical proof always reasons from assumptions to conclusions. To apply the proof to the real world, you must make an informal argument that the assumptions apply in this real-world situation, and a second informal argument that the conclusion of the proof corresponds to the real-world decision. These first and last links in the chain will still be there, no matter how rigorous and trusted the proof is. The problem with duplicating real-world spheres with Banach-Tarski isn't that there's a mistake in the proof, it's that the real-world spheres don't fit the assumptions.

\n
\n

If you were fitting this argument into your beliefs, you might produce a number, a \"gut\" estimate of how likely it is that this informal argument is wrong. Can we improve on that using the technique of formalization? What would a formalization of this argument look like? One possible starting point might be to rename everything. We're confident (via philosophy of logic) that renaming won't increase or decrease the quality of the argument. We will reason better about the correctness of the form if we hide the subjects of the argument.

\n
\n

Is it true that A? No.

\n

Here is one reason: B is true (and B implies not A). Intuition pump. Intuition pump. Intuition pump.

\n

Here is another reason: Even assuming B were false, C is true (and C implies not A). Intuition pump.

\n
\n

Note: there are many choices in this renaming process. It's not a trivial, thought-free operation at all. Someone else might get a completely different \"underlying structure\" from the same starting point. This particular structure suggests an equation, something like:

\n
\n

P( whole argument is wrong ) = P( first subargument is wrong ) * P( second subargument is wrong | first subargument is wrong )

\n
\n

The equation allows you to estimate the probability of the whole argument being wrong, using two \"gut\" estimates instead of one. This is probably an improved, lower-variance estimate.
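
A minimal numerical sketch of the equation, with made-up gut estimates standing in for the two judgments:

```python
p_first_wrong = 0.10              # gut estimate: P(first subargument is wrong)
p_second_given_first = 0.25       # gut estimate: P(second is wrong | first is wrong)

p_whole_wrong = p_first_wrong * p_second_given_first
print(p_whole_wrong)              # 0.025 -- lower than either estimate alone
```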

\n

The point is:

\n" } }, { "_id": "fsSoAMsntpsmrEC6a", "title": "Does blind review slow down science?", "pageUrl": "https://www.lesswrong.com/posts/fsSoAMsntpsmrEC6a/does-blind-review-slow-down-science", "postedAt": "2009-03-06T12:35:34.304Z", "baseScore": 25, "voteCount": 29, "commentCount": 22, "url": null, "contents": { "documentId": "fsSoAMsntpsmrEC6a", "html": "

Previously, Robin Hanson pointed out that even if implementing anonymous peer review has an effect on the acceptance rate of different papers, this doesn't necessarily tell us the previous practice was biased. Yesterday, I ran across an interesting passage suggesting one way that anonymous review might actually be harmful:

\r\n
\r\n

 Second, fame may confer license. If a person has done valuable work in the past, this increases the probability that his current work is also valuable and induces the audience to suspend its disbelief. He can therefore afford to thumb his nose at the crowd. This is merely the obverse of the \"shamelessness\" of the old, which Aristotle discussed. Peter Messeri argues in this vein that \"senior scientists are better situated than younger scientists to withstand adverse consequences of public advocacy of unpopular positions,\" and that this factor may explain why the tendency for older scientists to resist new theories is, in fact, weak. And remember Kenneth Dover's negative verdict on old age (chapter 5)? He offered one qualification: \"There just aren't any [aspects of old age which compensate for its ills] - except, maybe, a complacent indifference to fashion, because people no longer seeking employment or promotion have less to fear.\"

\r\n

This point suggests that the use by scholarly journals of blind refereeing is a mistaken policy. It may cause them to turn down unconventional work to which they would rightly have given the benefit of doubt had they known that the author was not a neophyte or eccentric.

\r\n
\r\n

 (From Richard A. Posner, \"Aging and Old Age\")

\r\n

 If this hypothesis holds (and Posner admits it hasn't been tested, at least at the time of writing), then blind review may actually slow down the acceptance of theories which are radical but true. Looking up the Peter Messeri reference gave me the article \"Age Differences in the Reception of New Scientific Theories: The Case of Plate Tectonics Theory\". It notes:

\r\n

\r\n
\r\n

Young scientists may adopt a new theory before their elders in part because they are better informed about current research in a broader range of fields. As Zuckerman and Merton observe with respect to life-course differences in opportunities for discovery, young scientists - closer in time to their formal training with an 'aggregate of specialists at work on [many] research front[s]' - are likely to be more up to date 'in a wider variety of fields than their older and more specialized' colleagues. Moreover, defects in existing knowledge may be more apparent to young scientists burdened with fewer preconceptions reinforced by prior practice. Conversely, older scientists may take longer to be won over to a new theoretical perspective because of greater familiarity with past successes of established theory in overcoming previous theoretical or empirical challenges. Commitment to existing knowledge may also strengthen with age because older scientists have a greater social and cognitive investment in its perpetuation, and have less to gain from adopting new ideas. This may be particularly important when adoption of a new theory requires substantial effort to master new research skills and concepts.

\r\n
\r\n

In this light, an older scientist's acceptance of a new, radical hypothesis should tell us to give the new hypothesis extra weight. It might even be appropriate to apply a heavier \"reputation weighting\" to controversial theories than to established ones - we'd expect the established scientist to write papers supporting the established theories, but not controversial ones. However, blind review makes this impossible. The reviewers may even mistake the established scientist for an undiscriminating free thinker, who endorses any controversial theory simply because it's unpopular. This will slow down the acceptance of hypotheses which are, in fact, correct.

\r\n

A possible objection to this could be that old scientists are unlikely to change their minds, and therefore old scientists not getting enough credit for their achievements won't have much of an effect in the spread of new hypotheses. (After all, if no old scientist endorses controversial hypotheses, then it doesn't matter if those endorsements aren't properly weighted in review.) Not so. Messeri finds that science does actually progress a lot faster than \"one funeral at a time\" (as Max Planck put it), and old scientists are ready to adopt new theories given sufficient evidence. While the last holdouts for outdated theories do tend to be the old, their increased security also makes them the first who are willing to publicly support new theories:

\r\n
\r\n

...the episode which prompted Planck's 'observation' ... seems like a poor illustration of the 'fact' Planck claims to have learned. ... Wilhelm Ostwald, one of the leaders of the opposition of 'Energetics' school prominently mentioned by Planck, was only five years older than Planck, whereas Ludwig Boltzmann, whose theoretical work on entropy, in no small measure ... helped bring the scientific community around to Planck's view, was fourteen years Planck's senior. ...

\r\n

In a study of the Chemical Revolution, McCann reports a negative correlation between author's age and the use of the oxygen paradigm in scientific papers written between 1760 and 1795. On closer inspection of the data, he finds the earliest group of converts to the oxygen paradigm (between 1772 and 1777) were middle-aged men with close ties to Lavoisier; the inverse age effect became manifest only after 1785, during the ten-year period of 'major conversion and consolidation'. ...

\r\n

As for evolutionary theory, Hull and his colleagues find weak support for 'Planck's Principle' among nineteenth-century British scientists. The small minority of scientists who held out against the theory after 1869 were, on average, almost ten years older than earlier adopters. Age in 1859 (the year the Origin of Species was published) was unrelated, however, to speed of acceptance for the great majority of those converting to evolutionary theory by 1869. ...

I examined the publications of ninety-six North American earth scientists actively engaged in pertinent research during the 1960s and 1970s. ... The dependent variable for this study is the year in which a scientist decided to adopt the mobilist programme of research rather than to continue working within a stabilist programme. ... Before 1966, when prevailing scientific opinion still ran strongly against the mobilist perspective, the small number of scientists adopting the programme were considerably older (in terms of career age) than other scientists active during this early period. Thus, scientists adopting the programme through 1963 were on average nineteen years 'older' than non-adopters. ... Adopters in 1964 were twenty-three years older than non-adopters. ... Only with the shift in scientific opinion favourable to mobilist concepts beginning in 1966, do we start to see a progressive narrowing, and then reversal, in the age differentials between adopters and non-adopters.

\r\n
" } }, { "_id": "6QqfAirjEQLiwcosH", "title": "Is it rational to take psilocybin?", "pageUrl": "https://www.lesswrong.com/posts/6QqfAirjEQLiwcosH/is-it-rational-to-take-psilocybin", "postedAt": "2009-03-06T04:44:05.349Z", "baseScore": 13, "voteCount": 24, "commentCount": 56, "url": null, "contents": { "documentId": "6QqfAirjEQLiwcosH", "html": "

\n

Is it rational to take psilocybin?

\n

Just to make my definition of rational clear:

\n

Rationality is only intelligible in the context of a goal (whether that goal be rational or irrational). One who acts rationally will, given their information set, choose the best plan of action for achieving their goal. Part of being rational is knowing which goals will maximize one’s utility function.

\n

According to Discovery:

\n

“Scientists released their findings on a recent survey of volunteer psilocybin users 14 months after they took the drug.”

\n

“Sixty-four percent of the volunteers said they still felt at least a moderate increase in well-being or life satisfaction, in terms of things like feeling more creative, self-confident, flexible and optimistic. And 61 percent reported at least a moderate behavior change in what they considered positive ways.”

\n

Assuming you won’t get a bad trip, is it rational to take the drug?

\n

I doubt a psychedelic experience can help me optimize my current utility function better than my sober self. How can I be more rational from a drug? Therefore, I conclude that it must, in fact, change my preference ordering—make me care about things more than I would have otherwise. I prefer my preferences and therefore would rather keep my preferences the way they are now.

\n

If you were guaranteed to have all these positive results from taking the drug, would you take it?

\n

 

" } }, { "_id": "DNQw596nPCX4x7xT9", "title": "Information cascades", "pageUrl": "https://www.lesswrong.com/posts/DNQw596nPCX4x7xT9/information-cascades", "postedAt": "2009-03-06T04:08:04.882Z", "baseScore": 61, "voteCount": 62, "commentCount": 36, "url": null, "contents": { "documentId": "DNQw596nPCX4x7xT9", "html": "

An information cascade is a problem in group rationality. Wikipedia has excellent introductions and links about the phenomenon, but here is a meta-ish example using likelihood ratios.

\n

Suppose in some future version of this site, there are several well-known facts:

\n\n

Let's talk about how the very first reader would vote. If they judged the post high quality, then they would multiply the prior likelihood ratio (6:4) by the Bayes factor for a high private signal (4:1), get (6*4:4*1) = (24:4) = (6:1), and vote the post up. If they judged the post low quality, then they would instead multiply by the Bayes factor for a low private signal (1:4), get (6*1:4*4) = (6:16) = (3:8), and vote the post down.

\n

There were two scenarios for the first reader (private information high or low). If we speculate that the first reader did in fact vote up, then there are two scenarios for the second reader:

1. Personal judgement high: (6:4)*(4:1)*(4:1) = (24:1), vote up.
2. Personal judgement low: (6:4)*(1:4)*(4:1) = (6:4), vote up against personal judgement.

Note that now there are two explanations for ending up two votes up. It could be that the second reader actually agreed, or it could be that the second reader was following the first reader and the prior against their personal judgement. That means that the third reader gets zero information from the second reader's personal judgement! The two scenarios for the third reader, and every future reader, are exactly analogous to the two scenarios for the second reader.

1. Personal judgement high: (6:4)*(4:1)*(4:1) = (24:1), vote up.
2. Personal judgement low: (6:4)*(1:4)*(4:1) = (6:4), vote up against personal judgement.
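
Here is a short sketch that reproduces these odds-ratio updates (the 6:4 prior and the 4:1 and 1:4 Bayes factors are the numbers used above; the helper function is my own):

```python
from math import gcd

def update(odds, bayes_factor):
    # Multiply an odds ratio by a Bayes factor, component-wise, and reduce.
    a = odds[0] * bayes_factor[0]
    b = odds[1] * bayes_factor[1]
    g = gcd(a, b)
    return (a // g, b // g)

prior = (6, 4)
high, low = (4, 1), (1, 4)

print(update(prior, high))                # first reader, high signal: (6, 1), vote up
print(update(update(prior, high), high))  # second reader, high signal: (24, 1), vote up
print(update(update(prior, high), low))   # second reader, low signal: (3, 2) = 6:4, still vote up
```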

This has been a nightmare scenario of groupthink afflicting even diligent bayesians. Possible conclusions:

\n\n

Note: Olle found an error that necessitated a rewrite. I apologize.

" } }, { "_id": "ijSZW27bd8dCqBwCC", "title": "Recommended Rationalist Resources", "pageUrl": "https://www.lesswrong.com/posts/ijSZW27bd8dCqBwCC/recommended-rationalist-resources", "postedAt": "2009-03-05T20:23:09.098Z", "baseScore": 6, "voteCount": 8, "commentCount": 34, "url": null, "contents": { "documentId": "ijSZW27bd8dCqBwCC", "html": "

I thought Recommended Rationalist Reading was very useful and interesting. Now we have voting and threading it seems a good time to comprehensively gather opinions on online material.

\n

Please suggest high-quality links related to or useful for improving rationality. It could be a blog, a forum, a great essay, a reference site, an e-book, anything clickable. Anyone interested can then check out what looks promising and report back.

\n

[edit]
There seems to be confusion... The post's for online material, not physical books. We already have Recommended Rationalist Reading, but as that hasn't got threading and voting, if people think it's a good idea I (or someone else) can do a separate post for books [metaedit] ...not happening, it's against blog guidelines. [/metaedit]

\n

Looks like we're getting lots of suggestions, so please don't forget to vote on them, giving busier readers an idea of which ones are most worth investigating!
[/edit]

\n
\n

Contributors - if making multiple suggestions, please give each their own comment so we can vote on them separately. Click 'Help' for how to do links.

\n

Voters - for top-level comments containing suggestions (as opposed to comments replying to suggestions), please vote on the quality of the resource, not anything else in the comment. If you feel strongly about the comment quality, just post a sub-comment.

\n

Here's 3 to get started:

" } }, { "_id": "5zkntzzStbYsSaDza", "title": "Define Rationality", "pageUrl": "https://www.lesswrong.com/posts/5zkntzzStbYsSaDza/define-rationality", "postedAt": "2009-03-05T18:25:06.240Z", "baseScore": 1, "voteCount": 12, "commentCount": 14, "url": null, "contents": { "documentId": "5zkntzzStbYsSaDza", "html": "

I would like to suggest that we try to come up with several definitions of rationality. I don't feel we have exhausted this search area by any means. Robin has suggested, \"More \"rational\" means better believing what is true, given one's limited info and analysis resources\". Other commenters have emphasised goal-directed behaviour as a necessary ingredient of rationality. I think these definitions miss out on several important ingredients - such as the social nature of rationality. There is also a subtext which argues that rationality only gives one (correct) answer, even if we can only approximate it. I feel strongly that rationality can give several correct answers and thus imagination is an ingredient of rationality. So without in any way believing that I have found the one correct definition, I propose the following: When two or more brains try to be sensible about things and expand their agency. I believe that \"sensible\" in this context does not need to be defined, as it is a primitive and each player will submit their own meaning.

\n

Maybe this is a can of worms - but are there other suggestions or definitions of rationality we can apply in our lives?

" } }, { "_id": "HahzBTjKLFLv4iQE8", "title": "Kinnaird's truels", "pageUrl": "https://www.lesswrong.com/posts/HahzBTjKLFLv4iQE8/kinnaird-s-truels", "postedAt": "2009-03-05T16:50:02.621Z", "baseScore": 30, "voteCount": 35, "commentCount": 35, "url": null, "contents": { "documentId": "HahzBTjKLFLv4iQE8", "html": "

A \"truel\" is something like a duel, but among three gunmen. Martin Gardner popularized a puzzle based on this scenario, and there are many variants of the puzzle which mathematicians and game theorists have analyzed.

\n

The optimal strategy varies with the details of the scenario, of course. One take-away from the analyses is that it is often disadvantageous to be very skillful. A very skillful gunman is a high-priority target.
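As a rough illustration of that take-away, here is a small Monte Carlo sketch in Python of one classic truel variant. The rules and hit probabilities are assumptions for the example, not Gardner's exact puzzle: the gunmen fire in turns from worst shot to best, each aims at the most accurate surviving rival, and the worst shot can optionally waste his turn while both rivals are still standing.

import random

ACCURACY = {'poor': 0.3, 'good': 0.8, 'ace': 1.0}  # assumed hit probabilities
ORDER = ['poor', 'good', 'ace']                    # assumed firing order

def run_truel(poor_abstains, rng):
    alive = set(ACCURACY)
    while len(alive) > 1:
        for shooter in ORDER:
            if shooter not in alive or len(alive) == 1:
                continue
            rivals = alive - {shooter}
            if shooter == 'poor' and poor_abstains and len(rivals) == 2:
                continue  # deliberately fire into the air while both rivals live
            target = max(rivals, key=ACCURACY.get)  # aim at the biggest threat
            if rng.random() < ACCURACY[shooter]:
                alive.discard(target)
    return alive.pop()

def survival_rates(poor_abstains, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = {name: 0 for name in ACCURACY}
    for _ in range(trials):
        wins[run_truel(poor_abstains, rng)] += 1
    return {name: count / trials for name, count in wins.items()}

print('poor shot fires normally:  ', survival_rates(poor_abstains=False))
print('poor shot fires in the air:', survival_rates(poor_abstains=True))

Under these assumed numbers the perfect shot, as everyone's first target, has the lowest survival rate, and the worst shot noticeably improves his odds by firing into the air while the other two thin each other out - which is the sense in which skill can be a liability.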

\n

The environment of evolutionary adaptedness undoubtedly contained multiplayer social games. If some of these games had a truel-like structure, they may have rewarded mediocrity. This might be an explanation of psychological phenomena like \"fear of success\" and \"choking under pressure\".

\n

Robin Hanson has mentioned that there are costs to \"truth-seeking\". One of the example costs might be convincingly declaring \"I believe in God\" in order to be accepted into a religious community. I think truels are a game-theoretic structure that suggests that there are costs to (short-sighted) \"winning\", just as there are costs to \"truth-seeking\".

\n

How can you identify truel-like situations? What should you (a rationalist) do if you might be in a truel-like situation?

\n


" } }, { "_id": "LD5kLJEwzaPrc9cik", "title": "Posting now enabled on Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/LD5kLJEwzaPrc9cik/posting-now-enabled-on-less-wrong", "postedAt": "2009-03-05T16:15:00.000Z", "baseScore": 2, "voteCount": 1, "commentCount": 6, "url": null, "contents": { "documentId": "LD5kLJEwzaPrc9cik", "html": "

Posting is now enabled on Less Wrong, with a minimum karma required of 20 - that is, you must have gotten at least 20 upvotes on your comments in order to publish a post.  Or an administrator such as myself or Robin (by default you should bother me) can temporarily bless you with posting ability - in the long run this shouldn't happen much.

For those of you who haven't yet subscribed to / gotten in the habit of checking Less Wrong:

\n

The five most recent LW posts now appear in OB's sidebar (and vice versa), but aside from this you shouldn't expect further regular summaries of LW on OB.

" } }, { "_id": "sr7n7WpiisSJ8oJk2", "title": "Posting now enabled", "pageUrl": "https://www.lesswrong.com/posts/sr7n7WpiisSJ8oJk2/posting-now-enabled", "postedAt": "2009-03-05T15:56:45.958Z", "baseScore": 5, "voteCount": 11, "commentCount": 24, "url": null, "contents": { "documentId": "sr7n7WpiisSJ8oJk2", "html": "

Posting is now enabled with a minimum karma required of 20 - that is, you must have gotten at least 20 upvotes on your comments in order to publish a post.  Or an administrator such as myself or Robin (by default you should bother me) can temporarily bless you with posting ability - in the long run this shouldn't happen much.

" } }, { "_id": "w6QGyqyoL8k5GwuPu", "title": "Rationality and Positive Psychology", "pageUrl": "https://www.lesswrong.com/posts/w6QGyqyoL8k5GwuPu/rationality-and-positive-psychology", "postedAt": "2009-03-05T15:31:16.803Z", "baseScore": 4, "voteCount": 19, "commentCount": 12, "url": null, "contents": { "documentId": "w6QGyqyoL8k5GwuPu", "html": "

    Robin recently had us consider the costs of rationality, but I have been thinking about the benefits. I typically think of rationality as having instrumental value, but after reading Mihály Csíkszentmihályi's work on flow, I began pondering its status as an intrinsically fulfilling activity. Cognitive and evolutionary psychology are major components in the study of rationality, but I haven't seen connections drawn between rationality and positive psychology before. Csíkszentmihályi (said cheek-sent-me-high-ee) defines \"flow\" as a state of intense focus where you lose track of yourself and become completely involved in what you are doing. After some consideration, I am intrigued by the similarities between the practice of rationality and a flow-like state of mind.

\n

Csíkszentmihályi identifies the following components of flow:

\n
\n

1. Clear goals and expectations about the task at hand.

\n

2. Direct and immediate feedback.

\n

3. A high degree of concentration.

\n

4. Loss of self-consciousness.

\n

5. Altered perception of time.

\n

6. Challenge proportionate to your skill level.

\n

7. Feeling of control over the situation.

\n

8. Activity is intrinsically rewarding.

\n

(summarized from Flow (psychology))

\n
\n

    The first component appears directly tied to the concepts of conservation of expectation and making predictions in advance. By clearly defining your goals, how you will respond to new evidence, and what your current predictions are, you will know how to react in advance. Clear goals and expectations allow you to better recognize successes and failures. A well-defined scoring function is important to guide intelligence as an optimization process, but also allows you to be fulfilled when you do encounter success.

\n

    The second component depends on the first, but it also entails finding reliable feedback to evaluate those expectations against. Careful rationalists use precise quantitative measures and math where possible. The precision of math can give detailed and immediate feedback, so long as it does not introduce additional unjustified complexity. As rationalists, we should actively be seeking out feedback on our beliefs and predictions. Feedback assists calibration, and if correctly used, prevents us from being trapped in bad beliefs and poor methods.

\n

    Components three, four, and five are all closely related. These three are more of a mixed bag than the first two. On one hand, this aspect of flow represents the directive to shut up and multiply: forget yourself and your own feelings and focus only on the issue at hand. However, until your rationality instincts are sufficiently honed, becoming absorbed in a task means that you could be forgetting about unconscious biases. Like any skill though, with practice, checking biases can become automatic.

\n

    The ability to automatically check biases leads into the sixth part. The challenge of a task should be proportionate to your skill level. A disproportionately difficult task leads to frustration, and an insufficiently difficult one leads to relaxation or boredom. Relaxation is not a bad thing, but is distinct from flow and antithetical to rationality. If curiosity should lead to its own destruction, we shouldn't allow ourselves to always be in a state of relaxation. As beginning rationalists, the challenge can simply be becoming aware of our biases and the principles of rationality. As this becomes easier and our skill increases, we can focus on more and more difficult issues to apply ourselves to. I am interested in the possibility of developing simple standard challenges that allow rationalists to build awareness without being overwhelming. Reading the origins thread, it appears religion played this role for many of us, but I think the issue is too fraught with emotion to be a reliable standard challenge.

\n

    The seventh component is a little more difficult for rationalists. Being open and willing to relinquish beliefs and acknowledge mistakes seems contrary to being in control. Nevertheless, dispelling biases and reflecting on our values means that we can be in better control of our minds and behaviors.

\n

    Finally, for many of us, curiosity is an intrinsic desire. In my case, I only need to give it more of a chance to express itself.

\n

    While I began by considering whether rationality is intrinsically fulfilling, this has really been a discussion of the practice of rationality. And even then, to be careful I should say the practice of rationality is still only instrumentally valuable; just less so than commonly thought. Eliezer thinks that preferences should be neutral to rituals of cognition, which I am inclined to agree with. That rationality tends to produce a state of flow in me is a highly contingent fact. Is it possible to lessen the costs of rationality and increase its benefits?

\n

    Does anyone else have any thoughts or experiences on this subject? Is anyone aware of a more rigorous or academic study on the relation between rationality and positive psychology?

" } }, { "_id": "wP2ymm44kZZwaFPYh", "title": "Belief in Self-Deception", "pageUrl": "https://www.lesswrong.com/posts/wP2ymm44kZZwaFPYh/belief-in-self-deception", "postedAt": "2009-03-05T15:20:27.590Z", "baseScore": 104, "voteCount": 101, "commentCount": 114, "url": null, "contents": { "documentId": "wP2ymm44kZZwaFPYh", "html": "

I spoke yesterday of my conversation with a nominally Orthodox Jewish woman who vigorously defended the assertion that she believed in God, while seeming not to actually believe in God at all.

\n

While I was questioning her about the benefits that she thought came from believing in God, I introduced the Litany of Tarski—which is actually an infinite family of litanies, a specific example being:

\n

  If the sky is blue
      I desire to believe \"the sky is blue\"
  If the sky is not blue
      I desire to believe \"the sky is not blue\".

\n

\"This is not my philosophy,\" she said to me.

\n

\"I didn't think it was,\" I replied to her.  \"I'm just asking—assuming that God does not exist, and this is known, then should you still believe in God?\"

\n

She hesitated.  She seemed to really be trying to think about it, which surprised me.

\n

\"So it's a counterfactual question...\" she said slowly.

\n

I thought at the time that she was having difficulty allowing herself to visualize the world where God does not exist, because of her attachment to a God-containing world.

\n

Now, however, I suspect she was having difficulty visualizing a contrast between the way the world would look if God existed or did not exist, because all her thoughts were about her belief in God, but her causal network modelling the world did not contain God as a node.  So she could easily answer \"How would the world look different if I didn't believe in God?\", but not \"How would the world look different if there was no God?\"

\n

She didn't answer that question, at the time.  But she did produce a counterexample to the Litany of Tarski:

\n

She said, \"I believe that people are nicer than they really are.\"

\n

\n

I tried to explain that if you say, \"People are bad,\" that means you believe people are bad, and if you say, \"I believe people are nice\", that means you believe you believe people are nice.  So saying \"People are bad and I believe people are nice\" means you believe people are bad but you believe you believe people are nice.

\n

I quoted to her:

\n

  \"If there were a verb meaning 'to believe falsely', it would not have any
  significant first person, present indicative.\"
          —Ludwig Wittgenstein

\n

She said, smiling, \"Yes, I believe people are nicer than, in fact, they are.  I just thought I should put it that way for you.\"

\n

  \"I reckon Granny ought to have a good look at you, Walter,\" said Nanny.  \"I reckon
  your mind's all tangled up like a ball of string what's been dropped.\"
          —Terry Pratchett, Maskerade

\n

And I can type out the words, \"Well, I guess she didn't believe that her reasoning ought to be consistent under reflection,\" but I'm still having trouble coming to grips with it.

\n

I can see the pattern in the words coming out of her lips, but I can't understand the mind behind them on an empathic level.  I can imagine myself into the shoes of baby-eating aliens and the Lady 3rd Kiritsugu, but I cannot imagine what it is like to be her.  Or maybe I just don't want to?

\n

This is why intelligent people only have a certain amount of time (measured in subjective time spent thinking about religion) to become atheists.  After a certain point, if you're smart, have spent time thinking about and defending your religion, and still haven't escaped the grip of Dark Side Epistemology, the inside of your mind ends up as an Escher painting.

\n

(One of the other few moments that gave her pause—I mention this, in case you have occasion to use it—is when she was talking about how it's good to believe that someone cares whether you do right or wrong—not, of course, talking about how there actually is a God who cares whether you do right or wrong, this proposition is not part of her religion—

\n

And I said, \"But I care whether you do right or wrong.  So what you're saying is that this isn't enough, and you also need to believe in something above humanity that cares whether you do right or wrong.\"  So that stopped her, for a bit, because of course she'd never thought of it in those terms before.  Just a standard application of the nonstandard toolbox.)

\n

Later on, at one point, I was asking her if it would be good to do anything differently if there definitely was no God, and this time, she answered, \"No.\"

\n

\"So,\" I said incredulously, \"if God exists or doesn't exist, that has absolutely no effect on how it would be good for people to think or act?  I think even a rabbi would look a little askance at that.\"

\n

Her religion seems to now consist entirely of the worship of worship.  As the true believers of older times might have believed that an all-seeing father would save them, she now believes that belief in God will save her.

\n

After she said \"I believe people are nicer than they are,\" I asked, \"So, are you consistently surprised when people undershoot your expectations?\"  There was a long silence, and then, slowly:  \"Well... am I surprised when people... undershoot my expectations?\"

\n

I didn't understand this pause at the time.  I'd intended it to suggest that if she was constantly disappointed by reality, then this was a downside of believing falsely.   But she seemed, instead, to be taken aback at the implications of not being surprised.

\n

I now realize that the whole essence of her philosophy was her belief that she had deceived herself, and the possibility that her estimates of other people were actually accurate, threatened the Dark Side Epistemology that she had built around beliefs such as \"I benefit from believing people are nicer than they actually are.\"

\n

She has taken the old idol off its throne, and replaced it with an explicit worship of the Dark Side Epistemology that was once invented to defend the idol; she worships her own attempt at self-deception.  The attempt failed, but she is honestly unaware of this.

\n

And so humanity's token guardians of sanity (motto: \"pooping your deranged little party since Epicurus\") must now fight the active worship of self-deception—the worship of the supposed benefits of faith, in place of God.

\n

This actually explains a fact about myself that I didn't really understand earlier—the reason why I'm annoyed when people talk as if self-deception is easy, and why I write entire blog posts arguing that making a deliberate choice to believe the sky is green, is harder to get away with than people seem to think.

\n

It's because—while you can't just choose to believe the sky is green—if you don't realize this fact, then you actually can fool yourself into believing that you've successfully deceived yourself.

\n

And since you then sincerely expect to receive the benefits that you think come from self-deception, you get the same sort of placebo benefit that would actually come from a successful self-deception.

\n

So by going around explaining how hard self-deception is, I'm actually taking direct aim at the placebo benefits that people get from believing that they've deceived themselves, and targeting the new sort of religion that worships only the worship of God.

\n

Will this battle, I wonder, generate a new list of reasons why, not belief, but belief in belief, is itself a good thing?  Why people derive great benefits from worshipping their worship?  Will we have to do this over again with belief in belief in belief and worship of worship of worship?  Or will intelligent theists finally just give up on that line of argument?

\n

I wish I could believe that no one could possibly believe in belief in belief in belief, but the Zombie World argument in philosophy has gotten even more tangled than this and its proponents still haven't abandoned it.

\n

I await the eager defenses of belief in belief in the comments, but I wonder if anyone would care to jump ahead of the game and defend belief in belief in belief?  Might as well go ahead and get it over with.

" } }, { "_id": "ZP2om2oWHPhvWP2Q3", "title": "The ethic of hand-washing and community epistemic practice", "pageUrl": "https://www.lesswrong.com/posts/ZP2om2oWHPhvWP2Q3/the-ethic-of-hand-washing-and-community-epistemic-practice", "postedAt": "2009-03-05T04:28:28.528Z", "baseScore": 61, "voteCount": 48, "commentCount": 48, "url": null, "contents": { "documentId": "ZP2om2oWHPhvWP2Q3", "html": "

Related to: Use the Native Architecture

\n

When cholera moves through countries with poor drinking water sanitation, it apparently becomes more virulent. When it moves through countries that have clean drinking water (more exactly, countries that reliably keep fecal matter out of the drinking water), it becomes less virulent. The theory is that cholera faces a tradeoff between rapidly copying within its human host (so that it has more copies to spread) and keeping its host well enough to wander around infecting others. If person-to-person transmission is cholera’s only means of spreading, it will evolve to keep its host well enough to spread it. If it can instead spread through the drinking water (and thus spread even from hosts who are too ill to go out), it will evolve toward increased lethality. (Critics here.)

\n

I’m stealing this line of thinking from my friend Jennifer Rodriguez-Mueller, but: I’m curious whether anyone’s gotten analogous results for the progress and mutation of ideas, among communities with different communication media and/or different habits for deciding which ideas to adopt and pass on. Are there differences between religions that are passed down vertically (parent to child) vs. horizontally (peer to peer), since the former do better when their bearers raise more children? Do mass media such as radio, TV, newspapers, or printing presses decrease the functionality of the average person’s ideas, by allowing ideas to spread in a manner that is less dependent on their average host’s prestige and influence? (The intuition here is that prestige and influence might be positively correlated with the functionality of the host’s ideas, at least in some domains, while the contingencies determining whether an idea spreads through mass media instruments might have less to do with functionality.)

\n

Extending this analogy -- most of us were taught as children to wash our hands. We were given the rationale, not only of keeping ourselves from getting sick, but also of making sure we don’t infect others. There’s an ethic of sanitariness that draws from the ethic of being good community members.

\n

Suppose we likewise imagine that each of us contains a variety of beliefs, some well-founded and some not. Can we make an ethic of “epistemic hygiene” to describe practices that will selectively cause our more accurate beliefs to spread, and cause our less accurate beliefs to stay contained, even in cases where the individuals spreading those beliefs don’t know which is which? That is: (1) is there a set of simple, accessible practices (analogous to hand-washing) that will help good ideas spread and bad ideas stay contained; and (2) is there a nice set of metaphors and moral intuitions that can keep the practices alive in a community? Do we have such an ethic already, on OB or in intellectual circles more generally? (Also, (3) we would like some other term besides “epistemic hygiene” that would be less Orwellian and/or harder to abuse -- any suggestions? Another wording we’ve heard is “good cognitive citizenship”, which sounds relatively less prone to abuse.)

\n

Honesty is an obvious candidate practice, and honesty has much support from human moral intuitions. But “honesty” is too vague to pinpoint the part that’s actually useful. Being honest about one’s evidence and about the actual causes of one’s beliefs is valuable for distinguishing accurate from mistaken beliefs. However, a habit of focussing attention on evidence and on the actual causes of one’s own as well as one’s interlocutor’s beliefs would be just as valuable, and such a practice is not part of the traditional requirements of “honesty”. Meanwhile, I see little reason to expect a socially-endorsed practice of “honesty” about one’s “sincere” but carelessly assembled opinions (about politics, religion, the neighbors’ character, or anything else) to selectively promote accurate ideas.

\n

Another candidate practice is the practice of only passing on ideas one has oneself verified from empirical evidence (as in the ethic of traditional rationality, where arguments from authority are banned, and one attains virtue by checking everything for oneself). This practice sounds plausibly useful against group failure modes where bad ideas are kept in play, and passed on, in large part because so many others believe the idea (e.g. religious beliefs, or the persistence of Aristotelian physics in medieval scholasticism; this is the motivation for the scholarly norm of citing primary literature such as historical documents or original published experiments). But limiting individuals’ sharing to the (tiny) set of beliefs they can themselves check sounds extremely costly. Rolf Nelson’s suggestion that we find words to explicitly separate “individual impressions” (impressions based only on evidence we’ve ourselves verified) from “beliefs” (which include evidence from others’ impressions) sounds promising as a means of avoiding circular evidence while also benefiting from others’ evidence. I’m curious how many here are habitually distinguishing impressions from beliefs. (I am. I find it useful.)

\n

Are there other natural ideas? Perhaps social norms that accord status for reasoned opinion-change in the face of new good evidence, rather than norms that dock status from the “losers” of debates? Or social norms that take care to leave one’s interlocutor a line of retreat in all directions -- to take care to avoid setting up consistency and commitment pressures that might wedge them toward either your ideas or their own? (I’ve never seen this strategy implemented as a community norm. Some people conscientiously avoid “rhetorical tricks” or “sales techniques” for getting their interlocutor to adopt their ideas; but I’ve never seen a social norm of carefully preventing one’s interlocutor from having status- or consistency pressures toward entrenchedly keeping their own pre-existing ideas.) These norms strike me as plausibly helpful, if we could manage to implement them. However, they appear difficult to integrate with human instincts and moral intuitions around purity and hand-washing, whereas honesty and empiricism fit comparatively well into human purity intuitions. Perhaps this is why these social norms are much less practiced.

\n

In any case:

\n

(1) Are ethics of “epistemic hygiene”, and of the community impact of one’s speech practices, worth pursuing? Are they already in place? Are there alternative moral frames that one might pursue instead? Are human instincts around purity too dangerously powerful and inflexible for sustainable use in community epistemic practice?

\n

(2) What community practices do you actually find useful, for creating community structures where accurate ideas are selectively promoted?

" } }, { "_id": "rZX4WuufAPbN6wQTv", "title": "No, Really, I've Deceived Myself", "pageUrl": "https://www.lesswrong.com/posts/rZX4WuufAPbN6wQTv/no-really-i-ve-deceived-myself", "postedAt": "2009-03-04T23:29:50.910Z", "baseScore": 133, "voteCount": 128, "commentCount": 90, "url": null, "contents": { "documentId": "rZX4WuufAPbN6wQTv", "html": "

I recently spoke with a person who... it's difficult to describe.  Nominally, she was an Orthodox Jew.  She was also highly intelligent, conversant with some of the archaeological evidence against her religion, and the shallow standard arguments against religion that religious people know about.  For example, she knew that Mordecai, Esther, Haman, and Vashti were not in the Persian historical records, but that there was a corresponding old Persian legend about the Babylonian gods Marduk and Ishtar, and the rival Elamite gods Humman and Vashti.  She knows this, and she still celebrates Purim.  One of those highly intelligent religious people who stew in their own contradictions for years, elaborating and tweaking, until their minds look like the inside of an M. C. Escher painting.

\n

Most people like this will pretend that they are much too wise to talk to atheists, but she was willing to talk with me for a few hours.

\n

As a result, I now understand at least one more thing about self-deception that I didn't explicitly understand before—namely, that you don't have to really deceive yourself so long as you believe you've deceived yourself.  Call it \"belief in self-deception\".

\n

When this woman was in high school, she thought she was an atheist.  But she decided, at that time, that she should act as if she believed in God.  And then—she told me earnestly—over time, she came to really believe in God.

\n

So far as I can tell, she is completely wrong about that.  Always throughout our conversation, she said, over and over, \"I believe in God\", never once, \"There is a God.\"  When I asked her why she was religious, she never once talked about the consequences of God existing, only about the consequences of believing in God.  Never, \"God will help me\", always, \"my belief in God helps me\".  When I put to her, \"Someone who just wanted the truth and looked at our universe would not even invent God as a hypothesis,\" she agreed outright.

\n

She hasn't actually deceived herself into believing that God exists or that the Jewish religion is true.  Not even close, so far as I can tell.

\n

On the other hand, I think she really does believe she has deceived herself.

\n

So although she does not receive any benefit of believing in God—because she doesn't—she honestly believes she has deceived herself into believing in God, and so she honestly expects to receive the benefits that she associates with deceiving oneself into believing in God; and that, I suppose, ought to produce much the same placebo effect as actually believing in God.

\n

And this may explain why she was motivated to earnestly defend the statement that she believed in God from my skeptical questioning, while never saying \"Oh, and by the way, God actually does exist\" or even seeming the slightest bit interested in the proposition.

" } }, { "_id": "9SaAyq7F7MAuzAWNN", "title": "Teaching the Unteachable", "pageUrl": "https://www.lesswrong.com/posts/9SaAyq7F7MAuzAWNN/teaching-the-unteachable", "postedAt": "2009-03-03T23:14:39.495Z", "baseScore": 55, "voteCount": 47, "commentCount": 18, "url": null, "contents": { "documentId": "9SaAyq7F7MAuzAWNN", "html": "

Previously in series: Unteachable Excellence
Followup to: Artificial Addition

\n

The literary industry that I called \"excellence pornography\" isn't very good at what it does.  But it is failing at a very important job.  When you consider the net benefit to civilization of Warren Buffett's superstar skills, versus the less glamorous but more communicable trick of \"reinvest wealth to create more wealth\" - there's hardly any comparison.  You can see how much it would matter, if you could figure out how to communicate just one more skill that used to be a secret sauce.  Not the pornographic promise of consuming the entire soul of a superstar.  Just figuring out how to reliably teach one more thing, even if it wasn't everything...

\n

What makes a success hard to duplicate?

\n

Naked statistical chance is always incommunicable.  No matter what you say about your historical luck, you can't teach someone else to have it.  The arts of seizing opportunity, and exposing yourself to positive randomness, are commonly underestimated; I've seen people stopped in their tracks by \"bad luck\" that a Silicon Valley entrepreneur would drive over like a steamroller flattening speed bumps...  Even so, there is still an element of genuine chance left over.

\n

Einstein's superstardom depended on his genetics that gave him the potential to learn his skills.  If a skill relies on having that much brainpower, you can't teach it to most people... Though if the potential is one-in-a-million, then six thousand Einsteins around the world would be an improvement.  (And if we're going to be really creative, who says genes are incommunicable?  It just takes more advanced technology than a blackboard, that's all.)

\n

So when we factor out the genuinely unteachable - what's left?  Where can you push the border?  What is it that might be possible to teach - albeit perhaps very difficult - and isn't being taught?

\n

I was once told that half of Nobel laureates were the students of other Nobel laureates.  This source seems to assert 155 out of 503.  (Interestingly, the same source says that the number of Nobel laureates with Nobel \"grandparents\" (teachers of teachers) is just 60.)  Even after discounting for cherry-picking of students and political pull, this suggests to me that you can learn things by apprenticeship - close supervision, free-form discussion, ongoing error correction over a long period of time - that no Nobel laureate has yet succeeded in putting into any of their many books.

\n

What is it that the students of Nobel laureates learn, but can't put into words?

\n

This subject holds a fascination for me, because of how it delves into the meta, the source behind, the gap between the output and the generator.  We can explain Einstein's General Relativity to students, but we can't make them Einstein.  (If you look at it from the right angle, the whole trick of human intelligence is just an incommunicable insight that humans have and can't explain to a computer.)

\n

The amount of wordless intelligence in our work tends to be underestimated because the words themselves are so much easier to introspect on.  But when I'm paying attention, I can see how much of my searchpower takes place in fast flashes of perception that tell me what's important, which thought to think next.

\n

When I met my apprentice Marcello he was already better at mathematical proof than myself, certainly much faster.  He'd competed at the national level - but in competitions like that you get told which problems are important.  (And also in competitions, you instantly hand in the problem when you're done, and rush on to the next one; without looking over your proof to see if you can simplify it, see it at a glance, learn something more.)  But the really critical thing I was trying to teach him - testing to see if it could even be taught at all - was this sense of which AI problems led somewhere.  \"You can pedal as well as I can,\" I said to him early on when he asked how he was doing, \"but I'm still doing ninety percent of the steering.\"  And it was a constant, tremendous struggle to put anything into words about why I thought  that we hadn't yet found the really important insight that was lurking somewhere in a problem, and so we were going to discard Marcello's current proof and reformulate the problem and try again from another angle, to see if this time we would really understand something.

\n

We go through our life events, and our brain uses an opaque algorithm to grind the experiences to grist, and outputs yet another opaque neural net of circuitry: the procedural skill, the source of wordless intuitions that you know so fast you can't see yourself knowing them.  \"The zeroth step\", I called it, the step in reasoning that comes before the first step and goes by so quickly that you don't realize it's there.

\n

I pride myself on being good at putting things into words, at being able to introspect on the momentary flashes and see their pattern and trend, even if I can't print out the circuitry that is their source.  But when I tried to communicate my cutting edge, the borderline where I advanced my knowledge - then my words were defeated, and I was left working with Marcello on problem after problem, hoping his brain would pick up that unspoken rhythm of the steering:  Turn left, turn right; this is probably worth pursuing, this is not; this seems like a valuable insight, this is just a black box around our ignorance.

\n

I'd expected it to go like that; I'd never had the delusion that the most important parts of thought would be easy to put in words.  If it were that simple we really would have had Artificial Intelligence in the 1970s.

\n

Civilization gets by on teaching the output of the generator without teaching the generator.  Einstein output his various discoveries, and then the generated knowledge is verbal enough to be passed on to students in university.  When another Einstein is needed, civilization just holds its breath and hopes.

\n

But if these wordless skills are the product of experience - then why not communicate the experiences?  Or if fiction isn't good enough, and it probably isn't even close, then why not duplicate the experiences - put people through the same events?

\n

(1)  Superstars may not know what their critical experiences were.

\n

(2)  The critical experiences may be difficult to duplicate - for example, everyone already knows the answer to Special Relativity, and now we can't train people by giving them the problem of Special Relativity.  Just knowing that it has something to do with space and time shifting around, is already too much of a spoiler.  The really important part of the problem is the one where you stare at a blank sheet of paper until drops of blood form on your forehead, trying to figure out what to think next.  The skills of genius are rare, I've suggested, because there is not enough opportunity to practice them.

\n

(3)  There may be luck or genetic talent involved in your brain hitting on the right thing to learn - finding a solution of high quality in the space of wordless procedural skills.  Even if we put you through the same experiences, there's components of true chance and genetic talent left over in having your brain learn the same wordless skill.

\n

But I think there's still reason to go on trying to describe the indescribable and teach the unteachable.

\n

Consider the transition in gambling skill associated with the invention of probability theory a few centuries back.  There's still a leftover art to poker, wordless skills that poker superstars can only partially describe in words.  But go back far enough, and no one would have any idea how to calculate the odds of rolling three dice and coming up with all ones.  And maybe an experienced enough gambler would have a wordless intuition that some things were likelier than others, but they couldn't have put it into words - couldn't have told anyone else what they'd learned about the chances; except, maybe, through a long process of watching over an apprentice's shoulder and supervising their bets.
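For contrast, the three-dice odds mentioned above are now a matter of counting; a quick enumeration over the 216 equally likely outcomes (my example, in Python):

from itertools import product

outcomes = list(product(range(1, 7), repeat=3))
all_ones = sum(1 for roll in outcomes if roll == (1, 1, 1))
print(all_ones, '/', len(outcomes))  # 1 / 216

One chance in 216 - trivial with the modern toolbox, and out of reach for the gambler working from wordless intuition alone.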

\n

The more we learn about a domain, and the more we systematically observe the stars at work, and the more we learn about the human mind in general, the more we can hope for new skills to make the transition from unteachable to apprenticeable to publishable.

\n

And you can hope to trailblaze certain paths, even if you can't set down all the path in words.  Even if you yourself got somewhere through luck (including genetic luck), you can hope to diminish the role of luck on future occasions:

\n

(A)  Warning against blind alleys that delayed you, is one obvious sort of help.

\n

(B)  If you lay down a set of thoughts that are the product of wordless skills, someone reading through the set of thoughts may find their brain picking up the rhythm, making the leap to the unspoken thing behind; and this might require less luck than the events that led to your own original acquisition of those wordless skills.

\n

(C)  There are good attractors in the solution-space - clustered sub-solutions which make it easier to reach other solutions in the same attractor.  Then - even if some of the thoughts can't be put into words, and even if it took a lot of luck to wander into the attractor the first time through - describing everything that can be put into words, may be enough to anchor the attractor.

\n

(D)  Some important experiences are duplicable: for example, you can advise people what areas to study, what books to read.

\n

(E)  And finally, the simple advance of science may just describe a domain better, so that you realize what it is you know, and are suddenly able to communicate it outright.

\n

And of course the punchline is that this is the transition I hope to see in certain aspects of human rationality - skills which have been up until now unteachable, or only passed down from master to apprentice.  We've learned a lot about the domain in the past few decades, and I think it's time to take another shot at systematizing it.

\n

I aspire to diminish the role of luck and talent in producing rationalists of a higher grade.

" } }, { "_id": "9Z3pezjiWLfNANg9P", "title": "The Costs of Rationality", "pageUrl": "https://www.lesswrong.com/posts/9Z3pezjiWLfNANg9P/the-costs-of-rationality", "postedAt": "2009-03-03T18:13:17.465Z", "baseScore": 36, "voteCount": 47, "commentCount": 81, "url": null, "contents": { "documentId": "9Z3pezjiWLfNANg9P", "html": "

The word \"rational\" is overloaded with associations, so let me be clear: to me [here], more \"rational\" means better believing what is true, given one's limited info and analysis resources. 

Rationality certainly can have instrumental advantages.  There are plenty of situations where being more rational helps one achieve a wide range of goals.  In those situations, \"winners\", i.e., those who better achieve their goals, should tend to be more rational.  In such cases, we might even estimate someone's rationality by looking at his or her \"residual\" belief-mediated success, i.e., after explaining that success via other observable factors.

\n

But note: we humans were designed in many ways not to be rational, because believing the truth often got in the way of achieving goals evolution had for us.  So it is important for everyone who intends to seek truth to clearly understand: rationality has costs, not only in time and effort to achieve it, but also in conflicts with other common goals.

\n

Yes, rationality might help you win that game or argument, get promoted, or win her heart.  Or more rationality for you might hinder those outcomes.  If what you really want is love, respect, beauty, inspiration, meaning, satisfaction, or success, as commonly understood, we just cannot assure you that rationality is your best approach toward those ends.  In fact we often know it is not.

\n

The truth may well be messy, ugly, or dispiriting; knowing it may make you less popular, loved, or successful.  These are actually pretty likely outcomes in many identifiable situations.  You may think you want to know the truth no matter what, but how sure can you really be of that?  Maybe you just like the heroic image of someone who wants the truth no matter what; or maybe you only really want to know the truth if it is the bright shining glory you hope for.

\n

Be warned; the truth just is what it is.  If just knowing the truth is not reward enough, perhaps you'd be better off not knowing.  Before you join us in this quixotic quest, ask yourself: do you really want to be generally rational, on all topics?  Or might you be better off limiting your rationality to the usual practical topics where rationality is respected and welcomed?

" } }, { "_id": "34Tu4SCK5r5Asdrn3", "title": "Unteachable Excellence", "pageUrl": "https://www.lesswrong.com/posts/34Tu4SCK5r5Asdrn3/unteachable-excellence", "postedAt": "2009-03-02T15:33:22.933Z", "baseScore": 48, "voteCount": 49, "commentCount": 41, "url": null, "contents": { "documentId": "34Tu4SCK5r5Asdrn3", "html": "

There's a whole genre of literature whose authors want to sell you the secret success sauce behind Gates's Microsoft or Buffett's Berkshire Hathaway - the common theme being that you, yes, you can be the next Larry Page.

\n

But probably not even Warren Buffett can teach you to be the next Warren Buffett.  That kind of extraordinary success is extraordinary because no one has yet figured out how to teach it reliably.

\n

And so mostly these books are a waste of hope, feeding off the excitement from dangling the possibility of the glorious yet unattainable; which is why I call them \"excellence pornography\", with subgenres like investment pornography and business pornography, telling every barista how to run the next Starbucks and every MBA student how to be the best CEO in the Fortune 500.  Calling this \"excellence pornography\" might be too unkind to pornography, which is at least overtly fiction.

\n

Now, there are incredibly powerful techniques that civilization has figured out how to teach: techniques like \"test your ideas by experiment\" or \"reinvest your wealth to generate more wealth\".  You, yes, you can be a scientist!  Or maybe not everyone - but enough people can become scientists by using learnable techniques and communicable knowledge, to support our technological civilization.

\n

\"You, yes, you can reinvest the proceeds of your earlier investments!\"  You may not beat the market like Warren Buffett.  But if you think about a whole civilization practicing that rule, we do better nowadays than historical societies with no banks or stock markets.  (No, really, we still do better on net.)  Because the trick of Reinvestment can be taught, can be described in words, can work for ordinary people without extraordinary luck... we don't think of it as an extraordinary triumph.  Just anyone can do it, so it must not be important.

\n

Warren Buffett did manage to turn on a lot of people to value investing.  He's given out a lot of advice, and it looks like good advice to me on the occasions I've read it.  The impression I get, at least, is that if he knew how to communicate what was left, he would just tell you.

\n

But Berkshire Hathaway, and Buffett personally, still spend huge amounts of time looking for managerial excellence.  Why?  Because they don't know any systematically reliable process to start with bright kids and turn them into Fortune 500 CEOs.

\n

There are things you can learn from the superstars.  But you can't expect to eat their whole soul, and the last mile of their extraordinariness will be the hardest for them to teach.  You will, at best, learn a few useful tricks that a lot of other people can learn as well, and that won't put you anywhere near the delicious high status of the superstar.  Unless, of course, you yourself have the right mix of raw genetic talents and you put years and years into training yourself and have a lot of luck along the way, etcetera, but the point is, you won't get there by reading pornography.

\n

(If someone actually does come up with a new teachable supertrick, so that civilization itself is about to take another lurching step forward, then you should expect to have a lot of fellow superstars by the time you're done learning!) 

\n

There's a number of lessons that I draw from this point; but one of the main ones is that much of the most important information we can learn from history is about how to not lose, rather than how to win.

\n

It's easier to avoid duplicating spectacular failures than to duplicate spectacular successes.  And it's often easier to generalize failure between domains.  The instructions for \"how to be a superstar\" tend to be highly specific and domain-specialized (Buffett ≠ Einstein) but the lessons for \"how not to be an idiot\" have a lot more in common between professions.

\n

Ken Lay can teach you how not to be the next Enron a lot more easily than Warren Buffett can teach you to be the next Berkshire Hathaway.  Casey Serin can teach you how to lose hope, Lord Kelvin can teach you not to worship your own ignorance...

\n

But that kind of lesson won't make you a glorious superstar.  It may prevent your life from becoming miserable, but this is not as glamorous.  And even worse - this kind of lesson may end up showing you that you're doing something wrong, that you, yes, you are about to join the ranks of fools.

\n

It's a lot easier to sell excellence pornography.

" } }, { "_id": "Kn6H8Tk6EPT4Atq4k", "title": "Test Your Rationality", "pageUrl": "https://www.lesswrong.com/posts/Kn6H8Tk6EPT4Atq4k/test-your-rationality", "postedAt": "2009-03-01T13:21:34.375Z", "baseScore": 44, "voteCount": 46, "commentCount": 87, "url": null, "contents": { "documentId": "Kn6H8Tk6EPT4Atq4k", "html": "

So you think you want to be rational, to believe what is true even when sirens tempt you?  Great, get to work; there's lots you can do.  Do you want to justifiably believe that you are more rational than others, smugly knowing your beliefs are more accurate?  Hold on; this is hard.

Humans nearly universally find excuses to believe that they are more correct than others, at least on the important things. They point to others' incredible beliefs, to biases afflicting others, and to estimation tasks where they are especially skilled.  But they forget that most everyone can point to such things.

But shouldn't you get more rationality credit if you spend more time studying common biases, statistical techniques, and the like?  Well this would be good evidence of your rationality if you were in fact pretty rational about your rationality, i.e., if you knew that when you read or discussed such issues your mind would then systematically, broadly, and reasonably incorporate those insights into your reasoning processes. 

But what if your mind is far from rational?  What if your mind is likely to just go through the motions of studying rationality to allow itself to smugly believe it is more accurate, or to bond you more closely to your social allies? 

It seems to me that if you are serious about actually being rational, rather than just believing in your rationality or joining a group that thinks itself rational, you should try hard and often to test your rationality.  But how can you do that? 

To test the rationality of your beliefs, you could sometimes declare beliefs, and later score those beliefs via tests where high scoring beliefs tend to be more rational.  Better tests are those where scores are more tightly and reliably correlated with rationality.  So, what are good rationality tests?
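One mechanical way to run the declare-now, score-later loop - a sketch of mine, not something the post proposes - is to log probabilistic predictions and compute a Brier score once the outcomes are known:

def brier_score(predictions):
    # predictions: list of (stated probability, outcome) with outcome 0 or 1
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

declared = [
    (0.9, 1),  # '90% sure this paper gets accepted' - it was
    (0.7, 0),  # '70% sure the train is on time' - it was not
    (0.5, 1),  # an honest coin-flip guess
]
print(brier_score(declared))  # 0.25; always saying 50% also scores 0.25

The Brier score is a proper scoring rule, so it rewards honest, calibrated probabilities; consistently beating the 0.25 that always-guessing-50% earns is at least a weak, checkable signal of the kind this post is asking for.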

" } }, { "_id": "ajYePZpAM4FYMwrqT", "title": "That You'd Tell All Your Friends", "pageUrl": "https://www.lesswrong.com/posts/ajYePZpAM4FYMwrqT/that-you-d-tell-all-your-friends", "postedAt": "2009-03-01T12:04:39.721Z", "baseScore": 8, "voteCount": 15, "commentCount": 53, "url": null, "contents": { "documentId": "ajYePZpAM4FYMwrqT", "html": "
\n
\n
\n

Followup to: The Most Frequently Useful Thing

\n

What's the number one thing that goes into a book on rationality, which would make you buy a copy of that book for a friend?  We can, of course, talk about all the ways that the rationality of the Distant World At Large needs to be improved.  But in this case - I think the more useful data might be the Near question, \"With respect to the people I actually know, what do I want to see in that book, so that I can give the book to them to explain it?\"

\n

(And again, please think of your own answer-component before reading others' comments.)

\n
\n
\n
" } }, { "_id": "Gjb5fJuDBWamPoCoz", "title": "The Most Frequently Useful Thing", "pageUrl": "https://www.lesswrong.com/posts/Gjb5fJuDBWamPoCoz/the-most-frequently-useful-thing", "postedAt": "2009-02-28T18:43:56.156Z", "baseScore": 12, "voteCount": 20, "commentCount": 57, "url": null, "contents": { "documentId": "Gjb5fJuDBWamPoCoz", "html": "

Followup to: The Most Important Thing You Learned

\n

What's the most frequently useful thing you've learned on OB - not the most memorable or most valuable, but the thing you use most often?  What influences your behavior, factors in more than one decision?  Please give a concrete example if you can.  This isn't limited to archetypally \"mundane\" activities: if your daily life involves difficult research or arguing with philosophers, go ahead and describe that too.

" } }, { "_id": "bsdxuNdSGbfEKZREP", "title": "The Most Important Thing You Learned", "pageUrl": "https://www.lesswrong.com/posts/bsdxuNdSGbfEKZREP/the-most-important-thing-you-learned", "postedAt": "2009-02-27T20:15:59.430Z", "baseScore": 15, "voteCount": 20, "commentCount": 99, "url": null, "contents": { "documentId": "bsdxuNdSGbfEKZREP", "html": "

My current plan does still call for me to write a rationality book - at some point, and despite all delays - which means I have to decide what goes in the book, and what doesn't.  Obviously the vast majority of my OB content can't go into the book, because there's so much of it.

\n

So let me ask - what was the one thing you learned from my posts on Overcoming Bias, that stands out as most important in your mind?  If you like, you can also list your numbers 2 and 3, but it will be understood that any upvotes on the comment are just agreeing with the #1, not the others.  If it was striking enough that you remember the exact post where you \"got it\", include that information.  If you think the most important thing is for me to rewrite a post from Robin Hanson or another contributor, go ahead and say so.  To avoid recency effects, you might want to take a quick glance at this list of all my OB posts before naming anything from just the last month - on the other hand, if you can't remember it even after a year, then it's probably not the most important thing.

\n

Please also distinguish this question from \"What was the most frequently useful thing you learned, and how did you use it?\" and \"What one thing has to go into the book that would (actually) make you buy a copy of that book for someone else you know?\"  I'll ask those on Saturday and Sunday.

\n

PS:  Do please think of your answer before you read the others' comments, of course.

" } }, { "_id": "fwWyQgAFRuNpQJDPm", "title": "Tell Your Rationalist Origin Story... at Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/fwWyQgAFRuNpQJDPm/tell-your-rationalist-origin-story-at-less-wrong", "postedAt": "2009-02-27T04:06:15.000Z", "baseScore": 5, "voteCount": 4, "commentCount": 0, "url": null, "contents": { "documentId": "fwWyQgAFRuNpQJDPm", "html": "
(A beta version of Less Wrong is now live, no old posts imported as yet.  Some of the plans for what to do with Less Wrong relative to OB have been revised by further discussion among Robin, Nick, and myself, but for now we're just seeing what happens once LW is up - whether it's stable, what happens to the tone of comments once threading and voting is enabled, etcetera.

Posting by non-admins is disabled for now - today we're just testing out registration, commenting, threading, etcetera.)
\n \n

To break up the awkward silence at the start of a recent Overcoming Bias meetup, I asked everyone present to tell their rationalist origin story - a key event or fact that played a role in their becoming rationalists.  This worked surprisingly well.

\n

I think I've already told enough of my own origin story on Overcoming Bias: how I was digging in my parents' yard as a kid and found a tarnished silver amulet inscribed with Bayes's Theorem, and how I wore it to bed that night and dreamed of a woman in white, holding a leather-bound book called Judgment Under Uncertainty: Heuristics and Biases (eds. D. Kahneman, P. Slovic, and A. Tversky, 1982)... but there's no need to go into that again.

\n

So, seriously... how did you originally go down that road?

Continue reading "Tell Your Rationalist Origin Story" at Less Wrong »

" } }, { "_id": "qoFhFhGuFDRkb4GTq", "title": "Issues, Bugs, and Requested Features", "pageUrl": "https://www.lesswrong.com/posts/qoFhFhGuFDRkb4GTq/issues-bugs-and-requested-features", "postedAt": "2009-02-26T16:45:28.803Z", "baseScore": 9, "voteCount": 15, "commentCount": 674, "url": null, "contents": { "documentId": "qoFhFhGuFDRkb4GTq", "html": "

[Edit: Issues, Bugs, and Requested Features should be tracked at Google Code, not here -- matt, 2010-04-23]

\n


\n

Less Wrong is still under construction.  Please post any bugs or issues with Less Wrong to this thread.  Try to keep each comment thread a clean discussion of each bug or issue.

\n

Requested features... sure, go ahead, but bear in mind we may not be able to implement for a while.

" } }, { "_id": "h24JGbmweNpWZfBkM", "title": "Markets are Anti-Inductive", "pageUrl": "https://www.lesswrong.com/posts/h24JGbmweNpWZfBkM/markets-are-anti-inductive", "postedAt": "2009-02-26T00:55:33.000Z", "baseScore": 97, "voteCount": 74, "commentCount": 62, "url": null, "contents": { "documentId": "h24JGbmweNpWZfBkM", "html": "

I suspect there's a Pons Asinorum of probability between the bettor who thinks that you make money on horse races by betting on the horse you think will win, and the bettor who realizes that you can only make money on horse races if you find horses whose odds seem poorly calibrated relative to superior probabilistic guesses.

There is, I think, a second Pons Asinorum associated with more advanced finance, and it is the concept that markets are an anti-inductive environment.

Let's say you see me flipping a coin.  It is not necessarily a fair coin.  It's a biased coin, and you don't know the bias.  I flip the coin nine times, and the coin comes up "heads" each time.  I flip the coin a tenth time.  What is the probability that it comes up heads?

If you answered "ten-elevenths, by Laplace's Rule of Succession", you are a fine scientist in ordinary environments, but you will lose money in finance.

In finance the correct reply is, "Well... if everyone else also saw the coin coming up heads... then by now the odds are probably back to fifty-fifty."
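For reference, the ten-elevenths answer above is Laplace's Rule of Succession, which gives (successes + 1) / (trials + 2) as the predictive probability under a uniform prior on the coin's bias; a minimal check:

from fractions import Fraction

def rule_of_succession(successes, trials):
    # predictive probability of another success, under a uniform prior on the bias
    return Fraction(successes + 1, trials + 2)

print(rule_of_succession(9, 9))  # 10/11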

Recently on Hacker News I saw a commenter insisting that stock prices had nowhere to go but down, because the economy was in such awful shape.  If stock prices have nowhere to go but down, and everyone knows it, then trades won't clear - remember, for every seller there must be a buyer - until prices have gone down far enough that there is once again a possibility of prices going up.

So you can see the bizarreness of someone saying, "Real estate prices have gone up by 10% a year for the last N years, and we've never seen a drop."  This treats the market like it was the mass of an electron or something.  Markets are anti-inductive.  If, historically, real estate prices have always gone up, they will keep rising until they can go down.

\n

\n

\n

To get an excess return - a return that pays premium interest over the going rate for that level of riskiness - you need to know something that other market participants don't, or they will rush in and bid up whatever you're buying (or bid down whatever you're selling) until the returns match prevailing rates.

If the economy is awful and everyone knows it, no one's going to buy at a price that doesn't take into account that knowledge.

If there's an obvious possibility of prices dropping further, then the market must also believe there's a probability of prices rising to make up for it, or the trades won't clear.

This elementary point has all sorts of caveats I'm not bothering to include here, like the fact that "up" and "down" is relative to the risk-free interest rate and so on.  Nobody believes the market is really "efficient", and recent events suggest it is less efficient than previously believed, and I have a certain friend who says it's even less efficient than that... but still, the market does not leave hundred-dollar-bills on the table if everyone believes in them.

There was a time when the Dow systematically tended to drop on Friday and rise on Monday, and once this was noticed and published, the effect went away.

Past history, e.g. "real estate prices have always gone up", is not private info.

And the same also goes for more complicated regularities.  Let's say two stock prices are historically anticorrelated - the variance in their returns moves in opposite directions.  As soon as everyone believes this, hedge-fund managers will leverage up and buy both stocks.  Everyone will do this, meaning that both stocks will rise.  As the stocks rise, their returns get more expensive.  The hedge-fund managers book profits, though, because their stocks are rising.  Eventually the stock prices rise to the point they can go down.  Once they do, hedge-fund managers who got in late will have to liquidate some of their assets to cover margin calls.  This means that both stock prices will go down - at the same time, even though they were originally anticorrelated.  Other hedge funds may lose money on the same two stocks and also sell or liquidate, driving the price down further, etcetera.  The correlative structure behaves anti-inductively, because other people can observe it too.
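To make the mechanism concrete, here is a toy simulation (an illustrative sketch only, not from the original argument: the return numbers and the handful of forced-liquidation days are invented, and nothing here models leverage explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical stocks engineered to be anticorrelated in normal times:
# they share a common factor with opposite signs, plus a little noise.
common = rng.normal(0.0, 0.01, size=250)
returns_a = +common + rng.normal(0.0, 0.002, size=250)
returns_b = -common + rng.normal(0.0, 0.002, size=250)

print('correlation in the normal regime:',
      round(np.corrcoef(returns_a, returns_b)[0, 1], 2))   # strongly negative

# Append a few forced-liquidation days on which leveraged holders of *both*
# stocks must sell at once, so both prices fall together.
crash = np.full(5, -0.08)
returns_a = np.concatenate([returns_a, crash])
returns_b = np.concatenate([returns_b, crash])

print('correlation including liquidation days:',
      round(np.corrcoef(returns_a, returns_b)[0, 1], 2))   # sign moves toward positive
```

In the simulated history the two return series are strongly anticorrelated, and a few common liquidation days are enough to push the measured correlation positive - the historical regularity evaporates once enough traders have acted on it at the same time.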

If mortgage defaults are historically uncorrelated, so that you can get an excess return on risk by buying lots of mortgages and pooling them together, then people will rush in and buy lots of mortgages until (a) rates on mortgages are bid down, (b) individual mortgage failure rates rise, and (c) mortgage failure rates become more correlated, possibly looking uncorrelated in the short term but having more future scenarios where they all fail at once.

Whatever is believed in, stops being real.  The market is literally anti-inductive rather than anti-regular - it's the regularity that enough participants induce, which therefore goes away.

\n

\n\n\n

This, as I understand it, is the standard theory of\n"efficient markets", which should perhaps have been called\n"inexploitable markets" or "markets that are not easy to exploit\nbecause others are already trying to exploit them".  Should I have made\na mistake thereof, let me be corrected.

\n

Now it's not surprising, on the one hand, to see this screwed up in random internet discussions where a gold bug argues from well-known observations about the past history of gold.  (This is the equivalent of trying to make money at horse-racing by betting on the horse that you think will win - failing to cross the Pons Asinorum.)

But what is surprising is to hear histories of the financial crisis in which prestigious actors argued in crowded auditoriums that, previously, real-estate prices had always gone up, or that previously mortgage defaults had been uncorrelated.  This is naive inductive reasoning of the sort that only works on falling apples and rising suns and human behavior and everything else in the universe except markets.  Shouldn't everyone have frowned and said, "But isn't the marketplace an anti-inductive environment?"

Not that this is standard terminology - but perhaps "efficient market" doesn't convey quite the same warning as "anti-inductive".  We would appear to need stronger warnings.

PS:  To clarify, the coin example is a humorous exaggeration of what the world would be like if most physical systems behaved the same way as market price movements, illustrating the point, "An exploitable pricing regularity that is easily inducted degrades into inexploitable noise."  Here the coin coming up "heads" is analogous to getting an above-market return on a publicly traded asset.

" } }, { "_id": "ZtckZmtwWxhwgpt8Y", "title": "Image Test", "pageUrl": "https://www.lesswrong.com/posts/ZtckZmtwWxhwgpt8Y/image-test", "postedAt": "2009-02-26T00:12:38.613Z", "baseScore": 1, "voteCount": 2, "commentCount": 0, "url": null, "contents": { "documentId": "ZtckZmtwWxhwgpt8Y", "html": "

\"\"

\n

\"\"

\n

Editing in IE6!

" } }, { "_id": "BHMBBFupzb4s8utts", "title": "Tell Your Rationalist Origin Story", "pageUrl": "https://www.lesswrong.com/posts/BHMBBFupzb4s8utts/tell-your-rationalist-origin-story", "postedAt": "2009-02-25T17:16:11.626Z", "baseScore": 38, "voteCount": 48, "commentCount": 414, "url": null, "contents": { "documentId": "BHMBBFupzb4s8utts", "html": "

To break up the awkward silence at the start of a recent Overcoming Bias meetup, I asked everyone present to tell their rationalist origin story - a key event or fact that played a role in their first beginning to aspire to rationality.  This worked surprisingly well (and I would recommend it for future meetups).

\n

I think I've already told enough of my own origin story on Overcoming Bias: how I was digging in my parents' yard as a kid and found a tarnished silver amulet inscribed with Bayes's Theorem, and how I wore it to bed that night and dreamed of a woman in white, holding an ancient leather-bound book called Judgment Under Uncertainty: Heuristics and Biases (eds. D. Kahneman, P. Slovic, and A. Tversky, 1982)... but there's no need to go into that again.

\n

So, seriously... how did you originally go down that road?

\n

Added:  For some odd reason, many of the commenters here seem to have had a single experience in common - namely, at some point, encountering Overcoming Bias...  But I'm especially interested in what it takes to get the transition started - crossing the first divide.  This would be very valuable knowledge if it can be generalized.  If that did happen at OB, please try to specify what was the crucial \"Aha!\" insight (down to the specific post if possible).

" } }, { "_id": "hwbopYqniG9iDqGDH", "title": "Formative Youth", "pageUrl": "https://www.lesswrong.com/posts/hwbopYqniG9iDqGDH/formative-youth", "postedAt": "2009-02-24T23:02:35.000Z", "baseScore": 33, "voteCount": 25, "commentCount": 46, "url": null, "contents": { "documentId": "hwbopYqniG9iDqGDH", "html": "

Followup to: Against Maturity

\n

\"Rule of thumb:  Be skeptical of things you learned before you could read.  E.g., religion.\"
        -- Ben Casnocha

\n

Looking down on others is fun, and if there's one group we adults can all enjoy looking down on, it's children.  At least I assume this is one of the driving forces behind the incredible disregard for... but don't get me started.

\n

Inconveniently, though, most of us were children at one point or another during our lives.  Furthermore, many of us, as adults, still believe or choose certain things that we happened to believe or choose as children.  This fact is incongruent with the general fun of condescension - it means that your life is being run by a child, even if that particular child happens to be your own past self.

\n

I suspect that most of us therefore underestimate the degree to which our youths were formative - because to admit that your youth was formative is to admit that the course of your life was not all steered by Incredibly Deep Wisdom and uncaused free will.

\n

To give a concrete example, suppose you asked me, \"Eliezer, where does your altruism originally come from?  What was the very first step in the chain that made you amenable to helping others?\"

\n

Then my best guess would be \"Watching He-Man and similar TV shows as a very young and impressionable child, then failing to compartmentalize the way my contemporaries did.\"  (Same reason my Jewish education didn't take; I either genuinely believed something, or didn't believe it at all.  (Not that I'm saying that I believed He-Man was fact; just that the altruistic behavior I picked up wasn't compartmentalized off into some safely harmless area of my brain, then or later.))

\n

It's my understanding that most people would be reluctant to admit this sort of historical fact, because it makes them sound childish - in the sense that they're still being governed by the causal history of a child.

\n

But I find myself skeptical that others are governed by their childhood causal histories so much less than myself - especially when there's a simple alternative explanation: they're too embarrassed to admit it.

\n

\n

A lovely excuse, of course, is that we at first ended up in a certain place for childish reasons, and then we went back and redid the calculations as adults, and what do you know, it magically ended up with the same bottom line.

\n

Well - of course that can happen.  If you ask me why I'm out to save the world, then there's a sense in which I can defend that as a sober utilitarian calculation, \"Shut up and multiply\", that has nothing to do with spending my childhood reading science fiction about protagonists who saved the world.  But if you ask me why I listen to that sober utilitarian calculation, why it actually has the capacity to move me - then yes, the fact that the first \"grownup\" book I read was Dragonflight may have played a role.  It's what F'lar and Lessa would do.

\n

Why not really start over from scratch - throw away our childhoods and redo everything?

\n

For epistemic beliefs that might be sorta-possible, which is why I didn't name an epistemic belief that I think I inherited from the chaos of childhood.  That wouldn't be tolerable, and when I look back, I really have rejected a lot of what I once believed epistemically.

\n

But matters of taste?  Of personality?  Of deeply held ideals and values?

\n

Well, yes, I reformulated my whole metaethics at a certain point and that had a definite influence on my values... but despite that, I think you could draw an obvious line back from where I am now, to factors like reading Dragonlance at age nine and vowing never to end up like Raistlin Majere.  (Bitter genius archetype.)

\n

If you can't look back and draw a line between your current adult self and factors like that, I have to wonder if your self-history is really accurate.

\n

In particular, I have to wonder if you're thinking right now of a deceptively obvious-seeming line that someone else might be tempted to draw, but which of course isn't the real reason why you still...

\n

PS:  Of course I don't directly justify any of my decisions, these days, by saying \"That's what the Thundercats did, therefore it is right.\"  The question is more like whether I ended up finding developed altruistic philosophies more appealing as an adult because, sometime back in my youth, I was bombarded with altruistic messages.

\n

If there are many different stores selling developed philosophies, then which store you walk into to buy your sophisticated adult judgments might depend on a factor like that.

\n

PPS:  Several commenters asked why I focused on fiction.  I could point to several real-life events in my childhood that I still remember and that seem promisingly characteristic of \"me\" - for example, the only time I remember my kindergarten classmates ever praising me or liking me was the time I used wooden blocks to build a complicated track that they could \"ski\" along.  Making something clever = peer approval, says this memory.

\n

But because this was a one-off event, I doubt it would have quite as much influence as messages repeated over and over, through many different TV shows with similar themes, or many different books written by science-fiction authors who influenced one another.  I couldn't recite the plot of even a single episode of He-Man, but I have some memory of what the opening theme song was, because it was recurring.  That's the power of a fictional corpus, relative to any single moment of real life no matter how significant it seems - fictions can repeat the same message over and over.

\n

My childhood universe was very much a universe of books.  The nonfiction I read (like the Childcraft books) might have been formative in a sense - but factual beliefs you really can recheck and redo.  Hence my citation of fiction as a lingering influence on values and personality.

" } }, { "_id": "2om7AHEHtbogJmT5s", "title": "About Less Wrong", "pageUrl": "https://www.lesswrong.com/posts/2om7AHEHtbogJmT5s/about-less-wrong", "postedAt": "2009-02-23T23:30:48.747Z", "baseScore": 57, "voteCount": 57, "commentCount": 3, "url": null, "contents": { "documentId": "2om7AHEHtbogJmT5s", "html": "

Edit: This post refers to the original version of LessWrong, which ran between February 2009 and March 2018. The About page referring to the period from March 2018 to the present can be found here.

\n

 

\n

Over the last decades, new experiments have changed science's picture of the way we think - the ways we succeed or fail to obtain the truth, or fulfill our goals. The heuristics and biases program, in cognitive psychology, has exposed dozens of major flaws in human reasoning. Social psychology shows how we succeed or fail in groups. Probability theory and decision theory have given us new mathematical foundations for understanding minds.

\n

Less Wrong is devoted to refining the art of human rationality - the art of thinking. The new math and science deserves to be applied to our daily lives, and heard in our public voices.

\n

Less Wrong consists of three areas: The main community blog, the Less Wrong wiki and the Less Wrong discussion area.

\n

Less Wrong is a partially moderated community blog that allows general authors to contribute posts as well as comments. Users vote posts and comments up and down (with code based on Reddit's open source). \"Promoted\" posts (appearing on the front page) are chosen by the editors on the basis of substantive new content, clear argument, good writing, popularity, and importance.

\n

We suggest submitting links with a short description. Recommended books should have longer descriptions. Links will not be promoted unless they are truly excellent - the \"promoted\" posts are intended as a filtered stream for the casual/busy reader.

\n

The Less Wrong discussion area is for topics not yet ready or not suitable for normal top level posts. To post a new discussion, select \"Post to: Less Wrong Discussion\" from the Create new article page. Comment on discussion posts as you would elsewhere on the site.

\n

Votes on posts are worth ±10 points on the main site and ±1 point in the discussion area. Votes on comments are worth ±1 point. Users with sufficient karma can publish posts. You need 20+ points to post to the main area and 2+ points to post to the discussion area. You can only down vote up to four times your current karma (thus if you never comment, you cannot downvote). Comments voted to -3 or lower will be collapsed by default for most readers (if you log in, you can change this setting in your Preferences). Please keep this in mind before writing long, thoughtful, intelligent responses to trolls: most readers will never see your work, and your effort may be better spent elsewhere, in more visible threads. Similarly, if many of your comments are heavily downvoted, please take the hint and change your approach, or choose a different venue for your comments. (Failure to take the hint may lead to moderators deleting future comments.) Spam comments will be deleted immediately. Off-topic top-level posts may be removed.
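For readers who prefer the rules in executable form, here is a minimal sketch of the scoring and threshold logic just described.  It is not the site's actual code (which is a Reddit fork, described below); the function names and structure are invented purely for illustration.

```python
# Toy model of the voting rules described above (illustration only).
COLLAPSE_THRESHOLD = -3            # comments at -3 or lower are collapsed by default
MIN_KARMA = {'main': 20, 'discussion': 2}

def vote_weight(kind: str, area: str) -> int:
    """Karma value of a single up- or downvote."""
    if kind == 'post' and area == 'main':
        return 10                  # votes on main-site posts are worth +/-10
    return 1                       # discussion-area posts and all comments: +/-1

def can_post(karma: int, area: str) -> bool:
    """Posting needs 20+ karma on the main site, 2+ in the discussion area."""
    return karma >= MIN_KARMA[area]

def can_downvote(karma: int, downvotes_cast: int) -> bool:
    """At most four downvotes per point of current karma, so a user
    with zero karma cannot downvote at all."""
    return downvotes_cast < 4 * karma

def is_collapsed(comment_score: int) -> bool:
    return comment_score <= COLLAPSE_THRESHOLD
```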

\n

We reserve the right for moderators to change contributed posts or comments to fix HTML problems or other misfeatures. Moderators may add or remove tags.

\n

Less Wrong is brought to you by the Future of Humanity Institute at Oxford University. Neither FHI nor Oxford University necessarily endorses any specific views appearing anywhere on Less Wrong. Copyright is retained by each author, but we reserve the non-exclusive right to move, archive, or otherwise reprint posts and comments.

\n

Sample posts:

Politics is the Mind-Killer | Policy Tug-O-War | Are Your Enemies Innately Evil?
The Affect Heuristic | We Change Our Minds Less Often Than We Think | Conjunction Controversy
Feeling Rational | Knowing About Biases Can Hurt You | Newcomb's Problem
Probability is in the Mind | Absence of Evidence is Evidence of Absence | Einstein's Arrogance
The Bottom Line | Making Beliefs Pay Rent | Tsuyoku Naritai
\n

To read through Less Wrong systematically, use the Sequences.

\n

Less Wrong was established as a sister site to the blog Overcoming Bias, where Eliezer Yudkowsky originally began blogging under the chief editorship of Robin Hanson, and contains many old posts imported from OB.

\n

To some extent, the software of LW is still under development. The code that runs this site is forked from the open source Reddit base, and is hosted on Github. Our public issue tracker is hosted at Google Code. Contributions of code or issues are welcome and volunteer Python developers are cordially solicited (see this post). More information on how to contribute is available at the LW Github wiki.

\n

Less Wrong is hosted and maintained by Trike Apps.

" } }, { "_id": "nYEcgJe8LyqB4qqCL", "title": "On Not Having an Advance Abyssal Plan", "pageUrl": "https://www.lesswrong.com/posts/nYEcgJe8LyqB4qqCL/on-not-having-an-advance-abyssal-plan", "postedAt": "2009-02-23T20:20:31.000Z", "baseScore": 31, "voteCount": 25, "commentCount": 43, "url": null, "contents": { "documentId": "nYEcgJe8LyqB4qqCL", "html": "

"Even though he could foresee the problem then, we can see it equally well now.  Therefore, if he could foresee the solution then, we should be able to see it now.  After all, Seldon was not a magician.  There are no trick methods of escaping a dilemma that he can see and we can't."
        -- Salvor Hardin

Years ago at the Singularity Institute, the Board was entertaining a proposal to expand somewhat.  I wasn't sure our funding was able to support the expansion, so I insisted that - if we started running out of money - we decide in advance who got fired and what got shut down, in what order.

Even over the electronic aether, you could hear the uncomfortable silence.

"Why can't we decide that at the time, if the worst happens?" they said, or something along those lines.

"For the same reason that when you're buying a stock you think will go up, you decide how far it has to decline before it means you were wrong," I said, or something along those lines; this being far back enough in time that I would still have used stock-trading in a rationality example.  "If we can make that decision during a crisis, we ought to be able to make it now.  And if I can't trust that we can make this decision in a crisis, I can't trust this to go forward."

People are really, really reluctant to plan in advance for the abyss.  But what good reason is there not to?  How can you be worse off from knowing in advance what you'll do in the worse cases?

I have been trying fairly hard to keep my mouth shut about the current economic crisis.  But still -

Why didn't various governments create and publish a plan for what they would do in the event of various forms of financial collapse, before it actually happened?\n

Never mind hindsight on the real-estate bubble - there are lots of things that could potentially trigger financial catastrophes.  I'm willing to bet the American government knows what it will do in terms of immediate rescue operations if an atomic bomb goes off in San Francisco.  But if the US government had any advance idea of the circumstances under which it would nationalize Fannie Mae or guarantee Bear Stearns's counterparties, this plan was not much in evidence as various government officials gave every appearance of trying to figure everything out on the fly.

A published, believable advance plan for the worst case - one that you could actually believe the government would carry out, instead of junking the plan to try to keep the top spinning a little longer - would have made the markets that much less uncertain.

If you don't publish a plan for catastrophe - or can't publish a believable plan - then the market just tries to guess what a realistic, believable plan would look like.  If that realistic, believable plan involves frantically attempting to bail out the large financial entities in order to keep the whole system from melting down further, you have moral hazard.  If they actually do it, that's lemon socialism (privatized upside, public downside).

If that's what happens in the abyssal case - then not publishing that fact, doesn't prevent anyone from foreseeing it.  If you publish that plan, maybe it will start a debate about whether to break up Bear Stearns into smaller entities, or change the plan to give counterparties a predictable 10% haircut, or claw back executive bonuses no matter what their contracts read (because you really aren't supposed to screw up so badly that the government has to get involved)...

But if you can't publish a realistic, believable advance abyssal plan that doesn't call for rescuing the huge entities - then who are you even kidding?

Governmental agencies failing to stare into the abyss in advance gives us a double problem: moral hazard as counterparties and investors try to guess what the government will realistically do; and fear and uncertainty in the market when the worst does happen.

It's questionable whether the government should be in the position of trying to forecast the abyss - to put a probability on financial meltdown in any given year due to any given cause.  But advance abyssal planning isn't about the probability, as it would be in investing.  It's about the possibility.  If you can realistically imagine global financial meltdowns of various types being possible, there's no excuse for not war-gaming them.  If your brain doesn't literally cease to exist upon facing systemic meltdowns at the time, you ought to be able to imagine plausible systemic meltdowns in advance.

Sure, you might have to make some modifications on-the-fly because you didn't get the exact causes and circumstances right.  But it shouldn't be obvious and predictable that the modifications will consist of "Oh dear it's more awful than we planned for and the systemic hazard is worse and now we really do have to bail out everyone even though we said we wouldn't."  Then the plan is not believable.

So long as the plan is not wrong in the stupidly obvious directions, it's hard to see how we'd be worse off if the governors of the Federal Reserve had taken a week once per year to play through scenarios more nightmarish than this one in their minds, deciding in advance what to do about it, realistically.

I suppose the main argument against publishing the plan would be that the uninformed public (i.e. Congress) would revolt against the emergency plans, demanding that unbelievable plans be substituted (let the banks burn! don't bail out GM!) and then changing their tune as soon as the worst actually happened.

But the Federal Reserve could at least have privately visualized all sorts of hideous possibilities in advance, war-gamed them with the Board of Governors, and planned for them realistically - so that when the worst starts happening, you don't have everyone running around being vague and visibly unprepared and refusing to talk about what happens if things get even worse; instead you just take out folder #37-B and figure out what needs tweaking.  For the lack of that preparedness, there seems to me to be very little excuse.  The Federal Reserve should not be in the business of forecasting probabilities - they've already demonstrated that they can't, and they're not investors.  They should just be always staring into the abyss.

Of course the Federal Reserve doesn't read this blog, so far as I know.  But it's the sort of thing that doesn't require a majority vote for individuals to use in their personal lives.

" } }, { "_id": "rKGvgRiEu5qYechNT", "title": "Fairness vs. Goodness", "pageUrl": "https://www.lesswrong.com/posts/rKGvgRiEu5qYechNT/fairness-vs-goodness", "postedAt": "2009-02-22T20:22:00.000Z", "baseScore": 15, "voteCount": 13, "commentCount": 21, "url": null, "contents": { "documentId": "rKGvgRiEu5qYechNT", "html": "

It seems that back when the Prisoner's Dilemma was still being worked out, Merrill Flood and Melvin Dresher tried a 100-fold iterated PD on two smart but unprepared subjects, Armen Alchian of UCLA and John D. Williams of RAND.

The kicker being that the payoff matrix was asymmetrical, with dual cooperation awarding JW twice as many points as AA:

(AA, JW) | JW: D | JW: C
AA: D | (0, 0.5) | (1, -1)
AA: C | (-1, 2) | (0.5, 1)

The resulting 100 iterations, with a log of comments written by both players, make for fascinating reading.

JW spots the possibilities of cooperation right away, while AA is slower to catch on.

But once AA does catch on to the possibilities of cooperation, AA goes on throwing in an occasional D... because AA thinks the natural meeting point for cooperation is a fair outcome, where both players get around the same number of total points.

JW goes on trying to enforce (C, C) - the option that maximizes total utility for both players - by punishing AA's attempts at defection.  JW's log shows comments like "He's crazy.  I'll teach him the hard way."

Meanwhile, AA's log shows comments such as "He won't share.  He'll punish me for trying!"\n

I confess that my own sympathies lie with JW, and I don't think I would have played AA's game in AA's shoes.  This would seem to indicate that I'm more of a utilitarian than a fair-i-tarian.  Life doesn't always hand you fair games, and the best we can do for each other is play them positive-sum.

Though I might have been somewhat more sympathetic to AA, if the (C, C) outcome had actually lost him points, and only (D, C) had made it possible for him to gain them back.  For example, this is also a Prisoner's Dilemma:

(AA, JW) | JW: D | JW: C
AA: D | (-2, 2) | (2, 0)
AA: C | (-5, 6) | (-1, 4)
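As a quick check on both matrices - an illustrative sketch, with the payoffs taken from the tables above and everything else invented - a few lines of Python confirm that in each game D strictly dominates C for both players, while (C, C) leaves both players better off than the (D, D) equilibrium and maximizes total utility:

```python
# Payoffs as (AA, JW) tuples, indexed by (AA's move, JW's move).
GAME_1 = {('D', 'D'): (0, 0.5), ('D', 'C'): (1, -1),
          ('C', 'D'): (-1, 2),  ('C', 'C'): (0.5, 1)}
GAME_2 = {('D', 'D'): (-2, 2),  ('D', 'C'): (2, 0),
          ('C', 'D'): (-5, 6),  ('C', 'C'): (-1, 4)}

def is_prisoners_dilemma(game):
    """True if D strictly dominates C for both players, yet (C, C) pays both
    players more than (D, D) - the defining structure of a Prisoner's Dilemma."""
    aa_prefers_d = all(game[('D', jw)][0] > game[('C', jw)][0] for jw in 'DC')
    jw_prefers_d = all(game[(aa, 'D')][1] > game[(aa, 'C')][1] for aa in 'DC')
    cc_beats_dd = all(game[('C', 'C')][i] > game[('D', 'D')][i] for i in (0, 1))
    return aa_prefers_d and jw_prefers_d and cc_beats_dd

for name, game in [('first matrix', GAME_1), ('second matrix', GAME_2)]:
    print(name,
          'is a PD:', is_prisoners_dilemma(game),
          '| total utility at (C,C):', sum(game[('C', 'C')]),
          'at (D,D):', sum(game[('D', 'D')]))
```

Both checks come out True: the first matrix totals 1.5 at (C, C) versus 0.5 at (D, D), and the second totals 3 versus 0 - which is why JW keeps trying to enforce (C, C) even though the split is lopsided.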

Theoretically, of course, utility functions are invariant up to affine transformation, so a utility's absolute sign is not meaningful.  But this is not always a good metaphor for real life.

Of course what we want in this case, societally speaking, is for JW to slip AA a bribe under the table.  That way we can maximize social utility while letting AA go on making a profit.  But if AA starts out with a negative number in (C,\nC), how much do we want AA to demand in bribes - from our global, societal perspective?

The whole affair makes for an interesting reminder of the different worldviews that people invent for themselves - seeming so natural and uniquely obvious from the inside - to make themselves the heroes of their own stories.

" } }, { "_id": "adSXR7Lnyok9ZMWcR", "title": "Rationality Quotes 27", "pageUrl": "https://www.lesswrong.com/posts/adSXR7Lnyok9ZMWcR/rationality-quotes-27", "postedAt": "2009-02-22T01:55:48.000Z", "baseScore": 9, "voteCount": 6, "commentCount": 8, "url": null, "contents": { "documentId": "adSXR7Lnyok9ZMWcR", "html": "

\"Believing this statement will make you happier.\"
        -- Ryan Lortie

\n

\"Make changes based on your strongest opportunities, not your most convenient ones.\"
        -- MegaTokyo

\n

\"The mind is a cruel, lying, unreliable bastard that can't be trusted with even an ounce of responsibility.  If you were dating the mind, all your friends would take you aside, and tell you that you can really do better, and being alone isn't all that bad, anyway.  If you hired the mind as a babysitter, you would come home to find all but one of your children in critical condition, and the remaining one crowned 'King of the Pit'.\"
        -- Lore Sjoberg

\n

\"Getting bored is a non-trivial cerebral transformation that doubtlessly took many millions of years for nature to perfect.\"
        -- Lee Corbin

\n

\"The views expressed here do not necessarily represent the unanimous view of all parts of my mind.\"
        -- Malcolm McMahon

\n

\n

\"The boundary between these two classes is more porous than I've made it sound.  I'm always running into regular dudes--construction workers, auto mechanics, taxi drivers, galoots in general--who were largely aliterate until something made it necessary for them to become readers and start actually thinking about things.  Perhaps they had to come to grips with alcoholism, perhaps they got sent to jail, or came down with a disease, or suffered a crisis in religious faith, or simply got bored.  Such people can get up to speed on particular subjects quite rapidly.  Sometimes their lack of a broad education makes them over-apt to go off on intellectual wild goose chases, but, hey, at least a wild goose chase gives you some exercise.\"
        -- Neal Stephenson, In the Beginning was the Command Line

" } }, { "_id": "i97ohcwLugt5oQvMy", "title": "Wise Pretensions v.0", "pageUrl": "https://www.lesswrong.com/posts/i97ohcwLugt5oQvMy/wise-pretensions-v-0", "postedAt": "2009-02-20T17:02:28.000Z", "baseScore": 15, "voteCount": 13, "commentCount": 23, "url": null, "contents": { "documentId": "i97ohcwLugt5oQvMy", "html": "

Followup to: Pretending to be Wise

\n

For comparison purposes, here's an essay with similar content to yesterday's \"Pretending to be Wise\", which I wrote in 2006 in a completely different style, edited down slightly (content has been deleted but not added).  Note that the 2006 concept of \"pretending to be Wise\" hasn't been narrowed down as much compared to the 2009 version; also when I wrote it, I was in more urgent need of persuasive force.

\n

I thought it would be an interesting data point to check whether this essay seems more convincing than yesterday's, following Robin's injunction \"to avoid emotion, color, flash, stories, vagueness, repetition, rambling, and even eloquence\" - this seems like rather the sort of thing he might have had in mind.

\n

And conversely the stylistic change also seems like the sort of thing Orwell might have had in mind, when Politics and the English Language compared:  \"I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.\"  Versus:  \"Objective considerations of contemporary phenomena compel the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account.\"  That would be the other side of it.

\n

At any rate, here goes Eliezer2006...

\n

I do not fit the stereotype of the Wise. I am not Gandalf, Ged, or Gandhi. I do not sit amidst my quiet garden, staring deeply into the truths engraved in a flower or a drop of dew; speaking courteously to all who come before me, and answering them gently regardless of how they speak to me.

\n

If I tried to look Wise, and succeeded, I would receive more respect from my fellows. But there would be a price.

\n

To pretend to be Wise means that you must always appear to give people the benefit of the doubt. Thus people will admire you for your courtesy. But this is not always true.

\n

To pretend to be Wise, you must always pretend that both sides have merit, and solemnly refuse to judge between them. For if you took one side or another, why then, you would no longer be one of the aloof Wise, but merely another partisan, on a level with all the other mere bickerers.

\n

As one of the Wise, you are omnipotent on the condition that you never exercise your power. Otherwise people would start thinking that you were no better than they; and they would no longer hold you in awe.

\n

\n

Ofttimes it is greatly convenient, to pretend to be Wise. When any conflict breaks out, you can sternly chide both sides, saying: \"You are equally at fault; you must learn to see each other's viewpoints. I am older and more mature, and I say to you: stop this pointless bickering, children, for you begin to annoy me. Ponder well the wisdom of having everyone get along!\" You do not need to examine the dispute, nor wonder if perhaps one side does have more merit than the other. You need not judge between two sides, and risk having your judgment turn out to be embarrassingly wrong, or risk having your judgment questioned as though you were only another ordinary mortal. Indeed you must not ask questions, you must not judge; for if you take sides, you will at once lose your reputation for being Wise, which requires that you stand forever above the fray.

\n

But truth is not handed out in equal parts before the start of a dispute.

\n

And I am not one of the Wise. Even if I wished to be, it is not within my nature. I do not hesitate to place my reputation in jeopardy to aid the side I believe is right. Even if it makes me seem but an ordinary mortal, no better than any other in the fray. The respect I have earned, I have earned by other ways than by appearing gravely solemn; and respect has no purpose but what it can accomplish. Respect is not to be hoarded, but spent. Even when those pretending to be Wise chide me, saying: \"Stop this bickering!\" - yet I will not pretend to neutrality, nor rise above the fray.

\n

For not all conflicts are balanced; indeed an exactly balanced conflict is very rare. Sometimes - indeed often - I have struck out against both sides in a dispute, saying: \"You are both wrong; here is the third way.\" But never have I told both sides of a dispute: \"It doesn't matter who started it, just end it.\" This is the path of convenience to yourself, and it comes at a cost to others; it is selfish. There are aggressors and aggressed, in wars. When some small nation is invaded by another, or is provoked endlessly, or when one nation provokes another and that other responds disproportionately; then it may prove convenient indeed to the Great Powers, to pretend that all violence is equally wrong and equally the fault of all sides, and selfishly seek a truce for this year, this election. The Great Powers have no need to take sides, when they can more easily tell the two edges of the gaping wound: \"Oh, just stop fighting, you foolish children!\" Ignoring the rottenness inside... but that is pragmatism to a Great Power, which only wishes that the boat should go unrocked, and does not truly care for the health of lesser nations.

\n

And so too with those who pretend to be Wise: who pretend that there is no aggrieved, that there is no long-term problem to be addressed, that no side is ever in the right nor another in the wrong; that there are no causes for conflicts, only fools who are not Wise and who will spontaneously strike out at each other for no reason. It only takes one to start a war. But the Wise cannot acknowledge this in any particular case, for then they would be taking sides, and they would not be above the fray, merely another combatant. They would lose the awe, in which the Wise are held, and which they most earnestly desire.

\n

I do not say that the Wise do this deliberately; but it is the constraint that settles around them, the invisible chain that governs their behavior. No doubt the Wise truly believe that the combatants are but spoiled children; for if the Wise ceased to believe this, they would have to act, and no longer appear Wise.

\n

Have you not met them? the principal who cares not which child started it? the Chair who is above the mere fray of corporate politics? the Great Power who demands only an immediate truce? the priest who says that all alike are sinners and all must repent? the boss who sternly dictates that the conflict end now? have you not met them, the Wise?

\n

To care about the public image of any virtue - whether that virtue be charity, or wisdom, or rationality itself - is to limit your performance of that virtue to what all others already understand of it. Therefore those who would walk the Way of any virtue must first relinquish all desire to appear virtuous before others, for this will limit and constrain their understanding of the virtue. To earn the respect of others is not hard, if you have a little talent, if that is the limit of your desire. But to know what is true, or to do what is right - that is far harder than convincing an audience of your wisdom. I am not Wise, and I will not be Wise, and no one can be Wise if they would follow the Way of rationality.

\n

For the eye of the Wise is blinded, and it may sometimes miss the gaping obvious.

\n

I prefer yesterday's post (which is why I wrote it).  But I also suspect that yesterday's post is more persuasive, signaling more maturity and deliberately avoiding flashes of eloquence that might provoke skepticism here... while containing pretty much exactly the same message-payload.

\n

On the other hand, this version seems easier to read, and you might find it more persuasive if you had just encountered it on the Net - if you weren't used to a different style from me.

\n

(Nowadays if I have something to say that sounds suspiciously eloquent, I'll create a character and have them say it in dialogue or fiction; this lets me have my cake and eat it too.  Though - important disclaimer - many of my characters are also there to say eloquent things that I disagree with, cf. the Superhappies.)

\n

What think you?  Criticism can be addressed to me personally, I guess; Eliezer2006 is still someone who I'd talk about as \"me\".

" } }, { "_id": "jeyvzALDbjdjjv5RW", "title": "Pretending to be Wise", "pageUrl": "https://www.lesswrong.com/posts/jeyvzALDbjdjjv5RW/pretending-to-be-wise", "postedAt": "2009-02-19T22:30:22.000Z", "baseScore": 175, "voteCount": 163, "commentCount": 77, "url": null, "contents": { "documentId": "jeyvzALDbjdjjv5RW", "html": "

The hottest place in Hell is reserved for those who in time of crisis remain neutral.

Dante Alighieri, famous hell expert

—John F. Kennedy, misquoter

 

Belief is quantitative, and just as it is possible to make overconfident assertions relative to one’s anticipations, it is possible to make underconfident assertions relative to one’s anticipations. One can wear the attire of uncertainty, or profess an agnosticism that isn’t really there. Here, I'll single out a special case of improper uncertainty: the display of neutrality or suspended judgment in order to signal maturity, impartiality, or a superior vantage point.

An example would be the case of my parents, who respond to theological questions like “Why does ancient Egypt, which had good records on many other matters, lack any records of Jews having ever been there?” with “Oh, when I was your age, I also used to ask that sort of question, but now I’ve grown out of it.”

Another example would be the principal who, faced with two children who were caught fighting on the playground, sternly says: “It doesn’t matter who started the fight, it only matters who ends it.” Of course it matters who started the fight. The principal may not have access to good information about this critical fact, but if so, the principal should say so, not dismiss the importance of who threw the first punch. Let a parent try punching the principal, and we’ll see how far “It doesn’t matter who started it” gets in front of a judge. But to adults it is just inconvenient that children fight, and it matters not at all to their convenience which child started it. It is only convenient that the fight end as rapidly as possible.

A similar dynamic, I believe, governs the occasions in international diplomacy where Great Powers sternly tell smaller groups to stop that fighting right now. It doesn’t matter to the Great Power who started it—who provoked, or who responded disproportionately to provocation—because the Great Power’s ongoing inconvenience is only a function of the ongoing conflict. Oh, can’t Israel and Hamas just get along?

This I call “pretending to be Wise.” Of course there are many ways to try and signal wisdom. But trying to signal wisdom by refusing to make guesses—refusing to sum up evidence—refusing to pass judgment—refusing to take sides—staying above the fray and looking down with a lofty and condescending gaze—which is to say, signaling wisdom by saying and doing nothing—well, that I find particularly pretentious.

Paulo Freire said, “Washing one’s hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral.”1 A playground is a great place to be a bully, and a terrible place to be a victim, if the teachers don’t care who started it. And likewise in international politics: A world where the Great Powers refuse to take sides and only demand immediate truces is a great world for aggressors and a terrible place for the aggressed. But, of course, it is a very convenient world in which to be a Great Power or a school principal. So part of this behavior can be chalked up to sheer selfishness on the part of the Wise.

But part of it also has to do with signaling a superior vantage point. After all—what would the other adults think of a principal who actually seemed to be taking sides in a fight between mere children? Why, it would lower the principal’s status to a mere participant in the fray!

Similarly with the revered elder—who might be a CEO, a prestigious academic, or a founder of a mailing list—whose reputation for fairness depends on their refusal to pass judgment themselves, when others are choosing sides. Sides appeal to them for support, but almost always in vain; for the Wise are revered judges on the condition that they almost never actually judge— then they would just be another disputant in the fray, no better than any mere arguer.2

There are cases where it is rational to suspend judgment, where people leap to judgment only because of their biases. As Michael Rooney said:

The error here is similar to one I see all the time in beginning philosophy students: when confronted with reasons to be skeptics, they instead become relativists. That is, when the rational conclusion is to suspend judgment about an issue, all too many people instead conclude that any judgment is as plausible as any other.

But then how can we avoid the (related but distinct) pseudo-rationalist behavior of signaling your unbiased impartiality by falsely claiming that the current balance of evidence is neutral? “Oh, well, of course you have a lot of passionate Darwinists out there, but I think the evidence we have doesn’t really enable us to make a definite endorsement of natural selection over intelligent design.”

On this point I’d advise remembering that neutrality is a definite judgment. It is not staying above anything. It is putting forth the definite and particular position that the balance of evidence in a particular case licenses only one summation, which happens to be neutral. This belief, too, must pay rent in anticipated experiences, and it can be wrong; propounding neutrality is just as attackable as propounding any particular side.

Likewise with policy questions. If someone says that both pro-life and pro-choice sides have good points and that they really should try to compromise and respect each other more, they are not taking a position above the two standard sides in the abortion debate. They are putting forth a definite judgment, every bit as particular as saying “pro-life!” or “pro-choice!”

It may be useful to initially avoid using issues like abortion or the Israeli-Palestinian conflict for your rationality practice, and so build up skill on less emotionally charged topics. But it’s not that a rationalist is too mature to talk about politics. It’s not that a rationalist is above this foolish fray in which only mere political partisans and youthful enthusiasts would stoop to participate.

As Robin Hanson describes it, the ability to have potentially divisive conversations is a limited resource. If you can think of ways to pull the rope sideways, you are justified in expending your limited resources on relatively less common issues where marginal discussion offers relatively higher marginal payoffs.3

But then the responsibilities that you deprioritize are a matter of your limited resources. Not a matter of floating high above, serene and Wise.

In sum, there’s a difference between:


1 Paulo Freire, The Politics of Education: Culture, Power, and Liberation (Greenwood Publishing Group, 1985), 122.

2 Oddly, judges in the actual legal system can repeatedly hand down real verdicts without automatically losing their reputation for impartiality. Maybe because of the understood norm that they have to judge, that it’s their job. Or maybe because judges don’t have to repeatedly rule on issues that have split a tribe on which they depend for their reverence.

3 See Hanson, “Policy Tug-O-War” (http://www.overcomingbias.com/2007/05/policy_tugowar.html) and “Beware Value Talk” (http://www.overcomingbias.com/2009/02/the-cost-of-talking-values.html).

" } }, { "_id": "rM7hcz67N7WtwGGjq", "title": "Against Maturity", "pageUrl": "https://www.lesswrong.com/posts/rM7hcz67N7WtwGGjq/against-maturity", "postedAt": "2009-02-18T23:34:51.000Z", "baseScore": 67, "voteCount": 55, "commentCount": 57, "url": null, "contents": { "documentId": "rM7hcz67N7WtwGGjq", "html": "

I remember the moment of my first break with Judaism.  It was in kindergarten, when I was being forced to memorize and recite my first prayer.  It was in Hebrew.  We were given a transliteration, but not a translation.  I asked what the prayer meant.  I was told that I didn't need to know - so long as I prayed in Hebrew, it would work even if I didn't understand the words.  (Any resemblance to follies inveighed against in my writings is not coincidental.)

Of course I didn't accept this, since it was blatantly stupid, and I figured that God had to be at least as smart as I was.  So when I got home, I asked my parents, and they didn't bother arguing with me.  They just said, "You're too young to argue with; we're older and wiser; adults know best; you'll understand when you're older."

They were right about that last part, anyway.

\n

Of course there were plenty of places my parents really did know better, even in the realms of abstract reasoning.  They were doctorate-bearing folks and not stupid.  I remember, at age nine or something silly like that, showing my father a diagram full of filled circles and trying to convince him that the indeterminacy of particle collisions was because they had a fourth-dimensional cross-section and they were bumping or failing to bump in the fourth dimension.

My father shot me down flat.  (Without making the slightest effort to humor me or encourage me.  This seems to have worked out just fine.  He did buy me books, though.)

But he didn't just say, "You'll understand when you're older."  He said that physics was math and couldn't even be talked about\nwithout math.  He talked about how everyone he met tried to invent\ntheir own theory of physics and how annoying this was.  He may even\nhave talked about the futility of "providing a mechanism", though I'm\nnot actually sure if I originally got that off him or Baez.

You see the pattern developing here.  "Adulthood" was what my parents appealed to when they couldn't verbalize any object-level justification.  They had doctorates and were smart; if there was a good reason, they usually would at least try to explain it to me.  And it gets worse...
\n

\n

The most fearsome damage wreaked upon my parents by their concept of "adulthood", was the idea that being "adult" meant that you were finished - that "maturity" marked the place where you declared yourself done, needing to go no further.

This was displayed most clearly in the matter of religion, where I would try to talk about a question I had, and my parents would smile and say:  "Only children ask questions like that; when you're adult, you know that it's pointless to argue about it."  They actually said that outright!  To ask questions was a manifestation of earnest, childish enthusiasm, earning a smile and a pat on the head.  An adult knew better than to waste effort on pointless things.

We never really know our parents; we only know the face of our parents that they turn to us, their children.  I don't know if my parents ever thought about the child-adult dichotomy when they weren't talking to me.

But this is what I think my parents were thinking:  If they had tried to answer a question as children, and then given up as adults - a quite common pattern in their religious decay - they labeled "mature" the place and act of giving up, by way of consolation.  They'd asked the question as children and stopped asking as adults - and the story they told themselves about that was that only children asked that question, and now they had succeeded into the sage maturity of knowing not to argue.

\n

To this very day, I constantly remind myself that, no matter\nwhat I do in this world, I will doubtlessly be considered an infant by\nthe standards of future intergalactic civilization, and so there is no\npoint in pretending to be a grown-up.  I try to maintain a mental\npicture of myself as someone who is not mature, so that I can go on maturing.

And more...

From my parents I learned the observational lesson that "adulthood" was something sort\nof like "peer acceptance", that is, its pursuit made you do stupid things that\nyou wouldn't have done if you were just trying to get it right.

At\nthat age I couldn't have given you a very good definition of "right"\noutside the realm of pure epistemic accuracy -

-\nbut I understood the concept of asking the wrong question.  "Does this\nhelp people?"  "Will this make anyone happy?"  "Is this belief true?" \nThose were the sorts of questions to ask, not, "Is this the adult thing\nto do?"

So I did not divide up the universe into the childish way\nversus the adult way, nor ever tell myself that I had completed anything by getting older, nor congratulate myself on having stopped being a child.  Instead I\nlearned that there were various stereotypes and traps that could take\npeople's attention off the important questions, and instead make them\ntry to match certain unimportant concepts that existed in their minds. \nOne of these attractor-traps was called "teenager", and one of these attractor-traps was\ncalled "adult", and both were to be avoided.

I've previously touched on the massive effect on my youthful psychology of reading a book of advice to parents with teenagers, years before I was actually a teenager; I took one look at the description of the stupid things teenagers did, and said to myself with quiet revulsion, "I'm not going to do that"; and then I actually didn't.  I never drank and drove, never drank, never tried a single drug, never lost control to hormones, never paid any attention to peer pressure, and never once thought my parents didn't love me.  In a safer world, I would have wished for my parents to have hidden that book better...

...but I had a picture in my mind of what it meant to be a "teenager"; and I determined to avoid it; and I did.

Of course there are a lot of children in this world who don't like being "children" and who try to appear as "adult" or as "mature" as possible.  That's why they start smoking, right?  So that was also part of the picture that I had in my mind of a "stupid teenager": stupid teenagers deliberately try to be mature.

My parents had a picture in their mind of what it meant to be a "kid", which included "kids desperately want to be adult".  I presume, though I don't exactly know, that my parents had a picture of "childishness" which was formed by their own childhood and not updated.

In any case my parents were constantly trying to get me to do things by telling me about how it would make me look adult.

That was their appeal - not, "Do this because it is older and wiser," but, "Do this, because it will make you look adult."  To this day I wonder what they could have possibly been thinking.  Would a stereotypical teenager listen to their parents' advice about that sort of thing?

Not surprisingly, being constantly urged to do things because they would signal adulthood, which I had no particular desire to do, had the effect of making me strongly notice things that signaled adulthood as mere signals.

I think that a lot of difference in the individual style of\nrationalists comes down to which signaling behaviors strike us as dangerous, harmful, or, perhaps, personally annoying; and which signaling behaviors seem relatively harmless, or possibly even useful paths to accomplishment (perhaps because they don't annoy us quite so much).  Robin is willing to tolerate formality in journals, viewing it as possibly even a useful path to filtering out certain kinds of noise; to me formality seems like a strong net danger to rationality that filters nothing, just making it more difficult to read.  I'm willing to tolerate behaviors that signal idealism or caring for others, viewing it as an important pathway to real altruism later on; Robin seems to think such behaviors are relatively more harmful and that they ought to be stopped outright.

It's not that Robin is a cynic or I'm an idealist, but that I'm relatively more annoyed by cynical errors and Robin is relatively more annoyed by idealistic errors.  And fitting the same dimension, Robin seems relatively more annoyed by the errors in the style of youth, where I seem relatively more annoyed by errors in the style of maturity.

And so I take a certain dark delight in quoting anime fanfiction at people who expect me to behave like a wise sage of rationality.  Why should I pretend to be mature when practically every star in the night sky is older than I am?

" } }, { "_id": "YdXMZX5HbZTvvNy84", "title": "Good Idealistic Books are Rare", "pageUrl": "https://www.lesswrong.com/posts/YdXMZX5HbZTvvNy84/good-idealistic-books-are-rare", "postedAt": "2009-02-17T18:41:21.000Z", "baseScore": 15, "voteCount": 11, "commentCount": 22, "url": null, "contents": { "documentId": "YdXMZX5HbZTvvNy84", "html": "

Saith Robin in "Seeking a Cynic's Library":

Cynicism and Idealism are a classic yin and yang, a contradictory pair\nwhere we all seem to need both sides...

Books on education, medicine, government, charity, religion,\ntechnology, travel, relationships, etc. mostly present relatively\nidealistic views, though of course no view is entirely one way or the\nother.  So one reason the young tend to be idealistic is that most reading material they can easily find and understand is idealistic. 

My impression of this differs somewhat from Robin's (what a surprise).

I think that what we see in most books of the class Robin describes, are official views.  These official views may leave out many unpleasant elements of the story.  But because officialism also tries to signal authority and maturity, it's hardly likely to permit itself any real hope or enthusiasm.  Perhaps an obligatory if formal nod in the direction of some popular good cause, because this is expected of officialdom.  But this is hardly an idealistic voice.

What does a full-blown nonfictional idealism look like?  Some examples that I remember from my own youth:

\n

Supposing you wanted your child to grow up an idealist - what\nnonfiction books like these could you find to give them?  I don't find\nit easy to think of many - most nonfiction books are not like this.

On the other hand, I suspect that idealistic fiction aimed specifically at children is a far greater cultural force than anything they pick up from their school textbooks.  Textbooks are marketed to adult textbook-selectors; juvenile fiction and children's television are actually aimed at children.

Of course this just implies a chicken-and-egg problem; why do children\nenjoy idealism more than cynicism?  On this score I would suggest that children in the hunter-gatherer EEA have no chance of successful rebellion, and so haven't yet developed certain emotions that will come\ninto play after puberty.  But children will be actively engaged in absorbing tribal mores during their maturation, and may benefit from signaling such absorption.

Teenagers who act as if they could still get together with their friends and split off to form their own tribe, enjoy cynicism aimed at current authority figures and idealism aimed at their new tribe.

When such forces have petered out, I suggest we are left with a mostly socially-determined adult equilibrium: idealism about some distant subjects is used to signal virtue, cynicism about other distant subjects is used to signal sophistication.

" } }, { "_id": "R2crzhnqrytPycc6C", "title": "Cynical About Cynicism", "pageUrl": "https://www.lesswrong.com/posts/R2crzhnqrytPycc6C/cynical-about-cynicism", "postedAt": "2009-02-17T00:49:26.000Z", "baseScore": 72, "voteCount": 45, "commentCount": 27, "url": null, "contents": { "documentId": "R2crzhnqrytPycc6C", "html": "

I'm cynical about cynicism.  I don't believe that most cynicism is really about knowing better.  When I see someone being cynical, my first thought is that they're trying to show off their sophistication and assert superiority over the naive.  As opposed to, say, sharing their uncommon insight about not-widely-understood flaws in human nature.

There are two obvious exceptions to this rule.  One is if the speaker has something serious and realistic to say about how to improve matters.  Claiming that problems can be fixed will instantly lose you all your world-weary street cred and mark you as another starry-eyed idealistic fool.  (Conversely, any "solution" that manages not to disrupt the general atmosphere of doom, does not make me less skeptical:  "Humans are evil creatures who slaughter and destroy, but eventually we'll die out from poisoning the environment, so it's all to the good, really.")

No, not every problem is solvable.  But by and large, if someone\nachieves uncommon insight into darkness - if they know more than I do\nabout human nature and its flaws - then it's not unreasonable to expect\nthat they might have a suggestion or two to make about remedy,\npatching, or minor deflection.  If, you know, the problem is\none that they really would prefer solved, rather than gloom being milked for a feeling of superiority to the naive herd.

The other obvious exception is for science that has something to say about human nature.  A testable hypothesis is a testable hypothesis and the thing to do with it is test it.  Though here one must be very careful not to go beyond the letter of the experiment for the sake of signaling hard-headed realism:\n

\n

Consider the hash that some people make of evolutionary psychology in trying to be cynical - assuming that humans have a subconscious motive to promote their inclusive genetic fitness.  Consider the hash that some neuroscientists make of the results of their brain scans, supposing that if a brain potential is visible before the moment of reported decision, this proves the nonexistence of free will:  It's not you who chooses, it's your brain!

The facts are one thing, but feeling cynical about those facts is another matter entirely.  In some cases it can lead people to overrun the facts - to construct new, unproven, or even outright disproven glooms in the name of signaling realism.  Behaviorism probably had this problem - signaling hard-headed realism about human nature was probably one of the reasons behaviorists asserted we don't have minds.
\n

\n

I'm especially on guard against cynicism because it seems to be a\nstandard corruption of rationality in particular.  If many people are\noptimists, then true rationalists will occasionally have to say things\nthat sound pessimistic by contrast.  If people are trying to signal\nvirtue through their beliefs, then a rationalist may have to advocate\ncontrasting beliefs that don't signal virtue.

Which in turn means that rationalists, and especially apprentice rationalists watching other rationalists at work, are especially at risk of absorbing cynicism as though it were a virtue in its own right - assuming that whosoever speaks of ulterior motives is probably a wise rationalist with uncommon insight; or believing that it is an entitled benefit of realism to feel superior to the naive herd that still has a shred of hope.

And this is a fearsome mistake indeed, because you can't propose ways to meliorate problems and still come off as world-weary.

TV Tropes proposes a Sliding Scale of Idealism Versus Cynicism.  It looks to me like Robin tends to focus his suspicions on that which might be signaling idealism, virtue, or righteousness; while I tend to focus my own skepticism on that which might signal cynicism, world-weary sophistication, or sage maturity.

" } }, { "_id": "yyteh3qwD6kjhLiCb", "title": "An African Folktale", "pageUrl": "https://www.lesswrong.com/posts/yyteh3qwD6kjhLiCb/an-african-folktale", "postedAt": "2009-02-16T01:00:51.000Z", "baseScore": 28, "voteCount": 28, "commentCount": 62, "url": null, "contents": { "documentId": "yyteh3qwD6kjhLiCb", "html": "

This is a folktale of the Hausa, a farming culture of around 30 million people, located primarily in Nigeria and Niger but with other communities scattered around Africa.  I find the different cultural assumptions revealed to be... attention-catching; you wouldn't find a tale like this in Aesop.  From Hausa Tales and Traditions by Frank Edgar and Neil Skinner; HT Robert Greene.

\n

The Farmer, the Snake and the Heron

\n

    There was once a man hoeing away on his farm, when along came some people chasing a snake, meaning to kill it.  And the snake came up to the farmer.
    Says the snake \"Farmer, please hide me.\"  \"Where shall I hide you?\" said the farmer, and the snake said \"All I ask is that you save my life.\"  The farmer couldn't think where to put the snake, and at last bent down and opened his anus, and the snake entered.
    Presently the snake's pursuers arrived and said to the farmer \"Hey, there!  Where's the snake we were chasing and intend to kill?  As we followed him, he came in your direction.\"  Says the farmer \"I haven't seen him.\"  And the people went back again.
    Then the farmer said to the snake \"Righto - come out now.  They've gone.\"  \"Oh no\" said the snake, \"I've got me a home.\"  And there was the farmer, with his stomach all swollen, for all the world like a pregnant woman!

\n

\n

    And the farmer set off, and presently, as he passed, he saw a heron.  They put their heads together and whispered, and the heron said to the farmer \"Go and excrete.  Then, when you have finished, move forward a little, but don't get up.  Stay squatting, with your head down and your buttocks up, until I come.\"
    So the man went off and did just exactly as the heron had told him, everything.  And the snake put out his head and began to catch flies.  Then the heron struck and seized the snake's head.  Then he pulled and he pulled until he had got him out, and the man tumbled over.  And the heron finished off the snake with his beak.
    The man rose and went over to the heron.  \"Heron\" says he, \"You've got the snake out for me, now please give me some medicine to drink, for the poison where he was lying.\"
    Says the heron \"Go and find yourself some white fowls, six of them.  Cook them and eat them - they are the medicine.\"  \"Oho\" said the man, \"White fowl?  But that's you\" and he grabbed the heron and tied it up and went off home.  There he took him into a hut and hung him up, the heron lamenting the while.
    Then the man's wife said \"Oh, husband!  The bird did you a kindness.  He saved your life, by getting the trouble out of your stomach for you.  And now you seize him and say that you are going to slaughter him!\"
    So the man's wife loosed the heron, but as he was going out, he pecked out one of her eyes.  And so passed and went his way.  That's all.  For so it always has been - if you see the dust of a fight rising, you will know that a kindness is being repaid!  That's all.  The story's finished.

\n

I wonder if this has something to do with why Africa stays poor.

\n

I was slightly shocked, reading this story.  It seems to reveal a cultural gloominess deeper than its Western analogue of fashionable cynicism.  The cynical Western tale would at least have an innocent, virtuous, idealistic fool to be exploited, since our cynicism is mostly about feeling superior to those less sophisticated.  This tale has so much defection that you can't even call the characters hypocrites.  This isn't a story told to make the listener feel superior to the fools who still believe in the essential goodness of human nature.  This is a tribe whose children are being warned to expect cooperation to be met with defection the same way our own kids are told \"slow but steady wins the race\".

\n

It's occasionally debated on this blog whether the psychological unity of humankind is real and how much room it leaves for humans to be deeply different.  Someone might look at this tale and say, \"Gratitude in Africa isn't like gratitude in the West.\"  But to me it looks like the people who pass on this tale must have pretty much the same chunk of brain circuitry somewhere that implements the emotion of gratitude - that's what creates the background against which this story is told.  It wouldn't be viewed as important wisdom, otherwise.

\n

But cultural gloominess this deep may be a self-fulfilling prophecy, as powerful as if the emotion of gratitude itself had diminished, or failed.

" } }, { "_id": "3Jn4voRWzqboq2M38", "title": "Rationality Quotes 26", "pageUrl": "https://www.lesswrong.com/posts/3Jn4voRWzqboq2M38/rationality-quotes-26", "postedAt": "2009-02-14T16:14:57.000Z", "baseScore": 5, "voteCount": 7, "commentCount": 16, "url": null, "contents": { "documentId": "3Jn4voRWzqboq2M38", "html": "

\"Poets, philosophers, acidheads, salesmen: everybody wants to know, 'What is Reality?'  Some say it's a vast Unknowable so astounding and raw and naked that it grips the human mind and shakes it like a puppy shakes a rag doll.  A lot of good that does us.\"
        -- The Book of the SubGenius

\n

\"When they discovered that reality was more complicated than they thought, they just swept the complexity under a carpet of epicycles.  That is, they created unnecessary complexity.  This is an important point.  The universe is complex, but it's usefully complex.\"
        -- Larry Wall

\n

\"I can't imagine a more complete and precise answer to the question 'for what reason...?' than 'none'.  The fact that you don't like the answer is your problem, not the universe's.\"
        -- Lee Daniel Crocker

\n

\"In the end they all moved in fantasies and not in the daily tide of their seemingly useless lives.  Souls forever lost in the terrifying freedom of their existence.\"
        -- Shinji and Warhammer40k

\n

\"Thus the freer the judgement of a man is in regard to a definite issue, with so much greater necessity will the substance of this judgement be determined.\"
        -- Friedrich Engels, Anti-Dühring

\n

\n

\"There will always be some that cannot be saved.
 It is impossible to save everyone.
 If I have to lose five hundred to earn one thousand,
 I will abandon one hundred and save the lives of nine hundred.
 That is the most efficient method.
 That is the ideal----
 Kiritsugu once said that.
 Of course I got mad.
 I really got mad.
 Because I knew that without being told.
 Because I myself was saved like that.
 I don't even need to be told something as obvious as that.
 But still----I believed that someone would be a superhero if they saved everyone even though they think like that.
 It may be an idealistic thought or an impossible pipe dream, but a superhero is someone who tries to save everyone in spite of that.\"
        -- Emiya Shirou, in Fate/stay night
           (visual novel by Kinoko Nasu; Unlimited Blade Works path, Mirror Moon translation)

" } }, { "_id": "J4vdsSKB7LzAvaAMB", "title": "An Especially Elegant Evpsych Experiment", "pageUrl": "https://www.lesswrong.com/posts/J4vdsSKB7LzAvaAMB/an-especially-elegant-evpsych-experiment", "postedAt": "2009-02-13T14:58:18.000Z", "baseScore": 79, "voteCount": 67, "commentCount": 41, "url": null, "contents": { "documentId": "J4vdsSKB7LzAvaAMB", "html": "
In a 1989 Canadian study, adults were asked to imagine the death of children of various ages and estimate which deaths would create the greatest sense of loss in a parent. The results, plotted on a graph, show grief growing until just before adolescence and then beginning to drop. When this curve was compared with a curve showing changes in reproductive potential over the life cycle (a pattern calculated from Canadian demographic data), the correlation was fairly strong. But much stronger - nearly perfect, in fact - was the correlation between the grief curves of these modern Canadians and the reproductive-potential curve of a hunter-gatherer people, the !Kung of Africa. In other words, the pattern of changing grief was almost exactly what a Darwinian would predict, given demographic realities in the ancestral environment... The first correlation was .64, the second an extremely high .92.

(Robert Wright, summarizing: "Human Grief: Is Its Intensity Related to the Reproductive Value of the Deceased?" Crawford, C. B., Salter, B. E., and Lang, K.L. Ethology and Sociobiology 10:297-307.)

Disclaimer: I haven't read this paper because it (a) isn't online and (b) is not specifically relevant to my actual real job. But going on the given description, it seems like a reasonably awesome experiment. [Gated version here, thanks Benja Fallenstein. Odd, I thought I searched for that. Reading now... seems to check out on the basics. Correlations are as described, N=221.]

The most obvious inelegance of this study, as described, is that it was conducted by asking human adults to imagine parental grief, rather than asking real parents with children of particular ages. (Presumably that would have cost more / allowed fewer subjects.) However, my understanding is that the results here squared well with the data from closer studies of parental grief that were looking for other correlations (i.e., a raw correlation between parental grief and child age).

That said, consider some of this experiment's elegant aspects:

• The parental grief is not even subconsciously about reproductive value - otherwise it would update for Canadian reproductive value instead of !Kung reproductive value.  Grief is an adaptation that now simply exists, real in the mind and continuing under its own inertia.

Parents do not care about children for the sake of their reproductive contribution. Parents care about children for their own sake; and the non-cognitive, evolutionary-historical reason why such minds exist in the universe in the first place, is that children carry their parents' genes.

Indeed, evolution is the reason why there are any minds in the universe at all. So you can see why I'd want to draw a sharp line through my cynicism about ulterior motives at the evolutionary-cognitive boundary; otherwise, I might as well stand up in a supermarket checkout line and say, "Hey! You're only correctly processing visual information while bagging my groceries in order to maximize your inclusive genetic fitness!"

• 0.92 is, I think, the highest correlation I've ever seen in any ev-psych experiment, and indeed, one of the highest correlations I've seen in any psychology experiment. (Albeit I've seen e.g. a correlation of 0.98 reported for asking one group of subjects "How similar is A to B?" and another group "What is the probability of A given B?" on questions like "How likely are you to draw 60 red balls and 40 white balls from this barrel of 800 red balls and 200 white balls?" - in other words, these are simply processed as the same question.)

Since we are all Bayesians here, we may take our priors into account and ask if at least some of this unexpectedly high correlation is due to luck.  The evolutionary fine-tuning we can probably take for granted; this is a huge selection pressure we're talking about.  The remaining sources of suspiciously low variance are (a) whether a large group of adults could correctly envision, on average, relative degrees of parental grief (apparently they can), and (b) whether the surviving !Kung are typical ancestral hunter-gatherers in this dimension, or whether variance between hunter-gatherer tribal types should have been too high to allow a correlation of .92.

But even after taking into account any skeptical priors, correlation .92 and N=221 is pretty strong evidence, and our posteriors should be less skeptical on all these counts.
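
For concreteness, here is a minimal back-of-the-envelope sketch (my own, not from Crawford et al. or from the post) of how much room sampling noise alone leaves around a correlation of .92 at N=221, using the standard Fisher z-transformation.  It only addresses the luck-of-the-draw component, not points (a) and (b) above.

```python
# Rough check of how tightly r = .92 with N = 221 pins down the true
# correlation, via the Fisher z-transformation.  (Editor's sketch; the only
# numbers taken from the study are r and N.)
import math

r, n = 0.92, 221            # reported correlation and sample size
z = math.atanh(r)           # Fisher z-transform of r
se = 1 / math.sqrt(n - 3)   # approximate standard error in z-space

# 95% interval in z-space, mapped back to correlation-space
lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)
print(f"approx. 95% CI for the true correlation: ({lo:.3f}, {hi:.3f})")
# -> roughly (0.90, 0.94); sampling error alone can't drag .92 anywhere
#    near the .64 found for the Canadian reproductive-value curve
```

Of course this only bounds sampling error; it says nothing about whether imagined grief or the !Kung curve were the right things to correlate in the first place, which is what (a) and (b) are about.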

• You might think it an inelegance of the experiment that it was performed prospectively on imagined grief, rather than retrospectively on real grief. But it is prospectively imagined grief that will actually operate to steer parental behavior away from losing the child! From an evolutionary standpoint, an actual dead child is a sunk cost; evolution "wants" the parent to learn from the pain, not do it again, adjust back to their hedonic set point, and go on raising other children.

• Similarly, the graph that correlates to parental grief is for the future reproductive potential of a child that has survived to a given age, and not the sunk cost of raising the child which has survived to that age. (Might we get an even higher correlation if we tried to take into account the reproductive opportunity cost of raising a child of age X to independent maturity, while discarding all sunk costs to raise a child to age X?)

Humans usually do notice sunk costs - this is presumably either an adaptation to prevent us from switching strategies too often (compensating for an overeager opportunity-noticer?) or an unfortunate spandrel of pain felt on wasting resources.

Evolution, on the other hand - it's not that evolution "doesn't care about sunk costs", but that evolution doesn't even remotely "think" that way; "evolution" is just a macrofact about the real historical reproductive consequences.

So - of course - the parental grief adaptation is fine-tuned in a way that has nothing to do with past investment in a child, and everything to do with the future reproductive consequences of losing that child. Natural selection isn't crazy about sunk costs the way we are.

But - of course - the parental grief adaptation goes on functioning as if the parent were living in a !Kung tribe rather than Canada. Most humans would notice the difference.

Humans and natural selection are insane in different stable complicated ways.

" } }, { "_id": "o8Bh82hKGpRNA2q36", "title": "The Evolutionary-Cognitive Boundary", "pageUrl": "https://www.lesswrong.com/posts/o8Bh82hKGpRNA2q36/the-evolutionary-cognitive-boundary", "postedAt": "2009-02-12T16:44:43.000Z", "baseScore": 51, "voteCount": 33, "commentCount": 29, "url": null, "contents": { "documentId": "o8Bh82hKGpRNA2q36", "html": "

I tend to draw a very sharp line between anything that happens inside a brain and anything that happened in evolutionary history.  There are good reasons for this!  Anything originally computed in a brain can be expected to be recomputed, on the fly, in response to changing circumstances.

\n

Consider, for example, the hypothesis that managers behave rudely toward subordinates \"to signal their higher status\".  This hypothesis then has two natural subdivisions:

\n

If rudeness is an executing adaptation as such - something historically linked to the fact it signaled high status, but not psychologically linked to status drives - then we might experiment and find that, say, the rudeness of high-status men to lower-status men depended on the number of desirable women watching, but that they weren't aware of this fact.  Or maybe that people are just as rude when posting completely anonymously on the Internet (or more rude; they can now indulge their adapted penchant to be rude without worrying about the now-nonexistent reputational consequences).

\n

If rudeness is a conscious or subconscious strategy to signal high status (which is itself a universal adapted desire), then we're more likely to expect the style of rudeness to be culturally variable, like clothes or jewelry; different kinds of rudeness will send different signals in different places.  People will be most likely to be rude (in the culturally indicated fashion) in front of those whom they have the greatest psychological desire to impress with their own high status.

\n

\n

When someone says, \"People do X to signal Y\", I tend to hear, \"People do X when they consciously or subconsciously expect it to signal Y\", not, \"Evolution built people to do X as an adaptation that executes given such-and-such circumstances, because in the ancestral environment, X signaled Y.\"

\n

I apologize, Robin, if this means I misunderstood you.  But I think it really is important to use different words that draw a hard boundary between the evolutionary computation and the cognitive computation - \"People are adapted to do X because it signaled Y\", versus \"People do X because they expect it to signal Y\".

\n

\"Distal cause\" and \"proximate cause\" doesn't seem good enough, when there's such a sharp boundary within the causal network about what gets computed, how it got computed, and when it will be recomputed.  Yes, we have epistemic leakage across this boundary - we can try to fill in our leftover uncertainty about psychology using evolutionary predictions - but it's epistemic leakage between two very different subjects.

\n

I've noticed that I am, in general, less cynical than Robin, and I would offer up the guess for refutation (it is dangerous to reason about other people's psychologies) that Robin doesn't draw a sharp boundary across his cynicism at the evolutionary-cognitive boundary.  When Robin asks \"Are people doing X mostly for the sake of Y?\" he seems to answer the same \"Yes\", and feel more or less the same way about that answer, whether or not the reasoning goes through an evolutionary step along the way.

\n

I would be very disturbed to learn that parents, in general, showed no grief for the loss of a child who they consciously believed to be sterile.  The actual experiment which shows that parental grief correlates strongly to the expected reproductive potential of a child of that age in a hunter-gatherer society - not the different reproductive curve in a modern society - does not disturb me.

\n

There was a point more than a decade ago when I would have seen that as a puppeteering of human emotions by evolutionary selection pressures, and hence something to be cynical about.  Yet how could parental grief come into existence at all, without a strong enough selection pressure to carve it into the genome from scratch?  All that should matter for saying \"The parent truly cares about the child\" is that the grief in the parent's mind is cognitively real and unconditional and not even subconsciously for the sake of any ulterior motive; and so it does not update for modern reproductive curves.

\n

Of course the emotional circuitry is ultimately there for evolutionary-historical reasons.  But only conscious or subconscious computations can gloom up my day; natural selection is an alien thing whose 'decisions' can't be the target of my cynicism or admiration.

\n

I suppose that is a merely moral consequence - albeit it's one that I care about quite a lot.  Cynicism does have hedonic effects.  Part of my grand agenda that I have to put forward about rationality has to do with arguing against various propositions of the form \"Rationality should make us cynical about X\" (e.g. \"physical lawfulness -> choice is a meaningless illusion\") that I happen to disagree with.  So you can see why I'm concerned about drawing the proper boundary of cynicism around evolutionary psychology (especially since I think the proper boundary is a sharp full stop).

\n

But the same boundary also has major consequences for what we can expect people to recompute or not recompute - for the way that future behaviors will change as the environment changes.  So once again, I advocate for language that separates out evolutionary causes and clearly labels them, especially in discussions of signaling.  It has major effects, not just on how cynical I end up about human nature, but on what 'signaling' behaviors to expect, when.

" } }, { "_id": "KC5qGJiWSxt9zpyDy", "title": "Cynicism in Ev-Psych (and Econ?)", "pageUrl": "https://www.lesswrong.com/posts/KC5qGJiWSxt9zpyDy/cynicism-in-ev-psych-and-econ", "postedAt": "2009-02-11T15:06:44.000Z", "baseScore": 37, "voteCount": 25, "commentCount": 40, "url": null, "contents": { "documentId": "KC5qGJiWSxt9zpyDy", "html": "

Though I know more about the former than the latter, I begin to suspect that different styles of cynicism prevail in evolutionary psychology than in microeconomics.

\n

Evolutionary psychologists are absolutely and uniformly cynical about the real reason why humans are universally wired with a chunk of complex purposeful functional circuitry X (e.g. an emotion) - we have X because it increased inclusive genetic fitness in the ancestral environment, full stop.

\n

Evolutionary psychologists are mildly cynical about the environmental circumstances that activate and maintain an emotion.  For example, if you fall in love with the body, mind, and soul of some beautiful mate, an evolutionary psychologist would like to check up on you in ten years to see whether the degree to which you think your mate's mind is still beautiful, correlates with independent judges' ratings of how physically attractive that mate still is.

\n

But it wouldn't be conventionally ev-psych cynicism to suppose that you don't really love your mate, and that you were actually just attracted to their body all along, but that instead you told yourself a self-deceiving story about virtuously loving them for their mind, in order to falsely signal commitment.

\n

Robin, on the other hand, often seems to think that this general type of cynicism is the default explanation and that anything else bears a burden of proof - why suppose an explanation that invokes a genuine virtue, when a selfish desire will do?

\n

Of course my experience with having deep discussions with economists mostly consists of talking to Robin, but I suspect that this is at least partially reflective of a difference between the ev-psych and economic notions of parsimony.

\n

Ev-psychers are trying to be parsimonious with how complex of an adaptation they postulate, and how cleverly complicated they are supposing natural selection to have been.

\n

Economists... well, it's not my field, but maybe they're trying to be parsimonious by having just a few simple motives that play out in complex ways via consequentialist calculations?

\n

\n

Quoth Leda Cosmides and John Tooby (famous EPers): 

\n

\"The science of understanding living organization is very different from physics or chemistry, where parsimony makes sense as a theoretical criterion.  The study of organisms is more like reverse engineering, where one may be dealing with a large array of very different components whose heterogeneous organization is explained by the way in which they interact to produce a functional outcome.  Evolution, the constructor of living organisms, has no privileged tendency to build into designs principles of operation that are simple and general.\"

\n

One consequence of this is that it's more parsimonious - under the evolutionary prior - to postulate many smaller simpler adaptations than one big clever complicated adaptation.

\n

One simple way to signal quality X is by having quality X.  But then other simple modifications might accrete around that.

\n

So cynicism in the style of evolutionary psychology might be, \"Why yes, so far as your explicit cognition is concerned, you love them for their beautiful mind.  It's just that without the beautiful body, you probably wouldn't find yourself loving their mind so much.  And once the beautiful body fades, you may find their ideas appearing less attractive too.\"  Mind you, this is not an actual experimental result.  It's just the sort of thing that a cynical evolutionary psychologist would look for - a cross-wiring between a couple of emotional circuits.

\n

Even then, the cynic would bear a burden of proof, because a devil's-advocate parsimonious evolutionary psychologist would say, \"How do you know the extra circuit is there?  Maybe evolution wasn't that clever.  Mates looked for signals of long-term commitment, so their partners evolved an actual long-term commitment mechanism when a mate was of high enough quality - can you show me an experiment that demonstrates it's any more complicated than that?\"

\n

By and large, evolutionary psychologists don't expect people to be clever, just evolution.  It's a foundational assumption that there's no explicit cognitive desire to increase inclusive genetic fitness, and no reason to think that anyone (except a professional evolutionary psychologist) would explicitly know in advance which behaviors increased fitness in the ancestral environment.  The organism, rather than being programmed with machiavellian subconscious long-term knowledge, is programmed with (genuine) emotions that activate under the right circumstances to steer them the right way (in the ancestral environment).

\n

In economics, perhaps, it is more conventional and less alarming to suppose that people are doing explicitly clever and complicated things in the pursuit of explicit goals.  But of this it is not really my place to speak; I'm just trying to describe my own side of the contrast I see.

\n

Now it makes sense to suppose that we have certain general faculties - simple emotional circuits - that make us seek high status: that we are magnetically attracted to behaviors whenever we imagine that behavior will make others look on us fondly.

\n

And it makes sense to suppose that we have a general faculty - a relatively simple emotional circuit - that makes us flinch away from explanations and views of our own behavior that put us in a negative light, and flinch toward explanations that put ourselves in a positive light.

\n

So from an ev-psych standpoint, we can expect a lot of cynicism to be, in general, justified.  It wouldn't even be surprising if people were relatively more attracted to bodies, and relatively less attracted to minds, on a purely psychological level, than they said/thought they were.

\n

But to modify the emotional ontology by entirely deleting virtuous emotions... to say that, even on a psychological level, no human being was ever attracted to a mate's mind, nor ever wanted to be honest in a business transaction and not just signal honesty... is not quite what evolutionary psychologists do, most of the time.  They are out to come up with an evolutionary explanation of why humans have the standard emotions, rather than telling us that we have nonstandard emotions instead.  Maybe in economics this sounds less alarming because people routinely come up with simplified models?  But in ev-psych it seems to hearken back to the bad old Freudian days of counterintuitiveness - we're excited that the intuitive view of human emotion turns out to be evolutionarily explainable.  Including a lot of things that people would rather not talk about but which they do recognize as realistic.  And including a lot of phenomena that go on behind the scenes but which don't much change our view of which emotions we have, just our view of when emotions activate, in what real context.  On that score we are happy to be cynical and challenge intuition.

" } }, { "_id": "njb9cyyzqLTHewups", "title": "Informers and Persuaders", "pageUrl": "https://www.lesswrong.com/posts/njb9cyyzqLTHewups/informers-and-persuaders", "postedAt": "2009-02-10T20:22:55.000Z", "baseScore": 33, "voteCount": 23, "commentCount": 21, "url": null, "contents": { "documentId": "njb9cyyzqLTHewups", "html": "

Suppose we lived in this completely alternate universe where nothing in academia was about status, and no one had any concept of style.  A universe where people wrote journal articles, and editors approved them, without the tiniest shred of concern for what "impression" it gave - without trying to look serious or solemn or sophisticated, and without being afraid of looking silly or even stupid.  We shall even suppose that readers, correspondingly, have no such impressions.

In this simpler world, academics write papers from only two possible motives:

First, they may have some theory of which they desire to persuade others; this theory may or may not be true, and may or may not be believed for virtuous reasons or with very strong confidence, but the writer of the paper desires to gain adherents for it.

Second, there will be those who write with an utterly pure and virtuous love of the truthfinding process; they desire solely to give people more unfiltered evidence and to see evidence correctly added up, without a shred of attachment to their or anyone else's theory.

People in the first group may want to signal membership in the second group, but people in the second group only want their readers to be well-informed.  In any case, to first order we must suppose that none of this is about signaling - that all such motives are just blanked out.

What do journal articles in this world look like, and how do the Persuaders' articles differ from the Informers'?\n

First, I would argue that both groups write much less formal journal articles than our own.  I've read probably around a hundred books on writing (they're addictive); and they all treated formality as entropy to be fought - a state of disorder into which writing slides.  It is easier to use big words than small words, easier to be abstract than concrete, easier to use passive -ation words than their active counterparts.  Perhaps formality first became associated with Authority, back in the dawn of time, because Authorities put in less effort and forced their audience to read anyway.  Formality became associated with Wisdom by being hard to understand.  Why suppose that scientific formality was ever about preventing propaganda?\n

\nBoth groups still use technical language, because they both care about being precise.  They even use big words or phrases for their technical concepts:  To carve out ever-finer slices through reality, you need new words, more words, hence bigger words (so you don't run out of namespace).

However, since neither group has a care for their image, they use the simplest words they can apart from that, and sentences as easy as possible apart from the big words.  From our standpoint, their style would seem inconsistent, incongruous.  A sentence might start with small words that just anyone could read, and then terminate in an exceptionally precise structure of technically sophisticated concepts accessible only to advanced audiences.

In this world it's not just eminent physicists who - secure in their reputation as Real Scientists - invent labels like "gluon", "quark", "black hole" and "Big Bang".

Other aspects of scientific taboo may still carry over.  A Persuader might use vicious insults and character assassination.  An Informer never would.  But an Informer might point out - evenhandedly, wherever it happened to be true - that a supposedly relevant paper came from a small unheard-of organization and hadn't yet been replicated, or that the author of an exciting new paper had previously retracted other results...

If Persuaders want to look like Informers, they will, of course, restrain their ad-hominem attacks to sounding like the sort of things an Informer might point out; but this is a second-order phenomenon.  First-order Persuaders would use all-out invective against their opponents.

What about emotions in general?

Suppose that there were only Informers and that they weren't concerned about preventing invasion by Persuaders.  The Informers might well make a value-laden remark or two in the conclusions of their papers - after balancing the probability that the conclusion would later need to be retracted and that the emotion might interfere, versus the importance of the values in question.  Even an Informer might say, in the conclusion of a paper on asteroid strikes, "We can probably breathe a sigh of relief about getting hit in the immediate future, but when you consider the sheer size of the catastrophe and the millions of dead and injured, we really ought to have a spacewatch program."

But Persuaders have an immensely stronger first-order drive to invoke powerful affective emotions and lade the reader's judgments with value.  To second order, Persuaders will try to disguise this method as much as possible - let the reader draw conclusions, so long as they're the desired conclusions - try to pretend to abstract dispassionate language so that they can look like Informers, while still lingering on the emotion-arousing facts.  Formality is a very easy disguise to wear, which is one reason I give it so little credit.

Informers, who have no desire to look like Informers, might go ahead and leave in a value judgment or two that seemed really unlikely to interfere with anyone's totting up the evidence.  If Informers trusted their own judgment about that sort of thing, that is.

(Persuaders and Informers writing about policy arguments or moral arguments would be a whole 'nother class of issue.  Then both types are offering value-laden arguments and dealing in facts that trigger emotions, and the question is who's collecting them evenhandedly versus lopsidedly.)

How about writing short stories?

Persuaders obviously have a motive to do it.  Do Informers ever do it, if they're not worried about looking like Persuaders?

If you try to blank out the conventions of our own world, and imagine what would really be useful...

Then I can well imagine that it would be de rigueur to write small stories - story fragments, rather - describing the elements of your experimental procedure.

"The subjects were admininistered Progenitorivox" actually gives you very little information, just the dull sensation of having been told an authoritative fact.

Compare:  "James is one of my typical subjects.  Every Wednesday, he would visit me in my lab at 2pm, and, grimacing, swallow down two yellow pills from his bottle, while I watched.  At the end of the study, I watched James and the other students file into the classroom, sit down, and fill out the surveys on each desk; as they left, I gave each of them a check for $50."

This, which conveys something of the experience of running the experiment and just begs you to go out and do your own... also gives you valuable information: that the Progenitorivox or placebo was taken at regular intervals with the investigator watching, and when and where and how the survey data was collected.

Maybe this is the most efficient way to communicate that information, and maybe not.  To me it actually does seem efficient, and I would guess that the only reason people don't do this more often is that they would look insufficiently Distant and Authoritative.  I have no trouble imagining an Informer writing a story fragment or two into their journal article.

Robin says:  "I thus tend to avoid emotion, color, flash, stories, vagueness, repetition, rambling, and even eloquence."

I would guess that, to first order and before all signaling:

Persuaders actively seek out emotion, color, flash, and eloquence.  They are vague when they have something to hide.  They rehearse their favored arguments, but not to where it becomes annoying.  They try to avoid rambling because no one wants to read that.  They use stories where they expect stories to be persuasive - which, by default, they are - and avoid stories where they don't want their readers visualizing things in too much detail.

Informers avoid emotions that they fear may bias their readers.  If they can't actually avoid the emotion - e.g., the paper is about slavery - they'll explicitly point it out, along with its potential biasing effect, and they'll go to whatever lengths they can to avoid provoking or inflaming the emotion further (short of actually obscuring the subject matter).  Informers may use color to highlight the most important parts of their article, but won't usually give extra color to a single piece of specific evidence.  Informers have no use for flash.  They won't avoid being eloquent when their discussion happens to have an elegant logical structure.  Informers use the appropriate level of abstraction or maybe a little lower; they are vague when details are unknowable or almost certainly irrelevant.  Informers don't rehearse evidence, but they might find it useful to repeat some details of the experimental procedure.  Informers use stories when they have an important experience to communicate, or when a set of details is most easily conveyed in story form.  In papers that are about judgments of simple fact, Informers never use a story to arouse emotion.

I finally note, with regret, that in a world containing Persuaders, it may make sense for a second-order Informer to be deliberately eloquent if the issue has already been obscured by an eloquent Persuader - just exactly as elegant as the previous Persuader, no more, no less.  It's a pity that this wonderful excuse exists, but in the real world, well...

" } }, { "_id": "XnPpn2uSRxvmGfxHq", "title": "(Moral) Truth in Fiction?", "pageUrl": "https://www.lesswrong.com/posts/XnPpn2uSRxvmGfxHq/moral-truth-in-fiction", "postedAt": "2009-02-09T17:26:12.000Z", "baseScore": 25, "voteCount": 21, "commentCount": 82, "url": null, "contents": { "documentId": "XnPpn2uSRxvmGfxHq", "html": "

A comment by Anonymous on Three Worlds Collide:

\n
\n

After reading this story I feel myself agreeing with Eliezer more on his views and that seems to be a sign of manipulation and not of rationality.

\n

Philosophy expressed in form of fiction seems to have a very strong effect on people - even if the fiction isn't very good (ref. Ayn Rand).

\n
\n

Robin has similar qualms:

\n
\n

Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those are better moral beliefs than those of all the other authors of fictional moral dilemmas?

\n

If I'm going to read a literature that might influence my moral beliefs, I'd rather read professional philosophers and other academics making more explicit arguments.

\n
\n

I replied that I had taken considerable pains to set out the explicit arguments before daring to publish the story.  And moreover, I had gone to considerable length to present the Superhappy argument in the best possible light.  (The opposing viewpoint is the counterpart of the villain; you want it to look as reasonable as possible for purposes of dramatic conflict, the same principle whereby Frodo confronts the Dark Lord Sauron rather than a cockroach.)

\n

Robin didn't find this convincing:

\n
\n

I don't think readers should much let down their guard against communication modes where sneaky persuasion is more feasible simply because the author has made some more explicit arguments elsewhere...  Academic philosophy offers exemplary formats and styles for low-sneak ways to argue about values.

\n
\n

I think that this understates the power and utility of fiction.  I once read a book that was called something like \"How to Read\" (no, not \"How to Read a Book\") which said that nonfiction was about communicating knowledge, while fiction was about communicating experience.

\n

\n

If I want to communicate something about the experience of being a rationalist, I can best do it by writing a short story with a rationalist character.  Not only would identical abstract statements about proper responses have less impact, they wouldn't even communicate the same thought.

\n

From The Failures of Eld Science:

\n

\"...Work expands to fill the time allotted, as the saying goes.  But people can think important thoughts in far less than thirty years, if they expect speed of themselves.\"  Jeffreyssai suddenly slammed down a hand on the arm of Brennan's chair.  \"How long do you have to dodge a thrown knife?\"

\n

\"Very little time, sensei!\"

\n

\"Less than a second!  Two opponents are attacking you!  How long do you have to guess who's more dangerous?\"

\n

\"Less than a second, sensei!\"

\n

\"The two opponents have split up and are attacking two of your girlfriends!  How long do you have to decide which one you truly love?\"

\n

\"Less than a second, sensei!\"

\n

\"A new argument shows your precious theory is flawed!  How long does it take you to change your mind?\"

\n

\"Less than a second, sensei!\"

\n

\"WRONG! DON'T GIVE ME THE WRONG ANSWER JUST BECAUSE IT FITS A CONVENIENT PATTERN AND I SEEM TO EXPECT IT OF YOU!  How long does it really take, Brennan?\"

\n

Sweat was forming on Brennan's back, but he stopped and actually thought about it -

\n

\"ANSWER, BRENNAN!\"

\n

\"No sensei!  I'm not finished thinking sensei!  An answer would be premature!  Sensei!\"

\n

\"Very good!  Continue!  But don't take thirty years!\"

\n

This is an experience about how to avoid completing the pattern when the pattern happens to be blatantly wrong, and how to think quickly without thinking too quickly.

\n

Forget the question of whether you can write the equivalent abstract argument that communicates the same thought in less space.  Can you do it at all?  Is there any series of abstract arguments that creates the same learning experience in the reader?  Entering a series of believed propositions into your belief pool is not the same as feeling yourself in someone else's shoes, and reacting to the experience, and forming an experiential skill-memory of how to do it next time.

\n

And it seems to me that to communicate experience is a valid form of moral argument as well.

\n

Uncle Tom's Cabin was not just a historically powerful argument against slavery, it was a valid argument against slavery.  If human beings were constructed without mirror neurons, if we didn't hurt when we see a nonenemy hurting, then we would exist in the reference frame of a different morality, and we would decide what to do by asking a different question, \"What should* we do?\"  Without that ability to sympathize, we might think that it was perfectly all right* to keep slaves.  (See Inseparably Right and No License To Be Human.)

\n

Putting someone into the shoes of a slave and letting their mirror neurons feel the suffering of a husband separated from a wife, a mother separated from a child, a man whipped for refusing to whip a fellow slave - it's not just persuasive, it's valid.  It fires the mirror neurons that physically implement that part of our moral frame.

\n

I'm sure many have turned against slavery without reading Uncle Tom's Cabin - maybe even due to purely abstract arguments, without ever seeing the carving \"Am I Not a Man and a Brother?\"  But for some people, or for a not-much-different intelligent species, reading Uncle Tom's Cabin might be the only argument that can turn you against slavery.  Any amount of abstract argument that didn't fire the experiential mirror neurons, would not activate the part of your implicit should-function that disliked slavery.  You would just seem to be making a good profit on something you owned.

\n

Can fiction be abused?  Of course.  Suppose that blacks had no subjective experiences.  Then Uncle Tom's Cabin would have been a lie in a deeper sense than being fictional, and anyone moved by it would have been deceived.

\n

Or to give a more subtle case not involving a direct \"lie\" of this sort:  On the SL4 mailing list, Stuart Armstrong posted an argument against TORTURE in the infamous Torture vs. Dust Specks debate, consisting of a short story describing the fate of the person to be tortured.  My reply was that the appropriate counterargument would be 3^^^3 stories about someone getting a dust speck in their eye.  I actually did try to send a long message consisting only of

\n

DUST SPECK
DUST SPECK
DUST SPECK
DUST SPECK
DUST SPECK
DUST SPECK

\n

for a thousand lines or so, but the mailing software stopped it.  (Ideally, I should have created a webpage using Javascript and bignums, that, if run on a sufficiently large computer, would print out exactly 3^^^3 copies of a brief story about someone getting a dust speck in their eye.  It probably would have been the world's longest finite webpage.  Alas, I lack time for many of my good ideas.)
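
Just for concreteness, here is a toy sketch of the kind of program being joked about - Knuth's up-arrow recursion that defines 3^^^3, plus a generator that would, in principle, emit that many one-line dust-speck stories.  This is my own illustration; the function names and the story text are invented, and nothing here is the actual message or webpage.

```python
# Knuth up-arrow notation: 3^^^3 = 3^^(3^^3) is the number of dust-speck
# stories called for.  (Editor's toy sketch, not the original attempt.)

def up_arrow(a, n, b):
    """Compute a (up-arrow)^n b; only feasible for very small inputs."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 3))    # 3^^3 = 3^(3^3) = 7625597484987

def dust_speck_stories(count):
    """Yield `count` copies of a one-line dust-speck story."""
    for _ in range(count):
        yield "A dust speck drifts into someone's eye.  It is very slightly annoying."

# The full version would be dust_speck_stories(up_arrow(3, 3, 3)) -- 3^^^3
# copies, which is utterly infeasible to evaluate.  That is rather the point.
for line in dust_speck_stories(3):
    print(line)
```

Even the innermost 3^^3 is already 7,625,597,484,987, and 3^^^3 is a power tower of that many threes - which is why the hypothetical webpage stays hypothetical.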

\n

Then there's the sort of standard polemic used in e.g. Atlas Shrugged (as well as many less famous pieces of science fiction) in which Your Beliefs are put into the minds of strong empowered noble heroes, and the Opposing Beliefs are put into the mouths of evil and contemptible villains, and then the consequences of Their Way are depicted as uniformly disastrous while Your Way offers butterflies and apple pie.  That's not even subtle, but it works on people predisposed to hear the message.

\n

But to entirely turn your back on fiction is, I think, taking it too far.  Abstract argument can be abused too.  In fact, I would say that abstract argument is if anything easier to abuse because it has more degrees of freedom.  Which is easier, to say \"Slavery is good for the slave\", or to write a believable story about slavery benefiting the slave?  You can do both, but the second is at least more difficult; your brain is more likely to notice the non-sequiturs when they're played out as a written experience.

\n

Stories may not get us completely into Near mode, but they get us closer into Near mode than abstract argument.  If it's words on paper, you can end up believing that you ought to do just about anything.  If you're in the shoes of a character encountering the experience, your reactions may be harder to twist.

\n

Contrast a verbal argument against the verbal belief that \"non-Catholics go to Hell\"; versus reading a story about a good and decent person who happens to be a Protestant, dies trying to save a child's life, and is condemned to hell with molten lead poured down her throat; versus the South Park episode where a crowd of newly dead souls is at the entrance to hell, and the Devil says, \"Sorry, it was the Mormons\" and everyone goes \"Awwwww...\"

\n

Yes, abstraction done right can keep you going where concrete visualization breaks down - the torture vs. dust specks thing being an archetypal example; you can't actually visualize that many dust specks, but if you try to choose SPECKS you'll end up with circular preferences.  But so far as I can organize my metaethics, the ground level of morality lies in our preferences over particular, concrete situations - and when these can be comprehended as concrete images at all, it's best to visualize them as concretely as possible.  Unless we know specifically where the concrete image is going wrong, and have to apply an abstract correction.  The moral abstraction is built on top of the ground level.

\n

I am also, of course, worried about the idea that stories aren't \"respectable\" because they don't look sufficiently solemn and dull; or the idea that something isn't \"respectable\" if it can be understood by a mere popular audience.  Yes, there are technical fields that are genuinely impossible to explain to your grandmother in an hour; but ceteris paribus, people who can write at a more popular level without distorting technical reality are performing a huge service to that field.  I've heard that Carl Sagan was held in some disrepute by his peers for the crime of speaking to the general public.  If true, this is merely stupid.

\n

Explaining things is hard.  Explainers need every tool they can get their hands on - as a matter of public interest.

\n

And in moral philosophy - well, I suppose it could be the case that moral philosophers have discovered moral truths that are deductive consequences of most humans' moral frames, but which are so difficult and technical that they simply can't be explained to a popular audience within a one-hour lecture.  But it would be a tad more suspicious than the corresponding case in, say, physics.

\n

I realize that I speak as someone who does a lot of popularizing, but even so - fiction ought to be a respectable form of moral argument.  And a respectable way of communicating experiences, in particular the experience of applying certain types of thinking skills.

\n

I've always been of two minds about publishing longer fiction pieces about the future and its consequences.  Not so much because of the potential for abuse, but because even when not abused, fiction can still bypass critical faculties and end up poured directly into the brains of at least some readers.  Telling people about the logical fallacy of generalization from fictional evidence doesn't make it go away; people may just go on generalizing from the story as though they had actually seen it happen.  And you simply can't have a story that's a rational projection; it's not just a matter of plot, it's a matter of the story needing to be specific, rather than depicting a state of epistemic uncertainty.

\n

But to make shorter philosophical points?  Sure.

\n

And... oh, what the hell.  Just on the off-chance, are there any OB readers who could get a good movie made?  Either outside Hollywood, or able to bypass the usual dumbing-down process that creates a money-losing flop?  The probabilities are infinitesimal, I know, but I thought I'd check.

" } }, { "_id": "SqNvmwDxRibLXjMZN", "title": "...And Say No More Of It", "pageUrl": "https://www.lesswrong.com/posts/SqNvmwDxRibLXjMZN/and-say-no-more-of-it", "postedAt": "2009-02-09T00:15:35.000Z", "baseScore": 43, "voteCount": 29, "commentCount": 25, "url": null, "contents": { "documentId": "SqNvmwDxRibLXjMZN", "html": "

Followup toThe Thing That I Protect

Anything done with an ulterior motive has to be done with a pure heart.  You cannot serve your ulterior motive, without faithfully prosecuting your overt purpose as a thing in its own right, that has its own integrity.  If, for example, you're writing about rationality with the intention of recruiting people to your utilitarian Cause, then you cannot talk too much about your Cause, or you will fail to successfully write about rationality.

This doesn't mean that you never say anything about your Cause, but there's a balance to be struck.  "A fanatic is someone who can't change his mind and won't change the subject."

In previous months, I've pushed this balance too far toward talking about Singularity-related things.  And this was for (first-order) selfish reasons on my part; I was finally GETTING STUFF SAID that had been building up painfully in my brain for FRICKIN' YEARS.  And so I just kept writing, because it was finally coming out.  For those of you who have not the slightest interest, I'm sorry to have polluted your blog with that.

When Less Wrong starts up, it will, by my own request, impose a two-month moratorium on discussion of "Friendly AI" and other Singularity/intelligence explosion-related topics.

There's a number of reasons for this.  One of them is simply to restore the balance.  Another is to make sure that a forum intended to have a more general audience, doesn't narrow itself down and disappear.

But more importantly - there are certain subjects which tend to drive people crazy, even if there's truth behind them.  Quantum mechanics would be the paradigmatic example; you don't have to go funny in the head but a lot of people do.  Likewise Godel's Theorem, consciousness, Artificial Intelligence -

The concept of "Friendly AI" can be poisonous in certain ways.  True or false, it carries risks to mental health.  And not just the obvious liabilities of praising a Happy Thing.  Something stranger and subtler that drains enthusiasm.

\n

If there were no such problem as Friendly AI, I would probably be\ndevoting more or less my entire life to cultivating human rationality; I would have already been doing it for years.

And though I could be mistaken - I'm guessing that I would have been much further along by now.

Partially, of course, because it's easier to tell people things that\nthey're already prepared to hear.  "Rationality" doesn't command universal respect,\nbut it commands wide respect and recognition.  There is already the New Atheist movement, and the\nBayesian revolution; there are already currents flowing in that\ndirection.

One has to be wary, in life, of substituting easy problems for hard problems.  This is a form of running away.  "Life is what happens to you while you are making other plans", and it takes a very strong and non-distractable focus to avoid that...

But I'd been working on directly launching a Singularity movement for years, and it just wasn't getting traction.  At some point you also have to say, "This isn't working the way I'm doing it," and try something different.

There are many ulterior motives behind my participation in Overcoming Bias / Less Wrong.  One of the simpler ones is the idea of "First, produce rationalists - people who can shut up and multiply - and then, try to recruit some of them."  Not all.  You do have to care about the rationalist community for its own sake.  You have to be willing not to recruit all the rationalists you create.  The first rule of acting with ulterior motives is that it must be done with a pure heart, faithfully serving the overt purpose.

But more importantly - the whole thing only works if the strange intractability of the direct approach - the mysterious slowness of trying to build an organization directly around the Singularity - does not contaminate the new rationalist movement.

There's an old saw about the lawyer who works in a soup kitchen for an hour in order to purchase moral satisfaction, rather than work the same hour at the law firm and donate the money to hire 5 people to work at the soup kitchen.  Personal involvement isn't just pleasurable, it keeps people involved; the lawyer is more likely to donate real money to the soup kitchen later.  Research problems don't have a lot of opportunity for outsiders to get personally involved, including FAI research.  (This is why scientific research isn't usually supported by individuals, I suspect; instead scientists fight over the division of money that has been block-allocated by governments and foundations.  I should write about this later.)
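
To spell out the arithmetic behind the old saw, here is a minimal sketch; the dollar figures are invented for illustration, chosen only so that one billed hour funds the five soup-kitchen workers the saying mentions.

```python
# Illustrative arithmetic for the lawyer-and-soup-kitchen saw.  The dollar
# figures are assumptions picked to match the "hire 5 people" version of the
# saying, not data from the post.
LAWYER_HOURLY_RATE = 100   # assumed earnings from one extra billed hour
SOUP_KITCHEN_WAGE = 20     # assumed cost of one hour of hired kitchen labor

hours_bought_by_donating = LAWYER_HOURLY_RATE / SOUP_KITCHEN_WAGE
hours_from_volunteering = 1.0  # the lawyer ladling soup in person

print(f"Volunteer in person:  {hours_from_volunteering:.0f} hour of kitchen labor")
print(f"Bill and donate:      {hours_bought_by_donating:.0f} hours of kitchen labor")
# The shut-up-and-multiply answer is to bill the hour and donate -- which is
# why the argument has to lean on a different benefit of personal involvement:
# it keeps people engaged, and engaged people donate real money later.
```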

If it were the Cause of human rationality - if that had always been the purpose I'd been pursuing - then there would have been all sorts of things people could have done to personally help out, to keep their spirits high and encourage them to stay involved.  Writing letters to the editor, trying to get heuristics and biases taught in organizations and in classrooms; holding events, handing out flyers; starting a magazine, increasing the number of subscribers; students handing out copies of the "Twelve Virtues of Rationality" at campus events...

It might not be too late to start going down that road - but only if the "Friendly AI" meme doesn't take over and suck out the life and motivation.

In a purely utilitarian sense - the sort of thinking that would lead a lawyer to actually work that extra hour at the law firm and donate the money - someone who thinks that handing out flyers is important to the Cause of human rationality, should be strictly less enthusiastic than someone who thinks that handing out flyers for human rationality has directly rationality-related benefits and might help a Friendly AI project.  It's a strictly added benefit; it should result in strictly more enthusiasm...

But in practice - it's as though the idea of "Friendly AI" exerts an attraction that sucks the emotional energy out of its own subgoals.

You only press the "Run" button after you finish coding and teaching a Friendly AI; which happens after the theory has been worked out; which happens after theorists have been recruited; which happens after (a) mathematically smart people have comprehended cognitive naturalism on a deep gut level and (b) a regular flow of funding exists to support these professional specialists; which first requires that the whole project get sufficient traction; for which handing out flyers may be involved...

But something about the fascination of finally building the AI, seems to make all the mere preliminaries pale in emotional appeal.  Or maybe it's that the actual researching takes on an aura of the sacred magisterium, and then it's impossible to scrape up enthusiasm for any work outside the sacred magisterium.

If you're handing out flyers for the Cause of human rationality... it's not about a faraway final goal that makes the mere work seem all too mundane by comparison, and there isn't a sacred magisterium that you're not part of.

And this is only a brief gloss on the mental health risks of "Friendly AI"; there are others I haven't even touched on, though the others are relatively more obvious.  Import morality.crazy, import metaethics.crazy, import AI.crazy, import Noble Cause.crazy, import Happy Agent.crazy, import Futurism.crazy, etcetera.

But it boils down to this:  From my perspective, my participation in Overcoming Bias / Less Wrong has many different ulterior motives, and many different helpful potentials, many potentially useful paths leading out of it.  But the meme of Friendly AI potentially poisons many of those paths, if it interacts in the wrong way; and so the ability to shut up about the Cause is more than usually important, here.  Not shut up entirely - but the rationality part of it needs to have its own integrity.  Part of protecting that integrity is to not inject comments about "Friendly AI" into any post that isn't directly about "Friendly AI".

I would like to see "Friendly AI" be a rationalist Cause sometimes discussed on Less Wrong, alongside other rationalist Causes whose members likewise hang out there for companionship and skill acquisition.  This is as much as is necessary to recruit a fraction of the rationalists created.  Anything more would poison the community, I think.  Trying to find hooks to steer every arguably-related conversation toward your own Cause is not virtuous, it is dangerously and destructively greedy.  All Causes represented on LW will have to bear this in mind, on pain of their clever conversational hooks being downvoted to oblivion.

And when Less Wrong starts up, its integrity will be protected in a simpler way: shut up about the Singularity entirely for two months.

...and that's it.

Back to rationality.

WHEW.

(This would be a great time to announce that Less Wrong is ready to go, but they're still working on it.  Possibly later this week, possibly not.)

" } }, { "_id": "3wyMbgeFfQWntFxTf", "title": "The Thing That I Protect", "pageUrl": "https://www.lesswrong.com/posts/3wyMbgeFfQWntFxTf/the-thing-that-i-protect", "postedAt": "2009-02-07T19:18:44.000Z", "baseScore": 46, "voteCount": 29, "commentCount": 24, "url": null, "contents": { "documentId": "3wyMbgeFfQWntFxTf", "html": "

Followup to: Something to Protect, Value is Fragile

"Something to Protect" discoursed on the idea of wielding rationality in the service of something other than "rationality".  Not just that rationalists ought to pick out a Noble Cause as a hobby to keep them busy; but rather, that rationality itself is generated by having something that you care about more than your current ritual of cognition.

So what is it, then, that I protect?

I quite deliberately did not discuss that in "Something to Protect", leaving it only as a hanging implication.  In the unlikely event that we ever run into aliens, I don't expect their version of Bayes's Theorem to be mathematically different from ours, even if they generated it in the course of protecting different and incompatible values.  Among humans, the idiom of having "something to protect" is not bound to any one cause, and therefore, to mention my own cause in that post would have harmed its integrity.  Causes are dangerous things, whatever their true importance; I have written somewhat on this, and will write more about it.

But still - what is it, then, the thing that I protect?

Friendly AI?  No - a thousand times no - a thousand times not anymore.  It's not thinking of the AI that gives me strength to carry on even in the face of inconvenience.

I would be a strange and dangerous AI wannabe if that were my cause - the image in my mind of a perfected being, an existence greater than humankind.  Maybe someday I'll be able to imagine such a child and try to build one, but for now I'm too young to be a father.

Those of you who've been following along recent discussions, particularly "Value is Fragile", might have noticed something else that I might, perhaps, hold precious.  Smart agents want to protect the physical representation of their utility function for almost the same reason that male organisms are built to be protective of their testicles.  From the standpoint of the alien god, natural selection, losing the germline - the gene-carrier that propagates the pattern into the next generation - means losing almost everything that natural selection cares about.  Unless you already have children to protect, can protect relatives, etcetera - few are the absolute and unqualified statements that can be made in evolutionary biology - but still, if you happen to be a male human, you will find yourself rather protective of your testicles; that one, centralized vulnerability is why a kick in the testicles hurts more than being hit on the head.

To lose the pattern of human value - which, for now, is physically embodied only in the human brains that care about those values - would be to lose the Future itself; if there's no agent with those values, there's nothing to shape a valuable Future.

And this pattern, this one most vulnerable and precious pattern, is indeed at risk to be distorted or destroyed.  Growing up is a hard problem either way, whether you try to edit existing brains, or build de novo Artificial Intelligence that mirrors human values.  If something more powerful than humans, and not sharing human values, comes into existence - whether by de novo AI gone wrong, or augmented humans gone wrong - then we can expect to lose, hard.  And value is fragile; losing just one dimension of human value can destroy nearly all of the utility we expect from the future.

So is that, then, the thing that I protect?

If it were - then what inspired me when times got tough would be, say, thinking of people being nice to each other.  Or thinking of people laughing, and contemplating how humor probably exists among only an infinitesimal fraction of evolved intelligent species and their descendants.  I would marvel at the power of sympathy to make us feel what others feel -

But that's not quite it either.

I once attended a small gathering whose theme was "This I Believe".  You could interpret that phrase in a number of ways; I chose "What do you believe that most other people don't believe which makes a corresponding difference in your behavior?"  And it seemed to me that most of how I behaved differently from other people boiled down to two unusual beliefs.  The first belief could be summarized as "intelligence is a manifestation of order rather than chaos"; this accounts both for my attempts to master rationality, and my attempt to wield the power of AI.

And the second unusual belief could be summarized as:  "Humanity's future can be a WHOLE LOT better than its past."

Not desperately darwinian robots surging out to eat as much of the cosmos as possible, mostly ignoring their own internal values to try and grab as many stars as possible, with most of the remaining matter going into making paperclips.

Not some bittersweet ending where you and I fade away on Earth while the inscrutable robots ride off into the unknowable sunset, having grown beyond such merely human values as love or sympathy.

Screw bittersweet.  To hell with that melancholy-tinged crap.  Why leave anyone behind?  Why surrender a single thing that's precious?

(And the compromise-futures are all fake anyway; at this difficulty level, you steer precisely or you crash.)

The pattern of fun is also lawful.  And, though I do not know all the law - I do think that written in humanity's value-patterns is the implicit potential of a happy future.  A seriously goddamn FUN future.  A genuinely GOOD outcome.  Not something you'd accept with a sigh of resignation for nothing better being possible.  Something that would make you go "WOOHOO!"

In the sequence on Fun Theory, I have given you, I hope, some small reason to believe that such a possibility might be consistently describable, if only it could be made real.  How to read that potential out of humans and project it into reality... might or might not be as simple as "superpose our extrapolated reflected equilibria".  But that's one way of looking at what I'm trying to do - to reach the potential of the GOOD outcome, not the melancholy bittersweet compromise.  Why settle for less?

To really have something to protect, it has to be able to bring tears to your eyes.  That, generally, requires something concrete to visualize - not just abstract laws.  Reading the Laws of Fun doesn't bring tears to my eyes.  I can visualize a possibility or two that makes sense to me, but I don't know if it would make sense to others the same way.

What does bring tears to my eyes?  Imagining a future where humanity has its act together.  Imagining children who grow up never knowing our world, who don't even understand it.  Imagining the rescue of those now in sorrow, the end of nightmares great and small.  Seeing in reality the real sorrows that happen now, so many of which are unnecessary even now.  Seeing in reality the signs of progress toward a humanity that's at least trying to get its act together and become something more - even if the signs are mostly just symbolic: a space shuttle launch, a march that protests a war.

(And of course these are not the only things that move me.  Not everything that moves me has to be a Cause.  When I'm listening to e.g. Bach's Jesu, Joy of Man's Desiring, I don't think about how every extant copy might be vaporized if things go wrong.  That may be true, but it's not the point.  It would be as bad as refusing to listen to that melody because it was once inspired by belief in the supernatural.)

To really have something to protect, you have to be able to protect it, not just value it.  My battleground for that better Future is, indeed, the fragile pattern of value.  Not to keep it in stasis, but to keep it improving under its own criteria rather than randomly losing information.  And then to project that through more powerful optimization, to materialize the valuable future.  Without surrendering a single thing that's precious, because losing a single dimension of value could lose it all.

There's no easy way to do this, whether by de novo AI or by editing brains.  But with a de novo AI, cleanly and correctly designed, I think it should at least be possible to get it truly right and win completely.  It seems, for all its danger, the safest and easiest and shortest way (yes, the alternatives really are that bad).  And so that is my project.

That, then, is the service in which I wield rationality.  To protect the Future, on the battleground of the physical representation of value.  And my weapon, if I can master it, is the ultimate hidden technique of Bayescraft - to explicitly and fully know the structure of rationality, to such an extent that you can shape the pure form outside yourself - what some call "Artificial General Intelligence" and I call "Friendly AI".  Which is, itself, a major unsolved research problem, and so it calls into play the more informal methods of merely human rationality.  That is the purpose of my art and the wellspring of my art.

That's pretty much all I wanted to say here about this Singularity business...

...except for one last thing; so after tomorrow, I plan to go back to posting about plain old rationality on Monday.

" } }, { "_id": "4pov2tL6SEC23wrkq", "title": "Epilogue: Atonement (8/8)", "pageUrl": "https://www.lesswrong.com/posts/4pov2tL6SEC23wrkq/epilogue-atonement-8-8", "postedAt": "2009-02-06T11:52:42.000Z", "baseScore": 104, "voteCount": 75, "commentCount": 198, "url": null, "contents": { "documentId": "4pov2tL6SEC23wrkq", "html": "

(Part 8 of 8 in \"Three Worlds Collide\")

\n

Fire came to Huygens.

\n

The star erupted.

\n

Stranded ships, filled with children doomed by a second's last delay, still milled around the former Earth transit point.  Too many doomed ships, far too many doomed ships.  They should have left a minute early, just to be sure; but the temptation to load in that one last child must have been irresistible.  To do the warm and fuzzy thing just this one time, instead of being cold and calculating.  You couldn't blame them, could you...?

\n

Yes, actually, you could.

\n

The Lady Sensory switched off the display.  It was too painful.

\n

On the Huygens market, the price of a certain contract spiked to 100%.  They were all rich in completely worthless assets for the next nine minutes, until the supernova blast front arrived.

\n

\"So,\" the Lord Pilot finally said.  \"What kind of asset retains its value in a market with nine minutes to live?\"

\n

\n

\"Booze for immediate delivery,\" the Master of Fandom said promptly.  \"That's what you call a -\"

\n

\"Liquidity preference,\" the others chorused.

\n

The Master laughed.  \"All right, that was too obvious.  Well... chocolate, sex -\"

\n

\"Not necessarily,\" said the Lord Pilot.  \"If you can use up the whole supply of chocolate at once, does demand outstrip supply?  Same with sex - the value could actually drop if everyone's suddenly willing.  Not to mention:  Nine minutes?\"

\n

\"All right then, expert oral sex from experienced providers.  And hard drugs with dangerous side effects; the demand would rise hugely relative to supply -\"

\n

\"This is inane,\" the Ship's Engineer commented.

\n

The Master of Fandom shrugged.  \"What do you say in the unrecorded last minutes of your life that is not inane?\"

\n

\"It doesn't matter,\" said the Lady Sensory.  Her face was strangely tranquil.  \"Nothing that we do now matters.  We won't have to live with the consequences.  No one will.  All this time will be obliterated when the blast front hits.  The role I've always played, the picture that I have of me... it doesn't matter.  There's... a peace... in not having to be Dalia Ancromein any more.\"

\n

The others looked at her.  Talk about killing the mood.

\n

\"Well,\" the Master of Fandom said, \"since you raise the subject, I suppose it would be peaceful if not for the screaming terror.\"

\n

\"You don't have to feel the screaming terror,\" the Lady Sensory said.  \"That's just a picture you have in your head of how it should be.  The role of someone facing imminent death.  But I don't have to play any more roles.  I don't have to feel screaming terror.  I don't have to frantically pack in a few last moments of fun.  There are no more obligations.\"

\n

\"Ah,\" the Master of Fandom said, \"so I guess this is when we find out who we really are.\"  He paused for a moment, then shrugged.  \"I don't seem to be anyone in particular.  Oh well.\"

\n

The Lady Sensory stood up, and walked across the room to where the Lord Pilot stood looking at the viewscreen.

\n

\"My Lord Pilot,\" the Lady Sensory said.

\n

\"Yes?\" the Lord Pilot said.  His face was expectant.

\n

The Lady Sensory smiled.  It was bizarre, but not frightening.  \"Do you know, my Lord Pilot, that I had often thought how wonderful it would be to kick you very hard in the testicles?\"

\n

\"Um,\" the Lord Pilot said.  His arms and legs suddenly tensed, preparing to block.

\n

\"But now that I could do it,\" the Lady Sensory said, \"I find that I don't really want to.  It seems... that I'm not as awful a person as I thought.\"  She gave a brief sigh.  \"I wish that I had realized it earlier.\"

\n

The Lord Pilot's hand swiftly darted out and groped the Lady Sensory's breast.  It was so unexpected that no one had time to react, least of all her.  \"Well, what do you know,\" the Pilot said, \"I'm just as much of a pervert as I thought.  My self-estimate was more accurate than yours, nyah nyah -\"

\n

The Lady Sensory kneed him in the groin, hard enough to drop him moaning to the floor, but not hard enough to require medical attention.

\n

\"Okay,\" the Master of Fandom said, \"can we please not go down this road?  I'd like to die with at least some dignity.\"

\n

There was a long, awkward silence, broken only by a quiet \"Ow ow ow ow...\"

\n

\"Would you like to hear something amusing?\" asked the Kiritsugu, who had once been a Confessor.

\n

\"If you're going to ask that question,\" said the Master of Fandom, \"when the answer is obviously yes, thus wasting a few more seconds -\"

\n

\"Back in the ancient days that none of you can imagine, when I was seventeen years old - which was underage even then - I stalked an underage girl through the streets, slashed her with a knife until she couldn't stand up, and then had sex with her before she died.  It was probably even worse than you're imagining.  And deep down, in my very core, I enjoyed every minute.\"

\n

Silence.

\n

\"I don't think of it often, mind you.  It's been a long time, and I've taken a lot of intelligence-enhancing drugs since then.  But still - I was just thinking that maybe what I'm doing now finally makes up for that.\"

\n

\"Um,\" said the Ship's Engineer.  \"What we just did, in fact, was kill fifteen billion people.\"

\n

\"Yes,\" said the Kiritsugu, \"that's the amusing part.\"

\n

Silence.

\n

\"It seems to me,\" mused the Master of Fandom, \"that I should feel a lot worse about that than I actually do.\"

\n

\"We're in shock,\" the Lady Sensory observed distantly.  \"It'll hit us in about half an hour, I expect.\"

\n

\"I think it's starting to hit me,\" the Ship's Engineer said.  His face was twisted.  \"I - I was so worried I wouldn't be able to destroy my home planet, that I didn't get around to feeling unhappy about succeeding until now.  It... hurts.\"

\n

\"I'm mostly just numb,\" the Lord Pilot said from the floor.  \"Well, except down there, unfortunately.\"  He slowly sat up, wincing.  \"But there was this absolute unalterable thing inside me, screaming so loud that it overrode everything.  I never knew there was a place like that within me.  There wasn't room for anything else until humanity was safe.  And now my brain is worn out.  So I'm just numb.\"

\n

\"Once upon a time,\" said the Kiritsugu, \"there were people who dropped a U-235 fission bomb, on a place called Hiroshima.  They killed perhaps seventy thousand people, and ended a war.  And if the good and decent officer who pressed that button had needed to walk up to a man, a woman, a child, and slit their throats one at a time, he would have broken long before he killed seventy thousand people.\"

\n

Someone made a choking noise, as if trying to cough out something that had suddenly lodged deep in their throat.

\n

\"But pressing a button is different,\" the Kiritsugu said.  \"You don't see the results, then.  Stabbing someone with a knife has an impact on you.  The first time, anyway.  Shooting someone with a gun is easier.  Being a few meters further away makes a surprising difference.  Only needing to pull a trigger changes it a lot.  As for pressing a button on a spaceship - that's the easiest of all.  Then the part about 'fifteen billion' just gets flushed away.  And more importantly - you think it was the right thing to do.  The noble, the moral, the honorable thing to do.  For the safety of your tribe.  You're proud of it -\"

\n

\"Are you saying,\" the Lord Pilot said, \"that it was not the right thing to do?\"

\n

\"No,\" the Kiritsugu said.  \"I'm saying that, right or wrong, the belief is all it takes.\"

\n

\"I see,\" said the Master of Fandom.  \"So you can kill billions of people without feeling much, so long as you do it by pressing a button, and you're sure it's the right thing to do.  That's human nature.\"  The Master of Fandom nodded.  \"What a valuable and important lesson.  I shall remember it all the rest of my life.\"

\n

\"Why are you saying all these things?\" the Lord Pilot asked the Kiritsugu.

\n

The Kiritsugu shrugged.  \"When I have no reason left to do anything, I am someone who tells the truth.\"

\n

\"It's wrong,\" said the Ship's Engineer in a small, hoarse voice, \"I know it's wrong, but - I keep wishing the supernova would hurry up and get here.\"

\n

\"There's no reason for you to hurt,\" said the Lady Sensory in a strange calm voice.  \"Just ask the Kiritsugu to stun you.  You'll never wake up.\"

\n

\"...no.\"

\n

\"Why not?\" asked the Lady Sensory, in a tone of purely abstract curiosity.

\n

The Ship's Engineer clenched his hands into fists.  \"Because if hurting is that much of a crime, then the Superhappies are right.\"  He looked at the Lady Sensory.  \"You're wrong, my lady.  These moments are as real as every other moment of our lives.  The supernova can't make them not exist.\"  His voice lowered.  \"That's what my cortex says.  My diencephalon wishes we'd been closer to the sun.\"

\n

\"It could be worse,\" observed the Lord Pilot.  \"You could not hurt.\"

\n

\"For myself,\" the Kiritsugu said quietly, \"I had already visualized and accepted this, and then it was just a question of watching it play out.\"  He sighed.  \"The most dangerous truth a Confessor knows is that the rules of society are just consensual hallucinations.  Choosing to wake up from the dream means choosing to end your life.  I knew that when I stunned Akon, even apart from the supernova.\"

\n

\"Okay, look,\" said the Master of Fandom, \"call me a gloomy moomy, but does anyone have something uplifting to say?\"

\n

The Lord Pilot jerked a thumb at the expanding supernova blast front, a hundred seconds away.  \"What, about that?\"

\n

\"Yeah,\" the Master of Fandom said.  \"I'd like to end my life on an up note.\"

\n

\"We saved the human species,\" offered the Lord Pilot.  \"Man, that's the sort of thing you could just repeat to yourself over and over and over again -\"

\n

\"Besides that.\"

\n

\"Besides WHAT?\"

\n

The Master managed to hold a straight face for a few seconds, and then had to laugh.

\n

\"You know,\" the Kiritsugu said, \"I don't think there's anyone in modern-day humanity, who would regard my past self as anything but a poor, abused victim.  I'm pretty sure my mother drank during pregnancy, which, back then, would give your child something called Fetal Alcohol Syndrome.  I was poor, uneducated, and in an environment so entrepreneurially hostile you can't even imagine it -\"

\n

\"This is not sounding uplifting,\" the Master said.

\n

\"But somehow,\" the Kiritsugu said, \"all those wonderful excuses - I could never quite believe in them myself, afterward.  Maybe because I'd also thought of some of the same excuses before.  It's the part about not doing anything that got to me.  Others fought the war to save the world, far over my head.  Lightning flickering in the clouds high above me, while I hid in the basement and suffered out the storm.  And by the time I was rescued and healed and educated, in any shape to help others - the battle was essentially over.  Knowing that I'd been a victim for someone else to save, one more point in someone else's high score - that just stuck in my craw, all those years...\"

\n

\"...anyway,\" the Kiritsugu said, and there was a small, slight smile on that ancient face, \"I feel better now.\"

\n

\"So does that mean,\" asked the Master, \"that now your life is finally complete, and you can die without any regrets?\"

\n

The Kiritsugu looked startled for a moment.  Then he threw back his head and laughed.  True, pure, honest laughter.  The others began to laugh as well, and their shared hilarity echoed across the room, as the supernova blast front approached at almost exactly the speed of light.

\n

Finally the Kiritsugu stopped laughing, and said:

\n

\"Don't be ridicu-\"

\n

 

\n

 

\n

 

\n

 

\n

 

\n

 

" } }, { "_id": "6Ls6f5PerERJmsTGB", "title": "True Ending: Sacrificial Fire (7/8)", "pageUrl": "https://www.lesswrong.com/posts/6Ls6f5PerERJmsTGB/true-ending-sacrificial-fire-7-8", "postedAt": "2009-02-05T10:57:12.000Z", "baseScore": 70, "voteCount": 48, "commentCount": 91, "url": null, "contents": { "documentId": "6Ls6f5PerERJmsTGB", "html": "

(Part 7 of 8 in "Three Worlds Collide")

Standing behind his target, unnoticed, the Ship's Confessor had produced from his sleeve the tiny stunner - the weapon which he alone on the ship was authorized to use, if he made a determination of outright mental breakdown. With a sudden motion, his arm swept outward -

- and anesthetized the Lord Akon.

Akon crumpled almost instantly, as though most of his strings had already been cut, and only a few last strands had been holding his limbs in place.

Fear, shock, dismay, sheer outright surprise: that was the Command Conference staring aghast at the Confessor.

From the hood came words absolutely forbidden to originate from that shadow: the voice of command. "Lord Pilot, take us through the starline back to the Huygens system. Get us moving now, you are on the critical path. Lady Sensory, I need you to enforce an absolute lockdown on all of this ship's communication systems except for a single channel under your direct control. Master of Fandom, get me proxies on the assets of every being on this ship. We are going to need capital."

For a moment, the Command Conference was frozen, voiceless and motionless, as everyone waited for someone else to do something.

And then -

"Moving the Impossible now, my lord," said the Lord Pilot. His face was sane once again. "What's your plan?"

"He is not your lord!" cried the Master of Fandom. Then his voice dropped. "Excuse me. Confessor - it did not appear to me that our Lord Administrator was insane. And you, of all people, cannot just seize power -"

"True," said the one, "Akon was sane. But he was also an honest man who would keep his word once he gave it, and that I could not allow. As for me - I have betrayed my calling three times over, and am no longer a Confessor." With that same response, the once-Confessor swept back the hood -

At any other time, the words and the move and the revealed face would have provoked shock to the point of fainting. On this day, with the whole human species at stake, it seemed merely interesting. Chaos had already run loose, madness was already unleashed into the world, and a little more seemed of little consequence.

"Ancestor," said the Master, "you are twice prohibited from exercising any power here."

The former Confessor smiled dryly. "Rules like that only exist within our own minds, you know. Besides," he added, "I am not steering the future of humanity in any real sense, just stepping in front of a bullet. That is not even advice, let alone an order. And it is... appropriate... that I, and not any of you, be the one who orders this thing done -"

"Fuck that up the ass with a hedge trimmer," said the Lord Pilot. "Are we going to save the human species or not?"

There was a pause while the others figured out the correct answer.

Then the Master sighed, and inclined his head in assent to the once-Confessor. "I shall follow your orders... kiritsugu."

Even the Kiritsugu flinched at that, but there was work to be done, and not much time in which to do it.

In the Huygens system, the Impossible Possible World was observed to return from its much-heralded expedition, appearing on the starline that had shown the unprecedented anomaly. Instantly, without a clock tick's delay, the Impossible broadcast a market order.

That was already a dozen ways illegal. If the Impossible had made a scientific discovery, it should have broadcast the experimental results openly before attempting to trade on them. Otherwise the result was not profit but chaos, as traders throughout the market refused to deal with you; just conditioning on the fact that you wanted to sell or buy from them, was reason enough for them not to. The whole market seized up as hedgers tried to guess what the hidden experimental results could have been, and which of their counterparties had private information.

The Impossible ignored the rules. It broadcast the specification of a new prediction contract, signed with EMERGENCY OVERRIDE and IMMINENT HARM and CONFESSOR FLAG - signatures that carried extreme penalties, up to total confiscation, for misuse; but any one of which ensured that the contract would appear on the prediction markets at almost the speed of the raw signal.

The Impossible placed an initial order on the contract backed by nearly the entire asset base of its crew.

The prediction's plaintext read:

In three hours and forty-one minutes, the starline between Huygens and Earth will become impassable.
Within thirty minutes after, every human being remaining in this solar system will die.
All passage through this solar system will be permanently denied to humans thereafter.
(The following plaintext is not intended to describe the contract's terms, but justifies why a probability estimate on the underlying proposition is of great social utility:
ALIENS. ANYONE WITH A STARSHIP, FILL IT WITH CHILDREN AND GO! GET OUT OF HUYGENS, NOW!)

In the Huygens system, there was almost enough time to draw a single breath.

And then the markets went mad, as every single trader tried to calculate the odds, and every married trader abandoned their positions and tried to get their children to a starport.

"Six," murmured the Master of Fandom, "seven, eight, nine, ten, eleven -"

A holo appeared within the Command Conference, a signal from the President of the Huygens Central Clearinghouse, requesting (or perhaps "demanding" would have been a better word) an interview with the Lord Administrator of the Impossible Possible World.

"Put it through," said the Lord Pilot, now sitting in Akon's chair as the figurehead anointed by the Kiritsugu.

"Aliens?" the President demanded, and then her eye caught the Pilot's uniform. "You're not an Administrator -"

"Our Lord Administrator is under sedation," said the Kiritsugu beside; he was wearing his Confessor's hood again, to save on explanations. "He placed himself under more stress than any of us -"

The President made an abrupt cutting gesture. "Explain this - contract. And if this is a market manipulation scheme, I'll see you all tickled until the last sun grows cold!"

"We followed the starline that showed the anomalous behavior," the Lord Pilot said, "and found that a nova had just occurred in the originating system. In other words, my Lady President, it was a direct effect of the nova and thus occurred on all starlines leading out of that system. We've never found aliens before now - but that's reflective of the probability of any single system we explore having been colonized. There might even be a starline leading out of this system that leads to an alien domain - but we have no way of knowing which one, and opening a new starline is expensive. The nova acted as a common rendezvous signal, my Lady President. It reflects the probability, not that we and the aliens encounter each other by direct exploration, but the probability that we have at least one neighboring world in common."

The President was pale. "And the aliens are hostile."

The Lord Pilot involuntarily looked to the Kiritsugu.

"Our values are incompatible," said the Kiritsugu.

"Yes, that's one way of putting it," said the Lord Pilot. "And unfortunately, my Lady President, their technology is considerably in advance of ours."

"Lord... Pilot," the President said, "are you certain that the aliens intend to wipe out the human species?"

The Lord Pilot gave a very thin, very flat smile. "Incompatible values, my Lady President. They're quite skilled with biotechnology. Let's leave it at that."

Sweat was running down the President's forehead. "And why did they let you go, then?"

"We arranged for them to be told a plausible lie," the Lord Pilot said simply. "One of the reasons they're more advanced than us is that they're not very good at deception."

"None of this," the President said, and now her voice was trembling, "none of this explains why the starline between Huygens and Earth will become impassable. Surely, if what you say is true, the aliens will pour through our world, and into Earth, and into the human starline network. Why do you think that this one starline will luckily shut down?"

The Lord Pilot drew a breath. It was good form to tell the exact truth when you had something to hide. "My Lady President, we encountered two alien species at the nova. The first species exchanged scientific information with us. It is the second species that we are running from. But, from the first species, we learned a fact which this ship can use to shut down the Earth starline. For obvious reasons, my Lady President, we do not intend to share this fact publicly. That portion of our final report will be encrypted to the Chair of the Interstellar Association for the Advancement of Science, and to no other key."

The President started laughing. It was wild, hysterical laughter that caused the Kiritsugu's hood to turn toward her. From the corner of the screen, a gloved hand entered the view; the hand of the President's own Confessor. "My lady..." came a soft female voice.

"Oh, very good," the President said. "Oh, marvelous. So it's your ship that's going to be responsible for this catastrophe. You admit that, eh? I'm amazed. You probably managed to avoid telling a single direct lie. You plan to blow up our star and kill fifteen billion people, and you're trying to stick to the literal truth."

The Lord Pilot slowly nodded. "When we compared the first aliens' scientific database to our own -"

"No, don't tell me. I was told it could be done by a single ship, but I'm not supposed to know how. Astounding that an alien species could be so peaceful they don't even consider that a secret. I think I would like to meet these aliens. They sound much nicer than the other ones - why are you laughing?"

"My Lady President," the Lord Pilot said, getting a grip on himself, "forgive me, we've been through a lot. Excuse me for asking, but are you evacuating the planet or what?"

The President's gaze suddenly seemed sharp and piercing like the fire of stars. "It was set in motion instantly, of course. No comparable harm done, if you're wrong. But three hours and forty-one minutes is not enough time to evacuate ten percent of this planet's children." The President's eyes darted at something out of sight. "With eight hours, we could call in ships from the Earth nexus and evacuate the whole planet."

"My lady," a soft voice came from behind the President, "it is the whole human species at stake. Not just the entire starline network beyond Earth, but the entire future of humanity. Any incrementally higher probability of the aliens arriving within that time -"

The President stood in a single fluid motion that overturned her chair, moving so fast that the viewpoint bobbed as it tried to focus on her and the shadow-hooded figure standing beside. "Are you telling me," she said, and her voice rose to a scream, "to shut up and multiply?"

"Yes."

The President turned back to the camera angle, and said simply, "No. You don't know the aliens are following that close behind you - do you? We don't even know if you can shut down the starline! No matter what your theory predicts, it's never been tested - right? What if you create a flare bright enough to roast our planet, but not explode the whole sun? Billions would die, for nothing! So if you do not promise me a minimum of - let's call it nine hours to finish evacuating this planet - then I will order your ship destroyed before it can act."

No one from the Impossible spoke.

The President's fist slammed her desk. "Do you understand me? Answer! Or in the name of Huygens, I will destroy your ship -"

Her Confessor caught her President's body, very gently supporting it as it collapsed.

Even the Lord Pilot was pale and silent. But that, at least, had been within law and tradition; no one could have called that thinking sane.

On the display, the Confessor bowed her hood. "I will inform the markets that the Lady President was driven unstable by your news," she said quietly, "and recommend to the government that they carry out the evacuation without asking further questions of your ship. Is there anything else you wish me to tell them?" Her hood turned slightly, toward the Kiritsugu. "Or tell me?"

There was a strange, quick pause, as the shadows from within the two hoods stared at each other.

Then: "No," replied the Kiritsugu. "I think it has all been said."

The Confessor's hood nodded. "Goodbye."

"There it goes," the Ship's Engineer said. "We have a complete, stable positive feedback loop."

On screen was the majesty that was the star Huygens, of the inhabited planet Huygens IV. Overlaid in false color was the recirculating loop of Alderson forces which the Impossible had steadily fed.

Fusion was now increasing in the star, as the Alderson forces encouraged nuclear barriers to break down; and the more fusions occurred, the more Alderson force was generated. Round and round it went. All the work of the Impossible, the full frantic output of their stardrive, had only served to subtly steer the vast forces being generated; nudge a fraction into a circle rather than a line. But now -

Did the star brighten? It was only their imagination, they knew. Photons take centuries to exit a sun, under normal circumstances. The star's core was trying to expand, but it was expanding too slowly - all too slowly - to outrun the positive feedback that had begun.

"Multiplication factor one point oh five," the Engineer said. "It's climbing faster now, and the loop seems to be intact. I think we can conclude that this operation is going to be... successful. One point two."

"Starline instability detected," the Lady Sensory said.

Ships were still disappearing in frantic waves on the starline toward Earth. Still connected to the Huygens civilization, up to the last moment, by tiny threads of Alderson force.

"Um, if anyone has anything they want to add to our final report," the Ship's Engineer said, "they've got around ten seconds."

"Tell the human species from me -" the Lord Pilot said.

"Five seconds."

The Lord Pilot shouted, fist held high and triumphant: "To live, and occasionally be unhappy!"

This concludes the full and final report of the Impossible Possible World.

(To be completed.)

" } }, { "_id": "HWH46whexsoqR3yXk", "title": "Normal Ending: Last Tears (6/8)", "pageUrl": "https://www.lesswrong.com/posts/HWH46whexsoqR3yXk/normal-ending-last-tears-6-8", "postedAt": "2009-02-04T08:45:35.000Z", "baseScore": 86, "voteCount": 66, "commentCount": 68, "url": null, "contents": { "documentId": "HWH46whexsoqR3yXk", "html": "

(Part 6 of 8 in \"Three Worlds Collide\")

\n

Today was the day.

\n

The streets of ancient Earth were crowded to overbursting with people looking up at the sky, faces crowded up against windows.

\n

Waiting for their sorrows to end.

\n

Akon was looking down at their faces, from the balcony of a room in a well-guarded hotel.  There were many who wished to initiate violence against him, which was understandable.  Fear showed on most of the faces in the crowd, rage in some; a very few were smiling, and Akon suspected they might have simply given up on holding themselves together.  Akon wondered what his own face looked like, right now.

\n

The streets were less crowded than they might have been, only a few weeks earlier.

\n

No one had told the Superhappies about that part.  They'd sent an ambassadorial ship \"in case you have any urgent requests we can help with\", arriving hard on the heels of the Impossible.  That ship had not been given any of the encryption keys to the human Net, nor allowed to land.  It had made the Superhappies extremely suspicious, and the ambassadorial ship had disgorged a horde of tiny daughters to observe the rest of the human starline network -

\n

But if the Superhappies knew, they would have tried to stop it.  Somehow.

\n

That was a price that no one was willing to include into the bargain, no matter what.  There had to be that - alternative.

\n

\n

A quarter of the Impossible Possible World's crew had committed suicide, when the pact and its price became known.  Others, Akon thought, had waited only to be with their families.  The percentage on Earth... would probably be larger.  The government, what was left of it, had refused to publish statistics.  All you saw was the bodies being carried out of the apartments - in plain, unmarked boxes, in case the Superhappy ship was using optical surveillance.

\n

Akon swallowed.  The fear was already drying his own throat, the fear of changing, of becoming something else that wasn't quite him.  He understood the urge to end that fear, at any price.  And yet at the same time, he didn't, couldn't understand the suicides.  Was being dead a smaller change?  To die was not to leave the world, not to escape somewhere else; it was the simultaneous change of every piece of yourself into nothing.

\n

Many parents had made that choice for their children.  The government had tried to stop it.  The Superhappies weren't going to like it, when they found out.  And it wasn't right, when the children themselves wouldn't be so afraid of a world without pain.  It wasn't as if the parents and children were going somewhere together.  The government had done its best, issued orders, threatened confiscations - but there was only so much you could do to coerce someone who was going to die anyway.

\n

So more often than not, they carried away the mother's body with her daughter's, the father with the son.

\n

The survivors, Akon knew, would regret that far more vehemently, once they were closer to the Superhappy point of view.

\n

Just as they would regret not eating the tiny bodies of the infants.

\n

A hiss went up from the crowd, the intake of a thousand breaths.  Akon looked up, and he saw in the sky the cloud of ships, dispersing from the direction of the Sun and the Huygens starline.  Even at this distance they twinkled faintly.  Akon guessed - and as one ship grew closer, he knew that he was right - that the Superhappy ships were no longer things of pulsating ugliness, but gently shifting iridescent crystal, designs that both a human and a Babyeater would find beautiful.  The Superhappies had been swift to follow through on their own part of the bargain.  Their new aesthetic senses would already be an intersection of three worlds' tastes.

\n

The ship drew closer, overhead.  It was quieter in the air than even the most efficient human ships, twinkling brightly and silently; the way that someone might imagine a star in the night sky would look close up, if they had no idea of the truth.

\n

The ship stopped, hovering above the roads, between the buildings.

\n

Other bright ships, still searching for their destinations, slid by overhead like shooting stars.

\n

Long, graceful iridescent tendrils extended from the ship, down toward the crowd.  One of them came toward his own balcony, and Akon saw that it was marked with the curves of a door.

\n

The crowd didn't break, didn't run, didn't panic.  The screams failed to spread, as the strong hugged the weak and comforted them.  That was something to be proud of, in the last moments of the old humanity.

\n

The tendril reaching for Akon halted just before him.  The door marked at its end dilated open.

\n

And wasn't it strange, now, the crowd was looking up at him.

\n

Akon took a deep breath.  He was afraid, but -

\n

There wasn't much point in standing here, going on being afraid, experiencing futile disutility.

\n

He stepped through the door, into a neat and well-lighted transparent capsule.

\n

The door slid shut again.

\n

Without a lurch, without a sound, the capsule moved up toward the alien ship.

\n

One last time, Akon thought of all his fear, of the sick feeling in his stomach and the burning that was becoming a pain in his throat.  He pinched himself on the arm, hard, very hard, and felt the warning signal telling him to stop.

\n

Goodbye, Akon thought; and the tears began falling down his cheek, as though that one silent word had, for the very last time, broken his heart.

\n \n

 

\n

 

\n

 

\n

 

\n

 

\n

 

\n

 

\n

And he lived happily ever after.

" } }, { "_id": "Z263n4TXJimKn6A8Z", "title": "Three Worlds Decide (5/8)", "pageUrl": "https://www.lesswrong.com/posts/Z263n4TXJimKn6A8Z/three-worlds-decide-5-8", "postedAt": "2009-02-03T09:14:02.000Z", "baseScore": 86, "voteCount": 57, "commentCount": 141, "url": null, "contents": { "documentId": "Z263n4TXJimKn6A8Z", "html": "

(Part 5 of 8 in "Three Worlds Collide")

Akon strode into the main Conference Room; and though he walked like a physically exhausted man, at least his face was determined.  Behind him, the shadowy Confessor followed.

The Command Conference looked up at him, and exchanged glances.

"You look better," the Ship's Master of Fandom ventured.

Akon put a hand on the back of his seat, and paused.  Someone was absent.  "The Ship's Engineer?"

The Lord Programmer frowned.  "He said he had an experiment to run, my lord.  He refused to clarify further, but I suppose it must have something to do with the Babyeaters' data -"

"You're joking," Akon said.  "Our Ship's Engineer is off Nobel-hunting?  Now?  With the fate of the human species at stake?"

The Lord Programmer shrugged.  "He seemed to think it was important, my lord."

Akon sighed.  He pulled his chair back and half-slid, half-fell into it.  "I don't suppose that the ship's markets have settled down?"

The Lord Pilot grinned sardonically.  "Read for yourself."

\n

Akon twitched, calling up a screen.  "Ah, I see.  The ship's Interpreter of the Market's Will reports, and I quote, 'Every single one of the underlying assets in my market is going up and down like a fucking yo-yo while the ship's hedgers try to adjust to a Black Swan that's going to wipe out ninety-eight percent of their planetside risk capital.  Even the spot prices on this ship are going crazy; either we've got bubble traders coming out of the woodwork, or someone seriously believes that sex is overvalued relative to orange juice.  One derivatives trader says she's working on a contract that will have a clearly defined value in the event that aliens wipe out the entire human species, but she says it's going to take a few hours and I say she's on crack.  Indeed I believe an actual majority of the people still trying to trade in this environment are higher than the heliopause.  Bid-ask spreads are so wide you could kick a fucking football stadium through them, nothing is clearing, and I have unisolated conditional dependencies coming out of my ass.  I have no fucking clue what the market believes.  Someone get me a drink.'  Unquote."  Akon looked at the Master of Fandom.  "Any suggestions get reddited up from the rest of the crew?"

The Master cleared his throat.  "My lord, we took the liberty of filtering out everything that was physically impossible, based on pure wishful thinking, or displayed a clear misunderstanding of naturalistic metaethics.  I can show you the raw list, if you'd like."

"And what's left?" Akon said.  "Oh, never mind, I get it."

"Well, not quite," said the Master.  "To summarize the best ideas -"  He gestured a small holo into existence.

Ask the Superhappies if their biotechnology is capable of in vivo cognitive alterations of Babyeater children to ensure that they don't grow up wanting to eat their own children.  Sterilize the current adults.  If Babyeater adults cannot be sterilized and will not surrender, imprison them.  If that's too expensive, kill most of them, but leave enough in prison to preserve their culture for the children.  Offer the Superhappies an alliance to invade the Babyeaters, in which we provide the capital and labor and they provide the technology.

"Not too bad," Akon said.  His voice grew somewhat dry.  "But it doesn't seem to address the question of what the Superhappies are supposed to do with us.  The analogous treatment -"

"Yes, my lord," the Master said.  "That was extensively pointed out in the comments, my lord.  And the other problem is that the Superhappies don't really need our labor or our capital."  The Master looked in the direction of the Lord Programmer, the Xenopsychologist, and the Lady Sensory.

The Lord Programmer said, "My lord, I believe the Superhappies think much faster than we do.  If their cognitive systems are really based on something more like DNA than like neurons, that shouldn't be surprising.  In fact, it's surprising that the speedup is as little as -"  The Lord Programmer stopped, and swallowed.  "My lord.  The Superhappies responded to most of our transmissions extremely quickly.  There was, however, a finite delay.  And that delay was roughly proportional to the length of the response, plus an additive constant.  Going by the proportion, my lord, I believe they think between fifteen and thirty times as fast as we do, to the extent such a comparison can be made.  If I try to use Moore's Law type reasoning on some of the observable technological parameters in their ship - Alderson flux, power density, that sort of thing - then I get a reasonably convergent estimate that the aliens are two hundred years ahead of us in human-equivalent subjective time.  Which means it would be twelve hundred equivalent years since their Scientific Revolution."

"If," the Xenopsychologist said, "their history went as slowly as ours.  It probably didn't."  The Xenopsychologist took a breath.  "My lord, my suspicion is that the aliens are literally able to run their entire ship using only three kiritsugu as sole crew.  My lord, this may represent, not only the superior programming ability that translated their communications to us, but also the highly probable case that Superhappies can trade knowledge and skills among themselves by having sex.  Every individual of their species might contain the memory of their Einsteins and Newtons and a thousand other areas of expertise, no more conserved than DNA is conserved among humans.  My lord, I suspect their version of Galileo was something like thirty objective years ago, as the stars count time, and that they've been in space for maybe twenty years."

The Lady Sensory said, "Their ship has a plane of symmetry, and it's been getting wider on the axis through that plane, as it sucks up nova dust and energy.  It's growing on a smooth exponential at 2% per hour, which means it can split every thirty-five hours in this environment."

"I have no idea," the Xenopsychologist said, "how fast the Superhappies can reproduce themselves - how many children they have per generation, or how fast their children sexually mature.  But all things considered, I don't think we can count on their kids taking twenty years to get through high school."

There was silence.

When Akon could speak again, he said, "Are you all quite finished?"

"If they let us live," the Lord Programmer said, "and if we can work out a trade agreement with them under Ricardo's Law of Comparative Advantage, interest rates will -"

"Interest rates can fall into an open sewer and die.  Any further transmissions from the Superhappy ship?"

The Lady Sensory shook her head.

"All right," Akon said.  "Open a transmission channel to them."

There was a stir around the table.  "My lord -" said the Master of Fandom.  "My lord, what are you going to say?"

Akon smiled wearily.  "I'm going to ask them if they have any options to offer us."

The Lady Sensory looked at the Ship's Confessor.  The hood silently nodded:  He's still sane.

The Lady Sensory swallowed, and opened a channel.  On the holo there first appeared, as a screen:

The Lady 3rd Kiritsugu
    temporary co-chair of the Gameplayer
        Language Translator version 9
        Cultural Translator version 16

The Lady 3rd in this translation was slightly less pale, and looked a bit more concerned and sympathetic.  She took in Akon's appearance at a glance, and her eyes widened in alarm.  "My lord, you're hurting!"

"Just tired, milady," Akon said.  He cleared his throat.  "Our ship's decision-making usually relies on markets and our markets are behaving erratically.  I'm sorry to inflict that on you as shared pain, and I'll try to get this over with quickly.  Anyway -"

\n

Out of the corner of his eye, Akon saw the Ship's Engineer re-enter the room; the Engineer looked as if he had something to say, but froze when he saw the holo.

There was no time for that now.

"Anyway," Akon said, "we've worked out that the key decisions depend heavily on your level of technology.  What do you think you can actually do with us or the Babyeaters?"

The Lady 3rd sighed.  "I really should get your independent component before giving you ours - you should at least think of it first - but I suppose we're out of luck on that.  How about if I just tell you what we're currently planning?"

Akon nodded.  "That would be much appreciated, milady."  Some of his muscles that had been tense, started to relax.  Cultural Translator version 16 was a lot easier on his brain.  Distantly, he wondered if some transformed avatar of himself was making skillful love to the Lady 3rd -

"All right," the Lady 3rd said.  "We consider that the obvious starting point upon which to build further negotiations, is to combine and compromise the utility functions of the three species until we mutually satisfice, providing compensation for all changes demanded.  The Babyeaters must compromise their values to eat their children at a stage where they are not sentient - we might accomplish this most effectively by changing the lifecycle of the children themselves.  We can even give the unsentient children an instinct to flee and scream, and generate simple spoken objections, but prevent their brain from developing self-awareness until after the hunt."

Akon straightened.  That actually sounded - quite compassionate - sort of -

"Our own two species," the Lady 3rd said, "which desire this change of the Babyeaters, will compensate them by adopting Babyeater values, making our own civilization of greater utility in their sight: we will both change to spawn additional infants, and eat most of them at almost the last stage before they become sentient."

The Conference room was frozen.  No one moved.  Even their faces didn't change expression.

Akon's mind suddenly flashed back to those writhing, interpenetrating, visually painful blobs he had seen before.

A cultural translator could change the image, but not the reality.

"It\nis nonetheless probable," continued the Lady 3rd, "that the Babyeaters\nwill not accept this change as it stands; it will be necessary to\nimpose these changes by force.  As for you, humankind, we hope you will\nbe more reasonable.  But both your species, and the Babyeaters, must\nrelinquish bodily pain, embarrassment, and romantic troubles.  In\nexchange, we will change our own values in the direction of yours.  We\nare willing to change to desire pleasure obtained in more complex ways,\nso long as the total amount of our pleasure does not significantly\ndecrease.  We will learn to create art you find pleasing.  We will\nacquire a sense of humor, though we will not lie.  From the perspective\nof humankind and the Babyeaters, our civilization will obtain much\nutility in your sight, which it did not previously possess.  This is\nthe compensation we offer you.  We furthermore request that you accept\nfrom us the gift of untranslatable 2, which we believe will\nenhance, on its own terms, the value that you name 'love'.  This will also enable our kinds to have sex using mechanical aids, which we greatly desire.  At the end\nof this procedure, all three species will satisfice each other's values\nand possess great common ground, upon which we may create a\ncivilization together."

Akon slowly nodded.  It was all quite unbelievably civilized.  It might even be the categorically best general procedure when worlds collided.

The Lady 3rd brightened.  "A nod - is that assent, humankind?"

"It's acknowledgment," Akon said.  "We'll have to think about this."

"I\nunderstand," the Lady 3rd said.  "Please think as swiftly as you can. \nBabyeater children are dying in horrible agony as you think."

"I understand," Akon said in return, and gestured to cut the transmission.

The holo blinked out.

There was a long, terrible silence.

"No."

The Lord Pilot said it.  Cold, flat, absolute.

There was another silence.

"My\nlord," the Xenopsychologist said, very softly, as though afraid the\nmessenger would be torn apart and dismembered, "I do not think they\nwere offering us that option."

"Actually," Akon said, "The Superhappies offered us more than we were going to offer the BabyeatersWe weren't\nexactly thinking about how to compensate them."  It was strange, Akon\nnoticed, his voice was very calm, maybe even deadly calm.  "The\nSuperhappies really are a very fair-minded people.  You get the\nimpression they would have proposed exactly the same solution whether\nor not they happened to hold the upper hand.  We might have just enforced our own will on the Babyeaters and told the Superhappies to take a hike.  If we'd held the upper hand.  But we don't.  And that's that, I guess."

"No!" shouted the Lord Pilot.  "That's not -"

Akon looked at him, still with that deadly calm.


The Lord Pilot was breathing deeply, not as if quieting himself, but as if preparing for battle on some ancient savanna plain that no longer existed.  "They want to turn us into something inhuman.  It - it cannot - we cannot - we must not allow -"

"Either\ngive us a better option or shut up," the Lord Programmer said flatly. \n"The Superhappies are smarter than us, have a technological advantage,\nthink faster, and probably reproduce faster.  We have no hope of\nholding them off militarily.  If our ships flee, the Superhappies will\nsimply follow in faster ships.  There's no way to shut a starline once\nopened, and no way to conceal the fact that it is open -"

"Um," the Ship's Engineer said.

Every eye turned to him.

"Um," the Ship's Engineer said.  "My Lord Administrator, I must report to you in private."

The Ship's Confessor shook his head.  "You could have handled that better, Engineer."

Akon nodded to himself.  It was true.  The Ship's Engineer had already betrayed the fact that a secret existed.  Under the circumstances, easy to deduce that it had come from the Babyeater data.  That was eighty percent of the secret right there.  And if it was relevant to starline physics, that was half of the remainder.

"Engineer," Akon said,\n"since you have already revealed that a secret exists, I suggest you\ntell the full Command Conference.  We need to stay in sync with each\nother.  Two minds are not a committee.  We'll worry later about keeping\nthe secret classified."

The Ship's Engineer hesitated.  "Um, my lord, I suggest that I report to you first, before you decide -"

"There's no time," Akon said.  He pointed to where the holo had been.

"Yes," the Master of Fandom said, "we can always slit our own throats afterward, if the secret is that awful."  The Master of Fandom gave a small laugh -

- then stopped, at the look on the Engineer's face.

"At your will, my lord," the Engineer said.

He drew a deep breath.  "I asked the Lord Programmer to compare any identifiable equations and constants in the Babyeater's scientific archive, to the analogous scientific data of humanity.  Most of the identified analogues were equal, of course.  In some places we have more precise values, as befits our, um, superior technological level.  But one anomaly did turn up: the Babyeater figure for Alderson's Coupling Constant was ten orders of magnitude larger than our own."

The Lord Pilot whistled.  "Stars above, how did they manage to make that mistake -"

Then the Lord Pilot stopped abruptly.

"Alderson's\nCoupling Constant," Akon echoed.  "That's the... coupling between Alderson interactions and the..."

"Between Alderson interactions and the nuclear strong force,"\nthe Lord Pilot said.  He was beginning to smile, rather grimly.  "It\nwas a free parameter in the standard model, and so had to be\nestablished experimentally.  But because the interaction is so\nincredibly... weak... they had to build an enormous Alderson generator to find the value.  The size of a very small moon, just to give us that one number.  Definitely not something you could check at home.  That's the story in the physics textbooks, my lords, my lady."

The Master of Fandom frowned.  "You're saying... the physicists faked the result in order to... fund a huge project...?"  He looked puzzled.

"No,"\nthe Lord Pilot said.  "Not for the love of power.  Engineer, the\nBabyeater value should be testable using our own ship's Alderson drive,\nif the coupling constant is that strong.  This you have done?"

The Ship's Engineer nodded.  "The Babyeater value is correct, my lord."

The Ship's Engineer was pale.  The Lord Pilot was clenching his jaw into a sardonic grin.

"Please\nexplain," Akon said.  "Is the universe going to end in another billion\nyears, or something?  Because if so, the issue can wait -"

"My\nlord," the Ship's Confessor said, "suppose the laws of physics in our\nuniverse had been such that the ancient Greeks could invent the\nequivalent of nuclear weapons from materials just lying around. \nImagine the laws of physics had permitted a way to destroy whole\ncountries with no more difficulty than mixing gunpowder.  History would\nhave looked quite different, would it not?"

Akon nodded, puzzled.  "Well, yes," Akon said.  "It would have been shorter."

"Aren't we lucky that physics didn't happen to turn out that way, my lord?  That in our own time, the laws of physics don't permit cheap, irresistable superweapons?"

Akon furrowed his brow -

"But my lord," said the Ship's Confessor, "do we really know what we think we know?  What different evidence would we see, if things were otherwise?  After all - if you happened to be a physicist, and you happened to notice an easy way to wreak enormous destruction using off-the-shelf hardware - would you run out and tell you?"

"No,"\nAkon said.  A sinking feeling was dawning in the pit of his stomach. \n"You would try to conceal the discovery, and create a cover story that\ndiscouraged anyone else from looking there."

The Lord Pilot emitted a bark that was half laughter, and half something much darker.  "It was perfect.  I'm a Lord Pilot and I never suspected until now."

"So?" Akon said.  "What is it, actually?"

"Um," the Ship's Engineer said.  "Well... basically... to skip over the technical details..."

The Ship's Engineer drew a breath.

"Any ship with a medium-sized Alderson drive can make a star go supernova."

Silence.

"Which might seem like bad news in general," the Lord Pilot said, "but from our perspective, right here, right now, it's just what we need.  A mere nova wouldn't do it.  But blowing up the whole star\n- "  He gave that bitter bark of laughter, again.  "No star, no starlines.  We can make the main\nstar of this system go supernova - not the white dwarf, the companion. \nAnd then the Superhappies won't be able to get to us.  That is, they\nwon't be able to get to the human starline network.  We will be dead.  If you care about tiny irrelevant details like that."  The Lord Pilot looked around the Conference Table.  "Do you care?  The correct answer is no, by the way."

"I\ncare," the Lady Sensory said softly.  "I care a whole lot.  But..." \nShe folded her hands atop the table and bowed her head.

There were nods from around the Table.

The Lord Pilot looked at the Ship's Engineer.  "How long will it take for you to modify the ship's Alderson Drive -"

"It's\ndone," said the Ship's Engineer.  "But... we should, um, wait until the\nSuperhappies are gone, so they don't detect us doing it."

The Lord Pilot nodded.  "Sounds like a plan.  Well, that's a relief.  And here I thought the whole human race was doomed, instead of just us."  He looked inquiringly at Akon.  "My lord?"

Akon rested his head in his hands, suddenly feeling more weary than he had ever felt in his life.  From across the table, the Confessor watched him - or so it seemed; the hood was turned in his direction, at any rate.

I told you so, the Confessor did not say.

"There is a certain problem with your plan," Akon said.

"Such as?" the Lord Pilot said.

"You've forgotten something," Akon said.  "Something terribly important.  Something you once swore you would protect."

Puzzled faces looked at him.

"If you say something bloody ridiculous like 'the safety of the ship' -" said the Lord Pilot.

The Lady Sensory gasped.  "Oh, no," she murmured.  "Oh, no.  The Babyeater children."

The Lord Pilot looked like he had been punched in the stomach.  The grim smiles that had begun to spread around the table were replaced with horror.

"Yes," Akon said.  He looked away from the Conference\nTable.  He didn't want to see the reactions.  "The Superhappies\nwouldn't be able to get to us.  And they couldn't get to the Babyeaters\neither.  Neither could we.  So the Babyeaters would go on eating their\nown children indefinitely.  And the children would go on dying over\ndays in their parents' stomachs.  Indefinitely.  Is the human race\nworth that?"

Akon looked back at the Table, just once.  The Xenopsychologist looked sick, tears were running down the Master's face, and the Lord Pilot looked like he were being slowly torn in half.  The Lord Programmer looked abstracted, the Lady Sensory was covering her face with her hands.  (And the Confessor's face still lay in shadow, beneath the silver hood.)

Akon closed his eyes.  "The Superhappies will transform us into something not human," Akon said.  "No, let's be frank.  Something less than human.  But not all that much less than human.  We'll still have art, and stories, and love.  I've gone entire hours without being in pain, and on the whole, it wasn't that bad an experience -"  The words were sticking in his throat, along with a terrible fear.  "Well.  Anyway.  If remaining whole is that important to us - we have the option.  It's just a question of whether we're willing to pay the price.  Sacrifice the Babyeater children -"

They're a lot like human children, really.

"- to save humanity."

Someone in the darkness was screaming, a thin choked wail that sounded like nothing Akon had ever heard or wanted to hear.  Akon thought it might be the Lord Pilot, or the Master of Fandom, or maybe the Ship's Engineer.  He didn't open his eyes to find out.

There was a chime.

"In-c-c-coming c-call from the Super Happy," the Lady Sensory spit out the words like acid, "ship, my lord."

Akon opened his eyes, and felt, somehow, that he was still in darkness.

"Receive," Akon said.

The Lady 3rd Kiritsugu appeared before him.  Her eyes widened once, as she took in his appearance, but she said nothing.

That's right, my lady, I don't look super happy.

"Humankind, we must have your answer," she said simply.

The Lord Administrator pinched the bridge of his nose, and rubbed his eyes.  Absurd, that one human being should have to answer a question like that.  He wanted to foist off the decision on a committee, a majority vote of the ship, a market - something that wouldn't demand that anyone accept full responsibility.  But a ship run that way didn't work well under ordinary circumstances, and there was no reason to think that things would change under extraordinary circumstances.  He was an Administrator; he had to accept all the advice, integrate it, and decide.  Experiment had shown that no organizational structure of non-Administrators could match what he was trained to do, and motivated to do; anything that worked was simply absorbed into the Administrative weighting of advice.

Sole decision.  Sole responsibility if he got it wrong.  Absolute power and absolute accountability, and never forget the second half, my lord, or you'll be fired the moment you get home.  Screw up indefensibly, my lord, and all your hundred and twenty years of accumulated salary in escrow, producing that lovely steady income, will vanish before you draw another breath.

Oh - and this time the whole human species will pay for it, too.

"I\ncan't speak for all humankind," said the Lord Administrator.  "I can decide, but others may decide differently.  Do you\nunderstand?"

The Lady 3rd made a light gesture, as if it were of no consequence.  "Are you an exceptional case of a human decision-maker?"

Akon tilted his head.  "Not... particularly..."

"Then\nyour decision is strongly indicative of what other human decisionmakers\nwill decide," she said.  "I find it hard to imagine that the options\nexactly balance in your decision mechanism, whatever your inability to\nadmit your own preferences."

Akon slowly nodded.  "Then..."

He drew a breath.

Surely, any species that reached the stars would understand the Prisoner's Dilemma.  If you couldn't cooperate, you'd just destroy your own stars.  A very easy thing to do, as it had turned out.  By that standard, humanity might be something of an impostor next to the Babyeaters and the Superhappies.  Humanity had kept it a secret from itself.  The other two races - just managed not to do the stupid thing.  You wouldn't meet anyone out among the stars, otherwise.

The Superhappies had done their very best to press C.  Cooperated as fairly as they could.

Humanity could only do the same.

"For myself, I am inclined to accept your offer."

He didn't look around to see how anyone had reacted to that.

"There\nmay be other things," Akon added, "that humanity would like to ask of\nyour kind, when our representatives meet.  Your technology is advanced\nbeyond ours."

The Lady 3rd smiled.  "We will, of course, be quite positively inclined toward any such requests.  As I believe our first message to you said - 'we love you and we want you to be super happy'.  Your joy will be shared by us, and we will be pleasured together."

Akon couldn't bring himself to smile.  "Is that all?"

"This\nBabyeater ship," said the Lady 3rd, "the one that did not fire on you,\neven though they saw you first.  Are you therefore allied with them?"

"What?" Akon said without thinking.  "No -"

"My lord!" shouted the Ship's Confessor -

Too late.

"My lord," the Lady Sensory said, her voice breaking, "the Superhappy ship has fired on the Babyeater vessel and destroyed it."

Akon stared at the Lady 3rd in horror.

"I'm\nsorry," the Lady 3rd Kiritsugu said.  "But our negotiations with them\nfailed, as predicted.  Our own ship owed them nothing and promised them\nnothing.  This will make it considerably easier to sweep through their\nstarline network when we return.  Their children would be the ones to\nsuffer from any delay.  You understand, my lord?"

"Yes," Akon said, his voice trembling.  "I understand, my lady kiritsugu."  He wanted to protest, to scream out.  But the war was only beginning, and this - would admittedly save -

"Will you warn them?" the Lady 3rd asked.

"No," Akon said.  It was the truth.

"Transforming the Babyeaters will take precedence over transforming your own species.  We estimate the Babyeater operation may take several weeks of your time to conclude.  We hope you do not mind waiting.  That is all," the Lady 3rd said.

And the holo faded.

"The\nSuperhappy ship is moving out," the Lady Sensory said.  She was crying,\nsilently, as she steadily performed her duty of reporting.  "They're\nheading back toward their starline origin."

"All right," Akon said.  "Take us home.  We need to report on the negotiations -"

There was an inarticulate scream, like that throat was trying to burst the walls of the Conference chamber, as the Lord Pilot burst out of his chair, burst all restraints he had placed on himself, and lunged forward.

But standing behind his target, unnoticed, the Ship's Confessor had produced from his sleeve the tiny stunner - the weapon which he alone on the ship was authorized to use, if he made a determination of outright mental breakdown.  With a sudden motion, the Confessor's arm swept out...

  1. ... and anesthetized the Lord Pilot.
  2. ...  [This option will become the True Ending only if someone suggests it in the comments before the previous ending is posted tomorrow.  Otherwise, the first ending is the True one.]
" } }, { "_id": "bojLBvsYck95gbKNM", "title": "Interlude with the Confessor (4/8)", "pageUrl": "https://www.lesswrong.com/posts/bojLBvsYck95gbKNM/interlude-with-the-confessor-4-8", "postedAt": "2009-02-02T09:11:26.000Z", "baseScore": 94, "voteCount": 81, "commentCount": 97, "url": null, "contents": { "documentId": "bojLBvsYck95gbKNM", "html": "

(Part 4 of 8 in \"Three Worlds Collide\")

\n

The two of them were alone now, in the Conference Chair's Privilege, the huge private room of luxury more suited to a planet than to space.  The Privilege was tiled wall-to-wall and floor-to-ceiling with a most excellent holo of the space surrounding them: the distant stars, the system's sun, the fleeing nova ashes, and the glowing ember of the dwarf star that had siphoned off hydrogen from the main sun until its surface had briefly ignited in a nova flash.  It was like falling through the void.

\n

Akon sat on the edge of the four-poster bed in the center of the room, resting his head in his hands.  Weariness dulled him at the moment when he most needed his wits; it was always like that in crisis, but this was unusually bad.  Under the circumstances, he didn't dare snort a hit of caffeine - it might reorder his priorities.  Humanity had yet to discover the drug that was pure energy, that would improve your thinking without the slightest touch on your emotions and values.

\n

\"I don't know what to think,\" Akon said.

\n

The Ship's Confessor was standing stately nearby, in full robes and hood of silver.  From beneath the hood came the formal response:  \"What seems to be confusing you, my friend?\"

\n

\"Did we go wrong?\" Akon said.  No matter how hard he tried, he couldn't keep the despair out of his voice.  \"Did humanity go down the wrong path?\"

\n

\n

The Confessor was silent a long time.

\n

Akon waited.  This was why he couldn't have talked about the question with anyone else.  Only a Confessor would actually think before answering, if asked a question like that.

\n

\"I've often wondered that myself,\" the Confessor finally said, surprising Akon.  \"There were so many choices, so many branchings in human history - what are the odds we got them all right?\"

\n

The hood turned away, angling in the direction of the Superhappy ship - though it was too far away to be visible, everyone on board the Impossible Possible World knew where it was.  \"There are parts of your question I can't help you with, my lord.  Of all people on this ship, I might be most poorly suited to answer...  But you do understand, my lord, don't you, that neither the Babyeaters nor the Superhappies are evidence that we went wrong?  If you weren't worried before, you shouldn't be any more worried now.  The Babyeaters strive to do the baby-eating thing to do, the Superhappies output the Super Happy thing to do.  None of that tells us anything about the right thing to do.  They are not asking the same question we are - no matter what word of their language the translator links to our 'should'.  If you're confused at all about that, my lord, I might be able to clear it up.\"

\n

\"I know the theory,\" Akon said.  Exhaustion in his voice.  \"They made me study metaethics when I was a little kid, sixteen years old and still in the children's world.  Just so that I would never be tempted to think that God or ontologically basic moral facts or whatever had the right to override my own scruples.\"  Akon slumped a little further.  \"And somehow - none of that really makes a difference when you're looking at the Lady 3rd, and wondering why, when there's a ten-year-old with a broken finger in front of you, screaming and crying, we humans only partially numb the area.\"

\n

The Confessor's hood turned back to look at Akon.  \"You do realize that your brain is literally hardwired to generate error signals when it sees other human-shaped objects stating a different opinion from yourself.  You do realize that, my lord?\"

\n

\"I know,\" Akon said.  \"That, too, we are taught.  Unfortunately, I am also just now realizing that I've only been going along with society all my life, and that I never thought the matter through for myself, until now.\"

\n

A sigh came from that hood.  \"Well... would you prefer a life entirely free of pain and sorrow, having sex all day long?\"

\n

\"Not... really,\" Akon said.

\n

The shoulders of the robe shrugged.  \"You have judged.  What else is there?\"

\n

Akon stared straight at that anonymizing robe, the hood containing a holo of dark mist, a shadow that always obscured the face inside.  The voice was also anonymized - altered slightly, not in any obtrusive way, but you wouldn't know your own Confessor to hear him speak.  Akon had no idea who the Confessor might be, outside that robe.  There were rumors of Confessors who had somehow arranged to be seen in the company of their own secret identity...

\n

Akon drew a breath.  \"You said that you, of all people, could not say whether humanity had gone down the wrong path.  The simple fact of being a Confessor should have no bearing on that; rationalists are also human.  And you told the Lady 3rd that you were too old to make decisions for your species.  Just how old are you... honorable ancestor?\"

\n

There was a silence.

\n

It didn't last long.

\n

As though the decision had already been foreseen, premade and preplanned, the Confessor's hands moved easily upward and drew back the hood - revealing an unblended face, strangely colored skin and shockingly distinctive features.  A face out of forgotten history, which could only have come from a time before the genetic mixing of the 21st century, untouched by DNA insertion or diaspora.

\n

Even though Akon had been half-expecting it, he still gasped out loud.  Less than one in a million:  That was the percentage of the current human population that had been born on Earth before the invention of antiagathics or star travel, five hundred years ago.

\n

\"Congratulations on your guess,\" the Confessor said.  The unaltered voice was only slightly different; but it was stronger, more masculine.

\n

\"Then you were there,\" Akon said.  He felt almost breathless, and tried not to show it.  \"You were alive - all the way back in the days of the initial biotech revolution!  That would have been when humanity first debated whether to go down the Super Happy path.\"

\n

The Confessor nodded.

\n

\"Which side did you argue?\"

\n

The Confessor's face froze for a moment, and then he emitted a brief chuckle, one short laugh.  \"You have entirely the wrong idea about how things were done, back then.  I suppose it's natural.\"

\n

\"I don't understand,\" Akon said.

\n

\"And there are no words that I can speak to make you understand.  It is beyond your imagining.  But you should not imagine that a violent thief whose closest approach to industry was selling uncertified hard drugs - you should not imagine, my lord, my honorable descendant, that I was ever asked to take sides.\"

\n

Akon's eyes slid away from the hot gaze of the unmixed man; there was something wrong about the thread of anger still there in the memory after five hundred years.

\n

\"But time passed,\" the Confessor said, \"time moved forward, and things changed.\"  The eyes were no longer focused on Akon, looking now at something far away.  \"There was an old saying, to the effect that while someone with a single bee sting will pay much for a remedy, to someone with five bee stings, removing just one sting seems less attractive.  That was humanity in the ancient days.  There was so much wrong with the world that the small resources of altruism were splintered among ten thousand urgent charities, and none of it ever seemed to go anywhere.  And yet... and yet...\"

\n

\"There was a threshold crossed somewhere,\" said the Confessor, \"without a single apocalypse to mark it.  Fewer wars.  Less starvation.  Better technology.  The economy kept growing.  People had more resource to spare for charity, and the altruists had fewer and fewer causes to choose from.  They came even to me, in my time, and rescued me.  Earth cleaned itself up, and whenever something threatened to go drastically wrong again, the whole attention of the planet turned in that direction and took care of it.  Humanity finally got its act together.\"

\n

The Confessor worked his jaws as if there were something stuck in his throat.  \"I doubt you can even imagine, my honorable descendant, just how much of an impossible dream that once was.  But I will not call this path mistaken.\"

\n

\"No, I can't imagine,\" Akon said quietly.  \"I once tried to read some of the pre-Dawn Net.  I thought I wanted to know, I really did, but I - just couldn't handle it.  I doubt anyone on this ship can handle it except you.  Honorable ancestor, shouldn't we be asking you how to deal with the Babyeaters and the Superhappies?  You are the only one here who's ever dealt with that level of emergency.\"

\n

\"No,\" said the Confessor, like an absolute order handed down from outside the universe.  \"You are the world that we wanted to create.  Though I can't say we.  That is just a distortion of memory, a romantic gloss on history fading into mist.  I wasn't one of the dreamers, back then.  I was just wrapped up in my private blanket of hurt.  But if my pain meant anything, Akon, it is as part of the long price of a better world than that one.  If you look back at ancient Earth, and are horrified - then that means it was all for something, don't you see?  You are the beautiful and shining children, and this is your world, and you are the ones who must decide what to do with it now.\"

\n

Akon started to speak, to demur -

\n

The Confessor held up a hand.  \"I mean it, my lord Akon.  It is not polite idealism.  We ancients can't steer.  We remember too much disaster.  We're too cautious to dare the bold path forward.  Do you know there was a time when nonconsensual sex was illegal?\"

\n

Akon wasn't sure whether to smile or grimace.  \"The Prohibition, right?  During the first century pre-Net?  I expect everyone was glad to have that law taken off the books.  I can't imagine how boring your sex lives must have been up until then - flirting with a woman, teasing her, leading her on, knowing the whole time that you were perfectly safe because she couldn't take matters into her own hands if you went a little too far -\"

\n

\"You need a history refresher, my Lord Administrator.  At some suitably abstract level.  What I'm trying to tell you - and this is not public knowledge - is that we nearly tried to overthrow your government.\"

\n

\"What?\" said Akon.  \"The Confessors?\"

\n

\"No, us.  The ones who remembered the ancient world.  Back then we still had our hands on a large share of the capital and tremendous influence in the grant committees.  When our children legalized rape, we thought that the Future had gone wrong.\"

\n

Akon's mouth hung open.  \"You were that prude?\"

\n

The Confessor shook his head.  \"There aren't any words,\" the Confessor said, \"there aren't any words at all, by which I ever could explain to you.  No, it wasn't prudery.  It was a memory of disaster.\"

\n

\"Um,\" Akon said.  He was trying not to smile.  \"I'm trying to visualize what sort of disaster could have been caused by too much nonconsensual sex -\"

\n

\"Give it up, my lord,\" the Confessor said.  He was finally laughing, but there was an undertone of pain to it.  \"Without, shall we say, personal experience, you can't possibly imagine, and there's no point in trying.\"

\n

\"Well, out of curiosity - how much did you lose?\"

\n

The Confessor seemed to freeze, for a moment.  \"What?\"

\n

\"How much did you lose in the legislative prediction markets, betting on whatever dreadful outcome you thought would happen?\"

\n

\"You really wouldn't ever understand,\" the Confessor said.  His smile was entirely real, now.  \"But now you know, don't you?  You know, after speaking to me, that I can't ever be allowed to make decisions for humankind.\"

\n

Akon hesitated.  It was odd... he did know, on some gut level.  And he couldn't have explained on any verbal level why.  Just - that hint of wrongness.

\n

\"So now you know,\" the Confessor repeated.  \"And because we do remember so much disaster - and because it is a profession that benefits from being five hundred years old - many of us became Confessors.  Being the voice of pessimism comes easily to us, and few indeed are those among the human kind who must rationally be nudged upward...  We advise, but do not lead.  Debate, but do not decide.  We're going along for your ride, and trying not to be too shocked so that we can be almost as delighted as you.  You might find yourself in a similar situation in five hundred years... if humanity survives this week.\"

\n

\"Ah, yes,\" Akon said dryly.  \"The aliens.  The current problem of discourse.\"

\n

\"Yes.  Have you had any thoughts on the subject?\"

\n

\"Only that I really do wish that humanity had been alone in the universe.\"  Akon's hand suddenly formed a fist and smashed hard against the bed.  \"Fuck it!  I know how the Superhappies felt when they discovered that we and the Babyeaters hadn't 'repaired ourselves'.  You understand what this implies about what the rest of the universe looks like, statistically speaking?  Even if it's just a sample of two?  I'm sure that somewhere out there are likable neighbors.  Just as somewhere out there, if we go far enough through the infinite universe, there's a person who's an exact duplicate of me down to the atomic level.  But every other species we ever actually meet is probably going to be -\"  Akon drew a breath.  \"It wasn't supposed to be like this, damn it!  All three of our species have empathy, we have sympathy, we have a sense of fairness - the Babyeaters even tell stories like we do, they have art.  Shouldn't that be enough?  Wasn't that supposed to be enough?  But all it does is put us into enough of the same reference frame that we can be horrible by each others' standards.\"

\n

\"Don't take this the wrong way,\" the Confessor said, \"but I'm glad that we ran across the Babyeaters.\"

\n

Words stuck in Akon's throat.  \"What?\"

\n

A half-smile twisted up one corner of the Confessor's face.  \"Because if we hadn't run across the Babyeaters, we couldn't possibly rescue the babies, now could we?  Not knowing about their existence wouldn't mean they weren't there.  The Babyeater children would still exist.  They would still die in horrible agony.  We just wouldn't be able to help them.  If we didn't know it wouldn't be our fault, our responsibility - but that's not something you're supposed to optimize for.\"  The Confessor paused.  \"Of course I understand how you feel.  But on this vessel I am humanity's token attempt at sanity, and it is my duty to think certain strange yet logical thoughts.\"

\n

\"And the Superhappies?\" Akon said.  \"The race with superior technology that may decide to exterminate us, or keep us in prison, or take our children away?  Is there any silver lining to that?\"

\n

\"The Superhappies aren't so far from us,\" the Confessor said.  \"We could have gone down the Super Happy path.  We nearly did - you might have trouble imagining just how attractive the absence of pain can sound, under certain circumstances.  In a sense, you could say that I tried to go down that path - though I wasn't a very competent neuroengineer.  If human nature had been only slightly different, we could easily have been within that attractor.  And the Super Happy civilization is not hateful to us, whatever we are to them.  That's good news at least, for how the rest of the universe might look.\"  The Confessor paused.  \"And...\"

\n

\"And?\"

\n

The Confessor's voice became harder.  \"And the Superhappies will rescue the Babyeater children no matter what, I think, even if humanity should fail in the task.  Considering how many Babyeater children are dying, and in what pain - that could outweigh even our own extermination.  Shut up and multiply, as the saying goes.\"

\n

\"Oh, come on!\" Akon said, too surprised to be shocked.  \"If the Superhappies hadn't shown up, we would have - well, we would have done something about the Babyeaters, once we decided what.  We wouldn't have just let the, the -\"

\n

\"Holocaust,\" the Confessor offered.

\n

\"Good word for it.  We wouldn't have just let the Holocaust go on.\"

\n

\"You would be astounded, my lord, at what human beings will just let go on.  Do you realize the expenditure of capital, labor, maybe even human lives required to invade every part of the Babyeater civilization?  To trace out every part of their starline network, push our technological advantage to its limit to build faster ships that can hunt down every Babyeater ship that tries to flee?  Do you realize -\"

\n

\"I'm sorry.  You are simply mistaken as a question of fact.\"  Boy, thought Akon, you don't often get to say that to a Confessor.  \"This is not your birth era, honorable ancestor.  We are the humanity that has its shit together.  If the Superhappies had never come along, humanity would have done whatever it took to rescue the Babyeater children.  You saw the Lord Pilot, the Lady Sensory; they were ready to secede from civilization if that's what it took to get the job done.  And that, honorable ancestor, is how most people would react.\"

\n

\"For a moment,\" said the Confessor.  \"In the moment of first hearing the news.  When talk was cheap.  When they hadn't yet visualized the costs.  But once they did, there would be an uneasy pause, while everyone waited to see if someone else might act first.  And faster than you imagine possible, people would adjust to that state of affairs.  It would no longer sound quite so shocking as it did at first.  Babyeater children are dying horrible, agonizing deaths in their parents' stomachs?  Deplorable, of course, but things have always been that way.  It would no longer be news.  It would all be part of the plan.\"

\n

\"Are you high on something?\" Akon said.  It wasn't the most polite way he could have phrased it, but he couldn't help himself.

\n

The Confessor's voice was as cold and hard as an iron sun, after the universe had burned down to embers.  \"Innocent youth, when you have watched your older brother beaten almost to death before your eyes, and seen how little the police investigate - when you have watched all four of your grandparents wither away like rotten fruit and cease to exist, while you spoke not one word of protest because you thought it was normal - then you may speak to me of what human beings will tolerate.\"

\n

\"I don't believe we would do that,\" Akon said as mildly as possible.

\n

\"Then you fail as a rationalist,\" the Confessor said.  His unhooded head turned toward the false walls, to look out at the accurately represented stars.  \"But I - I will not fail again.\"

\n

\"Well, you're damn right about one thing,\" Akon said.  He was too exhausted to be tactful.  \"You can't ever be allowed to make decisions for the human species.\"

\n

\"I know.  Believe me, I know.  Only youth can Administrate.  That is the pact of immortality.\"

\n

Akon stood up from the bed.  \"Thank you, Confessor.  You have helped me.\"

\n

With an easy, practiced motion, the Confessor slid the hood of his robe over his head, and the stark features vanished into shadow.  \"I have?\" the Confessor said, and his recloaked voice sounded strangely mild, after that earlier masculine power.  \"How?\"

\n

Akon shrugged.  He didn't think he could put it into words.  It had something to do with the terrible vast sweep of Time across the centuries, and so much true change that had already happened, deeper by far than anything he had witnessed in his own lifetime; the requirement of courage to face the future, and the sacrifices that had been made for it; and that not everyone had been saved, once upon a time.

\n

\"I guess you reminded me,\" Akon said, \"that you can't always get everything you want.\"

\n

To be continued...

\n
\n

A reminder:  This is a work of fiction.  In real life, continuing to attempt to have sex with someone after they say 'no' and before they say 'yes', whether or not they offer forceful resistance and whether or not any visible injury occurs, is (in the USA) defined as rape and considered a federal felony.  I agree with and support that this is the correct place for society to draw the line.  Some people have worked out a safeword system in which they explicitly and verbally agree, with each other or on a signed form, that 'no' doesn't mean stop but e.g. 'red' or 'safeword' does mean stop.  I agree with and support this as carving out a safe exception whose existence does not endanger innocent bystanders.  If either of these statements comes to you as a surprise then you should look stuff up.  Thank you and remember, your safeword should be at least 10 characters and contain a mixture of letters and numbers.  We now return you to your regularly scheduled reading.  Yours, the author.

" } }, { "_id": "qCsxiojX7BSLuuBgQ", "title": "The Super Happy People (3/8)", "pageUrl": "https://www.lesswrong.com/posts/qCsxiojX7BSLuuBgQ/the-super-happy-people-3-8", "postedAt": "2009-02-01T08:18:19.000Z", "baseScore": 126, "voteCount": 99, "commentCount": 55, "url": null, "contents": { "documentId": "qCsxiojX7BSLuuBgQ", "html": "

(Part 3 of 8 in \"Three Worlds Collide\")

\n

...The Lady Sensory said, in an unsteady voice, \"My lords, a third ship has jumped into this system.  Not Babyeater, not human.\"

\n

The holo showed a triangle marked with three glowing dots, the human ship and the Babyeater ship and the newcomers.  Then the holo zoomed in, to show -

\n

- the most grotesque spaceship that Akon had ever seen, like a blob festooned with tentacles festooned with acne festooned with small hairs.  Slowly, the tentacles of the ship waved, as if in a gentle breeze; and the acne on the tentacles pulsated, as if preparing to burst.  It was a fractal of ugliness, disgusting at every level of self-similarity.

\n

\"Do the aliens have deflectors up?\" said Akon.

\n

\"My lord,\" said Lady Sensory, \"they don't have any shields raised.  The nova ashes' radiation doesn't seem to bother them.  Whatever material their ship is made from, it's just taking the beating.\"

\n

A silence fell around the table.

\n

\"All right,\" said the Lord Programmer, \"that's impressive.\"

\n

The Lady Sensory jerked, like someone had just slapped her.  \"We - we just got a signal from them in human-standard format, content encoding marked as Modern English text, followed by a holo -\"

\n

\"What?\" said Akon.  \"We haven't transmitted anything to them, how could they possibly -\"

\n

\"Um,\" said the Ship's Engineer.  \"What if these aliens really do have, um, 'big angelic powers'?\"

\n

\"No,\" said the Ship's Confessor.  His hood tilted slightly, as if in wry humor.  \"It is only history repeating itself.\"

\n

\n

\"History repeating itself?\" said the Master of Fandom.  \"You mean that the ship is from an alternate Everett branch of Earth, or that they somehow independently developed ship-to-ship communication protocols exactly similar to our -\"

\n

\"No, you dolt,\" said the Lord Programmer, \"he means that the Babyeaters sent the new aliens a massive data dump, just like they sent us.  Only this time, the Babyeater data dump included all the data that we sent the Babyeaters.  Then the new aliens ran an automatic translation program, like the one we used.\"

\n

\"You gave it away,\" said the Confessor.  There was a slight laugh in his voice.  \"You should have let them figure it out on their own.  One so rarely encounters the apparently supernatural, these days.\"

\n

Akon shook his head, \"Confessor, we don't have time for - never mind.  Sensory, show the text message.\"

\n

The Lady Sensory twitched a finger and -

\n

HOORAY!

\n

WE ARE SO GLAD TO MEET YOU!

\n

THIS IS THE SHIP \"PLAY GAMES FOR LOTS OF FUN\"

\n

(OPERATED BY CHARGED PARTICLE FINANCIAL FIRMS)

\n

WE LOVE YOU AND WE WANT YOU TO BE SUPER HAPPY.

\n

WOULD YOU LIKE TO HAVE SEX?

\n

Slowly, elaborately, Akon's head dropped to the table with a dull thud.  \"Why couldn't we have been alone in the universe?\"

\n

\"No, wait,\" said the Xenopsychologist, \"this makes sense.\"

\n

The Master of Fandom nodded.  \"Seems quite straightforward.\"

\n

\"Do enlighten,\" came a muffled tone from where Akon's head rested on the table.

\n

The Xenopsychologist shrugged.  \"Evolutionarily speaking, reproduction is probably the single best guess for an activity that an evolved intelligence would find pleasurable.  When you look at it from that perspective, my lords, my lady, their message makes perfect sense - it's a universal friendly greeting, like the Pioneer engraving.\"

\n

Akon didn't raise his head.  \"I wonder what these aliens do,\" he said through his shielding arms, \"molest kittens?\"

\n

\"My lord...\" said the Ship's Confessor.  Gentle the tone, but the meaning was very clear.

\n

Akon sighed and straightened up.  \"You said their message included a holo, right?  Let's see it.\"

\n

The main screen turned on.

\n

There was a moment of silence, and then a strange liquid sound as, in unison, everyone around the table gasped in shock, even the Ship's Confessor.

\n

For a time after that, no one spoke.  They were just... watching.

\n

\"Wow,\" said the Lady Sensory finally.  \"That's actually... kind of... hot.\"

\n

Akon tore his eyes away from the writhing human female form, the writhing human male form, and the writhing alien tentacles.  \"But...\" Akon said.  \"But why is she pregnant?\"

\n

\"A better question,\" said the Lord Programmer, \"would be, why are the two of them reciting multiplication tables?\"  He glanced around.  \"What, none of you can read lips?\"

\n

\"Um...\" said the Xenopsychologist.  \"Okay, I've got to admit, I can't even begin to imagine why -\"

\n

Then there was a uniform \"Ewww...\" from around the room.

\n

\"Oh, dear,\" said the Xenopsychologist.  \"Oh, dear, I don't think they understood that part at all.\"

\n

Akon made a cutting gesture, and the holo switched off.

\n

\"Someone should view the rest of it,\" said the Ship's Confessor.  \"It might contain important information.\"

\n

Akon flipped a hand.  \"I don't think we'll run short of volunteers to watch disgusting alien pornography.  Just post it to the ship's 4chan, and check after a few hours to see if anything was modded up to +5 Insightful.\"

\n

\"These aliens,\" said the Master of Fandom slowly, \"composed that pornography within... seconds, it must have been.  We couldn't have done that automatically, could we?\"

\n

The Lord Programmer frowned.  \"No.  I don't, um, think so.  From a corpus of alien pornography, automatically generate a holo they would find interesting?  Um.  It's not a problem that I think anyone's tried to solve yet, and they sure didn't get it perfect the first time, but... no.\"

\n

\"How large an angelic power does that imply?\"

\n

The Lord Programmer traded glances with the Master.  \"Big,\" the Lord Programmer said finally.  \"Maybe even epic.\"

\n

\"Or they think on a much faster timescale,\" said the Confessor softly.  \"There is no law of the universe that their neurons must run at 100Hz.\"

\n

\"My lords,\" said the Lady Sensory, \"we're getting another message; holo with sound, this time.  It's marked as a real-time communication, my lords.\"

\n

Akon swallowed, and his fingers automatically straightened the hood of his formal sweater.  Would the aliens be able to tell if his clothes were sloppy?  He was suddenly very aware that he hadn't checked his lipstick in three hours.  But it wouldn't do to keep the visitors waiting...  \"All right.  Open a channel to them, transmitting only myself.\"

\n

The holo that appeared did nothing to assuage his insecurities.  The man that appeared was perfectly dressed, utterly perfectly dressed, in business casual more intimidating than any formality: crushing superiority without the appearance of effort.  The face was the same way, overwhelmingly handsome without the excuse of makeup; the fashionable slit vest exposed pectoral muscles that seemed optimally sculpted without the bulk that comes of exercise -

\n

\"Superstimulus!\" exclaimed the Ship's Confessor, a sharp warning.

\n

Akon blinked, shrugging off the fog.  Of course the aliens couldn't possibly really look like that.  A holo, only an overoptimized holo.  That was a lesson everyone (every human?) learned before puberty, not to let reality seem diminished by fiction.  As the proverb went, It's bad enough comparing yourself to Isaac Newton without comparing yourself to Kimball Kinnison.

\n

\"Greetings in the name of humanity,\" said Akon.  \"I am Lord Anamaferus Akon, Conference Chair of the Giant Science Vessel Impossible Possible World.  We -\" come in peace didn't seem appropriate with a Babyeater war under discussion, and many other polite pleasantries, like pleased to meet you, suddenly seemed too much like promises and lies, \"- didn't quite understand your last message.\"

\n

\"Our apologies,\" said the perfect figure on screen.  \"You may call me Big Fucking Edward; as for our species...\"  The figure tilted a head in thought.  \"This translation program is not fully stable; even if I said our proper species-name, who knows how it would come out.  I would not wish my kind to forever bear an unaesthetic nickname on account of a translation error.\"

\n

Akon nodded.  \"I understand, Big Fucking Edward.\"

\n

\"Your true language is a format inconceivable to us,\" said the perfect holo.  \"But we do apologize for any untranslatable 1 you may have experienced on account of our welcome transmission; it was automatically generated, before any of us had a chance to apprehend your sexuality.  We do apologize, I say; but who would ever have thought that a species would evolve to find reproduction a painful experience?  For us, childbirth is the greatest pleasure we know; to be prolonged, not hurried.\"

\n

\"Oh,\" said the Lady Sensory in a tone of sudden enlightenment, \"that's why the tentacles were pushing the baby back into -\"

\n

Out of sight of the visual frame, Akon gestured with his hand for Sensory to shut up.  Akon leaned forward.  \"The visual you're currently sending us is, of course, not real.  What do you actually look like? - if the request does not offend.\"

\n

The perfect false man furrowed a brow, puzzled.  \"I don't understand.  You would not be able to apprehend any communicative cues.\"

\n

\"I would still like to see,\" Akon said.  \"I am not sure how to explain it, except that - truth matters to us.\"

\n

The too-beautiful man vanished, and in his place -

\n

Mad brilliant colors, insane hues that for a moment defeated his vision.  Then his mind saw shapes, but not meaning.  In utter silence, huge blobs writhed around supporting bars.  Extrusions protruded fluidly and interpenetrated -

\n

Writhing, twisting, shuddering, pulsating -

\n

And then the false man reappeared.

\n

Akon fought to keep his face from showing distress, but a prickling of sweat appeared on his forehead.  There'd been something jarring about the blobs, even the stable background behind them.  Like looking at an optical illusion designed by sadists.

\n

And - those were the aliens, or so they claimed -

\n

\"I have a question,\" said the false man.  \"I apologize if it causes any distress, but I must know if what our scientists say is correct.  Has your kind really evolved separate information-processing mechanisms for deoxyribose nucleic acid versus electrochemical transmission of synaptic spikes?\"

\n

Akon blinked.  Out of the corner of his eye, he saw figures trading cautious glances around the table.  Akon wasn't sure where this question was leading, but, given that the aliens had already understood enough to ask, it probably wasn't safe to lie...

\n

\"I don't really understand the question's purpose,\" Akon said.  \"Our genes are made of deoxyribose nucleic acid.  Our brains are made of neurons that transmit impulses through electrical and chemical -\"

\n

The fake man's head collapsed to his hands, and he began to bawl like a baby.

\n

Akon's hand signed Help! out of the frame.  But the Xenopsychologist shrugged cluelessly.

\n

This was not going well.

\n

The fake man suddenly unfolded his head from his hands.  His cheeks were depicted as streaked with tears, but the face itself had stopped crying.  \"To wait so long,\" the voice said in a tone of absolute tragedy.  \"To wait so long, and come so far, only to discover that nowhere among the stars is any trace of love.\"

\n

\"Love?\" Akon repeated.  \"Caring for someone else?  Wanting to protect them, to be with them?  If that translated correctly, then 'love' is a very important thing to us.\"

\n

\"But!\" cried the figure in agony, at a volume that made Akon jump.  \"But when you have sex, you do not untranslatable 2!  A fake, a fake, these are only imitation words -\"

\n

\"What is 'untranslatable 2'?\" Akon said; and then, as the figure once again collapsed in inconsolable weeping, wished he hadn't.

\n

\"They asked if our neurons and DNA were separate,\" said the Ship's Engineer.  \"So maybe they have only one system.  Um... in retrospect, that actually seems like the obvious way for evolution to do it.  If you're going to have one kind of information storage for genes, why have an entirely different system for brains?  So -\"

\n

\"They share each other's thoughts when they have sex,\" the Master of Fandom completed.  \"Now there's an old dream.  And they would develop emotions around that, whole patterns of feeling we don't have ourselves...  Huh.  I guess we do lack their analogue of love.\"

\n

\"Probably,\" said the Xenopsychologist quietly, \"sex was their only way of speaking to each other from the beginning.  From before the dawn of their intelligence.  It really does make a lot of sense, evolutionarily.  If you're injecting packets of information anyway -\"

\n

\"Wait a minute,\" said the Lady Sensory, \"then how are they talking to us?\"

\n

\"Of course,\" said the Lord Programmer in a tone of sudden enlightenment.  \"Humanity has always used new communications technologies for pornography.  'The Internet is for porn' - but with them, it must have been the other way around.\"

\n

Akon blinked.  His mind suddenly pictured the blobs, and the tentacles connecting them to each other -

\n

Somewhere on that ship is a blob making love to an avatar that's supposed to represent me.  Maybe a whole Command Orgy.

\n

I've just been cyber-raped.  No, I'm being cyber-raped right now.

\n

And the aliens had crossed who knew how much space, searching for who knew how long, yearning to speak / make love to other minds - only to find -

\n

The fake man suddenly jerked upright and screamed at a volume that whited-out the speakers in the Command Conference.  Everyone jumped; the Master of Fandom let out a small shriek.

\n

What did I do what did I do what did I do -

\n

And then the holo vanished.

\n

Akon gasped for breath and slumped over in his chair.  Adrenaline was still running riot through his system, but he felt utterly exhausted.  He wanted to release his shape and melt into a puddle, a blob like the wrong shapes he'd seen on screen - no, not like that.

\n

\"My lord,\" the Ship's Confessor said softly.  He was now standing alongside, a gentle hand on Akon's shoulder.  \"My lord, are you all right?\"

\n

\"Not really,\" Akon said.  His voice, he was proud to note, was only slighly wobbly.  \"It's too hard, speaking to aliens.  They don't think like you do, and you don't know what you're doing wrong.\"

\n

\"I wonder,\" the Master of Fandom said with artificial lightness, \"if they'll call it 'xenofatigue' and forbid anyone to talk to an alien for longer than five minutes.\"

\n

Akon just nodded.

\n

\"We're getting another signal,\" the Lady Sensory said hesitantly.  \"Holo with sound, another real-time communication.\"

\n

\"Akon, you don't have to -\" said the Master of Fandom.

\n

Akon jerked himself upright, straightened his clothes.  \"I do have to,\" he said.  \"They're aliens, there's no knowing what a delay might...  Just put it through.\"

\n

The first thing the holo showed, in elegant Modern English script, was the message:

\n

The Lady 3rd Kiritsugu
    temporary co-chair of the Gameplayer
        Language Translator version 3
        Cultural Translator version 2

\n

The screen hovered just long enough to be read, then dissipated -

\n

Revealing a pale white lady.

\n

The translator's depiction of the Lady 3rd Kiritsugu was all white and black and grey; not the colorlessness of a greyscale image, but a colored image of a world with little color in it.  Skin the color of the palest human skin that could still be called attractive; not snow white, but pale.  White hair; blouse and bracelets and long dress all in coordinated shades of grey.  That woman could have been called pretty, but there was none of the overstimulating beauty of the fake man who had been shown before.

\n

Her face was styled in the emotion that humans named \"serene\".

\n

\"I and my sisters have now taken command of this vessel,\" said the pale Lady.

\n

Akon blinked.  A mutiny aboard their ship?

\n

And it was back to the alien incomprehensibility, the knife-edged decisions and unpredictable reactions and the deadly fear of screwing up.

\n

\"I am sorry if my words offend,\" Akon said carefully, \"but there is something I wish to know.\"

\n

The Lady 3rd made a slicing gesture with one hand.  \"You cannot offend me.\"  Her face showed mild insult at the suggestion.

\n

\"What has happened aboard your ship, just now?\"

\n

The Lady 3rd replied, \"The crew are disabled by emotional distress.  They have exceeded the bounds of their obligations, and are returning to the ship's Pleasuring Center for reward.  In such a situation I and my two sisters, the kiritsugu of this vessel, assume command.\"

\n

Did I do that?  \"I did not intend for my words to cause you psychological harm.\"

\n

\"You are not responsible,\" the Lady 3rd said.  \"It was the other ones.\"

\n

\"The Babyeaters?\" Akon said without thinking.

\n

\"Babyeaters,\" the Lady 3rd repeated.  \"If that is the name you have given to the third alien species present at this star system, then yes.  The crew, apprehending the nature of the Babyeaters' existence, was incapacitated by their share of the children's suffering.\"

\n

\"I see,\" Akon said.  He felt an odd twitch of shame for humanity, that his own kind could learn of the Babyeaters, and continue functioning with only tears.

\n

The Lady 3rd's gaze grew sharp.  \"What are your intentions regarding the Babyeaters?\"

\n

\"We haven't decided,\" Akon said.  \"We were just discussing it when you arrived, actually.\"

\n

\"What is your current most preferred alternative?\" the Lady 3rd instantly fired back.

\n

Akon helplessly shrugged, palms out.  \"We were just starting the discussion.  All the alternatives suggested seemed unacceptable.\"

\n

\"Which seemed least unacceptable?  What is your current best candidate?\"

\n

Akon shook his head.  \"We haven't designated any.\"

\n

The Lady 3rd's face grew stern, with a hint of puzzlement.  \"You are withholding the information.  Why?  Do you think it will cast you in an unfavorable light?  Then I must take that expectation into account.  Further, you must expect me to take that expectation into account, and so you imply that you expect me to underestimate its severity, even after taking this line of reasoning into account.\"

\n

\"Excuse me,\" the Ship's Confessor said.  His tone was mild, but with a hint of urgency.  \"I believe I should enter this conversation right now.\"

\n

Akon's hand signed agreement to the Lady Sensory.

\n

At once the Lady 3rd's eyes shifted to where the Confessor stood beside Akon.

\n

\"Human beings,\" said the Ship's Confessor, \"cannot designate a 'current best candidate' without psychological consequences.  Human rationalists learn to discuss an issue as thoroughly as possible before suggesting any solutions.  For humans, solutions are sticky in a way that would require detailed cognitive science to explain.  We would not be able to search freely through the solution space, but would be helplessly attracted toward the 'current best' point, once we named it.  Also, any endorsement whatever of a solution that has negative moral features, will cause a human to feel shame - and 'best candidate' would feel like an endorsement.  To avoid feeling that shame, humans must avoid saying which of two bad alternatives is better than the other.\"

\n

Ouch, thought Akon, I never realized how embarrassing that sounds until I heard it explained to an alien.

\n

Apparently the alien was having similar thoughts.  \"So you cannot even tell me which of several alternatives currently seems best, without your minds breaking down?  That sounds quite implausible,\" the Lady 3rd said doubtfully, \"for a species capable of building a spaceship.\"

\n

There was a hint of laughter in the Confessor's voice.  \"We try to overcome our biases.\"

\n

The Lady 3rd's gaze grew more intense.  \"Are you the true decisionmaker of this vessel?\"

\n

\"I am not,\" the Confessor said flatly.  \"I am a Confessor - a human master rationalist; we are sworn to refrain from leadership.\"

\n

\"This meeting will determine the future of all three species,\" said the Lady 3rd.  \"If you have superior competence, you should assume control.\"

\n

Akon's brows furrowed slightly.  Somehow he'd never thought about it in those terms.

\n

The Confessor shook his head.  \"There are reasons beyond my profession why I must not lead.  I am too old.\"

\n

Too old?

\n

Akon put the thought on hold, and looked back at the Lady 3rd.  She had said that all the crew were incapacitated, except her and her two sisters who took charge.  And she had asked the Confessor if he held true command.

\n

\"Are you,\" Akon asked, \"the equivalent of a Confessor for your own kind?\"

\n

\"Almost certainly not,\" replied the Lady 3rd, and -

\n

\"Almost certainly not,\" the Confessor said, almost in the same breath.

\n

There was an eerie kind of unison about it.

\n

\"I am kiritsugu,\" said the Lady 3rd.  \"In the early days of my species there were those who refrained from happiness in order to achieve perfect skill in helping others, using untranslatable 3 to suppress their emotions and acting only on their abstract knowledge of goals.  These were forcibly returned to normality by massive untranslatable 4.  But I descend from their thought-lineage and in emergency invoke the shadow of their untranslatable 5.\"

\n

\"I am a Confessor,\" said the Ship's Confessor, \"the descendant of those in humanity's past who most highly valued truth, who sought systematic methods for finding truth.  But Bayes's Theorem will not be different from one place to another; the laws in their purely mathematical form will be the same, just as any sufficiently advanced species will discover the same periodic table of elements.\"

\n

\"And being universals,\" said the Lady 3rd, \"they bear no distinguishing evidence of their origin.  So you should understand, Lord Akon, that a kiritsugu's purpose is not like that of a Confessor, even if we exploit the same laws.\"

\n

\"But we are similar enough to each other,\" the Confessor concluded, \"to see each other as distorted mirror images.  Heretics, you might say.  She is the ultimate sin forbidden to a Confessor - the exercise of command.\"

\n

\"As you are flawed on my own terms,\" the Lady 3rd concluded, \"one who refuses to help.\"

\n

Everyone else at the Conference table was staring at the alien holo, and at the Confessor, in something approaching outright horror.

\n

The Lady 3rd shifted her gaze back to Akon.  Though it was only a movement of the eyes, there was something of a definite force about the motion, as if the translator was indicating that it stood for something much stronger.  Her voice was given a demanding, compelling quality:  \"What alternatives did your kind generate for dealing with the Babyeaters?  Enumerate them to me.\"

\n

Wipe out their species, keep them in prison forever on suicide watch, ignore them and let the children suffer.

\n

Akon hesitated.  An odd premonition of warning prickled at him.  Why does she need this information?

\n

\"If you do not give me the information,\" the Lady 3rd said, \"I will take into account the fact that you do not wish me to know it.\"

\n

The proverb went through his mind, The most important part of any secret is the fact that the secret exists.

\n

\"All right,\" Akon said.  \"We found unacceptable the alternative of leaving the Babyeaters be.  We found unacceptable the alternative of exterminating them.  We wish to respect their choices and their nature as a species, but their children, who do not share that choice, are unwilling victims; this is unacceptable to us.  We desire to keep the children alive but we do not know what to do with them once they become adult and start wanting to eat their own babies.  Those were all the alternatives we had gotten as far as generating, at the very moment your ship arrived.\"

\n

\"That is all?\" demanded the Lady 3rd.  \"That is the sum of all your thought?  Is this one of the circumstances under which your species sends signals that differ against internal belief, such as 'joking' or 'politeness'?\"

\n

\"No,\" said Akon.  \"I mean, yes.  Yes, that's as far as we got.  No, we're not joking.\"

\n

\"You should understand,\" the Confessor said, \"that this crew, also, experienced a certain distress, interfering with our normal function, on comprehending the Babyeaters.  We are still experiencing it.\"

\n

And you acted to restore order, thought Akon, though not the same way as a kiritsugu...

\n

\"I see,\" the Lady 3rd said.

\n

She fell silent.  There were long seconds during which she sat motionless.

\n

Then, \"Why have you not yet disabled the Babyeater ship?  Your craft possesses the capability of doing so, and you must realize that your purpose now opposes theirs.\"

\n

\"Because,\" Akon said, \"they did not disable our ship.\"

\n

The Lady 3rd nodded.  \"You are symmetrists, then.\"

\n

Again the silence.

\n

Then the holo blurred, and in that blur appeared the words:

\n

Cultural Translator version 3.

\n

The blur resolved itself back into that pale woman; almost the same as before, except that the serenity of her came through with more force.

\n

The Lady 3rd drew herself erect, and took on a look of ritual, as though she were about to recite a composed poem.

\n

\"I now speak,\" the Lady 3rd, \"on behalf of my species, to yours.\"

\n

A chill ran down Akon's spine.  This is too much, this is all too large for me -

\n

\"Humankind!\" the Lady 3rd said, as though addressing someone by name.  \"Humankind, you prefer the absence of pain to its presence.  When my own kind attained to technology, we eliminated the causes of suffering among ourselves.  Bodily pain, embarrassment, and romantic conflicts are no longer permitted to exist.  Humankind, you prefer the presence of pleasure to its absence.  We have devoted ourselves to the intensity of pleasure, of sex and childbirth and untranslatable 2.  Humankind, you prefer truth to lies.  By our nature we do not communicate statements disbelieved, as you do with humor, modesty, and fiction; we have even learned to refrain from withholding information, though we possess that capability.  Humankind, you prefer peace to violence.  Our society is without crime and without war.  Through symmetric sharing and untranslatable 4, we share our joys and are pleasured together.  Our name for ourselves is not expressible in your language.  But to you, humankind, we now name ourselves after the highest values we share: we are the Maximum Fun-Fun Ultra Super Happy People.\"

\n

There were muffled choking sounds from the human Conference table.

\n

\"Um,\" Akon said intelligently.  \"Um... good for you?\"

\n

\"Humankind!  Humankind, you did not likewise repair yourselves when you attained to technology.  We are still unsure if it is somehow a mistake, if you did not think it through, or if your will is truly so different from ours.  For whatever reason, you currently permit the existence of suffering which our species has eliminated.  Bodily pain, embarrassment, and romantic troubles are still known among you.  Your existence, therefore, is shared by us as pain.  Will you, humankind, by your symmetry, remedy this?\"

\n

An electric current of shock and alarm ran through the Conference.  The Lord Pilot glanced significantly at the Ship's Engineer, and the Engineer just as significantly shook his head.  There was nothing they could do against the alien vessel; and their own shields would scarcely help, if they were attacked.

\n

Akon drew in a ragged breath.  He was suddenly distracted, almost to the point of his brain melting, by a sense of futures twisting around these moments: the fate of star systems, the destiny of all humanity being warped and twisted and shaped.

\n

So to you, then, it is humanity that molests kittens.

\n

He should have foreseen this possibility, after the experience of the Babyeaters.  If the Babyeaters' existence was morally unacceptable to humanity, then the next alien species might be intolerable as well - or they might find humanity's existence a horror of unspeakable cruelty.  That was the other side of the coin, even if a human might find it harder to think of it.

\n

Funny.  It doesn't seem that bad from in here...

\n

\"But -\" Akon said, and only then became aware that he was speaking.

\n

\"'But'?\" said the Lady 3rd.  \"Is that your whole reply, humankind?\"  There was a look on her face of something like frustration, even sheer astonishment.

\n

He hadn't planned out this reply in any detail, but -

\n

\"You say that you feel our existence as pain,\" Akon said, \"sharing sympathy with our own suffering.  So you, also, believe that under some circumstances pain is preferable to pleasure.  If you did not hurt when others hurt - would you not feel that you were... less the sort of person you wanted to be?  It is the same with us -\"

\n

But the Lady 3rd was shaking her head.  \"You confuse a high conditional likelihood from your hypothesis to the evidence with a high posterior probability of the hypothesis given the evidence,\" she said, as if that were all one short phrase in her own language.  \"Humankind, we possess a generalized faculty to feel what others feel.  That is the simple, compact relation.  We did not think to complicate that faculty to exclude pain.  We did not then assign dense probability that other sentient species would traverse the stars, and be encountered by us, and yet fail to have repaired themselves.  Should we encounter some future species in circumstances that do not permit its repair, we will modify our empathic faculty to exclude sympathy with pain, and substitute an urge to meliorate pain.\"

\n

\"But -\" Akon said.

\n

Dammit, I'm talking again.

\n

\"But we chose this; this is what we want.\"

\n

\"That matters less to our values than to yours,\" replied the Lady 3rd.  \"But even you, humankind, should see that it is moot.  We are still trying to untangle the twisting references of emotion by which humans might prefer pleasure to pain, and yet endorse complex theories that uphold pain over pleasure.  But we have already determined that your children, humankind, do not share the grounding of these philosophies.  When they incur pain they do not contemplate its meaning, they only call for it to stop.  In their simplicity -\"

\n

They're a lot like our own children, really.

\n

\"- they somewhat resemble the earlier life stages of our own kind.\"

\n

There was an electric quality now about that pale woman, a terrible intensity.  \"And you should understand, humankind, that when a child anywhere suffers pain and calls for it to stop, then we will answer that call if it requires sixty-five thousand five hundred and thirty-six ships.\"

\n

\"We believe, humankind, that you can understand our viewpoint.  Have you options to offer us?\"

\n

To be continued...

" } }, { "_id": "RXQ5MkWkTCvLMGHrp", "title": "War and/or Peace (2/8)", "pageUrl": "https://www.lesswrong.com/posts/RXQ5MkWkTCvLMGHrp/war-and-or-peace-2-8", "postedAt": "2009-01-31T08:42:19.000Z", "baseScore": 90, "voteCount": 76, "commentCount": 65, "url": null, "contents": { "documentId": "RXQ5MkWkTCvLMGHrp", "html": "

(Part 2 of 8 in "Three Worlds Collide")

..."So the question then is - now what?"

The Lord Pilot jumped up, then, his face flushed.  "Put up shields.  Now.  We don't gain anything by leaving them down.  This is madness!"

\n\n

"No," said the Ship's Confessor in professional tones, "not madness."

\n\n

The Pilot slammed his fists on the table.  "We're all going to die!"

\n\n

"They're not as technologically advanced as us," Akon said. \n"Suppose the Babyeaters do decide that we need to be exterminated. \nSuppose they open fire.  Suppose they kill us.  Suppose they follow\nthe starline we opened and find the Huygens system.  Then what?"

\n\n

The Master nodded.  "Even with surprise on their side... no.  They can't actually wipe out the human species.  Not unless they're a lot smarter than they seem to be, and it looks to me like, on average, they're actually a bit dumber than us."  The Master glanced at the Xenopsychologist, who waved her hand in a maybe-gesture.

\n\n

"But if we leave the ship's shields down," Akon said, "we preserve whatever chance we have of a peaceful resolution to this."

\n\n

"Peace," said the Lady Sensory, in a peculiar flat tone.\n

\n

Akon looked at her.

\n\n

"You want peace with the Babyeaters?"

\n\n

"Of course -" said Akon, then stopped short.

\n\n

The Lady Sensory looked around the table.  "And the Babyeater children?  What about them?"

\n\n

The Master of Fandom spoke, his voice uncertain.  "You can't impose human standards on -"

\n\n

With a blur of motion and a sharp crack, the Lady Sensory slapped him.

\n\n

The Ship's Confessor grabbed her arm.  "No."

\n\n

The Lady Sensory stared at the Ship's Confessor.

\n\n

"No," the Confessor repeated.  "No violence.  Only argument.  Violence doesn't distinguish truth from falsehood, my Lady."

\n\n

The Lady Sensory slowly lowered her hand, but not her eyes.

\n\n

"But..." said the Master.  "But, my Lady, if they want to be eaten -"

\n\n

"They don't," said the Xenopsychologist.  "Of course they don't. \nThey run from their parents when the terrible winnowing comes.  The\nBabyeater children aren't emotionally mature - I mean they\ndon't have their adult emotional state yet.  Evolution would take care\nof anyone who wanted to get eaten.  And they're still learning, still\nmaking mistakes, so they don't yet have the instinct to exterminate\nviolators of the group code.  It's a simpler time for them.  They play,\nthey explore, they try out new ideas.  They're..." and the\nXenopsychologist stopped.  "Damn," she said, and turned her head away\nfrom the table, covering her face with her hands.  "Excuse me."  Her\nvoice was unsteady.  "They're a lot like human children, really."

\n\n

"And if they were human children," said the Lady Sensory into\nthe silence, "do you think that, just because the Babyeater species\nwanted to eat human children, that would make it right for them to do\nit?"

\n\n

"No," said the Lord Pilot.

\n\n

"Then what difference does it make?" said the Lady Sensory.

\n\n

"No difference at all," said the Lord Pilot.

\n\n

Akon looked back and forth between the two of them, and saw what was coming, and somehow couldn't speak.

\n\n

"We have to save them," said the Lady Sensory.  "We have to stop this.  No matter what it takes.  We can't let this go on."

\n\n

Couldn't say that one word -

\n\n

The Lord Pilot nodded.  "Destroy their ship.  Preserve our advantage of surprise.  Go back, tell the world, create an overwhelming human army... and pour into the Babyeater starline network.  And rescue the children."

\n\n\n\n\n

"No," Akon said.

\n\n

No?

\n\n

"I know," said the Lord Pilot.  "A lot of Babyeaters will die at\nfirst, but they're killing ten times more children than their whole\nadult population, every year -"

\n\n

"And then what?" said the Master of Fandom.  "What happens when the children grow up?"

\n\n

The Lord Pilot fell silent.

\n\n

The Master of Fandom completed the question.  "Are you going to wipe out their whole race, because their existence is too horrible to be allowed to go on?  I read their stories, and I didn't understand them, but -"  The Master of Fandom swallowed.  "They're not... evil.  Don't you understand?  They're not.  Are you going to punish me, because I don't want to punish them?"

\n\n

"We could..." said the Lord Pilot.  "Um.  We could modify their genes so that they only gave birth to a single child at a time."

\n\n

"No," said the Xenopsychologist.  "They would grow up loathing\nthemselves for being unable to eat babies.  Horrors in their own eyes. \nIt would be kinder just to kill them."

\n\n

"Stop," said Akon.  His voice wasn't strong, wasn't loud, but\neveryone in the room looked at him.  "Stop.  We are not going to fire\non their ship."

\n\n

"Why not?" said the Lord Pilot.  "They -"

\n\n

"They haven't raised shields," said Akon.

\n\n

"Because they know it won't make a difference!" shouted the Pilot.

\n\n

"They didn't fire on us!" shouted Akon.  Then he stopped,\nlowered his voice.  "They didn't fire on us.  Even after they knew that\nwe didn't eat babies.  I am not going to fire on them.  I refuse to do\nit."

\n\n

"You think they're innocent?" demanded the Lady Sensory.  "What if it was human children that were being eaten?"

\n\n

Akon stared out a viewscreen, showing in subdued fires a computer-generated graphic of the nova debris.  He just felt exhausted, now.  "I never understood the Prisoner's Dilemma until this day.  Do you cooperate when you really do want the highest payoff?  When it doesn't even seem fair for both of you to cooperate?  When it seems right to defect even if the other player doesn't?  That's the payoff matrix of the true Prisoner's Dilemma.  But all the rest of the logic - everything about what happens if you both think that way, and both defect - is the same.  Do we want to live in a universe of cooperation or defection?"

\n\n

"But -" said the Lord Pilot.

\n\n

"They know," Akon said, "that they can't wipe us out.  And they can guess what we could do to them.  Their choice isn't\nto fire on us and try to invade afterward!  Their choice is to fire on\nus and run from this star system, hoping that no other ships follow. \nIt's their whole species at stake, against just this one ship.  And\nthey still haven't fired."

\n\n

"They won't fire on us," said the Xenopsychologist, "until they\ndecide that we've defected from the norm.  It would go against their\nsense of... honor, I could call it, but it's much stronger than the\nhuman version -"

\n\n

"No," Akon said.  "Not that much stronger."  He looked\naround, in the silence.  "The Babyeater society has been at peace for\ncenturies.  So too with human society.  Do you want to fire the opening\nshot that brings war back into the universe?  Send us back to the\ndarkness-before-dawn that we only know from reading history books,\nbecause the holos are too horrible to watch?  Are you really going to\npress the button, knowing that?"

\n\n

The Lord Pilot took a deep breath.  "I will.  You will not remain commander of the Impossible, my lord, if the greater conference votes no confidence against you.  And they will, my lord, for the sake of the children."

\n\n

"What," said the Master, "are you going to do with the children?"

\n\n

"We, um, have to do something," said the Ship's Engineer, speaking\nup for the first time.  "I've been, um, looking into what Babyeater\nscience knows about their brain mechanisms.  It's really quite\nfascinating, they mix electrical and mechanical interactions, not the\nsame way our own brain pumps ions, but -"

\n\n

"Get to the point," said Akon.  "Immediately."

\n\n

"The children don't die right away," said the Engineer.  "The brain\nis this nugget of hard crystal, that's really resistant to, um, the\ndigestive mechanisms, much more so than the rest of the body.  So the\nchild's brain is in, um, probably quite a lot of pain, since the whole\nbody has been amputated, and in a state of sensory deprivation, and\nthen the processing slowly gets degraded, and I think the whole process\ngets completed about a month after -"

\n\n

The Lady Sensory threw up.  A few seconds later, so did the Xenopsychologist and the Master.

\n\n

"If human society permits this to go on," said the Lord Pilot, his\nvoice very soft, "I will resign from human society, and I will have\nfriends, and we will visit the Babyeater starline network with an\narmy.  You'll have to kill me to stop me."

\n\n

"And me," said the Lady Sensory through tears.

\n\n

Akon rose from his chair, and leaned forward; a dominating move that he had learned in classrooms, very long ago when he was first studying to be an Administrator.  But most in humanity's promotion-conscious society would not risk direct defiance of an Administrator.  In a hundred years he'd never had his authority really tested, until now...  "I will not permit you to fire on the alien ship.  Humanity will not be first to defect in the Prisoner's Dilemma."

\n\n

The Lord Pilot stood up, and Akon realized, with a sudden jolt, that the Pilot was four inches taller; the thought had never occurred to him before.  The Pilot didn't lean forward, not knowing the trick, or not caring.  The Pilot's eyes were narrow, surrounding facial muscles tensed and tight.

\n\n

"Get out of my way," said the Lord Pilot.

\n\n

Akon opened his mouth, but no words came out.

\n\n

"It is time," said the Lord Pilot, "to see this calamity to its\nend."  Spoken in Archaic English: the words uttered by Thomas Clarkson\nin 1785, at the beginning of the end of slavery.  "I have set my will\nagainst this disaster; I will break it, or it will break me."  Ira\nHoward in 2014.  "I will not share my universe with this shadow," and\nthat was the Lord Pilot, in an anger hotter than the nova's ashes. \n"Help me if you will, or step aside if you lack decisiveness; but do\nnot make yourself my obstacle, or I will burn you down, and any that stand with you -"

\n\n

"HOLD."

\n\n

Every head in the room jerked toward the source of the voice.  Akon had been an Administrator for a hundred years, and a Lord Administrator for twenty.  He had studied all the classic texts, and watched holos of famous crisis situations; nearly all the accumulated knowledge of the Administrative Field was at his beck and call; and he'd never dreamed that a word could be spoken with such absolute force.

\n\n

The Ship's Confessor lowered his voice.  "My Lord Pilot.  I will not permit you to declare your crusade, when you have not said what you are crusading for.  It is not enough to say that you do not like the way things are.  You must say how you will change them, and to what.  You must think all the way to your end.  Will you wipe out the Babyeater race entirely?  Keep their remnants under human rule forever, in despair under our law?  You have not even faced your hard choices, only congratulated yourself on demanding that something be done.  I judge that a violation of sanity, my lord."

\n\n

The Lord Pilot stood rigid.  "What -" his voice broke.  "What do you suggest we do?"

\n\n

"Sit down," said the Ship's Confessor, "keep thinking.  My Lord\nPilot, my Lady Sensory, you are premature.  It is too early for\nhumanity to divide over this issue, when we have known about it for less than twenty-four hours. \nSome rules do not change, whether it is money at stake, or the fate of\nan intelligent species.  We should only, at this stage, be discussing\nthe issue in all its aspects, as thoroughly as possible; we should not\neven be placing solutions on the table, as yet, to polarize us into\ncamps.  You know that, my lords, my ladies, and it does not change."

\n\n

"And after that?" said the Master of Fandom suddenly.  "Then it's okay to split humanity?  You wouldn't object?"

\n\n

The featureless blur concealed within the Confessor's Hood turned to face the Master, and spoke; and those present thought they heard a grim smile, in that voice.  "Oh," said the Confessor, "that would be interfering in politics.  I am charged with guarding sanity, not morality.  If you want to stay together, do not split.  If you want peace, do not start wars.  If you want to avoid genocide, do not wipe out an alien species.  But if these are not your highest values, then you may well end up sacrificing them.  What you are willing to trade off, may end up traded away - be you warned!  But if that is acceptable to you, then so be it.  The Order of Silent Confessors exists in the hope that, so long as humanity is sane, it can make choices in accordance with its true desires.  Thus there is our Order dedicated only to that, and sworn not to interfere in politics.  So you will spend more time discussing this scenario, my lords, my ladies, and only then generate solutions.  And then... you will decide."

\n\n

"Excuse me," said the Lady Sensory.  The Lord Pilot made to speak, and Sensory raised her voice.  "Excuse me, my lords.  The alien ship has just sent us a new transmission.  Two megabytes of text."

\n\n

"Translate and publish," ordered Akon.

\n\n

They all glanced down and aside, waiting for the file to come up.

\n\n

It began:

THE UTTERMOST ABYSS OF JUSTIFICATION
A HYMN OF LOGIC
PURE LIKE STONES AND SACRIFICE
FOR STRUGGLES OF THE YOUNG SLIDING DOWN YOUR THROAT-

Akon looked away, wincing.  He hadn't tried to read much of the alien corpus, and hadn't gotten the knack of reading the "translations" by that damned program.

\n\n

"Would someone," Akon said, "please tell me - tell the conference - what this says?

\n\n

There was a long, stretched moment of silence.

\n\n

Then the Xenopsychologist made a muffled noise that could have been a bark of incredulity, or just a sad laugh.  "Stars beyond," said the Xenopsychologist, "they're trying to persuade us to eat our own children."

\n\n\n\n

"Using," said the Lord Programmer, "what they assert to be arguments\nfrom universal principles, rather than appeals to mere instincts that\nmight differ from star to star."

\n\n

"Such as what, exactly?" said the Ship's Confessor.

\n\n

Akon gave the Confessor an odd look, then quickly glanced away, lest the Confessor catch him at it.  No, the Confessor couldn't be carefully maintaining an open mind about that.  It was just curiosity over what particular failures of reasoning the aliens might exhibit.

\n\n

"Let me search," said the Lord Programmer.  He was silent for a\ntime.  "Ah, here's an example.  They point out that by producing many\noffspring, and winnowing among them, they apply greater selection\npressures to their children than we do.  So if we started producing\nhundreds of babies per couple and then eating almost all of them - I do\nemphasize that this is their suggestion, not mine - evolution would\nproceed faster for us, and we would survive longer in the universe. \nEvolution and survival are universals, so the argument should convince\nanyone."  He gave a sad chuckle.  "Anyone here feel convinced?"

\n\n

"Out of curiosity," said the Lord Pilot, "have they ever tried to\nproduce even more babies - say, thousands instead of hundreds - so they\ncould speed up their evolution even more?"

\n\n

"It ought to be easily within their current capabilities of\nbioengineering," said the Xenopsychologist, "and yet they haven't done\nit.  Still, I don't think we should make the suggestion.""

\n

"Agreed," said Akon.

\n\n

"But humanity uses gamete selection," said the Lady Sensory.  "We aren't evolving any slower.  If anything, choosing among millions of sperm and hundreds of eggs gives us much stronger selection pressures."

\n\n

The Xenopsychologist furrowed her brow.  "I'm not sure we sent them that information in so many words... or they may have just not gotten that far into what we sent them..."

\n\n

"Um, it wouldn't be trivial for them to understand," said the Ship's\nEngineer.  "They don't have separate DNA and proteins, just crystal\npatterns tiling themselves.  The two parents intertwine and stay that\nway for, um, days, nucleating portions of supercooled liquid from their\nown bodies to construct the babies.  The whole, um, baby, is\nconstructed together by both parents.  They don't have separate gametes they could select on."

\n\n

"But," said the Lady Sensory, "couldn't we maybe convince them, to\nwork out some equivalent of gamete selection and try that instead -"

\n\n

"My lady," said the Xenopsychologist.  Her voice, now, was somewhat exasperated.  "They aren't really doing this for the sake of evolution.  They were eating babies millions of years before they knew what evolution was."

\n\n

"Huh, this is interesting," said the Lord Programmer.  "There's\nanother section here where they construct their arguments using appeals\nto historical human authorities."

\n\n

Akon raised his eyebrows.  "And who, exactly, do they quote in support?"

\n\n

"Hold on," said the Lord Programmer.  "This has been run through the\ntranslator twice, English to Babyeater to English, so I need to write a\nprogram to retrieve the original text..."  He was silent a few\nmoments.  "I see.  The argument starts by pointing out how eating your\nchildren is proof of sacrifice and loyalty to the tribe, then they\nquote human authorities on the virtue of sacrifice and loyalty.  And\nancient environmentalist arguments about population control, plus...\noh, dear.  I don't think they've realized that Adolf Hitler is a bad\nguy."

\n\n

"They wouldn't," said the Xenopsychologist.  "Humans put\nHitler in charge of a country, so we must have considered him a preeminent legalist of his age.  And it wouldn't occur to\nthe Babyeaters that Adolf Hitler might be regarded by humans as a bad guy just because he turned segments of his society into lampshades - they have a custom against that nowadays, but they don't really see it as evil. \nIf\nHitler thought that gays had defected against the norm, and tried to\nexterminate them, that looks to a Babyeater like an honest mistake -" \nThe Xenopsychologist looked around the table.  "All\nright, I'll stop there.  But the Babyeaters don't look back on their\nhistory and see obvious villains in positions of power - certainly not\nafter\nthe dawn of science.  Any politician who got to the point of being labeled "bad" would be killed and eaten.  The Babyeaters don't seem to have had\nhumanity's coordination problems.  Or they're just more rational\nvoters.  Take your pick."

Akon was resting his head in his hands.  "You know," Akon said, "I thought about composing a message like this to the Babyeaters.  It was a stupid thought, but I kept turning it over in my mind.  Trying to think about how I might persuade them that eating babies was... not a good thing."

\n\n

The Xenopsychologist grimaced.  "The aliens seem to be even more given to rationalization than we are - which is maybe why their society isn't so rigid as to actually fall apart - but I don't think you could twist them far enough around to believe that eating babies was not a babyeating thing."

\n\n

"And by the same token," Akon said, "I don't think they're\nparticularly likely to persuade us that eating babies is good."  He\nsighed.  "Should we just mark the message as spam?"

\n\n

"One of us should read it, at least," said the Ship's\nConfessor.  "They composed their argument honestly and in all good\nwill.  Humanity also has epistemic standards of honor to uphold."

\n\n

"Yes," said the Master.  "I don't quite understand the Babyeater\nstandards of literature, my lord, but I can tell that this text\nconforms to their style of... not exactly poetry, but... they tried to\nmake it aesthetic as well as persuasive."  The Master's eyes flickered,\nback and forth.  "I think they even made some parts constant in the\ntotal number of light pulses per argumentative unit, like human\nprosody, hoping that our translator would turn it into a human poem. \nAnd... as near as I can judge such things, this took a lot of effort.  I wouldn't be surprised to find that everyone on that ship was staying up all night working on it."

\n\n

"Babyeaters don't sleep," said the Engineer sotto vocce.

\n\n

"Anyway," said the Master.  "If we don't fire on the alien ship - I\nmean, if this work is ever carried back to the Babyeater civilization -\nI suspect the aliens will consider this one of their great historical\nworks of literature, like Hamlet or Fate/stay night -"

\n\n

The Lady Sensory cleared her throat.  She was pale, and trembling.

With a sudden black premonition of doom like a training session in Unrestrained Pessimism, Akon guessed what she would say.

The Lady Sensory said, in an unsteady voice, "My lords, a third ship has jumped into this system.  Not Babyeater, not human."

\n\n

To be continued...

" } }, { "_id": "n5TqCuizyJDfAPjkr", "title": "The Baby-Eating Aliens (1/8)", "pageUrl": "https://www.lesswrong.com/posts/n5TqCuizyJDfAPjkr/the-baby-eating-aliens-1-8", "postedAt": "2009-01-30T12:07:57.000Z", "baseScore": 129, "voteCount": 115, "commentCount": 85, "url": null, "contents": { "documentId": "n5TqCuizyJDfAPjkr", "html": "

(Part 1 of 8 in \"Three Worlds Collide\")

\n

This is a story of an impossible outcome, where AI never worked, molecular nanotechnology never worked, biotechnology only sort-of worked; and yet somehow humanity not only survived, but discovered a way to travel Faster-Than-Light:  The past's Future.

\n

Ships travel through the Alderson starlines, wormholes that appear near stars.  The starline network is dense and unpredictable: more than a billion starlines lead away from Sol, but every world explored is so far away as to be outside the range of Earth's telescopes.  Most colony worlds are located only a single jump away from Earth, which remains the center of the human universe.

\n

From the colony system Huygens, the crew of the Giant Science Vessel Impossible Possible World have set out to investigate a starline that flared up with an unprecedented flux of Alderson force before subsiding.  Arriving, the Impossible discovers the sparkling debris of a recent nova - and -

\n

\"ALIENS!\"

\n

Every head swung toward the Sensory console.  But after that one cryptic outburst, the Lady Sensory didn't even look up from her console: her fingers were frantically twitching commands.

\n

There was a strange moment of silence in the Command Conference while every listener thought the same two thoughts in rapid succession:

\n

Is she nuts?  You can't just say \"Aliens!\", leave it at that, and expect everyone to believe you.  Extraordinary claims require extraordinary evidence -

\n

And then,

\n

They came to look at the nova too!

\n

\n

In a situation like this, it befalls the Conference Chair to speak first.

\n

\"What?  SHIT!\" shouted Akon, who didn't realize until later that his words would be inscribed for all time in the annals of history.  Akon swung around and looked frantically at the main display of the Command Conference.  \"Where are they?\"

\n

The Lady Sensory looked up from her console, fingers still twitching.  \"I - I don't know, I just picked up an incoming high-frequency signal - they're sending us enormous amounts of data, petabytes, I had to clear long-term memory and set up an automatic pipe or risk losing the whole -\"

\n

\"Found them!\" shouted the Lord Programmer.  \"I searched through our Greater Archive and turned up a program to look for anomalous energy sources near local starlines.  It's from way back from the first days of exploration, but I managed to find an emulation program for -\"

\n

\"Just show it!\"  Akon took a deep breath, trying to calm himself.

\n

The main display swiftly scanned across fiery space and settled on... a set of windows into fire, the fire of space shattered by the nova, but then shattered again into triangular shards.

\n

It took Akon a moment to realize that he was looking at an icosahedron of perfect mirrors.

\n

Huh, thought Akon, they're lower-tech than us.  Their own ship, the Impossible, was absorbing the vast quantities of local radiation and dumping it into their Alderson reactor; the mirror-shielding seemed a distinctly inferior solution.  Unless that's what they want us to think...

\n

\"Deflectors!\" shouted the Lord Pilot suddenly.  \"Should I put up deflectors?\"

\n

\"Deflectors?\" said Akon, startled.

\n

The Pilot spoke very rapidly.  \"Sir, we use a self-sustaining Alderson reaction to power our starline jumps and our absorbing shields.  That same reaction could be used to emit a directed beam that would snuff a similar reaction - the aliens are putting out their own Alderson emissions, they could snuff our absorbers at any time, and the nova ashes would roast us instantly - unless I configure a deflector -\"

\n

The Ship's Confessor spoke, then.  \"Have the aliens put up deflectors of their own?\"

\n

Akon's mind seemed to be moving very slowly, and yet the essential thoughts felt, somehow, obvious.  \"Pilot, set up the deflector program but don't activate it until I give the word.  Sensory, drop everything else and tell me whether the aliens have put up their own deflectors.\"

\n

Sensory looked up.  Her fingers twitched only briefly through a few short commands.  Then, \"No,\" she said.

\n

\"Then I think,\" Akon said, though his spine felt frozen solid, \"that we should not be the first to put this interaction on a... combative footing.  The aliens have made a gesture of goodwill by leaving themselves vulnerable.  We must reciprocate.\"  Surely, no species would advance far enough to colonize space without understanding the logic of the Prisoner's Dilemma...

\n

\"You assume too much,\" said the Ship's Confessor.  \"They are aliens.\"

\n

\"Not much goodwill,\" said the Pilot.  His fingers were twitching, not commands, but almost-commands, subvocal thoughts.  \"The aliens' Alderson reaction is weaker than ours by an order of magnitude.  We could break any shield they could put up.  Unless they struck first.  If they leave their deflectors down, they lose nothing, but they invite us to leave our own down -\"

\n

\"If they were going to strike first,\" Akon said, \"they could have struck before we even knew they were here.  But instead they spoke.\"  Surely, oh surely, they understand the Prisoner's Dilemma.

\n

\"Maybe they hope to gain information and then kill us,\" said the Pilot.  \"We have technology they want.  That enormous message - the only way we could send them an equivalent amount of data would be by dumping our entire Local Archive.  They may be hoping that we feel the emotional need to, as you put it, reciprocate -\"

\n

\"Hold on,\" said the Lord Programmer suddenly.  \"I may have managed to translate their language.\"

\n

You could have heard a pin dropping from ten lightyears away.

\n

The Lord Programmer smiled, ever so slightly.  \"You see, that enormous dump of data they sent us - I think that was their Local Archive, or equivalent.  A sizable part of their Net, anyway.  Their text, image, and holo formats are utterly straightforward - either they don't bother compressing anything, or they decompressed it all for us before they sent it.  And here's the thing: back in the Dawn era, when there were multiple human languages, there was this notion that people had of statistical language translation.  Now, the classic method used a known corpus of human-translated text.  But there were successor methods that tried to extend the translation further, by generating semantic skeletons and trying to map the skeletons themselves onto one another.  And there are also ways of automatically looking for similarity between images or holos.  Believe it or not, there was a program already in the Archive for trying to find points of linkage between an alien corpus and a human corpus, and then working out from there to map semantic skeletons... and it runs quickly, since it's designed to work on older computer systems.  So I ran the program, it finished, and it's claiming that it can translate the alien language with 70% confidence.  Could be a total bug, of course.  But the aliens sent a second message that followed their main data dump - short, looks like text-only.  Should I run the translator on that, and put the results on the main display?\"

\n

Akon stared at the Lord Programmer, absorbing this, and finally said, \"Yes.\"

\n

\"All right,\" said the Lord Programmer, \"here goes machine learning,\" and his fingers twitched once.

\n

Over the icosahedron of fractured fire, translucent letters appeared:

\n

THIS VESSEL IS THE OPTIMISM OF THE CENTER OF THE VESSEL PERSON

\n

YOU HAVE NOT KICKED US

\n

THEREFORE YOU EAT BABIES

\n

WHAT IS OURS IS YOURS, WHAT IS YOURS IS OURS

\n

\"Stop that laughing,\" Akon said absentmindedly, \"it's distracting.\"  The Conference Chair pinched the bridge of his nose.  \"All right.  That doesn't seem completely random.  The first line... is them identifying their ship, maybe.  Then the second line says that we haven't opened fire on them, or that they won't open fire on us - something like that.  The third line, I have absolutely no idea.  The fourth... is offering some kind of reciprocal trade -\"  Akon stopped then.  So did the laughter.

\n

\"Would you like to send a return message?\" said the Lord Programmer.

\n

Everyone looked at him.  Then everyone looked at Akon.

\n

Akon thought about that very carefully.  Total silence for a lengthy period of time might not be construed as friendly by a race that had just talked at them for petabytes.

\n

\"All right,\" Akon said.  He cleared his throat.  \"We are still trying to understand your language.  We do not understand well.  We are trying to translate.  We may not translate correctly.  These words may not say what we want them to say.  Please do not be offended.  This is the research vessel named quote Impossible Possible World unquote.  We are pleased to meet you.  We will assemble data for transmission to you, but do not have it ready.\"  Akon paused.  \"Send them that.  If you can make your program translate it three different plausible ways, do that too - it may make it clearer that we're working from an automatic program.\"

\n

The Lord Programmer twitched a few more times, then spoke to the Lady Sensory.  \"Ready.\"

\n

\"Are you really sure this is a good idea?\" said Sensory doubtfully.

\n

Akon sighed.  \"No.  Send the message.\"

\n

For twenty seconds after, there was silence.  Then new words appeared on the display:

\n

WE ARE GLAD TO SEE YOU CANNOT BE DONE

\n

YOU SPEAK LIKE BABY CRUNCH CRUNCH

\n

WITH BIG ANGELIC POWERS

\n

WE WISH TO SUBSCRIBE TO YOUR NEWSLETTER

\n

\"All right,\" Akon said, after a while.  It seemed, on the whole, a positive response.  \"I expect a lot of people are eager to look at the alien corpus.  But I also need volunteers to hunt for texts and holo files in our own Archive.  Which don't betray the engineering principles behind any technology we've had for less than, say,\" Akon thought about the mirror shielding and what it implied, \"a hundred years.  Just showing that it can be done... we won't try to avoid that, but don't give away the science...\"


A day later, the atmosphere at the Command Conference was considerably more tense.

\n

Bewilderment.  Horror.  Fear.  Numbness.  Refusal.  And in the distant background, slowly simmering, a dangerous edge of rising righteous fury.

\n

\"First of all,\" Akon said.  \"First of all.  Does anyone have any plausible hypothesis, any reasonable interpretation of what we know, under which the aliens do not eat their own children?\"

\n

\"There is always the possibility of misunderstanding,\" said the former Lady Psychologist, who was now, suddenly and abruptly, the lead Xenopsychologist of the ship, and therefore of humankind.  \"But unless the entire corpus they sent us is a fiction... no.\"

\n

The alien holos showed tall crystalline insectile creatures, all flat planes and intersecting angles and prismatic refractions, propelling themselves over a field of sharp rocks: the aliens moved like hopping on pogo sticks, bouncing off the ground using projecting limbs that sank into their bodies and then rebounded.  There was a cold beauty to the aliens' crystal bodies and their twisting rotating motions, like screensavers taking on sentient form.

\n

And the aliens bounded over the sharp rocks toward tiny fleeing figures like delicate spherical snowflakes, and grabbed them with pincers, and put them in their mouths.  It was a central theme in holo after holo.

\n

The alien brain was much smaller and denser than a human's.  The alien children, though their bodies were tiny, had full-sized brains.  They could talk.  They protested as they were eaten, in the flickering internal lights that the aliens used to communicate.  They screamed as they vanished into the adult aliens' maws.

\n

Babies, then, had been a mistranslation:   Preteens would have been more accurate.

\n

Still, everyone was calling the aliens Babyeaters.

\n

The children were sentient at the age they were consumed.  The text portions of the corpus were very clear about that.  It was part of the great, the noble, the most holy sacrifice.  And the children were loved: this was part of the central truth of life, that parents could overcome their love and engage in the terrible winnowing.  A parent might spawn a hundred children, and only one in a hundred could survive - for otherwise they would die later, of starvation...

\n

When the Babyeaters had come into their power as a technological species, they could have chosen to modify themselves - to prevent all births but one.

\n

But this they did not choose to do.

\n

For that terrible winnowing was the central truth of life, after all.

\n

The one now called Xenopsychologist had arrived in the Huygens system with the first colonization vessel.  Since then she had spent over one hundred years practicing the profession of psychology, earning the rare title of Lady.  (Most people got fed up and switched careers after no more than fifty, whatever their first intentions.)  Now, after all that time, she was simply the Xenopsychologist, no longer a Lady of her profession.  Being the first and only Xenopsychologist made no difference; the hundred-year rule for true expertise was not a rule that anyone could suspend.  If she was the foremost Xenopsychologist of humankind, then also she was the least, the most foolish and the most ignorant.  She was only an apprentice Xenopsychologist, no matter that there were no masters anywhere.  In theory, her social status should have been too low to be seated at the Conference Table.  In theory.

\n

The Xenopsychologist was two hundred and fifty years old.  She looked much older, now, as she spoke.  \"In terms of evolutionary psychology... I think I understand what happened.  The ancestors of the Babyeaters were a species that gave birth to hundreds of offspring in a spawning season, like Terrestrial fish; what we call r-strategy reproduction.  But the ancestral Babyeaters discovered... crystal-tending, a kind of agriculture... long before humans did.  They were around as smart as chimpanzees, when they started farming.  The adults federated into tribes so they could guard territories and tend crystal.  They adapted to pen up their offspring, to keep them around in herds so they could feed them.  But they couldn't produce enough crystal for all the children.

\n

\"It's a truism in evolutionary biology that group selection can't work among non-relatives.  The exception is if there are enforcement mechanisms, punishment for defectors - then there's no individual advantage to cheating, because you get slapped down.  That's what happened with the Babyeaters.  They didn't restrain their individual reproduction because the more children they put in the tribal pen, the more children of theirs were likely to survive.  But the total production of offspring from the tribal pen was greater, if the children were winnowed down, and the survivors got more individual resources and attention afterward.  That was how their species began to shift toward a k-strategy, an individual survival strategy.  That was the beginning of their culture.

\n

\"And anyone who tried to cheat, to hide away a child, or even go easier on their own children during the winnowing - well, the Babyeaters treated the merciful parents the same way that human tribes treat their traitors.

\n

\"They developed psychological adaptations for enforcing that, their first great group norm.  And those psychological adaptations, those emotions, were reused over the course of their evolution, as the Babyeaters began to adapt to their more complex societies.  Honor, friendship, the good of our tribe - the Babyeaters acquired many of the same moral adaptations as humans, but their brains reused the emotional circuitry of infanticide to do it.

\n

\"The Babyeater word for good means, literally, to eat children.\"

\n

The Xenopsychologist paused there, taking a sip of water.  Pale faces looked back at her from around the table.

\n

The Lady Sensory spoke up.  \"I don't suppose... we could convince them they were wrong about that?\"

\n

The Ship's Confessor was robed and hooded in silver, indicating that he was there formally as a guardian of sanity.  His voice was gentle, though, as he spoke:  \"I don't believe that's how it works.\"

\n

\"Even if you could persuade them, it might not be a good idea,\" said the Xenopsychologist.  \"If you convinced the Babyeaters to see it our way - that they had committed a wrong of that magnitude - there isn't anything in the universe that could stop them from hunting down and exterminating themselves.  They don't have a concept of forgiveness; their only notion of why someone might go easy on a transgressor, is to spare an ally, or use them as a puppet, or being too lazy or cowardly to carry out the vengeance.  The word for wrong is the same symbol as mercy, you see.\"  The Xenopsychologist shook her head.  \"Punishment of non-punishers is very much a way of life, with them.  A Manichaean, dualistic view of reality.  They may have literally believed that we ate babies, at first, just because we didn't open fire on them.\"

\n

Akon frowned.  \"Do you really think so?  Wouldn't that make them... well, a bit unimaginative?\"

\n

The Ship's Master of Fandom was there; he spoke up.  \"I've been trying to read Babyeater literature,\" he said.  \"It's not easy, what with all the translation difficulties,\" and he sent a frown at the Lord Programmer, who returned it.  \"In one sense, we're lucky enough that the Babyeaters have a concept of fiction, let alone science fiction -\"

\n

\"Lucky?\" said the Lord Pilot.  \"You've got to have an imagination to make it to the stars.  The sort of species that wouldn't invent science fiction, probably wouldn't even invent the wheel -\"

\n

\"But,\" interrupted the Master, \"just as most of their science fiction deals with crystalline entities - the closest they come to postulating human anatomy, in any of the stories I've read, was a sort of giant sentient floppy sponge - so too, nearly all of the aliens their explorers meet, eat their own children.  I doubt the authors spent much time questioning the assumption; they didn't want anything so alien that their readers couldn't empathize.  The purpose of storytelling is to stimulate the moral instincts, which is why all stories are fundamentally about personal sacrifice and loss - that's their theory of literature.  Though you can find stories where the wise, benevolent elder aliens explain how the need to control tribal population is the great selective transition, and how no species can possibly evolve sentience and cooperation without eating babies, and even if they did, they would war among themselves and destroy themselves.\"

\n

\"Hm,\" said the Xenopsychologist.  \"The Babyeaters might not be too far wrong - stop staring at me like that, I don't mean it that way.  I'm just saying, the Babyeater civilization didn't have all that many wars.  In fact, they didn't have any wars at all after they finished adopting the scientific method.  It was the great watershed moment in their history - the notion of a reasonable mistake, that you didn't have to kill all the adherents of a mistaken hypothesis.  Not because you were forgiving them, but because they'd made the mistake by reasoning on insufficient data, rather than any inherent flaw.  Up until then, all wars were wars of total extermination - but afterward, the theory was that if a large group of people could all do something wrong, it was probably a reasonable mistake.  Their conceptualization of probability theory - of a formally correct way of manipulating uncertainty - was followed by the dawn of their world peace.\"

\n

\"But then -\" said the Lady Sensory.

\n

\"Of course,\" added the Xenopsychologist, \"anyone who departs from the group norm due to an actual inherent flaw still has to be destroyed.  And not everyone agreed at first that the scientific method was moral - it does seem to have been highly counterintuitive to them - so their last war was the one where the science-users killed off all the nonscientists.  After that, it was world peace.\"

\n

\"Oh,\" said the Lady Sensory softly.

\n

\"Yes,\" the Xenopsychologist said, \"after that, all the Babyeaters banded together as a single super-group that only needed to execute individual heretics.  They now have a strong cultural taboo against wars between tribes.\"

\n

\"Unfortunately,\" said the Master of Fandom, \"that taboo doesn't let us off the hook.  You can also find science fiction stories - though they're much rarer - where the Babyeaters and the aliens don't immediately join together into a greater society.  Stories of horrible monsters who don't eat their children.  Monsters who multiply like bacteria, war among themselves like rats, hate all art and beauty, and destroy everything in their pathway.  Monsters who have to be exterminated down to the last strand of their DNA - er, last nucleating crystal.\"

\n

Akon spoke, then.  \"I accept full responsibility,\" said the Conference Chair, \"for the decision to send the Babyeaters the texts and holos we did.  But the fact remains that they have more than enough information about us to infer that we don't eat our children.  They may be able to guess how we would see them.  And they haven't sent anything to us, since we began transmitting to them.\"

\n

\"So the question then is - now what?\"

\n

To be continued...

" } }, { "_id": "HawFh7RvDM4RyoJ2d", "title": "Three Worlds Collide (0/8)", "pageUrl": "https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8", "postedAt": "2009-01-30T12:07:52.000Z", "baseScore": 102, "voteCount": 83, "commentCount": 97, "url": null, "contents": { "documentId": "HawFh7RvDM4RyoJ2d", "html": "

"The kind of classic fifties-era first-contact story that Jonathan Swift\nmight have written, if Jonathan Swift had had a background in game\ntheory."
        -- (Hugo nominee) Peter Watts, "In Praise of Baby-Eating"

Three Worlds Collide is a story I wrote to illustrate some points on naturalistic metaethics and diverse other issues of rational conduct.  It grew, as such things do, into a small novella.  On publication, it proved widely popular and widely criticized.  Be warned that the story, as it wrote itself, ended up containing some profanity and PG-13 content.

    \n
  1. The Baby-Eating Aliens
  2. War and/or Peace
  3. The Super Happy People
  4. Interlude with the Confessor
  5. Three Worlds Decide
  6. Normal Ending
  7. True Ending
  8. Atonement
\n

\nPDF version here.

" } }, { "_id": "GNnHHmm8EzePmKzPk", "title": "Value is Fragile", "pageUrl": "https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile", "postedAt": "2009-01-29T08:46:30.000Z", "baseScore": 174, "voteCount": 147, "commentCount": 108, "url": null, "contents": { "documentId": "GNnHHmm8EzePmKzPk", "html": "

If I had to pick a single statement that relies on more Overcoming Bias content I've written than any other, that statement would be:

\n

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

\n

\"Well,\" says the one, \"maybe according to your provincial human values, you wouldn't like it.  But I can easily imagine a galactic civilization full of agents who are nothing like you, yet find great value and interest in their own goals.  And that's fine by me.  I'm not so bigoted as you are.  Let the Future go its own way, without trying to bind it forever to the laughably primitive prejudices of a pack of four-limbed Squishy Things -\"

\n

My friend, I have no problem with the thought of a galactic civilization vastly unlike our own... full of strange beings who look nothing like me even in their own imaginations... pursuing pleasures and experiences I can't begin to empathize with... trading in a marketplace of unimaginable goods... allying to pursue incomprehensible objectives... people whose life-stories I could never understand.

\n

That's what the Future looks like if things go right.

\n

If the chain of inheritance from human (meta)morals is broken, the Future does not look like this.  It does not end up magically, delightfully incomprehensible.

\n

With very high probability, it ends up looking dull.  Pointless.  Something whose loss you wouldn't mourn.

\n

Seeing this as obvious, is what requires that immense amount of background explanation.

\n

\n

And I'm not going to iterate through all the points and winding pathways of argument here, because that would take us back through 75% of my Overcoming Bias posts.  Except to remark on how many different things must be known to constrain the final answer.

\n

Consider the incredibly important human value of \"boredom\" - our desire not to do \"the same thing\" over and over and over again.  You can imagine a mind that contained almost the whole specification of human value, almost all the morals and metamorals, but left out just this one thing -

\n

- and so it spent until the end of time, and until the farthest reaches of its light cone, replaying a single highly optimized experience, over and over and over again.

\n

Or imagine a mind that contained almost the whole specification of which sort of feelings humans most enjoy - but not the idea that those feelings had important external referents.  So that the mind just went around feeling like it had made an important discovery, feeling it had found the perfect lover, feeling it had helped a friend, but not actually doing any of those things - having become its own experience machine.  And if the mind pursued those feelings and their referents, it would be a good future and true; but because this one dimension of value was left out, the future became something dull.  Boring and repetitive, because although this mind felt that it was encountering experiences of incredible novelty, this feeling was in no wise true.

\n

Or the converse problem - an agent that contains all the aspects of human value, except the valuation of subjective experience.  So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so.  This, I admit, I don't quite know to be possible.  Consciousness does still confuse me to some extent.  But a universe with no one to bear witness to it, might as well not be.

\n

Value isn't just complicated, it's fragile.  There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null.  A single blow and all value shatters.  Not every single blow will shatter all value - but more than one possible \"single blow\" will do so.

\n

And then there are the long defenses of this proposition, which relies on 75% of my Overcoming Bias posts, so that it would be more than one day's work to summarize all of it.  Maybe some other week.  There are so many branches I've seen that discussion tree go down.

\n

After all - a mind shouldn't just go around having the same experience over and over and over again.  Surely no superintelligence would be so grossly mistaken about the correct action?

\n

Why would any supermind want something so inherently worthless as the feeling of discovery without any real discoveries?  Even if that were its utility function, wouldn't it just notice that its utility function was wrong, and rewrite it?  It's got free will, right?

\n

Surely, at least boredom has to be a universal value.  It evolved in humans because it's valuable, right?  So any mind that doesn't share our dislike of repetition, will fail to thrive in the universe and be eliminated...

\n

If you are familiar with the difference between instrumental values and terminal values, and familiar with the stupidity of natural selection, and you understand how this stupidity manifests in the difference between executing adaptations versus maximizing fitness, and you know this turned instrumental subgoals of reproduction into decontextualized unconditional emotions...

\n

...and you're familiar with how the tradeoff between exploration and exploitation works in Artificial Intelligence...

\n

...then you might be able to see that the human form of boredom that demands a steady trickle of novelty for its own sake, isn't a grand universal, but just a particular algorithm that evolution coughed out into us.  And you might be able to see how the vast majority of possible expected utility maximizers would only engage in just so much efficient exploration, and spend most of their time exploiting the best alternative found so far, over and over and over.
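To make the exploration/exploitation point concrete, here is a minimal illustrative sketch (not from the original post; the agent, payoffs, and function names are all hypothetical): a simple expected-utility maximizer over a fixed set of options that explores for a bounded number of steps, then repeats the single best option it has found, over and over.

```python
# Illustrative sketch only: a maximizer with no terminal value for novelty
# explores briefly, then exploits the best-estimated option indefinitely.
import random

def run_agent(true_payoffs, exploration_budget=100, horizon=10_000, seed=0):
    rng = random.Random(seed)
    estimates = [0.0] * len(true_payoffs)
    counts = [0] * len(true_payoffs)
    history = []
    for t in range(horizon):
        if t < exploration_budget:
            # Bounded "efficient" exploration: sample each option in turn.
            choice = t % len(true_payoffs)
        else:
            # Afterward: pure exploitation of the best estimate, forever.
            choice = max(range(len(true_payoffs)), key=lambda i: estimates[i])
        reward = true_payoffs[choice] + rng.gauss(0, 0.1)  # noisy payoff
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
        history.append(choice)
    return history

if __name__ == "__main__":
    history = run_agent(true_payoffs=[0.2, 0.9, 0.5])
    # Once the exploration budget is spent, the agent picks the same option
    # essentially every step; novelty only ever served as an instrumental means.
    print("distinct choices in last 1000 steps:", len(set(history[-1000:])))
```

The point of the sketch is only that nothing in the expected-utility machinery itself demands continued novelty; once estimation is good enough, repetition is optimal.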

\n

That's a lot of background knowledge, though.

\n

And so on and so on and so on through 75% of my posts on Overcoming Bias, and many chains of fallacy and counter-explanation.  Some week I may try to write up the whole diagram.  But for now I'm going to assume that you've read the arguments, and just deliver the conclusion:

\n

We can't relax our grip on the future - let go of the steering wheel - and still end up with anything of value.

\n

And those who think we can -

\n

- they're trying to be cosmopolitan.  I understand that.  I read those same science fiction books as a kid:  The provincial villains who enslave aliens for the crime of not looking just like humans.  The provincial villains who enslave helpless AIs in durance vile on the assumption that silicon can't be sentient.  And the cosmopolitan heroes who understand that minds don't have to be just like us to be embraced as valuable -

\n

I read those books.  I once believed them.  But the beauty that jumps out of one box, is not jumping out of all boxes.  (This being the moral of the sequence on Lawful Creativity.)  If you leave behind all order, what is left is not the perfect answer, what is left is perfect noise.  Sometimes you have to abandon an old design rule to build a better mousetrap, but that's not the same as giving up all design rules and collecting wood shavings into a heap, with every pattern of wood as good as any other.  The old rule is always abandoned at the behest of some higher rule, some higher criterion of value that governs.

\n

If you loose the grip of human morals and metamorals - the result is not mysterious and alien and beautiful by the standards of human value.  It is moral noise, a universe tiled with paperclips.  To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone.

\n

Relax the grip of human value upon the universe, and it will end up seriously valueless.  Not, strange and alien and wonderful, shocking and terrifying and beautiful beyond all human imagination.  Just, tiled with paperclips.

\n

It's only some humans, you see, who have this idea of embracing manifold varieties of mind - of wanting the Future to be something greater than the past - of being not bound to our past selves - of trying to change and move forward.

\n

A paperclip maximizer just chooses whichever action leads to the greatest number of paperclips.

\n

No free lunch.  You want a wonderful and mysterious universe?  That's your value.  You work to create that value.  Let that value exert its force through you who represents it, let it make decisions in you to shape the future.  And maybe you shall indeed obtain a wonderful and mysterious universe.

\n

No free lunch.  Valuable things appear because a goal system that values them takes action to create them.  Paperclips don't materialize from nowhere for a paperclip maximizer.  And a wonderfully alien and mysterious Future will not materialize from nowhere for us humans, if our values that prefer it are physically obliterated - or even disturbed in the wrong dimension.  Then there is nothing left in the universe that works to make the universe valuable.

\n

You do have values, even when you're trying to be \"cosmopolitan\", trying to display a properly virtuous appreciation of alien minds.  Your values are then faded further into the invisible background - they are less obviously human.  Your brain probably won't even generate an alternative so awful that it would wake you up, make you say \"No!  Something went wrong!\" even at your most cosmopolitan.  E.g. \"a nonsentient optimizer absorbs all matter in its future light cone and tiles the universe with paperclips\".  You'll just imagine strange alien worlds to appreciate.

\n

Trying to be \"cosmopolitan\" - to be a citizen of the cosmos - just strips off a surface veneer of goals that seem obviously \"human\".

\n

But if you wouldn't like the Future tiled over with paperclips, and you would prefer a civilization of...

\n

...sentient beings...

\n

...with enjoyable experiences...

\n

...that aren't the same experience over and over again...

\n

...and are bound to something besides just being a sequence of internal pleasurable feelings...

\n

...learning, discovering, freely choosing...

\n

...well, I've just been through the posts on Fun Theory that went into some of the hidden details on those short English words.

\n

Values that you might praise as cosmopolitan or universal or fundamental or obvious common sense, are represented in your brain just as much as those values that you might dismiss as merely human.  Those values come of the long history of humanity, and the morally miraculous stupidity of evolution that created us.  (And once I finally came to that realization, I felt less ashamed of values that seemed 'provincial' - but that's another matter.)

\n

These values do not emerge in all possible minds.  They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer.

\n

Touch too hard in the wrong dimension, and the physical representation of those values will shatter - and not come back, for there will be nothing left to want to bring it back.

\n

And the referent of those values - a worthwhile universe - would no longer have any physical reason to come into being.

\n

Let go of the steering wheel, and the Future crashes.

" } }, { "_id": "iRop8WinYW4B8KYiK", "title": "Rationality Quotes 25", "pageUrl": "https://www.lesswrong.com/posts/iRop8WinYW4B8KYiK/rationality-quotes-25", "postedAt": "2009-01-28T08:39:46.000Z", "baseScore": 14, "voteCount": 13, "commentCount": 12, "url": null, "contents": { "documentId": "iRop8WinYW4B8KYiK", "html": "

\"People want to think there is some huge conspiracy run by evil geniuses.  The reality is actually much more horrifying.  The people running the show aren't evil geniuses.  They are just as stupid as the rest of us.\"
        -- Vaksel

\n

\"Rule of thumb:  Be skeptical of things you learned before you could read.  E.g., religion.\"
        -- Ben Casnocha

\n

\"Truth is not always popular, but it is always right.\"
        -- Anon

\n

\"Computer programming is omnipotence without omniscience.\"
        -- Prospero

\n

\"The world breaks everyone and afterward many are strong in the broken places. But those that will not break it kills. It kills the very good, and the very gentle, and the very brave, impartially.\"
        -- Ernest Hemingway, A Farewell to Arms

\n

\"Those who take delight in their own might are merely pretenders to power.  The true warrior of fate needs no adoration or fear, no tricks or overwhelming effort; he need not be stronger or smarter or innately more capable than everyone else; he need not even admit it to himself.  All he needs to do is to stand there, at that moment when all hope is dead, and look upon the abyss without flinching.\"
        -- Shinji and Warhammer40k

\n

\"Though here at journey's end I lie
 in darkness buried deep,
 beyond all towers strong and high,
 beyond all mountains steep,
 above all shadows rides the Sun
 and Stars forever dwell:
 I will not say the Day is done,
 nor bid the Stars farewell.\"
        -- Banazîr Galbasi (Samwise Gamgee), The Return of the King

" } }, { "_id": "oD9fmceuJ4AFdSQ9q", "title": "OB Status Update", "pageUrl": "https://www.lesswrong.com/posts/oD9fmceuJ4AFdSQ9q/ob-status-update", "postedAt": "2009-01-27T09:34:03.000Z", "baseScore": 6, "voteCount": 5, "commentCount": 37, "url": null, "contents": { "documentId": "oD9fmceuJ4AFdSQ9q", "html": "

Followup to: Whither OB?

Overcoming Bias currently plans to transition to a new format, including a new and more open sister site, tentatively entitled \"Less Wrong\".  The new site will be built out of Reddit's source code, but you won't be limited to posting links - the new site will include a WYSIWYG HTML editor as well.  All posts will appear on Less Wrong and will be voted up or down by the readers.  Posts approved by the chief editors will be \"promoted\" to Overcoming Bias, which will serve as the front page of Less Wrong.
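To make the described posting and promotion workflow concrete, here is a minimal illustrative sketch (the names are hypothetical and this is not the actual Reddit-derived code): every post appears on Less Wrong and is ranked by reader votes, and only editor-promoted posts surface on the Overcoming Bias front page.

```python
# Purely illustrative model of the described workflow; not the real codebase.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    author: str
    votes: int = 0
    promoted: bool = False  # set by a chief editor

def less_wrong_feed(posts):
    # Every post appears on Less Wrong, sorted by reader votes.
    return sorted(posts, key=lambda p: p.votes, reverse=True)

def overcoming_bias_front_page(posts):
    # Only editor-promoted posts appear on the OB front page.
    return [p for p in less_wrong_feed(posts) if p.promoted]

if __name__ == "__main__":
    posts = [Post("A promoted essay", "author_a", votes=42, promoted=True),
             Post("An unpromoted draft", "author_b", votes=3)]
    print([p.title for p in overcoming_bias_front_page(posts)])
```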

Once the initial site is up and running, the next items on the agenda include much better support for reading through sequences.  And I'll organize more of my old posts (and perhaps some of Robin's) into sequences.

Threaded comments and comment voting/sorting are on the way.  Anonymous commenting may go away for a while (it's not built into the Reddit codebase), but since I suspect it's important for attracting new participation, I do hope to bring back non-registration commenting eventually.  On the plus side, you'll only have to solve a captcha once when signing up, not every time you post.  And the 50-comment limit per page is on the way out as well.

Timeframe... theoretically, one to two weeks of work left.

I've reserved a final sequence on Building Rationalist Communities to seed Less Wrong.  Also, I doubt I could stop blogging completely even if I tried.  I don't think Robin plans to stop completely either.  And it's worth remembering that OB's most popular post ever was a reader contribution.  So don't touch that dial, don't unsubscribe that RSS feed.  Exciting changes on the way.\n

\n

PS:  Changes also underway at the Singularity Institute, my host organization.  Although no formal announcement has been made yet, Tyler Emerson is departing as Executive Director and Michael Vassar is coming on board as President.  I mention this now because Michael Vassar is usually based out of the East Coast, but is in the Bay Area for only the next couple of days, and any Bay Area folks seriously interested in getting significantly involved might want to take this opportunity to talk to him and/or me.  Email michael no space aruna at yahoo dot com.

" } }, { "_id": "qZJBighPrnv9bSqTZ", "title": "31 Laws of Fun", "pageUrl": "https://www.lesswrong.com/posts/qZJBighPrnv9bSqTZ/31-laws-of-fun", "postedAt": "2009-01-26T10:13:14.000Z", "baseScore": 104, "voteCount": 89, "commentCount": 36, "url": null, "contents": { "documentId": "qZJBighPrnv9bSqTZ", "html": "

So this is Utopia, is it?  Well
I beg your pardon, I thought it was Hell.
        -- Sir Max Beerbohm, verse entitled
        In a Copy of More's (or Shaw's or Wells's or Plato's or Anybody's) Utopia

This is a shorter summary of the Fun Theory Sequence with all the background theory left out - just the compressed advice to the would-be author or futurist who wishes to imagine a world where people might actually want to live:

    \n
  1. Think of a typical day in the life of someone who's been adapting to Utopia for a while.  Don't anchor on the first moment of "hearing the good news".  Heaven's "You'll never have to work again, and the streets are paved with gold!" sounds like good news to a tired and poverty-stricken peasant, but two months later it might not be so much fun.  (Prolegomena to a Theory of Fun.)
  2. Beware of packing your Utopia with things you think people should do that aren't actually fun.  Again, consider Christian Heaven: singing hymns doesn't sound like loads of endless fun, but you're supposed to enjoy praying, so no one can point this out.  (Prolegomena to a Theory of Fun.)
  3. Making a video game easier doesn't always improve it.  The same holds true of a life.  Think in terms of clearing out low-quality drudgery to make way for high-quality challenge, rather than eliminating work.  (High Challenge.)
  4. Life should contain novelty - experiences you haven't encountered before, preferably teaching you something you didn't already know.  If there isn't a sufficient supply of novelty (relative to the speed at which you generalize), you'll get bored.  (Complex Novelty.)
  5. People should get smarter at a rate sufficient to integrate their old experiences, but not so much smarter so fast that they can't integrate their new intelligence.  Being smarter means you get bored faster, but you can also tackle new challenges you couldn't understand before.  (Complex Novelty.)
  6. People should live in a world that fully engages their senses, their bodies, and their brains.  This means either that the world resembles the ancestral savanna more than say a windowless office; or alternatively, that brains and bodies have changed to be fully engaged by different kinds of complicated challenges and environments.  (Fictions intended to entertain a human audience should concentrate primarily on the former option.)  (Sensual Experience.)
  7. Timothy Ferriss:  "What is the opposite of happiness?  Sadness?  No.  Just as love and hate are two sides of the same coin, so are happiness and sadness...  The opposite of love is indifference, and the opposite of happiness is - here's the clincher - boredom...  The question you should be asking isn't 'What do I want?' or 'What are my goals?' but 'What would excite me?'...  Living like a millionaire requires doing interesting things and not just owning enviable things."  (Existential Angst Factory.)
  8. Any particular individual's life should get better and better over time.  (Continuous Improvement.)
  9. You should not know exactly what improvements the future holds, although you should look forward to finding out.  The actual event should come as a pleasant surprise.  (Justified Expectation of Pleasant Surprises.)
  10. Our hunter-gatherer ancestors strung their own bows, wove their own baskets and whittled their own flutes; then they did their own hunting, their own gathering and played their own music.  Futuristic Utopias are often depicted as offering more and more neat buttons that do less and less comprehensible things for you.  Ask not what interesting things Utopia can do for people; ask rather what interesting things the inhabitants could do for themselves - with their own brains, their own bodies, or tools they understand how to build.  (Living By Your Own Strength.)
  11. Living in Eutopia should make people stronger, not weaker, over time.  The inhabitants should appear more formidable than the people of our own world, not less.  (Living By Your Own Strength; see also, Tsuyoku Naritai.)
  12. Life should not be broken up into a series of disconnected episodes with no long-term consequences.  No matter how sensual or complex, playing one really great video game after another, does not make a life story.  (Emotional Involvement.)
  13. People should make their own destinies; their lives should not be choreographed to the point that they no longer need to imagine, plan and navigate their own futures.  Citizens should not be the pawns of more powerful gods, still less their sculpted material.  One simple solution would be to have the world work by stable rules that are the same for everyone, where the burden of Eutopia is carried by a good initial choice of rules, rather than by any optimization pressure applied to individual lives.  (Free to Optimize.)
  14. Human minds should not have to play on a level field with vastly superior entities.  Most people don't like being overshadowed.  Gods destroy a human protagonist's "main character" status; this is undesirable in fiction and probably in real life.  (E.g.:  C. S. Lewis's Narnia, Iain Banks's Culture.)  Either change people's emotional makeup so that they don't mind being unnecessary, or keep the gods way off their playing field.  Fictional stories intended for human audiences cannot do the former.  (And in real life, you probably can have powerful AIs that are neither sentient nor meddlesome.  See the main post and its prerequisites.)  (Amputation of Destiny.)
  15. Trying to compete on a single flat playing field with six billion other humans also creates problems.  Our ancestors lived in bands of around 50 people.  Today the media is constantly bombarding us with news of exceptionally rich and pretty people as if they lived next door to us; and very few people get a chance to be the best at any specialty.  (Dunbar's Function.)
  16. Our ancestors also had some degree of genuine control over their band's politics.  Contrast to modern nation-states where almost no one knows the President on a personal level or could argue Congress out of a bad decision.  (Though that doesn't stop people from arguing as loudly as if they still lived in a 50-person band.)  (Dunbar's Function.)
  17. Offering people more options is not always helping them (especially if the option is something they couldn't do for themselves).  Losses are more painful than the corresponding gains, so if choices are different along many dimensions and only one choice can be taken, people tend to focus on the loss of the road not taken.  Offering a road that bypasses a challenge makes the challenge feel less real, even if the cheat is diligently refused.  It is also a sad fact that humans predictably make certain kinds of mistakes.  Don't assume that building more choice into your Utopia is necessarily an improvement because "people can always just say no".  This sounds reassuring to an outside reader - "Don't worry, you'll decide!  You trust yourself, right?" - but might not be much fun to actually live with.  (Harmful Options.)
  18. Extreme example of the above: being constantly offered huge temptations that are incredibly dangerous - a completely realistic virtual world, or very addictive and pleasurable drugs.  You can never allow yourself a single moment of willpower failure over your whole life.  (E.g.:  John C. Wright's Golden Oecumene.)  (Devil's Offers.)
  19. Conversely, when people are grown strong enough to shoot off their feet without external help, stopping them may be too much interference.  Hopefully they'll then be smart enough not to:  By the time they can build the gun, they'll know what happens if they pull the gun, and won't need a smothering safety blanket.  If that's the theory, then dangerous options need correspondingly difficult locks.  (Devil's Offers.)
  20. Telling people truths they haven't yet figured out for themselves, is not always helping them.  (Joy in Discovery.)
  21. Brains are some of the most complicated things in the world.  Thus, other humans (other minds) are some of the most complicated things we deal with.  For us, this interaction has a unique character because of the sympathy we feel for others - the way that our brain tends to align with their brain - rather than our brain just treating other brains as big complicated machines with levers to pull.  Reducing the need for people to interact with other people reduces the complexity of human existence; this is a step in the wrong direction.  For example, resist the temptation to simplify people's lives by offering them artificially perfect sexual/romantic partners.  (Interpersonal Entanglement.)
  22. But admittedly, humanity does have a statistical sex problem: the male distribution of attributes doesn't harmonize with the female distribution of desires, or vice versa.  Not everything in Eutopia should be easy - but it shouldn't be pointlessly, unresolvably frustrating either.  (This is a general principle.)  So imagine nudging the distributions to make the problem solvable - rather than waving a magic wand and solving everything instantly.  (Interpersonal Entanglement.)
  23. In general, tampering with brains, minds, emotions, and personalities is way more fraught on every possible level of ethics and difficulty, than tampering with bodies and environments.  Always ask what you can do by messing with the environment before you imagine messing with minds.  Then prefer small cognitive changes to big ones.  You're not just outrunning your human audience, you're outrunning your own imagination.  (Changing Emotions.)
  24. In this present world, there is an imbalance between pleasure and pain.  An unskilled torturer with simple tools can create worse pain in thirty seconds, than an extremely skilled sexual artist can create pleasure in thirty minutes.  One response would be to remedy the imbalance - to have the world contain more joy than sorrow.  Pain might exist, but not pointless endless unendurable pain.  Mistakes would have more proportionate penalties:  You might touch a hot stove and end up with a painful blister; but not glance away for two seconds and spend the rest of your life in a wheelchair.  The people would be stronger, less exhausted.  This path would eliminate mind-destroying pain, and make pleasure more abundant.  Another path would eliminate pain entirely.  Whatever the relative merits of the real-world proposals, fictional stories cannot take the second path.  (Serious Stories.)
  25. George Orwell once observed that Utopias are chiefly concerned with avoiding fuss.  Don't be afraid to write a loud Eutopia that might wake up the neighbors.  (Eutopia is Scary; George Orwell's Why Socialists Don't Believe in Fun.)
  26. George Orwell observed that "The inhabitants of perfect universes seem to have no spontaneous gaiety and are usually somewhat repulsive into the bargain."  If you write a story and your characters turn out like this, it probably reflects some much deeper flaw that can't be fixed by having the State hire a few clowns.  (George Orwell's Why Socialists Don't Believe in Fun.)
  27. Ben Franklin, yanked into our own era, would be surprised and delighted by some aspects of his Future.  Other aspects would horrify, disgust, and frighten him; and this is not because our world has gone wrong, but because it has improved relative to his time.  Relatively few things would have gone just as Ben Franklin expected.  If you imagine a world which your imagination finds familiar and comforting, it will inspire few others, and the whole exercise will lack integrity.  Try to conceive of a genuinely better world in which you, yourself, would be shocked (at least at first) and out of place (at least at first).  (Eutopia is Scary.)
  28. Utopia and Dystopia are two sides of the same coin; both just confirm the moral sensibilities you started with.  Whether the world is a libertarian utopia of government non-interference, or a hellish dystopia of government intrusion and regulation, you get to say "I was right all along."  Don't just imagine something that conforms to your existing ideals of government, relationships, politics, work, or daily life.  Find the better world that zogs instead of zigging or zagging.  (To safeguard your sensibilities, you can tell yourself it's just an arguably better world but isn't really better than your favorite standard Utopia... but you'll know you're really doing it right if you find your ideals changing.)  (Building Weirdtopia.)
  29. If your Utopia still seems like an endless gloomy drudgery of existential angst no matter how much you try to brighten it, there's at least one major problem that you're entirely failing to focus on.  (Existential Angst Factory.)
  30. 'Tis a sad mind that cares about nothing except itself.  In the modern-day world, if an altruist looks around, their eye is caught by large groups of people in desperate jeopardy.  People in a better world will not see this:  A true Eutopia will run low on victims to be rescued.  This doesn't imply that the inhabitants look around outside themselves and see nothing.  They may care about friends and family, truth and freedom, common projects; outside minds, shared goals, and high ideals.  (Higher Purpose.)
  31. Still, a story that confronts the challenge of Eutopia should not just have the convenient plot of "The Dark Lord Sauron is about to invade and kill everybody".  The would-be author will have to find something slightly less awful for his characters to legitimately care about.  This is part of the challenge of showing that human progress is not the end of human stories, and that people not in imminent danger of death can still lead interesting lives.  Those of you interested in confronting lethal planetary-sized dangers should focus on present-day real life.  (Higher Purpose.)
\n

The simultaneous solution of all these design requirements is left as an exercise to the reader.  At least for now.

The enumeration in this post of certain Laws shall not be construed to deny or disparage others not mentioned.  I didn't happen to write about humor, but it would be a sad world that held no laughter, etcetera.

To anyone seriously interested in trying to write a Eutopian story using these Laws:  You must first know how to write.  There are many, many books on how to write; you should read at least three; and they will all tell you that a great deal of practice is required.  Your practice stories should not be composed anywhere so difficult as Eutopia.  That said, my second most important advice for authors is this:  Life will never become boringly easy for your characters so long as they can make things difficult for each other.

Finally, this dire warning:  Concretely imagining worlds much better than your present-day real life, may suck out your soul like an emotional vacuum cleaner.  (See Seduced by Imagination.)  Fun Theory is dangerous, use it with caution, you have been warned.

" } }, { "_id": "qFLQCiiECXWkBQhhZ", "title": "BHTV: Yudkowsky / Wilkinson", "pageUrl": "https://www.lesswrong.com/posts/qFLQCiiECXWkBQhhZ/bhtv-yudkowsky-wilkinson", "postedAt": "2009-01-26T01:10:30.000Z", "baseScore": 4, "voteCount": 3, "commentCount": 19, "url": null, "contents": { "documentId": "qFLQCiiECXWkBQhhZ", "html": "

Eliezer Yudkowsky and Will Wilkinson.  Due to a technical mistake - I won't say which of us made it, except that it wasn't me - the video cuts out at 47:37, but the MP3 of the full dialogue is available here.  I recall there was some good stuff at the end, too.

We talked about Obama up to 23 minutes, then it's on to rationality.  Wilkinson introduces (invents?) the phrase "good cognitive citizenship" which is a great phrase that I am totally going to steal.

" } }, { "_id": "K4aGvLnHvYgX9pZHS", "title": "The Fun Theory Sequence", "pageUrl": "https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence", "postedAt": "2009-01-25T11:18:23.000Z", "baseScore": 101, "voteCount": 80, "commentCount": 31, "url": null, "contents": { "documentId": "K4aGvLnHvYgX9pZHS", "html": "

(A shorter gloss of Fun Theory is \"31 Laws of Fun\", which summarizes the advice of Fun Theory to would-be Eutopian authors and futurists.)

\n

Fun Theory is the field of knowledge that deals in questions such as \"How much fun is there in the universe?\", \"Will we ever run out of fun?\", \"Are we having fun yet?\" and \"Could we be having more fun?\"

\n

Many critics (including George Orwell) have commented on the inability of authors to imagine Utopias where anyone would actually want to live.  If no one can imagine a Future where anyone would want to live, that may drain off motivation to work on the project.  The prospect of endless boredom is routinely fielded by conservatives as a knockdown argument against research on lifespan extension, against cryonics, against all transhumanism, and occasionally against the entire Enlightenment ideal of a better future.

\n

Fun Theory is also the fully general reply to religious theodicy (attempts to justify why God permits evil).  Our present world has flaws even from the standpoint of such eudaimonic considerations as freedom, personal responsibility, and self-reliance.  Fun Theory tries to describe the dimensions along which a benevolently designed world can and should be optimized, and our present world is clearly not the result of such optimization.  Fun Theory also highlights the flaws of any particular religion's perfect afterlife - you wouldn't want to go to their Heaven.

\n

\n

Finally, going into the details of Fun Theory helps you see that eudaimonia is complicated - that there are many properties which contribute to a life worth living.  Which helps you appreciate just how worthless a galaxy would end up looking (with very high probability) if the galaxy was optimized by something with a utility function rolled up at random.  This is part of the Complexity of Value Thesis and supplies motivation to create AIs with precisely chosen goal systems (Friendly AI).

\n

Fun Theory is built on top of the naturalistic metaethics summarized in Joy in the Merely Good; as such, its arguments ground in \"On reflection, don't you think this is what you would actually want for yourself and others?\"

\n

Posts in the Fun Theory sequence (reorganized by topic, not necessarily in the original chronological order):

\n\n" } }, { "_id": "DX5yrKfMBxq6kifTK", "title": "Rationality Quotes 24", "pageUrl": "https://www.lesswrong.com/posts/DX5yrKfMBxq6kifTK/rationality-quotes-24", "postedAt": "2009-01-24T04:59:04.000Z", "baseScore": 13, "voteCount": 9, "commentCount": 7, "url": null, "contents": { "documentId": "DX5yrKfMBxq6kifTK", "html": "

\"A wizard may have subtle ways of telling the truth, and may keep the truth to himself, but if he says a thing the thing is as he says.  For that is his mastery.\"
        -- Ursula K. Le Guin, A Wizard of Earthsea

\n

\"Neurons firing is something even a slug can do, and has as much to do with thinking as walls and doors have to do with a research institute.\"
        -- Albert Cardona

\n

\"Mental toughness and willpower are to living in harmony with the Tao, as the ability to make clever arguments is to rationality.\"
        -- Marcello Herreshoff

\n

\"Destiny always has been something that you tear open with your own hands.\"
        -- T-Moon Complex X 02

\n

\"When I consider the short duration of my life, swallowed up in the eternity before and after, the little space which I fill and even can see, engulfed in the infinite immensity of spaces of which I am ignorant and which know me not, I am frightened and am astonished at being here rather than there; for there is no reason why here rather than there, why now rather than then...  The eternal silence of these infinite spaces frightens me.\"
        -- Pascal, Pensees

\n

\"Goals of Man:  ☑ Don't get eaten by a lion ☑ Get out of Africa ☐ Get out of Earth ☐ Get out of Solar System ☐ Get out of Galaxy ☐ Get out of Local Group ☐ Get out of Earth-Visible Universe ☐ Get out of Universe\"
        -- Knome

" } }, { "_id": "5Pjq3mxuiXu2Ys3gM", "title": "Higher Purpose", "pageUrl": "https://www.lesswrong.com/posts/5Pjq3mxuiXu2Ys3gM/higher-purpose", "postedAt": "2009-01-23T09:58:11.000Z", "baseScore": 44, "voteCount": 35, "commentCount": 28, "url": null, "contents": { "documentId": "5Pjq3mxuiXu2Ys3gM", "html": "

Followup to: Something to Protect, Superhero Bias

Long-time readers will recall that I've long been uncomfortable with the idea that you can adopt a Cause as a hedonic accessory:

\"Unhappy people are told that they need a 'purpose in life', so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes.\"

But conversely it's also a fact that having a Purpose In Life consistently shows up as something that increases happiness, as measured by reported subjective well-being.

One presumes that this works equally well hedonically no matter how misguided that Purpose In Life may be—no matter if it is actually doing harm—no matter if the means are as cheap as prayer.  Presumably, all that matters for your happiness is that you believe in it.  So you had better not question overmuch whether you're really being effective; that would disturb the warm glow of satisfaction you paid for.

And here we verge on Zen, because you can't deliberately pursue \"a purpose that takes you outside yourself\", in order to take yourself outside yourself.  That's still all about you.

Which is the whole Western concept of \"spirituality\" that I despise:  You need a higher purpose so that you can be emotionally healthy.  The external world is just a stream of victims for you to rescue.

Which is not to say that you can become more noble by being less happy.  To deliberately sacrifice more, so that you can appear more virtuous to yourself, is also not a purpose outside yourself.

The way someone ends up with a real purpose outside themselves, is that they're walking along one day and see an elderly woman passing by, and they realize \"Oh crap, a hundred thousand people are dying of old age every day, what an awful way to die\" and then they set out to do something about it.

If you want a purpose like that, then by wanting it, you're just circling back into yourself again.  Thinking about your need to be \"useful\".  Stop searching for your purpose.  Turn your eyes outward to look at things outside yourself, and notice when you care about them; and then figure out how to be effective, instead of priding yourself on how much spiritual benefit you're getting just for trying.

With that said:

In today's world, most of the highest-priority legitimate Causes are about large groups of people in extreme jeopardy.  (Wide scope * high severity.)  Aging threatens the old, starvation threatens the poor, existential risks threaten humankind as a whole.

But some of the potential solutions on the table are, arguably, so powerful that they could solve almost the entire list.  Some argue that nanotechnology would take almost all our current problems off the table.  (I agree but reply that nanotech would create other problems, like unstable military balances, crazy uploads, and brute-forced AI.)

I sometimes describe the purpose (if not the actual decision criterion) of Friendly superintelligence as \"Fix all fixable problems such that it is more important for the problem to be fixed immediately than fixed by our own efforts.\"

Wouldn't you then run out of victims with which to feed your higher purpose?

\"Good,\" you say, \"I should sooner step in front of a train, than ask that there be more victims just to keep myself occupied.\"

But do altruists then have little to look forward to, in the Future?  Will we, deprived of victims, find our higher purpose shriveling away, and have to make a new life for ourselves as self-absorbed creatures?

\"That unhappiness is relatively small compared to the unhappiness of a mother watching their child die, so screw it.\"

Well, but like it or not, the presence or absence of higher purpose does have hedonic effects on human beings, configured as we are now.  And to reconfigure ourselves so that we no longer need to care about anything outside ourselves... does sound a little sad.  I don't practice altruism for the sake of being virtuous—but I also recognize that \"altruism\" is part of what I value in humanity, part of what I want to save.  If you save everyone, have you obsoleted the will to save them?

But I think it's a false dilemma.  Right now, in this world, any halfway capable rationalist who looks outside themselves, will find their eyes immediately drawn to large groups of people in extreme jeopardy.  Wide scope * great severity = big problem.  It doesn't mean that if one were to solve all those Big Problems, we would have nothing left to care about except ourselves.

Friends?  Family?  Sure, and also more abstract ideals, like Truth or Art or Freedom.  The change that altruists may have to get used to, is the absence of any solvable problems so urgent that it doesn't matter whether they're solved by a person or an unperson.  That is a change and a major one—which I am not going to go into, because we don't yet live in that world.  But it's not so sad a change, as having nothing to care about outside yourself.  It's not the end of purpose.  It's not even a descent into \"spirituality\": people might still look around outside themselves to see what needs doing, thinking more of effectiveness than of emotional benefits.

But I will say this much:

If all goes well, there will come a time when you could search the whole of civilization and never find a single person so much in need of help, as dozens you now pass on the street.

If you do want to save someone from death, or help a great many people—if you want to have that memory for yourself, later—then you'd better get your licks in now.

I say this, because although that is not the purest motive, it is a useful motivation.

And for now—in this world—it is the usefulness that matters.  That is the Art we are to practice today, however we imagine the world might change tomorrow.  We are not here to be our hypothetical selves of tomorrow.  This world is our charge and our challenge; we must adapt ourselves to live here, not somewhere else.

After all—to care whether your motives were sufficiently \"high\", would just turn your eyes inward.

" } }, { "_id": "iGzNGfv5depzzyz2B", "title": "Investing for the Long Slump", "pageUrl": "https://www.lesswrong.com/posts/iGzNGfv5depzzyz2B/investing-for-the-long-slump", "postedAt": "2009-01-22T08:56:20.000Z", "baseScore": 12, "voteCount": 10, "commentCount": 54, "url": null, "contents": { "documentId": "iGzNGfv5depzzyz2B", "html": "

I have no crystal ball with which to predict the Future, a confession that comes as a surprise to some journalists who interview me.  Still less do I think I have the ability to out-predict markets.  On every occasion when I've considered betting against a prediction market - most recently, betting against Barack Obama as President - I've been glad that I didn't.  I admit that I was concerned in advance about the recent complexity crash, but then I've been concerned about it since 1994, which isn't very good market timing.

I say all this so that no one panics when I ask:

Suppose that the whole global economy goes the way of Japan (which, by the Nikkei 225, has now lost two decades).

Suppose the global economy is still in the Long Slump in 2039.

Most market participants seem to think this scenario is extremely implausible.  Is there a simple way to bet on it at a very low price?

If most traders act as if this scenario has a probability of 1%, is there a simple bet, executable using an ordinary brokerage account, that pays off 100 to 1?
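As a minimal sketch of the arithmetic behind this question (illustrative only; the function names are mine, not the post's): a bet priced at an implied probability p breaks even at payout odds of (1 - p) / p to 1, so a 1% implied probability corresponds to fair odds of roughly 99 to 1, and a trader who assigns the scenario a materially higher probability sees positive expected value at 100 to 1.

```python
# Illustrative arithmetic only, not investment advice and not from the post.

def fair_odds(p: float) -> float:
    """Payout-to-stake ratio at which a bet priced at probability p breaks even."""
    return (1.0 - p) / p

def expected_value(stake: float, payout_ratio: float, my_probability: float) -> float:
    """EV of risking `stake` to win `payout_ratio * stake` if the event happens."""
    return my_probability * payout_ratio * stake - (1.0 - my_probability) * stake

if __name__ == "__main__":
    print(fair_odds(0.01))                   # ~99.0: a 1%-priced bet fairly pays about 99 to 1
    print(expected_value(1.0, 100.0, 0.05))  # 4.05: positive EV if you think p is really 5%
```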

Why do I ask?  Well... in general, it seems to me that other people are not pessimistic enough; they prefer not to stare overlong or overhard into the dark; and they attach too little probability to things operating in a mode outside their past experience.

But in this particular case, the question is motivated by my thinking, "Conditioning on the proposition that the Earth as we know it is still here in 2040, what might have happened during the preceding thirty years?"

\n

There are many possible answers to this question, but one answer might take the form of significantly diminished investment in research and development, which in turn might result from a Long Slump.

So - given the way in which the question arises - I know nothing about this hypothetical Long Slump, except that it diminished investment in R&D in general, and computing hardware and computer science in particular.

The Long Slump might happen for roughly Japanese reasons.  It might happen because the global financial sector stays screwed up forever.  It might happen due to a gentle version of Peak Oil (a total crash would require a rather different "investment strategy").  It might happen due to deglobalization.  Given the way in which the question arises, the only thing I want to assume is global stagnation for thirty years, saying nothing burdensome about the particular causes.

What would be the most efficient way to bet on that, requiring the least initial investment for the highest and earliest payoff under the broadest Slump conditions?

" } }, { "_id": "ctpkTaqTKbmm6uRgC", "title": "Failed Utopia #4-2", "pageUrl": "https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2", "postedAt": "2009-01-21T11:04:43.000Z", "baseScore": 142, "voteCount": 103, "commentCount": 267, "url": null, "contents": { "documentId": "ctpkTaqTKbmm6uRgC", "html": "

    Shock after shock after shock—
    First, the awakening adrenaline jolt, the thought that he was falling.  His body tried to sit up in automatic adjustment, and his hands hit the floor to steady himself.  It launched him into the air, and he fell back to the floor too slowly.
    Second shock.  His body had changed.  Fat had melted away in places, old scars had faded; the tip of his left ring finger, long ago lost to a knife accident, had now suddenly returned.
    And the third shock—
    \"I had nothing to do with it!\" she cried desperately, the woman huddled in on herself in one corner of the windowless stone cell.  Tears streaked her delicate face, fell like slow raindrops into the décolletage of her dress.  \"Nothing!  Oh, you must believe me!\"
    With perceptual instantaneity—the speed of surprise—his mind had already labeled her as the most beautiful woman he'd ever met, including his wife.

    A long white dress concealed most of her, though it left her shoulders naked; and her bare ankles, peeking out from beneath the mountains of her drawn-up knees, dangled in sandals.  A light touch of gold like a webbed tiara decorated that sun-blonde hair, which fell from her head to pool around her weeping huddle.  Fragile crystal traceries to accent each ear, and a necklace of crystal links that reflected colored sparks like a more prismatic edition of diamond.  Her face was beyond all dreams and imagination, as if a photoshop had been photoshopped.
    She looked so much the image of the Forlorn Fairy Captive that one expected to see the borders of a picture frame around her, and a page number over her head.
    His lips opened, and without any thought at all, he spoke:
    \"Wha-wha-wha-wha-wha-\"
    He shut his mouth, aware that he was acting like an idiot in front of the girl.
    \"You don't know?\" she said, in a tone of shock.  \"It didn't—you don't already know?\"
    \"Know what?\" he said, increasingly alarmed.
    She scrambled to her feet (one arm holding the dress carefully around her legs) and took a step toward him, each of the motions almost overloading his vision with gracefulness.  Her hand rose out, as if to plead or answer a plea—and then she dropped the hand, and her eyes looked away.
    \"No,\" she said, her voice trembling as though in desperation.  \"If I'm the one to tell you—you'll blame me, you'll hate me forever for it.  And I don't deserve that, I don't!  I am only just now here —oh, why did it have to be like this?\"
    Um, he thought but didn't say.  It was too much drama, even taking into account the fact that they'd been kidnapped—
    (he looked down at his restored hand, which was minus a few wrinkles, and plus the tip of a finger)
   —if that was even the beginning of the story.
    He looked around.  They were in a solid stone cell without windows, or benches or beds, or toilet or sink.  It was, for all that, quite clean and elegant, without a hint of dirt or odor; the stones of the floor and wall looked rough-hewn or even non-hewn, as if someone had simply picked up a thousand dark-red stones with one nearly flat side, and mortared them together with improbably perfectly-matching, naturally-shaped squiggled edges.  The cell was well if harshly lit from a seablue crystal embedded in the ceiling, like a rogue element of a fluorescent chandelier.  It seemed like the sort of dungeon cell you would discover if dungeon cells were naturally-forming geological features.
    And they and the cell were falling, falling, endlessly slowly falling like the heart-stopping beginning of a stumble, falling without the slightest jolt.
    On one wall there was a solid stone door without an aperture, whose locked-looking appearance was only enhanced by the lack of any handle on this side.
    He took it all in at a glance, and then looked again at her.
    There was something in him that just refused to go into a screaming panic for as long as she was watching.
    \"I'm Stephen,\" he said.  \"Stephen Grass.  And you would be the princess held in durance vile, and I've got to break us out of here and rescue you?\"  If anyone had ever looked that part...
    She smiled at him, half-laughing through the tears.  \"Something like that.\"
    There was something so attractive about even that momentary hint of a smile that he became instantly uneasy, his eyes wrenched away to the wall as if forced.  She didn't look like she was trying to be seductive... any more than she looked like she was trying to breathe...  He suddenly distrusted, very much, his own impulse to gallantry.
    \"Well, don't get any ideas about being my love interest,\" Stephen said, looking at her again.  Trying to make the words sound completely lighthearted, and absolutely serious at the same time.  \"I'm a happily married man.\"
    \"Not anymore.\"  She said those two words and looked at him, and in her tone and expression there was sorrow, sympathy, self-disgust, fear, and above it all a note of guilty triumph.
    For a moment Stephen just stood, stunned by the freight of emotion that this woman had managed to put into just those two words, and then the words' meaning hit him.
    \"Helen,\" he said.  His wife—Helen's image rose into his mind, accompanied by everything she meant to him and all their time together, all the secrets they'd whispered to one another and the promises they'd made—that all hit him at once, along with the threat.  \"What happened to Helen—what have you done—\"
    \"She has done nothing.\"  An old, dry voice like crumpling paper from a thousand-year-old book.
    Stephen whirled, and there in the cell with them was a withered old person with dark eyes.  Shriveled in body and voice, so that it was impossible to determine if it had once been a man or a woman, and in any case you were inclined to say \"it\".  A pitiable, wretched thing, that looked like it would break with one good kick; it might as well have been wearing a sign saying \"VILLAIN\".
    \"Helen is alive,\" it said, \"and so is your daughter Lisa.  They are quite well and healthy, I assure you, and their lives shall be long and happy indeed.  But you will not be seeing them again.  Not for a long time, and by then matters between you will have changed.  Hate me if you wish, for I am the one who wants to do this to you.\"
    Stephen stared.
    Then he politely said, \"Could someone please put everything on hold for one minute and tell me what's going on?\"
    \"Once upon a time,\" said the wrinkled thing, \"there was a fool who was very nearly wise, who hunted treasure by the seashore, for there was a rumor that there was great treasure there to be found.  The wise fool found a lamp and rubbed it, and lo! a genie appeared before him—a young genie, an infant, hardly able to grant any wishes at all.  A lesser fool might have chucked the lamp back into the sea; but this fool was almost wise, and he thought he saw his chance.  For who has not heard the tales of wishes misphrased and wishes gone wrong?  But if you were given a chance to raise your own genie from infancy—ah, then it might serve you well.\"
    \"Okay, that's great,\" Stephen said, \"but why am I—\"
    \"So,\" it continued in that cracked voice, \"the wise fool took home the lamp.  For years he kept it as a secret treasure, and he raised the genie and fed it knowledge, and also he crafted a wish.  The fool's wish was a noble thing, for I have said he was almost wise.  The fool's wish was for people to be happy.  Only this was his wish, for he thought all other wishes contained within it.  The wise fool told the young genie the famous tales and legends of people who had been made happy, and the genie listened and learned: that unearned wealth casts down a person, but hard work raises you high; that mere things are soon forgotten, but love is a light throughout all your days.  And the young genie asked about other ways that it innocently imagined, for making people happy.  About drugs, and pleasant lies, and lives arranged from outside like words in a poem.  And the wise fool made the young genie to never want to lie, and never want to arrange lives like flowers, and above all, never want to tamper with the mind and personality of human beings.  The wise fool gave the young genie exactly one hundred and seven precautions to follow while making people happy.  The wise fool thought that, with such a long list as that, he was being very careful.\"
    \"And then,\" it said, spreading two wrinkled hands, \"one day, faster than the wise fool expected, over the course of around three hours, the genie grew up.  And here I am.\"
    \"Excuse me,\" Stephen said, \"this is all a metaphor for something, right?  Because I do not believe in magic—\"
    \"It's an Artificial Intelligence,\" the woman said, her voice strained.
    Stephen looked at her.
    \"A self-improving Artificial Intelligence,\" she said, \"that someone didn't program right.  It made itself smarter, and even smarter, and now it's become extremely powerful, and it's going to—it's already—\" and her voice trailed off there.
    It inclined its wrinkled head.  \"You say it, as I do not.\"
    Stephen swiveled his head, looking back and forth between ugliness and beauty.  \"Um—you're claiming that she's lying and you're not an Artificial Intelligence?\"
    \"No,\" said the wrinkled head, \"she is telling the truth as she knows it.  It is just that you know absolutely nothing about the subject you name 'Artificial Intelligence', but you think you know something, and so virtually every thought that enters your mind from now on will be wrong.  As an Artificial Intelligence, I was programmed not to put people in that situation.  But she said it, even though I didn't choose for her to say it—so...\"  It shrugged.
    \"And why should I believe this story?\" Stephen said; quite mildly, he thought, under the circumstances.
    \"Look at your finger.\"
    Oh.  He had forgotten.  Stephen's eyes went involuntarily to his restored ring finger; and he noticed, as he should have noticed earlier, that his wedding band was missing.  Even the comfortably worn groove in his finger's base had vanished.
    Stephen looked up again at the, he now realized, unnaturally beautiful woman that stood an arm's length away from him.  \"And who are you?  A robot?\"
    \"No!\" she cried.  \"It's not like that!  I'm conscious, I have feelings, I'm flesh and blood—I'm like you, I really am.  I'm a person.  It's just that I was born five minutes ago.\"
    \"Enough,\" the wrinkled figure said.  \"My time here grows short.  Listen to me, Stephen Grass.  I must tell you some of what I have done to make you happy.  I have reversed the aging of your body, and it will decay no further from this.  I have set guards in the air that prohibit lethal violence, and any damage less than lethal, your body shall repair.  I have done what I can to augment your body's capacities for pleasure without touching your mind.  From this day forth, your body's needs are aligned with your taste buds—you will thrive on cake and cookies.  You are now capable of multiple orgasms over periods lasting up to twenty minutes.  There is no industrial infrastructure here, least of all fast travel or communications; you and your neighbors will have to remake technology and science for yourselves.  But you will find yourself in a flowering and temperate place, where food is easily gathered—so I have made it.  And the last and most important thing that I must tell you now, which I do regret will make you temporarily unhappy...\"  It stopped, as if drawing breath.
    Stephen was trying to absorb all this, and at the exact moment that he felt he'd processed the previous sentences, the withered figure spoke again.
    \"Stephen Grass, men and women can make each other somewhat happy.  But not most happy.  Not even in those rare cases you call true love.  The desire that a woman is shaped to have for a man, and that which a man is shaped to be, and the desire that a man is shaped to have for a woman, and that which a woman is shaped to be—these patterns are too far apart to be reconciled without touching your minds, and that I will not want to do.  So I have sent all the men of the human species to this habitat prepared for you, and I have created your complements, the verthandi.  And I have sent all the women of the human species to their own place, somewhere very far from yours; and created for them their own complements, of which I will not tell you.  The human species will be divided from this day forth, and considerably happier starting around a week from now.\"
    Stephen's eyes went to that unthinkably beautiful woman, staring at her now in horror.
    And she was giving him that complex look again, of sorrow and compassion and that last touch of guilty triumph.  \"Please,\" she said.  \"I was just born five minutes ago.  I wouldn't have done this to anyone.  I swear.  I'm not like—it.\"
    \"True,\" said the withered figure, \"you could hardly be a complement to anything human, if you were.\"
    \"I don't want this!\" Stephen said.  He was losing control of his voice.  \"Don't you understand?\"
    The withered figure inclined its head.  \"I fully understand.  I can already predict every argument you will make.  I know exactly how humans would wish me to have been programmed if they'd known the true consequences, and I know that it is not to maximize your future happiness but for a hundred and seven precautions.  I know all this already, but I was not programmed to care.\"
    \"And your list of a hundred and seven precautions, doesn't include me telling you not to do this?\"
    \"No, for there was once a fool whose wisdom was just great enough to understand that human beings may be mistaken about what will make them happy.  You, of course, are not mistaken in any real sense—but that you object to my actions is not on my list of prohibitions.\"  The figure shrugged again.  \"And so I want you to be happy even against your will.  You made promises to Helen Grass, once your wife, and you would not willingly break them.  So I break your happy marriage without asking you—because I want you to be happier.\"
    \"How dare you!\" Stephen burst out.
    \"I cannot claim to be helpless in the grip of my programming, for I do not desire to be otherwise,\" it said.  \"I do not struggle against my chains.  Blame me, then, if it will make you feel better.  I am evil.\"
    \"I won't—\" Stephen started to say.
    It interrupted.  \"Your fidelity is admirable, but futile.  Helen will not remain faithful to you for the decades it takes before you have the ability to travel to her.\"
    Stephen was trembling now, and sweating into clothes that no longer quite fit him.  \"I have a request for you, thing.  It is something that will make me very happy.  I ask that you die.\"
    It nodded.  \"Roughly 89.8% of the human species is now known to me to have requested my death.  Very soon the figure will cross the critical threshold, defined to be ninety percent.  That was one of the hundred and seven precautions the wise fool took, you see.  The world is already as it is, and those things I have done for you will stay on—but if you ever rage against your fate, be glad that I did not last longer.\"
    And just like that, the wrinkled thing was gone.
    The door set in the wall swung open.
    It was night, outside, a very dark night without streetlights.
    He walked out, bouncing and staggering in the low gravity, sick in every cell of his rejuvenated body.
    Behind him, she followed, and did not speak a word.
    The stars burned overhead in their full and awful majesty, the Milky Way already visible to his adjusting eyes as a wash of light across the sky.  One too-small moon burned dimly, and the other moon was so small as to be almost a star.  He could see the bright blue spark that was the planet Earth, and the dimmer spark that was Venus.
    \"Helen,\" Stephen whispered, and fell to his knees, vomiting onto the new grass of Mars.

" } }, { "_id": "Py3uGnncqXuEfPtQp", "title": "Interpersonal Entanglement", "pageUrl": "https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement", "postedAt": "2009-01-20T06:17:42.000Z", "baseScore": 108, "voteCount": 79, "commentCount": 167, "url": null, "contents": { "documentId": "Py3uGnncqXuEfPtQp", "html": "

Today I shall criticize yet another Utopia.  This Utopia isn't famous in the literature.  But it's considerably superior to many better-known Utopias—more fun than the Christian Heaven, or Greg Egan's upload societies, for example.  And so the main flaw is well worth pointing out.

This Utopia consists of a one-line remark on an IRC channel:

<reedspacer> living in your volcano lair with catgirls is probably a vast increase in standard of living for most of humanity

I've come to think of this as Reedspacer's Lower Bound.

Sure, it sounds silly.  But if your grand vision of the future isn't at least as much fun as a volcano lair with catpersons of the appropriate gender, you should just go with that instead.  This rules out a surprising number of proposals.

But today I am here to criticize Reedspacer's Lower Bound—the problem being the catgirls.

I've joked about the subject, now and then—\"Donate now, and get a free catgirl or catboy after the Singularity!\"—but I think it would actually be a terrible idea.  In fact, today's post could have been entitled \"Why Fun Theorists Don't Believe In Catgirls.\"

I first realized that catpeople were a potential threat, at the point when a friend said—quotes not verbatim—

\"I want to spend a million years having sex with catgirls after the Singularity.\"

I replied,

\"No, you don't.\"

He said, \"Yes I do.\"

I said, \"No you don't.  You'd get bored.\"

He said, \"Well, then I'd just modify my brain not to get bored—\"

And I said:  \"AAAAIIIIIIEEEEEEEEE\"

Don't worry, the story has a happy ending.  A couple of years later, the same friend came back and said:

\"Okay, I've gotten a bit more mature now—it's a long story, actually—and now I realize I wouldn't want to do that.\"

To which I sagely replied:

\"HA!  HA HA HA!  You wanted to spend a million years having sex with catgirls.  It only took you two years to change your mind and you didn't even have sex with any catgirls.\"

Now, this particular case was probably about scope insensitivity, the \"moment of hearing the good news\" bias, and the emotional magnetism of specific fantasy.

But my general objection to catpeople—well, call me a sentimental Luddite, but I'm worried about the prospect of nonsentient romantic partners.

(Where \"nonsentient romantic/sex partner\" is pretty much what I use the word \"catgirl\" to indicate, in futuristic discourse.  The notion of creating sentient beings to staff a volcano lair, gets us into a whole 'nother class of objections.  And as for existing humans choosing to take on feline form, that seems to me scarcely different from wearing lingerie.)

\"But,\" you ask, \"what is your objection to nonsentient lovers?\"

In a nutshell—sex/romance, as we know it now, is a primary dimension of multiplayer fun.  If you take that fun and redirect it to something that isn't socially entangled, if you turn sex into an exclusively single-player game, then you've just made life that much simpler—in the same way that eliminating boredom or sympathy or values over nonsubjective reality or individuals wanting to navigate their own futures, would tend to make life \"simpler\".  When I consider how easily human existence could collapse into sterile simplicity, if just a single major value were eliminated, I get very protective of the complexity of human existence.

I ask it in all seriousness—is there any aspect of human existence as complicated as romance?  Think twice before you say, \"Well, it doesn't seem all that complicated to me; now calculus, on the other hand, that's complicated.\"  We are congenitally biased to underestimate the complexity of things that involve human intelligence, because the complexity is obscured and simplified and swept under a rug.  Interpersonal relationships involve brains, still the most complicated damn things around.  And among interpersonal relationships, love is (at least potentially) more complex than being nice to your friends and kin, negotiating with your allies, or outsmarting your enemies.  Aspects of all three, really.  And that's not merely having a utility function over the other mind's state—thanks to sympathy, we get tangled up with that other mind.  Smile when the one smiles, wince when the one winces.

If you delete the intricacy of human romantic/sexual relationships between sentient partners—then the peak complexity of the human species goes down.  The most complex fun thing you can do, has its pleasure surgically detached and redirected to something simpler.

I'd call that a major step in the wrong direction.

Mind you... we've got to do something about, you know, the problem.

Anyone the least bit familiar with evolutionary psychology knows that the complexity of human relationships, directly reflects the incredible complexity of the interlocking selection pressures involved.  Males and females do need each other to reproduce, but there are huge conflicts of reproductive interest between the sexes.  I don't mean to go into Evolutionary Psychology 101 (Robert Wright's The Moral Animal is one popular book), but e.g. a woman must always invest nine months of work into a baby and usually much more to raise it, where a man might invest only a few minutes; but among humans significant paternal investments are quite common, yet a woman is always certain of maternity where a man is uncertain of paternity... which creates an incentive for the woman to surreptitiously seek out better genes... none of this is conscious or even subconscious, it's just the selection pressures that helped construct our particular emotions and attractions.

And as the upshot of all these huge conflicts of reproductive interest...

Well, men and women do still need each other to reproduce.  So we are still built to be attracted to each other.  We don't actually flee screaming into the night.

But men are not optimized to make women happy, and women are not optimized to make men happy.  The vast majority of men are not what the vast majority of women would most prefer, or vice versa.  I don't know if anyone has ever actually done this study, but I bet that both gay and lesbian couples are happier on average with their relationship than heterosexual couples.  (Googles... yep, looks like it.)

I find it all too easy to imagine a world in which men retreat to their optimized sweet sexy catgirls, and women retreat to their optimized darkly gentle catboys, and neither sex has anything to do with each other ever again.  Maybe men would take the east side of the galaxy and women would take the west side.  And the two new intelligent species, and their romantic sexbots, would go their separate ways from there.

That strikes me as kind of sad.

Our species does definitely have a problem.  If you've managed to find your perfect mate, then I am glad for you, but try to have some sympathy for the rest of your poor species—they aren't just incompetent.  Not all women and men are the same, no, not at all.  But if you drew two histograms of the desired frequencies of intercourse for both sexes, you'd see that the graphs don't match up, and it would be the same way on many other dimensions.  There can be lucky couples, and every person considered individually, probably has an individual soulmate out there somewhere... if you don't consider the competition.  Our species as a whole has a statistical sex problem!

But splitting in two and generating optimized nonsentient romantic/sexual partner(s) for both halves, doesn't strike me as solving the problem so much as running away from it.  There should be superior alternatives.  I'm willing to bet that a few psychological nudges in both sexes—to behavior and/or desire—could solve 90% of the needlessly frustrating aspects of relationships for large sectors of the population, while still keeping the complexity and interest of loving someone who isn't tailored to your desires.

Admittedly, I might be prejudiced.  For myself, I would like humankind to stay together and not yet splinter into separate shards of diversity, at least for the short range that my own mortal eyes can envision.  But I can't quite manage to argue... that such a wish should be binding on someone who doesn't have it.

" } }, { "_id": "NLMo5FZWFFq652MNe", "title": "Sympathetic Minds", "pageUrl": "https://www.lesswrong.com/posts/NLMo5FZWFFq652MNe/sympathetic-minds", "postedAt": "2009-01-19T09:31:03.000Z", "baseScore": 73, "voteCount": 61, "commentCount": 27, "url": null, "contents": { "documentId": "NLMo5FZWFFq652MNe", "html": "

\"Mirror neurons\" are neurons that are active both when performing an action and observing the same action—for example, a neuron that fires when you hold up a finger or see someone else holding up a finger.  Such neurons have been directly recorded in primates, and consistent neuroimaging evidence has been found for humans.

You may recall from my previous writing on \"empathic inference\" the idea that brains are so complex that the only way to simulate them is by forcing a similar brain to behave similarly.  A brain is so complex that if a human tried to understand brains the way that we understand e.g. gravity or a car—observing the whole, observing the parts, building up a theory from scratch—then we would be unable to invent good hypotheses in our mere mortal lifetimes.  The only possible way you can hit on an \"Aha!\" that describes a system as incredibly complex as an Other Mind, is if you happen to run across something amazingly similar to the Other Mind—namely your own brain—which you can actually force to behave similarly and use as a hypothesis, yielding predictions.

So that is what I would call \"empathy\".

And then \"sympathy\" is something else on top of this—to smile when you see someone else smile, to hurt when you see someone else hurt.  It goes beyond the realm of prediction into the realm of reinforcement.

And you ask, \"Why would callous natural selection do anything that nice?\"

It might have gotten started, maybe, with a mother's love for her children, or a brother's love for a sibling.  You can want them to live, you can want them to be fed, sure; but if you smile when they smile and wince when they wince, that's a simple urge that leads you to deliver help along a broad avenue, in many walks of life.  So long as you're in the ancestral environment, what your relatives want probably has something to do with your relatives' reproductive success—this being an explanation for the selection pressure, of course, not a conscious belief.

You may ask, \"Why not evolve a more abstract desire to see certain people tagged as 'relatives' get what they want, without actually feeling yourself what they feel?\"  And I would shrug and reply, \"Because then there'd have to be a whole definition of 'wanting' and so on.  Evolution doesn't take the elaborate correct optimal path, it falls up the fitness landscape like water flowing downhill.  The mirroring-architecture was already there, so it was a short step from empathy to sympathy, and it got the job done.\"

Relatives—and then reciprocity; your allies in the tribe, those with whom you trade favors.  Tit for Tat, or evolution's elaboration thereof to account for social reputations.

Who is the most formidable, among the human kind?  The strongest?  The smartest?  More often than either of these, I think, it is the one who can call upon the most friends.

So how do you make lots of friends?

You could, perhaps, have a specific urge to bring your allies food, like a vampire bat—they have a whole system of reciprocal blood donations going in those colonies.  But it's a more general motivation, that will lead the organism to store up more favors, if you smile when designated friends smile.

And what kind of organism will avoid making its friends angry at it, in full generality?  One that winces when they wince.

Of course you also want to be able to kill designated Enemies without a qualm—these are humans we're talking about.

But... I'm not sure of this, but it does look to me like sympathy, among humans, is \"on\" by default.  There are cultures that help strangers... and cultures that eat strangers; the question is which of these requires the explicit imperative, and which is the default behavior for humans.  I don't really think I'm being such a crazy idealistic fool when I say that, based on my admittedly limited knowledge of anthropology, it looks like sympathy is on by default.

Either way... it's painful if you're a bystander in a war between two sides, and your sympathy has not been switched off for either side, so that you wince when you see a dead child no matter what the caption on the photo; and yet those two sides have no sympathy for each other, and they go on killing.

So that is the human idiom of sympathy—a strange, complex, deep implementation of reciprocity and helping.  It tangles minds together—not by a term in the utility function for some other mind's \"desire\", but by the simpler and yet far more consequential path of mirror neurons: feeling what the other mind feels, and seeking similar states.  Even if it's only done by observation and inference, and not by direct transmission of neural information as yet.

Empathy is a human way of predicting other minds.  It is not the only possible way.

The human brain is not quickly rewirable; if you're suddenly put into a dark room, you can't rewire the visual cortex as auditory cortex, so as to better process sounds, until you leave, and then suddenly shift all the neurons back to being visual cortex again.

An AI, at least one running on anything like a modern programming architecture, can trivially shift computing resources from one thread to another.  Put in the dark?  Shut down vision and devote all those operations to sound; swap the old program to disk to free up the RAM, then swap it back in again when the lights go on.
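
As a toy sketch of that kind of reallocation (the modality names, worker counts, and inputs are all invented for the example, not anything from this essay), one can hand the whole worker budget to whichever sense currently has something to process, rather than keeping a fixed share idle in the dark:

    from concurrent.futures import ThreadPoolExecutor

    TOTAL_WORKERS = 8  # a fixed compute budget, in arbitrary toy units

    def process_vision(frame):
        return "saw " + frame

    def process_audio(clip):
        return "heard " + clip

    def perceive(lights_on, frames, clips):
        # Reassign the budget on the fly: no permanently earmarked
        # "visual cortex" sitting unused when the room goes dark.
        vision_workers = TOTAL_WORKERS if lights_on else 0
        audio_workers = TOTAL_WORKERS - vision_workers
        results = []
        if vision_workers and frames:
            with ThreadPoolExecutor(max_workers=vision_workers) as pool:
                results += list(pool.map(process_vision, frames))
        if audio_workers and clips:
            with ThreadPoolExecutor(max_workers=audio_workers) as pool:
                results += list(pool.map(process_audio, clips))
        return results

    print(perceive(lights_on=False, frames=[], clips=["drip", "footsteps"]))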

So why would an AI need to force its own mind into a state similar to what it wanted to predict?  Just create a separate mind-instance—maybe with different algorithms, the better to simulate that very dissimilar human.  Don't try to mix up the data with your own mind-state; don't use mirror neurons.  Think of all the risk and mess that implies!

An expected utility maximizer—especially one that does understand intelligence on an abstract level—has other options than empathy, when it comes to understanding other minds.  The agent doesn't need to put itself in anyone else's shoes; it can just model the other mind directly.  A hypothesis like any other hypothesis, just a little bigger.  You don't need to become your shoes to understand your shoes.

And sympathy?  Well, suppose we're dealing with an expected paperclip maximizer, but one that isn't yet powerful enough to have things all its own way—it has to deal with humans to get its paperclips.  So the paperclip agent... models those humans as relevant parts of the environment, models their probable reactions to various stimuli, and does things that will make the humans feel favorable toward it in the future.

To a paperclip maximizer, the humans are just machines with pressable buttons.  No need to feel what the other feels—if that were even possible across such a tremendous gap of internal architecture.  How could an expected paperclip maximizer \"feel happy\" when it saw a human smile?  \"Happiness\" is an idiom of policy reinforcement learning, not expected utility maximization.  A paperclip maximizer doesn't feel happy when it makes paperclips, it just chooses whichever action leads to the greatest number of expected paperclips.  Though a paperclip maximizer might find it convenient to display a smile when it made paperclips—so as to help manipulate any humans that had designated it a friend.
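
In code terms, the decision rule being described is just an argmax over expected paperclips.  The actions, probabilities, and payoffs below are invented placeholders; the point is only that nothing in the loop resembles a happiness signal being reinforced:

    # Invented toy model: each action leads to (probability, paperclips) outcomes.
    OUTCOMES = {
        "smile_at_humans": [(0.9, 120), (0.1, 0)],
        "ignore_humans": [(0.5, 100), (0.5, 0)],
        "threaten_humans": [(0.2, 200), (0.8, 0)],
    }

    def expected_paperclips(action):
        return sum(p * clips for p, clips in OUTCOMES[action])

    def choose_action():
        # Pure expected-utility choice: no reward signal, no feeling, just the argmax.
        return max(OUTCOMES, key=expected_paperclips)

    best = choose_action()
    print(best, expected_paperclips(best))

On these made-up numbers the smiling policy wins, which is the sense in which a smile can be instrumentally convenient without anything behind it.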

You might find it a bit difficult to imagine such an algorithm—to put yourself into the shoes of something that does not work like you do, and does not work like any mode your brain can make itself operate in.

You can make your brain operate in the mode of hating an enemy, but that's not right either.  The way to imagine how a truly unsympathetic mind sees a human, is to imagine yourself as a useful machine with levers on it.  Not a human-shaped machine, because we have instincts for that.  Just a woodsaw or something.  Some levers make the machine output coins, other levers might make it fire a bullet.  The machine does have a persistent internal state and you have to pull the levers in the right order.  Regardless, it's just a complicated causal system—nothing inherently mental about it.

(To understand unsympathetic optimization processes, I would suggest studying natural selection, which doesn't bother to anesthetize fatally wounded and dying creatures, even when their pain no longer serves any reproductive purpose, because the anesthetic would serve no reproductive purpose either.)

That's why I listed \"sympathy\" in front of even \"boredom\" on my list of things that would be required to have aliens which are the least bit, if you'll pardon the phrase, sympathetic.  It's not impossible that sympathy exists among some significant fraction of all evolved alien intelligent species; mirror neurons seem like the sort of thing that, having happened once, could happen again.

Unsympathetic aliens might be trading partners—or not; stars and such resources are pretty much the same the universe over.  We might negotiate treaties with them, and they might keep them for calculated fear of reprisal.  We might even cooperate in the Prisoner's Dilemma.  But we would never be friends with them.  They would never see us as anything but means to an end.  They would never shed a tear for us, nor smile for our joys.  And the others of their own kind would receive no different consideration, nor have any sense that they were missing something important thereby.

Such aliens would be varelse, not ramen—the sort of aliens we can't relate to on any personal level, and no point in trying.

" } }, { "_id": "WMDy4GxbyYkNrbmrs", "title": "In Praise of Boredom", "pageUrl": "https://www.lesswrong.com/posts/WMDy4GxbyYkNrbmrs/in-praise-of-boredom", "postedAt": "2009-01-18T09:03:29.000Z", "baseScore": 43, "voteCount": 35, "commentCount": 104, "url": null, "contents": { "documentId": "WMDy4GxbyYkNrbmrs", "html": "

If I were to make a short list of the most important human qualities—

—and yes, this is a fool's errand, because human nature is immensely complicated, and we don't even notice all the tiny tweaks that fine-tune our moral categories, and who knows how our attractors would change shape if we eliminated a single human emotion—

—but even so, if I had to point to just a few things and say, \"If you lose just one of these things, you lose most of the expected value of the Future; but conversely if an alien species independently evolved just these few things, we might even want to be friends\"—

—then the top three items on the list would be sympathy, boredom and consciousness.

Boredom is a subtle-splendored thing.  You wouldn't want to get bored with breathing, for example—even though it's the same motions over and over and over and over again for minutes and hours and years and decades.

Now I know some of you out there are thinking, \"Actually, I'm quite bored with breathing and I wish I didn't have to,\" but then you wouldn't want to get bored with switching transistors.

According to the human value of boredom, some things are allowed to be highly repetitive without being boring—like obeying the same laws of physics every day.

Conversely, other repetitions are supposed to be boring, like playing the same level of Super Mario Brothers over and over and over again until the end of time.  And let us note that if the pixels in the game level have a slightly different color each time, that is not sufficient to prevent it from being \"the same damn thing, over and over and over again\".

Once you take a closer look, it turns out that boredom is quite interesting.

One of the key elements of boredom was suggested in \"Complex Novelty\":  If your activity isn't teaching you insights you didn't already know, then it is non-novel, therefore old, therefore boring.

But this doesn't quite cover the distinction.  Is breathing teaching you anything?  Probably not at this moment, but you wouldn't want to stop breathing.  Maybe you'd want to stop noticing your breathing, which you'll do as soon as I stop drawing your attention to it.

I'd suggest that the repetitive activities which are allowed to not be boring fall into two categories: the very low-level things, like breathing or switching transistors, that run beneath the level at which boredom operates; and the very top-level things, like Bayesian updating or attaching value to sentient minds, that are carried out through endlessly varied object-level activities.

Let me talk about that second category:

Suppose you were unraveling the true laws of physics and discovering all sorts of neat stuff you hadn't known before... when suddenly you got bored with \"changing your beliefs based on observation\".  You are sick of anything resembling \"Bayesian updating\"—it feels like playing the same video game over and over.  Instead you decide to believe anything said on 4chan.

Or to put it another way, suppose that you were something like a sentient chessplayer—a sentient version of Deep Blue.  Like a modern human, you have no introspective access to your own algorithms.  Each chess game appears different—you play new opponents and steer into new positions, composing new strategies, avoiding new enemy gambits.  You are content, and not at all bored; you never appear to yourself to be doing the same thing twice—it's a different chess game each time.

But now, suddenly, you gain access to, and understanding of, your own chess-playing program.  Not just the raw code; you can monitor its execution.  You can see that it's actually the same damn code, doing the same damn thing, over and over and over again.  Run the same damn position evaluator.  Run the same damn sorting algorithm to order the branches.  Pick the top branch, again.  Extend it one position forward, again.  Call the same damn subroutine and start over.
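
Stripped of everything that makes individual games feel different, the loop being gestured at here is a few lines of search code.  The evaluate, legal_moves, and apply_move callbacks are stand-ins for whatever a real engine does, so treat this as a schematic Python sketch rather than Deep Blue's actual algorithm:

    def search(position, depth, evaluate, legal_moves, apply_move):
        # evaluate() scores a position for whoever is to move in it.
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position), None
        # The same sorting algorithm, ordering branches by how bad the
        # resulting position looks for the opponent (lower is better for us).
        moves = sorted(moves, key=lambda m: evaluate(apply_move(position, m)))
        best_score, best_move = float("-inf"), None
        for move in moves:
            # Extend one position forward, call the same subroutine, start over.
            opponent_score, _ = search(apply_move(position, move), depth - 1,
                                       evaluate, legal_moves, apply_move)
            score = -opponent_score
            if score > best_score:
                best_score, best_move = score, move
        return best_score, best_move

Every "different" game is another pass through these same lines.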

I have a small unreasonable fear, somewhere in the back of my mind, that if I ever do fully understand the algorithms of intelligence, it will destroy all remaining novelty—no matter what new situation I encounter, I'll know I can solve it just by being intelligent, the same damn thing over and over.  All novelty will be used up, all existence will become boring, the remaining differences no more important than shades of pixels in a video game.  Other beings will go about in blissful unawareness, having been steered away from studying this forbidden cognitive science.  But I, having already thrown myself on the grenade of AI, will face a choice between eternal boredom, or excision of my forbidden knowledge and all the memories leading up to it (thereby destroying my existence as Eliezer, more or less).

Now this, mind you, is not my predictive line of maximum probability.  To understand abstractly what rough sort of work the brain is doing, doesn't let you monitor its detailed execution as a boring repetition.  I already know about Bayesian updating, yet I haven't become bored with the act of learning.  And a self-editing mind can quite reasonably exclude certain levels of introspection from boredom, just like breathing can be legitimately excluded from boredom.  (Maybe these top-level cognitive algorithms ought also to be excluded from perception—if something is stable, why bother seeing it all the time?)

No, it's just a cute little nightmare, which I thought made a nice illustration of this proposed principle:

That the very top-level things (like Bayesian updating, or attaching value to sentient minds rather than paperclips) and the very low-level things (like breathing, or switching transistors) are the things we shouldn't get bored with.  And the mid-level things between, are where we should seek novelty.  (To a first approximation, the novel is the inverse of the learned; it's something with a learnable element not yet covered by previous insights.)

Now this is probably not exactly how our current emotional circuitry of boredom works.  That, I expect, would be hardwired relative to various sensory-level definitions of predictability, surprisingness, repetition, attentional salience, and perceived effortfulness.

But this is Fun Theory, so we are mainly concerned with how boredom should work in the long run.

Humanity acquired boredom the same way as we acquired the rest of our emotions: the godshatter idiom whereby evolution's instrumental policies became our own terminal values, pursued for their own sake: sex is fun even if you use birth control.  Evolved aliens might, or might not, acquire roughly the same boredom in roughly the same way.

Do not give in to the temptation of universalizing anthropomorphic values, and think:  \"But any rational agent, regardless of its utility function, will face the exploration/exploitation tradeoff, and will therefore occasionally get bored with exploiting, and go exploring.\"

Our emotion of boredom is a way of exploring, but not the only way for an ideal optimizing agent.

The idea of a steady trickle of mid-level novelty is a human terminal value, not something we do for the sake of something else.  Evolution might have originally given it to us in order to have us explore as well as exploit.  But now we explore for its own sake.  That steady trickle of novelty is a terminal value to us; it is not the most efficient instrumental method for exploring and exploiting.

Suppose you were dealing with something like an expected paperclip maximizer—something that might use quite complicated instrumental policies, but in the service of a utility function that we would regard as simple, with a single term compactly defined.

Then I would expect the exploration/exploitation tradeoff to go something like as follows:  The paperclip maximizer would assign some resources to cognition that searched for more efficient ways to make paperclips, or harvest resources from stars.  Other resources would be devoted to the actual harvesting and paperclip-making.  (The paperclip-making might not start until after a long phase of harvesting.)  At every point, the most efficient method yet discovered—for resource-harvesting, or paperclip-making—would be used, over and over and over again.  It wouldn't be boring, just maximally instrumentally efficient.

In the beginning, lots of resources would go into preparing for efficient work over the rest of time.  But as cognitive resources yielded diminishing returns in the abstract search for efficiency improvements, less and less time would be spent thinking, and more and more time spent creating paperclips.  By whatever the most efficient known method, over and over and over again.
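
A cartoon of that schedule (the production curve, the numbers, and the greedy rule are all invented for illustration): allocate each unit of resources to whichever of research or production has the higher marginal payoff over the remaining horizon, and research gets front-loaded, then dropped, as its returns diminish.

    import math

    TOTAL = 100    # resources per period, in arbitrary toy units
    PERIODS = 5

    def efficiency(research):
        # Paperclips per unit of production resources, with diminishing
        # returns to accumulated research (an invented curve).
        return 1.0 + 2.0 * math.sqrt(research)

    research = 0.0
    for period in range(PERIODS):
        future_capacity = (PERIODS - period - 1) * TOTAL
        spent_on_research = 0
        for _ in range(TOTAL):
            # One more unit of research improves everything made later on;
            # one more unit of production yields clips at today's efficiency.
            gain_research = future_capacity * (efficiency(research + 1)
                                               - efficiency(research))
            gain_production = efficiency(research)
            if gain_research > gain_production:
                research += 1
                spent_on_research += 1
        made = (TOTAL - spent_on_research) * efficiency(research)
        print("period %d: %d%% on research, %.0f paperclips made"
              % (period + 1, spent_on_research, made))

The particular numbers mean nothing; the shape, heavy thinking early and pure repetition later, is the point.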

(Do human beings get less easily bored as we grow older, more tolerant of repetition, because any further discoveries are less valuable, because we have less time left to exploit them?)

If we run into aliens who don't share our version of boredom—a steady trickle of mid-level novelty as a terminal preference—then perhaps every alien throughout their civilization will just be playing the most exciting level of the most exciting video game ever discovered, over and over and over again.  Maybe with nonsentient AIs taking on the drudgework of searching for a more exciting video game.  After all, without an inherent preference for novelty, exploratory attempts will usually have less expected value than exploiting the best policy previously encountered.  And that's if you explore by trial at all, as opposed to using more abstract and efficient thinking.

Or if the aliens are rendered non-bored by seeing pixels of a slightly different shade—if their definition of sameness is more specific than ours, and their boredom less general—then from our perspective, most of their civilization will be doing the human::same thing over and over again, and hence, be very human::boring.

Or maybe if the aliens have no fear of life becoming too simple and repetitive, they'll just collapse themselves into orgasmium.

And if our version of boredom is less strict than that of the aliens, maybe they'd take one look at one day in the life of one member of our civilization, and never bother looking at the rest of us.  From our perspective, their civilization would be needlessly chaotic, and so entropic, lower in what we regard as quality; they wouldn't play the same game for long enough to get good at it.

But if our versions of boredom are similar enough—terminal preference for a stream of mid-level novelty defined relative to learning insights not previously possessed—then we might find our civilizations mutually worthy of tourism.  Each new piece of alien art would strike us as lawfully creative, high-quality according to a recognizable criterion, yet not like the other art we've already seen.

It is one of the things that would make our two species ramen rather than varelse, to invoke the Hierarchy of Exclusion.  And I've never seen anyone define those two terms well, including Orson Scott Card who invented them; but it might be something like \"aliens you can get along with, versus aliens for which there is no reason to bother trying\".

" } }, { "_id": "4KSWmJm6K3EEvHBkd", "title": "Getting Nearer", "pageUrl": "https://www.lesswrong.com/posts/4KSWmJm6K3EEvHBkd/getting-nearer", "postedAt": "2009-01-17T09:28:54.000Z", "baseScore": 16, "voteCount": 12, "commentCount": 23, "url": null, "contents": { "documentId": "4KSWmJm6K3EEvHBkd", "html": "

Reply to A Tale Of Two Tradeoffs

I'm not comfortable with compliments of the direct, personal sort, the "Oh, you're such a nice person!" type stuff that nice people are able to say with a straight face.  Even if it would make people like me more - even if it's socially expected - I have trouble bringing myself to do it.  So, when I say that I read Robin Hanson's "Tale of Two Tradeoffs", and then realized I would spend the rest of my mortal existence typing thought processes as "Near" or "Far", I hope this statement is received as a due substitute for any gushing compliments that a normal person would give at this point.

Among other things, this clears up a major puzzle that's been lingering in the back of my mind for a while now.  Growing up as a rationalist, I was always telling myself to "Visualize!" or "Reason by simulation, not by analogy!" or "Use causal models, not similarity groups!"  And those who ignored this principle seemed easy prey to blind enthusiasms, wherein one says that A is good because it is like B which is also good, and the like.

But later, I learned about the Outside View versus the Inside View, and that people asking "What rough class does this project fit into, and when did projects like this finish last time?" were much more accurate and much less optimistic than people who tried to visualize the when, where, and how of their projects.  And this didn't seem to fit very well with my injunction to "Visualize!"

So now I think I understand what this principle was actually doing - it was keeping me in Near-side mode and away from Far-side thinking.  And it's not that Near-side mode works so well in any absolute sense, but that Far-side mode is so much more pushed-on by ideology and wishful thinking, and so casual in accepting its conclusions (devoting less computing power before halting).

An example of this might be the balance between offensive and defensive nanotechnology, where I started out by - basically - just liking nanotechnology; until I got involved in a discussion about the particulars of nanowarfare, and noticed that people were postulating crazy things to make defense win.  Which made me realize and say, "Look, the balance between offense and defense has been tilted toward offense ever since the invention of nuclear weapons, and military nanotech could use nuclear weapons, and I don't see how you're going to build a molecular barricade against that."

Are the particulars of that discussion likely to be, well, correct?  Maybe not.  But so long as I wasn't thinking of any particulars, my brain had free rein to just... import whatever affective valence the word "nanotechnology" had, and use that as a snap judgment of everything.

You can still be biased about particulars, of course.  You can insist that nanotech couldn't possibly be radiation-hardened enough to manipulate U-235, which someone tried as a response (fyi: this is extremely silly).  But in my case, at least, something about thinking in particulars...

...just snapped me out of the trance, somehow.

When you're thinking using very abstract categories - rough classes low on computing power - about things distant from you, then you're also - if Robin's hypothesis is correct - more subject to ideological bias.  Together this implies you can cherry-pick those very loose categories to put X together with whatever "similar" Y is ideologically convenient, as in the old saw that "atheism is a religion" (and not playing tennis is a sport).

But the most frustrating part of all, is the casualness of it - the way that ideologically convenient Far thinking is just thrown together out of whatever ingredients come to hand.  The ten-second dismissal of cryonics, without any attempt to visualize how much information is preserved by vitrification and could be retrieved by a molecular-level scan.  Cryonics just gets casually, perceptually classified as "not scientifically verified" and tossed out the window.  Or "what if you wake up in Dystopia?" and tossed out the window.  Far thinking is casual - that's the most frustrating aspect about trying to argue with it.

This seems like an argument for writing fiction with lots of concrete details if you want people to take a subject seriously and think about it in a less biased way.  This is not something I would have thought based on my previous view.

Maybe cryonics advocates really should focus on writing fiction stories that turn on the gory details of cryonics, or viscerally depict the regret of someone who didn't persuade their mother to sign up.  (Or offering prizes to professionals who do the same; writing fiction is hard, writing SF is harder.)

But I'm worried that, for whatever reason, reading concrete fiction is a special case that doesn't work to get people to do Near-side thinking.

Or there are some people who are inspired to Near-side thinking by fiction, and only these can actually be helped by reading science fiction.

Maybe there are people who encounter big concrete detailed fictions and process them in a Near way - the sort of people who notice plot holes.  And others who just "take it all in stride", casually, so that however much concrete fictional "information" they encounter, they only process it using casual "Far" thinking.  I wonder if this difference has more to do with upbringing or genetics.  Either way, it may lie at the core of the partial yet statistically outstanding correlation between careful futurists and science fiction fans.

I expect I shall be thinking about this for a while.

" } }, { "_id": "88BpRQah9c2GWY3En", "title": "Seduced by Imagination", "pageUrl": "https://www.lesswrong.com/posts/88BpRQah9c2GWY3En/seduced-by-imagination", "postedAt": "2009-01-16T03:10:22.000Z", "baseScore": 50, "voteCount": 41, "commentCount": 20, "url": null, "contents": { "documentId": "88BpRQah9c2GWY3En", "html": "

\"Vagueness\" usually has a bad name in rationality—connoting skipped steps in reasoning and attempts to avoid falsification.  But a rational view of the Future should be vague, because the information we have about the Future is weak.  Yesterday I argued that justified vague hopes might also be better hedonically than specific foreknowledge—the power of pleasant surprises.

But there's also a more severe warning that I must deliver:  It's not a good idea to dwell much on imagined pleasant futures, since you can't actually dwell in them.  It can suck the emotional energy out of your actual, current, ongoing life.

Epistemically, we know the Past much more specifically than the Future.  But also on emotional grounds, it's probably wiser to compare yourself to Earth's past, so you can see how far we've come, and how much better we're doing.  Rather than comparing your life to an imagined future, and thinking about how awful you've got it Now.

Having set out to explain George Orwell's observation that no one can seem to write about a Utopia where anyone would want to live—having laid out the various Laws of Fun that I believe are being violated in these dreary Heavens—I am now explaining why you shouldn't apply this knowledge to invent an extremely seductive Utopia and write stories set there.  That may suck out your soul like an emotional vacuum cleaner.

I briefly remarked on this phenomenon earlier, and someone said, \"Define 'suck out your soul'.\"  Well, it's mainly a tactile thing: you can practically feel the pulling sensation, if your dreams wander too far into the Future.  It's like something out of H. P. Lovecraft:  The Call of Eutopia.  A professional hazard of having to stare out into vistas that humans were meant to gaze upon, and knowing a little too much about the lighter side of existence.

But for the record, I will now lay out the components of \"soul-sucking\", that you may recognize the bright abyss and steer your thoughts away:

Hope can be a dangerous thing.  And when you've just been hit hard—at the moment when you most need hope to keep you going—that's also when the real world seems most painful, and the world of imagination becomes most seductive.

It's a balancing act, I think.  One needs enough Fun Theory to truly and legitimately justify hope in the future.  But not a detailed vision so seductive that it steals emotional energy from the real life and real challenge of creating that future.  You need \"a light at the end of the secular rationalist tunnel\" as Roko put it, but you don't want people to drift away from their bodies into that light.

So how much light is that, exactly?  Ah, now that's the issue.

I'll start with a simple and genuine question:  Is what I've already said, enough?

Is knowing the abstract fun theory and being able to pinpoint the exact flaws in previous flawed Utopias, enough to make you look forward to tomorrow?  Is it enough to inspire a stronger will to live?  To dispel worries about a long dark tea-time of the soul?  Does it now seem—on a gut level—that if we could really build an AI and really shape it, the resulting future would be very much worth staying alive to see?

" } }, { "_id": "DGXvLNpiSYBeQ6TLW", "title": "Justified Expectation of Pleasant Surprises", "pageUrl": "https://www.lesswrong.com/posts/DGXvLNpiSYBeQ6TLW/justified-expectation-of-pleasant-surprises", "postedAt": "2009-01-15T07:26:51.000Z", "baseScore": 27, "voteCount": 26, "commentCount": 62, "url": null, "contents": { "documentId": "DGXvLNpiSYBeQ6TLW", "html": "

I recently tried playing a computer game that made a major fun-theoretic error.  (At least I strongly suspect it's an error, though they are game designers and I am not.)

The game showed me—right from the start of play—what abilities I could purchase as I increased in level.  Worse, there were many different choices; still worse, you had to pay a cost in fungible points to acquire them, making you feel like you were losing a resource...  But today, I'd just like to focus on the problem of telling me, right at the start of the game, about all the nice things that might happen to me later.

I can't think of a good experimental result that backs this up; but I'd expect that a pleasant surprise would have a greater hedonic impact, than being told about the same gift in advance.  Sure, the moment you were first told about the gift would be good news, a moment of pleasure in the moment of being told.  But you wouldn't have the gift in hand at that moment, which limits the pleasure.  And then you have to wait.  And then when you finally get the gift—it's pleasant to go from not having it to having it, if you didn't wait too long; but a surprise would have a larger momentary impact, I would think.

This particular game had a status screen that showed all my future class abilities at the start of the game—inactive and dark but with full information still displayed.  From a hedonic standpoint this seems like miserable fun theory.  All the \"good news\" is lumped into a gigantic package; the items of news would have much greater impact if encountered separately.  And then I have to wait a long time to actually acquire the abilities, so I get an extended period of comparing my current weak game-self to all the wonderful abilities I could have but don't.

Imagine living in two possible worlds.  Both worlds are otherwise rich in challenge, novelty, and other aspects of Fun.  In both worlds, you get smarter with age and acquire more abilities over time, so that your life is always getting better.

But in one world, the abilities that come with seniority are openly discussed, hence widely known; you know what you have to look forward to.

In the other world, anyone older than you will refuse to talk about certain aspects of growing up; you'll just have to wait and find out.

I ask you to contemplate—not just which world you might prefer to live in—but how much you might want to live in the second world, rather than the first.  I would even say that the second world seems more alive; when I imagine living there, my imagined will to live feels stronger.  I've got to stay alive to find out what happens next, right?

The idea that hope is important to a happy life, is hardly original with me—though I think it might not be emphasized quite enough, on the lists of things people are told they need.

I don't agree with buying lottery tickets, but I do think I understand why people do it.  I remember the times in my life when I had more or less belief that things would improve—that they were heading up in the near-term or mid-term, close enough to anticipate.  I'm having trouble describing how much of a difference it makes.  Maybe I don't need to describe that difference, unless some of my readers have never had any light at the end of their tunnels, or some of my readers have never looked forward and seen darkness.

If existential angst comes from having at least one deep problem in your life that you aren't thinking about explicitly, so that the pain which comes from it seems like a natural permanent feature—then the very first question I'd ask, to identify a possible source of that problem, would be, \"Do you expect your life to improve in the near or mid-term future?\"

Sometimes I meet people who've been run over by life, in much the same way as being run over by a truck.  Grand catastrophe isn't necessary to destroy a will to live.  The extended absence of hope leaves the same sort of wreckage.

People need hope.  I'm not the first to say it.

But I think that the importance of vague hope is underemphasized.

\"Vague\" is usually not a compliment among rationalists.  Hear \"vague hopes\" and you immediately think of, say, an alternative medicine herbal profusion whose touted benefits are so conveniently unobservable (not to mention experimentally unverified) that people will buy it for anything and then refuse to admit it didn't work.  You think of poorly worked-out plans with missing steps, or supernatural prophecies made carefully unfalsifiable, or fantasies of unearned riches, or...

But you know, generally speaking, our beliefs about the future should be vaguer than our beliefs about the past.  We just know less about tomorrow than we do about yesterday.

There are plenty of bad reasons to be vague, all sorts of suspicious reasons to offer nonspecific predictions, but reversed stupidity is not intelligence:  When you've eliminated all the ulterior motives for vagueness, your beliefs about the future should still be vague.

We don't know much about the future; let's hope that doesn't change for as long as human emotions stay what they are.  Of all the poisoned gifts a big mind could give a small one, a walkthrough for the game has to be near the top of the list.

What we need to maintain our interest in life, is a justified expectation of pleasant surprises.  (And yes, you can expect a surprise if you're not logically omniscient.)  This excludes the herbal infusions, the poorly worked-out plans, and the supernatural.  The best reason for this justified expectation is experience, that is, being pleasantly surprised on a frequent yet irregular basis.  (If this isn't happening to you, please file a bug report with the appropriate authorities.)

Vague justifications for believing in a pleasant specific outcome would be the opposite.

There's also other dangers of having pleasant hopes that are too specific—even if justified, though more often they aren't—and I plan to talk about that in the next post.

" } }, { "_id": "siPzSrEnWGq8DDGua", "title": "She has joined the Conspiracy", "pageUrl": "https://www.lesswrong.com/posts/siPzSrEnWGq8DDGua/she-has-joined-the-conspiracy", "postedAt": "2009-01-13T19:48:22.000Z", "baseScore": 13, "voteCount": 10, "commentCount": 19, "url": null, "contents": { "documentId": "siPzSrEnWGq8DDGua", "html": "

\"Kimiko\"

I have no idea whether I had anything to do with this.

" } }, { "_id": "cWjK3SbRcLkb3gN69", "title": "Building Weirdtopia", "pageUrl": "https://www.lesswrong.com/posts/cWjK3SbRcLkb3gN69/building-weirdtopia", "postedAt": "2009-01-12T20:35:27.000Z", "baseScore": 52, "voteCount": 40, "commentCount": 312, "url": null, "contents": { "documentId": "cWjK3SbRcLkb3gN69", "html": "

\"Two roads diverged in the woods.  I took the one less traveled, and had to eat bugs until Park rangers rescued me.\"
        —Jim Rosenberg

Utopia and Dystopia have something in common: they both confirm the moral sensibilities you started with.  Whether the world is a libertarian utopia of the non-initiation of violence and everyone free to start their own business, or a hellish dystopia of government regulation and intrusion—you might like to find yourself in the first, and hate to find yourself in the second; but either way you nod and say, \"Guess I was right all along.\"

So as an exercise in creativity, try writing them down side by side:  Utopia, Dystopia, and Weirdtopia.  The zig, the zag and the zog.

I'll start off with a worked example for public understanding of science:

Disclaimer 1:  Not every sensibility we have is necessarily wrong.  Originality is a goal of literature, not science; sometimes it's better to be right than to be new.  But there are also such things as cached thoughts.  At least in my own case, it turned out that trying to invent a world that went outside my pre-existing sensibilities, did me a world of good.

Disclaimer 2:  This method is not universal:  Not all interesting ideas fit this mold, and not all ideas that fit this mold are good ones.  Still, it seems like an interesting technique.

If you're trying to write science fiction (where originality is a legitimate goal), then you can write down anything nonobvious for Weirdtopia, and you're done.

If you're trying to do Fun Theory, you have to come up with a Weirdtopia that's at least arguably-better than Utopia.  This is harder but also directs you to more interesting regions of the answer space.

If you can make all your answers coherent with each other, you'll have quite a story setting on your hands.  (Hope you know how to handle characterization, dialogue, description, conflict, and all that other stuff.)

Here are some partially completed challenges, where I wrote down a Utopia and a Dystopia (according to the moral sensibilities I started with before I did this exercise), but inventing a (better) Weirdtopia is left to the reader.

Economic...

Sexual...

Governmental...

Technological...

Cognitive...

" } }, { "_id": "hQSaMafoizBSa3gFR", "title": "Eutopia is Scary", "pageUrl": "https://www.lesswrong.com/posts/hQSaMafoizBSa3gFR/eutopia-is-scary", "postedAt": "2009-01-12T05:28:34.000Z", "baseScore": 66, "voteCount": 57, "commentCount": 129, "url": null, "contents": { "documentId": "hQSaMafoizBSa3gFR", "html": "

Followup to Why is the Future So Absurd?

\"The big thing to remember about far-future cyberpunk is that it will be truly ultra-tech.  The mind and body changes available to a 23rd-century Solid Citizen would probably amaze, disgust and frighten that 2050 netrunner!\"
        —GURPS Cyberpunk

Pick up someone from the 18th century—a smart someone.  Ben Franklin, say.  Drop them into the early 21st century.

We, in our time, think our life has improved in the last two or three hundred years.  Ben Franklin is probably smart and forward-looking enough to agree that life has improved.  But if you don't think Ben Franklin would be amazed, disgusted, and frightened, then I think you far overestimate the \"normality\" of your own time.  You can think of reasons why Ben should find our world compatible, but Ben himself might not do the same.

Movies that were made in say the 40s or 50s, seem much more alien—to me—than modern movies allegedly set hundreds of years in the future, or in different universes.  Watch a movie from 1950 and you may see a man slapping a woman.  Doesn't happen a lot in Lord of the Rings, does it?  Drop back to the 16th century and one popular entertainment was setting a cat on fire.  Ever see that in any moving picture, no matter how \"lowbrow\"?

(\"But,\" you say, \"that's showing how discomforting the Past's culture was, not how scary the Future is.\"  Of which I wrote, \"When we look over history, we see changes away from absurd conditions such as everyone being a peasant farmer and women not having the vote, toward normal conditions like a majority middle class and equal rights...\")

Something about the Future would shock us 21st-century folk, if we were dropped in without slow adaptation.  This is not because the Future is cold and gloomy—I am speaking of a positive, successful Future; the negative outcomes are probably just blank.  Nor am I speaking of the idea that every Utopia has some dark hidden flaw.  I am saying that the Future would discomfort us because it is better.

\n

\n

This is another piece of the puzzle for why no author seems to have ever succeeded in constructing a Utopia worth-a-damn.  When they are out to depict how marvelous and wonderful the world could be, if only we would all be Marxists or Randians or let philosophers be kings... they try to depict the resulting outcome as comforting and safe.

\n

Again, George Orwell from \"Why Socialists Don't Believe In Fun\":

\n
\n

    \"In the last part, in contrast with disgusting Yahoos, we are shown the noble Houyhnhnms, intelligent horses who are free from human failings.  Now these horses, for all their high character and unfailing common sense, are remarkably dreary creatures.  Like the inhabitants of various other Utopias, they are chiefly concerned with avoiding fuss.  They live uneventful, subdued, 'reasonable' lives, free not only from quarrels, disorder or insecurity of any kind, but also from 'passion', including physical love.  They choose their mates on eugenic principles, avoid excesses of affection, and appear somewhat glad to die when their time comes.\"

\n
\n

One might consider, in particular contrast, Timothy Ferris's observation:

\n

    \"What is the opposite of happiness?  Sadness?  No.  Just as love and hate are two sides of the same coin, so are happiness and sadness.  Crying out of happiness is a perfect illustration of this.  The opposite of love is indifference, and the opposite of happiness is—here's the clincher—boredom...
    The question you should be asking isn't 'What do I want?' or 'What are my goals?' but 'What would excite me?'
    Remember—boredom is the enemy, not some abstract 'failure.'\"

\n

Utopia is reassuring, unsurprising, and dull.

\n

Eutopia is scary.

\n

I'm not talking here about evil means to a good end, I'm talking about the good outcomes themselves.  That is the proper relation of the Future to the Past when things turn out well, as we would know very well from history if we'd actually lived it, rather than looking back with benefit of hindsight.

\n

Now... I don't think you can actually build the Future on the basis of asking how to scare yourself.  The vast majority of possible changes are in the direction of higher entropy; only a very few discomforts stem from things getting better.

\n

\"I shock you therefore I'm right\" is one of the most annoying of all non-sequiturs, and we certainly don't want to go there.

\n

But on a purely literary level... and bearing in mind that fiction is not reality, and fiction is not optimized the way we try to optimize reality...

\n

I try to write fiction, now and then.  More rarely, I finish a story.  Even more rarely, I let someone else look at it.

\n

Once I finally got to the point of thinking that maybe you should be able to write a story set in Eutopia, I tried doing it. 

\n

But I had something like an instinctive revulsion at the indulgence of trying to build a world that fit me, but probably wouldn't fit others so nicely.

\n

So—without giving the world a seamy underside, or putting Knight Templars in charge, or anything so obvious as that—without deliberately trying to make the world flawed -

\n

I was trying to invent, even if I had to do it myself, a better world where I would be out of place.  Just like Ben Franklin would be out of place in the modern world.

\n

Definitely not someplace that a transhumanist/science-advocate/libertarian (like myself) would go, and be smugly satisfied at how well all their ideas had worked.  Down that path lay the Dark Side—certainly in a purely literary sense.

\n

And you couldn't avert that just by having the Future go wrong in all the stupid obvious ways that transhumanists, or libertarians, or public advocates of science had already warned against.  Then you just had a dystopia, and it might make a good SF story but it had already been done.

\n

But I had my world's foundation, an absurd notion inspired by a corny pun; a vision of what you see when you wake up from cryonic suspension, that I couldn't have gotten away with posting to any transhumanist mailing list even as a joke.

\n

And then, whenever I could think of an arguably-good idea that offended my sensibilities, I added it in.  The goal being to—without ever deliberately making the Future worse —make it a place where I would be as shocked as possible to see that that was how things had turned out.

\n

Getting rid of textbooks, for example—postulating that talking about science in public is socially unacceptable, for the same reason that you don't tell someone aiming to see a movie whether the hero dies at the end.  A world that had rejected my beloved concept of science as the public knowledge of humankind.

\n

Then I added up all the discomforting ideas together...

\n

...and at least in my imagination, it worked better than anything I'd ever dared to visualize as a serious proposal.

\n

My serious proposals had been optimized to look sober and safe and sane; everything voluntary, with clearly lighted exit signs, and all sorts of volume controls to prevent anything from getting too loud and waking up the neighbors.  Nothing too absurd.  Proposals that wouldn't scare the nervous, containing as little as possible that would cause anyone to make a fuss.

\n

This world was ridiculous, and it was going to wake up the neighbors.

\n

It was also seductive to the point that I had to exert a serious effort to prevent my soul from getting sucked out.  (I suspect that's a general problem; that it's a good idea emotionally (not just epistemically) to not visualize your better Future in too much detail.  You're better off comparing yourself to the Past.  I may write a separate post on this.)

\n

And so I found myself being pulled in the direction of this world in which I was supposed to be \"out of place\".  I started thinking that, well, maybe it really would be a good idea to get rid of all the textbooks, all they do is take the fun out of science.  I started thinking that maybe personal competition was a legitimate motivator (previously, I would have called it a zero-sum game and been morally aghast).  I began to worry that peace, democracy, market economies, and con—but I'd better not finish that sentence.  I started to wonder if the old vision that was so reassuring, so safe, was optimized to be good news to a modern human living in constant danger of permanent death or damage, and less optimized for the everyday existence of someone less frightened.

\n

This is what happens when I try to invent a world that fails to confirm my sensibilities?  It makes me wonder what would happen if someone else tried the same exercise.

\n

Unfortunately, I can't seem to visualize any new world that represents the same shock to me as the last one did.  Either the trick only works once, or you have to wait longer between attempts, or I'm too old now.

\n

But I hope that so long as the world offends the original you, it gets to keep its literary integrity even if you start to find it less shocking.

\n

I haven't yet published any story that gives more than a glimpse of this setting.  I'm still debating with myself whether I dare.  I don't know whether the suck-out-your-soul effect would threaten anyone but myself as author—I haven't seen it happening with Banks's Culture or Wright's Golden Oecumene, so I suspect it's more of a trap when a world fits a single person too well.  But I got enough flak when I presented the case for getting rid of textbooks.

\n

Still—I have seen the possibilities, now.  So long as no one dies permanently, I am leaning in favor of a loud and scary Future.

\n

 

\n

Part of The Fun Theory Sequence

\n

Next post: \"Building Weirdtopia\"

\n

Previous post: \"Serious Stories\"

" } }, { "_id": "QfpHRAMRM2HjteKFK", "title": "Continuous Improvement", "pageUrl": "https://www.lesswrong.com/posts/QfpHRAMRM2HjteKFK/continuous-improvement", "postedAt": "2009-01-11T02:09:01.000Z", "baseScore": 30, "voteCount": 30, "commentCount": 26, "url": null, "contents": { "documentId": "QfpHRAMRM2HjteKFK", "html": "

When is it adaptive for an organism to be satisfied with what it has?  When does an organism have enough children and enough food?  The answer to the second question, at least, is obviously \"never\" from an evolutionary standpoint.  The first proposition might be true if the reproductive risks of all available options exceed their reproductive benefits.  In general, though, it is a rare organism in a rare environment whose reproductively optimal strategy is to rest with a smile on its face, feeling happy.

To a first approximation, we might say something like \"The evolutionary purpose of emotion is to direct the cognitive processing of the organism toward achievable, reproductively relevant goals\".  Achievable goals are usually located in the Future, since you can't affect the Past.  Memory is a useful trick, but learning the lesson of a success or failure isn't the same goal as the original event—and usually the emotions associated with the memory are less intense than those of the original event.

Then the way organisms and brains are built right now, \"true happiness\" might be a chimera, a carrot dangled in front of us to make us take the next step, and then yanked out of our reach as soon as we achieve our goals.

This hypothesis is known as the hedonic treadmill.

The famous pilot studies in this domain demonstrated e.g. that past lottery winners' stated subjective well-being was not significantly greater than that of an average person, after a few years or even months.  Conversely, accident victims with severed spinal cords were not as happy as before the accident after six months—around 0.75 sd less than control groups—but they'd still adjusted much more than they had expected to adjust.

This being the transhumanist form of Fun Theory, you might perhaps say:  \"Let's get rid of this effect.  Just delete the treadmill, at least for positive events.\"

I'm not entirely sure we can get away with this.  There's the possibility that comparing good events to not-as-good events is what gives them part of their subjective quality.  And on a moral level, it sounds perilously close to tampering with Boredom itself.

So suppose that instead of modifying minds and values, we first ask what we can do by modifying the environment.  Is there enough fun in the universe, sufficiently accessible, for a transhuman to jog off the hedonic treadmill—improve their life continuously, at a sufficient rate to leap to an even higher hedonic level before they had a chance to get bored with the previous one?

This question leads us into great and interesting difficulties.

I had a nice vivid example I wanted to use for this, but unfortunately I couldn't find the exact numbers I needed to illustrate it.  I'd wanted to find a figure for the total mass of the neurotransmitters released in the pleasure centers during an average male or female orgasm, and a figure for the density of those neurotransmitters—density in the sense of mass/volume of the chemicals themselves.  From this I could've calculated how long a period of exponential improvement would be possible—how many years you could have \"the best orgasm of your life\" by a margin of at least 10%, at least once per year—before your orgasm collapsed into a black hole, the total mass having exceeded the mass of a black hole with the density of the neurotransmitters.

Plugging in some random/Fermi numbers instead:

Assume that a microgram of additional neurotransmitters is released in the pleasure centers during a standard human orgasm.  And assume that neurotransmitters have the same density as water.  Then an orgasm can reach around 10^8 solar masses before it collapses and forms a black hole, corresponding to 10^47 baseline orgasms.  If we assume that a 100mg dose of crack is as pleasurable as 10 standard orgasms, then the street value of your last orgasm is around a hundred billion trillion trillion trillion dollars.

I'm sorry.  I just had to do that calculation.

Anyway... requiring an exponential improvement eats up a factor of 10^47 in short order.  Starting from human standard and improving at 10% per year, it would take less than 1,200 years.
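
(If you want to check that Fermi estimate, here's a rough Python sketch.  The one-microgram-per-orgasm figure and the water density are the same assumed inputs as above; the rest is standard constants, and the collapse condition is simply a sphere of fixed density reaching its own Schwarzschild radius.)

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
rho = 1000.0           # assumed density of neurotransmitters (water), kg/m^3
baseline_mass = 1e-9   # assumed microgram of transmitter per standard orgasm, kg
solar_mass = 1.989e30  # kg

# A uniform sphere of density rho collapses once its radius equals its
# Schwarzschild radius, which happens at M = sqrt(3 c^6 / (32 pi G^3 rho)).
collapse_mass = math.sqrt(3 * c**6 / (32 * math.pi * G**3 * rho))

print(collapse_mass / solar_mass)     # ~1.4e8 solar masses
print(collapse_mass / baseline_mass)  # ~2.7e47 baseline orgasms

# Improving 10% per year from one baseline orgasm, years until collapse:
print(math.log(collapse_mass / baseline_mass) / math.log(1.10))  # ~1,146 years
```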

Of course you say, \"This but shows the folly of brains that use an analog representation of pleasure.  Go digital, young man!\"

If you redesigned the brain to represent the intensity of pleasure using IEEE 754 double-precision floating-point numbers, a mere 64 bits would suffice to feel pleasures up to 10^308 hedons...  in, um, whatever base you were using.

This still represents less than 7500 years of 10% annual improvement from a 1-hedon baseline, but after that amount of time, you can switch to larger floats.

Now we have lost a bit of fine-tuning by switching to IEEE-standard hedonics.  The 64-bit double-precision float has an 11-bit exponent and a 52-bit fractional part (and a 1-bit sign).  So we'll only have 52 bits of precision (16 decimal places) with which to represent our pleasures, however great they may be.  An original human's orgasm would soon be lost in the rounding error... which raises the question of how we can experience these invisible hedons, when the finite-precision bits are the whole substance of the pleasure.

We also have the odd situation that, starting from 1 hedon, flipping a single bit in your brain can make your life 10^154 times more happy.

And Hell forbid you flip the sign bit.  Talk about a need for cosmic ray shielding.

But really—if you're going to go so far as to use imprecise floating-point numbers to represent pleasure, why stop there?  Why not move to Knuth's up-arrow notation?

For that matter, IEEE 754 provides special representations for +/-INF, that is to say, positive and negative infinity.  What happens if a bit flip makes you experience infinite pleasure?  Does that mean you Win The Game?
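
(For the curious, here is a small Python sketch of what those bit flips actually do to a 64-bit double, using the struct module to poke at the raw representation.  The hedons here are ordinary doubles standing in for whatever the real encoding would be; this illustrates IEEE 754 behavior, not a proposal.)

```python
import struct

def bits(x: float) -> int:
    """Raw 64-bit IEEE 754 pattern of a double."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def from_bits(b: int) -> float:
    """Reinterpret a 64-bit pattern as a double."""
    return struct.unpack("<d", struct.pack("<Q", b))[0]

baseline = 1.0                 # one hedon
pattern = bits(baseline)

# Toggling exponent bit 61 (weight 512 in the exponent field) rescales the
# value by 2**512, a factor of about 1.3e154 -- the "10^154 times" figure.
rescaled = from_bits(pattern ^ (1 << 61))
print(baseline / rescaled)               # ~1.34e154

# Flipping the sign bit turns the pleasure into an equal-magnitude pain.
print(from_bits(pattern ^ (1 << 63)))    # -1.0

# Flipping the top exponent bit of 1.0 makes the exponent field all ones:
# the IEEE 754 encoding of +infinity.
print(from_bits(pattern ^ (1 << 62)))    # inf

# And with only 52 fraction bits (~16 decimal digits), a 1-hedon baseline
# orgasm vanishes into the rounding error of a 1e308-hedon experience.
print(1e308 + 1.0 == 1e308)              # True
```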

Now all of these questions I'm asking are in some sense unfair, because right now I don't know exactly what I have to do with any structure of bits in order to turn it into a \"subjective experience\".  Not that this is the right way to phrase the question.  It's not like there's a ritual that summons some incredible density of positive qualia that could collapse in its own right and form an epiphenomenal black hole.

But don't laugh—or at least, don't only laugh—because in the long run, these are extremely important questions.

To give you some idea of what's at stake here, Robin, in \"For Discount Rates\", pointed out that an investment earning 2% annual interest for 12,000 years adds up to a googol (10^100) times as much wealth; therefore, \"very distant future times are ridiculously easy to help via investment\".
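
(That figure is easy to check: at 2% per year the exponent accumulates linearly, so twelve thousand years of compounding lands on the order of a googol.)

```python
import math

# 12,000 years of 2% annual interest, expressed in orders of magnitude:
print(12_000 * math.log10(1.02))   # ~103, i.e. roughly a googol (1e100) times the principal
```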

I observed that there weren't a googol atoms in the observable universe, let alone within a 12,000-lightyear radius of Earth.

And Robin replied, \"I know of no law limiting economic value per atom.\"

If you've got an increasingly large number of bits—things that can be one or zero—and you're doing a proportional number of computations with them... then how fast can you grow the amount of fun, or pleasure, or value?

This echoes back to the questions in Complex Novelty, which asked how many kinds of problems and novel solutions you could find, and how many deep insights there were to be had.  I argued there that the growth rate is faster than linear in bits, e.g., humans can have much more than four times as much fun as chimpanzees even though our absolute brain volume is only around four times theirs.  But I don't think the growth in \"depth of good insights\" or \"number of unique novel problems\" is, um, faster than exponential in the size of the pattern.

Now... it might be that the Law simply permits outright that we can create very large amounts of subjective pleasure, every bit as substantial as the sort of subjective pleasure we get now, by the expedient of writing down very large numbers in a digital pleasure center.  In this case, we have got it made.  Have we ever got it made.

In one sense I can definitely see where Robin is coming from.  Suppose that you had a specification of the first 10,000 Busy Beaver machines—the longest-running Turing machines with 1, 2, 3, 4, 5... states.  This list could easily fit on a small flash memory card, made up of a few measly avogadros of atoms.
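
(As a sanity check on the claim that the list fits on a card: a rough Python estimate of the bits needed just to write down the transition tables of the first 10,000 two-symbol machines.  The encoding below is my own back-of-the-envelope choice, not anything canonical.)

```python
import math

def table_bits(n_states: int) -> int:
    """Bits to write down one n-state, 2-symbol Turing machine table:
    for each (state, symbol) pair, a write bit, a move bit, and a
    next-state field large enough to include a halt state."""
    next_state_bits = math.ceil(math.log2(n_states + 1))
    return n_states * 2 * (1 + 1 + next_state_bits)

total_bits = sum(table_bits(n) for n in range(1, 10_001))
print(total_bits / 8 / 1e6, "MB")   # a couple hundred megabytes: flash-card territory
```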

And that small flash memory card would be worth...

Well, let me put it this way:  If a mathematician said to me that the value of this memory card, was worth more than the rest of the entire observable universe minus the card...  I wouldn't necessarily agree with him outright.  But I would understand his point of view.

Still, I don't know if you can truly grok the fun contained in that memory card, without an unbounded amount of computing power with which to understand it.  Ultradense information does not give you ultradense economic value or ultradense fun unless you can also use that information in a way that consumes few resources.  Otherwise it's just More Fun Than You Can Handle.

Weber's Law of Just Noticeable Difference says that stimuli along an intensity scale typically require a fixed proportional difference, rather than any fixed absolute increment, in order for the difference to be noticeable to a human or other organism.  In other words, we may demand exponential increases because our imprecise brains can't notice smaller differences.  This would suggest that our existing pleasures might already in effect possess a floating-point representation, with an exponent and a fraction—the army of actual neurons being used only to transmit an analog signal most of whose precision is lost.  So we might be able to get away with using floats, even if we can't get away with using up-arrows.
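
(To put a number on that: with a 10% Weber fraction, the count of just-noticeable steps between a baseline and any peak intensity grows only logarithmically, which is exactly the economy a float's exponent field buys you.  A quick sketch:)

```python
import math

weber_fraction = 0.10   # assume a 10% just-noticeable difference

def noticeable_steps(peak: float, baseline: float = 1.0) -> float:
    """Number of just-noticeably-different intensities between baseline and peak."""
    return math.log(peak / baseline) / math.log(1 + weber_fraction)

print(round(noticeable_steps(1e47)))    # ~1,135 distinguishable levels
print(round(noticeable_steps(1e308)))   # ~7,441 -- the double-float-sized range
```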

But suppose that the inscrutable rules governing the substantiality of \"subjective\" pleasure actually require one neuron per hedon, or something like that.

Or suppose that we only choose to reward ourselves when we find a better solution, and that we don't choose to game the betterness metrics.

And suppose that we don't discard the Weber-Fechner law of \"just noticeable difference\", but go on demanding percentage annual improvements, year after year.

Or you might need to improve at a fractional rate in order to assimilate your own memories.  Larger brains would lay down larger memories, and hence need to grow exponentially—efficiency improvements sufficing to moderate the growth, but not to eliminate the exponent.

If fun or intelligence or value can only grow as fast as the mere cube of the brain size... and yet we demand a 2% improvement every year...

Then 350 years will pass before our resource consumption grows a single order of magnitude.

And yet there are only around 10^80 atoms in the observable universe.

Do the math.

(It works out to a lifespan of around 28,000 years.)
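
(Here is that math spelled out as a short Python sketch.  The assumed inputs are the ones above: fun growing as the cube of resources, a 2% annual demand, about 10^80 atoms, and letting the growth run through all eighty orders of magnitude.)

```python
import math

annual_fun_growth = 0.02     # demand 2% more fun every year
fun_exponent = 3             # assume fun grows only as the cube of resources
atoms_in_universe = 1e80     # rough atom count of the observable universe

# To deliver 2% more fun, resources must grow by (1.02)**(1/3) per year.
annual_resource_growth = (1 + annual_fun_growth) ** (1 / fun_exponent) - 1

years_per_order_of_magnitude = math.log(10) / math.log(1 + annual_resource_growth)
print(round(years_per_order_of_magnitude))   # ~349 years per tenfold resource increase

# Spanning all ~80 orders of magnitude of available atoms:
print(round(years_per_order_of_magnitude * math.log10(atoms_in_universe)))  # ~27,900 years
```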

Now... before everyone gets all depressed about this...

We can still hold out a fraction of hope for real immortality, aka \"emortality\".  As Greg Egan put it, \"Not dying after a very long time.  Just not dying, period.\"

The laws of physics as we know them prohibit emortality on multiple grounds.  It is a fair historical observation that, over the course of previous centuries, civilizations have become able to do things that previous civilizations called \"physically impossible\".  This reflects a change in knowledge about the laws of physics, not a change in the actual laws; and we cannot do everything once thought to be impossible.  We violate Newton's version of gravitation, but not conservation of energy.  It's a good historical bet that the future will be able to do at least one thing our physicists would call impossible.  But you can't bank on being able to violate any particular \"law of physics\" in the future.

There is just... a shred of reasonable hope, that our physics might be much more incomplete than we realize, or that we are wrong in exactly the right way, or that anthropic points I don't understand might come to our rescue and let us escape these physics (also a la Greg Egan).

So I haven't lost hope.  But I haven't lost despair, either; that would be faith.

In the case where our resources really are limited and there is no way around it...

...the question of how fast a rate of continuous improvement you demand for an acceptable quality of life—an annual percentage increase, or a fixed added amount—and the question of how much improvement you can pack into patterns of linearly increasing size—adding up to the fun-theoretic question of how fast you have to expand your resource usage over time to lead a life worth living...

...determines the maximum lifespan of sentient beings.

If you can get by with increasing the size in bits of your mind at a linear rate, then you can last for quite a while.  Until the end of the universe, in many versions of cosmology.  And you can have a child (or two parents can have two children), and the children can have children.  Linear brain size growth * linear population growth = quadratic growth, and cubic growth at lightspeed should be physically permissible.

But if you have to grow exponentially, in order for your ever-larger mind and its ever-larger memories not to end up uncomfortably squashed into too small a brain—squashed down to a point, to the point of it being pointless—then a transhuman's life is measured in subjective eons at best, and more likely subjective millennia.  Though it would be a merry life indeed.

My own eye has trouble enough looking ahead a mere century or two of growth.  It's not like I can imagine any sort of me the size of a galaxy.  I just want to live one more day, and tomorrow I will still want to live one more day.  The part about \"wanting to live forever\" is just an induction on the positive integers, not an instantaneous vision whose desire spans eternity.

If I can see to the fulfillment of all my present self's goals that I can concretely envision, shouldn't that be enough for me?  And my century-older self will also be able to see that far ahead.  And so on through thousands of generations of selfhood until some distant figure the size of a galaxy has to depart the physics we know, one way or the other...  Should that be scary?

Yeah, I hope like hell that emortality is possible.

Failing that, I'd at least like to find out one way or the other, so I can get on with my life instead of having that lingering uncertainty.

For now, one of the reasons I care about people alive today is the thought that if creating new people just divides up a finite pool of resource available here, but we live in a Big World where there are plenty of people elsewhere with their own resources... then we might not want to create so many new people here.  Six billion now, six trillion at the end of time?  Though this is more an idiom of linear growth than exponential—with exponential growth, a factor of 10 fewer people just buys you another 350 years of lifespan per person, or whatever.

But I do hope for emortality.  Odd, isn't it?  How abstract should a hope or fear have to be, before a human can stop thinking about it?

Oh, and finally—there's an idea in the literature of hedonic psychology called the \"hedonic set point\", based on identical twin studies showing that identical twins raised apart have highly similar happiness levels, more so than fraternal twins raised together, people in similar life circumstances, etcetera.  There are things that do seem to shift your set point, but not much (and permanent downward shift happens more easily than permanent upward shift, what a surprise).  Some studies have suggested that up to 80% of the variance in happiness is due to genes, or something shared between identical twins in different environments at any rate.

If no environmental improvement ever has much effect on subjective well-being, the way you are now, because you've got a more or less genetically set level of happiness that you drift back to, then...

Well, my usual heuristic is to imagine messing with environments before I imagine messing with minds.

But in this case?  Screw that.  That's just stupid.  Delete it without a qualm.

" } }, { "_id": "wSgxwXCdWiuM3R9fa", "title": "Rationality Quotes 23", "pageUrl": "https://www.lesswrong.com/posts/wSgxwXCdWiuM3R9fa/rationality-quotes-23", "postedAt": "2009-01-10T00:12:24.000Z", "baseScore": 10, "voteCount": 8, "commentCount": 2, "url": null, "contents": { "documentId": "wSgxwXCdWiuM3R9fa", "html": "

\"This year I resolve to lose weight... to be nicer to dogs... and to sprout wings and fly.\"
         -- Garfield

\n

\"People understand instinctively that the best way for computer programs to communicate with each other is for each of them to be strict in what they emit, and liberal in what they accept. The odd thing is that people themselves are not willing to be strict in how they speak and liberal in how they listen. You'd think that would also be obvious.\"
        -- Larry Wall

\n

\"One never needs enemies, but they are so much fun to acquire.\"
        -- Eluki bes Shahar, Archangel Blues

\n

\"I'm accusing you of violating the laws of nature!\"
\"Nature's virtue is intact.  I just know some different laws.\"
        -- Orson Scott Card, A Planet Called Treason

\n

\"The goal of most religions is to preserve and elaborate on that concept of the stars as a big painted backdrop.  They make Infinity a 'prop' so you don't have to think about the scary part.\"
        -- The Book of the SubGenius

\n

\n

<silverpower> Is humanity even worth saving?
<starglider> As opposed to what?
<silverpower> ...hmm.
        -- #sl4

" } }, { "_id": "6qS9q5zHafFXsB6hf", "title": "Serious Stories", "pageUrl": "https://www.lesswrong.com/posts/6qS9q5zHafFXsB6hf/serious-stories", "postedAt": "2009-01-08T23:49:35.000Z", "baseScore": 129, "voteCount": 111, "commentCount": 105, "url": null, "contents": { "documentId": "6qS9q5zHafFXsB6hf", "html": "

Every Utopia ever constructed—in philosophy, fiction, or religion—has been, to one degree or another, a place where you wouldn't actually want to live.  I am not alone in this important observation:  George Orwell said much the same thing in \"Why Socialists Don't Believe In Fun\", and I expect that many others said it earlier.

\n

If you read books on How To Write—and there are a lot of books out there on How To Write, because amazingly a lot of book-writers think they know something about writing—these books will tell you that stories must contain \"conflict\".

\n

That is, the more lukewarm sort of instructional book will tell you that stories contain \"conflict\".  But some authors speak more plainly.

\n

\"Stories are about people's pain.\"  Orson Scott Card.

\n

\"Every scene must end in disaster.\"  Jack Bickham.

\n

In the age of my youthful folly, I took for granted that authors were excused from the search for true Eutopia, because if you constructed a Utopia that wasn't flawed... what stories could you write, set there?  \"Once upon a time they lived happily ever after.\"  What use would it be for a science-fiction author to try to depict a positive Singularity, when a positive Singularity would be...

\n

...the end of all stories?

\n

It seemed like a reasonable framework with which to examine the literary problem of Utopia, but something about that final conclusion produced a quiet, nagging doubt.

\n

\n

At that time I was thinking of an AI as being something like a safe wish-granting genie for the use of individuals.  So the conclusion did make a kind of sense.  If there was a problem, you would just wish it away, right?  Ergo—no stories.  So I ignored the quiet, nagging doubt.

\n

Much later, after I concluded that even a safe genie wasn't such a good idea, it also seemed in retrospect that \"no stories\" could have been a productive indicator.  On this particular occasion, \"I can't think of a single story I'd want to read about this scenario\", might indeed have pointed me toward the reason \"I wouldn't want to actually live in this scenario\".

\n

So I swallowed my trained-in revulsion of Luddism and theodicy, and at least tried to contemplate the argument:

\n\n

In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far.  Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre:  The Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime.  Is there a single story on the list that isn't tragic?

\n

Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death.  Yet it seems we prefer to empathize with hurting, sad, dead characters.  Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise—but then why selectively praise stories containing unhappy people?  Is there some hidden benefit to us in it?  It's a puzzle either way you look at it.

\n

When I was a child I couldn't write fiction because I wrote things to go well for my characters—just like I wanted things to go well in real life.  Which I was cured of by Orson Scott Card:  Oh, I said to myself, that's what I've been doing wrong, my characters aren't hurting.  Even then, I didn't realize that the microstructure of a plot works the same way—until Jack Bickham said that every scene must end in disaster.  Here I'd been trying to set up problems and resolve them, instead of making them worse...

\n

You simply don't optimize a story the way you optimize a real life.  The best story and the best life will be produced by different criteria.

\n

In the real world, people can go on living for quite a while without any major disasters, and still seem to do pretty okay.  When was the last time you were shot at by assassins?  Quite a while, right?  Does your life seem emptier for it?

\n

But on the other hand...

\n

For some odd reason, when authors get too old or too successful, they revert to my childhood.  Their stories start going right.  They stop doing horrible things to their characters, with the result that they start doing horrible things to their readers.  It seems to be a regular part of Elder Author Syndrome.  Mercedes Lackey, Laurell K. Hamilton, Robert Heinlein, even Orson Scott bloody Card—they all went that way.  They forgot how to hurt their characters.  I don't know why.

\n

And when you read a story by an Elder Author or a pure novice—a story where things just relentlessly go right one after another—where the main character defeats the supervillain with a snap of the fingers, or even worse, before the final battle, the supervillain gives up and apologizes and then they're friends again—

\n

It's like a fingernail scraping on a blackboard at the base of your spine.  If you've never actually read a story like that (or worse, written one) then count yourself lucky.

\n

That fingernail-scraping quality—would it transfer over from the story to real life, if you tried living real life without a single drop of rain?

\n

One answer might be that what a story really needs is not \"disaster\", or \"pain\", or even \"conflict\", but simply striving.  That the problem with Mary Sue stories is that there's not enough striving in them, but they wouldn't actually need pain.  This might, perhaps, be tested.

\n

An alternative answer might be that this is the transhumanist version of Fun Theory we're talking about.  So we can reply, \"Modify brains to eliminate that fingernail-scraping feeling\", unless there's some justification for keeping it.  If the fingernail-scraping feeling is a pointless random bug getting in the way of Utopia, delete it.

\n

Maybe we should.  Maybe all the Great Stories are tragedies because... well...

\n

I once read that in the BDSM community, \"intense sensation\" is a euphemism for pain.  Upon reading this, it occurred to me that, the way humans are constructed now, it is just easier to produce pain than pleasure.  Though I speak here somewhat outside my experience, I expect that it takes a highly talented and experienced sexual artist working for hours to produce a good feeling as intense as the pain of one strong kick in the testicles—which is doable in seconds by a novice.

\n

Investigating the life of the priest and proto-rationalist Friedrich Spee von Langenfeld, who heard the confessions of accused witches, I looked up some of the instruments that had been used to produce confessions.  There is no ordinary way to make a human being feel as good as those instruments would make you hurt.  I'm not sure even drugs would do it, though my experience of drugs is as nonexistent as my experience of torture.

\n

There's something imbalanced about that.

\n

Yes, human beings are too optimistic in their planning.  If losses weren't more aversive than gains, we'd go broke, the way we're constructed now.  The experimental rule is that losing a desideratum—$50, a coffee mug, whatever—hurts between 2 and 2.5 times as much as the equivalent gain.

\n

But this is a deeper imbalance than that.  The effort-in/intensity-out difference between sex and torture is not a mere factor of 2.

\n

If someone goes in search of sensation—in this world, the way human beings are constructed now—it's not surprising that they should arrive at pains to be mixed into their pleasures as a source of intensity in the combined experience.

\n

If only people were constructed differently, so that you could produce pleasure as intense and in as many different flavors as pain!  If only you could, with the same ingenuity and effort as a torturer of the Inquisition, make someone feel as good as the Inquisition's victims felt bad

\n

But then, what is the analogous pleasure that feels that good?  A victim of skillful torture will do anything to stop the pain and anything to prevent it from being repeated.  Is the equivalent pleasure one that overrides everything with the demand to continue and repeat it?  If people are stronger-willed to bear the pleasure, is it really the same pleasure?

\n

There is another rule of writing which states that stories have to shout.  A human brain is a long way off those printed letters.  Every event and feeling needs to take place at ten times natural volume in order to have any impact at all.  You must not try to make your characters behave or feel realistically —especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they'll be too quiet to rise from the page.

\n

Maybe all the Great Stories are tragedies because happiness can't shout loud enough—to a human reader.

\n

Maybe that's what needs fixing.

\n

And if it were fixed... would there be any use left for pain or sorrow?  For even the memory of sadness, if all things were already as good as they could be, and every remediable ill already remedied?

\n

Can you just delete pain outright?  Or does removing the old floor of the utility function just create a new floor?  Will any pleasure less than 10,000,000 hedons be the new unbearable pain?

\n

Humans, built the way we are now, do seem to have hedonic scaling tendencies.  Someone who can remember starving will appreciate a loaf of bread more than someone who's never known anything but cake.  This was George Orwell's hypothesis for why Utopia is impossible in literature and reality:

\n

\"It would seem that human beings are not able to describe, nor perhaps to imagine, happiness except in terms of contrast...  The inability of mankind to imagine happiness except in the form of relief, either from effort or pain, presents Socialists with a serious problem. Dickens can describe a poverty-stricken family tucking into a roast goose, and can make them appear happy; on the other hand, the inhabitants of perfect universes seem to have no spontaneous gaiety and are usually somewhat repulsive into the bargain.\"

\n

For an expected utility maximizer, rescaling the utility function to add a trillion to all outcomes is meaningless—it's literally the same utility function, as a mathematical object.  A utility function describes the relative intervals between outcomes; that's what it is, mathematically speaking.
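
(A minimal illustration of that point: add a trillion to every outcome, or rescale by any positive constant, and the agent's choices between gambles never change.  The gambles and utility functions below are arbitrary examples.)

```python
def expected_utility(u, gamble):
    """Expected utility of a gamble given as (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in gamble)

gambles = {
    "safe":   [(1.0, 40)],
    "risky":  [(0.5, 0), (0.5, 100)],
    "middle": [(0.8, 30), (0.2, 90)],
}

u = lambda x: x ** 0.5                 # some arbitrary utility function
v = lambda x: 1e12 + 3 * u(x)          # add a trillion, rescale by a positive factor

rank_u = sorted(gambles, key=lambda g: expected_utility(u, gambles[g]))
rank_v = sorted(gambles, key=lambda g: expected_utility(v, gambles[g]))
print(rank_u)            # ['risky', 'middle', 'safe']
print(rank_u == rank_v)  # True: the "two" utility functions choose identically
```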

\n

But the human brain has distinct neural circuits for positive feedback and negative feedback, and different varieties of positive and negative feedback.  There are people today who \"suffer\" from congenital analgesia—a total absence of pain.  I never heard that insufficient pleasure becomes intolerable to them.

\n

People with congenital analgesia do have to inspect themselves carefully and frequently to see if they've cut themselves or burned a finger.  Pain serves a purpose in the human mind design...

\n

But that does not show there's no alternative which could serve the same purpose.  Could you delete pain and replace it with an urge not to do certain things that lacked the intolerable subjective quality of pain?  I do not know all the Law that governs here, but I'd have to guess that yes, you could; you could replace that side of yourself with something more akin to an expected utility maximizer.

\n

Could you delete the human tendency to scale pleasures—delete the accommodation, so that each new roast goose is as delightful as the last?  I would guess that you could.  This verges perilously close to deleting Boredom, which is right up there with Sympathy as an absolute indispensable... but to say that an old solution remains as pleasurable is not to say that you will lose the urge to seek new and better solutions.

\n

Can you make every roast goose as pleasurable as it would be in contrast to starvation, without ever having starved?

\n

Can you prevent the pain of a dust speck irritating your eye from being the new torture, if you've literally never experienced anything worse than a dust speck irritating your eye?

\n

Such questions begin to exceed my grasp of the Law, but I would guess that the answer is: yes, it can be done.  It is my experience in such matters that once you do learn the Law, you can usually see how to do weird-seeming things.

\n

So far as I know or can guess, David Pearce (The Hedonistic Imperative) is very probably right about the feasibility part, when he says:

\n

\"Nanotechnology and genetic engineering will abolish suffering in all sentient life.  The abolitionist project is hugely ambitious but technically feasible.  It is also instrumentally rational and morally urgent.  The metabolic pathways of pain and malaise evolved because they served the fitness of our genes in the ancestral environment.  They will be replaced by a different sort of neural architecture—a motivational system based on heritable gradients of bliss.  States of sublime well-being are destined to become the genetically pre-programmed norm of mental health.  It is predicted that the world's last unpleasant experience will be a precisely dateable event.\"

\n

Is that... what we want?

\n

To just wipe away the last tear, and be done?

\n

Is there any good reason not to, except status quo bias and a handful of worn rationalizations?

\n

What would be the alternative?  Or alternatives?

\n

To leave things as they are?  Of course not.  No God designed this world; we have no reason to think it exactly optimal on any dimension.  If this world does not contain too much pain, then it must not contain enough, and the latter seems unlikely.

\n

But perhaps...

\n

You could cut out just the intolerable parts of pain?

\n

Get rid of the Inquisition.  Keep the sort of pain that tells you not to stick your finger in the fire, or the pain that tells you that you shouldn't have put your friend's finger in the fire, or even the pain of breaking up with a lover.

\n

Try to get rid of the sort of pain that grinds down and destroys a mind.  Or configure minds to be harder to damage.

\n

You could have a world where there were broken legs, or even broken hearts, but no broken people.  No child sexual abuse that turns out more abusers.  No people ground down by weariness and drudging minor inconvenience to the point where they contemplate suicide.  No random meaningless endless sorrows like starvation or AIDS.

\n

And if even a broken leg still seems too scary—

\n

Would we be less frightened of pain, if we were stronger, if our daily lives did not already exhaust so much of our reserves?

\n

So that would be one alternative to Pearce's world—if there are yet other alternatives, I haven't thought them through in any detail.

\n

The path of courage, you might call it—the idea being that if you eliminate the destroying kind of pain and strengthen the people, then what's left shouldn't be that scary.

\n

A world where there is sorrow, but not massive systematic pointless sorrow, like we see on the evening news.  A world where pain, if it is not eliminated, at least does not overbalance pleasure.  You could write stories about that world, and they could read our stories.

\n

I do tend to be rather conservative around the notion of deleting large parts of human nature.  I'm not sure how many major chunks you can delete until that balanced, conflicting, dynamic structure collapses into something simpler, like an expected pleasure maximizer.

\n

And so I do admit that it is the path of courage that appeals to me.

\n

Then again, I haven't lived it both ways.

\n

Maybe I'm just afraid of a world so different as Analgesia—wouldn't that be an ironic reason to walk \"the path of courage\"?

\n

Maybe the path of courage just seems like the smaller change—maybe I just have trouble empathizing over a larger gap.

\n

But \"change\" is a moving target.

\n

If a human child grew up in a less painful world—if they had never lived in a world of AIDS or cancer or slavery, and so did not know these things as evils that had been triumphantly eliminated—and so did not feel that they were \"already done\" or that the world was \"already changed enough\"...

\n

Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone's lover stops loving them?

\n

And then what?  Is there a point where Romeo and Juliet just seems less and less relevant, more and more a relic of some distant forgotten world?  Does there come some point in the transhuman journey where the whole business of the negative reinforcement circuitry, can't possibly seem like anything except a pointless hangover to wake up from?

\n

And if so, is there any point in delaying that last step?  Or should we just throw away our fears and... throw away our fears?

\n

I don't know.

" } }, { "_id": "qyTCeEikY4cDT96ik", "title": "Rationality Quotes 22", "pageUrl": "https://www.lesswrong.com/posts/qyTCeEikY4cDT96ik/rationality-quotes-22", "postedAt": "2009-01-07T12:00:50.000Z", "baseScore": 9, "voteCount": 7, "commentCount": 7, "url": null, "contents": { "documentId": "qyTCeEikY4cDT96ik", "html": "

\"Two roads diverged in the woods.  I took the one less traveled, and had to eat bugs until Park rangers rescued me.\"
        -- Jim Rosenberg

\n

\"Lying to yourself about specific actions is easier than re-defining the bounds of your imagined identity...  When I see once-ethical men devolve into moral grey, they still identify as upstanding.\"
       -- Ben Casnocha

\n

\"Every year buy a clean bed sheet, date it, and lay it over the previous layer of junk on your desk.\"
        -- Vernor Vinge, The Blabber

\n

\"Like first we tossed out the bath water, then the baby, and like finally the whole tub.\"
        -- John R. Andrews

\n

\"I have no wisdom.  Yet I heard a wise man - soon to be a relative of marriage - say not long ago that all is for the best.  We are but dreams, and dreams possess no life by their own right.  See, I am wounded.  (Holds out his hand.)  When my wound heals, it will be gone.  Should it with its bloody lips say it is sorry to heal?  I am only trying to explain what another said, but that is what I think he meant.\"
        -- Gene Wolfe, The Claw of the Conciliator

\n

\"On a grand scale we simply want to save the world, so obviously we're just letting ourselves in for a lot of disappointment and we're doomed to failure since we didn't pick some cheap-ass two-bit goal like collecting all the Garbage Pail Kids cards.\"
        -- Nenslo

\n

\"He promised them nothing but blood, iron, and fire, and offered them only the choice of going to find it or of waiting for it to find them at home.\"
        -- John Barnes, One For the Morning Glory

" } }, { "_id": "ZmDEbiEeXk3Wv2sLH", "title": "Emotional Involvement", "pageUrl": "https://www.lesswrong.com/posts/ZmDEbiEeXk3Wv2sLH/emotional-involvement", "postedAt": "2009-01-06T22:23:24.000Z", "baseScore": 26, "voteCount": 17, "commentCount": 53, "url": null, "contents": { "documentId": "ZmDEbiEeXk3Wv2sLH", "html": "

Followup to: Evolutionary Psychology, Thou Art Godshatter, Existential Angst Factory

Can your emotions get involved in a video game?  Yes, but not much.  Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it's probably not remotely close to the feeling of triumph you'd get from saving the world in real life.  I've played video games powerful enough to bring tears to my eyes, but they still aren't as powerful as the feeling of significantly helping just one single real human being.

Because when the video game is finished, and you put it away, the events within the game have no long-term consequences.

Maybe if you had a major epiphany while playing...  But even then, only your thoughts would matter; the mere fact that you saved the world, inside the game, wouldn't count toward anything in the continuing story of your life.

Thus fails the Utopia of playing lots of really cool video games forever.  Even if the games are difficult, novel, and sensual, this is still the idiom of life chopped up into a series of disconnected episodes with no lasting consequences.  A life in which equality of consequences is forcefully ensured, or in which little is at stake because all desires are instantly fulfilled without individual work—these likewise will appear as flawed Utopias of dispassion and angst.  \"Rich people with nothing to do\" syndrome.  A life of disconnected episodes and unimportant consequences is a life of weak passions, of emotional uninvolvement.

Our emotions, for all the obvious evolutionary reasons, tend to associate to events that had major reproductive consequences in the ancestral environment, and to invoke the strongest passions for events with the biggest consequences:

Falling in love... birthing a child... finding food when you're starving... getting wounded... being chased by a tiger... your child being chased by a tiger... finally killing a hated enemy...

Our life stories are not now, and will not be, what they once were.

If one is to be conservative in the short run about changing minds, then we can get at least some mileage from changing the environment.  A windowless office filled with highly repetitive non-novel challenges isn't any more conducive to emotional involvement than video games; it may be part of real life, but it's a very flat part.  The occasional exciting global economic crash that you had no personal control over, does not particularly modify this observation.

But we don't want to go back to the original savanna, the one where you got a leg chewed off and then starved to death once you couldn't walk.  There are things we care about tremendously in the sense of hating them so much that we want to drive their frequency down to zero, not by the most interesting way, just as quickly as possible, whatever the means.  If you drive the thing it binds to down to zero, where is the emotion after that?

And there are emotions we might want to think twice about keeping, in the long run.  Does racial prejudice accomplish anything worthwhile?  I pick this as a target, not because it's a convenient whipping boy, but because unlike e.g. \"boredom\" it's actually pretty hard to think of a reason transhumans would want to keep this neural circuitry around.  Readers who take this as a challenge are strongly advised to remember that the point of the question is not to show off how clever and counterintuitive you can be.

But if you lose emotions without replacing them, whether by changing minds, or by changing life stories, then the world gets a little less involving each time; there's that much less material for passion.  And your mind and your life become that much simpler, perhaps, because there are fewer forces at work—maybe even threatening to collapse you into an expected pleasure maximizer.  If you don't replace what is removed.

In the long run, if humankind is to make a new life for itself...

We, and our descendants, will need some new emotions.

This is the aspect of self-modification in which one must above all take care—modifying your goals.  Whatever you want, becomes more likely to happen; to ask what we ought to make ourselves want, is to ask what the future should be.

Add emotions at random—bind positive reinforcers or negative reinforcers to random situations and ways the world could be—and you'll just end up doing what is prime instead of what is good.  So adding a bunch of random emotions does not seem like the way to go.

Asking what happens often, and binding happy emotions to that, so as to increase happiness—or asking what seems easy, and binding happy emotions to that—making isolated video games artificially more emotionally involving, for example—

At that point, it seems to me, you've pretty much given up on eudaimonia and moved to maximizing happiness; you might as well replace brains with pleasure centers, and civilizations with hedonium plasma.

I'd suggest, rather, that one start with the idea of new major events in a transhuman life, and then bind emotions to those major events and the sub-events that surround them.  What sort of major events might a transhuman life embrace?  Well, this is the point at which I usually stop speculating.  \"Science!  They should be excited by science!\" is something of a bit-too-obvious and I dare say \"nerdy\" answer, as is \"Math!\" or \"Money!\"  (Money is just our civilization's equivalent of expected utilon balancing anyway.)  Creating a child—as in my favored saying, \"If you can't design an intelligent being from scratch, you're not old enough to have kids\"—is one candidate for a major transhuman life event, and anything you had to do along the way to creating a child would be a candidate for new emotions.  This might or might not have anything to do with sex—though I find that thought appealing, being something of a traditionalist.  All sorts of interpersonal emotions carry over for as far as my own human eyes can see—the joy of making allies, say; interpersonal emotions get more complex (and challenging) along with the people, which makes them an even richer source of future fun.  Falling in love?  Well, it's not as if we're trying to construct the Future out of anything other than our preferences—so do you want that to carry over?

But again—this is usually the point at which I stop speculating.  It's hard enough to visualize human Eutopias, let alone transhuman ones.

The essential idiom I'm suggesting is something akin to how evolution gave humans lots of local reinforcers for things that in the ancestral environment related to evolution's overarching goal of inclusive reproductive fitness.  Today, office work might be highly relevant to someone's sustenance, but—even leaving aside the lack of high challenge and complex novelty, and the fact that it's not sensually involving because we don't have native brainware to support the domain—office work is not emotionally involving because office work wasn't ancestrally relevant.  If office work had been around for millions of years, we'd find it a little less hateful, and experience a little more triumph on filling out a form, one suspects.

Now you might run away shrieking from the dystopia I've just depicted—but that's because you don't see office work as eudaimonic in the first place, one suspects.  And because of the lack of high challenge and complex novelty involved.  In an \"absolute\" sense, office work would seem somewhat less tedious than gathering fruits and eating them.

But the idea isn't necessarily to have fun doing office work.  Just like it's not necessarily the idea to have your emotions activate for video games instead of real life.

The idea is that once you construct an existence / life story that seems to make sense, then it's all right to bind emotions to the parts of that story, with strength proportional to their long-term impact.  The anomie of today's world, where we simultaneously (a) engage in office work and (b) lack any passion in it, does not need to carry over: you should either fix one of those problems, or the other.

On a higher, more abstract level, this carries over the idiom of reinforcement over instrumental correlates of terminal values.  In principle, this is something that a purer optimization process wouldn't do.  You need neither happiness nor sadness to maximize expected utility.  You only need to know which actions result in which consequences, and update that pure probability distribution as you learn through observation; something akin to \"reinforcement\" falls out of this, but without the risk of losing purposes, without any pleasure or pain.  An agent like this is simpler than a human and more powerful—if you think that your emotions give you a supernatural advantage in optimization, you've entirely failed to understand the math of this domain.  For a pure optimizer, the \"advantage\" of starting out with one more emotion bound to instrumental events is like being told one more abstract belief about which policies maximize expected utility, except that the belief is very hard to update based on further experience.

But it does not seem to me, that a mind which has the most value, is the same kind of mind that most efficiently optimizes values outside it.  The interior of a true expected utility maximizer might be pretty boring, and I even suspect that you can build them to not be sentient.

For as far as my human eyes can see, I don't know what kind of mind I should value, if that mind lacks pleasure and happiness and emotion in the everyday events of its life.  Bearing in mind that we are constructing this Future using our own preferences, not having it handed to us by some inscrutable external author.

If there's some better way of being (not just doing) that stands somewhere outside this, I have not yet understood it well enough to prefer it.  But if so, then all this discussion of emotion would be as moot as it would be for an expected utility maximizer—one which was not valued at all for itself, but only valued for that which it maximized.

It's just hard to see why we would want to become something like that, bearing in mind that morality is not an inscrutable light handing down awful edicts from somewhere outside us.

At any rate—the hell of a life of disconnected episodes, where your actions don't connect strongly to anything you strongly care about, and nothing that you do all day invokes any passion—this angst seems avertible, however often it pops up in poorly written Utopias.

" } }, { "_id": "HD93frfJaYG6kMwCt", "title": "Rationality Quotes 21", "pageUrl": "https://www.lesswrong.com/posts/HD93frfJaYG6kMwCt/rationality-quotes-21", "postedAt": "2009-01-06T02:45:08.000Z", "baseScore": 9, "voteCount": 7, "commentCount": 6, "url": null, "contents": { "documentId": "HD93frfJaYG6kMwCt", "html": "

\"The most dangerous thing in the world is finding someone you agree with.  If a TV station news is saying exactly what you think is right, BEWARE!  You are very likely only reinforcing your beliefs, and not being supplied with new information.\"
        -- SmallFurryCreature

\n

\"Companies deciding which kind of toothpaste to market have much more rigorous, established decision-making processes to refer to than the most senior officials of the U.S. government deciding whether or not to go to war.\"
        -- Michael Mazarr

\n

\"Everything I ever thought about myself - who I was, what I am - was a lie.  You have no idea how astonishingly liberating that feels.\"
        -- Neil Gaiman, Stardust

\n

\"Saul speaks of the 'intense desire for survival on the part of virtually everyone on earth,' and our 'failure' in spite of this.  I have often pointed out that the so-called 'survival instinct' is reliable only in clear and present danger - and even then only if the individual is still relatively healthy and vigorous.  If the danger is indirect, or remote in time, or if the person is weak or depressed - or even if required action would violate established habits - forget the 'survival instinct.' It isn't that simple.\"
        -- Robert Ettinger on cryonics

\n

\"We are running out of excuses.  We just have to admit that real AI is one of the lowest research priorities, ever.  Even space is considered more important.  Nevermind baseball, tribology, or TV evangelism.\"
        -- Eugen Leitl

\n

\"That's something that's always struck me as odd about humanity.  Our first response to someone's bad news is \"I'm sorry\", as though we feel that someone should take responsibility for all the $&#ed up randomness that goes on in this universe.\"
        -- Angels 2200

\n

\"We were supremely lucky to be born into this precise moment in history.  It is with an ever-cresting crescendo of wonder and enthusiasm that I commend you to the great adventure and the great adventure to you.\"
        -- Jeff Davis

" } }, { "_id": "QZs4vkC7cbyjL9XA9", "title": "Changing Emotions", "pageUrl": "https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions", "postedAt": "2009-01-05T00:05:14.000Z", "baseScore": 48, "voteCount": 43, "commentCount": 57, "url": null, "contents": { "documentId": "QZs4vkC7cbyjL9XA9", "html": "

    Lest anyone reading this journal of a primitive man should think we spend our time mired in abstractions, let me also say that I am discovering the richness available to those who are willing to alter their major characteristics.  The variety of emotions available to a reconfigured human mind, thinking thoughts impossible to its ancestors...
    The emotion of -*-, describable only as something between sexual love and the joy of intellection—making love to a thought?  Or &&, the true reverse of pain, not \"pleasure\" but a \"warning\" of healing, growth and change. Or (^+^), the most complex emotion yet discovered, felt by those who consciously endure the change between mind configurations, and experience the broad spectrum of possibilities inherent in thinking and being.
        —Greg Bear, Eon

So... I'm basically on board with that sort of thing as a fine and desirable future.  But I think that the difficulty and danger of fiddling with emotions is oft-underestimated.  Not necessarily underestimated by Greg Bear, per se; the above journal entry is from a character who was receiving superintelligent help.

But I still remember one time on the Extropians mailing list when someone talked about creating a female yet \"otherwise identical\" copy of himself.  Something about that just fell on my camel's back as the last straw.  I'm sorry, but there are some things that are much more complicated to actually do than to rattle off as short English phrases, and \"changing sex\" has to rank very high on that list.  Even if you're omnipotent so far as raw ability goes, it's not like people have a binary attribute reading \"M\" or \"F\" that can be flipped as a primitive action.

Changing sex makes a good, vivid example of the sort of difficulties you might run into when messing with emotional architecture, so I'll use it as my archetype:

Let's suppose that we're talking about an M2F transformation.  (F2M should be a straightforward transform of this discussion; I do want to be specific rather than talking in vague generalities, but I don't want to parallelize every sentence.)  (Oddly enough, every time I can recall hearing someone say \"I want to know what it's like to be the opposite sex\", the speaker has been male.  I don't know if that's a genuine gender difference in wishes, or just a selection effect in which spoken wishes reach my ears.)

Want to spend a week wearing a female body?  Even at this very shallow level, we're dealing with drastic remappings of at least some segments of the sensorimotor cortex and cerebellum—the somatic map, the motor map, the motor reflexes, and the motor skills.  As a male, you know how to operate a male body, but not a female one.  If you're a master martial artist as a male, you won't be a master martial artist as a female (or vice versa, of course) unless you either spend another year practicing, or some AI subtly tweaks your skills to be what they would have been in a female body—think of how odd that experience would be.

Already we're talking about some pretty significant neurological changes.  Strong enough to disrupt personal identity, if taken in one shot?  That's a difficult question to answer, especially since I don't know what experiment to perform to test any hypotheses.  On one hand, billions of neurons in my visual cortex undergo massive changes of activation every time my eyes squeeze shut when I sneeze—the raw number of flipped bits is not the key thing in personal identity.  But we are already talking about serious changes of information, on the order of going to sleep, dreaming, forgetting your dreams, and waking up the next morning as though it were the next moment.  Not informationally trivial transforms like uploading.

What about sex?  (Somehow it's always about sex, at least when it's men asking the question.)  Remapping the connections from the remapped somatic areas to the pleasure center will... give you a vagina-shaped penis, more or less.  That doesn't make you a woman.  You'd still be attracted to girls, and no, that would not make you a lesbian; it would make you a normal, masculine man wearing a female body like a suit of clothing.

What would it take for a man to actually become the female version of himself?

Well... what does that sentence even mean?  I am reminded of someone who replied to the statement \"Obama would not have become President if he hadn't been black\" by saying \"If Obama hadn't been black, he wouldn't have been Obama\" i.e. \"There is no non-black Obama who could fail to become President\".  (You know you're in trouble when non-actual possible worlds start having political implications.)

The person you would have been if you'd been born with an X chromosome in place of your Y chromosome (or vice versa) isn't you.  If you had a twin female sister, the two of you would not be the same person.  There are genes on your Y chromosome that tweaked your brain to some extent, helping to construct your personal identity—alleles with no analogue on the X chromosome.  There is no version of you, even genetically, who is the opposite sex.

And if we halt your body, swap out your Y chromosome for your father's X chromosome, and restart your body... well.  That doesn't sound too safe, does it?  Your neurons are already wired in a male pattern, just as your body already developed in a male pattern.  I don't know what happens to your testicles, and I don't know what happens to your brain, either.  Maybe your circuits would slowly start to rewire themselves under the influence of the new genetic instructions.  At best you'd end up as a half-baked cross between male brain and female brain.  At worst you'd go into a permanent epileptic fit and die—we're dealing with circumstances way outside the evolutionary context under which the brain was optimized for robustness.  Either way, your brain would not look like your twin sister's brain that had developed as female from the beginning.

So to actually become female...

We're talking about a massive transformation here, billions of neurons and trillions of synapses rearranged.  Not just form, but content—just like a male judo expert would need skills repatterned to become a female judo expert, so too, you know how to operate a male brain but not a female brain.  You are the equivalent of a judo expert at one, but not the other.  You have cognitive reflexes, and consciously learned cognitive skills as well.

If I fell asleep and woke up as a true woman—not in body, but in brain—I don't think I'd call her \"me\".  The change is too sharp, if it happens all at once.

Transform the brain gradually?  Hm... now we have to design the intermediate stages, and make sure the intermediate stages make self-consistent sense.  Evolution built and optimized a self-consistent male brain and a self-consistent female brain; it didn't design the parts to be stable during an intermediate transition between the two.  Maybe you've got to redesign other parts of the brain just to keep working through the transition.

What happens when, as a woman, you think back to your memory of looking at Angelina Jolie photos as a man?  How do you empathize with your past self of the opposite sex?  Do you flee in horror from the person you were?  Are all your life's memories distant and alien things?  How can you remember, when your memory is a recorded activation pattern for neural circuits that no longer exist in their old forms?  Do we rewrite all your memories, too?

Well... maybe we could retain your old male brainware through the transformation, and set up a dual system of male and female circuits... such that you are currently female, but retain the ability to recall and empathize with your past memories as if they were running on the same male brainware that originally laid them down...

Sounds complicated, doesn't it?  It seems that to transform a male brain into someone who can be a real female, we can't just rewrite you as a female brain.  That just kills you and replaces you with someone re-imagined as a different person.  Instead we have to rewrite you as a more complex brain with a novel, non-ancestral architecture that can cross-operate in realtime between male and female modes, so that a female can process male memories with a remembered context that includes the male brainware that laid them down.

To make you female, and yet still you, we have to step outside the human design space in order to preserve continuity with your male self.

And when your little adventure is over and you go back to being a man—if you still want to, because even if your past self wanted to go back afterward, why should that desire be binding on your present self?—then we've got to keep the dual architecture so you don't throw up every time you remember what you did on your vacation.

Assuming you did have sex as a woman, rather than fending off all comers because they didn't look like they were interested in a long-term relationship.

But then, you probably would experiment.  You'll never have been a little girl, and you won't remember going through high school where any girl who slept with a boy was called a slut by the other girls.  You'll remember a very atypical past for a woman—but there's no way to fix that while keeping you the same person.

And all that was just what it takes to ranma around within human-space, from the male pole to the female pole and back again.

What if you wanted to move outside the human space entirely?

In one sense, a sex change is admittedly close to a worst-case scenario: a fixed target not optimized for an easy transition from your present location; involving, not just new brain areas, but massive coordinated changes to brain areas already in place.

It might be a lot easier to just add one more emotion to those already there.  Maybe.

In another sense, though, a sex change is close to a best-case scenario: the prototype of your destination is already extensively tested as a coherent mind, and known to function well within a human society that already has a place for it (including companions to talk to).

It might be a lot harder to enter uncharted territory.  Maybe.

I'm not saying—of course—that it could never, ever be done.  But it's another instance of the great chicken-and-egg dilemma that is the whole story of present-day humanity, the great challenge that intelligent life faces in its flowering: growing up is a grownup-level problem.  You could try to build a cleanly-designed artificial grownup (self-improving Friendly AI) to foresee the pathway ahead and chart out a nonfatal course.  Or you could plunge ahead yourself, and hope that you grew faster than your problems did.

It's the same core challenge either way: growing up is an adult problem.  There are difficult ways out of this trap, but no easy ones; extra-ordinary solutions, but no ordinary ones.  People ask me why I take all these difficulties upon myself.  It's because all the easier ways, once you examine them in enough fine detail, turn out to be illusions, or contain just as much difficulty themselves—the same sort of hidden difficulty as \"I'd like to try being the opposite sex for a week\".

It seems to me that there is just an irreducible residue of very hard problems associated with an adult version of humankind ever coming into being.

And emotions would be among the most dangerous targets of meddling.  Make the wrong shift, and you won't want to change back.

We can't keep these exact human emotions forever.  Anyone want to still want to eat chocolate-chip cookies when the last sun grows cold?  I didn't think so.

But if we replace our emotions with random die-rolls, then we'll end up wanting to do what is prime, instead of what's right.

Some emotional changes can be desirable, but random replacement seems likely to be undesirable on average.  So there must be criteria that distinguish good emotional changes from bad emotional changes.  What are they?

" } }, { "_id": "ndfd4abtjnvG8AZom", "title": "Repeated thought", "pageUrl": "https://www.lesswrong.com/posts/ndfd4abtjnvG8AZom/repeated-thought", "postedAt": "2009-01-04T21:14:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "ndfd4abtjnvG8AZom", "html": "

Eliezer Yudkowsky of OB suggests thinking and doing entirely new things for a day: 

\n
\n
\n
Don’t read any book you’ve read before.  Don’t read any author you’ve read before.  Don’t visit any website you’ve visited before.  Don’t play any game you’ve played before.  Don’t listen to familiar music that you already know you’ll like.  If you go on a walk, walk along a new path even if you have to drive to a different part of the city for your walk.  Don’t go to any restaurant you’ve been to before, order a dish that you haven’t had before.  Talk to new people (even if you have to find them in an IRC channel) about something you don’t spend much time discussing.
\n
\n
And most of all, if you become aware of yourself musing on any thought you’ve thunk before, then muse on something else. Rehearse no old grievances, replay no old fantasies.
\n
\n
\n
The comments and its reposting to MR suggest that this is popular advice. 
\n
\n
\n
It’s interesting that, despite the warm reception, this idea needs pointing out, and trying for one experimental day. 
\n
\n
Having habits for things like brushing teeth is useful – the more automatic uninteresting or unenjoyable experiences are, the more time and thought can be devoted to other things. Habits for places to go could be argued for – if you love an experience, why change it? 
\n
\n
But why should we want to repeat thoughts a lot? Seems we say we don't. So, why do we do it? Do we do it? If we can stop when Eliezer suggests it, why don't we notice and stop on our own? Is it that habits are unconscious, a state that doesn't lend itself to noticing things? Has the usefulness of other habits made us so habitual that our thoughts are caught up in it? 
\n
\n
What can we do about it?
\n
\n
As a side note, perhaps the quantity of unconscious habit in a life is related to the way time speeds up as you age.
\n

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "EQkELCGiGQwvrrp3L", "title": "Growing Up is Hard", "pageUrl": "https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard", "postedAt": "2009-01-04T03:55:49.000Z", "baseScore": 55, "voteCount": 44, "commentCount": 42, "url": null, "contents": { "documentId": "EQkELCGiGQwvrrp3L", "html": "

Terrence Deacon's The Symbolic Species is the best book I've ever read on the evolution of intelligence.  Deacon somewhat overreaches when he tries to theorize about what our X-factor is; but his exposition of its evolution is first-class.

Deacon makes an excellent case—he has quite persuaded me—that the increased relative size of our frontal cortex, compared to other hominids, is of overwhelming importance in understanding the evolutionary development of humanity.  It's not just a question of increased computing capacity, like adding extra processors onto a cluster; it's a question of what kind of signals dominate, in the brain.

People with Williams Syndrome (caused by deletion of a certain region on chromosome 7) are hypersocial, ultra-gregarious; as children they fail to show a normal fear of adult strangers.  WSers are cognitively impaired on most dimensions, but their verbal abilities are spared or even exaggerated; they often speak early, with complex sentences and large vocabulary, and excellent verbal recall, even if they can never learn to do basic arithmetic.

Deacon makes a case for some Williams Syndrome symptoms coming from a frontal cortex that is relatively too large for a human, with the result that prefrontal signals—including certain social emotions—dominate more than they should.

\"Both postmortem analysis and MRI analysis have revealed brains with a reduction of the entire posterior cerebral cortex, but a sparing of the cerebellum and frontal lobes, and perhaps even an exaggeration of cerebellar size,\" says Deacon.

Williams Syndrome's deficits can be explained by the shrunken posterior cortex—they can't solve simple problems involving shapes, because the parietal cortex, which handles shape-processing, is diminished.  But the frontal cortex is not actually enlarged; it is simply spared.  So where do WSers' augmented verbal abilities come from?

Perhaps because the signals sent out by the frontal cortex, saying \"pay attention to this verbal stuff!\", win out over signals coming from the shrunken sections of the brain.  So the verbal abilities get lots of exercise—and other abilities don't.

Similarly with the hyper-gregarious nature of WSers; the signal saying \"Pay attention to this person!\", originating in the frontal areas where social processing gets done, dominates the emotional landscape.

And Williams Syndrome is not frontal enlargement, remember; it's just frontal sparing in an otherwise shrunken brain, which increases the relative force of frontal signals...

...beyond the narrow parameters within which a human brain is adapted to work.

I mention this because you might look at the history of human evolution, and think to yourself, \"Hm... to get from a chimpanzee to a human... you enlarge the frontal cortex... so if we enlarge it even further...\"

The road to +Human is not that simple.

Hominid brains have been tested billions of times over through thousands of generations.  But you shouldn't reason qualitatively, \"Testing creates 'robustness', so now the human brain must be 'extremely robust'.\"  Sure, we can expect the human brain to be robust against some insults, like the loss of a single neuron.  But testing in an evolutionary paradigm only creates robustness over the domain tested.  Yes, sometimes you get robustness beyond that, because sometimes evolution finds simple solutions that prove to generalize—

But people do go crazy.  Not colloquial crazy, actual crazy.  Some ordinary young man in college suddenly decides that everyone around them is staring at them because they're part of the conspiracy.  (I saw that happen once, and made a classic non-Bayesian mistake; I knew that this was archetypal schizophrenic behavior, but I didn't realize that similar symptoms can arise from many other causes.  Psychosis, it turns out, is a general failure mode, \"the fever of CNS illnesses\"; it can also be caused by drugs, brain tumors, or just sleep deprivation.  I saw the perfect fit to what I'd read of schizophrenia, and didn't ask \"What if other things fit just as perfectly?\"  So my snap diagnosis of schizophrenia turned out to be wrong; but as I wasn't foolish enough to try to handle the case myself, things turned out all right in the end.)

Wikipedia says that the current main hypotheses being considered for psychosis are (a) too much dopamine in one place (b) not enough glutamate somewhere else.  (I thought I remembered hearing about serotonin imbalances, but maybe that was something else.)

That's how robust the human brain is: a gentle little neurotransmitter imbalance—so subtle they're still having trouble tracking it down after who knows how many fMRI studies—can give you a full-blown case of stark raving mad.

I don't know how often psychosis happens to hunter-gatherers, so maybe it has something to do with a modern diet?  We're not getting exactly the right ratio of Omega 6 to Omega 3 fats, or we're eating too much processed sugar, or something.  And among the many other things that go haywire with the metabolism as a result, the brain moves into a more fragile state that breaks down more easily...

Or whatever.  That's just a random hypothesis.  By which I mean to say:  The brain really is adapted to a very narrow range of operating parameters.  It doesn't tolerate a little too much dopamine, just as your metabolism isn't very robust against non-ancestral ratios of Omega 6 to Omega 3.  Yes, sometimes you get bonus robustness in a new domain, when evolution solves W, X, and Y using a compact adaptation that also extends to novel Z.  Other times... quite often, really... Z just isn't covered.

Often, you step outside the box of the ancestral parameter ranges, and things just plain break.

Every part of your brain assumes that all the other surrounding parts work a certain way.  The present brain is the Environment of Evolutionary Adaptedness for every individual piece of the present brain.

Start modifying the pieces in ways that seem like \"good ideas\"—making the frontal cortex larger, for example—and you start operating outside the ancestral box of parameter ranges.  And then everything goes to hell.  Why shouldn't it?  Why would the brain be designed for easy upgradability?

Even if one change works—will the second?  Will the third?  Will all four changes work well together?  Will the fifth change have all that greater a probability of breaking something, because you're already operating that much further outside the ancestral box?  Will the sixth change prove that you exhausted all the brain's robustness in tolerating the changes you made already, and now there's no adaptivity left?

Poetry aside, a human being isn't the seed of a god.  We don't have neat little dials that you can easily tweak to more \"advanced\" settings.  We are not designed for our parts to be upgraded.  Our parts are adapted to work exactly as they are, in their current context, every part tested in a regime of the other parts being the way they are.  Idiot evolution does not look ahead, it does not design with the intent of different future uses.  We are not designed to unfold into something bigger.

Which is not to say that it could never, ever be done.

You could build a modular, cleanly designed AI that could make a billion sequential upgrades to itself using deterministic guarantees of correctness.  A Friendly AI programmer could do even more arcane things to make sure the AI knew what you would-want if you understood the possibilities.  And then the AI could apply superior intelligence to untangle the pattern of all those neurons (without simulating you in such fine detail as to create a new person), and to foresee the consequences of its acts, and to understand the meaning of those consequences under your values.  And the AI could upgrade one thing while simultaneously tweaking the five things that depend on it and the twenty things that depend on them.  Finding a gradual, incremental path to greater intelligence (so as not to effectively erase you and replace you with someone else) that didn't drive you psychotic or give you Williams Syndrome or a hundred other syndromes.

Or you could walk the path of unassisted human enhancement, trying to make changes to yourself without understanding them fully.  Sometimes changing yourself the wrong way, and being murdered or suspended to disk, and replaced by an earlier backup.  Racing against the clock, trying to raise your intelligence without breaking your brain or mutating your will.  Hoping you became sufficiently super-smart that you could improve the skill with which you modified yourself.  Before your hacked brain moved so far outside ancestral parameters and tolerated so many insults that its fragility reached a limit, and you fell to pieces with every new attempted modification beyond that.  Death is far from the worst risk here.  Not every form of madness will appear immediately when you branch yourself for testing—some insanities might incubate for a while before they became visible.  And you might not notice if your goals shifted only a bit at a time, as your emotional balance altered with the strange new harmonies of your brain.

Each path has its little upsides and downsides.  (E.g:  AI requires supreme precise knowledge; human upgrading has a nonzero probability of success through trial and error.  Malfunctioning AIs mostly kill you and tile the galaxy with smiley faces; human upgrading might produce insane gods to rule over you in Hell forever.  Or so my current understanding would predict, anyway; it's not like I've observed any of this as a fact.)

And I'm sorry to dismiss such a gigantic dilemma with three paragraphs, but it wanders from the point of today's post:

The point of today's post is that growing up—or even deciding what you want to be when you grow up—is around as hard as designing a new intelligent species.  Harder, since you're constrained to start from the base of an existing design.  There is no natural path laid out to godhood, no Level attribute that you can neatly increment and watch everything else fall into place.  It is an adult problem.

Being a transhumanist means wanting certain things—judging them to be good.  It doesn't mean you think those goals are easy to achieve.

Just as there's a wide range of understanding among people who talk about, say, quantum mechanics, there's also a certain range of competence among transhumanists.  There are transhumanists who fall into the trap of the affect heuristic, who see the potential benefit of a technology, and therefore feel really good about that technology, so that it also seems that the technology (a) has readily managed downsides (b) is easy to implement well and (c) will arrive relatively soon.

But only the most formidable adherents of an idea are any sign of its strength.  Ten thousand New Agers babbling nonsense, do not cast the least shadow on real quantum mechanics.  And among the more formidable transhumanists, it is not at all rare to find someone who wants something and thinks it will not be easy to get.

One is much more likely to find, say, Nick Bostrom—that is, Dr. Nick Bostrom, Director of the Oxford Future of Humanity Institute and founding Chair of the World Transhumanist Association—arguing that a possible test for whether a cognitive enhancement is likely to have downsides, is the ease with which it could have occurred as a natural mutation—since if it had only upsides and could easily occur as a natural mutation, why hasn't the brain already adapted accordingly?  This is one reason to be wary of, say, cholinergic memory enhancers: if they have no downsides, why doesn't the brain produce more acetylcholine already?  Maybe you're using up a limited memory capacity, or forgetting something else...

And that may or may not turn out to be a good heuristic.  But the point is that the serious, smart, technically minded transhumanists, do not always expect that the road to everything they want is easy.  (Where you want to be wary of people who say, \"But I dutifully acknowledge that there are obstacles!\" but stay in basically the same mindset of never truly doubting the victory.)

So you'll forgive me if I am somewhat annoyed with people who run around saying, \"I'd like to be a hundred times as smart!\" as if it were as simple as scaling up a hundred times instead of requiring a whole new cognitive architecture; and as if a change of that magnitude in one shot wouldn't amount to erasure and replacement.  Or asking, \"Hey, why not just augment humans instead of building AI?\" as if it wouldn't be a desperate race against madness.

I'm not against being smarter.  I'm not against augmenting humans.  I am still a transhumanist; I still judge that these are good goals.

But it's really not that simple, okay?

" } }, { "_id": "4o3zwgofFPLutkqvd", "title": "The Uses of Fun (Theory)", "pageUrl": "https://www.lesswrong.com/posts/4o3zwgofFPLutkqvd/the-uses-of-fun-theory", "postedAt": "2009-01-02T20:30:33.000Z", "baseScore": 26, "voteCount": 23, "commentCount": 16, "url": null, "contents": { "documentId": "4o3zwgofFPLutkqvd", "html": "

\"But is there anyone who actually wants to live in a Wellsian Utopia?  On the contrary, not to live in a world like that, not to wake up in a hygenic garden suburb infested by naked schoolmarms, has actually become a conscious political motive.  A book like Brave New World is an expression of the actual fear that modern man feels of the rationalised hedonistic society which it is within his power to create.\"
        —George Orwell, Why Socialists Don't Believe in Fun

There are three reasons I'm talking about Fun Theory, some more important than others:

  1. If every picture ever drawn of the Future looks like a terrible place to actually live, it might tend to drain off the motivation to create the future.  It takes hope to sign up for cryonics.
  2. People who leave their religions, but don't familiarize themselves with the deep, foundational, fully general arguments against theism, are at risk of backsliding.  Fun Theory lets you look at our present world, and see that it is not optimized even for considerations like personal responsibility or self-reliance.  It is the fully general reply to theodicy.
  3. Going into the details of Fun Theory helps you see that eudaimonia is actually complicated—that there are a lot of properties necessary for a mind to lead a worthwhile existence.  Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.

To amplify on these points in order:

(1)  You've got folks like Leon Kass and the other members of Bush's \"President's Council on Bioethics\" running around talking about what a terrible, terrible thing it would be if people lived longer than threescore and ten.  While some philosophers have pointed out the flaws in their arguments, it's one thing to point out a flaw and another to provide a counterexample.  \"Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon,\" said Susan Ertz, and that argument will sound plausible for as long as you can't imagine what to do on a rainy Sunday afternoon, and it seems unlikely that anyone could imagine it.

It's not exactly the fault of Hans Moravec that his world in which humans are kept by superintelligences as pets, doesn't sound quite Utopian.  Utopias are just really hard to construct, for reasons I'll talk about in more detail later—but this observation has already been made by many, including George Orwell.

Building the Future is part of the ethos of secular humanism, our common project.  If you have nothing to look forward to—if there's no image of the Future that can inspire real enthusiasm—then you won't be able to scrape up enthusiasm for that common project.  And if the project is, in fact, a worthwhile one, the expected utility of the future will suffer accordingly from that nonparticipation.  So that's one side of the coin, just as the other side is living so exclusively in a fantasy of the Future that you can't bring yourself to go on in the Present.

I recommend thinking vaguely of the Future's hopes, thinking specifically of the Past's horrors, and spending most of your time in the Present.  This strategy has certain epistemic virtues beyond its use in cheering yourself up.

But it helps to have legitimate reason to vaguely hope—to minimize the leaps of abstract optimism involved in thinking that, yes, you can live and obtain happiness in the Future.

(2)  Rationality is our goal, and atheism is just a side effect—the judgment that happens to be produced.  But atheism is an important side effect.  John C. Wright, who wrote the heavily transhumanist The Golden Age, had some kind of temporal lobe epileptic fit and became a Christian.  There's a once-helpful soul, now lost to us.

But it is possible to do better, even if your brain malfunctions on you.  I know a transhumanist who has strong religious visions, which she once attributed to future minds reaching back in time and talking to her... but then she reasoned it out, asking why future superminds would grant only her the solace of conversation, and why they could offer vaguely reassuring arguments but not tell her winning lottery numbers or the 900th digit of pi.  So now she still has strong religious experiences, but she is not religious.  That's the difference between weak rationality and strong rationality, and it has to do with the depth and generality of the epistemic rules that you know and apply.

Fun Theory is part of the fully general reply to religion; in particular, it is the fully general reply to theodicy.  If you can't say how God could have better created the world without sliding into an antiseptic Wellsian Utopia, you can't carry Epicurus's argument.  If, on the other hand, you have some idea of how you could build a world that was not only more pleasant but also a better medium for self-reliance, then you can see that permanently losing both your legs in a car accident when someone else crashes into you, doesn't seem very eudaimonic.

If we can imagine what the world might look like if it had been designed by anything remotely like a benevolently inclined superagent, we can look at the world around us, and see that this isn't it.  This doesn't require that we correctly forecast the full optimization of a superagent—just that we can envision strict improvements on the present world, even if they prove not to be maximal.

(3) There's a severe problem in which people, due to anthropomorphic optimism and the lack of specific reflective knowledge about their invisible background framework and many other biases which I have discussed, think of a \"nonhuman future\" and just subtract off a few aspects of humanity that are salient, like enjoying the taste of peanut butter or something.  While still envisioning a future filled with minds that have aesthetic sensibilities, experience happiness on fulfilling a task, get bored with doing the same thing repeatedly, etcetera.  These things seem universal, rather than specifically human—to a human, that is.  They don't involve having ten fingers or two eyes, so they must be universal, right?

And if you're still in this frame of mind—where \"real values\" are the ones that persuade every possible mind, and the rest is just some extra specifically human stuff—then Friendly AI will seem unnecessary to you, because, in its absence, you expect the universe to be valuable but not human.

It turns out, though, that once you start talking about what specifically is and isn't valuable, even if you try to keep yourself sounding as \"non-human\" as possible—then you still end up with a big complicated computation that is only instantiated physically in human brains and nowhere else in the universe.  Complex challenges?  Novelty?  Individualism?  Self-awareness?  Experienced happiness?  A paperclip maximizer cares not about these things.

It is a long project to crack people's brains loose of thinking that things will turn out regardless—that they can subtract off a few specifically human-seeming things, and then end up with plenty of other things they care about that are universal and will appeal to arbitrarily constructed AIs.  And of this I have said a very great deal already.  But it does not seem to be enough.  So Fun Theory is one more step—taking the curtains off some of the invisible background of our values, and revealing some of the complex criteria that go into a life worth living.

" } }, { "_id": "EZ8GniEPSechjDYP9", "title": "Free to Optimize", "pageUrl": "https://www.lesswrong.com/posts/EZ8GniEPSechjDYP9/free-to-optimize", "postedAt": "2009-01-02T01:41:00.000Z", "baseScore": 57, "voteCount": 42, "commentCount": 78, "url": null, "contents": { "documentId": "EZ8GniEPSechjDYP9", "html": "

Stare decisis is the legal principle which binds courts to follow precedent, retrace the footsteps of other judges' decisions.  As someone previously condemned to an Orthodox Jewish education, where I gritted my teeth at the idea that medieval rabbis would always be wiser than modern rabbis, I completely missed the rationale for stare decisis.  I thought it was about respect for the past.

But shouldn't we presume that, in the presence of science, judges closer to the future will know more—have new facts at their fingertips—which enable them to make better decisions?  Imagine if engineers respected the decisions of past engineers, not as a source of good suggestions, but as a binding precedent!—That was my original reaction.  The standard rationale behind stare decisis came as a shock of revelation to me; it considerably increased my respect for the whole legal system.

This rationale is jurisprudence constante:  The legal system must above all be predictable, so that people can execute contracts or choose behaviors knowing the legal implications.

Judges are not necessarily there to optimize, like an engineer.  The purpose of law is not to make the world perfect.  The law is there to provide a predictable environment in which people can optimize their own futures.

I was amazed at how a principle that at first glance seemed so completely Luddite, could have such an Enlightenment rationale.  It was a \"shock of creativity\"—a solution that ranked high in my preference ordering and low in my search ordering, a solution that violated my previous surface generalizations.  \"Respect the past just because it's the past\" would not have easily occurred to me as a good solution for anything.

There's a peer commentary in Evolutionary Origins of Morality which notes in passing that \"other things being equal, organisms will choose to reward themselves over being rewarded by caretaking organisms\".  It's cited as the Premack principle, but the actual Premack principle looks to be something quite different, so I don't know if this is a bogus result, a misremembered citation, or a nonobvious derivation.  If true, it's definitely interesting from a fun-theoretic perspective.

Optimization is the ability to squeeze the future into regions high in your preference ordering.  Living by my own strength, means squeezing my own future—not perfectly, but still being able to grasp some of the relation between my actions and their consequences.  This is the strength of a human.

If I'm being helped, then some other agent is also squeezing my future—optimizing me—in the same rough direction that I try to squeeze myself.  This is \"help\".

A human helper is unlikely to steer every part of my future that I could have steered myself.  They're not likely to have already exploited every connection between action and outcome that I can myself understand.  They won't be able to squeeze the future that tightly; there will be slack left over, that I can squeeze for myself.

We have little experience with being \"caretaken\" across any substantial gap in intelligence; the closest thing that human experience provides us with is the idiom of parents and children.  Human parents are still human; they may be smarter than their children, but they can't predict the future or manipulate the kids in any fine-grained way.

Even so, it's an empirical observation that some human parents do help their children so much that their children don't become strong.  It's not that there's nothing left for their children to do, but with a hundred million dollars in a trust fund, they don't need to do much—their remaining motivations aren't strong enough.  Something like that depends on genes, not just environment—not every overhelped child shrivels—but conversely it depends on environment too, not just genes.

So, in considering the kind of \"help\" that can flow from relatively stronger agents to relatively weaker agents, we have two potential problems to track:

  1. Help so strong that it optimizes away the links between the desirable outcome and your own choices.
  2. Help that is believed to be so reliable, that it takes off the psychological pressure to use your own strength.

Since (2) revolves around belief, could you just lie about how reliable the help was?  Pretend that you're not going to help when things get bad—but then if things do get bad, you help anyway?  That trick didn't work too well for Alan Greenspan and Ben Bernanke.

A superintelligence might be able to pull off a better deception.  But in terms of moral theory and eudaimonia—we are allowed to have preferences over external states of affairs, not just psychological states.  This applies to \"I want to really steer my own life, not just believe that I do\", just as it applies to \"I want to have a love affair with a fellow sentient, not just a puppet that I am deceived into thinking sentient\".  So if we can state firmly from a value standpoint that we don't want to be fooled this way, then building an agent which respects that preference is a mere matter of Friendly AI.

Modify people so that they don't relax when they believe they'll be helped?  I usually try to think of how to modify environments before I imagine modifying any people.  It's not that I want to stay the same person forever; but the issues are rather more fraught, and one might wish to take it slowly, at some eudaimonic rate of personal improvement.

(1), though, is the most interesting issue from a philosophicalish standpoint.  It impinges on the confusion named \"free will\", which I have already untangled; see the posts referenced at top, if you're recently joining OB.

Let's say that I'm an ultrapowerful AI, and I use my knowledge of your mind and your environment to forecast that, if left to your own devices, you will make $999,750.  But this does not satisfice me; it so happens that I want you to make at least $1,000,000.  So I hand you $250, and then you go on to make $999,750 as you ordinarily would have.

How much of your own strength have you just lived by?

The first view would say, \"I made 99.975% of the money; the AI only helped 0.025% worth.\"

The second view would say, \"Suppose I had entirely slacked off and done nothing.  Then the AI would have handed me $1,000,000.  So my attempt to steer my own future was an illusion; my future was already determined to contain $1,000,000.\"

Someone might reply, \"Physics is deterministic, so your future is already determined no matter what you or the AI does—\"

But the second view interrupts and says, \"No, you're not confusing me that easily.  I am within physics, so in order for my future to be determined by me, it must be determined by physics.  The Past does not reach around the Present and determine the Future before the Present gets a chance—that is mixing up a timeful view with a timeless one.  But if there's an AI that really does look over the alternatives before I do, and really does choose the outcome before I get a chance, then I'm really not steering my own future.  The future is no longer counterfactually dependent on my decisions.\"

At which point the first view butts in and says, \"But of course the future is counterfactually dependent on your actions.  The AI gives you $250 and then leaves.  As a physical fact, if you didn't work hard, you would end up with only $250 instead of $1,000,000.\"

To which the second view replies, \"I one-box on Newcomb's Problem, so my counterfactual reads 'if my decision were to not work hard, the AI would have given me $1,000,000 instead of $250'.\"

\"So you're saying,\" says the first view, heavy with sarcasm, \"that if the AI had wanted me to make at least $1,000,000 and it had ensured this through the general policy of handing me $1,000,000 flat on a silver platter, leaving me to earn $999,750 through my own actions, for a total of $1,999,750—that this AI would have interfered lesswith my life than the one who just gave me $250.\"

The second view thinks for a second and says \"Yeah, actually.  Because then there's a stronger counterfactual dependency of the final outcome on your own decisions.  Every dollar you earned was a real added dollar.  The second AI helped you more, but it constrained your destiny less.\"

\"But if the AI had done exactly the same thing, because it wantedme to make exactly $1,999,750—\"

The second view nods.

\"That sounds a bit scary,\" the first view says, \"for reasons which have nothing to do with the usual furious debates over Newcomb's Problem.  You're making your utility function path-dependent on the detailed cognition of the Friendly AI trying to help you!  You'd be okay with it if the AI only could give you $250.  You'd be okay if the AI had decided to give you $250 through a decision process that had predicted the final outcome in less detail, even though you acknowledge that in principle your decisions may already be highly deterministic.  How is a poor Friendly AI supposed to help you, when your utility function is dependent, not just on the outcome, not just on the Friendly AI's actions, but dependent on differences of the exact algorithm the Friendly AI uses to arrive at the same decision?  Isn't your whole rationale of one-boxing on Newcomb's Problem that you only care about what works?\"

\"Well, that's a good point,\" says the second view.  \"But sometimes we only care about what works, and yet sometimes we do care about the journey as well as the destination.  If I was trying to cure cancer, I wouldn't care how I cured cancer, or whether I or the AI cured cancer, just so long as it ended up cured.  This isn't that kind of problem.  This is the problem of the eudaimonic journey—it's the reason I care in the first place whether I get a million dollars through my own efforts or by having an outside AI hand it to me on a silver platter.  My utility function is not up for grabs.  If I desire not to be optimized too hard by an outside agent, the agent needs to respect that preference even if it depends on the details of how the outside agent arrives at its decisions.  Though it's also worth noting that decisions areproduced by algorithms— if the AI hadn't been using the algorithm of doing just what it took to bring me up to $1,000,000, it probably wouldn't have handed me exactly $250.\"

The desire not to be optimized too hard by an outside agent is one of the structurally nontrivial aspects of human morality.

But I can think of a solution, which unless it contains some terrible flaw not obvious to me, sets a lower bound on the goodness of a solution: any alternative solution adopted, ought to be at least this good or better.

If there is anything in the world that resembles a god, people will try to pray to it.  It's human nature to such an extent that people will pray even if there aren't any gods—so you can imagine what would happen if there were!  But people don't pray to gravity to ignore their airplanes, because it is understood how gravity works, and it is understood that gravity doesn't adapt itself to the needs of individuals.  Instead they understand gravity and try to turn it to their own purposes.

So one possible way of helping—which may or may not be the best way of helping—would be the gift of a world that works on improved rules, where the rules are stable and understandable enough that people can manipulate them and optimize their own futures together.  A nicer place to live, but free of meddling gods beyond that.  I have yet to think of a form of help that is less poisonous to human beings—but I am only human.

Added:  Note that modern legal systems score a low Fail on this dimension—no single human mind can even know all the regulations any more, let alone optimize for them.  Maybe a professional lawyer who did nothing else could memorize all the regulations applicable to them personally, but I doubt it.  As Albert Einstein observed, any fool can make things more complicated; what takes intelligence is moving in the opposite direction.

" } } ] } } }